Hunting the Giant Centipede - Rhysida

Background

My name is Leland, but you can call me c0mmrade if you'd like. I'm a Security Researcher and Senior SecOps Engineer for CTS. I've worked in the tech space in varying capacities for over 24 years, and exclusively in Infosec for the past 3 years. I've considered myself a hacker since the first time I encountered the word, though I was certainly hacking things around me well before that. Most of my knowledge has been self-earned, with some formal training during my time in the U.S. Army and various structured resources like HackTheBox.

Recently I've had the opportunity to expand CTS' Security Operations Center both in capability and personnel. I'm looking forward to learning so much more as we grow.

Over the past 2 months I have been investigating the activity of the Rhysida ransomware operation. The following is an informal recounting of my experience.

About the Attacker

Rhysida is a ransomware operation that emerged in mid-2023, quickly establishing itself as a significant threat to organizations worldwide. First identified in May 2023, the group is named after a centipede genus and features a distinctive centipede logo.

The group's sudden appearance suggests they may be experienced cybercriminals, potentially an evolution of the Vice Society ransomware operation. By June 2023, they had already set up a professional dark web data leak site employing double-extortion tactics—encrypting data and threatening to publish stolen information.

A notable attack against London-based talent agency The Agency highlighted their capability to cause substantial financial and personal damage, leaking sensitive information like passports and other confidential documents.

Despite their short history, Rhysida has distinguished itself through technical sophistication, strategic targeting, and a typically well-organized operational approach.

Their leak site on Tor boasts 172 victims as of this writing, but I know for certain they've hit many more organizations in recent months. I've witnessed this firsthand, managing to gain a foothold in one of their previously unknown Command-and-Control (C2) servers and trying my best to notify victims as I identified them.

Victim_Zero

Like most days, I was up a little too early for my liking. I was just getting in my truck for my morning commute when a text came in from Alex, our CTO: "Have an emergency can you call me when you get a second". One of our unmanaged clients (not contracted for SOC monitoring, updates, etc.) was contacting us for help. They'd been hit with ransomware, most of their servers were unresponsive, and their business was completely offline.

By the time I got to the office, Alex had already shut down most of their Hyper-V virtual machines and isolated the server, but it was clear that the damage was extensive. The attackers targeted the most common, high-value files: SQL databases, PDFs, Excel spreadsheets, and so on, all appended with the extension .rhysida. Every directory was littered with PDF ransom notes identifying the attacker as the Rhysida ransomware operation and explaining how to contact them.

Recovery

We immediately collected forensic memory and disk images of the hypervisor in hopes of finding out more about our attackers and the malware they were using (thanks FTK!). If you've dealt with ransomware before you know the drill: isolate infected hosts, gather forensic images, and restore from backups. If you don't have backups, be ready to pony up the ransom or kiss your data goodbye. Thankfully, they had backups to restore from, right!? Well, sort of. Victim_Zero was not following best practices. They were backing up all their VMs with Veeam, but they had no off-site replication. Further complicating things, our client was not regularly updating their operating systems or software, so getting to the Veeam server's soft underbelly would have been trivial. I'm sure you can see where this is going.

Most ransomware groups make it a point to encrypt or delete obvious backups, increasing their chances of a payout. We checked their Veeam backups and confirmed that most of the datastores had been encrypted. So our next step was to check the Volume Shadow Copy Service (VSS) for copies of the data that we could restore from. One of the known Indicators of Compromise (IOCs) pertaining to Rhysida is an attempt to run vssadmin delete shadows /all /quiet via cmd.exe (MITRE ATT&CK T1490) to inhibit system recovery after they've exfiltrated the data. I mentioned earlier that this client was not under contract with our SOC, but thankfully they had bought SentinelOne through us for their servers, and this is the exact activity that our trusty EDR first identified and stopped.
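
If you want to hunt for that same behavior yourself, here's a minimal sketch of how I'd approach it, assuming you can export process-creation events (Sysmon Event ID 1 or Windows 4688) to CSV. The column names below are assumptions based on Sysmon's field names, so adjust them to match your export:

import csv
import sys

# Minimal hunting sketch: flag process-creation events whose command line
# matches the Rhysida shadow-copy deletion IOC (MITRE ATT&CK T1490).
# Assumes a CSV export of Sysmon Event ID 1 / Windows 4688 events with
# "CommandLine", "UtcTime", and "Image" columns -- adjust these names to
# whatever your export actually uses.
SUSPICIOUS = ("vssadmin", "delete", "shadows")

def hunt(csv_path: str) -> None:
    with open(csv_path, newline="", encoding="utf-8", errors="replace") as fh:
        for row in csv.DictReader(fh):
            cmd = (row.get("CommandLine") or "").lower()
            if all(token in cmd for token in SUSPICIOUS):
                print(f"[!] {row.get('UtcTime', 'unknown time')} "
                      f"{row.get('Image', 'unknown image')}: {cmd}")

if __name__ == "__main__":
    hunt(sys.argv[1])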

So, good news! We still had shadow copies of VHDs that we could recover the environment from. The trouble was, they were living on an isolated machine potentially still infected with malware. We had our on-site contact pick up a few external drives large enough to store the recovered VHDs and got to work making copies, while Alex started spinning up a crisp, clean new infrastructure in the background. With server recovery underway, I could turn my attention to threat hunting and hopefully identify who to block.

Threat Hunting

I was unfamiliar with this group, so I took a moment to research them online. Like Rage said, "know your enemy". Honestly, they all start to blend together with their terrible names and ridiculous posturing. After reading some of the great work published by CISA, Sophos, and Picus I was armed with some useful information about how Rhysida operates, common IOCs, and a few details about their history. I searched the firewall logs for any of the IOCs I had found but came up short. But just to be safe I added policies blocking related domains and IPs.

From the incidents reported by SentinelOne, I could see that the final phase of their attack on our client began at roughly 2:04 AM on 20FEB2025 from one of their domain controllers, putting the initial access phase somewhere between 3 and 7 days prior by rough estimation. Our client gave us the name of at least one workstation they were certain had been infected, so I decided to start there. Not long into digging, I found the first signs of credential dumping, lateral movement, and C2 activity.

Without access to the client's email logs and a proper EDR on their workstations, working out the initial access vector would be tough and consume time that we just didn't have. With our managed clients, I have access to a wealth of tools and resources like Barracuda for spam/phish filtering, SentinelOne and Huntress for EDR, and Blumira as our SIEM. These tools would have sped my work up considerably. But I did have some new timestamps, and a pathological inability to let go of anything. So I turned my attention back to the firewall logs, looking for any abnormal traffic. After channeling my inner Angela Lansbury and cranking anything 173bpm or higher, I started seeing a pattern of activity: thousands of log entries showing traffic from Victim_Zero's domain controller, over port 4001, to a public IP address of 66.85.173.11 and a related domain name of patterson.pureskin.cloud, each session moving 50MB or more at a time. That looked an awful lot like data exfiltration to me.

Say what you will about Fortinet (and I have a lot to say), but they have a fantastic traffic log UI, and it makes work like this so much easier than some other vendors' tools.
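
If you'd rather work from an exported log than click through a UI, here's a rough sketch of the same hunt in Python. It assumes a CSV export of the traffic log, and the field names (dstip, dstport, sentbyte) are assumptions based on common Fortigate exports, so rename them to fit yours:

import csv
import sys
from collections import defaultdict

# Rough sketch: group exported firewall traffic-log rows by destination
# IP:port and flag destinations receiving many large outbound sessions.
# Field names (dstip, dstport, sentbyte) are assumptions based on common
# Fortigate CSV exports -- rename them to match your own export.
THRESHOLD_BYTES = 50 * 1024 * 1024   # flag sessions moving ~50MB or more
MIN_SESSIONS = 100                   # we saw thousands; tune to taste

def hunt_exfil(csv_path: str) -> None:
    big_sessions = defaultdict(int)
    with open(csv_path, newline="", encoding="utf-8", errors="replace") as fh:
        for row in csv.DictReader(fh):
            try:
                sent = int(row.get("sentbyte") or 0)
            except ValueError:
                continue
            if sent >= THRESHOLD_BYTES:
                big_sessions[(row.get("dstip"), row.get("dstport"))] += 1
    for (ip, port), count in sorted(big_sessions.items(), key=lambda kv: -kv[1]):
        if count >= MIN_SESSIONS:
            print(f"[!] {count} sessions of 50MB+ to {ip}:{port} -- possible exfiltration")

if __name__ == "__main__":
    hunt_exfil(sys.argv[1])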

I scoured the logs multiple times over looking for other public IPs with similar behavior but found nothing. So, did we have a smoking gun? Was the city of Townsville once again safe? Hardly. But what I did have was an IOC that, as far as I can tell, has never been published before now.

Strap in, this is where it gets silly.

Kick in the Door, Wavin' the .44

Rhysida, what's your deal? Why are you lashing out against non-profits, modelling agencies, and Police departments like an angry teenager? Who hurt you?

First I ran a search against WHOIS, expecting the IP to be registered in some former Eastern Bloc territory, or maybe Southeast Asia. But Tempe, Arizona? The astute reader will no doubt expect this information might be actionable at a Federal level. And they would be right. Oooooh, foreshadowing.
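
If you'd rather script that lookup than paste the IP into a web WHOIS, something like the sketch below works. The ipwhois package is my choice for illustration here, not necessarily what I used at the time:

from ipwhois import IPWhois  # pip install ipwhois -- chosen for this sketch, not part of the original workflow

def who_owns(ip: str) -> None:
    # RDAP is the structured, modern replacement for legacy WHOIS queries.
    result = IPWhois(ip).lookup_rdap(depth=1)
    print(f"{ip} -> ASN {result.get('asn')} ({result.get('asn_description')}), "
          f"network {result.get('network', {}).get('name')}, "
          f"country {result.get('asn_country_code')}")

if __name__ == "__main__":
    who_owns("66.85.173.11")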

Now I knew this IP related to our Rhysida attacker belonged to a hosting provider in the States, and I had contact info for them. Contacting the hosting company's abuse team to report the activity would likely cause the attackers to scatter and simply move their activity to another server, and I much preferred taking this opportunity to learn some more about our little centipedes. It was time for some enumeration with Nmap!

Nmap scan report for 66.85.173.11
Host is up (0.028s latency).
Not shown: 995 closed tcp ports (reset)
PORT     STATE    SERVICE    VERSION
22/tcp   open     ssh        OpenSSH 8.9p1 Ubuntu 3ubuntu0.11 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   256 97:a2:d2:c3:e5:66:d6:f5:28:b0:51:f8:fe:04:57:fb (ECDSA)
|_  256 39:1e:c3:e5:6f:bc:b7:47:a1:6a:89:f7:7f:e5:b6:9c (ED25519)
80/tcp   open     http       Apache httpd 2.4.52
| http-ls: Volume /
| SIZE  TIME              FILENAME
| -     2025-01-09 12:09  aafc820801b6c15b8ef17af5a833c547/
| -     2025-01-09 12:04  ed00852591680f1c3001d5ac0afc1d12-1736421822/
|_
|_http-title: Index of /
|_http-server-header: Apache/2.4.52 (Ubuntu)
111/tcp  filtered rpcbind
4001/tcp open     newoak?
4321/tcp open     tcpwrapped
Aggressive OS guesses: Linux 2.6.32 - 3.13 (96%), Linux 5.0 - 5.14 (96%), MikroTik RouterOS 7.2 - 7.5 (Linux 5.6.3) (96%), Linux 3.10 - 4.11 (95%), Linux 2.6.32 (94%), Linux 4.15 (94%), Linux 3.2 - 4.14 (94%), Linux 4.15 - 5.19 (94%), Linux 2.6.32 - 3.10 (93%), HP P2000 G3 NAS device (93%)
No exact OS matches for host (test conditions non-ideal).
Network Distance: 11 hops
Service Info: Host: 66.85.173.11; OS: Linux; CPE: cpe:/o:linux:linux_kernel

OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 188.42 seconds

It was listening for SSH, but unfortunately the version was just past a viable authentication bypass exploit. Apache was listening on port 80, and it looked like there were some interesting directories listed. There was our exfil port on 4001, and not much else. I was feeling a little dangerous and thought, "Let's try kicking in the door!"

I fired up a clean VM outside our environment and connected to the web server to take a look. The first directory rendered a password prompt, and the second gave a string of random characters. The random string in the second directory looked an awful lot like a password that had been left exposed, but why would they just leave the key under the mat like that? I'm still unsure if this was a case of incompetence or apathy, but regardless of the reason, it's terrible opsec. I pasted the password into the prompt and that was it: I had just logged into Rhysida's C2. Surely it could not be that easy!? I'm sure there are more servers they can pivot to, which may be why they just don't seem to care about protecting this one.

From here I could see a list of active connections between the C2 and their victims, with each row displaying details about the session, including the victim's public IP.

This wasn't a C2 I was familiar with, so I made a note to research it later. But it was fairly apparent that these were SOCKS sessions, with the C2 acting as a proxy. And what I was witnessing now was multiple robberies in slow motion. People needed to be notified. Some of the sessions were new enough that the attackers on the other end might only be in their initial access or enumeration phases. That meant if I reached the right people quickly enough, they could shut down servers and restore from backups before it got bad.

Identifying a Body

I might not have known who the victims were, but I had enough info to do some basic open-source intelligence (OSINT) gathering and narrow down who to contact. I'll avoid discussing most of the victims I've spoken to in great detail to protect their privacy, omitting names where needed.

One of the first victims (victim_01 for those counting) I investigated had 2 hosts in the same domain, with the same public IP (65.255.81.224, US, Oklahoma) listed under the "Online" section of the C2 control panel. A quick WHOIS search for that IP confirmed the victim was Okeene Public Schools in Okeene, Oklahoma. A representative of Okeene Public Schools was kind enough to approve my sharing of specific details around their incident.

A quick web search turned up the homepage for the public school and a phone number for their Administration Office. I spoke to a very nice, if slightly confused, receptionist and tried to calmly explain the situation. Trying not to be too alarmist, I explained that I was a security researcher investigating a ransomware attack on my client, that the investigation had led me to identify more potential victims, and that I was certain they were actively under threat, and I asked to speak to their IT support staff. I'm sure she thought I was a scammer, as she tried to brush me off at first. But persistence and a bit of concerned pleading paid off: she took my contact information, and a few hours later a representative from their MSP called me.

I gave their rep the technical details, showed him what I saw from the C2, and he confirmed for me that these were his client's servers. I walked him through collecting forensic images of one server's memory and disk before he shut down everything in hopes of restoring from backups. We even managed to collect a copy of a malicious DLL running on the server and the scheduled task that called it.

Was that a win!? Would my time and focus really pay off!? I immediately got up from my desk and kissed my wife to celebrate.

But we were still left with the question: how did they get in in the first place?

Initial Access - A working theory

By this point I had started running Nmap scans against every victim IP I saw, looking for obvious weak points in their attack surface and thinking about how I would hack them. Most, but not all, of the victims during this period seemed to be using Fortigate firewalls running outdated firmware, with their management interfaces (the web browser login prompt) exposed to the open Internet. I was aware of a recent proof-of-concept exploit released by Sonny of watchTowr Labs (I cannot fanboy on these folks hard enough, you guys are awesome!) that exploited CVE-2024-55591, resulting in an authentication bypass on vulnerable Fortigate devices.

In fact, I had just spent a few days researching the vulnerability and the exploit about 2 weeks before this whole situation began. CTS has primarily been a Fortigate shop, so I try to stay on top of everything related to them. In my spare time I was playing around with Sonny's code in my lab to see how our SIEM responded to it. Being responsible hackers, the watchTowr team released the code as a PoC with a calling card built in: it inserts a message into the logs that includes the IP/port combination [13.37.13.37]:1337. This would make it obvious to anyone checking their firewall logs that some script kiddie downloaded the PoC and tried to use it to attack a real target. They also, intentionally I assume, included an infinite while loop, making the exploit hang after executing. So I made a few light modifications to make the script a bit more useful. I'm a mediocre Python programmer at best, so if I could identify and fix these with ease, I'm sure others have as well.

Using their PoC code as an example, I wrote a very basic vulnerability scanner and used it to identify any of our clients' devices that might be vulnerable, and we started patching and mitigating immediately. Victim_Zero was on this list, but by this point Rhysida would have already been in the firewall and started their lateral movement phase. Unfortunately I didn't have access to Okeene's firewall to check the logs, but when I ran my vulnerability scanner against it I got a positive result. And the same went for every other victim that I identified as using Fortigate. Every single one would give up vital information, change configurations, even add new administrator accounts. All we had to do was ask nicely.
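
I'm not going to publish my scanner or anything that touches CVE-2024-55591 itself, but the harmless half of the idea is easy to sketch: simply answer the question "is a management-looking HTTPS interface even reachable on this host?" and leave the actual vulnerability check to your patching process. The ports below are assumptions about common management configurations:

import sys
import requests
import urllib3

urllib3.disable_warnings()  # self-signed certs are the norm on firewall management interfaces

# Deliberately harmless sketch: this does NOT test CVE-2024-55591.
# It only reports whether an HTTPS interface answers on ports commonly
# used for Fortigate management (443/8443/10443 are assumptions -- check
# how your own estate is configured).
PORTS = (443, 8443, 10443)

def check_host(host: str) -> None:
    for port in PORTS:
        url = f"https://{host}:{port}/"
        try:
            resp = requests.get(url, verify=False, timeout=5)
        except requests.RequestException:
            continue
        server = resp.headers.get("Server", "unknown")
        print(f"[+] {url} answered: HTTP {resp.status_code}, Server: {server}")
        print("    -> a management interface may be exposed; verify firmware and restrict access")

if __name__ == "__main__":
    for target in sys.argv[1:]:
        check_host(target)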

Rhysida seems to have favored phishing for initial access in the past, but with a ton of vulnerable Fortigate devices on the Internet and some easy-to-run exploit code, I suspect this was too easy an opportunity to pass up. It's worth noting that I don't currently have enough proof to present this as verifiable fact; it's just an educated guess.

Victim_Zero and Victim_02

I had contacted multiple victims by this point and left voicemails, sent emails, and even used a support ticket interface to alert one. Side note: Have I Been Pwned (HIBP) and similar services are an overlooked OSINT resource for vetting email addresses. Trying to verify an email address using only a person's name and their business name? If the organization is big enough, the chances are high that someone's address has been cataloged by HIBP. Make a list of a few potential addresses using common conventions (first.last@company.com) and you have a quick and dirty address verification (a rough sketch of this follows below). The responses from victims mostly ranged from silence to dismissal.

But it was time to turn my attention back to our client. Their infrastructure was recovered thanks to Alex's hard work. Another tech and I would have to make the long drive out to them to check each workstation by hand for signs of the malware, clean up what we could, and scrap or replace the rest. This process took the better part of 2 days, across multiple locations, but it was well worth it.
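
Back to that HIBP side note for a second. Here's roughly what that quick and dirty check looks like in Python; the address patterns and the placeholder API key are assumptions for illustration, not my actual tooling:

import sys
import time
import requests

# Sketch of the quick and dirty address check described above. The address
# patterns are just common corporate conventions; the HIBP v3 API requires
# a paid API key (the value below is a placeholder, not a real key) and
# rate-limits requests.
PATTERNS = ("{f}.{l}@{d}", "{f}{l}@{d}", "{fi}{l}@{d}", "{f}@{d}")
HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"
API_KEY = "YOUR-HIBP-API-KEY"  # placeholder

def candidates(first: str, last: str, domain: str):
    f, l = first.lower(), last.lower()
    for pattern in PATTERNS:
        yield pattern.format(f=f, l=l, fi=f[0], d=domain)

def seen_in_breach(address: str) -> bool:
    resp = requests.get(
        HIBP_URL.format(account=address),
        headers={"hibp-api-key": API_KEY, "user-agent": "victim-notification-osint"},
        timeout=10,
    )
    return resp.status_code == 200  # 404 means HIBP has never seen the address

if __name__ == "__main__":
    first, last, domain = sys.argv[1:4]
    for addr in candidates(first, last, domain):
        hit = seen_in_breach(addr)
        print(f"{addr}: {'in HIBP -- the address almost certainly exists' if hit else 'no HIBP record'}")
        time.sleep(7)  # stay well under the API rate limit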

At night I spent some more time poking at Rhysida's C2 from the hotel, trying to find more information about it and getting increasingly loud in my attempts to determine who might be on the other side. I ran dirbuster with a few common wordlists against it, trying to enumerate any other directories, but all I really found was a directory holding the GeoIP database the C2 used to determine where the victim IPs were located. Attempts to brute-force SSH bore even less fruit. But I did finally identify the C2 as a variant of SystemBC, thanks to an awesome write-up by the folks at Kroll.

I was already 30 minutes into my long drive home when I received an email back from one of the victims (we'll call them Victim_02), who was conveniently located in a nearby city. I had emailed with their staff after initially identifying them, but they felt they had the situation under control. Still driving, I jumped on a call with the CEO of Victim_02 and explained who I was, how I found them, and what I'd seen of the attacks so far. It turned out they had responded to the threat too late; Rhysida had encrypted most of their environment and most of their business was offline. Just like Victim_Zero, the attackers exfiltrated important data, deleted VSS copies, encrypted everything including Veeam backups, and dropped a ransom note. The CEO asked for any help we could offer; he'd already spent a sleepless night or two in their datacenter trying to recover things.

Forty-five minutes later I was sitting across from Victim_02's CEO in a datacenter conference room, walking him through what I saw from the C2 and demonstrating an exploit against their vulnerable Fortigate firewall. Since that meeting they've become one of our newest clients. I can't stress enough that my vigilante activities against Rhysida were borne out of an intense, near-pathological hatred of bullies and cowards. I was not then, and am not now, seeking business from this. I just want to make Rhysida hurt, if even just a little bit.

The Call is Coming from Inside the House

Shortly after getting home, I was working with Victim_02's team to get a forensic image of one of their infected servers. This proved a little difficult since the memory alone was about 2TB. With no viable options for a disk image, we opted for just the RAM and got that copied to a clean external disk. Unfortunately they had already factory reset their Fortigate, so any logs we might have had were lost. Part of me wishes I had hacked their firewall when I first identified them and locked Rhysida out. The thought certainly crossed my mind.

They were recovering data from cold-storage tape at this point, but they were certainly in a better position than some victims. Over the next few days we began bringing their infrastructure back online, slowly, making sure each host had our remote access, SIEM, and EDR agents deployed before it was allowed to join the network. At least, that was the expectation.

I had already created policies blocking the Rhysida C2 and disabled the management interface on their WAN port; I just couldn't upgrade the firewall to a later firmware release since the model was end-of-life. But it came as a surprise when our SIEM alerted the SOC team about 'external IP' 13.37.13.37 successfully logging in to the Fortigate. That's the artifact that watchTowr's PoC code generates, remember? And all external access to the firewall was being denied. The attack had come from inside Victim_02's LAN.

Predictably, whatever device ran the script deleted all the traffic logs, leaving little to go on. But I finally had some direct evidence of the attacker using watchTowr's Fortigate exploit in the wild.
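
If you run Fortigate devices, sweeping exported logs for that calling card is about as simple as detection gets. A minimal sketch, assuming a plain-text log export:

import sys

# Trivial sweep for the watchTowr PoC calling card in exported Fortigate
# logs. Assumes a plain-text export; any hit means someone ran the public
# PoC (or a derivative of it) against your device.
CALLING_CARD = "13.37.13.37"

def sweep(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if CALLING_CARD in line:
                print(f"[!] {path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    for log_file in sys.argv[1:]:
        sweep(log_file)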

Between the C2 access, forensic data, logs, malicious DLLs, and anecdotal evidence I had collected from other victims I felt like I might have something useful to bring to the authorities.

Snitching to the Feds

Victim_02 had already started a dialog with the FBI about their situation, and we had made it clear that we were happy to share whatever information we could. After a nice, sobering talk with our attorney, I soon found myself on a conference call in the middle of the day, from my home office, showing Federal Agents around the C2 of an international ransomware operation. This whole situation still feels a little surreal. But it was reassuring to have some of my suspicions confirmed, and to exchange information with experts.

When I searched the internet for mentions of the C2's IP, I couldn't find anything. It wasn't in any IOC lists or malicious IP databases I could find. And it turned out the FBI hadn't seen it either. So I packaged up all my notes, evidence, etc., and sent it to them. As of this writing, the FBI agents I've worked with intend to get a forensic copy of the server, and hopefully get closer to identifying Rhysida members.

It's worth noting that the server is still online as of this writing. I'm still monitoring it and contacting new victims as I see them. I estimate that we've successfully shut down around 14 attacks so far.

Conclusion

Rhysida's ransomware attack on our client was a stark reminder of the ever-evolving threat landscape in Infosec. It also reinforced the value of curiosity for me, both personal and professional. We could have just blocked their C2 for all our clients, and left it at that. Instead my team continues to help me see this through as far as I can take it. I owe a debt of gratitude and respect to the team I work with for their dedication to our clients, their incredibly hard work, and their support when I go off on tangents like this.

I could never have predicted where that first text message from Alex would lead. From a standard ransomware cleanup came a complex investigation spanning multiple organizations, and the infiltration of a command and control server providing rare insight into the inner workings of one of the most prolific ransomware operations in recent memory.

In the end, the battle against ransomware will go on for as long as it remains profitable, but with the right tools, knowledge, and collaboration, we can mitigate risks and safeguard our digital assets. Stay safe out there, and know your enemy.

Indicators of Compromise

C2 IP: 66.85.173.11
Domain: patterson.pureskin.cloud

Activity: Continuous, high-volume traffic over port 4001 to a single IP

File(s): driver64.dll
SHA-1: b031ea666f4679f512b7e135aae31af755278b82
SHA-256: a8659f60b63b4ef781439de7676fded950a238ebd926997c70174e5d7ef43529

Malicious Scheduled Task
Name: {system}.xml
Contents: Attempts to load the malicious DLL via rundll32 "%AppData%\Local\driver64.dll rundll"
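
If you want to sweep a suspect host for the DLL above, a quick hash comparison is a reasonable start. A minimal sketch that hashes everything under a directory and flags matches against the published SHA-256:

import hashlib
import sys
from pathlib import Path

# Quick sweep for the driver64.dll IOC above: hash every file under a given
# directory and flag anything matching the published SHA-256. Point it at a
# user's AppData directory (or anywhere else) on a suspect host.
IOC_SHA256 = "a8659f60b63b4ef781439de7676fded950a238ebd926997c70174e5d7ef43529"

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sweep(root: str) -> None:
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if sha256_of(path) == IOC_SHA256:
                    print(f"[!] IOC match: {path}")
            except OSError:
                continue  # locked or unreadable files are common on live hosts

if __name__ == "__main__":
    sweep(sys.argv[1])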