How My Bank Got Hacked!

Feb 21st


I work for a US bank in their security operations team, and a couple of years ago we got hacked. This is my story of how we got hacked, how we tracked down the breach, what we did wrong, what we did right, and the lessons learnt.

I got the call on an early summer morning. I was still asleep when all hell broke loose: we had been hacked, and we had lost millions of our users' personal identification details, including emails, addresses and phone numbers.

As we would find out later, our users would be targeted in penny stock pump-and-dump scams and in targeted emails delivering malware payloads, eventually leading to bank account compromises.

It was all hands on deck that morning. We had our entire SOC team in, a team from the FBI, who were the ones that informed us of the breach, and eventually teams from both the US Secret Service and the NSA.

The first order of the day was to stop the compromise. We were provided with a list of known bad IPs related to the team of hackers suspected of the breach. The FBI knew of the hacking team that had breached us and were monitoring their drop points, which is how they discovered our data. A quick firewall rule and we believed we had stopped, or at least slowed, the data leakage.

We knew where the leaked data resided, on one of five database servers, so our first step was to check these servers to see what activity we could find. On close inspection we could tell these five servers didn't actually have any malware installed or in memory, so we knew they weren't compromised. That implied the data was being requested remotely, and legitimately, by an authorised database user from an authorised location.

Database users would be difficult to narrow down, as pretty much everyone had read access to the data, but we did force requests to go via middleware, which should have limited the amount of data returned. You couldn't, for example, do an entire user data dump; we restricted the amount of data returned per request.
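To give a sense of what that control looks like, here is a minimal sketch of a middleware-style row cap. The table, the column names and the limit are hypothetical, not our actual code; the point is simply that no single request can ever return the whole customer table.

```python
# Minimal sketch of a per-request row cap in a data-access layer.
# Table, column names and the limit are illustrative assumptions.
import sqlite3

MAX_ROWS_PER_REQUEST = 500  # hypothetical cap; a full dump would need thousands of requests


def fetch_customer_records(conn: sqlite3.Connection, last_name: str) -> list:
    """Return at most MAX_ROWS_PER_REQUEST matching rows, never a full table dump."""
    cur = conn.execute(
        "SELECT email, address, phone FROM customers WHERE last_name = ? LIMIT ?",
        (last_name, MAX_ROWS_PER_REQUEST),
    )
    return cur.fetchall()
```

A cap like this doesn't stop a patient attacker on its own, but it does rule out a single wholesale dump.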

That meant the data dump must have occurred from an approved system, and approved systems were servers only. We had two-factor authentication on servers, so installing malware on these systems should have been impossible. We found our first clue when we looked at the ARP table on the database servers: there was an ARP entry for an unknown, albeit internal, IP address. An nslookup on the IP did not return a response, but a connection request via Remote Desktop did.
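That check is easy to reproduce. Below is a minimal sketch, assuming you have saved the output of arp -a from the database server to a file (the file name is hypothetical): it flags any cached neighbour with no reverse DNS entry, which is exactly what gave this machine away.

```python
# Minimal sketch: flag ARP-cache entries that have no reverse DNS record.
# Assumes the output of "arp -a" has been saved to arp_output.txt; the file
# name and the parsing are illustrative, not our production tooling.
import re
import socket

ARP_DUMP = "arp_output.txt"  # hypothetical export of "arp -a" from the server

with open(ARP_DUMP) as fh:
    cached_ips = set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", fh.read()))

for ip in sorted(cached_ips):
    try:
        hostname = socket.gethostbyaddr(ip)[0]
        print(f"{ip:15} -> {hostname}")
    except socket.herror:
        # No PTR record: an internal address your DNS knows nothing about
        print(f"{ip:15} -> no reverse DNS entry, investigate")
```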

It was a Windows 2003 server. We attempted a domain login with no luck, then cycled through our local administrator passwords. We eventually managed to log in and found a system we had not previously been aware of. It was clearly a standard build, but it had no patching and its anti-virus definitions were four years out of date. No two-factor authentication, obviously.

It was riddled with malware: by our count at least three different infections, each with its own command and control, and we suspect at least two different hacking sources. We now had loads of indicators of compromise (IOCs): a bunch of executables with hash values, registry key entries and external command-and-control IPs.

We cloned the machine so we could run a series of tests on it. The first was to update the anti-virus DAT file to see if it would pick anything up. It flagged five files out of the roughly four hundred that made up the three separate infections, all five from the same piece of malware.

Interestingly, working in our favor was the lack of login activity on this machine: in the five years since it was stood up there had been fewer than two hundred logins and fewer than thirty thousand entries in the security log, which meant the log had not rolled over and, luckily for us, had not been cleared by the malware or the hackers. We can only surmise that the bad guys thought clearing the event log would flag the system as compromised, so they left it alone.
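The triage on the cloned box was mostly just counting. As a rough illustration, assuming the security log has been exported to CSV with EventID and UserName columns (Windows Server 2003 records successful logons as event IDs 528 and 540), a few lines are enough to see how little the machine was ever used:

```python
# Rough sketch: count logon events per account from an exported security log.
# Assumes a hypothetical CSV export with EventID and UserName columns; on
# Windows Server 2003, event IDs 528 (interactive) and 540 (network) are
# successful logons.
import csv
from collections import Counter

LOG_EXPORT = "security_log_export.csv"  # hypothetical export path
LOGON_EVENT_IDS = {"528", "540"}

logons = Counter()
with open(LOG_EXPORT, newline="") as fh:
    for row in csv.DictReader(fh):
        if row.get("EventID") in LOGON_EVENT_IDS:
            logons[row.get("UserName", "unknown")] += 1

print(f"Total logon events: {sum(logons.values())}")
for account, count in logons.most_common(10):
    print(f"{account:25} {count}")
```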

From the system log we could see all the software that had been installed, including dates, so we knew when the machine had been infected over the years. A quick look at the ARP cache showed us the other machines this one was communicating with, which led us to four more compromised servers and one compromised workstation.

Most infections were traced back to unpatched known vulnerabilities, and one infection to a compromised user account. How they got the password is still up for debate; we suspect a network sniff using L0phtCrack on a compromised workstation, as we did eventually find eight workstations with that compromise.

We had an IOC scanning tool with clients installed on all the servers and were able to scan for the now-known IOCs. This found a total of 48 compromised servers. Investigation of these servers led to another batch of IOCs, and we eventually ended up with 92 compromised servers, all of which, with the exception of the original 2003 server, were compromised via known exploits and vulnerabilities, thereby bypassing our two-factor authentication. Patching is your friend; so is retiring old servers that are no longer in support. 56 of the compromised servers were Windows 2003.
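If you don't have a commercial IOC scanner, the file-hash part of such a sweep is not complicated. Here is a minimal sketch, assuming a plain text list of known-bad SHA-256 hashes; a real deployment would also cover registry keys and command-and-control IPs, and would be pushed out by an agent rather than run by hand.

```python
# Minimal sketch of a hash-based IOC sweep: walk a directory tree and flag
# any file whose SHA-256 appears in a known-bad list. File and path names
# are illustrative assumptions.
import hashlib
import os

IOC_HASH_FILE = "known_bad_sha256.txt"  # hypothetical IOC list, one hash per line
SCAN_ROOT = "C:\\"                      # scope to suit your estate

with open(IOC_HASH_FILE) as fh:
    known_bad = {line.strip().lower() for line in fh if line.strip()}

for dirpath, _dirnames, filenames in os.walk(SCAN_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        sha256 = hashlib.sha256()
        try:
            with open(path, "rb") as target:
                for chunk in iter(lambda: target.read(1 << 20), b""):
                    sha256.update(chunk)
        except OSError:
            continue  # locked or unreadable file, skip it
        if sha256.hexdigest() in known_bad:
            print(f"IOC hit: {path}")
```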

We still had a problem on the workstations: we had no way to scan these systems for IOCs. The server software we were using was over $800 an endpoint at list price, which was not feasible for workstations. We ended up installing software from the incident response consultants we had contracted to help us clean up the infection; they had whitelisted a commercial tool, which we eventually invested in.

With the tool in place across the entire estate we scanned over one hundred thousand endpoints against the lists of IOCs. By the end of the day we had isolated just under four hundred infected workstations, rising to just over six hundred by the end of the week.

When we diagnosed the workstation infections we discovered that the majority were infected over the weekend. It seemed the bad guys were targeting users when they were at home on their own broadband connections, most likely to ensure our sandboxing technology didn't intercept the payload before it infected the laptops, including, of course, the secondary persistent infections.

On closer inspection, one of the cleverer pieces of malware, which according to the NSA was a nation-state attack, would look at its external IP and, if it didn't resolve to a broadband connection, wouldn't communicate with its command and control. It waited until the user was at home before calling out and receiving new instructions or malware updates. The bad guys obviously knew what version of AV we had, so they were able to test updates before updating the infection on the local machine.
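We never saw the malware's source code, but the gate it implemented is easy to picture. The sketch below is a rough reconstruction of the idea, not the actual malware: check whether the machine's external IP reverse-resolves to something that looks like a consumer broadband line before doing anything noisy. The lookup service and the ISP keywords are illustrative assumptions.

```python
# Rough reconstruction of the "only call home from broadband" gate, for
# illustration only: resolve the current external IP, check its PTR record
# for consumer-ISP keywords, and stay quiet otherwise. The lookup URL and
# keyword list are assumptions, not details recovered from the malware.
import socket
import urllib.request

BROADBAND_HINTS = ("dsl", "cable", "broadband", "res", "pool", "dyn")


def looks_like_home_broadband() -> bool:
    external_ip = urllib.request.urlopen(
        "https://api.ipify.org", timeout=10
    ).read().decode().strip()
    try:
        ptr_name = socket.gethostbyaddr(external_ip)[0].lower()
    except socket.herror:
        return False  # no PTR record at all, assume corporate egress
    return any(hint in ptr_name for hint in BROADBAND_HINTS)


if looks_like_home_broadband():
    print("home connection detected: this is when it would call out")
else:
    print("corporate egress suspected: stay dormant")
```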

We traced all root-source infections back to user laptops; no infection had originated directly on a server, although servers were used as footholds for lateral spread. Even the original Windows 2003 server infection was traced back to a laptop via the system event logs, both on the server and from the SIEM that had collected the workstation logs.

So what lessons did we learn? First, you need the ability to scan your entire environment, workstations and servers, in close to real time for IOCs. Having server-only software and security controls is a bad idea. Talking to counterparts in other organisations, collecting event logs from workstations is not the norm for the industry; luckily we did, and so were able to get to root cause for the infections.

Second, you need to know what you don't know; for example, which machines are actually on your network: machines that don't have your AV on them, are not patched and have no security controls, because you didn't know about them and so didn't manage them. Find a way to scan for these unmanaged machines.
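One cheap place to start, assuming you can export a computer list from Active Directory and another from your AV or patching console (the file names below are hypothetical), is simply to diff the two; anything the directory knows about but your tooling doesn't is a candidate unmanaged machine.

```python
# Minimal sketch: machines Active Directory knows about but the AV or patch
# console does not. Both inputs are hypothetical one-hostname-per-line exports.
AD_EXPORT = "ad_computers.txt"
AV_EXPORT = "av_console_computers.txt"


def load_hosts(path: str) -> set:
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}


for host in sorted(load_hosts(AD_EXPORT) - load_hosts(AV_EXPORT)):
    print(f"unmanaged (no AV/patching record): {host}")
```

This only catches machines that are at least in the directory; a genuinely rogue box like our Windows 2003 server also needs a periodic network sweep compared against the same inventories.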

Third, once you start looking you will find stuff, bad stuff, and it will take time to clean up. From the original breach to a hand-on-heart "we are clean" took us just over eight months, and millions spent on external consultants and internal resource time.

Fourth, the likely target and foothold into your network is going to be laptops. They are evil; you need control over them. Now, in our environment, all newly issued laptops have known applications on them, users have no rights to install new apps, and we audit all laptops daily to ensure no new software or services have been installed.
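The daily audit itself boils down to a baseline diff. As a sketch of one way to do it, assuming a Windows laptop and a hypothetical approved-software baseline file, you can enumerate the Uninstall registry keys and report anything that isn't on the list:

```python
# Minimal sketch of a daily software audit on a Windows laptop: enumerate the
# Uninstall registry keys and compare against an approved baseline. The
# baseline file name is a hypothetical assumption. Windows-only.
import winreg

UNINSTALL_KEYS = (
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
)
BASELINE_FILE = "approved_software_baseline.txt"  # hypothetical approved list


def installed_software() -> set:
    found = set()
    for key_path in UNINSTALL_KEYS:
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(key)[0]):
            try:
                sub = winreg.OpenKey(key, winreg.EnumKey(key, i))
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
                found.add(name.strip().lower())
            except OSError:
                continue  # entries without a DisplayName are skipped
    return found


with open(BASELINE_FILE) as fh:
    baseline = {line.strip().lower() for line in fh if line.strip()}

for item in sorted(installed_software() - baseline):
    print(f"unapproved software: {item}")
```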

Finally, the same is true for servers: you should know what software is installed on your servers, and violations or non-standard apps need to be flagged and dealt with, audited daily.

It takes hard work to get to a clean state, but it is possible, and once there, maintaining it is much easier. Our Windows 10 upgrade/replacement programme helped with cleaning up the environment and allowed us to start from a known good position; use your upgrade schedule to achieve the same.


NOTE: The anonymous series are by authors who have been verified by the seczine editors to be in the job role they specify, with the experience they specify. They are anonymous because they are discussing extremely sensitive information, or do not have the permission of the organisation they work for to discuss the issue, information that is clearly of interest to the cyber security (cysec) community. If you have a story to tell anonymously, reach out to us on the contacts page.
