Sometimes it pays to spend. The central bank of Bangladesh has found that out the hard way, as police are blaming its loss of $80m during a hack on crappy $10 routers.
I have spent (and still spend) a good bit of time operating on the blue team side of things and, especially now that I'm spending more and more time on the red team, it always amazes me how little many people really know about, or consider, the security of their systems. I've worked in environments ranging from small mom-and-pop shops to large multi-national corporations, in market verticals including everything from lawn care to heavy industrial to financial to medical to legal. Sometimes the client simply doesn't realize the gaping holes they're leaving open, and sometimes it's a (poorly) calculated acceptance of risk (what are the chances that something bad will happen?). In the case of the investigation report at the Central Bank of Bangladesh, three things really stand out.
- No firewall – According to the article, the bank was using switches instead of a firewall to connect computers to their network. The article doesn't state it explicitly, but I have to imagine that the bank received an IAD (internet access device like a cable modem, DSL modem, etc.) from their ISP that had NAT enabled and simply connected their computers to that. At the very least, that would leave every machine on the network directly exposed to whatever the IAD allows through, with no access control or logging of their own.
- Second-hand equipment – According to the article, the switches that the bank was using were second-hand switches that they had purchased for $10. That may not be a problem for use in a lab or even a home but, a bank? The heist is said to have failed because the attackers (who have still not been found) only got away with $80 million. We routinely use hardware hacks (keyloggers added to keyboards, embedded devices in UPSs, etc.) on engagements, and we have to work hard to get them in place. When a bank is buying its switches second-hand for $10, how difficult would it be for an attacker to plant one loaded with custom firmware that includes a reverse shell? That would certainly have been worth the effort on a heist where an $80 million take was considered a failure.
- No access control – Once the attackers 'were in', they immediately had access to the SWIFT (Society for Worldwide Interbank Financial Telecommunication) network. Nothing stood between the general office network and the systems used to move money.
- Install a firewall – A decent firewall (VLAN support, granular access control and logging should be considered the bare minimum) can be had for around $100. The article seems to indicate that the attacker gained access through the IAD (again, I'm deducing that based on the article, I could be way off on this) and gained access to the entire network from there. At the very least, a firewall (with WAN access disabled, etc.) would have been a speed bump and would presumably have raised alarms if it detected someone trying to bypass it (failed logins, port scans, etc.).
- Commercial equipment – We routinely buy cheap equipment second-hand if we're trying to build out a lab to prepare for an engagement. Sometimes it's to replicate the physical layout, sometimes it's to try to shoehorn an embedded device into a printer, smoke detector, webcam, etc., and sometimes it's to try modified firmware. When buying equipment for the office, though, we buy new and immediately download the latest firmware from the manufacturer (if applicable), verify checksums / hashes and install that onto our hardware.
- Network segmentation – There's no indication of how the bank network was set up but, based on the article, I suspect that it was a single flat network where everyone had full access to everything. The general consensus is that we need to assume we're living in a post-breach world and, rather than focus all of our efforts on preventing a breach (and calling it a day), we need to focus some effort on containing a breach if (when?) it happens. Network segmentation is a good place to start as it can a) limit access from one segment to another [i.e., workstations to servers or interns to HR, etc.] and b) provide logging (if someone is 'crossing the line', moving between segments can be logged).
- Monitoring – Once you've 'secured all of the things', segmented the network and set up logging on everything that supports it, have someone actually look at the log data. This is critical. There have been engagements where we knew we should have been busted (we made too much noise going in, someone decided to immediately add a user to the Domain Admins group, etc.) but we weren't caught, because no one was watching the logs. Having logs but not looking at them is as bad as not having logs.
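To make the firmware point above concrete, here is a minimal sketch of verifying a downloaded firmware image against a vendor-published hash before flashing it. The file path and the idea of a published SHA-256 are illustrative assumptions; check your vendor's actual distribution practice.

```python
# Sketch: verify a firmware image's SHA-256 against the hash published by
# the manufacturer before installing it. Paths and hashes are hypothetical.
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Hash a file in chunks so large images don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_firmware(path: str, published_hash: str) -> bool:
    """True only if the local image matches the vendor's published digest."""
    return sha256_of(path) == published_hash.strip().lower()
```

If the hashes don't match, don't install the image; either the download was corrupted or someone tampered with it in transit.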
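One cheap way to sanity-check segmentation is to periodically test, from a host in one segment, that services in another segment are *not* reachable. This is a sketch only; the hosts and ports you would test are specific to your own network.

```python
# Sketch: a quick reachability probe for auditing segmentation rules.
# A workstation-segment host should get a False back when probing, say,
# a database port in the server segment. Targets here are hypothetical.
import socket

def can_reach(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A handful of these probes run on a schedule, with an alert on any unexpected True, catches the all-too-common case where a firewall rule change quietly re-flattens the network.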
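Even a crude script is better than logs nobody reads. Below is a minimal sketch of a watcher that flags two of the noisy events mentioned above: repeated failed logins from one address, and an addition to the Domain Admins group. Real log formats vary widely; the regexes here are illustrative assumptions, not any particular product's format.

```python
# Sketch: flag repeated failed logins and Domain Admins group changes.
# The log line patterns below are made-up examples; adapt them to the
# actual format your systems emit.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed login for (?P<user>\S+) from (?P<ip>\S+)")
ADMIN_ADD = re.compile(r"added to group ['\"]?Domain Admins", re.IGNORECASE)

def scan(lines, threshold=5):
    """Return a list of alert strings for the given iterable of log lines."""
    alerts = []
    failures = Counter()  # failed-login count per source IP
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            ip = m.group("ip")
            failures[ip] += 1
            if failures[ip] == threshold:  # alert once, at the threshold
                alerts.append(f"possible brute force: {threshold} failed logins from {ip}")
        if ADMIN_ADD.search(line):
            alerts.append(f"privilege change: {line.strip()}")
    return alerts
```

The point isn't this particular script; it's that someone (or something) acts on the alerts it produces.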
There is no way to guarantee that you are 100% un-hackable, but there are a lot of small things that you can do to mitigate the risk (we've noted a few above). Once the basics have been covered (firewall, segmentation, logging, etc.), bringing in an unbiased third party to test the controls in place will show you what an attacker would see when surveying your network, and gives you an opportunity to resolve any problems first. If you would like more information on these or other services that Piratica provides, all of our contact information is available here.