What is Defence-in-Depth?
Defence in depth is an approach to cyber security that highlights the importance of having multiple layers of protection against cyber threats. In this model, if one layer fails to detect or prevent a threat, other layers are present to combat it.
If you’ve ever turned a key and used a deadbolt on a door, you’re already familiar with the superiority of a redundant, multi-layer security system.
Even in IT terms, most people are familiar with this concept, simply in the form of having a built-in firewall on your computer and a basic firewall built into your router at home.
Given the increasing prevalence of cyber threats, and the growing variety of ‘ransomware as a service‘, we know that the number, variety, complexity and sophistication of threats are constantly on the rise. It makes sense then to layer in as much protection as is commensurate with your risk profile, capability and budget. Good recommendations for defence in depth are tailored to the situation, but we regard the following layers as a good approach for most SMEs. Please note that this post covers a few technical layers of a cyber defence strategy and is not meant to be a comprehensive treatment of the subject. Also, if you haven’t already – check out the five basic controls of the Cyber Essentials Scheme first.
Managed, Monitored & Enforced Endpoint Malware Protection
Sometimes just thought of as ‘having anti-virus’ but good security requires us to go beyond that. Good malware protection should:
- be able to detect and stop all manner of viruses, worms, trojans and droppers, and should also detect and prevent ‘potentially unwanted’ applications, which can sometimes result in breaches or loss of productivity without being classed as a ‘virus’
- be controlled by centrally configured policies managed by the IT administrator, not leaving it to users to configure or choose any settings
- be ‘always on’. There should not be an option to disable protection
- be enforced by something else. This way if malware installs on a computer and manages to switch off the malware protection, something else on the device should notice this and immediately notify the IT administrator
- be updated all the time and technical measures should be put in place to enforce this
- be monitored by something other than itself and be overseen by trained IT staff. Any detections should be reviewed and not left to the user to deal with
Managed DNS Protection
DNS protection moves the detection of potential threats far away from your computer. This is done by intercepting DNS requests (which are how computers find websites and servers on the internet). In a typical malware infection, a computer needs to connect to a server on the internet to download and install malware. If you are lucky, the malware protection agent (anti-virus) installed on your computer would catch this file when it attempts to infect your computer.
By setting up DNS protection, we can identify and block the initial connection to the website that hosts the malware in the first instance. This means that we don’t have to wait until the virus is on your computer – we stop it getting there in the first place. This can be a very effective way of preventing malware infection earlier in the infection process, and crucially before it gets into your network. Part of the success depends on the quality and scale of the database you use to check if a request is safe, which is why we use a database that contains 750 million domains and 32 billion URLs (as of August 2019).
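The core idea can be sketched in a few lines. This is an illustrative toy, not how any particular DNS filtering product works – the blocklisted domains below are invented examples:

```python
# Minimal sketch of DNS filtering: before resolving a domain, check it
# (and each of its parent domains) against a blocklist. Blocked lookups
# get a sinkhole answer, so the connection to the malware host never happens.
# The domains here are made-up examples, not a real threat feed.
BLOCKLIST = {"malware-host.example", "phish.example"}
SINKHOLE_IP = "0.0.0.0"  # filtering resolvers often answer with a sinkhole address

def resolve(domain: str) -> str:
    """Return a sinkhole address for blocked domains, else pass through."""
    labels = domain.lower().rstrip(".").split(".")
    # Walk up the hierarchy so cdn.malware-host.example matches
    # malware-host.example as well.
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in BLOCKLIST:
            return SINKHOLE_IP
    return "pass-through"  # hand off to a normal upstream resolver

print(resolve("cdn.malware-host.example"))  # → 0.0.0.0 (blocked)
print(resolve("example.org"))               # → pass-through (allowed)
```

Real services work at far greater scale and with constantly updated threat data, but the lookup-then-decide flow is the same.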
Patch Management
One of the oldest chapters in the security management playbook, and still one of the most important. The simple fact is that a lot of threats rely on exploiting vulnerabilities in your operating system or other applications. Once an exploit becomes known, most vulnerabilities are patched by the manufacturer of the software fairly quickly. It then becomes our job to ensure that the patches are installed as quickly as possible. Surprisingly, most IT estates we come across are quite far behind in patch management and frequently don’t realise that they aren’t fully up to date.
Although things are getting better, with players like Microsoft providing fewer options around the installation of updates, there are still millions of computers out there running outdated operating systems and applications like Microsoft Office, Adobe Reader, Java, and many others. Each of these unpatched applications represents an attack vector which can easily be exploited – and quite easily be patched.
This can be done for ‘free’ if you spend enough manpower on it, but as with a good malware protection system, patch management should be
- centrally configured,
- enforced by something other than itself,
- monitored by something other than itself,
- reported on by something other than itself, and
- alerted on by something other than itself.
Relying on a manual system is too time consuming and error-prone. Relying on settings configured on each device doesn’t provide enough oversight or control. Relying just on automated systems can still go wrong so we recommend a combination of automated policy enforcement systems along with human oversight and intervention.
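To make the reporting side concrete, here is a minimal sketch of the comparison a patch management tool performs: an inventory of installed versions checked against the latest known releases. The application names and version numbers are invented for illustration:

```python
# Illustrative sketch of patch-status reporting: compare each installed
# version against the latest known release and flag anything behind.
# App names and version numbers below are invented placeholders.
LATEST = {"AcroRead": (23, 8, 0), "JavaRT": (8, 401), "OfficeSuite": (16, 0, 17)}

def parse_version(v: str) -> tuple:
    """Turn '8.311' into (8, 311) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def outdated(installed: dict) -> list:
    """Return the apps whose installed version is behind the latest known."""
    return sorted(app for app, ver in installed.items()
                  if app in LATEST and parse_version(ver) < LATEST[app])

inventory = {"AcroRead": "23.8.0", "JavaRT": "8.311", "OfficeSuite": "16.0.17"}
print(outdated(inventory))  # → ['JavaRT'] – only Java is behind
```

A real tool would gather the inventory automatically from every device, then enforce the update and alert on failures – the points in the list above.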
Next-Gen (UTM) Perimeter Firewall
Chances are that there is a firewall built into your internet router, but we’re not talking about that. True, that is a perimeter firewall (in that it sits between your network and the internet) but it will likely only do ‘stateful packet inspection’. In essence, this does two things.
First, it checks to ensure that connections originate from inside your network rather than originating from the internet. To use a familiar analogy, this is like making sure that everyone getting on a flight is in the right place by checking that they have a valid ticket for that journey.
Second, it checks the ‘advertised description’ (the packet headers) of the data flowing through it and confirms it looks ok. This is the equivalent of asking air passengers what’s in their bag, and confirming that the answer matches the rules.
The problem with this is that
- Most malware connections DO start from inside your network. You click a link, open an attachment, or visit a website, or get redirected from a compromised website etc. So this isn’t effective protection at all. If you were a baddie and wanted to do something bad on a flight, the least you could do is buy a ticket, right?
- Most malware pretends to be something else. So, just looking at the packet headers or ‘advertised description’ of the data doesn’t help at all either. You need to look within the data (i.e. the packet payload). After all if you were a baddie, you wouldn’t just declare that you were carrying a bomb when asked. You would disguise it as a laptop battery or a bottle of shampoo, right?
So what’s the solution? A next-gen / UTM / deep packet inspection firewall.
This type of firewall actually looks at the packet payload (the data itself) and checks it to see if it contains malware. So now we’re actually putting those bags through the x-ray and looking inside – makes sense, right? A good UTM firewall will put the payload through multiple tests – all the basics of course, and then layers of additional protection. These layers include gateway malware protection, which scans the data before it gets to your computer and blocks it if it matches a known malware signature. Some firewalls go beyond this and will carry out additional checks such as intrusion detection and prevention, anti-spyware, content filtering, SSL decryption etc. These would be the equivalent of the TSA unlocking bags and actually looking inside them, or the additional swab you may get after the x-ray to confirm your liquids are safe. If you’re thinking these checks might cause delays or inconvenience – you’re right, they sometimes do. But they keep us safe!
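The difference between the two approaches can be shown with a toy example – a hypothetical sketch, not real firewall code, and the ‘signatures’ are invented placeholders:

```python
# Toy illustration of header-only (stateful) checks vs deep packet
# inspection. The "signatures" are invented placeholders, not real
# malware patterns.
KNOWN_BAD_PAYLOADS = [b"EVIL_DROPPER_v1", b"EXPLOIT_KIT_STUB"]

def stateful_check(packet: dict) -> bool:
    """Header-only check: was this connection started from inside?"""
    return packet["direction"] == "outbound-initiated"

def deep_inspect(packet: dict) -> bool:
    """Payload check: scan the data itself for known-bad signatures."""
    return not any(sig in packet["payload"] for sig in KNOWN_BAD_PAYLOADS)

# Malware fetched over a connection the user started: it passes the
# stateful check, but DPI spots the signature inside the payload.
pkt = {"direction": "outbound-initiated",
       "payload": b"...HTTP/1.1 200 OK...EVIL_DROPPER_v1..."}
print(stateful_check(pkt))  # → True  – the headers look fine
print(deep_inspect(pkt))    # → False – the payload contains a known signature
```

This is exactly the baddie-buys-a-ticket problem above: the stateful check waves the packet through, and only looking inside the ‘bag’ catches it.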
Strong Passwords & MFA / 2FA
This should really go without saying, but must always be repeated:
- Use strong, complex passwords
- Never re-use passwords – one password, one use
- Never tell anyone your password
- Enable MFA / 2FA / 2-step verification on ALL your accounts
- Change passwords as soon as a breach is reported
- Use a password manager to help with many of the above including automated cycling of passwords
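The rules above can be sketched as a simple policy check. The thresholds here are illustrative, not drawn from any particular standard:

```python
# Rough sketch of the checks a password policy might enforce.
# The length threshold and rules are illustrative only.
def password_issues(pw: str, previously_used: set) -> list:
    """Return a list of policy problems with a candidate password."""
    issues = []
    if len(pw) < 12:
        issues.append("too short")
    if pw.lower() == pw or pw.upper() == pw:
        issues.append("needs mixed case")
    if not any(c.isdigit() for c in pw):
        issues.append("needs a digit")
    if pw in previously_used:  # one password, one use
        issues.append("already used elsewhere")
    return issues

history = {"Summer2019!"}
print(password_issues("Summer2019!", history))
# → ['too short', 'already used elsewhere']
print(password_issues("correct-Horse-battery-7", history))  # → []
```

In practice a password manager handles all of this for you, including generating long unique passwords per account.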
Staff Training & Education
The human element can be the strongest or weakest part of your IT strategy. How you train and educate your team has a huge effect on this. If we had to choose between the best IT security system with completely untrained staff, and no security software but very smart, well-trained staff – well, we’re fortunate because we don’t have to! Put good systems in place, AND train your staff!
A very effective way of doing this is to simulate phishing attacks. We send carefully crafted emails to staff pretending to be from Microsoft, Google or Dropbox, asking them to reset their password or some such thing. Or we send them an attachment claiming to be an overdue invoice. It’s exactly the sort of thing that attackers actually use. We then track what happens: do they click the link, do they fill in their information? We can then use that to educate them and highlight some of the key things they need to look out for to stay safe online.
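The sort of campaign metrics a phishing simulation tracks can be illustrated with a small sketch (the user data is invented):

```python
# Toy sketch of phishing-simulation metrics: per user, did they open the
# email, click the link, and submit credentials? Sample data is invented.
results = [
    {"user": "amy",  "opened": True,  "clicked": True,  "submitted": False},
    {"user": "ben",  "opened": True,  "clicked": False, "submitted": False},
    {"user": "cara", "opened": True,  "clicked": True,  "submitted": True},
    {"user": "dev",  "opened": False, "clicked": False, "submitted": False},
]

def rate(key: str) -> float:
    """Fraction of users who performed the given action."""
    return sum(r[key] for r in results) / len(results)

print(f"open rate: {rate('opened'):.0%}")       # → 75%
print(f"click rate: {rate('clicked'):.0%}")     # → 50%
print(f"submit rate: {rate('submitted'):.0%}")  # → 25%
```

Tracking these rates over repeated campaigns shows whether the training is actually working.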
Policies and Procedures
Another essential part of the human element is having good policies and procedures in place. This isn’t a technical control so we won’t go into any more detail but this absolutely is an essential part of a sensible cybersecurity strategy for all businesses, large and small.
Layered Email Protection
As you are probably aware, email remains a very important attack vector, so a very effective strategy is to layer in more email protection. In most cases this is easy to implement and can be invisible to your team, as we simply route email through an additional security layer before it arrives in anyone’s inbox. Most email systems these days have reasonably good built-in protection, but no single system is ever expected to be 100% effective, so adding another layer can make a real difference.
Backups and Business Continuity
The final line of protection against malware-related outages – if malware does strike, having good backups is essential to allow a business to restore a clean copy of its data quickly and keep operations running. A good business continuity + disaster recovery system can greatly accelerate recovery from something like a large-scale ransomware infection.
What level of backup or business continuity + disaster recovery system is appropriate depends entirely on the business we’re looking to protect. We ask the following questions to help understand what the appropriate solution would look like:
- How much would your business suffer if a ransomware attack encrypted all your files, or if a malware attack locked you out of all your computers?
- How long would it take you to recover from the attack?
- What would your staff and customers do in the meantime?
- How would this outage affect your contractual commitments, reputation, revenue and profits?
- How much data would you lose by restoring from a backup (i.e. how frequently do you take backups)?
- How confident are you that your backups are good and will work?
There isn’t a one-size-fits-all answer to what represents a good backup solution, but we encourage all businesses to think about RPO and RTO.
- RPO is your recovery point objective. Or to put it differently, how much data can you tolerate losing? Imagine your computer or server dies at 5pm on a Tuesday. Would it be acceptable to restore from Monday night’s backup? If so your RPO would be ~ 1 day. If you can’t tolerate the loss of that much data, and need to recover the file you saved at 1pm, your RPO may be 4 hours. If you definitely need to be able to recover the file you created an hour ago, your RPO may be closer to 30 minutes.
- RTO is your recovery time objective. Which means – how long can you wait before your data is restored? Can you tolerate waiting a day for your data to be restored? Or perhaps you’re running a business-critical process and need to recover your data within an hour or two.
The appropriate definition of RPO and RTO is very different for different businesses, for roles within a business, and for the types of data within a role. Some data types may have very similar RPOs but very different RTOs. It is very important to think about the effect of data loss and the impact and cost of downtime on your business when considering an appropriate backup system.
For businesses that are serious about staying up and running even in the event of an outage, we implement systems with an RPO and RTO of between 15 and 60 minutes. For other businesses, an RPO of a few hours and an RTO of as long as a few days may be fine – it really does depend on the impact of data loss and downtime.
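A back-of-the-envelope way to connect backup schedules to these objectives: worst-case data loss is roughly the interval between backups, and worst-case downtime is roughly the time it takes to restore. A hypothetical sketch, with illustrative figures:

```python
# Back-of-the-envelope check linking a backup schedule to RPO/RTO.
# Worst-case data loss ≈ the interval between backups (a failure just
# before the next backup loses everything since the last one), and
# worst-case downtime ≈ the time to restore. All figures are illustrative.
def meets_objectives(backup_interval_h: float, restore_time_h: float,
                     rpo_h: float, rto_h: float) -> bool:
    """True if backups run often enough for the RPO and a restore
    completes within the RTO."""
    return backup_interval_h <= rpo_h and restore_time_h <= rto_h

# Nightly backups with a 6-hour restore vs a 1-hour RPO / 2-hour RTO:
print(meets_objectives(24, 6, rpo_h=1, rto_h=2))   # → False
# 30-minute snapshots with a 1-hour failover meet the same objectives:
print(meets_objectives(0.5, 1, rpo_h=1, rto_h=2))  # → True
```

Working through your own numbers this way quickly shows whether a proposed backup system actually matches the objectives you set.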
What else can you do?
There’s no limit – there is a vast array of security services that can be implemented if you have the budget, but not all of these are proportionate. Beyond what we consider ‘standard’ level protection, here are additional or alternative layers of defence that we recommend to our clients depending on their risk profile, compliance requirements and budget.
There are a number of excellent upgrades and additional layers of security in this space that we work with and can recommend for organisations that have already implemented the above layers and have security or compliance requirements that are above average.
Endpoint protection is another area that has seen a lot of advances recently. There are a number of excellent endpoint protection platforms out there which go well beyond standard anti-virus. These are sometimes termed next-gen AV, although today’s ‘next-gen’ quickly becomes tomorrow’s standard. Machine learning used to be a ‘next-gen’ but is fairly commonplace in today’s systems.
Advanced endpoint protection goes beyond next-gen AV and offers functionality such as EDR (Endpoint Detection and Response), which can tie in with other parts of your security operations.
Another very effective way of protecting endpoints is application whitelisting. Implementing an application whitelist means that only a specific list of applications can be run on your computers, which again reduces the ability of malware to run on your network. This may be achieved in a number of ways depending on your IT setup, either based on policy defined across the network or using endpoint-based agents to enforce an application whitelist.
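One common way to enforce a whitelist is by file hash: a binary may run only if its cryptographic digest is on the approved list. A minimal sketch of the idea, using invented sample ‘binaries’:

```python
# Minimal sketch of hash-based application whitelisting: only binaries
# whose SHA-256 digest appears on the allowlist may run. The "binaries"
# here are sample byte strings, not real programs.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

approved_binaries = [b"trusted-editor-v2", b"approved-browser-v9"]
ALLOWLIST = {sha256(b) for b in approved_binaries}

def may_run(binary: bytes) -> bool:
    """Allow execution only if the binary's hash is on the allowlist."""
    return sha256(binary) in ALLOWLIST

print(may_run(b"trusted-editor-v2"))   # → True
print(may_run(b"trusted-editor-v2X"))  # → False – any change alters the hash
```

Because any modification to a file changes its hash, this also blocks tampered copies of otherwise approved applications – which is exactly why malware struggles to run on a whitelisted estate.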
There is still a wide range of additional layers that can be added in, such as threat hunting, vulnerability management and advanced threat intelligence.
With all these platforms in place, it then makes sense to tie your security operations together with a SIEM (security information and event management) platform. This way, you can pull data from all your services and platforms into a single place, breaking down information silos and analysing all data in real time – providing better intelligence and event correlation, which facilitates better decision making and coordinated action.
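The correlation benefit can be illustrated with a toy example: events that look minor to each individual tool stand out once they are grouped by host in one place. The sample events below are invented:

```python
# Toy sketch of the event correlation a SIEM enables: alerts from
# different tools (AV, firewall, DNS filter) are pulled together and
# grouped by host, so a multi-tool pattern stands out. Data is invented.
from collections import defaultdict

events = [  # (source, host, event)
    ("antivirus",  "PC-07", "malware detected"),
    ("dns-filter", "PC-07", "blocked lookup"),
    ("firewall",   "PC-12", "outbound blocked"),
    ("dns-filter", "PC-07", "blocked lookup"),
]

def correlate(events, threshold: int = 2) -> list:
    """Flag hosts that triggered alerts from multiple independent sources."""
    sources_by_host = defaultdict(set)
    for source, host, _ in events:
        sources_by_host[host].add(source)
    return sorted(h for h, s in sources_by_host.items() if len(s) >= threshold)

print(correlate(events))  # → ['PC-07'] – two independent tools flagged it
```

No single tool sees the whole picture here; only the combined view shows that PC-07 deserves immediate attention.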
This leads directly to your SOC, or security operations centre. If you have a dedicated team of security staff who can run and manage your security operations, then you’re in good shape. If not, you may want to look at outsourcing to a SOC that can manage this on your behalf. The best security system in the world isn’t worth much if it isn’t configured and monitored properly – so this is a key element of any sensible security strategy. In many cases, someone on your team or your outsourced IT MSP may perform this function, but depending on your security and compliance environment, it may well be worth outsourcing to a SOC to help manage your security.
If you’ve made it this far – well done and thanks for sticking around! You’ve made an important investment by spending some time thinking about how to protect your business from the current range of cybersecurity threats.
Now it’s time to get to work – are you ready to get started?