I just finished reading an article on TechRepublic’s IT Security Blog which echoes my long-standing recommendation to maintain a “whitelist” for approved applications.
Michael Kassner, the author of the article, briefly explains the benefits that whitelisting could bring. I want to talk a little more about an actual implementation.
The difference between approaches
Blacklisting and whitelisting are common in many IT scenarios. We use them for mail servers, server-to-server communication and even internet traffic in some proxy server implementations.
Blacklisting is definitely the more relaxed of the two: either you or a third party you trust maintains a list of domains, hosts or addresses that are not trusted. This list is usually tied to software of some kind that prevents users from accessing or receiving data or messages from entries on the list.
Whitelisting leans towards paranoia. I’m not saying it’s necessarily bad, but it can certainly be restrictive. The list is again tied to software that protects users, but this time only addresses on the list are allowed to be accessed or permitted into a system.
Depending on the level of security required there can be a mix of the two working together to protect users.
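The three models above can be sketched in a few lines of code. This is a minimal illustration, not any particular product’s implementation; the list contents and function names are invented for the example.

```python
# Sketch of the three filtering models using simple sets of hostnames.
# All names here (BLACKLIST, WHITELIST, is_allowed_*) are illustrative.

BLACKLIST = {"spam.example.net", "malware.example.org"}
WHITELIST = {"lego.com", "google.com"}

def is_allowed_blacklist(host: str) -> bool:
    """Blacklist model: everything is allowed unless explicitly blocked."""
    return host not in BLACKLIST

def is_allowed_whitelist(host: str) -> bool:
    """Whitelist model: everything is blocked unless explicitly approved."""
    return host in WHITELIST

def is_allowed_mixed(host: str) -> bool:
    """A mix of the two: the blacklist always wins, then the whitelist grants access."""
    return host not in BLACKLIST and host in WHITELIST
```

The mixed check shows why the two can coexist: a compromised entry can be yanked off the air by the blacklist even if it was previously approved.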
When to use which
We can’t just switch everything over to whitelists and expect the internet to hum along all tickity-boo (without problems). Imagine your friend registers his own custom domain and sends you an email (from that domain) to check out his new site (on that domain). First of all, you wouldn’t get the email. Secondly, you couldn’t access the site.
So, generally speaking, we should use blacklists where we need wide unannounced access to resources that may be abused. If and when they are abused, the switch is thrown and the address is locked out.
Some examples of where using a blacklist can help a user without hindering their experience:
- Email servers pumping out spam
- Web servers serving malicious content
And, some examples of where you want ‘whitelist’ behaviour implemented for better security:
- Servers that only expect connections from certain IP addresses
- Configuration for your remote desktop
- Secure VPN access points (when the access points are fixed, such as office-to-office)
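For the server case in the list above, the check amounts to testing whether a connecting address falls inside an approved network. Here is a minimal sketch using Python’s standard `ipaddress` module; the networks themselves are hypothetical placeholders, not real addresses to copy.

```python
# IP-based whitelisting sketch: accept a connection only if the source
# address falls inside an approved network. The networks are examples only.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # e.g. a branch office range
    ipaddress.ip_network("198.51.100.7/32"),  # e.g. a fixed VPN endpoint
]

def connection_allowed(source_ip: str) -> bool:
    """Whitelist behaviour: deny by default, allow only listed networks."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```

In practice you would put this rule in the firewall rather than the application, but the deny-by-default logic is the same either way.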
Personally, here’s a few other ways that I use these kinds of lists:
- RDP connections on my servers whitelist a central IP for access and by default block/ignore other connection attempts. These servers are on private IP addresses. The central IP has external RDP sessions routed in on a specific port and the router is configured for only a handful of IP addresses.
- My children have whitelist-only access to the internet. At age 3 they each got an account on the computer. My wife and I approve only websites that we preview (lego.com for example). We have google.com on there as well, which allows them to search, but if they want to access a search result they need our approval.
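The approval flow I use for my kids’ accounts can be sketched as a per-account approved list plus a pending queue that an administrator reviews. This is an illustrative model of the workflow, not the actual software we use; every name in it is invented.

```python
# Sketch of per-account whitelist-only browsing with parental approval.
# Account names, sites and function names are all illustrative.

approved = {"kid1": {"lego.com", "google.com"}}
pending = {"kid1": set()}

def request_site(account: str, site: str) -> bool:
    """Return True if the site is already approved; otherwise queue it for review."""
    if site in approved[account]:
        return True
    pending[account].add(site)
    return False

def parent_approve(account: str, site: str) -> None:
    """An administrator moves a site from the pending queue to the approved list."""
    pending[account].discard(site)
    approved[account].add(site)
```

This mirrors the search-result case: google.com itself is approved, but each result a child clicks lands in the pending queue until a parent signs off.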
How applications fit in the mix
I think it’s important to recognize that we can’t anticipate every need of users in the IT space. The job descriptions of people who use computers have grown so diverse that you are equally likely to find a chef who uses a computer every day as you are a computer programmer.
So can you rely solely on the work of a handful of people to approve applications? What if one evaluator thinks an application is malicious because of how it tracks your usage, whereas another finds that same tracking useful because it tailors the behaviour of the application?
Who comes up with the criteria for whitelisting? Who approves applications? Who blacklists them?
Does the issue need to be a dichotomy?
Some of the above questions, to me, suggest that there should be “greylists” as well. Using heuristics to evaluate software, as submitted by developers, greylists would allow applications to have a level of trust associated with them. These apps could be sandboxed to protect users, and OS-level alerts could monitor these applications for excessive or abusive behaviours.
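A greylist evaluator could combine heuristic signals into a trust score. The sketch below is purely illustrative: the signals, weights and thresholds are all assumptions I’m making up to show the shape of the idea, not a real scoring scheme.

```python
# Hypothetical greylist scoring: combine heuristic signals about an app
# into a trust level. Signals, weights and thresholds are invented examples.

def trust_level(signed: bool, known_publisher: bool,
                requests_network: bool, modifies_system_files: bool) -> str:
    score = 0
    score += 2 if signed else 0
    score += 2 if known_publisher else 0
    score -= 1 if requests_network else 0
    score -= 2 if modifies_system_files else 0
    if score >= 3:
        return "whitelist"   # run normally
    if score >= 0:
        return "greylist"    # run sandboxed, monitored by the OS for abuse
    return "blacklist"       # refuse to run
```

The middle band is the interesting one: greylisted apps still run, but inside a sandbox where OS-level alerts can watch for excessive or abusive behaviour.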
A case for whitelisting applications
I have worked with a good number of people who see a screen saver they like, or backgrounds, or icons, or mouse pointers and more recently email graphics and templates, and download and install these…treasures. I have also seen an entire network compromised in an afternoon with zero-day malware hidden in a toolbar install.
Corporate networks should be configured to prevent the unregulated installation of software. Even as a software developer who likes to download and try out apps all the time, I try to only do so in a sandbox (virtual machine) unless I trust the application provider.
Corporate machines do not need anything installed on them except the software that enables an employee to do their job. Mechanics don’t get waterslides near their workstations; likewise, we don’t need users installing Sudoku Extreme.
For home users it’s a little more difficult to lock down computers and I don’t feel as though they should be. I would, however, like to see the implementation of whitelist providers, coupled with a local service that I can maintain. This service should allow me to say “these user accounts can install these applications that are suitable for kids”. If it’s not on the list I can add it to the list (as an administrator on my machine). I could equally say, “these users can use these whitelists, and these users can use these greylists”.
Take it even further, now, and implement a system whereby the operating system alerts me when I’m launching a greylisted application whose signature has changed (suggesting the possibility of malware). Give me the option to restrict file or network access to greylisted applications, or limit their access based on user type.
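The signature check could be as simple as hashing the executable at launch and comparing it against the hash recorded when the app was first greylisted. A minimal sketch, assuming a plain dictionary stands in for the OS’s record store:

```python
# Sketch of a launch-time signature check for greylisted apps: hash the
# binary and compare against the recorded baseline. The store is illustrative.
import hashlib

def file_signature(path: str) -> str:
    """SHA-256 digest of the executable's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_before_launch(path: str, recorded: dict) -> str:
    sig = file_signature(path)
    if path not in recorded:
        recorded[path] = sig
        return "first-run"   # record the baseline signature
    if recorded[path] != sig:
        return "alert"       # binary changed: possibly malware, warn the user
    return "ok"
```

A real implementation would verify a publisher’s code-signing certificate rather than a bare hash, but the alert-on-change behaviour is the same.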
In a corporate scenario, you could allow management-approved applications on the workstations, and greylisted apps on virtual machines (where users have them).
Making it work for users
Ultimately what we’re doing now is broken. Chasing malware that can move around the globe in hours is nearly a lost cause. We haven’t seen any big outbreaks lately (I credit smarter users and more responsible operating system behaviour), but there could be one coming.
The implementation must not break anything we’re already able to do, and yet provide more security than we currently have.
That’s a tall order.