Monday, September 7, 2015

My Thoughts On "The Basic Principles Of Security and Why They Matter"

Hey everyone - I stumbled across an article titled "The Basic Principles Of Security and Why They Matter."  I think it is a good read, and I wanted to share my thoughts on some of the topics it raises.  So without further ado...


To many users, security begins and ends with anti-virus and malware protection and regular software updates. But there is much more to security, and the more you understand the reasoning behind it, the better equipped you are to make intelligent choices when applying system security measures.
This is certainly true, but I do not believe the majority of users are even taking the baseline measures described above.  With all of the data breaches over the past year, security is now on the minds of people who did not think about it before, and I think that awareness is a powerful tool in keeping a network secure.  Sometimes there is a stigma associated with reporting security problems: the assumption is that you must have been doing something wrong in order to find the bug or vulnerability you are raising.  If you can incentivize bug discovery and reporting, and encourage users to report anything anomalous they see, you have a force multiplier for your security staff.  Suppose you have a company of 100 people, five of whom handle security.  If you encourage the other 95 to report things like phishing attempts, strange behavior on their computers, or unfamiliar programs running, those 95 people can act like intrusion detection sensors for your network.

Of course there will be false positives and false negatives, but as with any IDS, you have to teach it what to look for.  If users understand what they are looking for and why, they help both the company and themselves, because they can apply those skills in their daily lives.  This matters all the more now that many people bring personally owned devices into work, potentially exposing the work network to new threats.

When people make general statements about Linux being more secure than Windows, whether they know it or not, they are generally referring to architectural security. As a descendant of Unix, Linux, unlike Windows, was designed from its earliest days as a multi-user system, which historically has meant that it is better adapted than Windows to modern computing.

That doesn’t mean, however, that all Linux installations are more secure than all Windows ones. As the shipping condition of many phones and tablets shows, it is all too easy for a Linux or Android system to be configured so that it is essentially wide open. Instead, what it means is that Linux has been easier to secure than Windows because, when you harden the system, you are working with it rather than against it, and with core parts of the system rather than add-ons.
With respect to "people" making general statements like "Linux is more secure than Windows," I do not think everything is about architectural security in this context.  I believe that part of it is those people's perceptions about threats and exposure to attack.  I often hear people saying OS X is more secure than Windows "because it does not get viruses."  Of course, that is a fallacy, but the perception comes from the fact that there is a smaller set of malware that targets Linux and OS X.  Since OS X and Linux make up a relatively small portion of the number of OSs running on personal computers, the amount of malware written to target them is going to be smaller.  If you are an APT, and most of your targets run Windows, then you are more likely to focus on writing malware targeting Windows systems than anything else.  This leads people to be more lax about the software they install on their Mac or Linux machine and their security posture in general because they believe that security through obscurity will protect them.

However, malware is not the only threat.  A browser can cause plenty of damage if the user is the victim of spearphishing or something similar.  If you are using a Mac or a Linux machine and you type your credentials into a site that looks like your bank but is not really your bank, the OS you use does not matter.  Operational security (OPSEC) is an important part of any security program.  OPSEC is about protecting your interactions with a system and the way you protect yourself and the information on the system.  It is why you do not write your password on sticky notes or make the answers to your security questions easily guessable from your Facebook page.
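As an aside, the identity check that protects you from a look-alike bank site is something software can do mechanically, even though no OS can stop you from typing a password into the wrong page.  Here is a minimal Python sketch of that check using the standard ssl module; "example.com" is just a placeholder host, so substitute whatever site you actually care about.

    import socket
    import ssl

    def verify_tls_identity(hostname, port=443):
        # create_default_context() enables certificate-chain validation
        # and hostname matching, the same checks a browser performs.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=5) as sock:
            # Raises ssl.CertificateError / ssl.SSLError on a hostname
            # mismatch or an untrusted certificate.
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                print("Verified certificate for", hostname)
                print("Subject:", dict(item[0] for item in cert["subject"]))

    # "example.com" is a placeholder; the check confirms the server you
    # reached really holds a trusted certificate for the name you asked for.
    verify_tls_identity("example.com")

Of course, this only helps against a spoofed or intercepted connection; it cannot save a user who deliberately browses to a convincingly named fake domain, which is why OPSEC habits matter.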

As Dan Razzell of Starfish Systems explained to me some years ago, making a system architecturally secure requires a clear understanding of its purpose. According to the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University, an architecturally secure system should
  • include the bare minimum needed for a specific purpose.
  • protect data both when it is being used and not being used.
  • protect the confidentiality and integrity of data in use.
  • disable all unnecessary resources.
  • limit and record access to all resources in use.
Making a system architecturally secure also requires a clear understanding of the environment it will operate in.  This might be rolled up into its purpose, but not always.  The requirements for securing a system in a datacenter that serves web pages are different from those for a personal computer at home that might also serve web pages (or run a Minecraft server, or whatever else).  For example, the datacenter system will have a different set of controls for physical access: you might employ two-person integrity in the datacenter, while it is unlikely that you will have two-person integrity in your house.
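On the "bare minimum" and "disable all unnecessary resources" points, a simple first step is auditing what is actually listening on a machine.  Here is a minimal Python sketch that probes a handful of common TCP ports on localhost; the port list is just an example, not an exhaustive audit.

    import socket

    # A few common service ports; extend the list for a fuller audit.
    COMMON_PORTS = {
        21: "ftp", 22: "ssh", 23: "telnet", 25: "smtp",
        80: "http", 110: "pop3", 143: "imap", 443: "https",
        3306: "mysql", 5432: "postgresql",
    }

    def audit_listening_ports(host="127.0.0.1"):
        # Try a plain TCP connect to each port; 0 from connect_ex means
        # something accepted the connection and is therefore listening.
        for port in sorted(COMMON_PORTS):
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(0.5)
            try:
                if sock.connect_ex((host, port)) == 0:
                    print("port %d (%s) is open -- is it needed?"
                          % (port, COMMON_PORTS[port]))
            finally:
                sock.close()

    audit_listening_ports()

Every port that answers is a service to either justify or disable.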

The application of these goals comes down to a handful of working principles. These principles are generally not summarized, and security experts have different names for them, but in my experience, most would agree on at least four:
  • Least Astonishment: Design should follow users' expectations so they know what they are doing and where to find features. For example, Bash shell commands generally use the -r option to make them recursive and -v to make reporting verbose, while desktop applications generally start with the File menu on the left and the Help menu on the right. Least astonishment is really just common sense, since if a feature cannot be found, it cannot be used.
  • Containment of Failure: Should a system or program fail, or be compromised by an attack, the damage should be contained so that it does not crash the system or compromise the rest of the system. This is the main principle behind user accounts with limited access to the system. Because a user account cannot control core system resources and configuration files, gaining access to the account does not give an intruder control of the system.
  • Defense in Depth: An architecturally secure system does not depend on a single feature for protection. Should one feature fail to protect the system, another may instead. The concept of defense in depth explains why reactive security is not enough by itself: anti-virus software, for example, can only respond to what it knows about, and an outdated set of virus definitions can make it useless. However, in a system with limited user accounts and carefully selected permissions, other features may stop what the anti-virus software fails to detect.
  • Least Privilege / Least Access: Any software, hardware, or user should be allowed to use only the absolutely necessary system resources — and no more. The less access, the smaller the chance of providing an unexpected entrance for intrusions. This principle is the most widely acknowledged among experts, and applies almost everywhere. A relevant modern example is the use of encryption so that data stored in cloud storage can be read only by the owner, and not by the company providing the storage (see the sketch below).
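That last point about cloud storage is easy to sketch out.  Below is a minimal example using the Fernet recipe from the Python cryptography package; the hard part in practice is key management (keeping the key off the provider's systems), which this sketch glosses over.

    from cryptography.fernet import Fernet

    # Generate the key once and store it somewhere the cloud provider
    # cannot see; whoever holds this key can read the data.
    key = Fernet.generate_key()
    f = Fernet(key)

    plaintext = b"account numbers, tax records, anything sensitive"
    ciphertext = f.encrypt(plaintext)   # this is what gets uploaded

    # The provider stores only ciphertext; only the key holder recovers it.
    assert f.decrypt(ciphertext) == plaintext

Because the provider never sees the key, a breach on their end exposes only ciphertext.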

These are great principles to think about whenever you design a network, and I think they should be extended to the entire network, not just individual systems.  I would add sandboxing and virtual machines to the idea of containment of failure.  If something happens in a VM and you can detect it quickly, you can throw the VM out and restore it from a snapshot or template (see the sketch below).  Sometimes you even want a system to be compromised, such as when you are examining malware in a safe environment.  By isolating the test environment, you mitigate damage to infrastructure and machines outside of it.
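To make the snapshot idea concrete, here is a rough Python sketch that reverts a libvirt virtual machine to a known-good snapshot by shelling out to virsh.  It assumes libvirt is installed and the snapshot was taken beforehand; "malware-lab" and "clean-baseline" are hypothetical names.

    import subprocess

    def restore_vm(domain, snapshot):
        # Hard power-off the (possibly compromised) guest.
        subprocess.check_call(["virsh", "destroy", domain])
        # Roll the disk and configuration back to the saved snapshot.
        subprocess.check_call(["virsh", "snapshot-revert", domain, snapshot])
        # If the snapshot was taken while the guest was powered off,
        # it needs to be started again afterward.
        subprocess.check_call(["virsh", "start", domain])

    restore_vm("malware-lab", "clean-baseline")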

Defense in depth is not something that is implemented only in software and hardware.  A good defense in depth strategy includes a clear, concise security policy and security-conscious users.

There is also another widely recognized principle, Security Through Obscurity, which is sometimes practiced by corporations, but always denounced by security experts as ultimately ineffective. When Security Through Obscurity is applied, known bugs are kept secret so that they cannot be used by intruders. The trouble with this idea is that intruders are likely to find the bugs anyway, but no-one else will know if they do — a major consideration, since companies like Microsoft and Apple have been known to take months to patch a bug. Partly, the distrust of this principle depends on how the software is developed, because, in free and open source software, making a bug known often means that it will be patched faster.
I wanted to make one note about this paragraph.  I agree that security through obscurity is ultimately ineffective.  However, I get the sense that the author is implying that Microsoft and Apple take months to patch bugs because they have better things to do.  I do not think that is necessarily the case.  Patches and changes to software require validation and testing, and that cannot be done overnight.  Open source is not a magic cure-all either.  While I appreciate that open source allows many eyes to look at code and vet it, someone actually has to undertake that effort.  Sometimes bugs are hard to find and go unnoticed for years (like Heartbleed), and that is a problem with closed source software as well.  I think all coding projects, regardless of the openness of their source, need to be designed with security in mind.  That is much easier said than done, but it should be the goal.

Yet, despite the frequent complaints about the unrealistic demands of security, today the problem is just as likely to be the insistence on convenience. With the rise of desktop Linux and the popularity of Android, the pressure to be as easy to use as Windows is almost irresistible. As a result, there is no question that the average distribution is less secure than those of a decade ago. That is the price we pay for automounting external devices and giving new users automatic access to printers and scanners — and will continue to pay.
However, if you understand security’s goals and principles, then maybe you will be better motivated to consider the requirements of security as much as the wishes of convenience. It is perfectly possible to find a balance between security and convenience — but pinpointing the balance is more effort than most of us are used to making.
I do not know that the average distribution is "less secure" than a decade ago.  The security threats of a decade ago are much different from today's.  Systems are more complex, and that complexity often yields a larger attack surface for someone intent on compromising them.  The amount of data passing through computer systems is also much greater than it was ten years ago, which makes those systems juicier targets.  Meanwhile, computing power has increased greatly, which has made things like password cracking go much faster: the Pentium 4s and AMD Athlons of ten years ago have been eclipsed by their newer successors in speed and efficiency, not to mention the advances in GPU-based computing.

On the other hand, security in software has come a long way.  Containers have been developed in the past ten years (LXC has been around since 2008), and they have helped isolate processes from each other on Linux systems.  In the Windows world, consider that the weak LM password hash was still stored by default up until Windows Vista (which came out in late 2006 / early 2007).
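To see why those old Windows hashes fall so quickly on modern hardware: the NT hash is just an unsalted MD4 digest of the UTF-16LE password, so identical passwords always hash identically (enabling precomputed tables), and each guess costs only one very fast hash.  A minimal sketch (hashlib's md4 comes from OpenSSL and may be missing on some builds):

    import hashlib

    def nt_hash(password):
        # Unsalted MD4 over the UTF-16LE encoding of the password.
        # No salt means one precomputed table cracks every account,
        # and MD4 is fast enough for billions of GPU guesses per second.
        return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

    print(nt_hash("Password1"))   # identical everywhere -- that is the problem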

I think it is silly to imply that the attempt to emulate Windows has made Linux less secure.  If anything, recent versions of Windows have been taking security cues from the Linux world.  UAC is an example: it is vaguely similar to sudo on Linux machines.  And just as UAC is not secure if users are conditioned to click "Yes" whenever the prompt pops up, sudo is insecure if the user blindly prefixes a command with it without understanding what that command does.  Both platforms are trying to keep up with the latest threats, but that is a tough challenge because the threats are constantly evolving.

In addition to the operating system, third-party software plays a big role in overall system security.  Sometimes the goal is not to compromise the operating system or to get root or admin access.  Depending on the attacker's goals, it can be enough to establish a backdoor on a system as an ordinary user.  If that user has all of the information or access the attacker needs, there may be no need to go after the OS at all; the attacker just has to get the user to click a link or visit a tainted website.

I agree that understanding security's goals and principles is important, and I think it is critical for users to understand why security should matter to them.  If security hits home, users are more likely to be curious about it and to seek out ways to be more secure.  With all of the breaches that have happened over the past few years, I believe security is just beginning to be a topic of conversation for people who previously did not give it a thought.

Speaking of thoughts, what are yours?  Share them in the comments.  This is a thought-provoking article, and I really liked it, so I thought you might like it as well.
