Thirteen principles to ensure enterprise system security

Designing sound enterprise system security is possible by following Gary McGraw's 13 principles, many of which have held true for decades.

Long ago in a galaxy far away, two Jedi Knights by the names of Jerry Saltzer and Michael Schroeder published a paper titled "The Protection of Information in Computer Systems."  Section three of that paper is a quick treatment of some essential design principles for information security.  To say that "Saltzer and Schroeder," as it has come to be known, was a seminal work in security is an understatement; everything they had to say back in 1975 is relevant today, nearly 40 years later.

As your New Year’s resolution, I encourage you to adopt these 13 principles whenever you design a new system. Even though a baker's dozen isn't always considered lucky (note that I have expanded Saltzer and Schroeder’s original list just a smidgen for 2013), I'm confident you'll have good fortune by putting these principles into practice in your organization.

Thirteen security design principles

1) Secure the weakest link -- Spaf (that is, highly respected security expert Gene Spafford of Purdue University) teaches this principle with a funny story.  Imagine you are charged with transporting some gold securely from one homeless guy who lives on a park bench (we’ll call him Linux) to another homeless person who lives across town on a steam grate (we’ll call her Android).  You hire an armored truck to transport the gold.  The name of the transport company is "Applied Crypto, Inc."  Now imagine you’re an attacker who is supposed to steal the gold.  Would you attack the Applied Crypto truck, Linux the homeless guy, or Android the homeless woman?  Pretty easy experiment, huh?  (Hint: the answer is, "Anything but the crypto.")

As my co-author John Viega and I wrote back in our 2001 book, Building Secure Software, "Security practitioners often point out that security is a chain; and just as a chain is only as strong as the weakest link, a software security system is only as secure as its weakest component."  Attackers go after the weakest point in a system, and the weakest point is rarely a security feature or function.  When it comes to secure design, make sure to consider the weakest link in your system and ensure that it is secure enough. 

2) Defend in depth -- Author and consultant Kenneth van Wyk likes to call this one the "belt and suspenders" approach.  Redundancy and layering are usually good things in security.  Don’t count on your firewall to block all malicious traffic; use an intrusion detection system as well.  If you are designing an application, prevent single points of failure with security redundancies and layers of defense.  From Building Secure Software, "The idea behind defense in depth is to manage risk with diverse defensive strategies, so that if one layer of defense turns out to be inadequate, another layer of defense will hopefully prevent a full breach."  It's a concept preached universally by information security experts, and for good reason: it works.
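To make the layering concrete, here's a minimal sketch in Python.  All of the names here (the allowlist, the session store, handle_request) are hypothetical stand-ins, not any particular product's API; the point is simply that two independent checks must both pass.

```python
ALLOWED_NETWORKS = ("10.0.0.", "192.168.1.")   # layer 1: coarse network allowlist
KNOWN_SESSIONS = {"token-abc123"}              # layer 2: server-side session store (stub)

def is_ip_allowed(client_ip: str) -> bool:
    """Layer 1: network filter, a stand-in for a firewall rule."""
    return any(client_ip.startswith(net) for net in ALLOWED_NETWORKS)

def is_session_valid(token: str) -> bool:
    """Layer 2: application-level authentication check."""
    return token in KNOWN_SESSIONS

def handle_request(client_ip: str, token: str) -> str:
    # Both layers must pass; neither one alone is trusted to be sufficient.
    if not is_ip_allowed(client_ip):
        return "403 Forbidden"
    if not is_session_valid(token):
        return "401 Unauthorized"
    return "200 OK"
```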

3) Fail securely -- Make sure that any system you design does not fail "open."  My favorite story about this principle comes from the ill-fated Microsoft Bob product of yesteryear.  (Bob was the precursor of Clippy the paperclip.)  According to legend, if you failed to get your username and password right after three attempts, Bob would helpfully notice and ask whether you wanted to pick a new password to use.  Thanks Bob (said the hacker)!  Obviously a better default in this situation is to deny access.

From Building Secure Software, "Any sufficiently complex system will have failure modes. Failure is unavoidable and should be planned for. What is avoidable are security problems related to failure. The problem is that when many systems fail in any way, they exhibit insecure behavior." 
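Here's a minimal Python sketch of failing closed.  The check_credentials function and the toy credential store are hypothetical, and a real system would store salted password hashes rather than plaintext; the pattern to notice is that any failure in the backend results in "deny."

```python
USERS = {"alice": "s3cret"}  # toy credential store; real systems store salted hashes

def check_credentials(username: str, password: str) -> bool:
    # In a real system this hits a directory or database, and that call can fail.
    return USERS.get(username) == password

def is_authenticated(username: str, password: str) -> bool:
    try:
        return check_credentials(username, password)
    except Exception:
        # The backend failed, so we don't know who the caller is.
        # Fail closed: deny access rather than guess (or offer a new password!).
        return False
```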

4) Grant least privilege -- When you do have to grant permission for a user or a process to do something, grant as little permission as possible.  Think about your Outlook contacts.  If you need someone to have access to your contacts to see some data, grant them reader permission, but do not grant them edit permission.  Or if you want a geekier example, try this: most users of a system should not need root permission for their everyday work, so don’t give it to them.  Bottom line, avoid unintentional, unwanted, or improper uses of privilege by doling it out in a miserly fashion.
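Here's a minimal sketch of the contacts example; the Contacts and ReadOnlyContacts classes are hypothetical.  The idea is to hand out the narrowest capability that does the job.

```python
class Contacts:
    """Full-privilege object: can both read and edit entries."""
    def __init__(self) -> None:
        self._entries = {"alice": "alice@example.com"}

    def read(self, name: str):
        return self._entries.get(name)

    def write(self, name: str, email: str) -> None:
        self._entries[name] = email

class ReadOnlyContacts:
    """Wrapper exposing only the read operation; there is no write here."""
    def __init__(self, contacts: Contacts) -> None:
        self._contacts = contacts

    def read(self, name: str):
        return self._contacts.read(name)

# Hand collaborators the read-only view, never the full object.
shared_view = ReadOnlyContacts(Contacts())
```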

5) Separate privileges -- I once saw a system that divided its authentication front end into an impressive number of roles with different degrees of access to the system.  The problem was that when a user of any role had to perform a back-end database action, the software temporarily granted each user de facto administrator privilege.  Not good.  Even the lowliest intern could blitzkrieg the database.

Know that if an attacker is able to finagle one privilege but not a second, she may not be able to launch a successful attack.  Keep privilege sets apart.
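Here's a minimal sketch of keeping privilege sets apart.  The roles and grants are hypothetical, and a real deployment would enforce them with separate database credentials rather than application logic alone; the point is that the intern's credential physically cannot drop tables.

```python
ROLE_GRANTS = {
    "intern":  {"SELECT"},                    # read-only credential
    "analyst": {"SELECT", "INSERT"},          # can add rows, nothing more
    "dba":     {"SELECT", "INSERT", "DROP"},  # full power, tightly held
}

def run_statement(role: str, statement: str) -> None:
    verb = statement.strip().split()[0].upper()
    if verb not in ROLE_GRANTS.get(role, set()):
        raise PermissionError(f"role '{role}' may not execute {verb}")
    # ... connect with the role's own database credential and execute ...
```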

6) Economize mechanism -- Complexity is the enemy of security engineering and the friend of the attacker. It’s just too easy to screw things up in a complicated system, both from a design perspective and from an implementation perspective.  The irony: Want to see something complicated?  Check out just about any piece of modern enterprise software! 

Do what you can to keep things simple.  From Building Secure Software, "The KISS mantra is pervasive: 'Keep It Simple, Stupid!' This motto applies just as well to security as it does everywhere else. Complexity increases the risk of problems. Avoid complexity and avoid problems."

7) Do not share mechanisms -- Should you plunk your inward-facing business application on the public cloud?  Probably not, according to this principle.  Why have your authentication system deal with random Internet traffic when you can limit it to employees whom you (supposedly) trust?

Here’s a geekier example. If you have multiple users using the same components, have your system create different instances for each user.   By not sharing objects and access mechanisms between users, you will lessen the possibility of security failure.
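A minimal sketch of per-user instances follows; UserSession and get_session are hypothetical names.  Each user gets a fresh object rather than sharing one mutable instance, so one user's state (or compromise) cannot bleed into another's.

```python
class UserSession:
    """Per-user state; one instance per user, never shared."""
    def __init__(self, user_id: str) -> None:
        self.user_id = user_id
        self.scratch: dict = {}  # this state belongs to one user only

_sessions: dict = {}

def get_session(user_id: str) -> UserSession:
    # Create a fresh instance per user instead of reusing a shared object.
    if user_id not in _sessions:
        _sessions[user_id] = UserSession(user_id)
    return _sessions[user_id]
```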

8) Be reluctant to trust -- Assume that the environment where your system operates is hostile.  Don’t let just anyone call your API, and certainly don’t let just anyone gain access to your secrets!  If you rely on a cloud component, put in some checks to make sure that it has not been spoofed or otherwise compromised.  Anticipate attacks such as command injection, cross-site scripting, and so on.
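For instance, here's a minimal sketch of distrusting input with a parameterized query, using Python's standard sqlite3 module.  The table and the find_user function are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def find_user(name: str):
    # The ? placeholder makes the driver treat the value strictly as data,
    # so input like "x'; DROP TABLE users; --" cannot become SQL.
    return conn.execute(
        "SELECT name, email FROM users WHERE name = ?", (name,)
    ).fetchall()
```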

This principle can get tricky fast.  From Building Secure Software, "One final point to remember is that trust is transitive. Once you dole out some trust, you often implicitly extend it to anyone the trusted entity may trust."

9) Assume your secrets are not safe -- Security is not obscurity, especially when it comes to secrets stored in your code.  Assume that an attacker will find out as much about your system as a power user knows, and maybe more.  The attacker’s toolkit includes decompilers, disassemblers, and any number of analysis tools.  Expect them to be aimed at your system.  Ever look for a crypto key in binary code?  An entropy sweep can make it stick out like a sore thumb.  Binary is just a language.
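Here's a rough sketch of such an entropy sweep; the window size and threshold are illustrative guesses, not tuned values.  Random key material approaches 8 bits of entropy per byte, while ordinary code and text sit much lower, so high-entropy windows are worth a look.

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: random keys approach 8.0; ordinary code sits much lower."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def sweep(blob: bytes, window: int = 256, threshold: float = 7.0):
    """Yield offsets of windows whose entropy suggests key material."""
    for off in range(0, len(blob) - window + 1, window):
        if shannon_entropy(blob[off:off + window]) > threshold:
            yield off
```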

10) Mediate completely -- Every access and every object should be checked, every time.  Make sure your access control system is thorough and designed to work in the multi-threaded world we all inhabit today.  Whatever you do, make sure that if permissions change on the fly in your system, access is systematically rechecked.  Don’t cache results that grant authority or wield authority.  In a world where massively distributed systems are pervasive and machines with multiple processors are the norm, this principle is a doozy to think about.
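A minimal sketch of complete mediation, with a plain dict standing in for a real policy engine: the check function is consulted on every access and nothing is cached, so a mid-session revocation takes effect immediately.

```python
ACL = {("alice", "report.pdf"): {"read"}}  # mutable: permissions change on the fly

def check(user: str, obj: str, action: str) -> bool:
    # Consulted on every access, never memoized or cached.
    return action in ACL.get((user, obj), set())

def read_object(user: str, obj: str) -> str:
    if not check(user, obj, "read"):
        raise PermissionError(f"{user} may not read {obj}")
    return f"contents of {obj}"

read_object("alice", "report.pdf")            # allowed
ACL[("alice", "report.pdf")].discard("read")  # permission revoked mid-session
# The very next call rechecks and fails, because nothing was cached:
# read_object("alice", "report.pdf")  ->  PermissionError
```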

11) Make security usable -- If your security mechanisms are too odious, your users will go to great lengths to circumvent or avoid them.  Make sure that your security system is as secure as it needs to be, but no more.  If you degrade usability too much, nobody will use your stuff, no matter how secure it is.  Then it will be very secure, and very nearly useless.

Spaf has always laughed at the line about how the most secure system in the world is one with its hard drive demagnetized, buried in a 30-foot hole filled with concrete poured around a Faraday cage.  Such a system is, ahem, difficult to use.

12) Promote privacy -- Yeah, I know, everybody talks about privacy, but most people don’t actually do anything about it.  You can help fix that.  When you design a system, think about the privacy of its ultimate users.  Are you collecting personally identifiable information (PII) just because somebody from the marketing team said to do so?  Is it a good thing to do?  Do you store PII in a place where it can be compromised?  Shouldn’t that be encrypted? Information security practitioners don't always have to provide the answers to these privacy questions (that's what CIOs get paid for), but it's important for infosec to put forth these kinds of questions if no one else does.
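As one illustration, here's a minimal sketch of encrypting PII at rest using the third-party Python cryptography package.  Key management is waved away here; a real system would keep the key in a KMS or HSM, never beside the data it protects.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, fetch this from a KMS or HSM
vault = Fernet(key)

def store_email(email: str) -> bytes:
    # The ciphertext is what lands in the database, never the raw address.
    return vault.encrypt(email.encode())

def load_email(ciphertext: bytes) -> str:
    return vault.decrypt(ciphertext).decode()
```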

13) Use your resources -- As I was taught in troop leader development class when I was 14, "use your resources" is a principle with incredibly wide application.  If you’re not sure whether your system design is secure, ask for help.  Architectural risk analysis is hard, but there are people who have been doing it well for decades.  Don’t try to go it alone if you can’t.  And don’t feel bad about asking for help; this stuff is tricky.

Borrow others' good ideas, or 'It’s a small world after all'
I will never forget the day I was presenting some slides on software security at an HSARPA meeting in Silicon Valley.  The presentation was based on my book Building Secure Software, which has a chapter on these security principles.  One of the slides had a picture of Saltzer and Schroeder on it, and who should happen to be sitting in the very small audience?  None other than Michael Schroeder himself!  Small world indeed.  For what it’s worth, he approved of the slide.  Saltzer and Schroeder were right in 1975, remained right when we wrote Building Secure Software, and remain right today.  Apply their ideas every day in 2013.  And don't be afraid to use good ideas developed by others (though always give credit where credit is due).

The original reference is always best
Here is a complete citation of Saltzer and Schroeder’s original article:

Saltzer, Jerome H., and Michael D. Schroeder. "The Protection of Information in Computer Systems." Proceedings of the IEEE 63, no. 9 (September 1975): 1278-1308.  See especially section 3.  The paper is available on the Web at http://www.cs.virginia.edu/~evans/cs551/saltzer/

About the author:
Gary McGraw is the CTO of Cigital, Inc., a software security consulting firm with headquarters in the Washington, D.C. area and offices throughout the world. He is a globally recognized authority on software security and the author of eight best-selling books on this topic. His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, Exploiting Online Games, and six other books, and he is editor of the Addison-Wesley Software Security series.  Dr. McGraw has also written over 100 peer-reviewed scientific publications, authors a monthly security column for SearchSecurity and Information Security magazine, and is frequently quoted in the press. Besides serving as a strategic counselor for top business and IT executives, Gary is on the advisory boards of Dasient (acquired by Twitter), Fortify Software (acquired by HP), Wall + Main, Inc., and Raven White. His dual PhD is in cognitive science and computer science from Indiana University, where he serves on the Dean's Advisory Council for the School of Informatics.  Gary served on the IEEE Computer Society Board of Governors and produces the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine (syndicated by SearchSecurity).
