Gary McGraw on software security assurance: Build it in, build it right

If the field of computer security is to be fixed, the only hope we have is building security in, says software security expert Gary McGraw.

RSA Conference 2012 was, as usual, packed to the gills with vendors of all shapes and sizes. Through all of the noise and heat emanating from the show floor (not to mention the speaker tracks), the first question that came to my mind was, “How can a field that appears to be this busy be making so little progress?” Or put a bit more caustically, “If the company behind the RSA show can be spectacularly compromised itself, what good is computer security today?” My answer is this: Network security was simply a Band-Aid in the first place, and it attained its technological apex long ago. If we are going to fix the field of computer security, the only hope we have is building security in.

Bad people and broken stuff
When I first started working in computer security in 1996, the firewall was a shiny new idea (well, OK, it was a gangly 10-year-old idea). Marcus Ranum put out the fwtk, Trusted Information Systems (TIS) was busy with DARPA grants extending the science, and the commercial market was poised to explode. But the core idea of computer security, though simple, wasn’t good enough: protect the broken stuff from the bad people by putting a thing -- in this case a firewall -- between the two. My main question back then was, and remains, “Why is the stuff broken anyway?”

About the [In]security column:

I am pleased to write a monthly security opinion column for SearchSecurity. This column started life in print in IT Architect and Network magazines and was originally called “[In]security.” That was back in October 2004. The column then transitioned into Web content at several publications before finding a home at SearchSecurity. You can always find pointers to the complete [In]security series on my writing page. Your feedback on the column is greatly appreciated.

Sixteen years later at the RSA Conference, most of what I see among major vendors is devoted to the unworkable paradigm of protecting or guarding the broken stuff with a thing, or maybe an entire skyscraper rack filled with things. Sometimes the thing is a deep-packet-inspection, Web-application-enabled firewall. Sometimes it is an anomaly-based intrusion detection system. Sometimes it is an intelligence-driven threat monitoring appliance. Oh, and you’ll need a new place to stick all the big data, and a business intelligence tool to trawl through the haystack looking for needles. And maybe a dashboard tool to display pretty colors and feature flashing red warnings. Whatever.

Ranum himself put it most amusingly some years ago. (I am paraphrasing.) Since we’re spending more and more money on computer security every year while the problem is clearly getting worse, perhaps we should conclude that the real solution would be to spend less! I don’t really believe that, of course. (I do believe that firewalls have their place. There is absolutely no reason that a generic point-and-shoot port scanner/exploit tool should work against my home LAN!) But, silliness and hyperbole aside, why is it that even security vendors can’t protect themselves from compromise using their own state-of-the-practice stuff?

Build stuff that does not suck
I am a major proponent of a security engineering philosophy that we call building security in. Years ago during the Java Security wars, I came to the conclusion that the only real way to solve the computer security problem was to teach the people building our systems to think carefully about security while they were both designing and implementing new ones. Since then, we have made a huge amount of progress along the lines of building security in—especially when it comes to software security—and the time has come to take these ideas wide.

What perimeter?
The first thing we must do is realize that our systems (especially our software systems) do not have a perimeter. This is a huge problem for network security approaches, because they all fundamentally rely on guarding a perimeter. If you can’t tell inside from outside, you are in deep trouble indeed when you’re trying to stop packets going in either direction. And with the advent of the cloud, well, all bets are off for drawing lines in the sand around our stuff.


Modern systems are massively distributed systems. Ross Anderson and Roger Needham liken the notion of getting security straight on one of these beasts to “programming Satan’s computer.” And as computer science wizard Leslie Lamport famously said, “a distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable.”

The only hope we have of securing a distributed system is designing it to be secure enough in the first place and carefully implementing it. Sound familiar? Fortunately, we are learning how to do that in real modern systems. A short history lesson is in order.

The rise of software security
Perhaps no segment of the security industry has evolved more in the past decade than the discipline of software security. At the start of the millennium, software security was a small, arcane field that was often confused with security software (note the essential word order). But several things happened in the early part of the decade that set in motion a major shift in the way people built software: the release of my book Building Secure Software, the publication of Bill Gates’ Trustworthy Computing Initiative memo, the publication of Microsoft’s Writing Secure Code by Steve Lipner and Michael Howard, and a wave of high-profile attacks that forced Microsoft and ultimately other large software companies to get religion about software security.

In 2001, software security was new, and not many people thought much about it. In fact, I had a hard time even convincing my mom it was good to spend my own time and energy on it. Fortunately, we got some momentum from the Java security issues that Ed Felten and I wrote about in the late ’90s (see G. McGraw and E. Felten, Java Security: Hostile Applets, Holes, and Antidotes, John Wiley & Sons, 1996). Through our work, we began wondering, “Why is it that these amazing guys like Guy Steele, who’s a phenomenal languages guy, and Bill Joy, who wrote Berkeley Unix -- neither of them a hack -- screwed it up when it came to Java and security?” If you were a developer -- somebody who was designing and implementing one of these systems -- where would you go to learn how to do it right from a security perspective? The answer 10 years ago was an emphatic nowhere.


Parts of this section are adapted from “Lost Decade or Golden Era: Computer Security since 9/11,” by Anup Ghosh and Gary McGraw, IEEE Security & Privacy magazine, January/February 2012, 10(1), pages 6-10, and from “The Past, Present, and Future of Software Security” by Dennis Fisher.

That’s one of the reasons why John Viega and I published Building Secure Software (BSS) in 2001. BSS was quickly followed by Writing Secure Code. Remember, this was when Microsoft was being completely hammered by Code Red, Nimda (just after 9/11) and, soon after, SQL Slammer. The most visible malware of the era was aimed at exploiting vulnerabilities in Microsoft software.

As a reaction to Microsoft’s security troubles, Bill Gates wrote his now-famous Trustworthy Computing Initiative memo and really tried to turn the entire Microsoft ship a few degrees toward security. Today, 10 years later, Microsoft has made great strides in software security and building security in, and learned a lot of lessons that other people can emulate. Further, it’s publishing its ideas and sharing some of its tools, which is great. About five years in, everybody collectively realized the way to approach software security was to integrate security practices that I call touchpoints into the software development lifecycle. At the end of a decade of this great progress in software security, we now have a way of measuring software security initiatives called the BSIMM, which is helping turn the field from an art into a measurable science.

The reason the BSIMM is important is this: When you start a new field from scratch, like we did in 2001 with the philosophy of building security in, by necessity you need a lot of cheerleading, evangelism and advocacy. Lots of people have good ideas about what might work, mostly based on faith, which is critical for booting a field. However, 10 years later the time comes to turn the corner to science -- to talk about facts, to describe what works, and to make measurements.

Today, many people are working professionally on software security full time. In some sense, we’ve turned the corner from a philosophy and an idea -- through a process-oriented approach -- to concrete descriptions of enterprise software security initiatives, and a science that makes sense.

Bugs per square inch trending down
Outside observers complain that the software security situation is not tangibly improving even though we have spent plenty of effort trying to make things better. Their argument looks suspiciously similar to the one we just wielded against network security. Critics point to zero-days, rampant software vulnerabilities, the absolute necessity of Microsoft Patch Tuesday, and universal reliance on automatic updates for commercial software, and shake their heads in disbelief. So who is right?

Both of us are. The tricky bit is that we are definitely getting much better software out of various commercial SDLs than we were before we started working on software security. Code review technology, architecture risk analysis and penetration testing actually work. The defect density ratio (that is, the number of bugs per thousand lines of code) is in fact trending down. Let’s liken that to fewer bugs per square inch. The challenge is that we are building more square miles of code faster than ever before. So we really are getting better, but the sheer velocity of code production is swamping out this improvement when it comes to large-scale bug cardinality.
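The arithmetic behind this apparent paradox is simple enough to sketch. The numbers below are purely hypothetical (nothing in the column supplies real figures); they only illustrate how defect density can fall while total defect count still rises, because the codebase grows faster than the density improves:

```python
# Illustrative sketch with made-up numbers: defect density ("bugs per
# square inch") improves, yet total bug cardinality grows, because code
# volume ("square miles of code") grows even faster.

def total_defects(kloc, defects_per_kloc):
    """Total defect count for a codebase of `kloc` thousand lines."""
    return kloc * defects_per_kloc

# Earlier: smaller codebase, worse defect density.
before = total_defects(kloc=1_000, defects_per_kloc=5.0)   # 5,000 defects

# Later: density halved, but the codebase grew fivefold.
after = total_defects(kloc=5_000, defects_per_kloc=2.5)    # 12,500 defects

assert after > before  # better per-KLOC quality, more total bugs
print(before, after)
```

Under these assumed numbers, quality per thousand lines doubles while the absolute number of bugs in the field more than doubles, which is exactly the situation both the optimists and the critics are describing.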

As we say in Tennessee, where I grew up: you can’t win for losing. But we must persist. Building security in is our only hope.

About the author: 
Gary McGraw, Ph.D., is CTO of Cigital Inc., a software security consulting firm. He is a globally recognized authority on software security and the author of eight bestselling books on the topic. Send feedback on this column to [email protected]


This was last published in April 2012
