Security professionals with coding skills can get a lot done in not a lot of time. Hear why Python suits beginners and how it puts security and developers on the same team.
Just as software developers and IT admins automate away as many of their mundane tasks as possible, so should information security professionals. Automating security yields precious extra time to address a near-daily onslaught of new vulnerabilities and concerns.
Unlike dev and ops folks, security professionals often lack the technical skills necessary to script these tasks -- or perhaps they don't want to cede too much control over the process to bots. That's a mindset that doesn't square with modern development practices, where teams single out and eliminate impediments to speedy delivery.
Mark Baggett, a senior certified instructor at the SANS Institute and a technical advisor to the United States Department of Defense, argues that some of the most effective cybersecurity professionals are former developers. In this episode of the Test & Release podcast, he explains why code knowledge helps an information security team assess an application from the perspective of both a developer and a hypothetical black hat.
"Understanding how the attacker thinks is a significant advantage in defending your network. But, also, understanding how to develop applications is one of the best ways to understand how you can manipulate systems," Baggett says. "Combining those two skills turns into a very powerful mindset that helps you see the threats, understand how they would be exploited and then put defenses in place to protect yourself."
In his training course, "Automating Information Security with Python," Baggett makes the case for security professionals to use the easy-to-understand programming language. Not only is Python a breeze for first-time programmers, he said, but it comes with a vibrant community that has built an array of modules and packages to share with developers and security professionals.
"In information security, we've got [Python] modules that other people have already written that can authenticate to remote systems and interact with them, that can exploit vulnerabilities, that can map networks, that can read packets and pull them apart, that can create forged packets and transmit them across the network," he says.
Information security professionals simply don't have time to wait for the dev team to jump in or a vendor to build the tools they need. With every passing minute, attackers gain more knowledge and capabilities. Security professionals who can code don't have to wring their hands as vulnerabilities pile up. They can get to work automating security tasks and building what they need.
"I think that having programming skills in your information security team makes you more nimble, makes you able to respond to threats as quickly as they change, because the threat actors out there, they're always coming up with new techniques," Baggett says. "We don't want to be forced to sit back and wait for some vendor to create a product that will allow us to respond to the threats. We need to be able to make changes in our environment as quickly as the attackers are changing."
Of course, when security engineers use a programming language to build tools, they confront challenges that typically fall on developers to address, such as porting a language over to a new version. The more development and security teams can understand their respective challenges, the better they can work toward common goals and reduce unnecessary drag on software releases.
"Oftentimes, security people don't understand developers and what it is that they're trying to get done," he says. "I think that when you come from a development background and you understand information security, you can eliminate a lot of that tension, or vice versa."
Editor's note: The transcript has been lightly edited for clarity and brevity.
Let's start out with a big-picture question. There's a lot of worry about the state of cybersecurity and the inability of many organizations to keep up with the various threats that are out there. You're plugged into the cybersecurity community as an advisor and as an instructor. What cybersecurity challenges are keeping teams awake at night? What are their most significant concerns as far as you see them?
Mark Baggett: What keeps people awake at night really changes from organization to organization. It really comes down to what data do they have, and what are the biggest risks associated with that data? There are various ways that attackers break into systems. For some organizations, it's things like ransomware coming around and encrypting their data. They're worried about information being encrypted and stolen.
Common attacks we see affecting large numbers of organizations are things like phishing schemes, where someone within the organization clicks on a link in an email and enters their credentials into what they think is their Outlook email's website, but it turns out that it was just a credential harvesting website. [Attackers] steal their passwords, and then they use that to commit financial fraud or to take money from the organization. Simple attacks like that are still affecting large numbers of organizations, and people can address them with simple, basic information security hygiene -- keeping patches up to date, running antivirus software and having good, strong passwords in their organizations.
And then there are more sophisticated attacks that we see in organizations where people have done the basic blocking and tackling associated with protecting their networks, and they have a different level of attacker coming after them. These are oftentimes issues created by poor programming practices -- mistakes in code -- largely in corporate websites, where people have created a corporate web application that's vulnerable to some type of web attack, like a command injection or SQL injection attack, where attackers can embed themselves inside the content of the webpage or get code to execute on the webpage, giving them an advantage over their victims.
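The SQL injection attack Baggett mentions can be made concrete with a short sketch. The table, user data and payload below are purely illustrative, using Python's standard-library sqlite3 module: the first query splices untrusted input directly into the SQL string, while the second passes it as a parameter so the database treats it as a literal value.

```python
import sqlite3

# In-memory database with a sample users table (illustrative data only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# VULNERABLE: untrusted input is spliced into the SQL string, so the
# payload rewrites the query's logic and matches every row
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % user_input
).fetchall()

# SAFE: a parameterized query treats the payload as one literal string,
# so no row has that name and nothing is returned
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the injection leaks the admin row: [('admin',)]
print(safe)        # the parameterized query returns: []
```

The fix is the same idea in any language: keep untrusted data out of the code path by passing it as a parameter rather than concatenating it into the query text.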
Sure. You advise on the public sector side as well. Obviously, cyberoperations, cyberwarfare is a real worry now, too. While I'm sure you can't get into specifics, how do those sorts of organized and sophisticated attacks change the equation? How do teams combat that from a top-to-bottom kind of perspective?
Baggett: Certainly over the last 10 years, the landscape has changed quite dramatically from the volume of government organizations that are involved in these types of attacks. Many of the main players are just targeting each other: government-upon-government attacks. But you do still have some countries out there that have well-funded organizations that are attacking corporate America, attacking users in their homes and going after [people who are] not necessarily sophisticated attackers.
One of the things we talk about -- we compare the United States to other countries. As the United States, as you're training your attackers, you really have to build a simulated environment. If I want to teach somebody how to break into a company, well, I've got to build a network that has simulated users and simulated applications with configurations that we would typically see in a government or in a corporate network and then teach people how to breach those systems. Whereas in other countries, they don't necessarily follow the same laws we do. They can train their hackers on different levels of networks that are actually just on the internet. So, level one of their training might be to attack some low-level civilian networks, then go after some of the small businesses, and then work their way up to some of the corporations that have better defenses in their networks.
Right, and we've seen that recently, too, just watching the news. But having that black hat perspective has got to be so helpful when you're thinking through the possibilities here.
Baggett: It does [help]. But since I know you're the Test & Release podcast, and we're talking about application developers, one thing I've got to say is ... people who know how to write applications really often have a significant advantage over those who don't when it comes to understanding how these attacks work. So, if you know how to write code, if you know how programs work, then you know the rules that are used to construct applications, and you know how to manipulate those rules. So oftentimes, yes, understanding how the attacker thinks is a significant advantage in defending your network.
But, also, understanding how to develop applications is one of the best ways to understand how you can manipulate systems. When you combine the application development experience with the process of understanding how the attacker's thinking, what techniques they use -- combining those two skills turns into a very powerful mindset that helps you see the threats, understand how they would be exploited and then put defenses in place to protect yourself.
This gets into something that we talked about before the podcast. You mentioned that developers who enter the security field are often some of the best talent in the industry. Why is that? And do you generally see more developers entering security today than in the past?
Baggett: So as to why it is, I think it's that you understand the rules of the system. If I was going to come up with an analogy -- I'm not a big video gamer -- but I would guess that the more time you spend in a game, or in any type of a system, then the more you understand the intricacies of how that system works and how you might be able to cheat that system and get things to do things that they're not intended to do. All of our computers, the operating system itself, the applications we run, they were all written by developers. They have the same workflow and processes that other developers have. So, a developer understands the time crunch about getting an application finished and turned over to quality assurance and what quality assurance is going to look for and the things that they're going to have to do in testing their apps and the shortcuts that they themselves take.
When you understand the shortcuts that you yourself take, then you can anticipate what [shortcuts] other people might have taken, the mistakes that they have made. I look back at some of the programs that I wrote early in my career and think, 'Oh, my goodness, what did I do to that poor company from an information security standpoint?' I wish I could go back and undo some of those applications that I wrote early in my career, just because they were, in retrospect, horrible applications from an information security standpoint. I know not to make those mistakes anymore, and I'll also know to look for those types of mistakes in other programs that people have developed. So when I see an application in a corporate environment, then I'm going to try and look for those mistakes, because I know that I've made them, and I know exactly how that mistake impacts security. So, developers really have a better understanding of what's going on inside the operating system and inside the application, and [they] make some of the best information security professionals.
I'm not going to say that I see more developers entering into information security, because I don't think that's necessarily true. I think that the demand for developers continues to grow. There's just so many opportunities out there that if you want to just develop applications, then you don't necessarily have to look to information security for these fields. I see most of the people coming into information security are coming into it from a more of a systems engineering or systems administration background than from a development background, which is a shame because, as I said, I think developers often have some of the best insights as to what's going on.
That's a good point. And believe me, it's no easier as a journalist to go back and read stories that you've written years and years before. It's the same sort of cringe-inducing experience, although maybe with a lot less risk involved. So, I understand what you're saying as far as that goes. But you did start out as a software developer, and that's interesting. You wrote code for years in programming languages like C, Delphi and PHP, and now you're instructing people on automating security tasks with Python. Why is Python a helpful language to work with in a security context?
Baggett: Well, I don't know that information security has an advantage over other disciplines with regards to how easy Python is. I just think Python is a very easy [language] to understand -- very readable code. For someone who's never programmed before, if I'm going to teach them a language, well, I'm going to teach them Python.
You can approach it from a procedural standpoint. You can approach it from a functional standpoint. You can approach it from an object-oriented standpoint. It's a very flexible language. Its syntax is easy to understand. It's got such a wide range of support from other people who have already developed modules that do many of the things you want to do. One of the classic jokes from the Python programming world is built right into the interpreter: if you type 'import antigravity,' Python actually launches a browser and brings up a cartoon. The joke alludes to the fact that Python has such a rich module base that if you need to do something, somebody's probably already written a module to do it for you. You can just import it and begin using it.
The same is certainly true for information security. In information security, we've got modules that other people have already written that can authenticate to remote systems and interact with them, that can exploit vulnerabilities, that can map networks, that can read packets and pull them apart, that can create forged packets and transmit them across the network. So there's a very, very rich development environment with a long history of support that's available to people who want to develop applications in information security in Python.
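In practice, packet work usually leans on libraries like the ones Baggett alludes to (Scapy is a well-known example), but even the standard library's struct module is enough to sketch the "read packets and pull them apart" idea. The 20-byte IPv4 header below is hand-built for illustration; a real tool would read raw bytes off the wire or from a capture file.

```python
import struct

# A minimal 20-byte IPv4 header, hand-built for illustration:
# version/IHL, TOS, total length, ID, flags/fragment offset,
# TTL, protocol, checksum, source IP, destination IP
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,            # version 4, header length 5 words
    0, 40, 0x1C46, 0x4000,   # TOS, total length, ID, flags
    64,                      # TTL
    6,                       # protocol 6 = TCP
    0,                       # checksum left at 0 for this sketch
    bytes([192, 168, 0, 1]),
    bytes([10, 0, 0, 5]),
)

# "Pulling the packet apart": unpack the raw bytes back into fields
fields = struct.unpack("!BBHHHBBH4s4s", header)
version = fields[0] >> 4
ttl = fields[5]
protocol = fields[6]
src = ".".join(str(b) for b in fields[8])
dst = ".".join(str(b) for b in fields[9])

print(version, ttl, protocol, src, dst)
```

The `!` prefix tells struct to use network byte order, which is why the same format string can both build and dissect a header; higher-level packet libraries automate exactly this kind of field-by-field bookkeeping.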
Speaking of Python, support for Python 2 just ended, and Python 3 is not backwards compatible from what I understand. Most or all of the major Python packages out there seem to be supporting the new version in one way or another, which should make migration easier for most people. But, generally speaking, when a programming language goes end of life like this, how should teams adapt? Should they drop what they're doing and migrate right away? Or can they stay put if their apps are fairly secure and reliable?
Baggett: Yeah, so that's a really good question. The answer is [I don't know]. One of the things that's happened here with the end of Python 2 -- so Python 2 is no longer supported, which means they're not going to be introducing new features. If there's a vulnerability in Python, then they will most likely not release a patch to fix the old Python interpreter, which means if you're running that Python 2 interpreter and a vulnerability is discovered, it'll probably stay vulnerable to attack. So that would obviously be a bad thing. That would be a catalyst for you to upgrade from Python 2 to Python 3, quite hastily.
But the fact that they're no longer releasing updates doesn't mean that your code stops working, [or] your code base stops working. You can continue to run your Python 2 programs in your Python 2 interpreter forever. It's just like Windows XP -- support for that ended many years ago, but what percentage of our critical infrastructure is being run on Windows XP? How many hospital life-support systems are still running on Windows XP? The answer is a scary amount. So those applications aren't necessarily going to die.
But here's the other thing. There's this one standard called PEP 394. PEP 394 is a Python standard that says, on Linux systems, when you type the word 'Python,' which version of Python is it supposed to run? Is it supposed to run Python 2 or Python 3? Well, PEP 394 today, even a couple days into January, still says it should run Python 2, but that may change at some point in the future. Even if it doesn't change, we've already seen some Linux operating systems make the change, where the Python command now automatically launches Python 3. Since they're not compatible -- if you've got Linux systems out there that are running Python 2 source code -- today, when programs launch Python to run their source code, they work fine, [but] there is the possibility that you're going to install patches on your Linux system sometime in the near future, and the operating system will have applied a patch that changes where that Python interpreter points. So now you're going to run Python 3 and your code will break. That'll be a matter of administrators going back in and changing things or removing patches. It may not be as easy as, 'I do nothing, and everything continues to work.' You may have to take some steps to make sure that you continue to run Python 2 code in a Python 2 interpreter moving forward.
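One common defensive habit against the interpreter repointing Baggett describes is to pin the shebang line to an explicit version (for example, `#!/usr/bin/env python3`) and to add a version guard at the top of the script. This is a minimal standard-library sketch so a script fails loudly with a clear message rather than dying on a confusing mid-run SyntaxError when `python` starts meaning a different major version:

```python
import sys

# Fail fast with a clear message instead of a confusing mid-run
# error when the wrong interpreter picks up the script
if sys.version_info[0] < 3:
    sys.exit("This tool requires Python 3; found %d.%d" % sys.version_info[:2])

print("Running under Python %d.%d" % sys.version_info[:2])
```

The same pattern works in reverse for legacy code that must stay on Python 2 until it is ported.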
Other than that, is there a catalyst to move? Yeah, I'd say, at some point, we will discover a vulnerability in the Python interpreter that will necessitate you moving off of that system if you want to not have a vulnerability. But, that said, I can tell you that, within the last year, I've gone to a dentist's office that is running their entire organization on a Python 2.2 application -- and that interpreter is known to have several vulnerabilities, including buffer overflows and other critical vulnerabilities that [hackers] can attack -- but if there isn't a business driver for them to make that change, then oftentimes it's hard for organizations that don't have technology folks in there advocating for these changes to force that to happen. So they will continue to run Python 2.2 apps forever until they get owned -- and that's when they'll make the change.
Right, the immediate risk has to reach a critical mass of sorts, right? With a language like Python, you can build a variety of tools to improve security posture. You mentioned packet analyzers and backdoors for penetration testers as a couple of examples. Is it preferable for a security person to be the one to code these kinds of features? Is it an area where an organization needs to make a judgment call on whether that should be developers or security? Or does that typically fall to the latter group?
Baggett: I think many of the organizations I've seen have developers already entrenched in their application development teams who are coding on some application -- many of those are Python shops, or at least part of their team works in Python and other parts in .NET and things like that.
As a security professional, if you already know how to code in this language, you can have those discussions with those developers. When they have good questions like, 'Hey, if Python is a type-safe language, do I really have to worry about buffer overflows?' you understand what those terms mean and know how to respond. Then you can develop those relationships with developers and help them to understand how to write better code.
I find that [for] information security professionals, it's a different focus, right? An application developer is focused on building a tool. They want to talk about user experience and efficiencies of algorithms and things like that in order to sort through their data. Oftentimes, in information security, many of the tools that we're developing are there to automate the boring stuff, automate the things that would take us a long period of time, like running scans, consolidating reports, sorting through packet captures, filling the gaps of where the application developers haven't necessarily implemented a feature.
For example, a very common use of Python would be I've got a forensics tool that analyzes 90% of the artifacts that I've got as part of an investigation, but for the other 10%, there isn't a tool out there that knows what to do with those forensics artifacts. Because I know how to code and can write my own tools, I don't have to sit back and wait for a development team to build those tools for me. If I am in the middle of an investigation, and I've got some artifacts that I need to rip apart at the binary level and get in and find out what's inside of that artifact, I can write my own tool to do that. I'm not hampered by the development process of a large organization.
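A common first step when ripping an unknown artifact apart at the binary level is checking its leading "magic bytes" against known file signatures. This is a hypothetical, minimal sketch -- the signature table is a tiny illustrative subset, not any specific forensics tool:

```python
# A few well-known magic-byte signatures mapped to artifact types
# (a tiny, illustrative subset; real tools carry far larger tables)
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04": "ZIP archive (also docx/xlsx/jar)",
    b"MZ": "Windows executable",
    b"%PDF": "PDF document",
}

def identify(data: bytes) -> str:
    """Return a best-guess artifact type from leading magic bytes."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

print(identify(b"MZ\x90\x00"))      # Windows executable
print(identify(b"%PDF-1.7"))        # PDF document
print(identify(b"\x00\x00junk"))    # unknown
```

From there, a module like struct can decode the artifact's internal fields once its format is known; the point Baggett is making is that an investigator who codes can build this kind of one-off parser in an afternoon.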
So, I think that having programming skills in your information security team makes you more nimble, makes you able to respond to threats as quickly as they change, because the threat actors out there, they're always coming up with new techniques. We don't want to be forced to sit back and wait for some vendor to create a product that will allow us to respond to the threats. We need to be able to make changes in our environment as quickly as the attackers are changing.
That's such a good point. There's already a little bit of this inherent friction between the development side and the security side, even as we hear about organizations trying to break down the silos. So, if you have to wait for another feature to be built, god knows when that's going to happen. Do you have any advice, generally speaking, about how to get those two sides to communicate or collaborate a little bit more -- developers and security? Anything that you found that is generally helpful or works well?
Baggett: Well, what is the nature of the tension? I think, oftentimes, security people don't understand developers and what it is that they're trying to get done. 'I've got a product timeline. I've got to develop these features. I've got to get these things implemented. I've got to get it through quality testing and peer reviews.' And security professionals are like, 'Well, you've got this vulnerability. You need to fix it,' and [they] don't even necessarily give justification to the developer as to why they need to spend their time on this vulnerability because it hasn't been fully explained and fully justified.
I think that the same can be true from the other perspective. I think that oftentimes the two groups have different goals -- not necessarily competing, but different goals -- and they don't necessarily understand fully what the other is trying to do. This is one of the reasons I think that having a development background [helps]. If you have a development background, and you understand what the other department is doing, then you're able to communicate in a more effective way, such that you understand their goals and you can integrate those things.
Ideally, information security should not be a step that developers have to go through in order to get a product released. Today, you have a product team. [Applications] go through quality testing, and then they go through security testing before they go through a release, right? If all of our developers were security folks, would you still need that separation? [Say] you're going to build a test suite. Why would we have to have one set of tests that looks at algorithm efficiency, function inputs and function outputs, and assertions on different types of data structures, and then a second set of tests for security? It would seem that if we're all speaking the same language, those first quality tests can test for both.
That said, there are some different types of threats that you have when you're looking at trust boundaries. When all of your internal functions are calling other functions that you trust, you're not necessarily going to want to look at those the same way that you would look at the functions or parts of your program that are taking input from untrusted sources, like the user who's running the application or data that you're reading from the internet. But I think that when you come from a development background and you understand information security, you can eliminate a lot of that tension or vice versa. If you come from a security background and then you learn the development side, then you can eliminate a lot of that tension between the two groups because then everybody's on the same team.
One last question, Mark, before I let you go. We've been talking a lot about automating security tests or tasks. Should security folks automate as much as possible, as early as possible? Or are there any practical boundaries there that they should avoid or scale back what they automate? When we talk about automation, from our [readers'] perspective, the dev and the test perspective, it's usually automate as much as you can, so that you can focus on the more important things. Is that the same kind of idea from the security perspective?
Baggett: Yeah, it is. You want to automate as much as you can as often as you can in information security. One of the stories that I tell is, when I first started to learn about programming in Python in particular, I was already developing in some other language, but I decided I was going to teach myself Python.
The very first project I wrote for myself was to automate the creation of this Excel spreadsheet that I would have to write at the end of every month. It was a typical management spreadsheet that had key performance indicators, like the number of antivirus alerts and the number of firewall alerts and the number of patches we deployed. But I'd do this report every month, and it would take me about two days to log into these 20 different systems, pull all these statistics back and put them into an Excel spreadsheet. So I thought I would just write this thing, and automate it. So I taught myself Python, and I wrote the program. And it took me 45 days to write this program and get it to the point where I was pulling from all these different systems and reproducing the same spreadsheet that it took me two days to create, and then I did the math on that. I'm like, 'Wait a minute. Two days versus 45 days -- I do this once a month. It's going to take me, like, three years before I even make up the hours that it took me to automate this process.'
So, initially, it seemed to me like this automation was a complete waste of time, but then this magical thing happened. Now at the end of every month, I just run this little Python program, and it builds my spreadsheet for me that used to take two days. Then I'd look at the report, and I'd be thinking to myself, 'You know, this report would be much more effective if I would incorporate statistics from the vertical market that we're in and compare them to others in the report. So, let me just add that to the report.' This would take me another hour. So, the next month I still spent two days on this report, even though I had an automated process to do it. But those two days took a report that really didn't make any impact on management and really [didn't] effect any change in the organization, and turned it into a report that really drove significant change, justified budgets and just completely changed the way people looked at information security in the organization. If it wasn't for the automation that got me past that two days of logging into these systems to pull those reports, I never would have gotten to the point where I was able to add that value to the report.
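The skeleton of a report generator like the one Baggett describes might look like this. The system names and statistics here are hypothetical stand-ins for the data his real tool pulled by logging into each console, and the sketch writes CSV with the standard library rather than a real Excel file (a package such as openpyxl would handle that part):

```python
import csv
import io

# Hypothetical per-system monthly KPI stats, standing in for data the
# real tool would collect by logging into each of the 20 systems
stats = [
    {"system": "firewall", "alerts": 120, "patches": 0},
    {"system": "antivirus", "alerts": 45, "patches": 0},
    {"system": "patching", "alerts": 0, "patches": 312},
]

# Consolidate everything into one sheet, plus a totals row
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["system", "alerts", "patches"])
writer.writeheader()
writer.writerows(stats)
writer.writerow({
    "system": "TOTAL",
    "alerts": sum(row["alerts"] for row in stats),
    "patches": sum(row["patches"] for row in stats),
})

print(buf.getvalue())
```

The payoff Baggett describes lives in the structure: once collection and consolidation are a function call, the two freed-up days can go into enriching the report instead of assembling it.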
So, even when the time spent on automation seems like a losing game -- 'I'm never going to make up this time that it's going to take me to write this program' -- oftentimes, you find that once you're past the chore of the daily grind, the time you can now spend adding value to what that process produced delivers the value you needed. So, I think that almost always there's value in automating these tasks that we've got in front of us and developing a program to make that thing happen for us automatically.