The developer's role in application security strategy
Developers often pay lip service to being integral to application security, but they usually don't consider vulnerabilities until much too late in the development process.
Application security strategy starts and ends in the software development lifecycle ... at least, that's what a lot of people say.
It's true that security is a large part of software development: From developing standards to modeling threats to testing for security flaws, it's good to get -- and keep -- developers on board throughout the process. This has been a convenient, well-intentioned message for the past couple of decades. But rather than just saying that developers should be on board with security, it's important to understand why.
Security can be baked into software, but it needs to be included in the process as early as possible. Developers often claim that certain security vulnerabilities simply cannot be resolved because of the architecture of the system or because a fix isn't supported for whatever reason. That's understandable, but such limitations don't make it right.
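A concrete illustration of "baked in early" versus "bolted on later" is SQL injection, one of the classic flaws that is trivial to prevent at design time and painful to retrofit. The sketch below (a minimal example using Python's built-in sqlite3 module; table and values are made up for illustration) contrasts the vulnerable string-concatenation pattern with a parameterized query:

```python
import sqlite3

# Throwaway in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable pattern: building the query by string concatenation.
# A crafted input turns the WHERE clause into a tautology.
user_input = "' OR '1'='1"
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # every row comes back despite the bogus name

# Safe pattern: a parameterized query treats the input as data only.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "' OR '1'='1"
```

Choosing the parameterized style as a coding standard up front costs nothing; hunting down every concatenated query in a mature codebase is exactly the kind of after-the-fact remediation this article is about.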
Imagine adding on a garage, bedrooms or another significant component to a house well after it's built. Such additions can suffer from structural, foundational and drainage problems if the proper design and quality of work don't go into them.
The same challenges apply to software that's been around for a while. In many cases, trying to add on core security elements is downright impossible because the application environment is just too big to change. This explains why web application firewalls (WAFs) have grown in popularity over the years. Do you have application security problems that you can't fix? Stick a WAF in front and tell everybody the issues have been resolved.
Sometimes, the only way you can address security issues is with compensating controls. They're not ideal, however; that's why it's so important to treat software development and application security strategy as a formal business function on a periodic and consistent basis.
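For readers unfamiliar with what such a compensating control looks like in practice, here is an illustrative WAF rule in ModSecurity syntax. The rule ID, message and scope are placeholders, and this is a sketch of the pattern rather than a production ruleset; it blocks SQL injection probes at the perimeter because the underlying application flaw can't yet be fixed in code:

```apacheconf
# Compensating control: detect and deny SQL injection attempts
# against an application whose code can't be remediated yet.
# (Rule id and message are illustrative placeholders.)
SecRule REQUEST_URI|ARGS "@detectSQLi" \
    "id:100001,phase:2,deny,status:403,log,\
    msg:'SQLi blocked at WAF -- compensating control, not a fix'"
```

Note that the rule does nothing to remove the vulnerability itself; it only narrows the path to it, which is precisely why compensating controls shouldn't be mistaken for remediation.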
This requires getting all the right people on board as part of a security committee or a similar group. This group must lay the groundwork for application security in terms of the standards and policies involved, formal threat modeling, and ongoing oversight of vulnerabilities and risks. However, what often happens instead is a familiar escalation of issues:
- business requirements are established;
- developers -- both in-house and outsourced -- are tasked with the work;
- informal security discussions happen throughout the process, but, by and large, there's no real security vision or leadership;
- developers lob the code over the fence to DevOps/DevSecOps staff and security teams for deployment and evaluation;
- informal vulnerability and penetration testing is performed and issues are uncovered way too late in the process; and
- developers are -- hopefully -- made aware of vulnerabilities, and the assumption is that they'll solve them.
This begins the back-and-forth cycle of application security remediation, which typically involves addressing the issues and then rolling out new security features that should have been included in the software all along.
I have worked with many developers in my years of information security consulting and one thing is clear: Developers are not on board with security like they should be. Simply put, the expectations are not there. This is a complicated issue, but it is largely reflective of management and leadership -- not only on the side of IT and security, but on the business side, as well. Security is sometimes used as a competitive differentiator, but time-to-market is still a top priority for most projects.
Another thing that stands out in terms of developer involvement with security is that developers are often kept from using security tools to their advantage. Web vulnerability scanners, such as Netsparker and Acunetix, as well as source code analyzers, such as Veracode and CodeSonar, are often missing from developers' toolboxes. It's a squandered opportunity because vulnerabilities could be detected and addressed much earlier in the process.
In some cases, this is because of a lack of budget or simply not knowing what to do. However, I have witnessed many situations in which IT and security teams want to keep that work to themselves, leaving developers and QA professionals in the dark until deployment -- when it's too late to do anything about it.
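Putting scanners in developers' hands can be as simple as wiring one into the build pipeline so findings surface on every pull request instead of at deployment. The sketch below uses GitHub Actions and the open-source Semgrep scanner purely as stand-ins -- the commercial tools named above would slot into the same place, and the workflow details are illustrative assumptions, not a prescription:

```yaml
# Illustrative CI job: run a static analysis scan on every pull request
# so developers see security findings before code leaves their hands.
name: static-analysis
on: [pull_request]
jobs:
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install scanner
        run: pip install semgrep
      - name: Scan and fail the build on findings
        run: semgrep scan --config auto --error
```

The design point is less the specific tool than the feedback loop: a failed check on a pull request reaches the developer who wrote the code, while a report generated at deployment reaches nobody in time to act on it.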
Another challenge for developers is a lack of application security training. It's very rare to find a development staff that is given the time and budget to attend security conferences or take continuing education courses on the topic. This is an HR issue that must be solved at higher levels of the business.
Still, developers looking to extend their security knowledge have plenty of YouTube videos and related resources at their disposal. If this is the only approach that developers can take to learn how to bake security into their software, just a few hours a month may be all they need to get to where they need to be. If you have a big enough why, you'll always find the how.
There's no doubt that software developers play a critical role in the overall security of enterprise applications. Just don't get caught up in the "We address security in the SDLC [software development lifecycle]" stance if that's not happening. You walk the application security talk by getting developers involved with security in intentional ways that add tangible value that's measured and improved over time.
This certainly applies to some businesses more than others. For instance, a manufacturing company will likely not have developer involvement to the extent that a cloud application service provider would.
You'll need to adapt this approach to your unique business situation, while also considering both culture and politics. Figure out how to best implement and manage a developer-focused application security strategy to elevate your security program to a much higher level.