Testing applications in production vs. non-production: weighing the benefits
To ensure proper application security testing, production and non-production systems should both be tested. In this tip, expert Kevin Beaver weighs the pros and cons.
Application security testing can present many questions, and one of the most common is which systems should be tested for vulnerabilities. Should testing be performed on systems in the development environment, the staging environment, the production system or some combination of those?
Clients are often pressured into testing the applications they run in production, so there can be some confusion about the value and acceptance of testing non-production systems. In many cases, those who request the tests are unaware of the risks of testing production, or they don't care because the consequences don't affect them.
So what should be tested? Given the people, business and technical complexities involved, this is not a simple question. Organizations operating across different industries tend to have their own unique requirements and approaches in terms of risk tolerance and managing their security programs.
Testing production systems gives the most realistic view of how an application is actually exposed and what can be exploited. In a perfect world, organizations would simply test production and be done with it; however, several side effects of testing production systems should be considered (a sketch of how to pace test traffic to limit some of these effects follows the list), including:
- a reduction in system performance;
- potential for denial of service;
- potential for email flooding via web forms with no CAPTCHA protection;
- risk of databases getting filled with junk data that can't be easily removed;
- potential exposure of sensitive information and source code to outside parties; and
- the need to use IT, security and development resources to ensure the production environment remains stable during application testing.
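To illustrate the performance and denial-of-service concerns above, here is a minimal sketch of how probing traffic can be deliberately paced so the assessment itself doesn't degrade a production system. The target host, paths and delay are hypothetical placeholders that would be set by the rules of engagement; most commercial scanners expose equivalent throttling settings.

```python
# Minimal sketch: throttling security probes against a production target so the
# assessment itself does not degrade performance. The URL, paths and delay are
# hypothetical placeholders -- tune them to the agreed rules of engagement.
import time
import urllib.request

TARGET = "https://www.example.com"          # hypothetical production host
PATHS = ["/", "/login", "/search?q=test"]   # hypothetical pages to probe
DELAY_SECONDS = 2.0                         # pause between requests to limit load

for path in PATHS:
    url = TARGET + path
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"{url} -> HTTP {resp.status}")
    except Exception as exc:
        print(f"{url} -> error: {exc}")
    time.sleep(DELAY_SECONDS)               # deliberate pacing instead of hammering the server
```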
Even though there are many drawbacks to testing production environments, such testing can paint the most accurate picture of how the latest code is currently exposed to the elements -- it's plain, simple and based in the real world.
Still, there's value in testing non-production environments, such as development, quality assurance and staging systems. Because such testing has far less impact on production systems, many people choose this route.
But non-production testing is not necessarily reflective of the real world, as non-production application environments likely run different code. They're not necessarily running on the same server operating systems, and they also tend to be at different patch levels. Add the varying configurations at the application and server levels on top of that, and it's not uncommon to see vulnerability and penetration testing results that vary widely between production and non-production.
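One way to make that divergence concrete is to compare basic fingerprints of the two environments before testing. The sketch below, which assumes hypothetical production and staging URLs, compares a few response headers that commonly reveal differences in server software and configuration; header checks only scratch the surface, but they often show that the environments aren't the twins they're assumed to be.

```python
# Minimal sketch: comparing a few response headers between production and a
# non-production copy to spot configuration and patch-level drift. The URLs
# are hypothetical placeholders; hardened environments may suppress these headers.
import urllib.request

ENVIRONMENTS = {
    "production": "https://www.example.com/",
    "staging": "https://staging.example.com/",
}
HEADERS_OF_INTEREST = ["Server", "X-Powered-By", "Content-Security-Policy"]

fingerprints = {}
for name, url in ENVIRONMENTS.items():
    with urllib.request.urlopen(url, timeout=10) as resp:
        fingerprints[name] = {h: resp.headers.get(h, "<not set>") for h in HEADERS_OF_INTEREST}

for header in HEADERS_OF_INTEREST:
    prod = fingerprints["production"][header]
    stage = fingerprints["staging"][header]
    marker = "MATCH" if prod == stage else "DRIFT"
    print(f"{marker:5} {header}: production={prod!r} staging={stage!r}")
```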
Another issue is that making non-production systems accessible from the internet can expose them unnecessarily, especially when they run known vulnerable code, are missing patches or hold copies of production data that aren't properly secured.
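If a non-production system must be reachable from the internet for a test, restricting who can reach it limits that exposure. Here's a minimal sketch assuming a Flask-based staging application and hypothetical tester IP addresses; in practice the same restriction is usually enforced at the firewall or load balancer instead.

```python
# Minimal sketch: restricting an internet-facing staging app to an allowlist of
# tester IP addresses. The application and the addresses are hypothetical.
from flask import Flask, abort, request

app = Flask(__name__)
ALLOWED_IPS = {"203.0.113.10", "198.51.100.25"}   # hypothetical tester addresses

@app.before_request
def limit_to_testers():
    # Reject any request that does not originate from an approved tester address.
    if request.remote_addr not in ALLOWED_IPS:
        abort(403)

@app.route("/")
def index():
    return "staging environment"

if __name__ == "__main__":
    app.run()
```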
If you test non-production systems, will that be documented as such in the final report? Will the security posture be assumed to be reflective of the production environment? One thing you can and should do when you uncover vulnerabilities in a non-production environment is to immediately check whether the same flaws exist in production. This is especially important for issues such as SQL injection, cross-site scripting and missing server patches.
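As a simple illustration of that follow-up step, the sketch below, with a hypothetical production URL and parameter name, rechecks a reflected cross-site scripting finding from staging in a low-impact way: it sends a harmless marker string and reports whether production echoes it back unencoded.

```python
# Minimal sketch: a low-impact recheck in production of a reflected XSS finding
# from a non-production test. The URL and parameter name are hypothetical; the
# probe string is harmless and is only checked for unencoded reflection.
import urllib.parse
import urllib.request

PROD_URL = "https://www.example.com/search"   # hypothetical production endpoint
PARAM = "q"                                   # hypothetical parameter flagged in staging
MARKER = "xsscheck<'\">"                      # benign marker with characters that should be encoded

query = urllib.parse.urlencode({PARAM: MARKER})
with urllib.request.urlopen(f"{PROD_URL}?{query}", timeout=10) as resp:
    body = resp.read().decode(resp.headers.get_content_charset() or "utf-8", errors="replace")

if MARKER in body:
    print("Marker reflected unencoded -- production likely shares the staging flaw.")
else:
    print("Marker not reflected unencoded -- confirm manually before closing the finding.")
```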
Again, there's no definitive answer as to which environment to use for testing; it could be production, non-production or some combination of both. The important thing is ensuring that application testing is being done and that the necessary controls exist to properly mitigate identified vulnerabilities.
Get together with the right people in IT, security and the necessary business units in order to develop a testing standard. That way, when customers, business partners or auditors ask how your application security testing is performed, everyone will be on the same page.