Getting to the bottom of the software vulnerability disclosure debate
The vulnerability disclosure debate rages on: Enterprises should know they are at risk, but vendors need time to patch flaws. Which side should prevail? Expert Michael Cobb discusses.
Software vulnerabilities will always exist; the bad guys will try to find and exploit them while the good guys will try to find them and … .
How this sentence should finish is a hot topic among security researchers and software vendors, and the debate has been raging since the Morris worm became the first piece of malware to gain mainstream attention back in 1988.
Project Zero, a team of security analysts employed by Google to find zero-day exploits, recently reignited the debate over when and how researchers should disclose vulnerabilities when it automatically published details of several flaws in Microsoft products as soon as its 90-day deadline expired, flaws the team had previously reported privately to the software giant.
The rationale behind publicly disclosing details of software vulnerabilities, even potentially dangerous ones, is twofold: the threat of disclosure puts pressure on a vendor to issue a patch, and customers have the right to know if their systems are at risk so they can make an informed decision on how best to protect them until a patch is released. The opposing view is that keeping software vulnerabilities secret keeps them out of the hands of hackers. The problem with this approach is that hackers can discover vulnerabilities on their own, and software companies are unlikely to spend time and money fixing undisclosed vulnerabilities.
The software industry was initially against independent vulnerability research and public disclosure of any kind. For example, in 2005 Cisco took legal action against security researcher Mike Lynn and Black Hat for explaining how to run attack code to gain control of Cisco routers. Thankfully, the industry eventually agreed that vulnerability disclosure is beneficial, but it still can't agree on how it should be handled. The main sticking point is how long a company should be given to fix a vulnerability before it is made public. Without a deadline, vendors are unlikely to prioritize patch development. In 1999, the hacker website Nomad Mobile Research Centre said it would give a vendor a month's notice before it went public. A year later, the RFPolicy gave vendors only five working days to establish communication with the person who reported a bug to them. These very short windows were partly born of frustration that many vendors were not responding and developing fixes when they received details of a software vulnerability. Arbitrary deadlines remove any sense of context, though, and patch development and testing times can vary.
The industry has moved toward promoting a responsible software vulnerability disclosure model whereby everyone involved agrees to allow a period of time for the vulnerability to be patched before publishing the details. Attempts to agree on a standard response period have been met with limited success, with companies and research teams operating under different time frames. ISO/IEC 29147:2014 gives guidelines on how vendors should receive information about potential vulnerabilities in their products or online services and how to disclose them, but it doesn't cover how quickly vendors should issue a fix. Microsoft has its own policy, Coordinated Vulnerability Disclosure, for engaging with researchers, favoring responsible disclosure over full disclosure. Responsible disclosure dictates that the vulnerability is reported privately to the vendor and no one else until the vendor issues a patch. It also favors coordinated public disclosure that coincides with the vendor update release.
There are, of course, differing views on the whole topic of how to handle software vulnerabilities. After demoing proof-of-concept code for a SQL Server exploit at Black Hat 2002, security researcher David Litchfield later questioned the benefit of publishing his code when he realized it may have been used as a template for the Slammer worm. The Anti Security Movement is completely opposed to the full disclosure of information relating to software vulnerabilities, believing that withholding such information will prevent script kiddies from using it to compromise systems. Dan Geer, CISO of In-Q-Tel, a nonprofit that supports the CIA, put forward the idea that the U.S. government should openly corner the world vulnerability market by being the highest bidder and making the vulnerabilities it buys public; this would not only encourage more researchers to look for vulnerabilities, but would also disarm most malware writers and hackers.
What is certain is that trusting in the security of secrecy doesn't work long term, but having rigid disclosure timetables that don't take into account different scenarios won't benefit users either. Microsoft made a very good point in its response to Google's Project Zero actions regarding the inability of most customers to take mitigating action when a vulnerability is disclosed before a patch is ready: "Those in favor of full, public disclosure believe it forces customers to defend themselves, even though the vast majority take no action, being largely reliant on a software provider to release a security update."
Google's Project Zero has since added more leeway to its software vulnerability disclosure policy with an optional two-week extension for vendors that are close to making a patch available, but there is little consensus on a deadline that pushes vendors to respond promptly while still giving them enough time to manage a sensitive, resource-intensive process. CERT's 45-day disclosure policy says there needs to be a balance between the public's need to be informed of security vulnerabilities and the vendor's need for time to respond. Yahoo's 90-day policy notes that, "the more quickly we address the risks, the less harm an attack can cause," while TippingPoint's Zero Day Initiative considers a 120-day policy adequate.
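To make the spread of deadlines concrete, here is a minimal sketch of how the disclosure windows cited above translate into calendar dates. The day counts come from the policies mentioned in this article; the function name, the mapping, and the example report date are purely illustrative and not part of any vendor's actual tooling.

```python
from datetime import date, timedelta

# Disclosure windows cited in the article, in days from private report
# to earliest public disclosure. The dict itself is illustrative.
POLICIES = {
    "CERT": 45,
    "Google Project Zero": 90,  # plus an optional 14-day grace period
    "Yahoo": 90,
    "ZDI (TippingPoint)": 120,
}

def disclosure_date(reported: date, policy: str, grace_days: int = 0) -> date:
    """Return the earliest public-disclosure date under a given policy."""
    return reported + timedelta(days=POLICIES[policy] + grace_days)

# Example: a flaw reported privately on Jan. 1, 2015.
reported = date(2015, 1, 1)
for name in POLICIES:
    print(f"{name}: public on or after {disclosure_date(reported, name)}")
```

The two-week extension simply shifts the 90-day date by 14 days; the gap between the shortest and longest policies here is a full 75 days, which is the context-versus-urgency trade-off the debate keeps circling.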
A vulnerability disclosure policy that ignores context has the potential to do more harm than good. The industry needs to work together with well-intentioned security researchers to continually improve the quality of software in a responsible manner, leaving personal frustrations and beliefs aside for the good of the Internet as a whole.
About the author:
Michael Cobb, CISSP-ISSAP, is a renowned security author with over 20 years of experience in the IT industry. He co-authored the book IIS Security and has written numerous technical articles for leading IT publications. He was also formerly a Microsoft Certified Database Manager and a registered consultant with the CESG Listed Advisor Scheme (CLAS). Cobb has a passion for making IT security best practices easier to understand and achievable. His website offers free security posters to raise employee awareness of the importance of safeguarding company and client data and of following good practices.