Vulnerability Statistics Suck

Starting this Saturday, Black Hat USA 2013 will be convening at Caesar’s Palace in Las Vegas. In this series, intended to preview many of the talks and presentations scheduled for the event, SiliconANGLE will focus on the exploitable vulnerabilities associated with big data and how those vulnerabilities can be limited.

Today’s article will focus on a presentation entitled ‘Buying Into The Bias: Why Vulnerability Statistics Suck’ offered by Steve Christey and Brian “Jericho” Martin. Christey and Martin will explore the limitations of improperly gathered and analyzed statistics and how they can ultimately be deleterious to an organization’s bottom line.

Martin has, for the past 15 years, worked in the field of collecting, studying and cataloging vulnerabilities. He is currently the Content Manager for the Open Source Vulnerability Database (OSVDB) and has actively advocated for the evolution of VDBs for many years. His work in the vulnerability disclosure process has included seeking new vulnerabilities, writing advisories, coordinating disclosure and working with several organizations, helping them to improve their vulnerability handling and response. Martin is also a member of the Common Vulnerabilities and Exposures (CVE) Editorial Board.

Co-hosting the presentation with Martin, Christey is a Principal Information Security Engineer in the Security and Information Operations Division of The MITRE Corporation. Christey also serves as the editor of the CVE list as well as being the Chair of the CVE Editorial Board. He has been a contributor to vulnerability studies and was co-author of the influential “Responsible Vulnerability Disclosure Process” IETF draft of 2002. His focus of late has revolved around secure software development and testing, consumer-friendly software security metrics, the theoretical underpinnings of vulnerabilities, and vulnerability research.

The abstract for the presentation highlights how everyone, from academic researchers and journalists to security and software vendors and individual enterprises, will often undertake an analysis of vulnerability statistics, utilizing large stores of vulnerability data. While the statistics gleaned from such analysis often claim to demonstrate trends in disclosure (e.g., the number or type of vulnerabilities, or their relative severity), too often they are used incorrectly to compare competing products and to prove which is the better option for an organization.

To say Christey and Martin are unimpressed with such statistical analyses would be an understatement. They go so far as to call them, “…faulty or just pure hogwash.” They attribute this to the use of data that, while easily available, is drastically misunderstood. From this footing, firms then go on, “…to craft irrelevant questions based on wild assumptions,” all without even an attempt to determine the limitations of the data being analyzed. As the abstract points out, “This leads to a wide variety of bias that typically goes unchallenged, that ultimately forms statistics that make headlines and, far worse, are used for budget and spending.”

It is this last point that, if you are attending Black Hat USA on behalf of your organization, might make this session worth attending. Martin and Christey will provide concrete examples of both the misuse and abuse of vulnerability statistics. They will share their opinion on which studies have been most reliable over time and how you might critically judge the claims of future studies.

This presentation promises to be a workshop where attendees will be instructed on the kinds of documented and undocumented bias that can exist in a vulnerability data source, on how variations in counting hurt comparative analyses, and on all of the ways vulnerability information is observed, cataloged and annotated.

Christey and Martin are currently scheduled to present during the 3:30pm session on Wednesday, July 31.