If you’ve dealt with IT security for any length of time, chances are that you’ve come across the claim that research has shown automated tools can only detect 45% of vulnerabilities. It is often cited to illustrate the need for participation of human experts in security and penetration tests. However, is the claim really true?
You may find it in, among many other places, the latest OWASP Testing Guide.
Given this source in particular, one might reasonably expect the claim to be correct… but, as you may have guessed, that is not the case. Or rather, not entirely. Some other sources cite the original research, where the number 45% originated, more or less correctly: tools are capable of detecting 45% of types of vulnerabilities. This version may be found (again, among many other places) in several OWASP presentations.
Many have cited these presentations. For example Mitnick Security, the company of the world-renowned Kevin Mitnick, cites OWASP almost word for word on its website. However, as a closer look at the text of their site shows, even they, when starting from an exact citation of OWASP, managed in some cases to interpret the conclusions of the original research to mean “automated tools detect only 45% of vulnerabilities”.
Leaving the misunderstanding/misquoting of the original conclusions to one side, even in cases when the conclusions are cited correctly, many seem to gloss over the fact that the research which produced the “45%” result was limited in scope to only certain types of tools and took place all the way back in 2007, so its results don’t necessarily describe the current state of affairs… But we’re getting ahead of ourselves. First, let’s take a look at where the number actually came from.
The OWASP Testing Guide is one of the few places where we may find an attribution (although the reference in OTG should point to , not ), which leads us to a presentation from BlackHat DC 2007 by a team (Robert A. Martin, Sean Barnum and Steve Christey) from MITRE/Cigital.
Unfortunately, by itself, the slide from the presentation which is cited in OTG doesn’t give us much information. We may deduce from it that 55% of CWEs were found not to be covered by - presumably - some tested or analyzed tools, but that is about it.
Since the other slides in the presentation don’t give us any more information regarding the presumed 45% detection rate, we need to dig a bit deeper. After a while of Googling, one might find a couple of articles by the same authors on the MITRE website (one which probably served as a basis for the BlackHat talk and one from CrossTalk magazine), both titled the same as the presentation. Neither of them, unfortunately, sheds any light on the issue of the detection rate of automated tools.
I have to admit that this was the point where my Google-Fu failed me, as I was unable to find anything more exact with regard to the original research. I was, however, able to find e-mail contacts for all three authors of the original paper/presentation from BlackHat DC 2007, and one of them - Bob Martin - was kind enough to reply to my message and explain what their work was based on. The following paragraphs are the contents of the e-mail I received, unedited except for the use of bold font for what I believe are the most important parts.
As we may see - along with much other information for which I’m very grateful to Bob Martin - the original research only covered static analysis tools (SAST). Even if the research weren’t as old as it is, this fact alone shows its results should not be interpreted and presented in the way they very often are.
Don’t get me wrong - I don’t claim that tools alone can find every type of vulnerability out there. They can’t - automated scanners and other tools are great at finding certain types of vulnerabilities, but for others, they are either unable to find them at all or don’t even come close to what an experienced penetration tester, analyst or auditor may discover. I don’t even claim that tools are currently capable of finding more than 45% of all vulnerability types - I don’t know whether or not they are, and as far as I can tell, no one else does either.
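To make the distinction concrete, here is a small, purely illustrative sketch (hypothetical code, not from the original research): the first function concatenates user input into SQL, a syntactic pattern that static analyzers reliably flag as SQL injection; the second contains a business-logic flaw - a missing authorization check - that no syntactic pattern marks as a vulnerability, which is the kind of issue tools typically cannot find but a human reviewer can.

```python
import sqlite3

# Toy in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 1), ("bob", 0)])

def find_user_unsafe(name):
    # String concatenation into SQL: the classic pattern that
    # static analysis tools detect as SQL injection.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def promote(requesting_user, target_user):
    # Business-logic flaw: nothing checks that requesting_user is an
    # admin. The code is syntactically clean (parameterized query),
    # so automated tools have nothing obvious to flag here.
    conn.execute("UPDATE users SET is_admin = 1 WHERE name = ?",
                 (target_user,))

# A crafted input makes the unsafe query leak every row.
print(find_user_unsafe("' OR '1'='1"))
```

Running the last line returns all users instead of one, showing why the injection pattern is mechanically detectable, while the `promote` flaw only becomes visible once a human asks “who is allowed to call this?”.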
And that is the point - although we might like to have hard numbers to back up why the human factor is indispensable when it comes to finding vulnerabilities, citing the results of a study from 2007 as current, or using a misquoted version of its conclusions in marketing materials in order to convince customers that they really need our experienced pentesters in order to be secure, is something we should try very hard to avoid.