Monday, November 21, 2011

Are you positive?

It will not die, and this won’t end it, but I have to try.  “False positive” findings are hotly debated by some folks, but that debate often centers on erroneous definitions or assumptions.  Regardless of the type of system we are discussing, IDS, Anti-Virus, vulnerability tool, whatever- there are some basic ideas involved.
The Basics:
There is a defined condition which either exists or it doesn’t.
The tool or utility detects it, or it doesn’t.
This gives us a pretty simple set of situations, expressed in the table below:


                  Detected          Not Detected
Exists            True Positive     False Negative
Does Not Exist    False Positive    True Negative
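The four cells of the table above can be sketched in code. This is a minimal illustration; the host names and results below are made up for the example, not taken from any real tool.

```python
# Classify a (reality, detection) pair into one of the four outcomes
# from the table above.

def classify(condition_exists: bool, detected: bool) -> str:
    if condition_exists and detected:
        return "true positive"
    if condition_exists and not detected:
        return "false negative"
    if not condition_exists and detected:
        return "false positive"
    return "true negative"

# Example: score a scanner's findings against known ground truth.
# (Hypothetical hosts and results, for illustration only.)
ground_truth = {"host-a": True, "host-b": False, "host-c": True}   # is the flaw really there?
scan_results = {"host-a": True, "host-b": True, "host-c": False}   # did the tool flag it?

outcomes = {host: classify(ground_truth[host], scan_results[host])
            for host in ground_truth}
# host-a -> true positive, host-b -> false positive, host-c -> false negative
```

The point of spelling it out this way is that the outcome depends on two independent facts: what is actually true, and what the tool reported. Arguments about “false positives” usually go wrong when one of those two facts is assumed rather than checked.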


There are issues which complicate this simple picture.  One is how strictly we define the condition:

If I want my anti-virus to detect viruses and it misses one- that is a false negative to me.  It is supposed to detect malware, it missed, simple.  Unfortunately, modern malware is constantly evolving, and signatures and other triggers frequently lag behind the malware- this means the tool misses something it is not configured to detect.  You are still left wiping and rebuilding the computer, but there’s something to consider while looking for the right CD, DVD, or image file.  For what it’s worth, I still consider that a false negative; we use A/V to prevent malware in general, not to block WORMBOTTROJAN.X87.03 or other specific Bad Things with even more pathetic names.

We should be able to ignore two of these for this discussion: the two valid outcomes, true positives and true negatives.  Note I said we *should* be able to ignore them.  Sadly we can’t, because true positives are often dismissed as false positives.  Sometimes it is because we don’t care about the result, or it is not relevant in our environment.  Sometimes it is because we can’t handle the truth (thanks to Graham Lee, @iamleeg, I now refer to these as Unacceptable Positives).  Regardless of our level (or lack) of concern, or the discomfort caused by the truth, if the condition exists and it is detected, it is not a false positive.  It is often easy to prevent the utility from reporting on such findings, either by changing how it searches or how it reports.  Go ahead and accept the finding and dismiss it in your environment- just don’t call that a false positive.

Real false positives certainly do exist, and can be a burden.  They occur for myriad reasons, some specific to the technology in question.  Anti-Virus may trigger on a file which looks close to a known bit of malware.  People can screw up signatures.  There may be performance trade-offs: looking at larger chunks of network traffic may provide more accurate detection and identification at the expense of speed, either of the detection system, the network (when inline), or both.  Slow down the network, users scream.  Slow the system, traffic overruns the utility and some things will get by.  Tune for performance, miss a few detections.  For scanners, there is a limited amount of information which can be determined in a scan from “outside” a system.  An exhaustive network scan can find a lot of things, but it can also cause problems due to the load placed on the network.  The limited information available without logging in to inspect a system can lead to inaccurate results from the tool, positive or negative.  (Note: this is why I always recommend credentialed scans when possible- but that’s another post.)

True negatives are safe to ignore, nothing is reported because nothing is there.  Unless, of course, you are a typical security-minded person, in which case you always wonder if something has been missed. Caution leads us to try multiple tools to validate our non-findings (when budget and time allow).

False negatives are very real, too.  This is where anti-virus gets beaten up, and generally for good reason.  It isn’t only A/V; network load when using scanners and sniffers can lead to missed detections.  Sometimes the signatures just don’t work.  Sometimes the condition we are trying to detect has changed.  This is true for everything from malware to operating systems- new versions come out, patches are applied, and detections change.

Remember that the nature of the system will dictate the tolerance for errors.  A good example can be seen by comparing IDS (true passive intrusion detection systems) and IPS (inline, blocking intrusion prevention systems).  While the technologies are very similar, the goals are different.  A good IDS will not miss detections; false negatives are a serious problem because we don’t want to miss anything, which means false positives are more acceptable if the trade-off means not missing Bad Things.  An IPS false positive means we block valid network traffic, users wail and gnash teeth, and security takes a beating for hindering the operation of the organization again.  Keeping false positives to a minimum is the priority, which means it is more likely that some false negatives will occur.  If the cost of the occasional missed detection is lower than the cost of false positives blocking valid traffic, the trade-off is worth it.

Knowing the strengths and weaknesses of your environment and the tools you use is important in tuning for optimum results.  Yes, tuning- you share responsibility here.  Choosing the right tools and using them properly will reduce the pain that leads to tedious blog posts like this.