Monday, November 21, 2011

Are you positive?

It will not die, and this won’t end it, but I have to try.  “False positive” findings are hotly debated by some folks, but that debate often centers on erroneous definitions or assumptions.  Regardless of the type of system we are discussing- IDS, anti-virus, vulnerability tool, whatever- there are some basic ideas involved.
 
The Basics:
There is a defined condition, which either exists or it doesn’t.
The tool or utility detects it, or it doesn’t.
This gives us a pretty simple set of situations, expressed in the table below:
 

                            Detected                    Not Detected
Condition exists            Valid: True Positive        Invalid: False Negative
Condition does not exist    Invalid: False Positive     Valid: True Negative
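If it helps to see the same four outcomes in code, here is a minimal Python sketch; the function and names are mine, purely for illustration, and are not tied to any particular product:

```python
# Minimal sketch of the four outcomes; names are purely illustrative.
def classify(condition_exists, detected):
    if condition_exists and detected:
        return "true positive"   # valid: it was there and we caught it
    if condition_exists and not detected:
        return "false negative"  # invalid: it was there and we missed it
    if not condition_exists and detected:
        return "false positive"  # invalid: nothing there, but we alerted anyway
    return "true negative"       # valid: nothing there, nothing reported

# Quick sanity check of all four combinations.
for exists in (True, False):
    for detected in (True, False):
        print(exists, detected, "->", classify(exists, detected))
```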

 

There are issues which complicate this simple picture.  One is how strictly we define the condition:

If I want my anti-virus to detect viruses and it misses one- that is a false negative to me.  It is supposed to detect malware, it missed, simple.  Unfortunately, modern malware is constantly evolving and signatures and other triggers are frequently behind the malware- this means the tool misses something it is not configured to detect.  You are still left wiping and rebuilding the computer, but there’s something to consider while looking for the right CD, DVD, or image file.  For what it’s worth, I still consider that a false negative, we use A/V to prevent malware in general, not to block WORMBOTTROJAN.X87.03 or other specific Bad Things with even more pathetic names.

We should be able to ignore two of these for this discussion, the two I have labeled “Valid”.  Note I said we *should* be able to ignore.  Sadly we can’t, because true positives are often dismissed as false positives.  Sometimes it is because we don’t care about the result, or it is not relevant in our environment.  Sometimes it is because we can’t handle the truth.  (Thanks to Graham Lee, @iamleeg, I now refer to these as Unacceptable Positives).  Regardless of our level (or lack) of concern, or the discomfort caused by the truth, if the condition exists and it is detected it is not a false positive.  It is often easy to prevent the utility from reporting on findings, either by changing how it searches or how it reports.  Go ahead and accept the finding and dismiss it in your environment- just don’t call that a false positive. 

Real false positives certainly do exist, and can be a burden.  There are myriad reasons they occur, some specific to the technology in question.  Anti-virus may trigger on a file which looks close to a known bit of malware.  People can screw up signatures.  There may be performance trade-offs: looking at larger chunks of network traffic may provide more accurate detection and identification at the expense of speed, either of the detection system, the network (when inline), or both.  Slow down the network, users scream.  Slow the system, traffic overruns the utility and some things will get by.  Tune for performance, miss a few detections.  For scanners, there is a limited amount of information which can be determined in a scan from “outside” a system.  An exhaustive network scan can find a lot of things, but it can also cause problems due to the load it places on the network.  The limited information available without logging in to inspect a system can lead to inaccurate detections by the tool, positive or negative.  (Note: this is why I always recommend credentialed scans when possible- but that’s another post).

True negatives are safe to ignore, nothing is reported because nothing is there.  Unless, of course, you are a typical security-minded person, in which case you always wonder if something has been missed. Caution leads us to try multiple tools to validate our non-findings (when budget and time allow).

False negatives are very real, too.  This is where anti-virus gets beaten up, and generally for good reason.  It isn’t only A/V: network load when using scanners and sniffers can lead to missed detections.  Sometimes the signatures just don’t work.  Sometimes the condition we are trying to detect has changed.  This is true for everything from malware to operating systems- new versions come out, patches are applied, and detections change.

Remember that the nature of the system will dictate the tolerance for errors.  A good example can be seen by comparing IDS (true passive intrusion detection systems) and IPS (inline, blocking intrusion prevention systems).  While the technologies are very similar, the goals are different.  A good IDS should not miss detections; false negatives are the serious problem because we don’t want to miss anything, which means false positives are more acceptable if the trade-off means not missing Bad Things.  An IPS false positive means we block valid network traffic, users wail and gnash teeth, and security takes a beating for hindering the operation of the organization again.  Keeping false positives to a minimum is the priority, which means it is more likely that some false negatives will occur.  If the cost of the occasional missed detection is lower than the cost of false positives blocking valid traffic, the trade-off is worth it.
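As a rough illustration of that trade-off, here is a hedged Python sketch- the scores, labels, and threshold values are invented for the example, but it shows how raising an alerting threshold trades false positives for false negatives:

```python
# Toy illustration of the FP/FN trade-off when tuning an alert threshold.
# Scores and ground-truth labels are made up; real systems score traffic
# with signatures, heuristics, anomaly models, and so on.
events = [
    # (suspicion_score, actually_malicious)
    (0.95, True), (0.80, True), (0.62, True), (0.40, True),
    (0.70, False), (0.55, False), (0.30, False), (0.10, False),
]

def count_errors(threshold):
    false_positives = sum(1 for score, bad in events if score >= threshold and not bad)
    false_negatives = sum(1 for score, bad in events if score < threshold and bad)
    return false_positives, false_negatives

# An IDS might run "hot" (low threshold, more FPs, fewer FNs);
# an inline IPS usually runs "cooler" to avoid blocking legitimate traffic.
for threshold in (0.35, 0.60, 0.85):
    fp, fn = count_errors(threshold)
    print(f"threshold={threshold:.2f}: false positives={fp}, false negatives={fn}")
```

Run it and the low threshold produces more false positives and fewer false negatives, while the high threshold does the reverse- exactly the IDS versus IPS tension described above.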

Knowing the strengths and weaknesses of your environment and the tools you use is important in tuning for optimum results. Yes, tuning- you share responsibility here- choosing the right tools and using them properly will reduce the pain that leads to tedious blog posts like this.

 

Jack

Friday, November 18, 2011

(ISC)2 election reminder

Not that you are likely to forget, but if you are an (ISC)2 member (hold the CISSP or other certification), the election is on for the Board of Directors.

There were a handful of unendorsed candidates who tried to make it onto the ballot.  One candidate, Wim Remes, made the ballot.  Two others, Rolf Moulton and Javed Ikbal, missed making the ballot but are running as write-in candidates.  And, of course, there is the endorsed slate.

First: you should vote if you are eligible. That’s the most important part- participate, and vote for those you feel best represent you.

Second: My opinion may not be relevant to you, but I’m voting for Wim. And writing in Rolf and Javed. I think Wim can win, and I hope he does- I have faith in him.  I also hope that frustration with (ISC)2 can get Javed and Rolf on the board, too.

You can vote for up to four.  I’ll be voting for three.  I will say that at least one of the board “elders” represents what I feel is wrong with (ISC)2, and to a certain extent, InfoSec.  Choose wisely, and hope it makes a difference.

Oh, yeah- it is the (ISC)2 website, so the links don’t go where you expect and one thing labeled “ballot” dead-ends at the candidate page.  At least I didn’t see any certificate errors this time.  If you have problems voting, complain to (ISC)2.

Go here to vote:

https://webportal.isc2.org/custom/ElectionBallot.aspx?YEAR=2011

If you choose to write in candidates, please make sure their names are spelled correctly.  There are instructions on both Javed’s and Rolf’s websites.

 

Jack

Monday, November 7, 2011

End of year predictions

The end of the year is approaching, so the annual flurry of predictions must be right around the corner.  Or maybe that smell is just a septic pumping truck- the contents are similar, except there are regulations covering the disposal of septic waste.

Here are my predictions:

People will predict stuff, and for the most part only their successes will be remembered.

Some people will predict the same things they have been predicting for years (or maybe even decades), and if they are eventually “right”, no one will ask about all the times they were wrong- and even if anyone did, it would be shrugged off with “I was right, just off on timing”.

2012 will not be the year of Linux on the desktop.

And because I feel compelled to make one real prediction, Windows 8 as a desktop OS will be as disappointing as Windows 7 has been successful.

No matter what is predicted or what actually happens, randomness will not get the credit it deserves as people look both forward and backwards in time. Admitting that “life is a crap shoot” doesn’t get you the respect it should.
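To put a little (admittedly contrived) code behind that point, here is an illustrative Python simulation: give enough pundits coin-flip accuracy and a few will look like seers through luck alone. The numbers are arbitrary, chosen only to make the point:

```python
import random

# Purely illustrative: 1,000 pundits each make 10 yes/no predictions by
# flipping a coin.  Some will go 9-for-10 or better through luck alone,
# and those are the ones whose "track record" gets remembered.
random.seed(2011)  # arbitrary seed so the example is repeatable

PUNDITS = 1000
PREDICTIONS = 10

lucky = 0
for _ in range(PUNDITS):
    correct = sum(random.random() < 0.5 for _ in range(PREDICTIONS))
    if correct >= 9:
        lucky += 1

print(f"{lucky} of {PUNDITS} coin-flipping pundits got at least 9 of "
      f"{PREDICTIONS} predictions right by pure chance")
```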


I’ve listened to a couple of interesting books in the past several months, and a recent episode of the Freakonomics podcast does a great job of summarizing a lot of ideas into a one-hour show.  Short version: random stuff happens, and that makes prediction hard.  Really hard.  Also: so-called “experts” are usually wrong- and the more adamant and certain an “expert” is, the more likely they are to be wrong.

The Freakonomics “Folly of Prediction” episode does a great job of distilling a lot of research into an easily digestible audio format.  (Note: If you aren’t familiar with Freakonomics, you should be- they make economics entertaining, challenging, and informative.  I’ve read both books and am a regular listener to the podcast.  Unrelated to this post, the recent episode on quitting was another great one).  Some of what they bring up in the predictions episode is covered in much greater detail elsewhere, including a couple of books I listened to earlier this year.  The predictions episode also briefly discusses prediction markets, which seem much more promising than traditional pundit-centric, pontification-style prediction.

Note: I listened to both as audiobooks.  Audible is not perfect, but for the commuter and frequent traveler audiobooks are great.  (I’ve also heard audiobooks are great for people who “exercise”, but people who do things like that clearly have too much to live for and are just punishing themselves for it).

The first book I listened to was The Drunkard’s Walk by Leonard Mlodinow.  Here’s an excerpt from Stephen Hawking's Amazon Review of The Drunkard's Walk:

In The Drunkard’s Walk Leonard Mlodinow provides readers with a wonderfully readable guide to how the mathematical laws of randomness affect our lives. With insight he shows how the hallmarks of chance are apparent in the course of events all around us.

The Drunkard’s Walk covers a variety of probability topics, from the significance of randomness to some history of the study of probability, and uses many illustrative anecdotes (including a look at the Monty Hall problem and others where “common sense” appears to let us down).
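Since the Monty Hall problem is the classic case of “common sense” leading us astray, here is a quick, illustrative Python simulation (mine, not from the book); switching doors wins about two times out of three:

```python
import random

# Illustrative Monty Hall simulation: the host always opens a losing door
# the player did not pick, then the player either stays or switches.
def play(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials))
switch_wins = sum(play(switch=True) for _ in range(trials))
print(f"stay:   {stay_wins / trials:.3f} win rate (about 1/3)")
print(f"switch: {switch_wins / trials:.3f} win rate (about 2/3)")
```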

The second book was Future Babble by Dan Gardner.  From the author’s site:

Future Babble, a critical look at expert predictions and the psychology that explains why people believe them even though they consistently fail.

Future Babble is focused on prediction, but as random events and probabilities are challenges to prediction this book does have some content which overlaps with The Drunkard’s Walk.

Both books are overly negative at times, and thoroughly dismissive of many “experts”, but together they make a compelling case for a healthy dose of skepticism.  These works highlight the biases and fallacies which lead us into making or accepting seemingly “logical” but wrong predictions; being aware of these biases and fallacies can help us identify and avoid them.

One of the recurring lessons of all of these works is that the more confident and adamant someone is about their predictions, the less likely they are to be correct, and the more likely they are to deny it when they have been proven wrong.  A lot of this goes back to Philip Tetlock’s work, including Expert Political Judgment, a skewering of political pundits’ ability to predict much of anything.  Tetlock often speaks of “hedgehogs and foxes”, a reference to the phrase:


The fox knows many things, but the hedgehog knows one big thing


from the ancient Greek poet Archilochus.  The hedgehogs are those with an ideology or a single big idea; they hold onto that idea and rationalize around it.  Hedgehogs tend to use absolute words and are very confident in their predictions- hide from these people (television, especially cable news, and talk radio are full of them).  Foxes, by comparison, see much more variability in the world and are prone to use what we often derisively call “weasel words” such as “probably” or “likely”.  Foxes are also much more likely to admit they were wrong when history proves their predictions in error.

I am not saying that nothing can be predicted, and I’m not tossing stones at my risk and metrics friends- I am just suggesting that we pay attention to the realities of the world.  And the reality is that random events happen and have a large impact on our lives, and that some things which appear random are not.  And that means predictions are often hard, if not impossible.

I’ll leave you with a final quote, this one from the great philosopher Yogi Berra:

“It’s tough to make predictions, especially about the future.”

 

Jack