Friday, April 20, 2012

Wait, what? Someone has to look at those logs?

Anton Chuvakin has a good post over on the Gartner blog about security monitoring and cloud systems.  Depending on where you are on the cloud adoption path, you may find his comments thought-provoking or simply obvious.  I agree with the good Dr. Chuvakin, but my recent conversations with people trying to come to grips with monitoring and log analysis have given me some contradictory insights.

Anton is correct in his mapping of visibility and coverage, and in his observations on the perspective of CSP-MSSPs (Cloud Service Provider – Managed Security Service Provider), but there is one point I have heard loudly from some people- that in spite of some MSSPs’ theoretical threat intelligence and perspective advantages, they simply do not understand the businesses they serve well enough to deliver value that justifies their expense.

In my recent peer-to-peer session on What Works in Log Analysis at the RSA Conference, some participants were struggling to pull log management and analysis back in-house after outsourcing it.  Their problem was that the MSSPs never lived up to the promise of economies of scale and advanced insight into traffic anomalies, possibly due to shortcomings on the part of the MSSPs, and possibly because the advantages of scale and “big picture” view were offset by a lack of focus on the specific circumstances of the customer.  As with many other issues in business, you (hopefully) know your situation better than anyone else.  I’m not saying that you can’t outsource SIEM, log management/analysis, or anything else for that matter- I’m just saying you need to understand the trade-offs and make sure you monitor the MSSP until you are satisfied- and then keep monitoring them.  Any effort you duplicate in monitoring the performance of your CSP-MSSP or MSSP is cheap insurance- the last thing you want to face is a surprise failure of your monitoring service and the sudden need to rebuild an in-house monitoring program.  You thought getting all that data pushed out to the MSSP was a pain- just imagine trying to get it back.


Jack

Tuesday, April 10, 2012

Who put all that travel on my calendar?

I did it to myself if I’m honest.  I will grumble about airlines, the TSA, hotels, cabs, etc.- but the great thing is that I get to see old friends, meet folks, and have some engaging (and inane) conversations.  Some of my upcoming adventures are below- if you’ll be at these events or in the general area either find me and say hello, or hide from me, as you feel appropriate.

I’ll be at BSides Austin later this week, participating in a cloud computing panel and later giving an update on the stress and burnout research.  And joining in Hackers on a Duck III.

Next week I will be helping at SOURCE Boston and MassHackers BeaCon (both in Boston), followed by a trip to London for Infosecurity Europe where I’ll be working the Tenable booth (and hopefully sneaking over to BSides London).

After just enough time to do some laundry, I’ll be at NAISG Securanoia in Boston, helping with the event and speaking on the state of information (in)security, then off to InterOP in Las Vegas where I’ll join the panel “So you want to be a Tech Influencer”.  Next stop will be BSidesROC, in Rochester, NY, and then maybe home before heading out again to Las Vegas and who knows where else.  Travel arrangements per that old Johnny Cash song.

I’m not hard to spot; subtlety is not one of my strong suits- find me and chat.


Jack

Sunday, April 1, 2012

Filling in some blanks

My last post had some incomplete thoughts (this is not unusual), and I decided to address some of them (this is unusual).

I mentioned that segmenting your network was advantageous for a variety of scanning and monitoring reasons, but I didn’t elaborate; let me do that now.

There are some great systems for data correlation which can tell you significant things- for example, whether that IDS alert was for traffic targeting a host vulnerable to the specific attack detected.  Unfortunately, we don’t all have the resources to deploy such systems, or the time to tune them.  If, however, you have an effectively segmented network and see an IDS alert for an attack against Internet Explorer in a segment with only Linux servers, you can relax.  On the other hand, if you see alerts for an event targeting a Windows bug you have yet to patch, and it is inbound to your Windows segment- it is time to crank up the caffeine and get busy.  You get the idea.  And it extends beyond IDS; even simple network stats can become informative.  Anomalous traffic is much easier to spot in a segmented network- a sudden increase in inbound traffic to a workstation segment, outbound requests from web servers, or SMTP where it doesn’t belong are just a few examples.  You can certainly sort this out with a little analysis, but in a well-segmented network you can reduce the amount of thought required to make “react or relax” decisions.
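
To make the “react or relax” idea concrete, here is a rough sketch of the kind of triage that segment knowledge makes cheap.  It is only an illustration- the segment ranges, OS labels, and alert fields below are all made up, and a real shop would pull them from its asset inventory and IDS rather than hard-coding them.

# Sketch: use knowledge of network segments to triage IDS alerts.
# Segment ranges, OS labels, and the alert format are invented for illustration.
import ipaddress

# Which platforms live in which segment.
SEGMENTS = {
    ipaddress.ip_network("10.1.0.0/24"): {"linux"},           # Linux server segment
    ipaddress.ip_network("10.2.0.0/24"): {"windows"},         # Windows server segment
    ipaddress.ip_network("10.3.0.0/23"): {"windows", "mac"},  # workstation segment
}

def platforms_for(ip):
    """Return the set of platforms in the segment containing this IP."""
    addr = ipaddress.ip_address(ip)
    for net, platforms in SEGMENTS.items():
        if addr in net:
            return platforms
    return set()  # unknown segment- treat as "could be anything"

def triage(alert):
    """Decide whether an alert deserves attention based on the target segment."""
    targets = platforms_for(alert["dest_ip"])
    if targets and alert["affected_platform"] not in targets:
        return "relax"   # e.g. an Internet Explorer exploit aimed at the Linux segment
    return "react"       # the target segment matches (or is unknown)- investigate

print(triage({"dest_ip": "10.1.0.42", "affected_platform": "windows"}))  # relax
print(triage({"dest_ip": "10.2.0.17", "affected_platform": "windows"}))  # react

None of this replaces real correlation; it just encodes the “is anything in that segment even vulnerable?” question so a human doesn’t have to answer it for every alert.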

Some of the other reasons I mentioned are more obvious: keeping traffic in local segments where possible to minimize network noise, and protecting systems from having Something Bad™ rip through the network unhindered.  A couple of thoughts on the segmentation-for-security concept are worth elaborating on.  Grouping by OS makes sense from a management perspective, but it won’t stop the aforementioned Bad Things™ from running wild, so consider how best to segment for your situation and needs.  It may be that the security disadvantages of putting all similar digital eggs in one basket are offset by the administrative advantages.  Knowing you can scan, patch, and monitor quickly and accurately may be a stronger defense than splitting up your Windows environment.  On the other hand, if it takes a long time to get patches deployed, the added separation may buy you time when bad things happen before patches or mitigations are in place.  If you do segment for security, you need to put meaningful rules in place to restrict the traffic or you are just adding latency and complexity without adding security.  I would like to tell you that deciding what traffic to allow will be easy, but it probably won’t be.  Note that I said “traffic to allow”- that is because a default block rule is needed internally, not just for inbound and outbound traffic to the wider internet.  You may need to temporarily allow all traffic internally, analyze what ports and protocols traverse the links, and then build rules based on the existing traffic.  This is not ideal, as you could end up “grandfathering” bad behavior into allowed traffic, but it is a starting point- as you implement the filtering rules, make sure they make sense.  As always, understanding your environment is critical to doing this properly.
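
And for the “watch what actually traverses the links, then write the rules” step, even a crude tally of flow records goes a long way.  The sketch below assumes you can export internal flows to CSV with src_ip, dst_ip, proto, and dst_port columns- the file name, column names, segment ranges, and threshold are placeholders for illustration, not any particular product’s format.

# Sketch: summarize observed internal flows into candidate allow rules.
# Assumes a CSV export of flow records with columns: src_ip,dst_ip,proto,dst_port
# (the column names, segments, and threshold are invented for illustration).
import csv
import ipaddress
from collections import Counter

SEGMENTS = {
    "linux-servers":   ipaddress.ip_network("10.1.0.0/24"),
    "windows-servers": ipaddress.ip_network("10.2.0.0/24"),
    "workstations":    ipaddress.ip_network("10.3.0.0/23"),
}

def segment_of(ip):
    """Name the segment an address belongs to, or "other" if it is unknown."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return "other"

def candidate_rules(flow_csv, min_flows=10):
    """Tally (source segment, destination segment, protocol, port) and print
    a candidate allow rule for anything seen often enough to look legitimate."""
    seen = Counter()
    with open(flow_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            key = (segment_of(row["src_ip"]), segment_of(row["dst_ip"]),
                   row["proto"], row["dst_port"])
            seen[key] += 1
    for (src, dst, proto, port), count in seen.most_common():
        if count >= min_flows:
            print(f"allow {src} -> {dst} {proto}/{port}  ({count} flows observed)")

# candidate_rules("internal_flows.csv")

The output is a starting list to argue about, not a ruleset to deploy- every candidate still needs a human to ask “should this traffic exist at all?” before it gets grandfathered into an allow rule.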

Still not a complete story, but hopefully this has filled in a few holes in my last post and given a bit more insight into how and why to implement or extend segmentation.


Jack