My last post had some incomplete thoughts (this is not unusual), and I decided to address some of them (this is unusual).
I mentioned that segmenting your network was advantageous for a variety of scanning and monitoring reasons, but I didn’t elaborate; let me do that now.
There are some great systems for data correlation which can tell you significant things: for example, whether that IDS alert was for traffic targeting a host vulnerable to the specific attack detected. Unfortunately, we don’t all have the resources for such systems, or the time to tune them. If, however, you have an effectively segmented network and see an IDS alert for an attack against Internet Explorer in a segment with only Linux servers, you can relax. On the other hand, if you see alerts for an event targeting a Windows bug you have yet to patch, and it is inbound to your Windows segment, it is time to crank up the caffeine and get busy. You get the idea. This extends beyond IDS; even simple network stats become informative, because anomalous traffic is much easier to spot in a segmented network. A sudden increase in inbound traffic to a workstation segment, outbound requests from web servers, or SMTP where it doesn’t belong are just a few examples. You can certainly sort this out with a little analysis, but in a well-segmented network you can reduce the amount of thought required to make “react or relax” decisions.
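The “react or relax” decision above can be sketched as a simple lookup: if you know which platforms live in each segment, an alert for a platform that isn’t there can be deprioritized. This is only a sketch; the segment names, platform labels, and the triage policy are all hypothetical examples, not a real IDS integration.

```python
# Sketch of segment-aware alert triage, assuming you maintain a mapping
# of segments to the platforms they contain (hypothetical inventory).
SEGMENT_PLATFORMS = {
    "10.1.0.0/24": {"linux"},      # Linux-only server segment
    "10.2.0.0/24": {"windows"},    # Windows workstation segment
}

def triage(target_segment: str, signature_platform: str) -> str:
    """Return 'react' if the alert targets a segment that actually runs
    the affected platform, otherwise 'relax'."""
    platforms = SEGMENT_PLATFORMS.get(target_segment, set())
    return "react" if signature_platform in platforms else "relax"

# An Internet Explorer exploit aimed at the Linux-only segment: relax.
print(triage("10.1.0.0/24", "windows"))  # relax
# The same exploit inbound to the Windows segment: get busy.
print(triage("10.2.0.0/24", "windows"))  # react
```

A real deployment would feed this from an asset inventory and the IDS signature metadata, but the point is the same: clean segmentation makes the lookup trivial.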
Some of the other reasons I mentioned are more obvious: keeping traffic in local segments where possible to minimize network noise, and protecting systems from having Something Bad™ rip through the network unhindered. A couple of thoughts on the segmentation-for-security concept are worth elaboration. Grouping by OS makes sense from a management perspective, but on its own it won’t stop the aforementioned Bad Things™ from running wild, so consider how best to segment for your situation and needs. It may be that the security disadvantages of putting all similar digital eggs in one basket are offset by the administrative advantages; knowing you can scan, patch, and monitor quickly and accurately may be a stronger defense than splitting up your Windows environment. On the other hand, if it takes a long time to get patches deployed, the added separation may buy you time when bad things happen before patches or mitigations are in place.

If you do segment for security, you need to put meaningful rules in place to restrict the traffic, or you are just adding latency and complexity without adding security. I would like to tell you that deciding what traffic to allow will be easy, but it probably won’t be. Note that I said “traffic to allow”: a default block rule is needed internally as well as for inbound and outbound traffic to the wider Internet. You may need to temporarily allow all traffic internally and analyze what ports and protocols actually traverse the links, then build rules based on that existing traffic. This is not ideal, since you could end up allowing inappropriate traffic by “grandfathering” bad behavior, but it is a starting point; as you implement the filtering rules, make sure they make sense. As always, understanding your environment is critical to doing this properly.
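The “observe first, then write allow rules” step can be as simple as tallying which ports and protocols actually cross each segment boundary, so every candidate rule gets reviewed before it is allowed. The flow records and field names below are hypothetical examples; in practice you would feed this from firewall or flow logs.

```python
# Tally observed cross-segment traffic to draft candidate allow rules.
from collections import Counter

# (src_segment, dst_segment, protocol, dst_port) as seen on the links
# during the temporary allow-all observation window (made-up sample data).
observed_flows = [
    ("workstations", "servers", "tcp", 443),
    ("workstations", "servers", "tcp", 443),
    ("workstations", "servers", "tcp", 445),
    ("servers", "workstations", "tcp", 25),  # SMTP where it doesn't belong?
]

counts = Counter(observed_flows)

# Review candidates by frequency; rare flows deserve extra scrutiny
# before they are grandfathered into an allow rule.
for (src, dst, proto, port), n in counts.most_common():
    print(f"candidate: allow {src} -> {dst} {proto}/{port}  (seen {n}x)")
```

The low-frequency SMTP flow in the sample is exactly the kind of thing that should prompt a question rather than an allow rule, which is the “make sure they make sense” step.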
Still not a complete story, but hopefully this has filled in a few holes in my last post and given a bit more insight into how and why to implement or extend segmentation.