Network segmentation faults, that is, not those pesky software problems. Penetration testers and others often say network segmentation doesn’t stop attackers, and that at best segmentation only slows them slightly. Systems and network admins often complain of needlessly complicated routing and access rules, latency, and other problems.
What these people say is largely true and also largely wrong: they are doing it wrong, and for the wrong reasons.
Network segmentation does not mean simply adding a hop between network segments to confuse and exhaust the poor little packets, and it is not just a tool for restricting traffic and controlling access.
Obviously, restricting traffic and isolating access in logical network divisions by function, type, criticality, sensitivity, or other criteria relevant to your environment is a sound reason to think about segmentation, but that is only the beginning.
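To make the restrict-and-isolate idea concrete, here is a minimal sketch using nftables on a Linux box doing the inter-segment routing. Everything specific here is an assumption for illustration: the table name, the segment CIDRs (a web tier at 10.0.10.0/24 and a DB segment at 10.0.20.0/24), and the single PostgreSQL hole. Your gear and syntax will differ.

```shell
# Hypothetical sketch: default-deny forwarding between segments, with one
# explicit hole so the web tier can reach PostgreSQL in the DB segment.
# Segment CIDRs and table name are made up for illustration.
nft add table inet segfw
nft add chain inet segfw forward '{ type filter hook forward priority 0; policy drop; }'

# Let established/related return traffic flow both ways.
nft add rule inet segfw forward ct state established,related accept

# Web segment may open PostgreSQL connections into the DB segment; nothing else crosses.
nft add rule inet segfw forward ip saddr 10.0.10.0/24 ip daddr 10.0.20.0/24 tcp dport 5432 accept
```

The point of the default-deny forward policy is that every allowed inter-segment flow has to be stated explicitly, which is exactly the documentation-of-intent that admins complain about but auditors love.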
VLAN if you must, but I like physical segregation where possible, especially for the highest-traffic and most sensitive segments. I prefer a firewall with a lot of real ports, not one of those crappy things where most of the ports are just switch ports for the LAN. Just make sure whatever gear you use can fling packets without adding noticeable latency.
Thankfully, broadcast storms are largely a thing of the past, but isolation can still help in diagnosing network oddities. Not pretty or sophisticated, but sometimes disconnecting segments is the fastest way to find problems. I can unplug a lot of patch cables (or power cords) in the time it takes to log in and poke around in most network gear (where’s the damned CAM table in this version of $EXPLETIVE?). Also, the switch/router/firewall interfaces are great places for packet captures when you are having one of those “the packets hate me” days. You know, the ones where you go digging for the old taps and suck traffic right off the wire (or fiber).
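On those days, the boundary device is the natural capture point, since everything crossing the segment has to pass it. A quick sketch, where the interface name and the segment CIDR are assumptions, not anything from a real config:

```shell
# Capture everything crossing the (hypothetical) DB-segment interface into a
# pcap for later analysis, ignoring our own SSH session so we don't capture
# ourselves watching the capture. Interface name and CIDR are assumptions.
tcpdump -i eth2 -w dbseg.pcap 'net 10.0.20.0/24 and not port 22'
```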
Why else should you segment? Network and systems management can be enhanced by segmentation and isolation, as can performance: patch and systems management servers, departmental servers, printers, and more can be placed in the most advantageous segment of the network. For systems which can’t be in the target segment, traffic can be restricted and directed to limit noise on the wire (or fiber, or ether, whatever).
And finally, near and dear to me lately, we have scanning and monitoring. All your Apache servers in one segment? Great: patch or vulnerability scans can cover that segment regularly with minimal stray results if the scans have the relevant tuning. The great unwashed of Windows workstations? Hammer those with scans looking for unpatched RDP or whatever the Next Big Bug is, without annoying the PostgreSQL servers over there in the DB segment. It goes without saying that you put scanners in each segment to minimize network noise. And not just active scanners: passive scanners, network analyzers, netflow sensors, IDS sensors, full packet capture systems, and more can benefit from segmentation and isolation of traffic.
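Keeping a scan inside its segment mostly comes down to scoping the target list by the segment's CIDR before the scanner ever fires a packet. A small sketch of that idea using Python's standard ipaddress module; the segment names and CIDRs here are invented for illustration, not from any real plan:

```python
import ipaddress

# Hypothetical segment plan; names and CIDRs are assumptions for illustration.
SEGMENTS = {
    "web": ipaddress.ip_network("10.0.10.0/24"),
    "db": ipaddress.ip_network("10.0.20.0/24"),
    "workstations": ipaddress.ip_network("10.0.30.0/23"),
}

def segment_of(host):
    """Return the name of the segment whose CIDR contains this host, or None."""
    addr = ipaddress.ip_address(host)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def scan_targets(hosts, segment):
    """Filter a host inventory down to one segment so a scan stays in-bounds."""
    return [h for h in hosts if segment_of(h) == segment]

hosts = ["10.0.10.5", "10.0.20.7", "10.0.30.44", "192.168.1.9"]
print(scan_targets(hosts, "web"))  # → ['10.0.10.5']
```

Feed the filtered list to whatever scanner you use and the DB segment never hears a peep about the Next Big RDP Bug.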
This even applies to virtual segmentation. Well, some of it does, and there are some virtual equivalents for some things.