Today I present a guest post, written by my friend Jeffrey Man. It is a very well-thought-out piece on Target, PCI, and surrounding issues.
There has been much discussion online and in the media as to whether or not Target was compliant with PCI DSS at the time of their breach. Details of the compromise are still not completely known, but some new details have been released that, while not definitive, are starting to give us at least an idea of the path the attackers took to gain access to Target’s network, the cardholder data environment, and ultimately the POS systems where malware was installed to capture transaction data and exfiltrate it to the attackers.
I’ve been debating with several colleagues how best to approach a discussion of whether or not Target was compliant at the time of the breach. We are all seeking an informed and objective way of discussing this issue from several vantage points, basically trying to determine the points of failure (if any) and which specific PCI DSS requirements were implicated in the compromise.
Ira Winkler published an article for Computerworld yesterday in which he discusses “6 failures that led to Target hack”. Ira very astutely points out that there really wasn’t a single failure that led to the Target breach but rather a series of systematic failures that allowed the compromise of millions of credit/debit cards and other customer personal information. I’ve been involved with numerous companies over the years that were attempting to recover from a breach or compromise, and Ira’s words rang true: there is almost never a single point of failure but rather a series of actions (and inactions) that leads to the event.
I also thought that the six failure points that Ira discusses would be a great springboard for an objective discussion of whether the PCI DSS controls applied, were implemented, or were not being followed by Target. Let me start by summarizing the 6 APPARENT failure points that Ira pointed out in his article:
1. Lack of or improperly implemented segmentation controls to isolate the cardholder data environment (CDE);
2. Lack of or improperly deployed IDS/IPS solutions;
3. Failure to detect compromise of internal software distribution system or failure to detect changes/modification of the software being distributed internally (really two failures, IMO);
4. Lack of whitelisting solution to protect the POS systems;
5. Lack of detection of the compromise of systems commandeered to enable the collection of the transaction data and subsequent exfiltration; and
6. Lack of detection of the exfiltration of the data itself.
My intention is to foster a discussion about these failures as they pertain to the PCI DSS controls specifically and how they are interpreted and applied for the typical large merchant. I have had numerous retail customers over the years, some recovering from a breach, some trying to prevent one, and all trying to comply with PCI DSS without spending too much time, money, and resources. The failures discussed highlight the difficulty of implementing adequate security controls in a typical retail environment, as well as the complexity of consistently interpreting and applying the PCI DSS controls.
I’ll get the ball rolling with some initial thoughts:
1. Network segmentation is not a PCI DSS requirement, but it is a highly recommended means of limiting a QSA's scope for validation (and in practice it often defines the set of systems to which our clients apply the PCI DSS controls). Evaluating adequate segmentation is highly subjective, so it is debatable whether Target failed to adequately segment their CDE and whether their QSA approved that segmentation. Frankly, if this proves to be the actual path of compromise, I think it will serve as the death knell for segmentation and scope limitation altogether (or it should). The lesson learned should be to apply the PCI DSS framework across the enterprise. Period. No exceptions. (A minimal segmentation spot-check sketch appears after this list.)
2. IDS placement is also debatable, as the standard requires placement at the perimeter of the CDE and at “strategic points” within it. It's likely that the attackers circumvented the perimeter by finding what was effectively a backdoor/trusted ingress path via the HVAC/Ariba system. This could be a simple case of putting alarms on the “front door” and leaving the back door wide open.
3. This one is a little tougher to defend. On the one hand, these systems clearly should have been considered in scope for PCI and thus should have been in the CDE. But because they perform a supporting function rather than actual transaction processing, I could certainly understand if the focus was more on the controls associated with Requirement 6 as it pertains to change management, software development, testing, and so forth, and not so much on the hardening, logging, and monitoring controls put forth in other sections of the PCI DSS. (A simple package-integrity check of the kind that might have caught tampering is sketched after this list.)
4. While whitelisting solutions for POS systems are fairly common, they are not technically required. The requirements for these systems are to have anti-virus/anti-malware solutions installed, receiving automatic updates, and periodically scanning the system; to have FIM installed, reporting, and alerting; and to receive critical patches within 30 days of release. I mention these three categories (AV, FIM, patching) because they are the ones that many of my retail clients try to address through compensating controls, primarily using a whitelisting solution as an alternative. The use of a compensating control is allowed for technical limitations; in this case the limitation was the difficulty of successfully administering large numbers of geographically dispersed systems (many of which were not routinely online) in a timely manner according to the specific PCI DSS requirements. Presumably Target either had the primary controls in place, had a compensating control alternative such as a whitelisting solution, or had neither. IF they did, the discussion should focus on whether the control actually worked, and I would point out that as a QSA I was not supposed to judge whether a solution actually performed as advertised, only that it advertised meeting the goals of a particular requirement. (A toy whitelisting sketch follows this list.)
5. The commandeering of these systems should have been detected, so this should be an easy one to call non-compliant at the time of the breach. The only rebuttal might be the logical location of these systems (outside the CDE) and whether they were being maintained and monitored according to PCI DSS requirements. But then the failure was the lack of detection of the transfer of the data... oh wait, that’s the next failure point.
6. I can’t get past this one. You have to assume that the CHD started out inside the CDE and was somehow exfiltrated out of the CDE and ultimately out of the enterprise. That should have been disallowed by outbound firewall rules, so either the attackers used trusted (existing) outbound ports/services/protocols, the rules that would have prevented the egress were non-existent, or the attackers compromised the firewalls and added their own rules. My initial thought was that they likely used existing rules to get the data out, but then there’s the matter of the destination. PCI DSS is supposed to prohibit the use of “any” destination rules, so maybe the attackers did have to compromise the firewall and at least add an IP or two to an existing outbound server group? I want to give the benefit of the doubt here, but properly implemented PCI DSS controls should have prevented, or at least alerted on, this egress. (A sketch of a ruleset review that flags “any” egress rules follows this list.)
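On point 1: segmentation only limits scope if it actually holds up under testing. Below is a minimal sketch of the kind of spot-check a merchant or assessor might run from a host outside the CDE to confirm that sampled CDE hosts and ports are unreachable. The addresses and ports are placeholders (nothing here reflects Target's actual environment), and real segmentation testing covers far more paths and protocols than this.

```python
#!/usr/bin/env python3
"""Minimal segmentation spot-check: run from a host OUTSIDE the CDE and
confirm that sampled CDE hosts/ports are unreachable. All hosts and ports
below are hypothetical placeholders."""

import socket

# Hypothetical CDE targets; replace with sampled in-scope hosts/services.
CDE_TARGETS = [
    ("10.10.1.20", 443),   # e.g., a payment application server
    ("10.10.1.21", 1433),  # e.g., a cardholder data store
    ("10.10.2.50", 22),    # e.g., a POS management host
]

TIMEOUT_SECONDS = 3

def is_reachable(host: str, port: int) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    failures = 0
    for host, port in CDE_TARGETS:
        if is_reachable(host, port):
            failures += 1
            print(f"FAIL: {host}:{port} is reachable from outside the CDE")
        else:
            print(f"OK:   {host}:{port} blocked")
    if failures:
        print(f"{failures} path(s) into the CDE were not blocked; "
              "segmentation cannot be relied on to limit scope.")
```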
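On point 3: one of the simplest controls that can catch tampering with an internal software distribution system is verifying each package against a known-good hash manifest before it is pushed to the POS estate. The sketch below only illustrates the idea; the manifest path and package layout are hypothetical, and a production process would also sign the manifest itself so the attacker cannot simply regenerate it.

```python
#!/usr/bin/env python3
"""Sketch of an integrity check for internally distributed software:
compare each package against a known-good SHA-256 manifest before release.
The manifest path and package directory are hypothetical."""

import hashlib
from pathlib import Path

MANIFEST = Path("release/manifest.sha256")   # lines of "<sha256> <filename>"
PACKAGE_DIR = Path("release/packages")

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large packages do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify() -> bool:
    ok = True
    for line in MANIFEST.read_text().splitlines():
        expected, name = line.split(maxsplit=1)
        actual = sha256_of(PACKAGE_DIR / name)
        if actual != expected:
            ok = False
            print(f"MODIFIED: {name} (expected {expected[:12]}..., got {actual[:12]}...)")
    return ok

if __name__ == "__main__":
    if verify():
        print("All packages match the approved manifest.")
    else:
        print("Do not distribute: one or more packages differ from the approved build.")
```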
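On point 4: the sketch below is only a toy illustration of what a whitelisting control does conceptually (refuse to run anything whose hash is not on an approved list). Commercial whitelisting products enforce this at the kernel or driver level rather than in a wrapper script, and the file paths and whitelist format here are invented for illustration.

```python
#!/usr/bin/env python3
"""Toy illustration of application whitelisting on a POS host: launch an
executable only if its SHA-256 hash appears in an approved list. Paths and
the whitelist format are hypothetical."""

import hashlib
import subprocess
import sys
from pathlib import Path

# Hypothetical approved-hash list, one hex digest per line.
WHITELIST_FILE = Path("/etc/pos/approved_hashes.txt")

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> int:
    if len(sys.argv) < 2:
        print("usage: allow_run.py <executable> [args...]")
        return 2
    target = Path(sys.argv[1])
    approved = set(WHITELIST_FILE.read_text().split())
    file_hash = sha256_of(target)
    if file_hash not in approved:
        print(f"BLOCKED: {target} ({file_hash[:12]}...) is not on the whitelist")
        return 1
    print(f"ALLOWED: {target}")
    return subprocess.call([str(target), *sys.argv[2:]])

if __name__ == "__main__":
    sys.exit(main())
```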
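On point 6: a quick way to check whether egress from the CDE respects the intent of Requirement 1 is to review the firewall rule export for outbound “allow” rules whose destination is “any”. The sketch below assumes a hypothetical CSV export format; the parsing would need to be adapted to whatever a given firewall actually produces.

```python
#!/usr/bin/env python3
"""Sketch of a ruleset review that flags overly permissive egress: any
outbound "allow" rule with destination "any" (or 0.0.0.0/0) defeats the
intent of restricting outbound traffic from the CDE. The export format
below is hypothetical."""

import csv
from pathlib import Path

# Hypothetical export: columns name,direction,src,dst,port,action
RULE_EXPORT = Path("cde_firewall_rules.csv")

def flag_permissive_egress(path: Path):
    """Return outbound allow rules whose destination is unrestricted."""
    flagged = []
    with path.open(newline="") as f:
        for rule in csv.DictReader(f):
            if rule["direction"].lower() != "outbound":
                continue
            if rule["action"].lower() != "allow":
                continue
            if rule["dst"].strip().lower() in {"any", "0.0.0.0/0"}:
                flagged.append(rule)
    return flagged

if __name__ == "__main__":
    for rule in flag_permissive_egress(RULE_EXPORT):
        print(f"REVIEW: outbound rule '{rule['name']}' allows traffic to ANY "
              f"destination on port {rule['port']}")
```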
That is my current thinking based on these failure points. What do you think? Feel free to agree or disagree, but by all means you are welcome to contribute to the discussion.
Jeffrey Man