Monday, December 29, 2014

“Is your computer working?”

As promised, that other hospital tech incident.  I was leaving a friend’s room right after the nursing shift changed and the new nurses were beginning their rounds.  As I was preparing to leave I heard the nurse outside my friend’s room call down the hall “Is your computer working?”.  I paused in saying my goodbyes and we listened to the nurse muttering and typing ever louder on the mobile cart keyboard.  Not good.  Especially since that computer stood between my friend, and every other patient, and medications.  The nurse popped in, said they were having computer issues, and that she was going to pull his medications manually- the delay would only be a few more minutes.  And true to her word, his meds arrived only about 20 minutes late thanks to a manual backup routine for checking out medications.

As I left I saw that two of the cart computers were displaying “unable to authenticate” errors.  I don’t know what the problem was, and my friend never found out.  I guess he was too busy being seriously ill to diagnose authentication failures.

Not bad, eh?  There was a system failure, but backup procedures were in place to prevent serious problems.  High fives for all?

Not so fast.  That 20 minute delay doesn’t seem significant, unless of course you were the one waiting for medication.  Most critical meds would be administered intravenously so… wait, those are behind the same system.  But still, only a 20 minute delay… except the process had to be repeated for each patient until the error was resolved, and the manual paper records had to be transferred into the computers once they were restored- so at the end of their shift the nurses were further distracted from patient care to do data entry.

I’m not repeating these medical computer issues to throw stones at the medical profession, or at technologists working in healthcare- but to illustrate some fundamental issues with technology and security.

In the first tales of poor communication, there seemed to be a few symptoms and causes, but one crucial result.  Data input was inconsistent and maybe not as easy for medical professionals to use as it could have been.  The two were probably related: since there often wasn’t timely info available in the computer system, people relied on it less, and thus input data less frequently- a classic “chicken and egg” situation.  The critical end result was delayed patient information, but there was also the sadly familiar case of a system becoming a burden (and possibly even a liability) when it should have been an asset.  Usability, user buy-in, and management oversight all needed to improve to move this forward.  I’m sure that sounds familiar, although hopefully in different contexts.

Today’s tale is a bit different, it is about a failure to understand the consequences of operating on backup procedures.  “We have a plan for when things go wrong” is great and all, but if it doesn’t let people do their jobs in a reasonable manner without undue consequences your fail-safe is a failure.  Granted, these are extreme conditions; delayed email is not the same as delayed patient care, but there are still lessons to learn.

Oh, and you’ll note I didn’t mention compliance, that wasn’t an oversight.  I’m not an expert on healthcare compliance (unlike many who pontificate on it but can’t spell HIPAA) and I don’t want to blindly speculate on things like what perversions to pain management are imposed by the “war on drugs” and what that means for procedures for dispensing controlled substances.  If potential impact on patient care doesn’t get you thinking, I hope you aren’t working in healthcare.


Tuesday, December 16, 2014

About that Herbie Hancock book

The first Hancock story I mentioned last week is the opening story in his new book.  He tells the story better than I do.

I’m not far into the audiobook, but I wanted to hear a bit of it the other day between chapters of Kim Zetter’s new(ish) book on Stuxnet.  That one is good, too- Zetter balances making the story approachable to non-techies with detail enough to keep those with some knowledge of the events engaged.  Unfortunately, the audiobook version means I don’t have access to the extensive footnotes unless I buy a print copy, too- but I spend enough time on the road that the audiobook was the fastest way I would get to digest the book.

A note on the audio of these two books- the reader of Zetter’s “Countdown to Zero Day” speaks slowly and clearly, so slowly that I find the book much more listenable at 1.5x speed.  Herbie Hancock reads his own book and tells his own stories, his delivery is, not surprisingly, fantastic.

Yeah, I still owe you that other hospital story.  Remember, patience is a virtue.  It is not one of mine, but that’s another story.


Computers are efficient. And other lies.

Sometimes stuff gets put into perspective.  With force.

I was recently reminded of a few things which happened several months ago while I was visiting friends in hospitals (this happens more and more as you get old- or they are visiting you).

All events occurred at large, modern facilities- the kind with computers in every patient room plus roving computer carts, and all the patient info readily available to authorized personnel.  Of course, by “all” I mean “all information which has already been entered into the right systems”, which leads to my first observation.

Hanging out with my friend for an afternoon I got to overhear some of his conversations and frustrations with the medical staff.  It was a busy afternoon for him, no sooner had one team of specialists left him than another would wander in.  Each team came in with a handful of patient files, and checked up on him in the computer when they were talking to him.  And he invariably had to fill them in on some test result or comment from other specialists about his challenging situation- it was common enough that he kept a journal to make sure he could pass the latest info on to his caregivers.  Remember, computers everywhere, in rooms, staff stations, and mobile carts.  Oh, and paper files in a binder outside each patient’s room.  And that wasn’t enough to get info shared in a timely manner.  The computer systems were apparently less than efficient, so data input was tedious- thus forcing the reliance on paper, further slowing the timely input of data.  Somewhere the technology became a burden instead of an aid, and that compounded aggravation for the people who relied on the systems to do their jobs.  We’ve all seen poorly implemented technology like this, but seeing it in a hospital where a patient, your buddy, has to keep notes to make sure he bridges failures in communication with medical staff, that’s pretty terrifying.

Just as this was sinking in, one of the aides came in and took his vital signs- and scribbled them down on a scrap of paper to input somewhere else later.  This was not an anomaly, my buddy assured me that happened every time his vitals were taken throughout his stay, expensive machines display numbers, aides scribble them on scraps of paper for later input.  Damn, that’s the way to share important information in a timely manner.  And efficient, too.


That afternoon I wandered down to the waiting room a few times as doctors were examining him.  One time I overheard an interesting conversation: there was a pretty ugly technical problem and the person looking into it was the kind of network admin I want working in healthcare.  I’m sure he thought the waiting room was empty as he used the phone in the hall, so I got to overhear a pretty candid exchange.  He was investigating a connectivity problem with the wireless telemetry system, the system which monitors patients and reports vitals and more to staff throughout the floor.  Wireless telemetry systems are generally used for patients who need continuous monitoring but are somewhat mobile, such as post-operative recovery and patients with self-administered pain management.  The telemetry wireless was down and patient data wasn’t filling the screens in the halls and nurses’ stations, and that threatens patient care.  The admin was polite, and chose his words carefully, but he was obviously livid.  It was clear he was a network guy, not a medical professional, but his primary concern was patient care (as you would hope in a hospital).  It sounded like poorly planned maintenance, carried out without proper procedures, had caused the outage.  Another pretty scary scenario given the systems affected.  As appalled as I was that this happened, I was impressed with the admin’s focused outrage.  “Not only can’t this be happening now, it can’t ever have happened, and it can’t ever happen again” was one comment he made over the phone, a line I’m not likely to forget soon.  At the end of the call he explained to whoever was on the other end of the line that the issue would be reported to senior management- not IT management, but senior medical management.
As bad as it was that this problem happened, I was glad to hear that a network admin had a direct path to report issues to appropriate executives, even in a huge facility like [redacted].  That’s the way it should be: the head of medicine needs to know about preventable and unusual threats to patient care, regardless of the source.  Imagine what technology could accomplish without insular silos disconnecting technology from consequences- maybe my buddy could put away his notepad.
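The nature of that outage suggests a broader monitoring lesson: the scariest failure mode isn’t an error message, it’s silence. A watchdog that alerts when expected data *stops arriving* catches exactly this class of problem. A minimal sketch of the idea (the names and thresholds are purely illustrative, not anything from that hospital’s actual systems):

```python
import time

class TelemetryWatchdog:
    """Flag telemetry sources that have gone quiet for too long.
    Silence, not an error, is often the first sign of an outage."""

    def __init__(self, max_silence_seconds):
        self.max_silence = max_silence_seconds
        self.last_seen = {}  # monitor id -> last report timestamp

    def record(self, monitor_id, timestamp=None):
        """Call whenever a reading arrives from a patient monitor."""
        self.last_seen[monitor_id] = timestamp if timestamp is not None else time.time()

    def stale_monitors(self, now=None):
        """Return monitors that have been silent longer than the threshold."""
        now = now if now is not None else time.time()
        return [m for m, t in self.last_seen.items()
                if now - t > self.max_silence]

# Hypothetical example: "bed-12" last reported at t=0, "bed-14" at t=70;
# checked at t=120 with a 60-second threshold, only "bed-12" is overdue.
w = TelemetryWatchdog(max_silence_seconds=60)
w.record("bed-12", timestamp=0)
w.record("bed-14", timestamp=70)
print(w.stale_monitors(now=120))  # -> ['bed-12']
```

The same dead-man’s-switch pattern applies anywhere a feed going quiet matters more than any individual error it might raise.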

Another incident happened as I was leaving a different friend’s room at the end of visiting hours in another large, modern facility.  But that’s a story for another day, I’ll leave you to reflect on this little set of horrors until then.


Friday, December 12, 2014

The other Herbie Hancock story


As promised, the second lesson from Herbie Hancock’s interview a couple of weeks ago.

Hancock was asked about the ease of musical creation and experimentation with modern computers and electronics. Not surprisingly, he loves the lower barrier to entry and the ease of experimentation- especially compared to the amazing lengths required for electronic musical experimentation in his early days. Then he said something striking, he talked about having to learn all of the old ways, the basics, the fundamentals- and then having to unlearn them to get the most out of new musical technologies.

The foundation provided a deep understanding, but could also hold him back from fully utilizing the new tools; that applies to many advances in technology, from understanding point ignition and carburetors before tackling modern computer controlled ignition and fuel injection, to advances in networking, virtualization, and cloud technologies.

Mastery includes knowing not only what to learn, but what to unlearn, and when- and knowing how to unlearn without forgetting.

I’m pretty good at the unlearning part, the rest I’m still working on.


Thursday, December 11, 2014

Herbie Hancock Stories

Herbie Hancock (photo: Guillaume Laurent, 2010)

After the horror of faux country bubblegum abuse of “Crazy” I saw part of an interview with Herbie Hancock, it more than made up for the horror. Hancock has a new book out, “Possibilities”. I haven’t read it yet, but it is in my Audible queue for my next road trip. Based on the interview I heard, I’m really looking forward to hearing the book in his own voice.

Miles Davis

The first story came from the days when Hancock played with the great Miles Davis. During one show Herbie played an obviously wrong chord, and he was mortified at his mistake. Miles’ reaction was to pause very briefly, then play the “mistake” into the song until it was no longer a mistake, but part of the performance. And nothing was ever said about the mistake- because it was no longer a mistake. At face value, that is a great story about a gracious and talented musician. Beyond that, you can find a lot of inspiration and run with it as it moves you. It certainly can be applied to the mayhem of InfoSec in a few different ways.

There are a couple of quotes we often hear in InfoSec (and in the rest of life), both carry the same message, but come from two very different people.

In recent years, the more common quote comes from Mike Tyson:


Mike Tyson

“Everyone has a plan 'till they get punched in the mouth.”

The older quote, which I’ve heard attributed and misattributed to many people, is from Helmuth Karl Bernhard Graf von Moltke, translated and paraphrased from the original German:

Helmuth Karl Bernhard von Moltke

“No plan survives contact with the enemy.”

As accurate (and quotable) as these quotes are, they are negative. I think Herbie Hancock’s story of Miles Davis dealing with the unexpected is a much better model for us and the challenges we face, no matter how idealistic that may be.

Tomorrow you can have the second story.


Wednesday, December 10, 2014

Manual labor and the horrors of television

Patsy Cline and Willie Nelson

Are you either of the people shown above?  If not, please don’t try to sing “Crazy”.

The past several weekends have involved a fair amount of manual labor, which has reminded me how happy I am that I don’t do that kind of thing for a living anymore. On one of my beer breaks I flipped on the TV to see what horrors it held for me, and I was rewarded with one horror, and a couple of great stories.

First, the horror: Someone who was neither Patsy Cline nor Willie Nelson was attempting to sing “Crazy” on what passes for country music TV. It was pathetic. (Patsy Cline made that song hers, but Willie wrote it and his take on it is authentic). There are some songs that simply shouldn’t be done by folks who aren’t up to the task, and “Crazy” is one of them. Stick to that pitch and tempo corrected bubblegum country crap, don’t defile masterpieces.

You may be wondering about the InfoSec angle here- but there really isn’t one. Most of us who are in InfoSec did it very badly and passed it off as good enough for quite a while when we started out- and many of us still do. That’s the nature of what we do, we rarely have the luxury of delivering “masterpiece” quality work, we do the best we can in the situation; expecting perfection is naïve in our world. In InfoSec, even Patsy Cline would be reduced to singing “99 bottles” with some regularity- and as with pop music, in InfoSec we get what the market demands and what the market will pay for. By the very nature of what we do we are technicians, not artists. If I were deep I might reflect that this may be why so many in InfoSec have artistic outlets- but that’s a simple answer to the complexity of humanity.

Now, about the good stories… those are for tomorrow.


Wednesday, November 26, 2014

Yeah, I’m sick of hearing it too. So just go vote.

(ISC)2 member?  Read on.  Not a member?  You may not care about this one- although if you are in the InfoSec field the results of the election may be of interest.

It is election time for the (ISC)2 again.  As I’ve said before, I don’t have much hope for fixing that mess, but some folks are really trying to make a difference, and if it won’t die I guess I should support them.

The candidates are listed here.  As you peruse that list, you’ll note that all candidates hold some (ISC)2 cert, most of them the CISSP- that’s because it is a requirement for board service.  If I were educated I’d start tossing around phrases like selection bias, confirmation bias, sunk costs, and stuff like that.  Instead I’ll just say that I would prefer a more diverse board.  The US is well represented, and the slate is almost exclusively male.  But, there are some folks out there trying to reduce the suck, and they believe they are making progress.  Vote for the ones you think will try to steer the beast in the direction you want.

For me, I’m happy condemning Wim Remes to another term of board service, and would happily sentence Allison Miller to join him.  That left two votes, I removed “US males” in an effort to push diversity and made my choices from the remaining three.  Use whatever method you like for choosing candidates, but vote if you are eligible.

And I didn’t even get a stupid “I voted” sticker…



Monday, October 13, 2014

Introducing the Shoulders of InfoSec Project

"If I have seen further it is by standing on the shoulders of giants"

Most famously attributed to Sir Isaac Newton, this quote reflects the sentiment of a new project.  In InfoSec we all stand on the shoulders of giants.

It was just supposed to be a talk at DerbyCon, but as I dug into the topic I realized it needed to be more than just one talk.

Another relevant quote is George Santayana’s oft-misquoted:

“Those who cannot remember the past are condemned to repeat it.”

In information security we have a very bad habit of ignoring the past; many times it isn’t even a failure to remember, it is a failure to ever have known who and what came before.

Thus, the Shoulders of InfoSec Project.  It is an attempt to compile a lot of information about early figures in InfoSec (and hopefully it will move beyond just the early figures).  There are some great resources out there already, notably the University of Minnesota's Charles Babbage Institute which includes a great set of oral histories of security luminaries.  The goal is not to compete with, but to complement and highlight other relevant projects.

A note about the name: the project’s name is “Shoulders…”, not “Giants…”, because you do not need to be a giant to offer a shoulder to help others see further.  Many people who were never giants have helped the rest of us see further.

There are two components to the project at this time, a low-volume blog and the wiki.  The project wiki is a work in progress, it includes an ever-expanding list of names, each with a dedicated page including links to relevant information, and will hopefully gain some more color and context as the project develops.  The wiki also includes a references and resources page which has links to several related sites and projects.

The presentation I delivered at DerbyCon is up on Adrian Crenshaw’s Irongeek site if you would like to see some of the ideas and people featured in this project.

Suggestions and contributions are welcome; see the wiki for information about contributing to the project.



Monday, June 23, 2014

What’s the best tool for the job?

This year I’ve been thinking about fundamentals a lot.  That includes patch management, and in preparing a presentation on the topic I pondered the question:

“What is the best patch management tool?”

I thought back to my favorite patch and systems management tools from past jobs when I ran mixed (but mostly Windows) networks for small businesses.  That reminded me of a lesson about tools I learned many years ago.

What is the best [insert category here]?  I believe there are two answers:

The one you have

The one you know

Note that these may not necessarily be true, but in the real world “truth” can be pretty fluid.  There certainly may be better [whatever category] tools than the ones you have now, but you can’t make a difference with them tomorrow- and “a little better tomorrow” is our goal.  The tools available to you, and which you know how to use, are the ones you can make gains with immediately.  If you really are pushing the limits of the tools you have available, consider what works and what doesn’t work with the old tools- then look for better tools and processes, making sure you don’t lose anything you currently rely on in the transition (or at least know what trade-offs you are making).

Get the most out of what you have and you’ll make progress and be better prepared for when the elusive Budget Fairy appears with the Magic Resources Dust- you’ll be better able to make the case for new tools if you can show that you are pushing the existing stuff to its limits; as we all know, the Budget Fairy is hard to find, and harder to get money from.

The bottom line is that we can’t let our existing tools artificially limit us.  I’ve heard variations on “I can’t do X without a new tool” since my days as a mechanic- and while it is sometimes true, it is sometimes just an excuse for doing nothing.



Tuesday, June 17, 2014

Is OWASP broken?

That’s a silly question.  I wasn’t going to comment on the current struggles of the Board of Directors for fear of adding to the Pointless InfoSec Drama, but I need to say a few things about it.  I am not an OWASP insider, but I do support their mission.

OWASP has done a lot of great things, and continues to do so today.  As I said, I’m not an insider, but there appear to be some struggles at the global Board level and possibly organizationally at the national and international level.  And I don’t really care- I hope it gets sorted out soon, but the power of OWASP (and a myriad of other organizations, not just in InfoSec and tech) is largely in the local and regional chapters and events, and in the OWASP projects.

If you believe in OWASP (or any other organization struggling with high-level issues), I encourage you to focus your efforts locally, that’s almost always where you can make the most difference.  In the case of OWASP, there are also the numerous projects- you don’t need to be local to work on them.

As Tip O’Neill frequently observed, “All politics is local”.  Please don’t waste time on drama, focus locally and keep up the good work.


Tuesday, April 22, 2014

A small rant on presenting at conferences

The more conferences I run the more sympathy I have for other conference organizers, even the big commercial ones, and the more inclined I am to follow their rules and requests- but I expect the conferences to have a clue about what’s involved in delivering a good presentation and facilitate that, not hinder it.

If there are glitches at a BSides or other smaller, volunteer-run, or new events I’m OK with that.  It happens.  What I can’t stand are conferences which try to manage the speakers in ways that prevent delivering quality presentations.

First and foremost, I hate having to rely on the conference’s laptops for presentation.  I completely understand the desire to avoid the regular struggles of getting the right settings between a new laptop and the projector or display at the beginning of each session, but most “house laptop” situations I’ve been in are far worse than the lost couple of minutes of the VGA adapter shuffle.  The most common gripe I have is the loss of presenter view.  I want my notes, damn it- stop stealing them from me.  If I have to use your damned laptop, with its lack of fonts, odd and/or old versions of software, aspect ratio distortion and such- please, in the name of all that is good, give me presenter view.

And then we have your slide templates.  I’m sorry, but they suck.  Every. Single. One. Of. Them. Sucks.  Sure, mine suck, too- but in ways I expect.  Your templates and themes take away layout flexibility, they screw up notes pages, and sometimes even hinder basic functionality I rely on.  But then, you want me to use your crappy laptop, so those functions don’t work anyway.

I get it, you run cons, you don’t speak at them, so I’ll forgive you for past transgressions.  But not future ones, our audiences deserve better.



Friday, April 11, 2014

Threat Modeling, by Adam Shostack

Adam has a new book out, Threat Modeling: Designing for Security, and it is a great resource for anyone in security.  As with The New School of Information Security, this is one to grab, read, and keep on the shelf (e-shelf?).

The layout is great, after a short introduction Adam takes you into an easy, but informative practice exercise.  After the exercise there is a more in-depth introduction, which builds on what you learn in the exercise- and also answers some questions which inevitably come up during the exercise.  From the first couple of chapters the book gets progressively deeper into threat modeling theory and practice.  Even if enterprise threat modeling isn’t your world, reading the first few chapters will help you think about securing systems and software more clearly and logically.

I know there are different views and opinions on threat modeling theory and methodology, but even if you approach it differently from Adam, I think you’ll find it informative and valuable.

Those who know me know that I’m a real fan of Adam’s work, he explains complex topics in easy to understand ways- concise and clear without “dumbing things down”.

Gunnar Peterson, who actually knows about this stuff, has an in-depth review of Threat Modeling on his great 1 Raindrop blog.

Grab a copy and give it a read.



Thursday, March 20, 2014

Missing the (opportunity of) Target

You may have heard that some companies lost some credit card data recently.  I think it was in the news.  Come to think of it, a couple of weeks ago I featured a great guest post by Jeff Man on the topic.


In recent stories it has come out that some of the compromised companies “ignored thousands of alerts”, and many folks are heaping scorn and derision on the compromised companies because victim-blaming is easier than looking inward and securing their own stuff.  Also, unless we have a historical record of “normal” alert levels for these environments, and average false positive rates, with statistical deviation analysis- let’s not assume “X-thousand alerts” means a damned thing.  I generate thousands of alerts in my own lab playpens without even trying, I can’t imagine what kind of background noise a global retailer has.
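To make the “statistical deviation” point concrete: without a baseline, “thousands of alerts” is meaningless; with one, even a crude z-score test separates a weird day from background noise. A minimal sketch, with entirely made-up alert counts:

```python
import statistics

def deviant_days(daily_counts, threshold_sigma=3.0):
    """Flag days whose alert count deviates from the historical mean
    by more than threshold_sigma standard deviations."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    return [(day, count) for day, count in enumerate(daily_counts)
            if stdev > 0 and abs(count - mean) > threshold_sigma * stdev]

# Hypothetical month: ~4,000 alerts/day of background noise, then one
# day at 9,500.  Only the spike stands out; the rest is just Tuesday.
history = [4000, 4100, 3900, 4050, 3950, 4200, 3800, 4000, 4100, 3900,
           4050, 3950, 4200, 3800, 4000, 4100, 3900, 4050, 3950, 4200,
           3800, 4000, 4100, 3900, 4050, 3950, 4200, 3800, 4000, 9500]
print(deviant_days(history))  # -> [(29, 9500)]
```

Real environments would need false-positive rates, seasonality, and per-sensor baselines on top of this, which is exactly the point: a raw alert count tells you nothing by itself.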

Oh, and millions of people had cards compromised.  And the impact on the vast majority was nothing.  At least nothing more than getting a new card in the mail.  The payment card security system is, in my opinion, badly broken- but it functioned as designed, and consumers were protected (in that the built-in margins designed to cover fraud covered the fraud to protect the consumers).

There has, of course, been a renewed cry for chip and pin cards to replace the US-only magnetic stripe cards of antiquity we cling to.  And, of course, the expected backlash against chip and pin being an imperfect solution, and thus not worth the effort- forget that getting a little better tomorrow is still a laudable (and arguably the only viable) goal.

And all of this misses a huge opportunity.  An opportunity to make consumers like me happy.  I understand that I am not normal, on a bewildering array of scales of normalcy, but I’m not alone in traveling outside of North America.  I have found myself in subway and train stations late at night, across Europe, with a pocketful of useless US credit cards and no way to buy a ticket without a chip and pin card, the standard for most of the rest of the world.  That’s just plain stupid.


I’ve been plenty of other places where my retro-tech US cards didn’t work, but the “late at night in a transit station” one REALLY sucks.  Now there’s word that we’ll finally start moving away from the old magnetic stripe cards… and the latest is that we will get “chip and signature”, not chip and pin- so much for compatibility.

What we have is an opportunity to make customers and some merchants happier by standardizing technology across the globe- and we could slide a little increase in security into the process at the same time.  But noooooo.  The payment card industry gets it wrong, again.

Glad we never miss opportunities like that in InfoSec.



Monday, March 10, 2014

Recovered yet?

I think I have.  I am, of course, talking about the annual week of madness in San Francisco.

Security BSides San Francisco was another great event, lots of diverse and thought-provoking content, and plenty of good conversations- as we expect from BSides.  The planned lead organizer for BSides San Francisco had a change in career path, and a few of the BSides regulars had to step up and make the event happen- it is amazing working with the folks who make BSides happen, it looked easy from the outside.  And there are new folks ready to take the lead for BSidesSF 2015, so we’ll see you there next year.

Believe it or not, there was a lot more than BSides happening that week.  The RSA/NSA controversy didn’t appear to have any impact on the RSA conference, there were almost 30,000 people in attendance and a record number of vendors, with an expanded vendor expo area.  I was pleased to see a significant reduction in the number of scantily clad women working the booths, but I’m still struggling to understand the significance of a boxing ring in an infosec booth, other than as a bad metaphor.  And nothing, absolutely nothing, says “enterprise security” to me like some dude juggling while riding a unicycle in an expo booth.  At least he was fully dressed.  I had a lot of good conversations at RSA again this year, but the expo floor seemed unusually devoid of innovation.  I didn’t get to do a full crawl of the smaller booths on the edges of the big hall, but it really looked like a “yelling about nothing” year to me.  Terms like “threat intelligence” and “big data” were everywhere, but definitions for “threat intelligence” were often unintelligible.  Patrick Gray’s interview of Marcus Ranum summed it up pretty well (37 second mp3).

I did not make it to TrustyCon, the event spun up to provide an alternative for those who pulled talks from RSA, and a place to focus on trustworthy computing- but it sounded like it had some great content and I hope it grows into a focused event to provide insight and context to the challenges of privacy and security in our “post-Snowden” world.  They seem to be off to a good start.  (Yes, some folks seem to be playing the RSA/NSA story for media and PR, but many folks involved in TrustyCon are, I believe, truly sincere).

Once again the real value of the RSA conference for me was having thousands of people in one area, I had several informative meetings, and many good conversations in and around San Francisco that week.  Speaking of which, as soon as the Spare Time Fairy pays me an overdue visit, I want to write up some of what’s new with Denim Group’s ThreadFix project, cool things are happening there.


Thursday, February 13, 2014

Target and PCI: Talking About the 800 lb. Gorilla (a guest post)

Today I present a guest post, written by my friend Jeffrey Man.  It is a very well thought out piece on Target, PCI, and surrounding issues.


There has been much discussion online and in the media as to whether or not Target was compliant with PCI DSS at the time of their breach. Details of the compromise are still not completely known, but there have been some new details released that – while not definitive – are starting to give us at least an idea of the path the attackers took to gain access to Target’s network, the cardholder data environment, and ultimately the POS systems where malware was installed to capture transaction data and exfiltrate it to the attackers.

I’ve been debating with several colleagues how to best approach a discussion of whether or not Target was compliant at the time of the breach. We are all seeking an informed and objective way of discussing this issue from several vantage points, basically trying to decide the points of failure (if any), and which specific PCI DSS requirements led to the compromise.

Ira Winkler published an article for Computerworld yesterday where he discusses “6 failures that led to Target hack”. Ira very astutely points out that there really wasn’t a single failure that led to the Target breach but in actuality there were a series of systematic failures that allowed the compromise of millions of credit/debit cards and other customer personal information. I’ve been involved with numerous companies over the years that are attempting to recover from a breach or compromise and Ira’s words rang true – there is almost never a single point of failure but a series of actions (and inactions) that lead to the event.

I also thought that the six failure points Ira discusses would be a great springboard for an objective discussion of whether the PCI DSS controls applied, were implemented, or were not being followed by Target. Let me start by summarizing the 6 APPARENT failure points that Ira pointed out in his article:

1. Lack of or improperly implemented segmentation controls to isolate the cardholder data environment (CDE);

2. Lack of or improperly deployed IDS/IPS solutions;

3. Failure to detect compromise of internal software distribution system or failure to detect changes/modification of the software being distributed internally (really two failures, IMO);

4. Lack of whitelisting solution to protect the POS systems;

5. Lack of detection of the compromise of systems commandeered to enable the collection of the transaction data and subsequent exfiltration; and

6. Lack of detection of the exfiltration of the data itself.

My intention is to foster a discussion about these failures as they pertain to the PCI DSS controls specifically and how they are interpreted and applied for the typical large merchant. I have had numerous retail customers over the years, some recovering from a breach; some trying to prevent one; (all trying to comply with PCI DSS and not spend too much time, money and resources). The failures discussed point out the difficulties of implementing adequate security controls in a typical retail environment, and also the complexities of consistently interpreting and applying the PCI DSS controls.

I’ll get the ball rolling with some initial thoughts:

1. Network segmentation is not a PCI DSS requirement, but a highly recommended means of limiting a QSA's scope for validation (in practice, it defines the set of systems to which our clients apply the PCI DSS controls). Evaluating adequate segmentation is highly subjective, so it is debatable whether Target failed to adequately segment their CDE, and whether their QSA approved that segmentation. Frankly, if this proves to be the actual path of compromise, I think it will (or should) serve as the death knell for segmentation and scope-limiting altogether. The lesson learned should be to apply the PCI DSS framework across the enterprise. Period. No exceptions.
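
To make the segmentation question concrete, here is a minimal sketch of inter-segment filtering using Linux iptables. The addresses and segments are invented examples, not Target's actual topology, and a real deployment would live on dedicated firewalls rather than a single host:

```shell
# Hypothetical CDE segmentation sketch (illustrative only).
# 10.10.0.0/16 = corporate network; 10.99.0.0/24 = CDE segment (invented ranges).

# Default-deny all traffic forwarded between segments.
iptables -P FORWARD DROP

# Permit only an approved management host to reach the CDE, on one service.
iptables -A FORWARD -s 10.10.5.10 -d 10.99.0.0/24 -p tcp --dport 22 -j ACCEPT

# Allow established return traffic; log and drop everything else for review.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -j LOG --log-prefix "CDE-SEG-DENY: "
```

A vendor-maintenance path (like the reported HVAC connection) that bypasses rules like these effectively merges the segments, whatever the network diagram claims.
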
2. IDS placement is also debatable, as the standard requires placement at the perimeter of the CDE and at "strategic points" within it. It's likely that the attackers circumvented the perimeter by finding what was effectively a backdoor/trusted ingress path via the HVAC/Ariba system. This could be a simple case of putting alarms on the “front door” and leaving the back door wide open.

3. This one is a little tougher to defend. On the one hand, these systems clearly should have been considered in-scope for PCI and thus should have been in the CDE. But because they perform a supporting function rather than actual transaction processing, I could certainly understand if the focus was more on the controls associated with Requirement 6 as it pertains to change management, software development, testing, and so forth – and not so much on the hardening, logging, and monitoring controls put forth in other sections of the PCI DSS.

4. While whitelisting solutions for POS systems are fairly common, they are not technically required. The requirements for these systems are to have anti-virus/anti-malware installed, receiving automatic updates, and periodically scanning the system; to have FIM (file integrity monitoring) installed and reporting/alerting; and to receive critical patches within 30 days of release. I mention these three categories (AV, FIM, patching) because they are the ones many of my retail clients try to address through compensating controls, using primarily a whitelisting solution as an alternative. A compensating control is allowed where there is a technical limitation; in this case the limitation was the difficulty of administering large numbers of geographically dispersed systems – many of which were not routinely online – in a timely manner according to the specific PCI DSS requirements. Presumably Target either had the primary controls in place, had a compensating control alternative such as a whitelisting solution, or had neither. IF they did, the discussion should focus on whether the control actually worked – and I would point out that as a QSA I was not supposed to judge whether a solution actually performed as advertised, only that it advertised meeting the goals of a particular requirement.
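
For readers unfamiliar with FIM, the core idea is just hash comparison against a known-good baseline. Here is a toy sketch using standard coreutils; real FIM products add real-time monitoring, alert routing, and tamper protection, and the file names here are invented stand-ins, not actual POS paths:

```shell
# Toy file-integrity check (illustrative only; not a real FIM deployment).
set -e
demo=$(mktemp -d)
echo "pos-binary-v1" > "$demo/pos.bin"

# 1. Record a baseline of known-good hashes.
sha256sum "$demo/pos.bin" > "$demo/baseline.sha256"

# 2. Periodic check: exit status is non-zero if any file changed.
if sha256sum -c --quiet "$demo/baseline.sha256"; then
    echo "FIM: no changes detected"
fi

# 3. Simulate an unauthorized modification (e.g., malware dropped on a POS).
echo "tampered" > "$demo/pos.bin"
if ! sha256sum -c --quiet "$demo/baseline.sha256" > /dev/null 2>&1; then
    echo "FIM ALERT: file modified"
fi
```

The hard part, as the compensating-controls discussion above suggests, is not the hashing – it's doing this reliably across thousands of dispersed, intermittently connected registers and getting the alerts to someone who acts on them.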

5. The commandeering of these systems should have been detected, so this seems like an easy one to call non-compliant at the time of the breach. The only rebuttal might be the logical location of these systems (outside the CDE) and whether they were being maintained and monitored according to PCI DSS requirements. But the failure then was the lack of detection of the transfer of data – oh wait, that’s the next failure point…

6. I can’t get past this one. You have to assume that the cardholder data started out inside the CDE and was somehow exfiltrated out of the CDE and ultimately out of the enterprise. That should have been disallowed by outbound firewall rules, so either the attackers used trusted (existing) outbound ports/services/protocols, the rules that would have prevented the egress were non-existent, or the attackers compromised the firewalls and added their own rules. My initial thought was that they likely used existing rules to get the data out – but then there’s the matter of the destination. PCI DSS is supposed to prohibit the use of “any” rules, so maybe the attackers did have to compromise the firewall and at least add an IP or two to an existing outbound server group? I want to give the benefit of the doubt here, but properly implemented PCI DSS controls should have prevented, or at least alerted on, this egress.
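
To sketch what a compliant egress policy looks like in practice, here is a minimal default-deny outbound rule set in Linux iptables form. The destination range is an invented example, not a real processor network:

```shell
# Hypothetical default-deny egress policy for CDE hosts (illustrative only).
iptables -P OUTPUT DROP

# Permit outbound traffic only to an approved destination group, e.g. a
# payment processor gateway (10.200.1.0/24 is an invented example range).
iptables -A OUTPUT -d 10.200.1.0/24 -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Log and drop everything else -- a bulk transfer to an unknown address
# should show up here long before gigabytes of track data leave the CDE.
iptables -A OUTPUT -j LOG --log-prefix "CDE-EGRESS-DENY: "
```

Under a policy like this, exfiltration requires either piggybacking on an approved destination or modifying the rules themselves – exactly the two scenarios discussed above.
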

That is my current thinking based on these failure points. What do you think? Feel free to agree or disagree, but by all means contribute to the discussion.

Jeffrey Man