Friday, April 11, 2014

Threat Modeling, by Adam Shostack

Adam has a new book out, Threat Modeling: Designing for Security, and it is a great resource for anyone in security.  As with The New School of Information Security, this is one to grab, read, and keep on the shelf (e-shelf?).

The layout is great: after a short introduction Adam takes you into an easy but informative practice exercise.  After the exercise there is a more in-depth introduction, which builds on what you learned in the exercise- and also answers some questions which inevitably come up during it.  From the first couple of chapters the book gets progressively deeper into threat modeling theory and practice.  Even if enterprise threat modeling isn’t your world, reading the first few chapters will help you think about securing systems and software more clearly and logically.

I know there are different views and opinions on threat modeling theory and methodology, but even if you approach it differently from Adam, I think you’ll find it informative and valuable.

Those who know me know that I’m a real fan of Adam’s work; he explains complex topics in easy-to-understand ways- concise and clear without “dumbing things down”.

Gunnar Peterson, who actually knows about this stuff, has an in-depth review of Threat Modeling on his great 1 Raindrop blog.

Grab a copy and give it a read.

 

Jack

Thursday, March 20, 2014

Missing the (opportunity of) Target

You may have heard that some companies lost some credit card data recently.  I think it was in the news.  Come to think of it, a couple of weeks ago I featured a great guest post by Jeff Man on the topic.


In recent stories it has come out that some of the compromised companies “ignored thousands of alerts”, and many folks are heaping scorn and derision on the compromised companies because victim-blaming is easier than looking inward and securing their own stuff.  Also, unless we have a historical record of “normal” alert levels for these environments, and average false positive rates, with statistical deviation analysis- let’s not assume “X-thousand alerts” means a damned thing.  I generate thousands of alerts in my own lab playpens without even trying; I can’t imagine what kind of background noise a global retailer has.
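
To put some (entirely invented) numbers behind that point, here is a minimal sketch of the baseline math you’d want before deciding that an alert count means anything; none of these figures reflect any real environment:

```python
# A minimal, hypothetical sketch: is "thousands of alerts" actually unusual?
# The daily counts below are made up for illustration; a real analysis needs the
# environment's own history and its false-positive rate.
from statistics import mean, stdev

baseline_daily_alerts = [4200, 3900, 5100, 4700, 4400, 4950, 4600]  # a "normal" week
observed = 5800  # the day everyone is yelling about

mu = mean(baseline_daily_alerts)
sigma = stdev(baseline_daily_alerts)
z = (observed - mu) / sigma

print(f"baseline mean={mu:.0f}, stdev={sigma:.0f}, z-score={z:.1f}")
# Without a baseline like this, a raw count of alerts tells you nothing; only a
# large, sustained deviation (with a tolerable false-positive rate) is a story.
```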

Oh, and millions of people had cards compromised.  And the impact on the vast majority was nothing.  At least nothing more than getting a new card in the mail.  The payment card security system is, in my opinion, badly broken- but it functioned as designed, and consumers were protected (in that the built-in margins designed to cover fraud did exactly that).

There has, of course, been a renewed cry for chip and PIN cards to replace the US-only magnetic stripe cards of antiquity we cling to.  And, of course, the expected backlash against chip and PIN being an imperfect solution, and thus not worth the effort- forget that getting a little better tomorrow is still a laudable (and arguably the only viable) goal.

And all of this misses a huge opportunity.  An opportunity to make consumers like me happy.  I understand that I am not normal, on a bewildering array of scales of normalcy, but I’m not alone in traveling outside of North America.  I have found myself in subway and train stations late at night, across Europe, with a pocketful of useless US credit cards and no way to buy a ticket without a chip and PIN card, the standard for most of the rest of the world.  That’s just plain stupid.


I’ve been plenty of other places where my retro-tech US cards didn’t work, but the “late at night in a transit station” one REALLY sucks.  Now there’s word that we’ll finally start moving away from the old magnetic stripe cards… and the latest is that we will get “chip and signature”, not chip and PIN- so much for compatibility.

What we have is an opportunity to make customers and some merchants happier by standardizing technology across the globe- and we could slide a little increase in security into the process at the same time.  But noooooo.  The payment card industry gets it wrong, again.

Glad we never miss opportunities like that in InfoSec.

 

Jack

Monday, March 10, 2014

Recovered yet?

I think I have.  I am, of course, talking about the annual week of madness in San Francisco.

Security BSides San Francisco was another great event, with lots of diverse and thought-provoking content and plenty of good conversations- as we expect from BSides.  The planned lead organizer for BSides San Francisco had a change in career path, and a few of the BSides regulars had to step up and make the event happen- working with the folks who make BSides happen is amazing, and they made it look easy from the outside.  And there are new folks ready to take the lead for BSidesSF 2015, so we’ll see you there next year.

Believe it or not, there was a lot more than BSides happening that week.  The RSA/NSA controversy didn’t appear to have any impact on the RSA conference; there were almost 30,000 people in attendance and a record number of vendors, with an expanded vendor expo area.  I was pleased to see a significant reduction in the number of scantily clad women working the booths, but I’m still struggling to understand the significance of a boxing ring in an infosec booth, other than as a bad metaphor.  And nothing, absolutely nothing, says “enterprise security” to me like some dude juggling while riding a unicycle in an expo booth.  At least he was fully dressed.  I had a lot of good conversations at RSA again this year, but the expo floor seemed unusually devoid of innovation.  I didn’t get to do a full crawl of the smaller booths on the edges of the big hall, but it really looked like a “yelling about nothing” year to me.  Terms like “threat intelligence” and “big data” were everywhere, but definitions for “threat intelligence” were often unintelligible.  Patrick Gray’s interview of Marcus Ranum summed it up pretty well (37-second mp3).

I did not make it to TrustyCon, the event spun up to provide an alternative for those who pulled talks from RSA, and a place to focus on trustworthy computing- but it sounded like it had some great content and I hope it grows into a focused event to provide insight and context to the challenges of privacy and security in our “post-Snowden” world.  They seem to be off to a good start.  (Yes, some folks seem to be playing the RSA/NSA story for media and PR, but many folks involved in TrustyCon are, I believe, truly sincere).

Once again the real value of the RSA conference for me was having thousands of people in one area; I had several informative meetings and many good conversations in and around San Francisco that week.  Speaking of which, as soon as the Spare Time Fairy pays me an overdue visit, I want to write up some of what’s new with Denim Group’s ThreadFix project- cool things are happening there.

Jack

Thursday, February 13, 2014

Target and PCI: Talking About the 800 lb. Gorilla (a guest post)

Today I present a guest post, written by my friend Jeffrey Man.  This is a very well-thought-out piece on Target, PCI, and surrounding issues.

 

There has been much discussion online and in the media as to whether or not Target was compliant with PCI DSS at the time of their breach. Details of the compromise are still not completely known, but some new details have been released that – while not definitive – are starting to give us at least an idea of the path the attackers took to gain access to Target’s network, the cardholder data environment, and ultimately the POS systems where malware was installed to capture transaction data and exfiltrate it to the attackers.

I’ve been debating with several colleagues how best to approach a discussion of whether or not Target was compliant at the time of the breach. We are all seeking an informed and objective way of discussing this issue from several vantage points, basically trying to determine the points of failure (if any) and failures of which specific PCI DSS requirements led to the compromise.

Ira Winkler published an article for Computerworld yesterday where he discusses “6 failures that led to Target hack”. Ira very astutely points out that there really wasn’t a single failure that led to the Target breach; in actuality there was a series of systematic failures that allowed the compromise of millions of credit/debit cards and other customer personal information. I’ve been involved with numerous companies over the years that were attempting to recover from a breach or compromise, and Ira’s words rang true – there is almost never a single point of failure, but a series of actions (and inactions) that lead to the event.

I also thought that the six failure points that Ira discusses would be a great springboard for an objective discussion of whether the PCI DSS controls applied, were implemented, or were not being followed by Target. Let me start by summarizing the 6 APPARENT failure points that Ira pointed out in his article:

1. Lack of or improperly implemented segmentation controls to isolate the cardholder data environment (CDE);

2. Lack of or improperly deployed IDS/IPS solutions;

3. Failure to detect compromise of internal software distribution system or failure to detect changes/modification of the software being distributed internally (really two failures, IMO);

4. Lack of whitelisting solution to protect the POS systems;

5. Lack of detection of the compromise of systems commandeered to enable the collection of the transaction data and subsequent exfiltration; and

6. Lack of detection of the exfiltration of the data itself.

My intention is to foster a discussion about these failures as they pertain to the PCI DSS controls specifically and how they are interpreted and applied for the typical large merchant. I have had numerous retail customers over the years, some recovering from a breach, some trying to prevent one (and all trying to comply with PCI DSS without spending too much time, money, and resources). The failures discussed point out the difficulties of implementing adequate security controls in a typical retail environment, and also the complexities of consistently interpreting and applying the PCI DSS controls.

I’ll get the ball rolling with some initial thoughts:

1. Network segmentation is not a PCI DSS requirement, but it is a highly recommended means of limiting a QSA's scope for validation (and in practice it often defines the set of systems to which our clients apply the PCI DSS controls). Evaluating adequate segmentation is highly subjective, so it is debatable whether Target failed to adequately segment their CDE, and whether their QSA approved it or not. Frankly, if this proves to be the actual path of compromise, I think it will serve as the death knell for segmentation and limiting scope altogether (or should). The lesson learned should be to apply the PCI DSS framework across the enterprise. Period. No exceptions.

2. IDS placement is also debatable - the standard requires placement at the perimeter of the CDE and at "strategic points" within it. It's likely that the attackers circumvented the perimeter by finding what was effectively a backdoor/trusted ingress path via the HVAC/Ariba system. This could be a simple case of putting alarms on the “front door” and leaving the back door wide open.

3. This one is a little tougher to defend. On the one hand, these systems clearly should have been considered in-scope for PCI and thus should have been in the CDE. But, because they perform a supporting function and not actual transaction processing, I could certainly understand if the focus was more on the controls associated with Requirement 6 as it pertains to change management, software development, testing, and so forth – and not so much on the hardening, logging, and monitoring controls put forth in other sections of the PCI DSS.

4. While whitelisting solutions for POS systems are fairly common, they are not technically required. The requirement for these systems is to have anti-virus/anti-malware solutions installed, receiving automatic updates, and periodically scanning the system; to have FIM installed, reporting, and alerting; and to receive critical patches within 30 days of release. I mention these three categories (AV, FIM, patching) because these are the categories that many of my retail clients try to address through compensating controls, using primarily a whitelisting solution as an alternative. The use of a compensating control is allowed for technical limitations; in this case the limitation was the difficulty in successfully administering large numbers of geographically dispersed systems – many of which were not routinely online – in a timely manner according to the specific PCI DSS requirements. Presumably Target either had the primary controls in place, or a compensating control alternative such as a whitelisting solution, or they did not. IF they did, the discussion should focus on whether the control actually worked, and I would point out that as a QSA I was not supposed to judge whether a solution actually performed as advertised, only whether it advertised meeting the goals of a particular requirement.

5. The commandeering of these systems should have been detected, so this should be an easy one to say was non-compliant at the time of the breach. The only rebuttal might be the logical location of these systems (outside the CDE) and whether they were being maintained and monitored according to PCI DSS requirements. But the failure then was the lack of detection of the transfer of data – oh wait, that’s the next failure point…

6. I can’t get past this one. You have to assume that the cardholder data started out inside the CDE and was somehow exfiltrated out of the CDE and ultimately out of the enterprise. That should have been disallowed by outbound firewall rules, so either the attackers used trusted (existing) outbound ports/services/protocols, the rules that would have prevented the egress were non-existent, or they compromised the firewalls and added their own rules. My initial thought was that they would likely have used existing rules to get the data out – but then there’s the matter of the destination. PCI DSS is supposed to prohibit the use of “any” rules, so maybe the attackers did have to compromise the firewall and at least add an IP or two to an existing outbound server group? I want to give the benefit of the doubt here, but properly implemented PCI DSS controls should have prevented, or at least alerted on, this egress.
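
To illustrate what “deny by default” egress looks like in principle (this is a generic sketch; the addresses, ports, and logic are invented and say nothing about Target’s actual rule base), outbound traffic from the CDE should only ever reach an explicit allowlist, and anything else should be dropped and alerted on:

```python
# Hypothetical sketch of deny-by-default egress policy evaluation for a CDE.
# Destinations and ports are invented for illustration; a real rule base lives
# in the firewall, not in application code.
ALLOWED_EGRESS = {
    ("10.10.5.20", 443),   # internal payment gateway proxy (example)
    ("10.10.5.21", 514),   # central syslog collector (example)
}

def egress_permitted(dst_ip: str, dst_port: int) -> bool:
    """Return True only for explicitly allowed destination/port pairs."""
    return (dst_ip, dst_port) in ALLOWED_EGRESS

# Anything not on the list should be blocked *and* alerted on -
# e.g. a POS host pushing data to an unknown external FTP server:
print(egress_permitted("203.0.113.50", 21))  # False -> block, log, alert
```

The point is not the code, it’s the posture: with a default-deny egress policy, the destination question raised above becomes an immediate, alertable event rather than something discovered after the fact.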

That is my current thinking based on these failure points. What do you think? Feel free to agree or disagree but by all means you are welcome to contribute to the discussion.

Jeffrey Man

Monday, November 18, 2013

When is a patch not a patch?

When is a patch not a patch?  When it is not a patch.  That seems rather obvious, but sometimes we lose sight of the obvious when talking about patching and vulnerability management (and a lot of other things).

In my “day job” at Tenable, we think about vulnerability management a lot; it is what we do.  We also think about patching and patch management a lot, even though that is not what we do.  (I often wish companies who sell patching and patch management systems were similarly honest about their core competencies, but that’s a rant for another day- it is not quite floor wax and dessert topping territory, but patch and vulnerability management are two related things I do not want coming out of a single can, no matter how shiny or tasty they claim to be).

Back to the topic, patching… and not patching.  Patch Tuesday has driven many into a myopic patch mentality; sometimes that works well, sometimes it works well enough, and sometimes it leads to stupidity.  (Tangent number two: I was always a fan of Shavlik, I don’t know what VMware was thinking when they acquired and nearly ruined them, but thankfully Shavlik has survived, escaped, and will hopefully recover fully).  But patching isn’t always the answer; when a vulnerability is found there should be a logical process for dealing with it, and while “slap a patch on that bad boy” is often a great answer, and frequently the easiest answer, it is not the only answer.

Let’s say you’ve found a vulnerability (or more likely thousands) in your environment- where do you start to deal with it?  There are a handful of questions you need to answer before acting.  In no particular order:

  • Is it real?  I wrote a post on positives and negatives, true and false, some time ago- check out Are you Positive? for thoughts on the topic.  The bottom line is that you need confidence in your findings.  Acting on bad info is rarely a good idea unless you are a politician.
  • Are the “vulnerable” systems exposed?  We don’t always think about online “exposure” the way we should.  We generally understand threats that come to us, whether in the form of physical threats to our homes and offices, or services exposed to the Internet.  In the physical world, we generally only think of going to threatening places in “high-risk” environments, such as high-crime areas or potentially dangerous places such as mountain trails or beaches known for undertow.  The problem with that is that the entire Internet is pretty sketchy, not just the “high-crime” areas.  Legitimate sites are compromised, DNS is hijacked, bad things happen all over- so venturing out is always a little risky.  Any system receiving email or accessing the Internet has some exposure.  Where it gets more tricky is with the indirect exposures- systems which are exposed via pivot or relay.  This often means systems which are not directly exposed to the Internet, but which are exposed to Internet-accessing systems.  This sort of attack path analysis can be challenging, but it does add context to our efforts at understanding exposures and mitigating vulnerabilities.  (Forgive me for not addressing air-gapped systems here, but you will note I am not addressing unicorns, either).
  • Do we care?
    • Should we care?
    • If so, how much?
    • Do the vulnerabilities really expose anything important?
      • How much exposure are you comfortable with?
  • What risks are posed by potential exploit of the vulnerability?
  • What risks are posed by the patch or mitigation?
  • Does the cost of mitigating the vulnerability make sense?  Spending a dollar to protect a dime is probably not the best use of limited resources.
  • Are there known exploits in the wild for the vulnerability?  There may be unknown exploits, but ignore known Bad Things™ at your own risk.
  • Is a patch the best answer?  Maybe you should just uninstall or disable the application or service.  If you don’t need it, kill it.  Maybe there are other mitigations like network segmentation or other ACLs, configuration settings, permissions restrictions, or tools like Microsoft’s EMET which can reduce or eliminate the exposure.  This requires an understanding of the implications of each mitigation- sometimes it is easiest to “just patch”, but patching is not without risks.
  • Can you recover quickly from whatever mitigation you deploy?  Sometimes unwinding a bad patch is as simple as logging into your patch or systems management server and removing the patch.  Sometimes it involves re-imaging thousands of systems.  If faced with the latter, how would you handle it (besides updating your resume)?

I’m sure you can think of more, but this list should start or re-start a conversation I hope you’ve already had several times.
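
If it helps to see those questions as something other than prose, here is a deliberately oversimplified, hypothetical triage sketch; the fields, weights, and thresholds are invented for illustration, not a recommendation:

```python
# Hypothetical, oversimplified triage sketch for the questions above.
# Field names and weights are invented; the point is the shape of the decision,
# not the numbers.
from dataclasses import dataclass

@dataclass
class Finding:
    confirmed: bool          # "Is it real?"
    exposed: bool            # directly or indirectly reachable?
    asset_value: int         # 1 (who cares) .. 5 (crown jewels)
    exploit_in_wild: bool    # known exploitation?
    service_needed: bool     # if not, removal beats patching
    mitigation_cost: int     # 1 (trivial) .. 5 (re-image the fleet)

def next_action(f: Finding) -> str:
    if not f.confirmed:
        return "verify first - don't act on bad info"
    if not f.service_needed:
        return "uninstall or disable the service instead of patching"
    if not f.exposed and not f.exploit_in_wild:
        return "schedule with routine patching"
    risk = f.asset_value + (3 if f.exploit_in_wild else 0) + (2 if f.exposed else 0)
    if risk <= f.mitigation_cost:
        return "mitigate (ACLs, config, EMET-style hardening) or accept the risk"
    return "patch promptly, with a tested rollback plan"

print(next_action(Finding(True, True, 4, True, True, 2)))  # "patch promptly, ..."
```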

I can’t write about patching without addressing a little problem I thought was pretty much behind us, at least for Microsoft: bad patches.  For years I have advocated rapid patching of Microsoft systems since they have done an outstanding job of QA on their updates.  Back in the days when I was an admin in the trenches I patched fast, with a 72-hour patch target for desktops and laptops, and a 10-day target for most servers.  Obviously, some testing is needed, and a lot of testing is needed for critical systems- but you have to decide if the risk of deploying a patch outweighs the risk of not patching, and how other possible mitigations might change the risk.  This has been made a little trickier by the past year’s string of “less than perfect” patches coming from Redmond.  I chatted about this topic with Pat Gray on a recent episode of his outstanding Risky Business podcast.  Microsoft updates are the largest software distribution system in the world, and the quality of the patches is still generally very good.  “Generally very good” might be good enough to push patches to a lot of systems in a rolling deployment after a short test cycle; it is probably not good enough to skip thorough testing before patching critical systems.

In the immortal words of Spock: “Patch well and prosper”.

Or something like that.

Jack

Saturday, November 9, 2013

Microsoft MVP Summit

Headed to the Microsoft MVP Summit?  If so, please stop by and join me at an informal gathering for Security MVPs and like-minded folks on Sunday night, Nov 17.  Drop in anytime between 7 and 10:30 and say hello.  Stay for a few minutes, or a couple of hours- and enjoy snacks, drinks, and conversation.  Send me an email at jdaniel [at] tenable.com for more details and venue info.  (It is very close to all the MVP goings-on in Bellevue, a short walk from any of the event hotels).

This reception will be sponsored by the nice folks who routinely send me a paycheck, Tenable Network Security.  No sales pitches, banners, or anything like that- Tenable is just encouraging conversations, as we often do.

See you there.

Jack

Saturday, September 14, 2013

Can you trust them?

Let’s turn a common theme in InfoSec upside down:

Can you trust, and should you hire, former hackers… er, government employees?

In the still-unfolding Snowden saga, we now have allegations that the US government, specifically the NSA, has attacked cryptography at scale, including the software, protocols, and algorithms we rely on for secure and private communications.  On one hand, I have to say “duh, that’s their job”, but it certainly appears to me that they have significantly overstepped their authority and damaged our ability to secure our data.  While I hold some senior NSA officials, notably General Alexander, partially responsible for part of this abuse, I believe that the real blame lies with the past couple of presidents and the Congress for their utter abandonment of responsibility to the Constitution, and to us, the citizens it is designed to protect.  The NSA (as is true for much of the US federal government) is full of great people, working very hard to properly execute their assigned tasks.  But, if your assigned task is something like fighting terrorism, or combating drugs or child pornography- it is only natural that you will lose perspective in the face of the horrors you are trying to combat.  (Don’t get me wrong, I know that a lot of folks are in the “war on [whatever]” as profiteers, but I believe most people are trying to do what they believe to be right).  That’s where the elusive property of “oversight” comes in.  Or rather, in the case of things like the abuses of the NSA, where oversight should come in, but presidents, congress critters, and others have abdicated their sworn duties.

Back to the question at hand…

Having “NSA” on your resume has traditionally been seen as an asset.  We now have credible claims that government agents have subverted the security of the systems we rely on, in some cases by covert infiltration of private enterprise.

Imbecile executives in the InfoSec industry like to make pronouncements like “We don’t hire hackers”, showing their ignorance of what “hacker” means to many people, and limiting their pool of talented recruits.  Computer criminals have a hard time concealing their past convictions, but covert agents have the power of the intelligence community behind them to create squeaky-clean résumés.  Is that former NSA researcher, the one who is now working on your software, really “former”?

Thus, we have to ask: Is it time for NSA to become scarlet letters on a résumé?

For the record, I don’t think so- but I do believe it is past time to reflect on “who can you trust” before hiring people and putting them in positions of responsibility, regardless of their past.

And that’s a belief I am confident the NSA shares with me…

[Image: Edward Snowden]

(Image Attribution: Laura Poitras / Praxis Films)

 

Jack

[Note: I have not provided links to anything in this post. There are so many sources, with so many revelations, counterclaims, and outright lies that I’ll leave you to use the sources you trust, and reach your own conclusions on the reality and implications of this mess].

Thursday, September 12, 2013

Security BSides, stories and back-stories, part 1

I realize that I’m overdue on providing an update on all things Security BSides, so here is a start.  Usual disclaimers apply: I’m writing personally, not on behalf of BSides or any of the BSides events or organizations, etc.


This weekend will be the 92nd Security BSides, in Augusta, Georgia, a new city for BSides.  That makes 92 events in just over four years, spanning 51 cities, 11 countries, and 5 continents.  And event 100 is just over a month away.  In reality, there will be three events on October 18, numbers 99-101, so let’s call it a three-way tie for 100th.  That three-way tie spans three countries, Poland, Canada, and the US.  Pretty damned amazing if you ask me.

But let’s back up- just what is this “BSides” thing anyway?  There is still some confusion, and a little misinformation, floating around.  It started when a handful of people had some ideas, which coalesced and merged into an event in July of 2009, parallel with Black Hat USA and before DEF CON.  The semi-official history is on the Security BSides wiki.  The original idea was to offer a “B-side” to the “A-side” events.  For those unfamiliar with the term, back in Ye Olden Days we listened to music on spinning bits of plastic called “records”; on singles there was usually a song with mass-market appeal (at least the artists and producers hoped so) on the A-side, while the B-side was generally more experimental, or more artistic instead of pop-centric.  When such things made it to the radio, A-sides were generally on AM and B-sides were often on that fancy FM.  That’s what we imagined for BSides: a place for more experimental, niche-audience content, plus some things with wider appeal.

[Image: Little Richard 45 rpm single]

(To save you Googling it, “Baby Face” was the A-side to this Little Richard B-side, “I’ll never let you go”)

The first event was held in a rented house in west Las Vegas; a lot of folks came together and made it happen (I won’t try listing names- there are too many, and besides, everyone who showed up helped make it happen in some way).  We had about 200 people through the house over the two-day event, and it was a great success.  People wanted more, so several of us began discussing “next steps”.

There was demand for a BSides parallel with RSA in San Francisco, and the San Francisco-based BSides crew started working to make that happen.

Before the event in San Francisco, some people wanted to have an event by the Bay in Mountain View, but there was no “A-side” event.  General consensus was that BSides events didn’t need an A-side to be successful, or to be useful to the community- so BSides Bay happened in December of 2009.  That’s right, the second-ever BSides didn’t have an A-side.  In fact, most Security BSides events haven’t had an A-side event.  By my count, only 27 of the 91 BSides events held thus far have been adjacent to, or parallel with, another event- and it is becoming less common.  Only 8 out of the 41 BSides this year have an adjacent event.  The standalone events often provide underserved communities with a security/hacker event where none would otherwise happen, and that is a huge part of the value the BSides community brings to the greater security and hacker community.

BSides do not require an A-Side, and over two-thirds of Security BSides have been standalone events.  BSides offer a B-Side to the mainstream.

Many of those 27 were done in cooperation with the adjacent event, sometimes even co-branding and cross-promoting to increase value to all attendees and participants.  Sure, some tensions happen, but the two big overlapping event pairs (RSA US/BSides San Francisco and Black Hat/BSides Las Vegas) now have open communications and cooperation between the events.  Also, some proposed BSides events never happen; the BSides community sometimes discourages ones which might fragment or stress adjacent community-driven events.  (Note that there has never been a BSides around Shmoocon, for example).

BSides strive to work with and respect adjacent events.

There is a lot more to tell, but that’s enough for this post.  I’ll follow up with more on BSides in coming posts- until then, check the front page of the BSides wiki for all of the upcoming events around the world.

Oh, and pencil in Tuesday and Wednesday, August 5-6, 2014 for Security BSides Las Vegas.  That’s right, we’re changing the days of BSidesLV to reduce overlap with both Black Hat USA and DEF CON- many people in the community have responsibilities which span two or all three of the events that week, and this move makes it easier to meet those responsibilities.  Or maybe it just gives people time to sneak over to Frankie’s or Double Down to unwind a bit between duties.

Jack

Tuesday, July 23, 2013

Hacker Summer Camp and @HackerRoad

Next week is “Hacker Summer Camp”, also known as BSides Las Vegas, Black Hat, and DEF CON week.  As you might expect, I’ll be at BSides most of next week, then heading over to DEF CON when we finish hiding all the bodies… er, cleaning up and packing out.  We have a killer lineup for BSidesLV as always, and Irongeek will be recording the sessions so you can catch up if you won’t be joining us or miss one you want to see.

I’ll be giving a talk in the Common Ground track, a decidedly non-InfoSec talk:

The Erudite Inebriate’s Guide to Life, Liberty, and the Purſuit of Happineſs

An exploration of bitters, classic cocktails and other stuff

That will be on Wednesday at 16:30 in the Tuscany room.  I’ll also be joining the all-star lineup of Davi Ottenheimer, Raymond Umerley, Steve Werby, David Mortman, and George V. Hulme on Thursday at 12:30 in Florence G for a panel discussion on breach notifications, ethics, and law.

I’ll once again be participating in DEF CON Hacker Pyramid and beard competitions, and of course providing logistical support for the FAIL Panel.  But no pink camisoles this year.  Well, probably not.  Possibly something worse, though.

And finally, for a little entertainment, follow the adventures of video guy Steve and me as we drive from Cape Cod to Las Vegas and back.  Face it, you’ll just be pretending to work until next week, either in prep for the trip, or out of bitterness because you can’t go.  So follow the adventures on Twitter at @HackerRoad as we wander the countryside cursing the latest update to Google Maps for Android, stop at distilleries, and spread cheer wherever we go.  Or something like that.  Maps, photos, video, etc. will be posted to or linked from that Twitter feed.  (Yes, that’s the old Shmoobus account, rebranded for a more wide-ranging set of adventures).  The road trip is made possible by my awesome employers at Tenable Network Security, who are too smart to directly sponsor something this silly, but are kind enough to indulge me taking time for such madness.

 

Jack

Thursday, July 18, 2013

Missing the lessons

Listen up people, I enjoy a pointless socio-political sequential rant on Twitter as much as most folks (I say sequential rant instead of debate, because real debate rarely happens on Twitter)- but seriously, almost the entire InfoSec world is missing the lessons of Manning, Snowden, et al. which are relevant to our goals of securing info.  Also, I see way too many people who should know better falling into the media, troll, and pundit (hard to tell the difference sometimes, probably because there isn’t always a difference) trap of narrowing choices.

Let’s start with the choice flaw: if you are given an “either/or” choice and fall for it, you’ve let the punditroll define the terms of the conversation, and you’ve lost (or at least truth has lost, but what the hell, The Truth is sadly accustomed to losing).  Is [Snowden/Manning/U.S. Grant/George Washington] a hero, traitor, or demon? Yes and no to all of the above- it depends on your position and too many other factors.  Reject the either/or fallacy, and don’t participate in it.

Now, about the lessons- politics, justice, and all that stuff best decided on Twitter or Reddit (or 4Chan) needs to be set aside for a minute so we can look at the security challenge.  My first InfoSec reaction to both the Manning and Snowden breaches was WHY THE HELL DID HE HAVE ACCESS TO ALL OF THAT?!?!?  A few hundred thousand diplomatic cables and other sensitive info freely available at a forward military base- all of which could be accessed and copied by enlisted personnel without supervision- and without setting off any detections?  Treasure troves of Top Secret documents available to a junior contract employee of an intelligence contractor?   Epic failures of fundamental information protection.

The US Department of Defense knows better, but they failed miserably.  To their credit, they’re trying to fix those access problems, but that is not an easy task, and I fear that those beating the Drums of Cyberwar will distract the DoD from getting the basics under control.  And what about Boozed-Allen-give-us-the-Hamiltons?  We (literally we, US taxpayers) pay them a lot of money to screw up.  As I have said many times before, never outsource your core competencies, especially failure.

I understand that this is not a simple challenge, but if you can’t answer

“Who has access to what, under what conditions, and with what monitoring and safeguards?”

you have a problem.  Probably more than one.  And no, I do not expect you to be able to answer that about everything you need to protect.  But maybe, just maybe, the stuff that can embroil multiple nations in political and diplomatic turmoil if leaked- that stuff, you should put a little thought into protecting.  Maybe you don’t protect (or fail to protect) anything that sensitive, but you probably help protect things which, if lost, would cause people (including you) to have A Really Bad Day.  Skip the next round of pundit listening or troll feeding and think about that.

Jack