Monday, June 23, 2014

What’s the best tool for the job?

This year I’ve been thinking about fundamentals a lot.  That includes  patch management, and in preparing a presentation on the topic I pondered the question:

“What is the best patch management tool?”

I thought back to my favorite patch and systems management tools from past jobs when I ran mixed (but mostly Windows) networks for small businesses.  That reminded me of a lesson about tools I learned many years ago.

What is the best [insert category here]?  I believe there are two answers:

The one you have

The one you know

Note that these may not necessarily be true, but in the real world “truth” can be pretty fluid.  There certainly may be better [whatever category] tools than the ones you have now, but you can’t make a difference with them tomorrow- and “a little better tomorrow” is our goal.  The tools available to you, the ones you know how to use, are the ones you can make gains with immediately.  If you really are pushing the limits of the tools you have available, consider what works and what doesn’t work with the old tools- then look for better tools and processes, making sure you don’t lose anything you currently rely on in the transition (or at least know what trade-offs you are making).

Get the most out of what you have and you’ll make progress and be better prepared for when the elusive Budget Fairy appears with the Magic Resources Dust- you’ll be better able to make the case for new tools if you can show that you are pushing the existing stuff to its limits; as we all know, the Budget Fairy is hard to find, and harder to get money from.

The bottom line is that we can’t let our existing tools artificially limit us.  I’ve heard variations on “I can’t do X without a new tool” since my days as a mechanic- and while it is sometimes true, it is sometimes just an excuse for doing nothing.


Jack

Tuesday, June 17, 2014

Is OWASP broken?

That’s a silly question.  I wasn’t going to comment on the current struggles of the Board of Directors for fear of adding to the Pointless InfoSec Drama, but I need to say a few things about it.  I am not an OWASP insider, but I do support their mission.


OWASP has done a lot of great things, and continues to do so today.  As I said, I’m not an insider, but there appear to be some struggles at the global Board level and possibly organizationally at the national and international level.  And I don’t really care- I hope it gets sorted out soon, but the power of OWASP (and a myriad of other organizations, not just in InfoSec and tech) is largely in the local and regional chapters and events, and in the OWASP projects.

If you believe in OWASP (or any other organization struggling with high-level issues), I encourage you to focus your efforts locally- that’s almost always where you can make the most difference.  In the case of OWASP, there are also the numerous projects- you don’t need to be local to work on them.

As Tip O’Neill frequently observed, “All politics is local”.  Please don’t waste time on drama, focus locally and keep up the good work.

Jack

Tuesday, April 22, 2014

A small rant on presenting at conferences

The more conferences I run the more sympathy I have for other conference organizers, even the big commercial ones, and the more inclined I am to follow their rules and requests- but I expect the conferences to have a clue about what’s involved in delivering a good presentation and facilitate that, not hinder it.

If there are glitches at BSides or other smaller, volunteer-run, or new events, I’m OK with that.  It happens.  What I can’t stand are conferences which try to manage the speakers in ways that prevent delivering quality presentations.

First and foremost, I hate having to rely on the conference’s laptops for presenting.  I completely understand the desire to avoid the regular struggles of getting the right settings between a new laptop and the projector or display at the beginning of each session, but most “house laptop” situations I’ve been in are far worse than losing a couple of minutes to the VGA adapter shuffle.  The most common gripe I have is the loss of presenter view.  I want my notes, damn it- stop stealing them from me.  If I have to use your damned laptop, with its lack of fonts, odd and/or old versions of software, aspect ratio distortion and such- please, in the name of all that is good, give me presenter view.

And then we have your slide templates.  I’m sorry, but they suck.  Every. Single. One. Of. Them. Sucks.  Sure, mine suck, too- but in ways I expect.  Your templates and themes take away layout flexibility, they screw up notes pages, and sometimes even hinder basic functionality I rely on.  But then, you want me to use your crappy laptop, so those functions don’t work anyway.

I get it, you run cons, you don’t speak at them, so I’ll forgive you for past transgressions.  But not future ones, our audiences deserve better.


Jack

Friday, April 11, 2014

Threat Modeling, by Adam Shostack

Adam has a new book out, Threat Modeling: Designing for Security, and it is a great resource for anyone in security.  As with The New School of Information Security, this is one to grab, read, and keep on the shelf (e-shelf?).

The layout is great: after a short introduction, Adam takes you into an easy but informative practice exercise.  After the exercise there is a more in-depth introduction, which builds on what you learn in the exercise- and also answers some questions which inevitably come up during it.  From the first couple of chapters the book gets progressively deeper into threat modeling theory and practice.  Even if enterprise threat modeling isn’t your world, reading the first few chapters will help you think about securing systems and software more clearly and logically.

I know there are different views and opinions on threat modeling theory and methodology, but even if you approach it differently from Adam, I think you’ll find it informative and valuable.

Those who know me know that I’m a real fan of Adam’s work- he explains complex topics in easy-to-understand ways, concise and clear without “dumbing things down”.

Gunnar Peterson, who actually knows about this stuff, has an in-depth review of Threat Modeling on his great 1 Raindrop blog.

Grab a copy and give it a read.


Jack

Thursday, March 20, 2014

Missing the (opportunity of) Target

You may have heard that some companies lost some credit card data recently.  I think it was in the news.  Come to think of it, a couple of weeks ago I featured a great guest post by Jeff Man on the topic.


In recent stories it has come out that some of the compromised companies “ignored thousands of alerts”, and many folks are heaping scorn and derision on the compromised companies because victim-blaming is easier than looking inward and securing their own stuff.  Also, unless we have a historical record of “normal” alert levels for these environments, and average false positive rates, with statistical deviation analysis- let’s not assume “X-thousand alerts” means a damned thing.  I generate thousands of alerts in my own lab playpens without even trying; I can’t imagine what kind of background noise a global retailer has.

Oh, and millions of people had cards compromised.  And the impact on the vast majority was nothing.  At least nothing more than getting a new card in the mail.  The payment card security system is, in my opinion, badly broken- but it functioned as designed, and consumers were protected (in that the built-in margins designed to cover fraud did exactly that).

There has, of course, been a renewed cry for chip and pin cards to replace the US-only magnetic stripe cards of antiquity we cling to.  And, of course, the expected backlash against chip and pin being an imperfect solution, and thus not worth the effort- forget that getting a little better tomorrow is still a laudable (and arguably the only viable) goal.

And all of this misses a huge opportunity.  An opportunity to make consumers like me happy.  I understand that I am not normal, on a bewildering array of scales of normalcy, but I’m not alone in traveling outside of North America.  I have found myself in subway and train stations late at night, across Europe, with a pocketful of useless US credit cards and no way to buy a ticket without a chip and pin card, the standard for most of the rest of the world.  That’s just plain stupid.


I’ve been plenty of other places where my retro-tech US cards didn’t work, but the “late at night in a transit station” one REALLY sucks.  Now there’s word that we’ll finally start moving away from the old magnetic stripe cards… and the latest is that we will get “chip and signature”, not chip and pin- so much for compatibility.

What we have is an opportunity to make customers and some merchants happier by standardizing technology across the globe- and we could slide a little increase in security into the process at the same time.  But noooooo.  The payment card industry gets it wrong, again.

Glad we never miss opportunities like that in InfoSec.


Jack

Monday, March 10, 2014

Recovered yet?

I think I have.  I am, of course, talking about the annual week of madness in San Francisco.

Security BSides San Francisco was another great event, lots of diverse and thought-provoking content, and plenty of good conversations- as we expect from BSides.  The planned lead organizer for BSides San Francisco had a change in career path, and a few of the BSides regulars had to step up and make the event happen- it is amazing working with the folks who make BSides happen; they made it look easy from the outside.  And there are new folks ready to take the lead for BSidesSF 2015, so we’ll see you there next year.

Believe it or not, there was a lot more than BSides happening that week.  The RSA/NSA controversy didn’t appear to have any impact on the RSA conference, there were almost 30,000 people in attendance and a record number of vendors, with an expanded vendor expo area.  I was pleased to see a significant reduction in the number of scantily clad women working the booths, but I’m still struggling to understand the significance of a boxing ring in an infosec booth, other than as a bad metaphor.  And nothing, absolutely nothing, says “enterprise security” to me like some dude juggling while riding a unicycle in an expo booth.  At least he was fully dressed.  I had a lot of good conversations at RSA again this year, but the expo floor seemed unusually devoid of innovation.  I didn’t get to do a full crawl of the smaller booths on the edges of the big hall, but it really looked like a “yelling about nothing” year to me.  Terms like “threat intelligence” and “big data” were everywhere, but definitions for “threat intelligence” were often unintelligible.  Patrick Gray’s interview of Marcus Ranum summed it up pretty well (37 second mp3).

I did not make it to TrustyCon, the event spun up to provide an alternative for those who pulled talks from RSA, and a place to focus on trustworthy computing- but it sounded like it had some great content and I hope it grows into a focused event to provide insight and context to the challenges of privacy and security in our “post-Snowden” world.  They seem to be off to a good start.  (Yes, some folks seem to be playing the RSA/NSA story for media and PR, but many folks involved in TrustyCon are, I believe, truly sincere).

Once again the real value of the RSA conference for me was having thousands of people in one area; I had several informative meetings and many good conversations in and around San Francisco that week.  Speaking of which, as soon as the Spare Time Fairy pays me an overdue visit, I want to write up some of what’s new with Denim Group’s ThreadFix project- cool things are happening there.

Jack

Thursday, February 13, 2014

Target and PCI: Talking About the 800 lb. Gorilla (a guest post)

Today I present a guest post, written by my friend Jeffrey Man.  This is a very well thought out piece on Target, PCI, and surrounding issues.


There has been much discussion online and in the media as to whether or not Target was compliant with PCI DSS at the time of their breach. Details of the compromise are still not completely known, but there have been some new details released that- while not definitive- are starting to give us at least an idea of the path the attackers took to gain access to Target’s network, the cardholder data environment, and ultimately the POS systems where malware was installed to capture transaction data and exfiltrate it to the attackers.

I’ve been debating with several colleagues how to best approach a discussion of whether or not Target was compliant at the time of the breach. We are all seeking an informed and objective way of discussing this issue from several vantage points, basically trying to decide the points of failure (if any), and which specific PCI DSS requirements led to the compromise.

Ira Winkler published an article for Computerworld yesterday where he discusses “6 failures that led to Target hack”. Ira very astutely points out that there really wasn’t a single failure that led to the Target breach but in actuality there were a series of systematic failures that allowed the compromise of millions of credit/debit cards and other customer personal information. I’ve been involved with numerous companies over the years that are attempting to recover from a breach or compromise and Ira’s words rang true – there is almost never a single point of failure but a series of actions (and inactions) that lead to the event.

I also thought that the six failure points that Ira discusses would be a great springboard for an objective discussion of whether the PCI DSS controls applied, were implemented, or were not being followed by Target. Let me start by summarizing the six APPARENT failure points that Ira pointed out in his article:

1. Lack of or improperly implemented segmentation controls to isolate the cardholder data environment (CDE);

2. Lack of or improperly deployed IDS/IPS solutions;

3. Failure to detect compromise of internal software distribution system or failure to detect changes/modification of the software being distributed internally (really two failures, IMO);

4. Lack of whitelisting solution to protect the POS systems;

5. Lack of detection of the compromise of systems commandeered to enable the collection of the transaction data and subsequent exfiltration; and

6. Lack of detection of the exfiltration of the data itself.

My intention is to foster a discussion about these failures as they pertain to the PCI DSS controls specifically and how they are interpreted and applied for the typical large merchant. I have had numerous retail customers over the years, some recovering from a breach, some trying to prevent one (and all trying to comply with PCI DSS without spending too much time, money, and resources). The failures discussed point out the difficulties of implementing adequate security controls in a typical retail environment, and also the complexities of consistently interpreting and applying the PCI DSS controls.

I’ll get the ball rolling with some initial thoughts:

1. Network segmentation is not a PCI DSS requirement, but a highly recommended means of limiting a QSA's scope for validation (and in practice it often defines the systems to which our clients apply the PCI DSS controls). Evaluating adequate segmentation is highly subjective, so this point is debatable as to whether Target failed to adequately segment their CDE, or whether their QSA approved it or not. Frankly, if this proves to be the actual path of compromise, I think this will serve as the death knell for segmentation and limiting scope altogether (or should). The lesson learned should be to apply the PCI DSS framework across the enterprise. Period. No exceptions.

2. IDS placement is also debatable, as the standard requires placement at the perimeter of and at "strategic points" within the CDE. It's likely that the hackers circumvented the perimeter by finding what was effectively a backdoor/trusted ingress path via the HVAC/Ariba system. This could be a simple case of putting alarms on the “front door” and leaving the back door wide open.

3. This one is a little tougher to defend. On the one hand, these systems should clearly have been considered in-scope for PCI and thus should have been in the CDE. But, because they perform a supporting function and not actual transaction processing, I could certainly understand if the focus was more on the controls associated with Requirement 6 as it pertains to change management, software development, testing, and so forth – and not so much on the hardening, logging, and monitoring controls put forth in other sections of the PCI DSS.

4. While whitelisting solutions for POS systems are fairly common, they are not technically required. The requirements for these systems are to have anti-virus/anti-malware installed, receiving automatic updates and periodically scanning the system; to have FIM installed, reporting, and alerting; and to receive critical patches within 30 days of release. I mention these three categories (AV, FIM, patching) because these are the categories that many of my retail clients try to address through compensating controls, primarily using a whitelisting solution as an alternative. The use of a compensating control is allowed for technical limitations; in this case the limitation was the difficulty in successfully administering large numbers of geographically dispersed systems – many of which were not routinely online – in a timely manner according to the specific PCI DSS Requirements. Presumably Target either had the primary controls in place, or a compensating control alternative such as a whitelisting solution, or they did not. IF they did, the discussion should focus on whether the control actually worked, and I would point out that as a QSA I was not supposed to judge whether a solution actually performed as advertised, only whether it claimed to meet the goals of a particular requirement.

5. The commandeering of these systems should have been detected, so this should be an easy one to say was non-compliant at the time of the breach. The only rebuttal might be the logical location of these systems (outside the CDE) and whether they were being maintained and monitored according to PCI DSS requirements. But the failure then was the lack of detection of the transfer of data – oh wait, that’s the next failure point…

6. I can’t get past this one. You have to assume that the cardholder data started out inside the CDE and was somehow exfiltrated outside of the CDE and ultimately outside of the enterprise. That should have been disallowed by outbound firewall rules, so either the attackers used trusted (existing) outbound ports/services/protocols, the rules that would have prevented the egress were non-existent, or the attackers compromised the firewalls and added their own rules. My initial thought was that they would likely have used existing rules to get the data out – but then there’s the matter of the destination. PCI DSS is supposed to prohibit the use of “any” rules, so maybe the attackers did have to compromise the firewall and at least add an IP or two to an existing outbound server group? I want to give the benefit of the doubt here, but properly implemented PCI DSS controls should have prevented or at least alerted on this egress.
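
To make the egress concern concrete, here is a minimal, hypothetical sketch (added for illustration, not part of any QSA methodology) of the kind of automated review that can surface overly permissive outbound rules from the CDE.  The rule format, zone names, and sample rules are invented; a real review would work from your firewall’s actual configuration export and change records.

    # Hypothetical sketch: flag permissive egress rules from the cardholder
    # data environment (CDE).  The rule format and zone names are invented
    # for illustration; adapt to your firewall's actual export format.

    PERMISSIVE = "any"

    def risky_egress_rules(rules):
        """Return outbound 'allow' rules from the CDE with an 'any' destination or service."""
        findings = []
        for rule in rules:
            if rule["src_zone"] != "CDE" or rule["direction"] != "outbound":
                continue
            if rule["action"] != "allow":
                continue
            if PERMISSIVE in (rule["dst"], rule["service"]):
                findings.append(rule)
        return findings

    if __name__ == "__main__":
        sample_rules = [
            {"name": "pos-to-payment-gw", "direction": "outbound", "src_zone": "CDE",
             "dst": "payment-gateway", "service": "tcp/443", "action": "allow"},
            {"name": "cde-catchall", "direction": "outbound", "src_zone": "CDE",
             "dst": "any", "service": "any", "action": "allow"},
        ]
        for rule in risky_egress_rules(sample_rules):
            print(f"Review rule '{rule['name']}': egress from CDE to "
                  f"{rule['dst']} on {rule['service']}")

A check like this proves nothing about compliance by itself, but it surfaces exactly the kind of “any” outbound rule discussed above, and it is cheap to run whenever the rule base changes.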

That is my current thinking based on these failure points. What do you think? Feel free to agree or disagree but by all means you are welcome to contribute to the discussion.

Jeffrey Man

Monday, November 18, 2013

When is a patch not a patch?

When is a patch not a patch?  When it is not a patch.  That seems rather obvious, but sometimes we lose sight of the obvious when talking about patching and vulnerability management (and a lot of other things).

In my “day job” at Tenable, we think about vulnerability management a lot, it is what we do.  We also think about patching and patch management a lot, even though that is not what we do.  (I often wish companies who sell patching and patch management systems were similarly honest about their core competencies, but that’s a rant for another day- it is not quite floor wax and dessert topping territory, but patch and vulnerability management are two related things I do not want coming out of a single can, no matter how shiny or tasty they claim to be).

Back to the topic, patching… and not patching.  Patch Tuesday has driven many into a myopic patch mentality- sometimes that works well, sometimes it works well enough, and sometimes it leads to stupidity.  (Tangent number two: I was always a fan of Shavlik; I don’t know what VMware was thinking when they acquired and nearly ruined them, but thankfully Shavlik has survived, escaped, and will hopefully recover fully).  But patching isn’t always the answer; when a vulnerability is found there should be a logical process for dealing with it, and while “slap a patch on that bad boy” is often a great answer, and frequently the easiest answer, it is not the only answer.

Let’s say you’ve found a vulnerability (or more likely thousands) in your environment- where do you start to deal with it?  There are a handful of questions you need to answer before acting.  In no particular order (a rough sketch of wiring a few of these together follows the list):

  • Is it real?  I wrote a post on positives and negatives, true and false, some time ago- check out Are you Positive? for thoughts on the topic.  The bottom line is that you need confidence in your findings.  Acting on bad info is rarely a good idea unless you are a politician.
  • Are the “vulnerable” systems exposed?  We don’t always think about online “exposure” the way we should.  We generally understand threats that come to us, whether in the form of physical threats to our homes and offices, or services exposed to the Internet.  In the physical world, we generally only think of going to threatening places in “high-risk” environments, such as high-crime areas, remote mountain trails, or beaches known for undertow.  The problem with that is that the entire Internet is pretty sketchy, not just the “high-crime” areas.  Legitimate sites are compromised, DNS is hijacked, bad things happen all over- so venturing out is always a little risky.  Any system receiving email or accessing the Internet has some exposure.  Where it gets trickier is with the indirect exposures- systems which are exposed via pivot or relay.  This often means systems which are not directly exposed to the Internet, but which are exposed to Internet-accessing systems.  This sort of attack path analysis can be challenging, but it does add context to our efforts at understanding exposures and mitigating vulnerabilities.  (Forgive me for not addressing air-gapped systems here, but you will note I am not addressing unicorns, either).
  • Do we care?
    • Should we care?
    • If so, how much?
    • Do the vulnerabilities really expose anything important?
      • How much exposure are you comfortable with?
  • What risks are posed by potential exploit of the vulnerability?
  • What risks are posed by the patch or mitigation?
  • Does the cost of mitigating the vulnerability make sense?  Spending a dollar to protect a dime is probably not the best use of limited resources.
  • Are there known exploits in the wild for the vulnerability?  There may be unknown exploits, but ignore known Bad Things™ at your own risk.
  • Is a patch the best answer?  Maybe you should just uninstall or disable the application or service.  If you don’t need it, kill it.  Maybe there are other mitigations like network segmentation or other ACLs, configuration settings, permissions restrictions, or tools like Microsoft’s EMET which can reduce or eliminate the exposure.  This requires an understanding of the implications of each mitigation- sometimes it is easiest to “just patch”, but patching is not without risks.
  • Can you recover quickly from whatever mitigation you deploy?  Sometimes unwinding a bad patch is as simple as logging into your patch or systems management server and removing the patch.  Sometimes it involves re-imaging thousands of systems.  If faced with the latter, how would you handle it (besides updating your resume)?
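
Not a methodology, just a sketch: here is one hypothetical way to wire a couple of those questions together- exposure (direct or via pivot) and known exploits- into a rough ordering.  The host names, the reachability map, and the findings are invented for illustration.

    from collections import deque

    # Hypothetical sketch: which hosts are reachable (directly or via pivot)
    # from internet-facing systems, and which findings on those hosts deserve
    # attention first?  Hosts, edges, and findings are invented placeholders.

    REACHABLE_FROM = {            # "host can be reached from source" edges:
        "web01":   ["internet"],  # admin protocols, shares, trust paths, etc.
        "app01":   ["web01"],
        "db01":    ["app01"],
        "hr-file": ["corp-lan"],
    }

    def exposed_hosts(edges, start="internet"):
        """Breadth-first search: every host reachable from `start` has some exposure."""
        exposed, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            for host, sources in edges.items():
                if node in sources and host not in exposed:
                    exposed.add(host)
                    queue.append(host)
        return exposed

    def triage(findings, exposed):
        """Crude ordering: exposed with a known exploit first, then exposed, then the rest."""
        def rank(finding):
            return (finding["host"] not in exposed, not finding["known_exploit"])
        return sorted(findings, key=rank)

    if __name__ == "__main__":
        findings = [
            {"host": "db01",    "vuln": "CVE-XXXX-1111", "known_exploit": True},
            {"host": "hr-file", "vuln": "CVE-XXXX-2222", "known_exploit": True},
            {"host": "web01",   "vuln": "CVE-XXXX-3333", "known_exploit": False},
        ]
        reachable = exposed_hosts(REACHABLE_FROM)
        for f in triage(findings, reachable):
            print(f["host"], f["vuln"],
                  "exposed" if f["host"] in reachable else "internal")

Real attack path analysis is far messier than a toy adjacency map, but even a crude pass like this adds the context the list above is asking for.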

I’m sure you can think of more, but this list should start or re-start a conversation I hope you’ve already had several times.

I can’t write about patching without addressing a little problem I thought was pretty much behind us, at least for Microsoft: bad patches.  For years I have advocated rapid patching of Microsoft systems since they have done an outstanding job of QA on their updates.  Back in the days when I was an admin in the trenches I patched fast, with a 72-hour patch target for desktops and laptops, and a 10-day target for most servers.  Obviously, some testing is needed, and a lot of testing is needed for critical systems- but you have to decide if the risk of deploying a patch outweighs the risk of not patching, and how other possible mitigations might change the risk.  This has been made a little trickier by the past year’s string of “less than perfect” patches coming from Redmond.  I chatted about this topic with Pat Gray on a recent episode of his outstanding Risky Business podcast.  Microsoft updates are the largest software distribution system in the world, and the quality of the patches is still generally very good.  “Generally very good” might be good enough to push patches to a lot of systems in a rolling deployment after a short test cycle; it is probably not good enough to skip thorough testing before patching critical systems.
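
For illustration only, here is a minimal sketch of that cadence as a policy table.  The 72-hour desktop target and the 10-day server target come from this post; the 30-day critical-system window and the asset class names are hypothetical placeholders, not anyone’s actual policy- and your patch management tool already does this math for you.

    from datetime import datetime, timedelta

    # Hypothetical sketch of the cadence described above.  Asset classes and
    # windows are placeholders; the critical-system window assumes thorough
    # testing happens before the deadline.

    PATCH_TARGETS = {
        "desktop":         timedelta(hours=72),
        "server":          timedelta(days=10),
        "critical-server": timedelta(days=30),  # hypothetical, gated on testing
    }

    def patch_deadline(asset_class, released):
        """Return the datetime by which a patch released at `released` should be deployed."""
        return released + PATCH_TARGETS[asset_class]

    if __name__ == "__main__":
        patch_tuesday = datetime(2013, 11, 12)
        for asset in ("desktop", "server", "critical-server"):
            print(asset, "->", patch_deadline(asset, patch_tuesday).date())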

In the immortal words of Spock: “Patch well and prosper”.

Or something like that.

Jack

Saturday, November 9, 2013

Microsoft MVP Summit

Headed to the Microsoft MVP Summit?  If so, please stop by and join me at an informal gathering for Security MVPs and like-minded folks on Sunday night, Nov 17.  Drop in anytime between 7 and 10:30 and say hello.  Stay for a few minutes, or a couple of hours- and enjoy snacks, drinks, and conversation.  Send me an email at jdaniel [at] tenable.com for more details and venue info.  (It is very close to all the MVP goings-on in Bellevue, a short walk from any of the event hotels).

This reception will be sponsored by the nice folks who routinely send me a paycheck, Tenable Network Security.  No sales pitches, banners, or anything like that- Tenable is just encouraging conversations, as we often do.

See you there.

Jack

Saturday, September 14, 2013

Can you trust them?

Let’s turn a common theme in InfoSec upside down:

Can you trust, and should you hire, former hackers… er, government employees?

In the still-unfolding Snowden saga, we now have allegations that the US government, specifically the NSA, has attacked cryptography at scale, including the software, protocols, and algorithms we rely on for secure and private communications.  On one hand, I have to say “duh, that’s their job”, but it certainly appears to me that they have significantly overstepped their authority and damaged our ability to secure our data.  While I hold some senior NSA officials, notably General Alexander, partially responsible for part of this abuse, I believe that the real blame lies with the past couple of presidents and the Congress for their utter abandonment of responsibility to the Constitution, and to us, the citizens it is designed to protect.  The NSA (as is true for much of the US federal government) is full of great people, working very hard to properly execute their assigned tasks.  But, if your assigned task is something like fighting terrorism, or combatting drugs or child pornography- it is only natural that you will lose perspective in the face of the horrors you are trying to combat.  (Don’t get me wrong, I know that a lot of folks are in the “war on [whatever]” as profiteers, but I believe most people are trying to do what they believe to be right).  That’s where the elusive property of “oversight” comes in.  Or, in the case of things like the abuses of the NSA, where oversight should come in- but presidents, congress critters, and others have abdicated their sworn duties.

Back to the question at hand…

Having “NSA” on your resume has traditionally been seen as an asset.  We now have credible claims that government agents have subverted the security of the systems we rely on, in some cases by covert infiltration of private enterprise.

Imbecile executives in the InfoSec industry like to make pronouncements like “We don’t hire hackers”, showing their ignorance of what “hacker” means to many people, and limiting their pool of talented recruits.  Computer criminals have a hard time concealing their past convictions, but covert agents have the power of the intelligence community behind them to create squeaky-clean résumés.  Is that former NSA researcher, the one who is now working on your software, really “former”?

Thus, we have to ask: Is it time for NSA to become scarlet letters on a résumé?

For the record, I don’t think so- but I do believe it is past time to reflect on “who can you trust” before hiring people and putting them in positions of responsibility, regardless of their past.

And that’s a belief I am confident the NSA shares with me…

[Image: Edward Snowden.  Image Attribution: Laura Poitras / Praxis Films]


Jack

[Note: I have not provided links to anything in this post. There are so many sources, with so many revelations, counterclaims, and outright lies, that I’ll leave you to use the sources you trust and reach your own conclusions on the reality and implications of this mess].