Saturday, March 8, 2008

The security implications of "Digital Natives"

Wednesday night I attended an interesting talk at the Boston Area Windows Server User Group, "The Social Web and Digital Natives: Understanding the Expectations of Tomorrow's User Base", presented by Anthony Pino of the Digital Natives project at Harvard's Berkman Center.  What is a Digital Native?

Digital natives, a term made popular by Marc Prensky, are young people whose use of technology is completely ingrained in their lives - they have grown up always-on and constantly-connected. Unlike those even a little bit older, these Digital Natives didn't have to learn to "be digital," they learned in digital the first time around.

It was a very good presentation, and the audience (mostly Windows systems and network admins, none young enough to be digital natives) was engaged and participated in a good discussion.  While the topic was Digital Natives, their attitudes and expectations, and the impact they will have on the workplace, it got me thinking (not surprisingly) about the security implications of their arrival.

Here are a few thoughts on what they will bring to the workplace and what they will expect to find.

One thing we already see, mostly in Web 2.0 applications, is the notion of the perpetual beta.  Instead of the traditional software model of spending years developing and testing software and then releasing it as a completed product, the perpetual beta mentality gets a project to a functional state and releases it for use with the idea that the project will evolve with user feedback.  Digital Natives expect this kind of feedback loop because they have grown up with it.  This model can lead to an improved user experience since real-world user feedback is part of the development cycle. (Podcast.com is an example of a site which does this well).  Unfortunately, the constant cycle of release-feedback-revise-release can also mean code gets out before it is fully reviewed and secured.  For a perpetual beta system to work securely, security cannot be "bolted on" at the end of a project; it really has to be an integral part of the development process from the beginning.

Mashups (web applications resulting from combining data and functions from multiple sources) and other hybrid applications can also be problematic.  If all of the components aren't locked down, a "connect the holes" situation may develop.  I think the most likely scenario is data leakage, but who knows where it could lead.  Think about Facebook and all of the applications being thrown at it.  If we believe that Facebook is trying to protect our privacy (play along with me here, OK?), what keeps all of those add-on applications from leaking your information or spying on you?  Yes, the correct answer is: nothing.  Now we need to trust multiple sources of code, and some of those sources may be obfuscated in the mashup.  A less-than-ideal situation.
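To make the "connect the holes" leakage concrete, here's a minimal Python sketch (every name in it is hypothetical, purely for illustration): an internal data source hands back more fields than the mashup actually needs, and a naive mashup forwards the whole record to a third-party add-on.

```python
# Hypothetical sketch of mashup data leakage. None of these functions
# represent a real API; they stand in for an internal data source and
# an untrusted third-party add-on.

def internal_profile_service(user_id):
    # Internal source: returns more than the mashup needs to display.
    return {"user_id": user_id, "display_name": "Alice",
            "email": "alice@example.com", "ssn": "000-00-0000"}

def third_party_widget(payload):
    # Stand-in for an external add-on: it sees everything it receives.
    third_party_widget.received = payload
    return "<div>widget for %s</div>" % payload["display_name"]

def naive_mashup(user_id):
    # Forwards the entire internal record -- email and ssn leak out.
    profile = internal_profile_service(user_id)
    return third_party_widget(profile)

def careful_mashup(user_id):
    # Passes along only the fields the widget legitimately needs.
    profile = internal_profile_service(user_id)
    public_only = {"display_name": profile["display_name"]}
    return third_party_widget(public_only)
```

The point of the sketch is that nothing in the plumbing stops the leak; the only defense is the mashup deliberately stripping the record down to the minimum the third party needs.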

It would be underestimating the Digital Natives to assume they won't respect the privacy of others or the confidentiality of sensitive data- but it would be naive to overlook their very different views of public and private information.  Employers will need to reconcile Digital Natives' attitudes with business and regulatory demands to prevent problems from developing.

A final thought: just because they have grown up with technology doesn't mean they are all tech-savvy.  We still have to teach technology to this new generation: the ability to customize a MySpace page does not make someone a web developer, nor does the ability to connect all of their computers, game and entertainment systems make someone a network engineer.  My son, a real Digital Native and a small-business network and systems admin, frequently bemoans his peers' lack of technical (and security) skills - usually when fixing their computers.


Jack