Thursday, May 27, 2010

Continuing the car-rant

Time to resume the review of the car security paper.  I will not pick it apart completely, but I do need to hit several more points in this post and then I'll wrap up the car rant in one more post on access to tools and information.
Backing up a bit, one thing I skipped over was the choice of automobiles.  They chose one pair of identical cars, testing one stationary and one on a test track.  Their reasoning behind the small sample, and not identifying the car, is stated as:
"We believe the risks identified in this paper arise from the architecture of the modern automobile and not simply from design decisions made by any single manufacturer. For this reason, we have chosen not to identify the particular make and model used in our tests. We believe that other automobile manufacturers and models with similar features may have similar security properties."
I am uncomfortable with these blanket assumptions for a few reasons.  First, it is irresponsible to praise or damn an entire industry based on a single sample.  Second, I assume the technology levels of auto manufacturers have been somewhat leveled by regulatory and market pressures, but it is naive to think they are all the same, or that they all treat digital security the same.  Time for another trip back in time: Going back to the pre-OBD-II days, some manufacturers had barely moved out of the seventies, while others had forged well ahead.  For example, there was some argument over the number of pins to be included in the proposed standard connector.  Many manufacturers complained loudly, saying it was impossible to supply the required information through so few pins.  Chrysler also complained about the proposal, but their complaint was the opposite: they were already supplying more information than required through a smaller connector, and they objected to needlessly adding pins.  At this time, Ford technicians were still using breakout boxes and meters for some testing, while many Chrysler vehicles were connected to in-shop computers that did the same, only faster and more reliably.  At the same time (early 90's) we had data capture devices available for Chrysler products which let us capture system data while on the road.  I remember dumping the contents to the shop computer, then to floppy, so that I could wave them in engineers' faces at conferences.  "If you look here on the fuel injector and ignition timing traces..." was a lot better than "I don't know, but the customer still isn't happy".  Moving forward: up until a couple of years ago I know that the quality and technology of diagnostic equipment varied widely between manufacturers, and from what little I've seen from friends and clients this still appears to be true.
I do not think it is unreasonable to conclude from this that testing multiple manufacturers' systems is warranted before making any sweeping statements.
A technical and safety nitpick: On page nine, section V. B., they discuss the setup for the stationary tests.  They raised the car on jackstands and ran the drivetrain at test speeds with the wheels and powertrain unloaded.  This means the operating environment of the vehicle was artificial and the bearings were spun without load- a bad idea.  Also, without the weight of the vehicle on the wheels, a malfunction could have caused out-of-control vibration, possibly bouncing the car off the stands with catastrophic results.  (This one is not theoretical, I've seen bad things.)  In this configuration, they tested the electronic braking system- the rear wheels were stationary while the front wheels spun at 40 MPH.  Since there are wheel speed sensors on each wheel, this put the system in an unnatural state, so it is not surprising that they experienced different results between the static and road tests.  They did understand this, but understanding that they were "doing it wrong" is not especially confidence-inspiring.  Although they repeated some tests on the road, the setup shows a basic lack of awareness- especially considering the information they could have gathered by running the system on a chassis dynamometer.  Dynos are not rare anymore, either- state emissions testing stations in many states are equipped with rudimentary ones, and performance and speed shops have reliable ones available.  Running on a dyno would not have solved the wheel speed sensor issue, but it would have addressed the load and safety issues.  As for the wheel speed issue, depending on the sensor it may have been possible to solve with a simple little hack of the sensor outputs while stationary.
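That last point can be made concrete.  A passive wheel-speed sensor just emits pulses as tone-ring teeth pass it, so a signal generator producing the right pulse frequency can stand in for a spinning wheel.  Here is a minimal sketch of the arithmetic; the tire circumference and tooth count are assumed values for illustration, not figures from the paper or any particular vehicle:

```python
# Hypothetical worked example: the pulse frequency a signal generator
# would need to produce to mimic a toothed-ring wheel-speed sensor at
# a given road speed.  Circumference and tooth count are assumptions.

MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def sensor_frequency_hz(speed_mph, tire_circumference_m=2.0, tone_ring_teeth=48):
    """Pulses per second the ECU would see from a toothed tone ring."""
    wheel_revs_per_s = (speed_mph * MPH_TO_MS) / tire_circumference_m
    return wheel_revs_per_s * tone_ring_teeth

if __name__ == "__main__":
    f = sensor_frequency_hz(40.0)
    print(f"{f:.0f} Hz")  # about 429 Hz with these assumed values
```

Whether this works in practice depends on the sensor type (a modern active sensor speaks a different electrical language than an old variable-reluctance pickup), which is why I hedge with "may have been possible".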
Moving forward to page twelve, we return to good (and disturbing) information:
"The fact that many of these safety-critical attacks are still effective in the road setting suggests that few DeviceControl functions are actually disabled when the car is at speed while driving, despite the clear capability and intention in the standard to do so."
Again with the poorly implemented or enforced standard- this is critical information from the report.  They also found that it was possible to bridge the high- and low-speed buses, which is just plain wrong, and potentially terrifying.  These are more important findings; I really wish I wasn't questioning everything by this point.
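To see why bridging the buses is so terrifying, consider what a gateway between them is supposed to do.  This is a hypothetical sketch, not the paper's setup or any real vehicle's firmware; the message IDs are invented for illustration:

```python
# Hypothetical illustration: a gateway between the low-speed (body)
# bus and high-speed (powertrain) bus should forward only whitelisted
# message IDs.  A raw bridge forwards everything.  IDs are invented.

ALLOWED_TO_HIGH_SPEED = {0x210}  # e.g. a benign status message

def gateway_forwards(arb_id):
    """A filtering gateway passes only known-safe IDs across."""
    return arb_id in ALLOWED_TO_HIGH_SPEED

def raw_bridge_forwards(arb_id):
    """A plain bridge passes everything, including frames that
    should never reach the powertrain bus."""
    return True

SPOOFED_BRAKE_ID = 0x220  # invented ID for a brake-control frame
print(gateway_forwards(SPOOFED_BRAKE_ID))     # False: blocked
print(raw_bridge_forwards(SPOOFED_BRAKE_ID))  # True: it crosses over
```

The point of the sketch: once the buses can be bridged, anything that can talk on the low-speed bus can reach the safety-critical one, and the whitelist no longer matters.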
Hopefully you have read the report.  Don't just accept their findings (or my observations); put some thought of your own into it.  I hope there is a follow-up study, and that it is done with more caution and better research.
There is one more petty thing I wish to bring up.  They refer to computer-managed automobiles as "cyber-physical vehicles".  What the [heck] does that mean?  We do not need more cyber-hyphenated nonsense terms.
One more quick car post is coming, then back to your irregularly-scheduled drivel.