Back in March this year, Keen Security Lab in China managed to break Tesla's Autopilot and gain control of a moving Tesla vehicle. They managed to ‘trick’ a car under Autopilot control into changing lanes and switching on the wipers. The attack chain they used has since been secured, a good example of so-called White Hat ‘hackers’ vs Black Hats managing a risk feedback loop.
“…Big deal” you say? Bloody oath it is! Both Tesla and Uber have plans for large-scale automated taxi fleets. They're a way off yet, but when you have a fleet of vehicles that can communicate amongst themselves, with pedestrians and with infrastructure, AND they're capable of being duped by altering their firmware in real time and misled by disguised road signs or lane markings, you have yourself a potential mobile attack platform… Think of the havoc that could be wrought by a fleet of aggressively psychotic and anarchic cars in peak hour. I'm watching this space with interest…
I'm loving the fact that the report exposes Tesla's chip-naming schema. One of their primary chips carries the prefix LB, for ‘Lizard Brain’… Does it therefore follow that they have another chip named after the more playful ‘Monkey Brain’? Of course it does… the whole system acronyms down to APE (Autopilot ECU)…
In fairness, they ran the project on the discontinued Model 75 Tesla, so it's possible that the car's firmware wasn't up to date; but it's important to note that over-the-air firmware upgrades are touted as a ‘feature’ by Tesla to deliver new functionality, and this was a registered, in-use vehicle.
From the report abstract:
In our research, we believe that we made three creative contributions:
- We proved that we can remotely gain the root privilege of APE and control the steering system.
- We proved that we can disturb the autowipers function by using adversarial examples in the physical world.
- We proved that we can mislead the Tesla car into the reverse lane with minor changes on the road.
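The ‘adversarial examples’ in the second bullet are inputs nudged just enough to flip a classifier's decision. Keen Labs did this with physical stickers and markings against Tesla's vision network; as a minimal sketch of the underlying idea, here's the classic fast gradient sign method applied to a toy logistic ‘classifier’. Everything here (the weights, the input, the epsilon) is made up for illustration and bears no relation to Tesla's actual models:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a perception network: one logistic unit with
# fixed, made-up weights.
w = np.array([2.0, -1.0, 0.5])

def predict(x):
    """Return 1 if the model classifies feature vector x as class 1."""
    return int(sigmoid(w @ x) >= 0.5)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: shift every input feature by eps in
    the direction that increases the loss for the true label y."""
    p = sigmoid(w @ x)
    grad = (p - y) * w          # d(cross-entropy)/dx for a logistic unit
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2, 0.4])   # a clean input, classified as 1
x_adv = fgsm(x, y=1, eps=0.6)   # bounded per-feature perturbation
```

With these made-up numbers, `predict(x)` is 1 while `predict(x_adv)` is 0: the perturbation is small and uniform per feature, yet the decision flips. Scale that idea up to an image classifier and you get the disguised lane markings and sticker attacks the report describes.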
Keen Labs' full report here: