
How safe is your car from hackers?


The progress of autonomous vehicle technology has largely run up against a very human buffer, with considerable concerns emerging about the ability of human passengers to safely regain control of the vehicle should they need to. For instance, research from the University of Southampton showed that it can take up to 25 seconds for people to regain control of the vehicle, a delay that at any real speed could easily prove fatal.

The safety concerns are compounded by our generally poor reactions when we do attempt to regain control of the vehicle. Research from Stanford has shown that the transition from the car controlling things to the human driver doing so is often far from smooth, with the safe operation of the vehicle declining considerably during this transition period.

To make matters worse, a German study found that traveling in an autonomous vehicle tends to make people drowsy, and therefore less likely to be able to safely regain control should the need arise. You may be wondering why this is relevant on a blog about cybersecurity. The answer is that while each of these three studies was conducted in normal road conditions, there is a burgeoning risk of vehicles being hacked by external actors, and of passengers needing to respond in some way to that interference.

Hacking the car

The risks were ably demonstrated recently by a team from the cybersecurity company McAfee, who tricked a Tesla into accelerating to 50 miles per hour above the posted speed limit. The attack targeted the car’s MobilEye EyeQ3 camera system: by subtly altering a speed limit sign at the side of the road, the researchers were able to change the behavior of the vehicle.

The MobilEye EyeQ3 camera system works by reading road signs and other street furniture and feeding that information to the vehicle so that it responds appropriately to its environment. That the system could be fooled by something as fundamentally lo-tech as a tiny sticker is striking: the sticker made a 35 mile per hour sign appear to read 85, which was enough to encourage the vehicle to whizz along at 85 miles per hour rather than the prescribed limit.
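To see why a single misread sign is so dangerous, consider a minimal sketch of how a camera-derived speed limit might flow into a car’s speed target. This is purely illustrative: neither Tesla nor MobilEye publishes this logic, and every name below is hypothetical.

# Illustrative sketch only; not Tesla or MobilEye code. It shows how a perceived
# speed limit could flow straight into the car's speed target when the camera's
# reading is trusted without any cross-check.
from dataclasses import dataclass

@dataclass
class SignReading:
    kind: str          # e.g. "speed_limit"
    value_mph: int     # the number the vision system believes it saw
    confidence: float  # classifier confidence, 0.0 to 1.0

def update_target_speed(current_target_mph: int, reading: SignReading) -> int:
    """Naive controller: adopt whatever limit the camera reports."""
    if reading.kind == "speed_limit" and reading.confidence > 0.8:
        return reading.value_mph  # a stickered "35" read as "85" lands here unchallenged
    return current_target_mph

# A single high-confidence misclassification raises the target from 35 to 85 mph.
print(update_target_speed(35, SignReading("speed_limit", 85, 0.97)))  # -> 85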

The stunt is part of a growing body of evidence highlighting how autonomous systems can be compromised in ways that significantly imperil the passengers inside them. For instance, researchers from Tencent were able to fool a Tesla Model S into switching lanes so that the vehicle was driving into oncoming traffic, simply by placing three stickers on the road to give the appearance of a lane marking.

Similarly, researchers from UC Berkeley placed stickers onto a stop sign to trick an autonomous vehicle into reading it as a 45 mile per hour speed limit sign. These were all distinctly lo-tech approaches designed to manipulate the data inputs received by the vehicles, and they highlight how relatively straightforward it is for current technology to be manipulated into dangerously unsafe behavior.

Dangerous traffic

To date, all of the researchers have been acting in a ‘white hat’ capacity, with the aim of helping manufacturers overcome these shortcomings. Should the shortcomings not be tackled, however, these cars amount to an array of interconnected computers that presents a highly enticing target for attackers.

The McAfee work was shared with Tesla last year so that both it and MobilEye could attempt to improve their systems, but with MobilEye itself saying that the sign alteration would fool a human driver just as much as an automated vehicle, the indications are perhaps not good. After all, most human drivers would have the situational awareness to understand that low speed limits tend to occur in built-up areas and are typically surrounded by similar limits. A jump from a low-speed zone straight to an extremely high-speed zone is therefore very unlikely, and human drivers would know this.
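That situational awareness could, in principle, be encoded as a simple plausibility check on successive speed limit readings. The sketch below is a hypothetical illustration of the idea, not a feature of any shipping system, and the threshold is an assumption chosen for the example.

# Hypothetical plausibility check: reject implausibly large jumps between
# consecutive speed limit readings, mirroring what an alert human driver would do.
def plausible_limit_change(previous_mph: int, detected_mph: int, max_jump_mph: int = 20) -> bool:
    """Treat a sudden jump (e.g. 35 -> 85) as suspect; gradual changes pass."""
    return abs(detected_mph - previous_mph) <= max_jump_mph

def accept_limit(previous_mph: int, detected_mph: int) -> int:
    """Keep the previous limit when a new reading looks implausible."""
    if plausible_limit_change(previous_mph, detected_mph):
        return detected_mph
    return previous_mph  # suspicious reading: hold the old limit and flag it for review

print(accept_limit(35, 85))  # -> 35, the stickered sign is rejected
print(accept_limit(35, 45))  # -> 45, a normal transition is accepted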

What’s more, many satnav systems today have built-in speed limit checkers, so they would alert the human driver to any breach of the speed limit. The autonomous vehicle technology of today lacks all of this, and its inadequacies are compounded by MobilEye declining to accept that such sign manipulation is even a valid form of attack.

In response to the McAfee project, the company said that autonomous vehicles also pull in data from a range of sources, so they don’t rely on camera sensing alone. It believes this provides adequate failsafes, as the data from the sensors is cross-referenced with data from elsewhere to catch any inconsistencies. While the statement makes sense, it doesn’t seem to have prevented researchers from placing vehicles in highly compromising situations.
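The cross-referencing MobilEye describes could look something like the sketch below, which only trusts the camera’s reading when an independent source, such as the map data a satnav already carries, broadly agrees with it. The inputs, names, and tolerance are assumptions for illustration, not a description of any vendor’s actual failsafe.

# Sketch of cross-referencing a camera-detected limit against independent map data.
# All names and thresholds are hypothetical.
from typing import Optional

def fused_speed_limit(camera_mph: Optional[int], map_mph: Optional[int],
                      tolerance_mph: int = 10) -> Optional[int]:
    """Trust the camera only when an independent map source broadly agrees."""
    if camera_mph is None:
        return map_mph
    if map_mph is None:
        return camera_mph
    if abs(camera_mph - map_mph) <= tolerance_mph:
        return camera_mph  # the two sources are consistent
    return None            # conflict: fall back to a driver alert or cautious default

print(fused_speed_limit(85, 35))  # -> None, the stickered sign conflicts with the map
print(fused_speed_limit(45, 40))  # -> 45, minor disagreement is tolerated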

Given these inherent risks, it seems unwise for any passenger in a Tesla to take their eyes off the road, which perhaps defeats the point of the technology in the first place. Given the considerable challenges involved in safely regaining control of the vehicle, and the real risk that something will require passengers to do exactly that, security concerns represent a considerable speedbump for autonomous vehicle vendors to overcome if the technology is ever to reach the mainstream.
