GhostStripe attack haunts self-driving cars by making them ignore road signs

Cameras tested are specced for Baidu's Apollo


Six boffins, mostly hailing from Singapore-based universities, say they can interfere with autonomous vehicles by exploiting the machines' reliance on camera-based computer vision, causing them to fail to recognize road signs.

The technique, dubbed GhostStripe [PDF] in a paper to be presented at the ACM International Conference on Mobile Systems next month, is undetectable to the human eye, but could be deadly to Tesla and Baidu Apollo drivers as it exploits the CMOS camera sensors employed by both brands.

It basically involves using LEDs to shine patterns of light on road signs so that the cars' self-driving software fails to understand the signs; it's a classic adversarial attack on machine-learning software.

Crucially, it abuses the rolling digital shutter of typical CMOS camera sensors. The LEDs rapidly flash different colors onto the sign as the active capture line moves down the sensor. For example, the shade of red on a stop sign could look different on each scan line to the car due to the artificial illumination.
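The striping effect can be illustrated with a toy model. This is a hypothetical sketch, not code from the paper; the row count, per-row readout time, and LED flicker period are made-up illustrative values. Because a rolling shutter samples each sensor row at a slightly later instant, rows captured during different LED phases record different colors:

```python
# Toy model of rolling-shutter striping (illustrative numbers only):
# 100 scan lines read top to bottom, 10 microseconds per line, and an
# LED cycling red -> green -> blue, holding each color for 100 us.

LINE_READOUT_US = 10            # assumed time to read one sensor row
LED_COLORS = ["red", "green", "blue"]
LED_PHASE_US = 100              # assumed hold time per LED color

def color_seen_by_row(row: int) -> str:
    """Color the LED is emitting at the instant this row is sampled."""
    t = row * LINE_READOUT_US                   # capture time of this row
    phase = (t // LED_PHASE_US) % len(LED_COLORS)
    return LED_COLORS[phase]

frame = [color_seen_by_row(r) for r in range(100)]

# Rows sampled during different LED phases record different colors, so a
# uniformly red stop sign comes out horizontally striped in the capture.
print(frame[:25])
```

In this toy setup, every 10 rows the recorded color changes, producing the stripes that throw off the classifier.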

The GhostStripe paper's illustration of the 'invisible' adversarial attack against a self-driving car's traffic sign recognition

The result is a camera capturing an image full of lines that don't quite match each other as expected. The picture is cropped and sent to a classifier within the car's self-driving software, which is usually based on deep neural networks, for interpretation. Because the snap is full of lines that don't quite seem right, the classifier doesn't recognize the image as a traffic sign and therefore the vehicle doesn't act on it.

So far, all of this has been demonstrated before.

Yet these researchers say they not only distorted the appearance of the sign as described, but did so repeatedly and stably. Rather than trying to confuse the classifier with a single distorted frame, the team were able to ensure every frame captured by the cameras looked weird, making the attack technique practical in the real world.

"A stable attack … needs to carefully control the LED's flickering based on the information about the victim camera's operations and real-time estimation of the traffic sign position and size in the camera's [field of view]," the researchers explained.

The team developed two versions of this stabilized attack. The first was GhostStripe1, which does not require access to the vehicle, we're told. It employs a tracking system to monitor the target vehicle's real-time location and dynamically adjusts the LED flickering to ensure a sign isn't read properly.
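The timing constraint the attacker must solve can be sketched as follows. This is a hypothetical illustration, not the authors' method: the row numbers and per-row readout time are made-up values. To recolor only the rows covering the sign, the LED must emit the adversarial color exactly while those rows are being read out, and that window moves and widens as the sign grows in the frame:

```python
# Hypothetical timing sketch: which slice of the frame-readout period an
# attacker must hit to recolor only the sign's rows (made-up values).

LINE_READOUT_US = 10      # assumed time to read one sensor row

def flash_window_us(first_row: int, last_row: int) -> tuple[int, int]:
    """Window (relative to frame start) during which the LED must emit
    the adversarial color so it lands on rows first_row..last_row."""
    start = first_row * LINE_READOUT_US
    end = (last_row + 1) * LINE_READOUT_US
    return start, end

# As the car approaches, the sign occupies more rows, so the window the
# attacker must hit shifts and widens -- hence the need for real-time
# tracking of the sign's position and size in the camera's view.
far = flash_window_us(40, 44)    # small, distant sign
near = flash_window_us(20, 60)   # larger, closer sign
print(far, near)                 # (400, 450) (200, 610)
```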

GhostStripe2 is targeted and does require access to the vehicle, which could perhaps be covertly done by a miscreant while the vehicle is undergoing maintenance. It involves placing a transducer on the power wire of the camera to detect framing moments and refine timing control to pull off a perfect or near-perfect attack.

"Therefore, it targets a specific victim vehicle and controls the victim's traffic sign recognition results," the academics wrote.

The team tested their system on a real road, using a car equipped with a Leopard Imaging AR023ZWDR, the camera used in Baidu Apollo's hardware reference design. They tested the setup on stop, yield, and speed limit signs.

GhostStripe1 achieved a 94 percent success rate and GhostStripe2 a 97 percent success rate, the researchers claim.

One thing of note was that strong ambient light decreased the attack's performance. "This degradation occurs because the attack light is overwhelmed by the ambient light," said the team. This suggests miscreants would need to carefully consider the time and location when planning an attack.

Countermeasures are available. Most simply, the rolling-shutter CMOS camera could be replaced with a global-shutter sensor that captures the whole frame at once, or the line-scanning order could be randomized. Adding more cameras could also lower the success rate or force a more complicated attack, and the attack images could be included in AI training so that the system learns to cope with them.
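The randomized line-scan countermeasure can be illustrated with the same kind of toy model as before. This is a hypothetical sketch with made-up numbers, not from the paper: a fixed top-to-bottom scan maps LED phases onto the same spatial rows every frame, which is precisely what lets an attacker craft a stable stripe pattern; shuffling the readout order scrambles that mapping unpredictably:

```python
# Toy model (illustrative numbers): why randomizing the row readout
# order frustrates a stable rolling-shutter attack.
import random

ROWS = 90
LINE_READOUT_US = 10            # assumed time to read one sensor row
LED_COLORS = ["red", "green", "blue"]
LED_PHASE_US = 100              # assumed hold time per LED color

def capture(order):
    """Record which LED color each spatial row sees, given the order in
    which rows are read out."""
    frame = [None] * ROWS
    for slot, row in enumerate(order):        # slot = readout time step
        t = slot * LINE_READOUT_US
        frame[row] = LED_COLORS[(t // LED_PHASE_US) % len(LED_COLORS)]
    return frame

top_to_bottom = capture(list(range(ROWS)))
shuffled = capture(random.sample(range(ROWS), ROWS))

# Same scene, same LED: the fixed scan yields the same stripe pattern
# every frame, while a shuffled scan rearranges it unpredictably, so
# the attacker can no longer place a chosen color on a chosen row.
print(top_to_bottom[:5], shuffled[:5])
```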

The study joins the ranks of others that have used adversarial inputs to trick the neural networks of autonomous vehicles, including one that forced a Tesla Model S to swerve lanes.

The research indicates there are still plenty of AI and autonomous vehicle safety concerns to answer. The Register has asked Baidu to comment on its Apollo camera system and will report back should a substantial reply materialize. ®

Editor's note: This story was revised to clarify the technique and to include an illustration from the paper.

