Autonomous vehicles have often been considered a harbinger of the high-tech future many imagined, finally arrived. Media from “Knight Rider” to “Minority Report” have portrayed them cruising in sleek, unhindered elegance.
Yet when the rubber of innovation has hit the road of reality, it’s made for a bumpier ride. Those seeking to turn the cars of the future into reality still face notable challenges before mass production and widespread adoption become reality.
One major concern for automated vehicles is technological security.
A major challenge for many auto manufacturers is that the numerous, diverse components in each vehicle are sourced from many different producers, with millions of lines of code from disparate sources, making it difficult to monitor the whole system and ensure safety.
While the vehicles involved were smart cars rather than fully autonomous ones, Chrysler had to recall 1.4 million vehicles in 2015 after black hat hackers demonstrated a vulnerability in one of its components that allowed hackers to take complete control of the vehicle.
A May 2015 Black Hat presentation and related publication revealed how it’s possible to manipulate LiDAR, cameras and other smart car sensors to alter a car’s performance, and to violate the privacy of car users, with relatively cheap and readily available tools.
Accidents & human error
Accidents of both the robot and humanoid variety are also a key concern for producers, and determining whether the user or the autonomous vehicle is at fault poses unique challenges.
The New York Times detailed some of the ways autonomous car systems can also flub up in ways humans don’t.
Google’s vehicles had been in 20 crashes as of April 2016, only one of which Google attributed to the automated vehicle; the rest, it asserted, came from drivers rear-ending its perhaps too-conservatively driven cars.
Tesla has also had several accidents in which the mix of user error and system error was ambiguous. In January 2016, a man in China was killed driving a Tesla Model S when it collided with a street cleaner. The logs were destroyed in the crash, making it impossible to tell whether the vehicle’s autopilot played a role.
In May 2016, a man in Florida was killed driving a Model S on autopilot when the car’s sensors failed to distinguish a white truck from the bright sky and did not engage the brake (the driver was believed to be watching a video at the time of the crash). The National Highway Traffic Safety Administration later cleared Tesla of safety defects but said Tesla needed to design cars based on actual use instead of intended use and educate drivers on the car’s limitations and on when and how it was supplementing the driver.
In July, a Pennsylvania driver’s Tesla Model X struck a guardrail, heavily damaging the vehicle. The driver claimed the car was on autopilot, while Tesla emphasized the beta nature of the autopilot feature.
In November 2016, a Tesla Model S was involved in another fatal crash, this one in Indiana, with the log data again destroyed in the crash.
In December 2016, South Korean celebrity Ji Chang Son filed a suit against Tesla, saying his Model X spontaneously accelerated, crashing through his garage wall. Tesla asserted it was operator error, saying the car’s data showed he pushed the accelerator instead of the brake.
The confluence of human error, system error, the potential for malevolent outside operators, destroyed crash data and conflicting evaluations of crash data has shown the necessity of improved, transparent data gathering and evaluation, as well as driver education, in the brave new world of autonomous vehicles.
In Sept. 2016, the U.S. Department of Transportation unveiled a new 15-point safety standard for driverless cars covering technological failure, digital security, passenger privacy and crash protection.
However, ethical concerns still abound. While sensors are meant to prevent accidents, they don’t have the moral judgment to take on trolley problem-style ethical dilemmas: for example, whether your car should protect you at all costs or let you crash fatally into a wall to avoid a child in the road, or which of two cars to hit when an accident is unavoidable.
Engineers, ethicists, lawyers, politicians and consumers will have to work together to find acceptable designs, uses and implementations of autonomous vehicles as the cars become increasingly independent of human drivers, more embraced by consumers and more prevalent on the road.
New players in the field
While current players such as Google and Tesla continue to navigate the rewards and challenges of autonomous vehicles, new players are also entering the field.
Apple, Ford, Volvo, Jaguar and Mercedes are making headway. Volvo will use London to test its new line of semi-autonomous vehicles in 2017. Ford plans to launch a fully autonomous fleet of commercial vehicles offering ride-sharing services in 2021. Jaguar has started to explore both on-road and off-road autonomous technology, and Mercedes has begun testing a self-driving truck.
While the technical, ethical and competitive pressures on companies increase, the growing field also offers opportunities for innovation; more partners for discussion, debate and collaboration; and more real-world testing to gather data on how consumers actually use these cars, the better to protect and serve them.
The full-fledged use and implementation of autonomous vehicles may be a work in progress, but it’s one that promises to be revolutionary and society-changing in its journey.