Autonomous Vehicle Safety Isn’t Just Code and Mirrors—It’s Also a Policy Issue
- As autonomous cars enter the market, the US Department of Transportation is evaluating how to integrate the technology into everyday life.
- Manufacturers must grapple with how self-driving cars will share the road with other drivers, pedestrians, bicyclists, and public transit.
- In place of the road and vision tests given to human drivers, governments must now develop performance metrics by which to test an autonomous vehicle’s safety.
To the hopeful masses, a future with driverless cars looks like a utopia the world is speeding toward: autonomous vehicles offer the possibility of moving more people efficiently and even reducing hazards brought on by human error. But while the technology has been heralded as a revolution as significant as the horseless carriage, it also introduces logistical issues and new risks. What remains unclear is exactly how and when the dream of autonomous vehicles will be realized, and concerns about autonomous vehicle safety persist.
Despite eagerness in densely populated states to roll out autonomous vehicles, a fatal crash in Arizona has slowed the momentum on testing self-driving cars. But without being able to test automated vehicles, how can progress be made? And when something goes wrong, who is to blame?
When it comes to safety, “There’s a whole set of issues around the law in autonomous vehicles, and it includes liability, cybersecurity, and privacy,” says Anthony Foxx, secretary of transportation during the Obama administration. Addressing these issues before the technology becomes widespread is the responsibility of citizens, federal and state governments, and the auto industry.
The Role Humans Will Play in Sharing the Road with Driverless Cars
Most observers predict that mass-market vehicles offering some level of autonomy will become common over the next two decades. During his time as secretary of transportation, Foxx and his team anticipated a transition period with a mix of machine-directed and human-directed vehicles sharing the road.
The US Department of Transportation’s (USDOT) National Highway Traffic Safety Administration (NHTSA) established standard levels of autonomy, ranging from Level 0, in which a human controls all aspects of the driving task, to Level 5, a totally machine-driven car. “If an individual is in a Level 3 car and there are certain functions that the car can do on its own, how do you teach the human being to be ready when the car needs to assign the task back over to a human being?” Foxx asks. “How does that interplay work? I think that’s going to be a huge area. And we’re still learning.”
One promising option lies in the way autonomous vehicles will communicate with other vehicles, bicycles, infrastructure, and pedestrians. While these technologies promise a future with fewer crashes on the road, human nonverbal driving cues still need to be considered, especially as these vehicles deploy alongside manually driven cars. For example, when another driver waves at you, she might be signaling for you to go ahead. When entering an intersection on a bike, you may make eye contact with drivers to make sure they see you before proceeding. But if you encounter an autonomous vehicle, how do you know that it has “seen” you and given you the nod to proceed?
That’s something auto companies are working on, says Chan Lieu, senior policy advisor at law firm Venable LLP, where he focuses on autonomous vehicle policy and regulations. “There are all sorts of different scenarios and questions that the industry is actively trying to figure out,” Lieu says. “For example, companies are experimenting with things like different colored lights and projecting messages onto the ground to say, ‘Please proceed. I’m yielding to you.’ But this might not make sense for blind pedestrians, who might need some sort of auditory chime.”
Governments and Testing Autonomous Vehicle Safety
Before being licensed to drive a car, US citizens have to demonstrate that they’re qualified by passing government-administered driving and vision tests. Because that process doesn’t make sense for autonomous cars, during Foxx’s time at the USDOT the government established that once a vehicle type meets the applicable safety standards, it won’t need to be recertified each time someone buys that same vehicle type.
“The important thing here is that the government is not trying to get ahead of technology in terms of the regulations,” Lieu says. “We don’t know what the right answer is. It’s changing very rapidly, and there’s no way government regulation can keep up. So the best way to go about it right now is watch the technology develop and let it mature a bit more before regulating.”
“One thing we were focused on in government was making sure we weren’t behind the times when it came to emerging technology,” says Dan Katz, former chief of staff at the USDOT under Foxx. Katz is currently head of global public policy and North American projects at Virgin Hyperloop. “Now that I’m on the other side, it’s important that we focus on how to work with governments cooperatively, understanding that they don’t want to be left behind,” he says. “Many governments are eager to understand what we’re trying to do and be a part of it rather than try to catch up with it.”
Are Autonomous Vehicles Safe?
Lieu says that in order to prove that autonomous car testing can be safe, the government needs to develop performance metrics by which these vehicles can be measured. For example, a vehicle needs to be able to stop within x feet when traveling at y miles per hour.
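That kind of metric can be made concrete with a small sketch. Everything here is an illustrative assumption rather than an actual NHTSA standard: the 0.7 g deceleration figure, the 110% pass margin, and the function names are all hypothetical, and the threshold formula is just ordinary braking physics (distance = v² / 2a).

```python
MPH_TO_FPS = 5280 / 3600  # miles per hour -> feet per second
G_FT_S2 = 32.174          # gravitational acceleration in ft/s^2

def stopping_distance_ft(speed_mph: float, decel_g: float = 0.7) -> float:
    """Ideal stopping distance in feet at a constant deceleration,
    expressed as a fraction of g (0.7 g is a typical dry-pavement figure)."""
    v = speed_mph * MPH_TO_FPS
    return v ** 2 / (2 * decel_g * G_FT_S2)

def passes_metric(measured_ft: float, speed_mph: float, margin: float = 1.1) -> bool:
    """Hypothetical pass/fail rule: the measured stop must fall within
    110% of the ideal stopping distance for that speed."""
    return measured_ft <= stopping_distance_ft(speed_mph) * margin

# A test vehicle traveling at 45 mph has an ideal stop of roughly 97 feet,
# so a measured 95-foot stop passes and a 130-foot stop fails.
print(stopping_distance_ft(45))
print(passes_metric(95, 45), passes_metric(130, 45))
```

The point of the sketch is the shape of the rule, not the numbers: a regulator would publish the x-feet-at-y-mph thresholds as data, and certification would amount to checking measured stops against them.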
“The National Highway Traffic Safety Administration is a data-driven agency; everything that it does is based on data,” he says. “The major point is, we don’t have enough data right now to fully understand how to go about regulating fully autonomous vehicles.”
The NHTSA says that 94% of serious crashes are due to human error. If driverless cars can reduce those errors, everyone on the road stands to be better protected.
How Liability Changes in an Autonomous Vehicle Future
While there was once some tension between federal and state governments over their roles in regulating autonomous vehicles, those lines are becoming clearer. “If a human being isn’t driving it, the car is driving itself; that is a total open question, and you will see different states approach that question very differently,” says Foxx, who considers cybersecurity and privacy central issues to be addressed by the industry.
According to Lieu, liability for manufacturers won’t be much different because they are currently liable for vehicle defects. “If there’s something wrong with the car, and that leads to a loss of life, like, say, the Takata airbag recall … those people who’ve died, Takata’s responsible for that,” Lieu says. “So that doesn’t really change significantly, because you’ve still got these component manufacturers or the car manufacturers themselves who are responsible for loss of life.”
No matter what pace the United States takes in advancing its autonomous vehicle technology, the rest of the world isn’t likely to slow down. China has been ramping up its autonomous-vehicle efforts and has even invited foreign automakers to test there.
Katz says there is a legislative middle ground: “I think that governments will have to make sure they don’t overreact and shut down innovation over single incidents. But at the same time, incidents must be fully investigated to make sure the industry improves its practices and always puts safety first.”
This article has been updated. It was originally published in May 2018.