Can we talk about the delicate topic of how many deaths are allowed in the name of progress? Cars driven by humans kill 1.25 million people annually around the globe, according to the World Health Organization.
In 2017, no one died in a passenger jet anywhere in the world — though 79 people were killed in cargo planes and smaller prop-powered aircraft, according to the Aviation Safety Network, an information service.
Self-driving cars, or cars with sophisticated autopilot capabilities, have caused two deaths in the last decade, the earliest phase of their commercial development — one in 2016, and one earlier this month. Both episodes made front-page headlines worldwide. Instead of the dull old story of people killing people, we now have software killing people.
The promise is that self-driving technology will eventually reduce those deaths. On the way there, however, there are likely to be more high-profile accidents. But unlike most accidents caused by humans — whoops, turns out driving Mass. Ave while tweeting is a bad idea — the producers of self-driving vehicles will be able to learn from each one and use that knowledge to improve how their vehicles respond to similar situations in the future.
But the makers and operators of these vehicles will have to deal with two kinds of negative publicity. One will be headlines like “Robot truck squashes duckling.” The other will be “UPS lays off 20,000 drivers in shift to autonomous fleet.”
And each mishap, like the one involving a pedestrian killed by an experimental Uber vehicle in Arizona, can have layers of ugliness. In that case, one layer was the technology’s failure to see a woman crossing the street. Another was the inattention of the human “safety driver” sitting behind the wheel of the autonomous Volvo SUV. The final layer was Arizona’s extremely hands-off regulatory stance when it came to tests of self-driving vehicles. It was the first place, for instance, that Waymo began testing cars without a safety driver, in 2017. (Waymo is part of Alphabet, the Silicon Valley company that also owns Google.)
On Monday, Arizona decided to suspend Uber’s ability to test self-driving cars in the state — an order that didn’t affect other companies operating there, including Waymo, Ford, and General Motors.
In Massachusetts, we’ve taken — surprise, surprise — a more buttoned-down approach to allowing self-driving vehicles on public roads. At the state level, Governor Charlie Baker created an Autonomous Vehicles Working Group in 2016 that includes most relevant state officials.
The City of Boston requires data about previous off-street testing, including any crashes, before it will allow a company to test vehicles on public streets. There’s a limited area set aside for testing, in the marine industrial park and Seaport. An initial phase of testing has to be done during daytime and in good weather before testing after dark and in inclement weather can take place.
Both companies currently testing in Boston, nuTonomy and Optimus Ride, have a driver behind the wheel and a software engineer in each vehicle. Boston put testing on pause after the Uber accident in Arizona but allowed it to resume on Tuesday, after conducting a safety review with both companies, according to Tracey Ganiatsos, speaking for the Boston Transportation Department.
Just as federal agencies set safety standards for the aviation, automotive, and pharmaceutical companies that are developing new products, they should do the same for self-driving vehicles. Essentially: What’s the minimum level of safety we want before we allow them to be tested on public roads? The House in 2017 passed a bill called the SELF DRIVE Act, which would establish the beginnings of a federal framework, but it stalled in the Senate.
But as we start to set up a framework, we can’t assume that prototype vehicles, or those being manufactured by the thousands, will have perfect safety records. The hope, though, is that they’ll help reduce overall deaths, not unlike seat belts and air bags.
“One hundred years from now, when we live in a world that’s highly automated, there will still be weather issues,” says Bryan Reimer, a research scientist at the Massachusetts Institute of Technology who studies new systems being integrated into cars. “If hail falls from the sky, it breaks sensors on autonomous vehicles,” which they use to see and avoid things around them. “Is that something we can’t live with? You’re never going to be perfect. But the question is, how good can we get?”
Reimer notes that we use the phrase “we’re only human” to excuse lots of mistakes. “We’re comfortable with human error, but many of us are not comfortable with the concept of robots harming people,” he says. Even, he adds, if eventually the robots are “proven to be far safer than us.” Reimer looks at the aviation industry as a model. We accept that there will be occasional crashes, but “it’s a very trusted system,” he says. “We learn from accidents and adapt, and we’re not going to make the same mistake twice.”
James Sproul wonders if we’ll need a Ralph Nader-like activist “to ensure that this software is safe at the level we expect it to be.” Sproul admits that after the Uber fatality in Arizona, he has been thinking differently about how hands-on regulatory agencies should be in screening the capabilities of vehicles that wind up on public roads, rather than treating testing as a “wild west” environment. Sproul is a Boston-based analyst and an organizer of the Autonomous Vehicle Summit, taking place next week in Cambridge.
Rodney Brooks, cofounder of Rethink Robotics and a frequent blogger on issues related to autonomous vehicles, cautions that the “logical” argument that self-driving cars “will demonstrably reduce total deaths will not fly if the autonomous cars are themselves the cause of any significant number of deaths — even if that is orders of magnitude less per mile driven than for human cars.”
And what, exactly, will be perceived by the public or our legislators as significant? That, Brooks says, “is the big unknown.”