Uber’s Autonomous Car Crash: Who Is to Blame?
Since its invention in 1886, the automobile has gone on to dominate the world as a mode of transportation, and the United States, as one of the newer world powers, was the first country to have its cities and geography molded by it. Now, in 2018, Uber, an extremely popular ride-sharing service, has come to the forefront as an innovator in transportation. As a result, futurists predict a future in which the American dream of owning a home and a car will partially become a thing of the past. Their view depends on getting autonomous driving and electric vehicles right. If this holds, Uber could own a fleet of cars with fares so reduced that it will no longer make financial sense for individuals to own cars - or at least, that's the idea.
However, a recent setback, and a growing source of concern about the expansion of autonomous vehicles, came in the form of the first fatal autonomous car accident. Only a few weeks ago, on March 18, one of Uber's test self-driving cars struck and killed a woman in Tempe, Arizona. Through various investigations and reporting, it has become clear that Uber had reduced its safety protocols for these cars as they seemingly improved in capability. For context, cars in general are rather unsafe creations: in 2016 alone, roughly 37,000 people in America (11.59 per 100,000) died in automobile accidents. Putting computers, instead of humans, at the wheel is meant to reduce this number, but it will never be perfect.
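As a quick sanity check on the two fatality figures quoted above (the 37,000 total is rounded; the rate is as stated), the population they jointly imply can be computed directly:

```python
# Check that the quoted 2016 US traffic fatality figures are mutually consistent:
# a total of ~37,000 deaths at a rate of 11.59 per 100,000 people.
deaths = 37_000          # rounded 2016 total, as quoted in the text
rate_per_100k = 11.59    # deaths per 100,000 people, as quoted in the text

implied_population = deaths / rate_per_100k * 100_000
print(f"Implied US population: {implied_population / 1e6:.1f} million")
# ~319 million, close to the actual 2016 US population of about 323 million,
# so the rounded total and the per-capita rate agree to within about 1%.
```

The small gap comes from rounding the death toll down to 37,000; the two figures describe the same underlying statistic.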
What happened? Late into the night on March 18, the self-driving car hit a 49-year-old pedestrian walking with a bicycle. In a video captured of the accident, the driver had kept her eyes off the road for the seconds leading up to the crash, which is not uncommon when operating a self-driving vehicle. Uber had actually reduced the standard number of safety operators per car from two to one. As a result, not only were there fewer eyes on the road during the eight-hour shifts, but the need to log events remained, forcing the driver to take on this task as well. When there had been two occupants, the passenger was the one who stayed vigilant of traffic, took over driving if needed, and recorded the car's various metrics; because of Uber's new policy, the driver had to do it all, adding a huge amount of pressure and responsibility.
It is also believed that Uber's lidar system, the laser-based system that detects pedestrians and obstacles, failed, through either a hardware or a software malfunction.
Certainly, this event was a matter of “when” rather than “if”, but the legal implications of culpability are of paramount importance. The issue is challenging to resolve because the driver was inattentive and the autonomous system failed at the same time. When the Uber system clearly fails, as it did here, it is hard not to blame the company that devised it - much in the way we blame car manufacturers when specific parts malfunction. However, once the system reaches a safety threshold that equals or exceeds human performance, I believe the onus should then fall on the occupant who chooses to drive such a car. In many ways, you are agreeing to forgo your driving skills and entrust them to a computer system, despite knowing that accidents could still happen. When the failure is not a glaring flaw in the concept or design, but rather a matter of circumstance and chance, I feel that liability would and should fall on the occupant, as neither tech companies like Google and Uber nor car manufacturers like Volvo and Toyota can take on the cost of insuring all their vehicles. It also seems unreasonable that society would want to take on such costs in the public sector.
At the end of the day, autonomous cars will very likely become a reality, and with them, myriad cost and safety benefits will follow. However, the difficult legal and moral questions they raise will be challenging but necessary to answer. Now, I put it to you: who is to blame for the Uber crash?