What happens when a self-driving car kills someone?

After a driverless Uber ran over a pedestrian and left her fatally wounded, Chloe Bull investigates who's responsible in such situations

While the idea of a self-driving car crashing and killing someone may sound like the storyline of the next Black Mirror episode, a driverless Uber caused the death of a pedestrian in Tempe, Arizona last month, making this scenario a reality. Uber has since taken its self-driving cars off the roads in Phoenix, San Francisco, Toronto and Pittsburgh, but many questions remain unanswered. Primarily: who is to blame, both in this case and in the inevitable future incidents of this kind?

The crash in Tempe occurred when a woman stepped outside the crosswalk and was hit by one of Uber’s driverless cars while it was in autonomous mode; that is, the machine was responsible for detecting hazards and taking action to avoid them. The car, travelling at around 40mph, did not slow down as it approached the woman, indicating that its technology failed to detect the hazard. Since conditions that day were clear with little wind, there is no obvious explanation as to why the car’s sensors did not ‘see’ 49-year-old Elaine Herzberg and the bicycle she was wheeling across the road. While this is the first reported incident of an autonomous vehicle causing a pedestrian’s death, Uber has previously faced problems with its driverless vehicles running red lights in California, where trials were previously carried out.

“What does it mean to say that an accident is the fault of technology?”

Although the Uber involved in this accident was autonomous, it was being operated by a human ‘safety driver’, who is required to monitor the technology and be able to retake control of the car in an emergency. A video released by police shows that the safety driver was paying little attention to the road, which explains why no attempt was made to slow down or swerve to avoid the pedestrian in the car’s path.

Since a driver who could potentially have retaken control of the car to avoid this accident was in the driver’s seat, Uber and other authorities could make the case to blame this person, rather than the car’s technology, for the incident. However, others argue that the driverless car system will inevitably cause human drivers, who are there only as a precautionary measure, to become bored and complacent, and so to stop paying proper attention to the road. While the Uber driver in question could potentially have acted to avoid the accident, the fact remains that the car’s autonomous technology failed to do its job and was ultimately the cause of the fatal crash.

What does it mean to say that an accident is the fault of technology? The robot car itself cannot be held accountable for its actions; so, who can be? There are several parties who could potentially find themselves blamed in situations like this, including the car manufacturer and the company that created the finished autonomous car. Volvo has already announced that it is happy to take full responsibility for accidents caused by the self-driving technology installed in its cars.

“If the autonomous system has to ‘make decisions’ then this potentially includes moral ones”

A third option is the creators of the autonomous technology itself, be they the software developers of the system or the manufacturers of the sensors which allow the car to ‘see’ its surroundings. The president of Velodyne, the company that made the ‘lidar’ sensor system installed in the Uber involved in this particular crash, points out that pinpointing blame is difficult, since it is not the sensors alone which control the car: the system as a whole must “interpret and use the data to make decisions”.

If the autonomous system has to ‘make decisions’ then this potentially includes moral ones. For example, should a car about to hit a pedestrian swerve into a tree, saving the life of the pedestrian but endangering the lives of its passengers? Clearly, a machine cannot form the opinions required to make these decisions, so they must be programmed into it by its creators. Ethical and moral implications arise when we travel in a car which has had these kinds of judgements pre-programmed into it. While driverless cars should, in theory, be far safer than flawed human drivers, removing as they do the chance of drink driving and other human error, there is a grey area of unpredictability when autonomous cars must make complex decisions in unforeseen situations.

While it has not been resolved exactly what, or who, is responsible for the death caused by the crash involving a driverless Uber, what’s clear is that much research remains to be done before we can expect to see autonomous cars taking over our roads. Not only can we predict disputes over liability, but how long might it be until the autonomous system of a vehicle is hacked by a third party? Charlie Brooker, creator of Black Mirror, would have a field day anticipating what that could entail.

Chloe Bull


Image courtesy of zombieite on Flickr.
