
The ethics of self-driving car collisions: whose life is more important?

Who dies - the cyclist, the grandmother or the people in the car? How Google makes life and death decisions

In an unavoidable collision involving a robotic driverless car, who should die? That’s the ethical question being pondered by automobile companies as they develop the new generation of cars.

Stanford University researchers are helping the industry to devise a new ethical code for life-and-death scenarios.

According to Autonews, Dieter Zetsche, the CEO of Daimler AG, put the dilemma to a conference audience: “If an accident is really unavoidable, when the only choice is a collision with a small car or a large truck, driving into a ditch or into a wall, or to risk sideswiping the mother with a stroller or the 80-year-old grandparent. These open questions are industry issues, and we have to solve them in a joint effort.”

Google’s own self-driving car gives cyclists extra space if it spots them in the lane, which theoretically puts the car’s occupants at greater risk of collision, but it does so anyway. That is an ethical choice.

“Whenever you get on the road, you’re making a trade-off between mobility and safety,” said Noah Goodall, a researcher at the University of Virginia.

“Driving always involves risk for various parties. And anytime you distribute risk among those parties, there’s an ethical decision there.”

Google’s software is constantly making decisions that weigh information gain against safety risk, asking the following three questions in a constant loop.

1. How much information would be gained by making this maneuver?

2. What’s the probability that something bad will happen?

3. How bad would that something be? In other words, what’s the “risk magnitude”?

In an example published in Google’s patent, says Autonews, “getting hit by the truck that’s blocking the self-driving car’s view has a risk magnitude of 5,000. Getting into a head-on crash with another car would be four times worse -- the risk magnitude is 20,000. And hitting a pedestrian is deemed 25 times worse, with a risk magnitude of 100,000.

“Google was merely using these numbers for the purpose of demonstrating how its algorithm works. However, it’s easy to imagine a hierarchy in which pedestrians, cyclists, animals, cars and inanimate objects are explicitly protected differently.”
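To make that arithmetic concrete, here is a minimal Python sketch of how a probability-times-magnitude comparison might work. This is not Google’s code: the risk magnitudes are the illustrative figures from the patent example quoted above, while the manoeuvre names and probabilities are invented here purely for demonstration.

```python
# Illustrative sketch only -- not Google's actual algorithm. The risk
# magnitudes (5,000 / 20,000 / 100,000) are the example figures quoted
# from the patent; the manoeuvres and probabilities are made up here.

# Risk magnitude: how bad each outcome would be.
RISK_MAGNITUDE = {
    "hit_truck": 5_000,        # struck by the view-blocking truck
    "head_on_crash": 20_000,   # head-on collision with another car
    "hit_pedestrian": 100_000, # striking a pedestrian
}

def expected_risk(outcomes):
    """Sum probability * magnitude over a manoeuvre's possible bad events."""
    return sum(p * RISK_MAGNITUDE[event] for event, p in outcomes)

# Hypothetical candidate manoeuvres, each listing (event, probability) pairs.
maneuvers = {
    "edge_forward_for_view": [("hit_truck", 0.01)],       # risk = 50
    "pull_into_oncoming_lane": [("head_on_crash", 0.002)], # risk = 40
    "mount_the_kerb": [("hit_pedestrian", 0.001)],         # risk = 100
}

# Pick the manoeuvre with the lowest expected risk.
best = min(maneuvers, key=lambda m: expected_risk(maneuvers[m]))
print(best, expected_risk(maneuvers[best]))
```

In this toy version the car simply picks the lowest expected risk; the patent’s description suggests the real system also weighs the information a manoeuvre would gain (question 1 above) against that risk.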

We recently reported how Google has released a new video showing how its self-driving car is being taught to cope with common road situations such as encounters with cyclists. We’d far rather share the road with a machine that’s this courteous and patient than with a lot of human drivers.

We’ve all been there. You need to turn across the traffic, but you’re not quite sure where, so you’re a bit hesitant, perhaps signalling too early and then changing your mind before finally finding the right spot.

Do this in a car and other drivers just tut a little. Do it on a bike and some bozo will be on the horn instantly and shouting at you when he gets past because you’ve delayed him by three-tenths of a nanosecond.

But not if the car’s being controlled by Google’s self-driving system. As you can see in this video, the computer that steers Google’s car can recognise a cyclist and knows to hold back when it sees a hand signal, and even to wait if the rider behaves hesitantly.

After an unpromising start, in which she had to be bribed by her parents to learn to ride without stabilisers, Sarah became rather keener on cycling in her university years, and was eventually persuaded to upgrade to proper road cycling by the prospect of a shiny red Italian bike, which she promptly destroyed by trapping a pair of knickers in the rear derailleur. Sarah writes about cycling every weekend on road.cc.
