Driverless Cars Present Ethical Challenges


Driverless cars are said to be safer because they reduce accidents that occur as a result of human error. However, accidents will still happen, and in a situation in which a crash cannot be avoided, what will direct the driverless car?

Say you’re driving at 30 miles an hour when a child suddenly chases a ball right into the path of your car. You would brake if you could stop in time. If you couldn’t stop in time, you’d swerve to avoid hitting the child. But what if swerving forces you either to hit another car with passengers in it or to hit a truck in a way that would harm those in your own car? Does self-preservation override all other considerations? Would we be driven by the emotional pull of saving a child over all else? Or would we be paralyzed into doing nothing because we can’t bring ourselves to take part in any action that causes harm?

These are the types of questions that bring ethics specialists and engineers together in addressing the challenge of directing driverless cars. Two names that come up repeatedly in articles concerned with ethically directed algorithms for such scenarios are Chris Gerdes, professor of mechanical engineering at Stanford University, and Patrick Lin, professor of philosophy at California Polytechnic State University and director of the university’s Ethics + Emerging Sciences Group.

Essentially, it boils down to a variation on the decades-old Trolley Problem, which dealt with what to do about a runaway trolley. Instead of a trolley, there may be a bus full of schoolchildren. While the driverless car can be programmed to avoid hitting the bus, the question arises: what if avoiding the bus causes the car to hit someone or something else?

As Gerdes has observed, “These are very tough decisions that those who design control algorithms for automated vehicles face every day.” In such scenarios, there are no choices that avoid all harm, only ways to mitigate it: say, hitting the car with fewer people in it, or hitting the bus because it can better withstand the impact. Or perhaps your car would drive itself off a cliff, committing suicide to avoid harming anyone else.

That’s what the article “Driverless Cars -- Pandora's Box Now Has Wheels” illustrated with its “Unavoidable Death Warning” graphic.

That graphic posits that the choice will be given to a human rider; part of the problem for a driverless car in that situation, however, is that the outcome will have to be determined ahead of time by the human who sets up the algorithm that directs it.

In fact, the outcome would have to be planned in advance by someone who is not directly affected by the autonomous car’s choice. That gives rise to a complaint many readers post in comments on articles that raise the question of algorithms making such life-and-death decisions. Some say that, no matter what, they’d want to save their own or their kids’ lives and would not accept any solution that involves crashing their car in a way that threatens their safety.

Objectively speaking, though, saving one’s own family is not necessarily the best moral choice when securing the safety of a few results in a larger number of deaths or injuries. Another thing to consider is that any time people agree to be driven by someone else, whether a taxi driver, a bus driver, a train conductor, or an airline pilot, they are subject to that person’s decisions about what to do in such situations. Certainly, anyone driving others should have certain rules to follow, and those rules could be implemented in programming.

What I would suggest is a consistent code that is fully transparent, with an opt-in agreement required of all who ride in the car. That means they would have to be made aware of the autonomous car’s programming and the possible risks they assume in riding in it.

As for the decisions themselves, the cars would have to follow Mr. Spock’s dictum: “The needs of the many outweigh…the needs of the few…or the one.” The autonomous car would have to assess what would cause the least harm to life overall, without weighting one life over another. The people in the driverless car would not be considered of greater value than the people outside it who might be struck; every human life would count equally.
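
To make this concrete, here is a minimal sketch, in Python, of the equal-weight, least-harm rule described above. Everything in it is hypothetical: the maneuver names, the candidate list, and the expected-casualty figures are placeholders for estimates a real vehicle would have to derive from its perception and prediction systems.

# A minimal sketch (hypothetical) of an equal-weight, least-harm rule.
# The casualty estimates below are placeholders for what a real car's
# perception and prediction systems would actually compute.

def choose_maneuver(candidates):
    # candidates: (name, expected_casualties) pairs, where the count
    # covers occupants and bystanders alike, each weighted equally
    return min(candidates, key=lambda c: c[1])

candidates = [
    ("brake_straight", 1.0),  # likely harms the pedestrian
    ("swerve_left",    2.5),  # likely harms a car with passengers
    ("swerve_right",   1.0),  # likely harms the car's own occupant
]

print(choose_maneuver(candidates))  # -> ("brake_straight", 1.0)

Note that this rule is indifferent between the first and third options (Python's min simply keeps the first tie it encounters); a tie between harming a pedestrian and harming the occupant is exactly where the ethical debate, and the opt-in agreement suggested above, would come in.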

A Vulcan solution is not necessarily a human solution, but the argument that driverless cars are safer overall is predicated on the premise that a machine will act more logically, and therefore more safely, than a person.

How should we address these ethical questions?

Ariella Brown

Ariella Brown is a social media consultant, editor, and freelance writer who frequently writes about the application of technology to business. She holds a PhD in English from the City University of New York. Her Twitter handle is @AriellaBrown.



Re: Google's roboturtles?
  • 4/20/2017 11:31:35 AM

@Lyndon_Henry Yes, we do have a very strong drive for self-preservation. But I wonder, once insurance companies start assessing risk on driverless cars, how these different priorities would play into their actuarial tables.

Re: Google's roboturtles?
  • 4/20/2017 10:55:36 AM


Tomsg writes:

So if the car behaves like a human, should it be penalized any more severely? There are a lot of issues going on here.

First, let me amend my hypothetical scenario. If it's a situation where the driver just runs smack into the stalled car with the old ladies, the driver could well be held responsible for driving too fast for the road conditions (or something like that).

So let's say the driver's on the mountainside road, and the car with the three old ladies has somehow swerved into the driver's lane. The driver then has to decide whether to go for a head-on collision or drive his/her car off the road and crash to a pretty much certain death. My guess is that the driver will opt for the head-on collision, maybe trying for a sideswipe to lessen the impact.

This would probably be judged an accident, but in the event it goes to trial (maybe because local law enforcement or the families of the old ladies believe the driver should have been more self-sacrificing or something), the case would probably be decided by a human jury (the judicial system doesn't seem quite ready yet to allow robots into the jury pool). So my guess is that the jury would probably decide this was just a terrible accident and the driver was not at fault.

Not so, in my estimation, if it was a robocar (autonomous vehicle). In every case I can think of, even if the human driver gets killed, a jury very likely would assign guilt to the robocar (probably the carmaker, the AI system provider, and the software provider, each) and likely to the human driver as well (or, if the driver doesn't survive, to the driver's insurance company).

After all, if the human driver let the robot run the car, that would be seen as a mistake. If he/she didn't turn over control to the robot system, that would also be seen as a mistake. So the human as well as the robocar would be implicated.

Enjoy the road ...


Re: Google's roboturtles?
  • 4/20/2017 10:19:34 AM

So if the car behaves like a human, should it be penalized any more severely? There are a lot of issues going on here.

Re: Google's roboturtles?
  • 4/19/2017 4:59:47 PM


Ariella quotes:

Mercedes-Benz has made a difficult decision for its customers. While the car will examine any possible way to save all lives, it will choose to prioritize its occupants when presented with a no-win scenario.

I'm pretty sure that the Mercedes AI system comes closest to replicating how a human mind would operate – self-preservation. If you're driving along a mountain road and there's a stalled car ahead with three old ladies, and the only way to avoid them is to drive off the cliff, I doubt you would perform a quick mental benefit-cost calculation ...

"Now let's see ... how many are in my car? How much is my life and potential usefulness worth compared to theirs? etc., etc. ..."

Instead, I'd predict that your immediate reaction would be to stomp on the brake and plough into the stalled car and the old ladies rather than turn the wheel and crash to your death.

I also think Mercedes would hawk the "Me First" feature of their software as a major selling point, and their robocar sales would skyrocket over their competitors' ...


Re: Google's roboturtles?
  • 4/19/2017 12:56:46 PM

@Lyndon_Henry I see this issue was discussed at the end of last year here: readwrite.com/2016/12/19/autonomous-mercedes-benz-cars-spare-occupants-pedestrians-tl4/

It brings up the MIT Moral Machine, which "presents you with scenarios involving different types of pedestrians – forcing you to choose who to hit, or whether or not the occupants of the car should be given priority."

It doesn't invoke the Trolley Problem by name, but that's what the description amounts to.

In MIT's scenarios, you make some tough decisions. Should you hit the group of three young females or five elderly females? Are dogs as important as humans? If pedestrians and occupants are of the same demographic, who should the car spare?

Google has been tackling these questions and more with its own autonomous system – recently teaching it to be considerate to different types of wheeled traffic in addition to pedestrians. Cyclists, in particular, are a difficult subject for autonomous vehicles to predict. They don't all obey the set of rules of the road, often swerving between lanes and zipping through stop signs. Because of this, Google decided to tweak its algorithm to be extra careful and courteous around them.

Mercedes-Benz has made a difficult decision for its customers. While the car will examine any possible way to save all lives, it will choose to prioritize its occupants when presented with a no-win scenario.

 

Re: Google's roboturtles?
  • 7/9/2016 11:45:20 AM


Ariella writes:

I'd guess that as we come closer to realizing a workable driverless car, people are waking up to the possible problems that we'd have to deal with beyond the technical issues. I'd also guess that some people would prefer to have the car make that life-and-death decision rather than make it themselves.

In today's "Me First" culture, it's hard to imagine that new robocar buyers would accept anything less than a system which gives absolutely highest priority to protecting the occupants of the car at all costs. I'd bet the effectiveness of each make/model's autonomous software in protecting the driver and passengers would become a competitive issue in the marketing of these vehicles.

I see all this leading to an eventual bonanza for personal injury lawyers...


Re: Google's roboturtles?
  • 7/5/2016 8:59:18 AM

Thanks for the links, @Lyndon_Henry. I have noticed a number of articles on the topic sprouting up recently. I'd guess that as we come closer to realizing a workable driverless car, people are waking up to the possible problems that we'd have to deal with beyond the technical issues. I'd also guess that some people would prefer to have the car make that life-and-death decision rather than make it themselves. Others would just instinctively save themselves, even if it meant killing others, but then be wracked by guilt ever after.

Re: Google's roboturtles?
  • 7/4/2016 11:40:29 PM


The issue of incorporating an "ethical" decision-making algorithm in driverless cars' machine intelligence has seen new public exposure because of a survey recently published in Science magazine – which is why I'm resuscitating this discussion from last fall.

The issue was discussed June 29th on CBS This Morning. You can read a text synopsis here: The ethical dilemmas facing self-driving cars.

You can see an interesting YouTube video of the CBS This Morning segment here:

 Should Driverless Cars Make Ethical Decisions

I think all this suggests that the prospect of your car being programmed to decide to kill somebody is making a lot of the general public uneasy. I wonder if some motorists who've been looking forward to lying back and watching TV or texting happily while being whisked along by their robotic drivers will start having second thoughts, pondering whether their particular vehicle has the algorithm that will sacrifice them instead of the other guy if a life-or-death decision has to be made...


Re: Google's roboturtles?
  • 9/15/2015 9:07:52 PM

In my categorization of problem drivers (Maniacs, Serial Killers, Dingbats, Zombies, Kamikazes, etc.), these human turtles would fall under the heading Dingbats.

I suspect that, as they multiply, robocars will also be perceived in this category, both because of strict rules adherence, and also because insurance companies will insist on slow speeds to minimize accidents and the invocation of liability.

@Lyndon_Henry Yes, that's the way I see it. I really love your classification system; I think it would make a very entertaining piece to spell out the details and examples for each.

Re: Google's roboturtles?
  • 9/14/2015 6:24:30 PM

Ariella observes that "...you could have the same issue [annoyingly slow speed] with a human driver who plays it very safe."

Of course, and I'm sure most of us encounter these hazards occasionally (excessively slow driving can sometimes be as bad as excessively fast driving).

In my categorization of problem drivers (Maniacs, Serial Killers, Dingbats, Zombies, Kamikazes, etc.), these human turtles would fall under the heading Dingbats.

I suspect that, as they multiply, robocars will also be perceived in this category, both because of strict rules adherence, and also because insurance companies will insist on slow speeds to minimize accidents and the invocation of liability.

But accidents will happen, and almost surely, people will get killed. When that happens, I feel sure that the "ethical algorithm" issues that Ariella has highlighted will become a major focus in the courtroom.

 
