Comments
Driverless Cars Present Ethical Challenges
Re: Google's roboturtles?
  • 11/2/2017 3:36:40 PM

While we're back on the topic of driverless cars and safety ... 

The New York Times last month posted an interesting (and somewhat alarming) editorial (Would You Buy a Self-Driving Future From These Guys?) that presented data showing strong public concern about the safety of this technology.

Much of the editorial was focused on safety issues, particularly given the pro-industry impulses of the federal government and its cavalier attitude about safety:

People have good reason to doubt grand promises about world-changing technology. They have lost countless hours to unreliable software and had their personal data hacked. They have been let down by companies that hid safety defects (General Motors and Takata) or lied about emissions (Volkswagen).

Further, experts warn that the hype around automated cars is belied by the struggles these machines have in the rain, or when tree branches hang too low, or on bridges or on roads with faded lane markings.

Yet, members of Congress, encouraged to do so by auto and tech lobbyists, have proposed bipartisan bills that would let industry roll out automated cars more quickly by exempting them from existing safety regulations, like those that govern the performance of steering wheels, airbags and brakes, and by directing the Department of Transportation to come up with new rules instead.

Lawmakers have to do better than that if they care about what the public is saying. A bill passed by the House last month would let manufacturers sell up to 25,000 automated cars a year without meeting all federal safety standards, and up to 100,000 cars after three years. The companies would not even have to establish that their cars are as safe as conventional vehicles before the number of exemptions increased.

A bill approved by the Senate Commerce Committee is a bit better. It would require a safety evaluation before raising the cap on exemptions. It would also limit the total number to 80,000.

A larger problem with both bills is that they would do nothing to increase the size of the budget or staff of the National Highway Traffic Safety Administration, which is responsible for overseeing the industry and will have to write new rules for self-driving cars.

The agency has been underfunded for years and has struggled to investigate auto defects and improve safety standards. Worse still, President Trump has proposed cutting the agency's operations and research budget by $24 million, or 7.5 percent, in the 2018 fiscal year.

Today, the agency is woefully unprepared to regulate self-driving cars, particularly at the scale proponents hope to see down the line. More electrical engineers, programmers and cybersecurity specialists who can evaluate such cars have to be hired.

While the Times also cites some of the benefits of the technology, it makes a pitch for more safety-focused federal standards.

 

Re: Google's roboturtles?
  • 4/20/2017 11:31:35 AM

@Lyndon_Henry Yes, we do have a very strong drive for self-preservation. But I wonder, once insurance companies start assessing risk on driverless cars, how different priorities would play into their own actuarial tables.

Re: Google's roboturtles?
  • 4/20/2017 10:55:36 AM

..

Tomsg writes


So if the car behaves like a human, should it be penalized any more severely? There are a lot of issues going on here.


 

First, let me amend my hypothetical scenario. If it's a situation where the driver just runs smack into the stalled car with the old ladies, the driver could well be held responsible for driving too fast for the road conditions (or something like that).

So let's say the driver's on the mountainside road, and the car with the three old ladies has somehow swerved into the driver's lane. The driver then has to decide whether to go for a head-on collision, or drive his/her car off the road and crash to a pretty much certain death. My guess is that the driver will opt for the head-on collision, maybe trying for a sideswipe to lessen impact.

This would probably be judged an accident, but in the event it goes to trial (maybe because local law enforcement or the families of the old ladies believe the driver should have been more self-sacrificing or something) the case would probably be decided by a human jury (the juridical system doesn't seem quite ready yet to allow robots into the jury pool). So my guess is that the jury would probably decide this was just a terrible accident and the driver was not at fault.

Not so, in my estimation, if it was a robocar (autonomous vehicle). In every case I can think of, even if the human driver gets killed, a jury very likely would assign guilt to the robocar (probably the carmaker, AI system, and software provider, each) AND, quite likely, to the human driver (or the surviving insurance company).

After all, if the human driver let the robot run the car, that would be seen as a mistake. If he/she didn't turn over control to the robot system, that would also be seen as a mistake. So the human as well as the robocar would be implicated.

Enjoy the road ...

..

Re: Google's roboturtles?
  • 4/20/2017 10:19:34 AM

So if the car behaves like a human, should it be penalized any more severely? There are a lot of issues going on here.

Re: Google's roboturtles?
  • 4/19/2017 4:59:47 PM

..

Ariella quotes:


Mercedes-Benz has made a difficult decision for its customers. While the car will examine any possible way to save all lives, it will choose to prioritize its occupants when presented with a no-win scenario.


 

I'm pretty sure that the Mercedes AI system comes closest to replicating how a human mind would operate – self-preservation. If you're driving along a mountain road and there's a stalled car ahead with three old ladies, and your only option is to drive off the cliff, I doubt you would perform a quick mental benefit-cost calculation ...

"Now let's see ... how many are in my car? How much is my life and potential usefulness worth compared to theirs? etc., etc. ..."

Instead, I'd predict your immediate reaction will likely be to stomp on the brake and plough into the stalled car and the old ladies rather than turn the wheel and crash to your death. 

I also think Mercedes would hawk the Me First feature of their software as a major selling point, and their robocar sales would skyrocket over their competitors' ...

..

Re: Google's roboturtles?
  • 4/19/2017 12:56:46 PM

@Lyndon_Henry I see this issue was discussed at the end of last year here: readwrite.com/2016/12/19/autonomous-mercedes-benz-cars-spare-occupants-pedestrians-tl4/?utm_campaign=coschedule&utm_source=googleplus_page&utm_medium=ReadWrite&utm_content=Autonomous%20Mercedes%20will%20spare%20occupants%20over%20pedestrians

It brings up the MIT Moral Machine, which "presents you with scenarios involving different types of pedestrians – forcing you to choose who to hit, or whether or not the occupants of the car should be given priority."

It doesn't explicitly invoke the Trolley Problem, but that's what the description amounts to.

In MIT's scenarios, you make some tough decisions. Should you hit the group of three young females or five elderly females? Are dogs as important as humans? If pedestrians and occupants are of the same demographic, who should the car spare?

Google has been tackling these questions and more with its own autonomous system – recently teaching it to be considerate to different types of wheeled traffic in addition to pedestrians. Cyclists, in particular, are a difficult subject for autonomous vehicles to predict. They don't all obey the set of rules of the road, often swerving between lanes and zipping through stop signs. Because of this, Google decided to tweak its algorithm to be extra careful and courteous around them.

Mercedes-Benz has made a difficult decision for its customers. While the car will examine any possible way to save all lives, it will choose to prioritize its occupants when presented with a no-win scenario.
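To make the policy in that quote concrete, here is a toy sketch of an "occupant-priority" rule in Python. This is purely illustrative and entirely hypothetical: it is not Mercedes-Benz's actual logic, and the function and field names are invented. It just encodes the stated policy: minimize total casualties first, and only when options tie (a true no-win scenario) prefer the occupants.

```python
# Hypothetical sketch of an occupant-priority crash rule (NOT any
# manufacturer's real logic; names and numbers are invented).

def choose_maneuver(options):
    """Pick the maneuver that saves the most lives overall; on a tie
    (a no-win scenario), prefer the option with fewer occupant deaths."""
    return min(
        options,
        key=lambda o: (
            o["occupant_deaths"] + o["pedestrian_deaths"],  # total casualties first
            o["occupant_deaths"],                           # tie-break: protect occupants
        ),
    )

# Two no-win options with the same total casualties: the rule
# picks the one that spares the occupants.
options = [
    {"name": "swerve off road", "occupant_deaths": 1, "pedestrian_deaths": 0},
    {"name": "stay in lane",    "occupant_deaths": 0, "pedestrian_deaths": 1},
]
```

Note that the tie-break only matters in the no-win case; when one option genuinely saves more lives overall, the first key dominates and the occupants get no special treatment, which matches the quoted description.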

 

Re: Google's roboturtles?
  • 7/9/2016 11:45:20 AM

..

Ariella writes


 I'd guess that as we come closer to realizing a workable driverless car, people are waking up to the possible problems that we'd have to deal with beyond the technical issues. I'd also guess that some people would prefer to have the car make that life and death decision to making it themselves.


 

In today's "Me First" culture, it's hard to imagine that new robocar buyers would accept anything less than a system which gives absolutely highest priority to protecting the occupants of the car at all costs. I'd bet the effectiveness of each make/model's autonomous software in protecting the driver and passengers would become a competitive issue in the marketing of these vehicles.

I see all this leading to an eventual bonanza for personal injury lawyers...

..

Re: Google's roboturtles?
  • 7/5/2016 8:59:18 AM

Thanks for the links @Lyndon_Henry. I have noticed a number of articles on the topic sprouting up recently. I'd guess that as we come closer to realizing a workable driverless car, people are waking up to the possible problems that we'd have to deal with beyond the technical issues. I'd also guess that some people would prefer to have the car make that life and death decision to making it themselves. Some people would just instinctively save themselves even if it means killing others but then be wracked by guilt ever after. 

Re: Google's roboturtles?
  • 7/4/2016 11:40:29 PM

..

The issue of incorporating an "ethical" decision-making algorithm in driverless cars' machine intelligence has seen new public exposure because of a survey recently published in Science magazine – which is why I'm resuscitating this discussion from last fall.

The issue was discussed June 29th on CBS This Morning. You can read a text synopsis here: The ethical dilemmas facing self-driving cars.

You can see an interesting YouTube video of the CBS This Morning segment here:

 Should Driverless Cars Make Ethical Decisions

I think all this suggests that the prospect of your car being programmed to decide to kill somebody is making a lot of the general public uneasy. I wonder if some motorists who've been looking forward to lying back and watching TV or texting happily while being whisked along by their robotic drivers will start having second thoughts, pondering whether their particular vehicle has the algorithm which will sacrifice them instead of the other guy if a life-or-death decision has to be made...

..

Re: Google's roboturtles?
  • 9/15/2015 9:07:52 PM

<In my categorization of problem drivers (Maniacs, Serial Killers, Dingbats, Zombies, Kamikazes, etc.), these human turtles would fall under the heading Dingbats.

I suspect that, as they multiply, robocars will also be perceived in this category, both because of strict rules adherence, and also because insurance companies will insist on slow speeds to minimize accidents and the invocation of liability.>

@Lyndon_Henry Yes, that's  the way I see it. I really love your classification system; I think it would make a very entertaining piece to get the details and examples for each. 


