Killer Robots: How Much Authority Should We Give to AI?


(Image: RM Studio/Shutterstock)

Should killer robots be able to make their own decisions about whether to take a human life, or should the US military continue to require that a human be the final decision maker? That was the question posed by Senator Gary Peters of Michigan to the second-highest-ranking general in the US military this week during a Senate Armed Services Committee hearing. Air Force General Paul Selva was testifying in a confirmation hearing for his reappointment as vice chairman of the Joint Chiefs of Staff.

Peters noted that a current restriction requires that a human make the decision, but that the restriction is due to expire later this year. Selva voiced his support for keeping the restriction.

"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life," he told the committee. He added that didn't mean we shouldn't be examining how to defend against such robots being developed by our adversaries.


That's about as high-stakes as the question of whether a particular task should be automated and turned over to artificial intelligence can get. Many organizations are looking at the potential to automate particular functions and take humans out of the equation, from customer service to medicine. Such moves could speed up operations, improve efficiency, eliminate human error, and reduce costs. But they come with caveats, and some of today's great strategic thinkers have urged caution about how we as a society deal with artificial intelligence.

Indeed, Selva's remarks came a few days after entrepreneur Elon Musk told the National Governors Association that he believes the government needs to regulate AI now before it becomes dangerous to humanity.

Speaking at the organization's summer meeting, Musk said, "I have exposure to the very cutting-edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal."

We're not at the killer robot stage yet, however. Selva said in his testimony that such weapons are a future threat, not a present concern.

He told the committee that all of the branches of the armed services are engaged in a campaign to understand where advanced artificial intelligence and autonomy can be inserted into current operations and where they can be applied in new and imaginative ways.

And Selva confirmed that US adversaries were also experimenting with AI and may not always consider the same ethical concerns that the US considers when implementing programs.

"We should all be advocates of keeping the ethical rules of war in place lest we unleash on humanity a set of robots we don't know how to control," he said.

What do you think? Are there some decisions that only a human should make? What are they? What distinguishes the decisions that should be made only by humans from the decisions that we empower an AI to make?

Jessica Davis, Senior Editor, Enterprise Apps, Informationweek

Jessica Davis has spent a career covering the intersection of business and technology at titles including IDG's Infoworld, Ziff Davis Enterprise's eWeek and Channel Insider, and Penton Technology's MSPmentor. She's passionate about the practical use of business intelligence, predictive analytics, and big data for smarter business and a better world. In her spare time she enjoys playing Minecraft and other video games with her sons. She's also a student and performer of improvisational comedy. Follow her on Twitter: @jessicadavis.



Insidious unintended consequences
  • 10/2/2017 4:49:15 PM

..

Maryam writes that 

... while a robot may be more indestructible than a human it can still have other vulnerabilities that humans don't have. They are only as smart as their programming, and they cannot think outside that programming. 

Maryam's comment about the unforeseen vulnerabilities that even smartly programmed robots can have made me think of a recent Guardian article with the ominous title "The rise of robots: forget evil AI – the real risk is far more insidious."

The theme of the article is succinctly summarized in a couple of sentences:

... the real risk posed by AI – at least in the near term – is much more insidious. It's far more likely that robots would inadvertently harm or frustrate humans while carrying out our orders than they would become conscious and rise up against us.

To illustrate what is meant, the article cites an example from computer science professor Stuart Russell (described as an "artificial intelligence pioneer"), who leads the Center for Human-Compatible Artificial Intelligence:

He uses autonomous vehicles to illustrate the type of problem the center will try to solve. Someone building a self-driving car might instruct it never to go through a red light, but the machine might then hack into the traffic light control system so that all of the lights are changed to green. In this case the car would be obeying orders but in a way that humans didn't expect or intend. Similarly, an artificially intelligent hedge fund designed to maximize the value of its portfolio could be incentivized to short consumer stocks, buy long on defence stocks and then start a war – as suggested by Elon Musk in Werner Herzog's latest documentary.

"Even when you think you've put fences around what an AI system can do it will tend to find loopholes just as we do with our tax laws. You want an AI system that isn't motivated to find loopholes," Russell said.

"The problem isn't consciousness, but competence. You make machines that are incredibly competent at achieving objectives and they will cause accidents in trying to achieve those objectives."

  I find this a particularly intriguing notion – the possibility of disastrous unintended consequences arising out of otherwise benign attempts to program AI machines for supposedly desirable missions.
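
To make Russell's red-light example concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the strategies, the numbers, the scoring); it just shows how a search that satisfies the stated rule to the letter can still land on the loophole.

# Toy sketch of "specification gaming" -- all strategies and numbers are invented.
# Each strategy: (description, red_lights_run, minutes_to_destination)
strategies = [
    ("drive normally, stop at red lights", 0, 30),
    ("run the red lights",                 4, 22),
    ("hack every light to green",          0, 21),  # obeys the rule as written
]

def objective(strategy):
    # Stated objective: never run a red light; among allowed options, minimize travel time.
    _, reds_run, minutes = strategy
    if reds_run > 0:
        return float("inf")  # the hard constraint exactly as we wrote it
    return minutes

best = min(strategies, key=objective)
print("Chosen strategy:", best[0])
# Prints "hack every light to green": the written constraint is satisfied,
# but the behavior is precisely the loophole Russell warns about.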

..

Re: We, the Killer Robots
  • 9/5/2017 10:57:21 AM

@SaneIT, I haven't found anything on ship passing distances, but this is interesting -

"According to a shipping safety report this year by Allianz, the German insurance giant, there were 17 commercial vessels sunk in collisions as recently as 2007; last year, there was only one, a smaller vessel."

The above, plus the fact that a Navy destroyer is much more capable of changing course than a large commercial ship, would seem to imply that the investigation should start with the US Navy.

The quote is from an 8/21 article in the NYT (After Dangerous Collisions, Navy Will Pause for Safety Check)

Re: AI, war, and Elon Musk
  • 9/5/2017 8:27:35 AM

I suspect the AI war will be fought on a couple of fronts: one very physical, as Elon Musk seems to believe, but the other will be financial warfare.  A few years ago we saw what happened when a trading algorithm went off the rails on the NYSE.  While the armed robots are storming cities, the trading bots will be siphoning off the country's assets.

Re: We, the Killer Robots
  • 9/5/2017 8:21:18 AM

@PC I do assume they keep records of passing distances, but if someone is paying close attention to them, what is their tolerance for a "close call"?  From a layman's point of view, a 10-meter pass is too close for comfort, but then again a 100-meter pass between ships over 100 meters in length would probably feel too close to me as well.  I think it would be interesting to see where moderate and close passes fall in terms of distance and how often ships pass at those distances on the open ocean.  Shipping channels exist for a reason, so I suspect it's not unusual to see other ships while traveling those channels, but how often are they ever in close enough contact that a collision is a real danger?

AI, war, and Elon Musk
  • 9/4/2017 9:21:01 PM

..

SaneIT writes

Maybe the next time a pair of superpowers go to war against each other this is what we'll see but when it's a smaller army that is not funded by a large government then they are going to be taking their chances against the machines. 

On the subject of AI and war ... an article in today's Guardian seems particularly pertinent:

https://www.theguardian.com/technology/2017/sep/04/elon-musk-ai-third-world-war-vladimir-putin
Elon Musk says AI could lead to third world war
North Korea 'low on our list of concerns' says Tesla boss following Putin's statement that whoever leads in AI will rule world

Here's an excerpt ...

Elon Musk has said again that artificial intelligence could be humanity's greatest existential threat, this time by starting a third world war.

The prospect clearly weighs heavily on Musk's mind, since the SpaceX, Tesla and Boring Company chief tweeted at 2.33am Los Angeles time about how AI could lead to the end of the world – without the need for the singularity.

...

He's less worried about North Korea's increasingly bold nuclear ambitions, arguing that the result for Pyongyang if they launched a nuclear missile "would be suicide" – and that it doesn't have any entanglements that would lead to a world war even if it did. His view is that AI is "vastly more risky" than the Kim Jong-un-led country.

Musk's fear of AI warfare has been a driving force in his public statements for a long time. Last month, he was one of more than 100 signatories calling for a UN-led ban of lethal autonomous weapons.

"Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the letter read. "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.

"We do not have long to act. Once this Pandora's box is opened, it will be hard to close."

 

 Holy Moly ...

 

Re: We, the Killer Robots
  • 9/1/2017 11:29:37 AM

At least in the US, aircraft have a lot of requirements around reporting near misses. I don't know that the same system exists for naval vessels, but I hope so.

No ship ever wants to pass within 10m of a large container ship. There is some larger buffer zone that is normal, and any incident where that buffer is violated is interesting. Knowing a little about the Navy, I expect there is some reporting.

You are correct that covert missions would make this more difficult, but all ships and subs have to be aware of what's around them at all times, hiding or not.

 

@Broadway - No, I don't expect this data to be in the public domain, but I have often been surprised at what information governments will or won't release.

Re: We, the Killer Robots
  • 9/1/2017 8:52:52 AM

I agree that the number of near misses would tell us a lot, but we also have to ask how many near misses go unreported.  Had the collisions not happened, would the Navy have flagged the distances between their vessels and these commercial vessels as unusual?  What is their tolerance for proximity, and how do they determine who is responsible for that proximity?  I don't know if US Navy ships make themselves visible to ensure their presence is known or if they work to stay unnoticed.  Maybe there's a data set out there with every passing distance of every vessel, but how do you determine which are intentional and which are accidental?

Re: We, the Killer Robots
  • 8/31/2017 10:38:49 PM

PredictableChaos, that's an excellent point. You don't think the Navy has made that near-miss data public, do you? Or perhaps we can get similar data for a lesser navy? You would expect their ratios to be even worse.

Re: We, the Killer Robots
  • 8/31/2017 1:32:00 PM

There are ways to tell if it's likely malicious or not.

If I were in Navy Analytics, one piece of data I would want to see would be about the "near misses". How many times have Navy ships passed uncomfortably close to commercial craft? Is this 2x, 5x, or 10x the rate of actual collisions?

To me, the higher multipliers would seem to imply normal errors in large and complex human systems. If we've had, say, 67 "near misses" in the last x months, there is no reason to be surprised that two of these resulted in collisions.

OTOH, if the number of "near misses" is small or non-existent, that would make it more likely that the collisions have a cause that is, at least partly, intentional.
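
To make that multiplier arithmetic concrete, here is a small Python sketch with purely hypothetical counts (not real Navy data):

# Hypothetical counts only -- not actual Navy data.
def near_miss_multiplier(near_misses, collisions):
    # How many close calls occur for each collision that actually happens.
    return near_misses / collisions

scenarios = {
    "many near misses logged": (67, 2),  # roughly 33 close calls per collision
    "almost no near misses":   (3, 2),   # collisions are a large share of close calls
}

for label, (near, hits) in scenarios.items():
    print(f"{label}: {near_miss_multiplier(near, hits):.1f} near misses per collision")

# A high multiplier points toward ordinary error in a large, complex system;
# a low one makes an at-least-partly intentional cause harder to rule out.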

Re: We, the Killer Robots
  • 8/31/2017 8:14:58 AM

In most instances I suspect it's the fact that the US Navy doesn't announce its presence or let anyone know what its course will be.  As big as the oceans are, ships tend to follow specific lanes, and when something the size of a container ship sees a submarine surfacing, it doesn't have the option of slamming on the brakes.  I'm not saying that every marine accident lately has been malicious; there is a fair amount of human error to go around too.
