Should killer robots be able to make their own decisions about whether to take a human life, or should the US military continue to require that a human be the final decision maker? That was the question Senator Gary Peters of Michigan posed this week to the second-highest-ranking general in the US military during a Senate Armed Services Committee hearing. Air Force General Paul Selva was testifying at a confirmation hearing for his reappointment as vice chairman of the Joint Chiefs of Staff.
Peters noted that a current restriction requires that a human make the decision, but that it is due to expire later this year. Selva voiced his support for keeping the restriction in place.
"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life," he told the committee. He added that didn't mean we shouldn't be examining how to defend against such robots being developed by our adversaries.
That's about as high stakes as the question of automation gets. Many organizations are looking at the potential to automate particular functions and take humans out of the equation, from customer service to medicine. Such automation could speed up operations, improve efficiency, eliminate human error, and reduce costs. But it comes with caveats, and some of today's great strategic thinkers have urged caution about how society handles artificial intelligence.
Indeed, Selva's remarks came a few days after entrepreneur Elon Musk told the National Governors Association that he believes the government needs to regulate AI now before it becomes dangerous to humanity.
Speaking at the organization's summer meeting, Musk said, "I have exposure to the very cutting-edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal."
We're not at the killer robot stage yet, however. Selva said in his testimony that such things were future threats, not an immediate concern.
He told the committee that all branches of the armed services are engaged in a campaign to understand where advanced artificial intelligence and autonomy can be inserted into current operations and where they can enable new, imaginative ones.
And Selva confirmed that US adversaries are also experimenting with AI and may not always weigh the same ethical concerns that the US does when implementing such programs.
"We should all be advocates of keeping the ethical rules of war in place lest we unleash on humanity a set of robots we don't know how to control," he said.
What do you think? Are there some decisions that only a human should make? What are they? What distinguishes the decisions that should be made only by humans from the decisions that we empower an AI to make?