Artificial Intelligence: The Good, The Bad, The Ugly

When the likes of Elon Musk and Stephen Hawking go on record warning about the dangers of AI, it’s probably prudent to take notice. However, before rushing off into full panic mode, some definitions and perspective would be in order.

The type of artificial intelligence Musk and Hawking are referring to is known as Strong AI, or AGI (Artificial General Intelligence). This is the level at which a machine could readily pass itself off as indistinguishable from a human in cognitive, perceptual, learning, manipulative, planning, communication, and creative functions -- a thinking machine that can pass the Turing Test. We’ll close with some perspectives on Strong AI, but first let’s take a look at Weak AI, also known as Applied AI or Enhanced Intelligence (EI).

The Good
There have been several methods, tools, and approaches taken on the road to artificial intelligence. SAS is all over this field: machine learning, analytical/Bayesian statistics, natural language processing, neural/deep neural networks, and cognitive computing to name a few.

To see the difference between traditional software development approaches and machine learning, consider first how the earliest chess-playing computers were designed. You programmed in the rules of the game, the goal, and then layer upon layer of "If-Then-Else" commands that captured the strategy gleaned from the game's master players. The computer would then play at the level determined by that algorithmic representation of domain knowledge, but no better. For the computer to up its game, you had to change the code, recompile, redeploy, and rerun -- a linear process that doesn't scale well.
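That rule-based style can be sketched in a few lines. This is an illustrative toy only -- the board representation and the rules themselves are hypothetical, not taken from any real chess engine -- but it shows why playing strength is capped by whatever the programmers wrote down:

```python
# Toy sketch of the old rule-based approach: fixed "If-Then-Else" expert
# rules, no learning. The board keys and rule names here are hypothetical.

def choose_move(board):
    """Apply hand-coded rules in priority order; the engine can never
    play better than the strategy its programmers encoded."""
    if board.get("mate_in_one"):      # Rule 1: take a forced win
        return "deliver_checkmate"
    if board.get("hanging_queen"):    # Rule 2: grab the biggest piece
        return "capture_queen"
    if board.get("center_open"):      # Rule 3: follow opening principles
        return "occupy_center"
    return "make_safe_move"           # Fallback: avoid blunders

# Improving the engine means editing these rules by hand and redeploying --
# the linear process described above.
```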

The machine learning and neural network approaches are illustrated in a short video clip of Google DeepMind's Deep-Q network learning to play Atari Breakout. Starting out, the algorithm knows only four things: the sensory inputs (the screen), the actions available in the environment (how to move the paddle), a measure of success (the score), and an objective (maximize future rewards). It begins with not a single rule or tactic; it learns these as it goes, teaching itself how to play Breakout and eventually discovering the trick of tunneling through the bricks along the wall.
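The core idea behind that learning loop can be shown with a minimal tabular Q-learning update. To be clear, this is a simplified sketch, not DeepMind's actual DQN (which uses a deep neural network over raw pixels); the states and actions below are made-up stand-ins for the Breakout setup:

```python
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state,
                      actions, alpha=0.1, gamma=0.99):
    """Nudge Q(state, action) toward the observed reward plus the
    discounted value of the best known follow-up action -- i.e.,
    'maximize future rewards'."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Q = defaultdict(float)               # All values start at zero: no built-in tactics
actions = ["left", "stay", "right"]

# One simulated step: moving the paddle right happened to earn a point,
# so the estimated value of that (state, action) pair rises.
q_learning_update(Q, state="ball_right", action="right", reward=1.0,
                  next_state="ball_center", actions=actions)
```

Repeat that update over millions of frames and strategies like tunneling emerge from experience rather than from any programmed rule.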

Where is machine learning being applied?

  • Robotics that learn by doing rather than being specifically programmed for each task
  • Detecting satire in customer comments via text analytics
  • Personal agents that learn and understand your buying/entertainment preferences
  • Autonomous vehicles, self-landing airplanes and space shuttles, self-docking spacecraft
  • Patient diagnosis -- not just chess-like expert systems, but learning and improving with each case
  • Speech recognition and language translation, without the non sequiturs and cultural faux pas
  • Management of smart cities: traffic and energy
  • Facial/visual recognition and other biometric applications
  • Smart implants: brain (to control Parkinson’s), pacemakers, insulin pumps, cochlear and retinal
  • Control of exoskeletons and prosthetic limbs
  • A DARPA project to reverse damage caused by brain injury with neuroprosthetics
  • Computer-aided interpretation of medical imaging
  • And on a lighter note, Roombas and robotic pets

That's a pretty cool list -- partial and still growing -- of recent accomplishments and a peek at what lies in our near future, one that can't help but echo Ken Jennings' Final Jeopardy answer upon losing to a computer: "I, for one, welcome our new computer overlords."

The Bad
The fear over Strong AI is of course that of the runaway, uncontrollable sentient machines, where biological intelligence has outlived its usefulness, and the robots take appropriate steps to disinfect their environment of the human virus.

But you don't have to enter the realm of science fiction to find wickedness of this magnitude. A little common sense will suggest that long before sentient machines start designing and building ever more powerful replicas of themselves, actual humans will either deliberately or accidentally program much weaker AI to perform equally nefarious deeds. Battlefield droids and weaponized biologics are under development today. The harm and destruction that cyber criminals will be able to inflict as they hack into our increasingly complex, interconnected, and interdependent infrastructure -- our systems, devices, economies, and society -- is already an ongoing concern, and one that will only get worse.

Personally, I would consider it a great success were humanity to survive to the point where it needed to concern itself over self-conscious, self-replicating machines bent on global, nay, galactic, domination. It’s not the machines that should be our primary concern, but the humans that program, run and hack them, with AI likely playing a central role in assuring a secure future for humankind.

The Ugly
So how likely is a future scenario where we need to concern ourselves with what machines think about us? For this year's EDGE question, What to Think About Machines That Think, editor John Brockman gathered the opinions and judgments of several hundred of the greatest thinking machines of our species. These assessments run the gamut from "impossible, ever" to the inevitable emergence of a "singularity" between man and machine before the end of this century, and everything in between.

I'm no anthropocentric chauvinist; I don't hold the human brain and consciousness to any standard that transcends the physical laws of our universe. But after considering all the arguments, I come away persuaded that several of them make powerful, compelling cases that Strong AI is not in the cards.

  • The human brain is not a Turing machine; it is not merely algorithmic, as all AI constructs so far are. There are only countably many computable functions for AI to address and solve, but uncountably many non-computable ones. Put another way, there is a higher-order infinity of mathematical truths that can never be proven, yet can nonetheless be understood by the human mind.
  • The subjective experience of self is something we are far from understanding, let alone creating within a machine. Qualia such as color do not exist in nature -- there are only 700 nm electromagnetic waves, which the brain interprets as RED.
  • The only mechanism known so far that can create a subjective experience is evolution, and many suspect it will take the same on our part -- directed biological evolution -- for humans to build a self-aware intelligence. The human brain comprises some 100 billion neurons, each making roughly 10,000 connections, for a total of about a quadrillion (1,000,000,000,000,000) synapses. All in a compact three-pound package that consumes just 20 watts of energy. A single eukaryotic cell is vastly more complex than our most powerful CPU chip. Moore's Law ain't gonna get us there.
  • Meaning and metaphor. There is a difference, a big difference, perhaps an insurmountable gulf, between manipulating symbols versus grasping their meaning. Consider a piece I wrote upon the fifth anniversary of my brother’s death some years ago now:

    “A permanent vase-like illusion: your brother, or emptiness? But time has a way of regenerating the devastated landscape. A visit to his memory now finds that the birds and blossoms have returned, a melancholy meadow where it is forever late summer.”

What is a Turing machine to make of melancholy meadows where it is forever late summer? Metaphors are neither true nor false – that is not how we interpret them. Will a machine ever experience revulsion at the image of a swastika or a burning cross? What sort of algorithm would it take to comprehend the gaps and relationships between symbol, language and meaning in Magritte’s The Treachery of Images (“Ceci n'est pas une pipe”)?

The Doomsday Clock currently sits at three minutes till midnight without any assistance from Evil AI -- humanity has managed that all by itself. A noble planetary goal for the 21st century would be to see it moved back to before noon, and I, for one, welcome the help of Weak AI, our servant and partner in all its forms and applications, in urging it in that direction.

Leo Sadovy, Performance Management Marketing, SAS

Leo Sadovy handles marketing for Performance Management at SAS, which includes the areas of budgeting, planning and forecasting, activity-based management, strategy management, and workforce analytics. He advocates for bringing SAS's best-in-class analytics capability into the offices of finance across all industry sectors. Before joining SAS, he spent seven years as Vice President of Finance for Business Operations for a North American division of Fujitsu, managing a team focused on commercial operations, customer and alliance partnerships, strategic planning, process management, and continuous improvement. During his 13-year tenure at Fujitsu, he developed and implemented the ROI model and processes used in all internal investment decisions, and also held senior management positions in finance and marketing.

Prior to Fujitsu, Sadovy was with Digital Equipment Corp. for eight years in sales and financial management. He started his management career in laser optics fabrication for Spectra-Physics and later moved into a finance position at the General Dynamics F-16 fighter plant in Fort Worth, Texas. He has an MBA in Finance and a Bachelor’s degree in Marketing. He and his wife Ellen live in North Carolina with their three college-age children, and among his unique life experiences he can count a run for US Congress and two singing performances at Carnegie Hall.


Re: To Be or Not To Be
  • 4/17/2016 7:54:33 PM

I do wonder how many jobs actually are being lost to robotics. Look at something like the auto industry. Some of those companies would have disappeared if it weren't for robotics (and other initiatives), and the old jobs would have been gone anyway. Are many low-level jobs going to robots -- jobs that would have disappeared even without the robots?

To Be or Not To Be
  • 4/16/2016 11:52:44 AM

I wholeheartedly agree that we have to worry about the people programming artificial intelligence. If a computer could feel emotion, it could be programmed to find human suffering a beautiful thing. A true test would be for it to decide that was wrong. But if it could do that, it could also overwrite any programming that would prevent it from harming humans.

Besides just to see if it can be done, there is little reason to build a computer that is self-aware. If we had robots designed to do our household chores and one decided it wanted to be an artist instead, someone would want their money back. It's enough just to give it the algorithms to do the job.

I agree we are a long way from worrying about whether we have to give machines voting and civil rights. Right now the real threat is robots taking over low-skilled labor jobs.