Google/Alphabet executive Eric Schmidt predicted huge growth for machine learning and startups that develop machine learning tools. Other executives highlighted the potential for cloud-based machine learning, and hundreds of celebrities and tech leaders, including Bill Gates, Elon Musk, and Stephen Hawking, have endorsed a letter calling for care in the use of artificial intelligence and machine learning.
Lack of care appears to have been behind one of Microsoft's ventures into machine learning, a project that led the company to step in the smelly substance that you should use care to avoid. That was the Tay chatbot mishap.
I think we've all learned over the years that there are two basic but vital questions that we have to ask when implementing any technology: What can this technology do for us? What can go wrong? I guess the second question could be rephrased to, "What can this technology do TO us?
Entrepreneurs specialize in the first question. I've interviewed hundreds of executives in startups, spinoffs, and new business units. Their brains are packed with ideas, all about the potential for their new tech. On the "glass is half full" debate, those folks take the approach that the glass is completely full, no matter how it appears. They see no reason to not be cock-eyed optimists.
The second question about what can go wrong is where corporate cybersecurity professionals, investors, and risk managers come into play. True, they are sometimes the Dr. No's of the tech world. Yet, they serve a vital purpose. They want to know what port might be exposed and who will have access to data, or, if they have a financial stake, what happens if people don't buy a product and if another company has a competing product. They keep our dreams honest.
So, what was Microsoft thinking when it unleashed Tay on the world? Maybe they weren't thinking, or didn't have enough of the checks the Dr. No's of the world provide.
The Tay chatbot -- designed to communicate like a 19-year-old woman -- learned, or at least parroted, all the wrong things by the end of its 24-hour lifespan. Its activity quickly degenerated into crude, racist, and myoginistic Twitter posts that espoused violence, as it was "trained" by the sick minds in the online world.
In exploring what went wrong, one article noted, "In its apology, Microsoft's Peter Lee, corporate vice president of Microsoft Research, writes that the company did test her under a range of conditions to ensure that she was pleasant to talk to. It appears that this testing did not properly cover those who would actively seek to undermine and attack the bot."
One challenge with technology such as a chatbot or an audio guide like Apple's Siri is that they can't necessarily add context to what they hear or say. For example, we discussed Siri's limitations when faced with queries about rape and suicide in Tech Airs Its Downside, and Some Good News. I don't believe the flaw is in Siri itself but how heavily we rely on technology. Machines can learn from us or from their own activity, but they are only tools, not human. With Tay, Microsoft provided a tool, but didn't set limits on how it could be used. One observer said that technologies like Tay, Siri, and IBM's Watson need to be programmed with an ever-evolving code of conduct that keep pace with our ability to do evil.
Machine learning could work wonders in helping us to accomplish tasks such as improving product maintenance or customer support. Machine learning offers an opportunity to comb millions of files to help find the best treatments for disease or strategies for dealing with environmental threats. Machine learning as a social tool? Maybe that was the first mistake.
In a world where someone, somewhere is just waiting to pounce on any vulnerability in a new technology the burden is on the shoulders of developers to find and close loopholes. They can be cock-eyed optimists but they also have to switch modes and play Dr. No for a while, particularly when we see how quickly something with a bit of promise can turn so bad.