Validating AI Adoption in the Enterprise


A welter of artificial intelligence products has been launched, and more are expected at an accelerating pace.

Anand Rao

Their effectiveness in individual use cases is unknown at the outset, in the absence of specific disclosures about the patent-pending intellectual property behind them. Consulting firms like PricewaterhouseCoopers (PwC) have adopted methodologies for evaluating their effectiveness with actual operating data.

Kishore Jethanandani spoke with Dr. Anand Rao, a partner at PwC and the Innovation Lead of its Data and Analytics practice, at the recent Emtech Digital conference of the MIT Technology Review in San Francisco. Rao is responsible for research and commercial relationships with academic institutions and start-ups, and for the commercialization of innovative AI, big data and analytics techniques.

You have characterized the recourse to machine intelligence in business as augmented intelligence. What does it tell us about the roles of human and artificial intelligence? How does it impact business?

RAO: Human business decision-making, whether strategic or operational, is knowledge-based. Artificial intelligence is pervasive and will impact decision-making at each stage of the value chain. Machine intelligence augments human intelligence by sifting through humongous volumes of data that would otherwise not be possible. In the foreseeable future, humans will continue to make decisions while machines will help us to cope with complexity.

As for the impact of artificial intelligence, it will be ubiquitous: increasing revenues, enhancing productivity, reducing costs, managing risks and improving user experience. The insurance industry illustrates how emerging competitors, armed with artificial intelligence, are disrupting the old business model. Insurance has been primarily reactive, paying claims after an unexpected loss. Emerging competitors, by contrast, leverage artificial intelligence to reduce losses and to mitigate and even eliminate risk. They use a diversity of information sources, such as sensors embedded in assets, wearable devices, and cameras tracking human behavior, to predict and prevent a loss. Genomics data, for example, can predict predisposition to specific ailments and inform decisions for the prevention or mitigation of costly diseases like diabetes.

A variety of new models for artificial intelligence were discussed at the Emtech Digital conference, such as natural language processing, image and video analysis, deep learning, etc. How mature are these technologies for adoption in the enterprise and what are their uses?

RAO: Natural language processing, in combination with structured data analysis, has been very much in use for the last three to five years. It helps to comb through vast volumes of documents to find useful information.

Video and image data is useful for isolating suspicious and anomalous behavior. Algorithms can now filter useful information from video images and remove long intervals when nothing meaningful happens.
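To make the filtering idea concrete, here is a minimal sketch (not PwC's implementation; the frames, pixel representation, and threshold are all hypothetical) of dropping the long intervals in which consecutive frames barely change:

```python
def frame_energy(prev, cur):
    """Sum of absolute pixel differences between two consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev, cur))

def keep_eventful(frames, threshold):
    """Keep the first frame, then only frames that changed noticeably
    from their predecessor -- removing intervals where nothing happens."""
    kept = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if frame_energy(prev, cur) >= threshold:
            kept.append(cur)
    return kept

# Toy "video": each frame is a flat list of pixel intensities.
frames = [[0, 0, 0], [0, 0, 0], [9, 9, 9], [9, 9, 8]]
print(keep_eventful(frames, threshold=5))  # → [[0, 0, 0], [9, 9, 9]]
```

Real systems use far richer motion and object-detection models, but the principle is the same: score change between frames and discard the quiet stretches.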

A great deal of audio data from customer service can also be used to improve customer experience. Financial services companies are using audio data, correlating it with other structured data like the performance of funds, to offer suggestions on how advisors should interact with customers for maximal impact.

Deep learning works very well for human tasks that involve perception (e.g., recognizing objects in images), but needs to be used with caution when it comes to cognition or decision making. For compliance with money laundering laws, for example, regulators are not likely to be persuaded by a deep learning algorithm that is unable to explain its reasoning adequately in human terms.

For something as simple as making notes, humans exercise judgment in deciding what information is valuable. How would you ensure that a machine distills notes with the most useful information?

RAO: First, the machine has to be taught what is important in a given piece of text. This is done by presenting a lot of situational data labeled with what is important. The system learns from countless examples and is then able to do a reasonable job of extracting notes or summaries. However, at each stage, we test whether the machine is doing an adequate or superior job. The algorithms evolve at each stage of testing until the boundaries of their competence, and conversely those of humans, are understood.
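The labeling-and-learning loop Rao describes can be sketched very simply. In this hypothetical illustration (the training sentences, labels, and word-count scoring are invented; production systems use far larger corpora and richer models), sentences labeled "important" teach the machine which words signal note-worthy content:

```python
from collections import Counter

# Hypothetical labeled training data: 1 = important, 0 = not important.
labeled = [
    ("quarterly revenue grew twelve percent", 1),
    ("the meeting started ten minutes late", 0),
    ("net losses widened due to claim costs", 1),
    ("coffee was served in the lobby", 0),
]

def train(examples):
    """Count how often each word appears under each label."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def importance(sentence, counts):
    """Score a sentence: words seen in 'important' examples add,
    words seen in 'unimportant' examples subtract."""
    return sum(counts[1][w] - counts[0][w] for w in sentence.split())

def summarize(sentences, counts, top_n=1):
    """Distill notes by keeping the top-N highest-scoring sentences."""
    ranked = sorted(sentences, key=lambda s: importance(s, counts),
                    reverse=True)
    return ranked[:top_n]

counts = train(labeled)
print(summarize(["revenue grew again this quarter",
                 "lunch was sandwiches"], counts))
# → ['revenue grew again this quarter']
```

The testing stage Rao mentions would then compare the machine's selections against human-made notes and retrain where it falls short.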

Humans don't necessarily have all the information or the answers. Contextual intelligence is an example where cultural norms or legal conventions, across countries, vary so much that humans cannot possibly assimilate all of the nuances. Machines, on the other hand, can store information and bring it to bear on decision-making. Just like people arbitrate when they have differences of opinion, we can do the same to determine the relative role of humans and machines.

A great deal of human decision making happens within the context of conflicting viewpoints, a wide variety of use cases, and divergent outcomes. How can artificial intelligence help us make better collective decisions?

RAO: Yes, humans develop their viewpoints and make decisions based on a number of different sources of data, assumptions, and mental models. Artificial intelligence, or intelligent agents as they are called, can be used to model the different viewpoints of consumers. The emergent behavior of these collections of intelligent agents can be used to determine the right decision. Hence, the artificial intelligence itself arbitrates the multiple viewpoints under a range of scenarios using war-gaming techniques.
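A minimal agent-based sketch of this idea follows. Everything here is hypothetical (the offers, the price-vs-quality utility, and the agent population are invented for illustration): each consumer agent weighs price against quality differently, and the emergent tally over many agents reveals which option the modeled population converges on.

```python
import random

class Agent:
    """A consumer viewpoint: price_weight in [0, 1], remainder on quality."""
    def __init__(self, price_weight):
        self.price_weight = price_weight

    def choose(self, offers):
        """Pick the offer maximizing this agent's weighted utility."""
        def utility(offer):
            price_score = 1.0 - offer["price"]  # cheaper is better (0..1 scale)
            return (self.price_weight * price_score
                    + (1 - self.price_weight) * offer["quality"])
        return max(offers, key=utility)["name"]

def simulate(offers, n_agents=1000, seed=0):
    """Emergent outcome: tally the choices of a diverse agent population."""
    rng = random.Random(seed)
    tally = {}
    for _ in range(n_agents):
        pick = Agent(rng.random()).choose(offers)
        tally[pick] = tally.get(pick, 0) + 1
    return tally

offers = [
    {"name": "budget",  "price": 0.2, "quality": 0.4},
    {"name": "premium", "price": 0.8, "quality": 0.9},
]
print(simulate(offers))  # split between 'budget' and 'premium' buyers
```

War-gaming, in this framing, means re-running the simulation under different scenarios (price changes, quality shifts) and observing how the emergent decision moves.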

Humans resist the adoption of artificial intelligence for decision making for several reasons. For one, the conclusions from data analytics may be against the "gut feel" or experience of the human decision maker. Under such circumstances it is easier to find fault with the machine logic than to accept an unexpected outcome.

In medical diagnosis, for example, machines crunch vast volumes of data and their conclusions can be more accurate than those of doctors. But in order to accept these conclusions the doctors need to start developing a trust in the artificial intelligence.

Similarly, machine intelligence has reduced the false positives that were common in the past in the practice of compliance with money laundering laws. However, this is not something that regulators can easily accept and it has to be demonstrated over a period of time. It takes a great deal of testing before humans overcome their initial subjective responses and recognize the business value of artificial intelligence.

How does artificial intelligence for arbitration work in the process of evaluating the decisions made by artificial intelligence?

RAO: The arbitration starts with setting goals (e.g., profitability) that an artificial intelligence engine is expected to achieve in both the long term and the short term. Goals beyond profit, such as reducing accidents in the case of autonomous cars, are also set. To measure the performance of the artificial intelligence engines, success metrics are defined, including acceptable trade-offs between the multiple goals. The performance is then tested on actual data, just as was reported for autonomous cars at the conference.
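One simple way to express such success metrics (a hedged sketch; the goal names, weights, and safety floor below are invented, not PwC's methodology) is a weighted score across goals, with a hard constraint encoding the limit of acceptable trade-offs:

```python
def score_engine(metrics, weights, safety_floor):
    """Weighted sum of goal metrics (all on a 0..1 scale).
    Returns None if the safety goal falls below its floor --
    no amount of profit can trade away that constraint."""
    if metrics["safety"] < safety_floor:
        return None
    return sum(weights[g] * metrics[g] for g in weights)

# Hypothetical goal weights and one candidate engine's measured performance.
weights = {"short_term_profit": 0.3, "long_term_profit": 0.5, "safety": 0.2}
candidate = {"short_term_profit": 0.7, "long_term_profit": 0.6, "safety": 0.95}

print(score_engine(candidate, weights, safety_floor=0.9))  # ≈ 0.70
```

Testing on actual data then means recomputing these metrics from observed outcomes rather than projections, and comparing candidate engines on the resulting scores.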

Much has been said of developing new products and services using vast volumes of data. PwC's annual survey of the insurance industry mentions cyber insurance. Could you explain how the ideas of arbitrating AI work in the course of developing cyber insurance, especially given that little actuarial data exists over the short history of network computing?

RAO: Cyber-attacks come from different sources, including lone hackers, fraud groups or nation states. The intrusion activity provides clues on what type of cyber-attack is taking place and the kind of weaknesses that hackers exploit. The eventual loss depends on the type of attack, such as whether it was meant to be denial of service, or theft of valuable information like intellectual property. It can also depend on who the actors are (e.g., lone hackers or nation states) and the type of controls that were implemented prior to the attack. In general, war gaming techniques help to determine expected losses in a variety of scenarios and the options for mitigating risk.
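A war-gaming exercise of this kind can be sketched as a Monte Carlo simulation. All the numbers below are hypothetical placeholders (scenario probabilities, loss distributions, and the mitigation factor are invented for illustration, not actuarial figures): each trial samples an attack scenario and a loss, and controls are modeled as scaling down the realized loss.

```python
import random

# Hypothetical attack scenarios: probability of occurrence and a
# (mean, std dev) loss distribution in dollars for each.
scenarios = [
    {"name": "lone hacker DoS",  "prob": 0.6, "loss": (10_000, 5_000)},
    {"name": "fraud-ring theft", "prob": 0.3, "loss": (200_000, 50_000)},
    {"name": "nation-state APT", "prob": 0.1, "loss": (2_000_000, 500_000)},
]

def expected_loss(scenarios, mitigation=1.0, trials=100_000, seed=0):
    """Monte Carlo estimate of expected loss per attack.
    mitigation < 1.0 models controls that shrink each realized loss."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        r, cum = rng.random(), 0.0
        for s in scenarios:          # sample a scenario by its probability
            cum += s["prob"]
            if r < cum:
                mean, sd = s["loss"]
                total += max(0.0, rng.gauss(mean, sd)) * mitigation
                break
    return total / trials

baseline = expected_loss(scenarios)
with_controls = expected_loss(scenarios, mitigation=0.4)
print(f"baseline: {baseline:,.0f}  with controls: {with_controls:,.0f}")
```

Comparing the baseline against the mitigated run puts a number on each control option, which is the raw material for both pricing a policy and advising the insured on risk reduction.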

Cyber insurance is already available, but the market is in its infancy. Exploratory analysis is underway to understand the problem and to build rudimentary models. While actuarial data is limited, there is a great deal of data available from the footprints hackers leave online.

Kishore Jethanandani, Market Researcher

Kishore Jethanandani's current professional focus is content development and communications of complex subject matter for clients in the technology and financial services industries in the San Francisco Bay and Boston areas. His career has evolved from industrial economic research with an accent on policy reform, to business journalism while he was in India, to marketing writing and research in the USA. A technology buff and a futurist, he has an instinct for spotting emerging technology and market trends and their likely impact on business strategy and competition. His personal slogan is "A hedgehog who senses the future." His website is www.futuristlens.com.



Acceptance Is Early
  • 6/30/2016 7:00:44 PM

Anand is right to highlight how acceptance is early. It will take acceptance for everyday items, and Alexa is creating that acceptance in the retail space. Amazon is finally demonstrating that product searches begin outside of a browser with Alexa, so that day is close.

Trusting A.I.
  • 6/30/2016 7:33:46 PM

I think trusting A.I. conclusions is the same as trusting a possibility. It's a matter of how likely that possibility is to be true. I'm as likely to trust A.I. as I am to trust anything else, which is to say nothing is infallible.

Re: Trusting A.I.
  • 7/31/2016 10:48:10 AM

That trust will be earned as we understand what is impacting the machine learning models that are at the heart of A.I. usage.
