Vocal Commands Arrive for Analytics


The whole purpose of analytics is to translate raw data into something a person can understand and act on in a meaningful way. That means collating the data and shaping it into a form that's easily readable by someone who isn't necessarily versed in parsing complex collections of information. It's not always easy.

One method of making it easier to understand could be on the horizon, though: voice commands. Google announced back in July that it was bringing voice controls to its analytics platform, letting users ask natural-language questions of their data and removing the need to navigate dashboards or build custom graphs.
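To make the idea concrete, here's a minimal, hypothetical sketch of what "asking a natural-language question of your data" can boil down to: parse the question into a metric and a date range, run the equivalent structured query, and read back the answer. The parsing pattern, metric name, and toy data below are assumptions for illustration, not Google's actual implementation.

    # Hypothetical sketch: turning a natural-language question into a structured
    # analytics query. Metric names, patterns, and data are illustrative only.
    import re
    from datetime import date, timedelta

    # Toy daily metric standing in for an analytics backend.
    DAILY_SESSIONS = {date(2017, 7, 1) + timedelta(days=i): 1000 + 25 * i
                      for i in range(30)}

    def answer(question: str) -> str:
        """Answer questions of the form 'How many sessions in the last N days?'"""
        m = re.search(r"how many (\w+) in the last (\d+) days", question.lower())
        if not m:
            return "Sorry, try something like 'How many sessions in the last 7 days?'"
        metric, days = m.group(1), int(m.group(2))
        if metric != "sessions":
            return "Unknown metric: " + metric
        end = max(DAILY_SESSIONS)                  # latest day with data
        start = end - timedelta(days=days - 1)
        total = sum(v for d, v in DAILY_SESSIONS.items() if start <= d <= end)
        return "You had {} sessions in the last {} days.".format(total, days)

    print(answer("How many sessions in the last 7 days?"))

The point of the sketch is only that the voice layer sits on top of the same query a dashboard would run; in a real system the hard part is the natural-language understanding, not the query itself.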

One of the key reasons Google spent as long as three years porting its voice technology over to its analytics platform was to enable anyone within a company to make use of the reams of data it stores and analyzes. You don't need to be an analytics expert to ask the system a question, which means fewer layers of middle management standing between individual employees and the data, and more direct access that can make a huge difference to how people do their jobs.

This should come as no surprise, though. With the expansion of consumer products such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and a host of other smart digital assistants, voice commands are part of a distinct evolution in how we interact with our devices -- there's no reason we shouldn't do the same with analytics.

(Image: Amazon)

Indeed, there's a true crossover of consumer tech and data analytics in services such as Sisense Everywhere, which recently integrated its analytics engine with Amazon's Alexa and with Slack. As VentureBeat highlights, that integration makes it possible for users to make natural-language requests of their data. VentureBeat does point to the potential limitations of such a system -- visual representations of data are still far more efficient than auditory ones -- but the interaction itself is certainly enhanced by voice commands.

Another aspect that will certainly be bolstered by this sort of interaction is machine learning. A system can learn our preferences from the data we look at through traditional means, but that restricts learning to users who are already well versed in accessing, curating, and analyzing data themselves, even with the assistance of an analytics system. With vocal commands making it possible for comparative laymen to make use of analyzed data sets as well, there's real potential for analytics systems, and the data they spit out, to become much more user friendly.

That layman access covers the whole gamut of users, too. Those on the lowest rungs of a business's ladder can use it for their own jobs, regardless of their data qualifications, but it also means that board members can leverage data more than ever before. They don't need to call in the IT guy just to have certain data sets presented to them. As Economic Times India explains, this could lead to much greater data-driven leadership from the top, which is where it could ultimately be most impactful.

Data is becoming democratized, not necessarily through analytics alone, but through more natural ways of reading it that make it actionable for the user.

Jon Martindale, Technology Journalist

Jon Martindale is a technology journalist and hardware reviewer who has covered new developments in the field for most of his professional career. In that time he's tested the latest and greatest releases from the world's big hardware companies, as well as writing about new software releases, industry movements, and Internet activism.

Re: beh
  • 10/24/2017 2:12:20 PM

@Joe That could easily lead to major misunderstandings. We get so used to certain acronyms that we don't often consider how other people may not know them and won't even register the letters correctly just from hearing them.

Re: beh
  • 10/24/2017 2:06:25 PM

@Ariella: For those reasons (and others), I tend to do most of my own transcribing.

Recently, a transcription of a conversation that I had transcribed by a third party quoted me as using an acronym with which I had zero familiarity. It took quite a while to figure out what I really said. (The transcriber drastically misheard/misunderstood me and apparently wasn't familiar with the subject.)

Re: beh
  • 10/23/2017 6:19:30 PM

@Joe well, that makes sense. Even common names have variations, like Sarah with and without the h. And there were any number of ways that my former last name tended to be misspelled. Also sometimes you think you hear one thing, but it is really something else. I had that with an acronym I wasn't familiar with. I spent quite a bit of time looking for it until I discovered that I had a letter wrong.

Re: beh
  • 10/23/2017 4:33:44 PM

> and a personal favorite -- people talking over each other!

Fortunately, I'm pretty good at this, considering all the time I've spent studying the plays of David Mamet. ;)

Re: beh
  • 10/23/2017 4:32:43 PM

@Ariella: Court reporters are far more trained, overall.

That said, I have seen typos/mistranscriptions here and there in court transcriptions. And, after and in between depositions, court reporters have asked me how to spell various names or to clarify something that was said.

Re: beh
  • 10/23/2017 1:11:16 PM

@Terry Actually, that's a great idea! Accuracy may matter if you're relying on Alexa to prevent suicide. That was in today's WSJ: "Alexa, Can You Prevent Suicide?"

From the article:

 

We spoke with Toni Reid, the vice president of Alexa experience and Echo devices, and Rohit Prasad, the vice president and head scientist for Alexa, about the process of sensitivity training for artificial intelligence.

 

WSJ: Why might people talk with Alexa differently?

REID: I think it has to do with having Alexa as part of the Echo—a device that is sort of meant to disappear in the home. You don't have to pull out a phone and unlock it or turn on a computer.

PRASAD: Once you're in the hands-and-eyes-free mode, speech becomes the natural way you interact. And when that happens, the dynamics of the conversation are much smoother.

REID: People started having conversations with Alexa. There were emphatic statements—"Alexa, I love you"—which don't require a response. But they also wanted to know about Alexa. (What's her favorite color? It's infrared.) And that part honestly surprised us a little. Customers treated Alexa as a companion, someone they could talk to. One thing that really surprised us was the number of times customers were asking Alexa to marry them. (She says, "Let's just be friends.")

When did you first realize that Alexa was being asked questions about loneliness, depression and suicide?

REID: As soon as we launched. Customers started to share more information about themselves.

PRASAD: "Alexa, I'm depressed," was definitely one of the early ones we spotted. Alexa didn't understand. Those are easy to catch.

REID: We had some answers prepared, and we went back and made sure those responses felt in line with Alexa's personality and, more important, with what customers needed in each case. We wanted Alexa to be compassionate and then helpful—to give the customer the kind of information they needed. In the case of depression, it was the depression and suicide hotline number.

Re: beh
  • 10/23/2017 1:04:21 PM

Thanks for the prompt, Ariella... I just had a vision of a "transcribe-off" between Alexa and Siri. Maybe Judge Judy would agree to referee, or it becomes a subplot on The Good Fight.

Re: beh
  • 10/23/2017 1:00:09 PM

Point taken, Joe... humans transcribing humans is fraught enough given accents, ambient noise, mumbling, and a personal favorite -- people talking over each other!

Since AI is math-based, it's dealing in probabilities, and will match the most probable answer. And that can be a long way from the actual, correct answer, as we know from countless bad auto-transcriptions or auto-correct fun.

Re: beh
  • 10/22/2017 11:00:18 AM

@Joe do the ones who take down everything said in court tend to be more accurate than that? They don't seem to have the option to say "repeat that."

Re: beh
  • 10/22/2017 10:43:48 AM

@T Sweeney: Heck, humans can't even keep up with accurate transcription necessarily.

Back when I had a brief stint as a transcriptionist long long ago, I would have to listen to a few things multiple times--and I wasn't alone. Ultimately, transcriptionists--human and computer alike--resort to using their best guess. In the case of humans, they may even put a "(?)" in case they're not sure (and are savvy enough to know that they aren't sure).

Sometimes when I'm transcribing interviews today, I still have trouble and have to listen to things multiple times. Once in a while I'll have someone else listen to something if I'm having an especially hard time.

Maybe that's what we need in this space and others. AI collaboration.
