An unchecked flow of low-quality data is all it takes to kill your organization's reputation, and that reputation may take great effort to resuscitate.
And one big flub can do the damage, as Tom Redman, "The Data Doc" and founder of Navesink Consulting, discusses in his blog from earlier this week. In the blog, he explores the much-reported $2 trillion error by Standard & Poor's, which downgraded the US credit rating in a historic move.
"The error may not have mattered to the fact of the downgrade. But it can't help S&P's reputation," elaborated Redman in a follow-on AllAnalytics.com e-chat.
But more important than the S&P debacle itself, Redman added, is that the factors leading to the error are hardly unique.
"I want to build on this point," Redman said. "The most important questions in my blog pertain to other companies, not S&P. If they think they are not at risk, they should think again."
During the live chat, Redman cited one study suggesting that 20 percent of all data records contain at least one error, enough to cause considerable problems downstream. Sadly, that figure is not news to most companies and organizations; many simply accept or ignore the uncertainty and considerable risks it poses, he said.
"I find it a bit of a paradox here. Almost anyone I talk to readily agrees their data is bad, costs them big-time, and subjects them to risk. But somehow individual awareness does not translate into group action."
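One way to make a figure like Redman's 20 percent concrete is to run each record through a set of validation rules and count how many records fail at least one. The sketch below is a minimal illustration of that idea; the rules and field names (`name`, `email`, `age`) are hypothetical, not drawn from the chat or the study.

```python
def has_error(record):
    """Return True if the record fails any validation rule."""
    rules = [
        lambda r: bool(r.get("name", "").strip()),  # name must be present
        lambda r: "@" in r.get("email", ""),        # email must contain '@'
        lambda r: r.get("age", -1) >= 0,            # age must be non-negative
    ]
    return not all(rule(record) for rule in rules)

def error_rate(records):
    """Fraction of records containing at least one error."""
    if not records:
        return 0.0
    return sum(has_error(r) for r in records) / len(records)

records = [
    {"name": "Ada", "email": "ada@example.com", "age": 36},
    {"name": "",    "email": "bob@example.com", "age": 41},   # missing name
    {"name": "Cy",  "email": "cy.example.com",  "age": 29},   # malformed email
    {"name": "Dee", "email": "dee@example.com", "age": -1},   # negative age
    {"name": "Eve", "email": "eve@example.com", "age": 52},
]
print(f"{error_rate(records):.0%} of records contain at least one error")
# → 60% of records contain at least one error
```

Even a toy audit like this turns "our data is bad" from an individual hunch into a number a group can act on, which is exactly the gap Redman describes.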
So, how can companies assess the threat posed by poor data quality and begin amassing better-quality data on which to base smarter decisions?
The answer involves a series of decisions by a company or organization about how best to manage its data.
"I've always just said we need to correct current data errors and prevent future errors," said Danette McGilvray, principal of Granite Falls Consulting and a chat participant. "Of course, there is always debate about which of those two should be addressed."
However, McGilvray and Redman agreed on the first step. And that, they said, is for companies to decide why they want the data in the first place. The decision will help improve the data collection process and, quite possibly, simplify matters by eliminating collection of data for which there is no specific purpose.
"Understanding where the data flows after it is entered and the impact if the quality is poor can go a long way when training people, about not just the 'how' but the 'why,'" McGilvray said.
Everybody would do well to consider this comment McGilvray relayed from one of her clients: "If I had known years ago when I was entering data that this is how it was used, I would have been a lot more careful."