The inherent obstacle to good survey data is subjectivity: every respondent is different, with disparate backgrounds, motivations, and prejudices. Without objectivity, you have no benchmark, and survey data lacks meaning.
Additionally, respondents may, for their own reasons, respond outright dishonestly to surveys (especially customer surveys and workplace surveys).
Even more problematically, many surveys encourage subjective, untruthful responses because of how they are written.
But with some subtle trickery, organizations can force objective honesty upon otherwise subjective respondents -- and thereby ensure higher-quality data, says Doug Williamson, president and CEO of The Beacon Group, an organizational consultancy.
Eradicate middle ground. "The first thing you do is eliminate the fence people sit on," Williamson says. Most surveys ask respondents to rate something on a five-point scale -- or, sometimes, a 10-point scale. "Statistically... there is a disproportionate number of respondents who pick the middle of the scale," he says. "That middle of the scale, that 'three out of five,' doesn't provide the organization with any valuable data."
To keep people from selecting midpoints (e.g., threes in a five-point scale, fives in a 10-point scale, etc.), Williamson advises using a four-point scale, which has no midpoint. In his words: "You're either bad or really bad, or you're good or really good."
The Beacon Group has found that respondents are more honest on four-point surveys, varying more often between three and four and between one and two. Consequently, respondents who would otherwise be disposed to blindly give all "perfect" scores are more comfortable giving, and hence more inclined to give, lower, more objective scores.
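As a rough sketch of the idea (the question scale and tallying code here are illustrative, not The Beacon Group's actual instrument), an even-numbered scale can be modeled so that every accepted response must lean negative or positive, because there is simply no midpoint to accept:

```python
from collections import Counter

# Hypothetical four-point scale: no midpoint, so every response leans.
FOUR_POINT = (1, 2, 3, 4)  # 1-2 lean negative, 3-4 lean positive

def record(responses):
    """Validate and tally responses, rejecting anything off-scale."""
    for r in responses:
        if r not in FOUR_POINT:
            raise ValueError(f"{r} is not on the four-point scale")
    return Counter(responses)

tally = record([4, 3, 2, 3, 1, 4, 3])
negative = tally[1] + tally[2]   # forced-lean negatives
positive = tally[3] + tally[4]   # forced-lean positives
print(negative, positive)        # → 2 5
```

Because a "3 out of 5" can never appear, even a simple tally like this splits cleanly into a negative camp and a positive camp, which is exactly the data the fence-sitting midpoint withholds.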
Sharpen definition. Theater revolutionary Konstantin Stanislavski famously said, "Generalization is the enemy of all art." It is also the enemy of analytics.
Most surveys ask respondents, in a rather general way, to rate someone or something with little to no specificity beyond "best" and "worst." This makes people feel better about haphazardly giving a rating based on a subjective feeling.
To avoid bad data, Williamson advises, "You set a high standard on the descriptors, on the rating scale. You don't go namby-pamby." In other words, to get truly objective results, surveys must only offer answers that are so sharply defined as to impose thoughtful objectivity upon the respondent.
In one of The Beacon Group's four-point scales, Williamson says, a typical definition for a "4" might be "mastery": "The salesperson never fails to deliver masterful service." The drama of seeing such wonderfully descriptive absolutes as "mastery," "never," and "masterful" on a survey forces a respondent to step back mentally and scale responses to the exacting language of the questions and answers.
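A minimal sketch of such a sharply anchored scale follows. Only the "4 = mastery" wording paraphrases Williamson's example from the article; the descriptors for 1 through 3 are invented here purely for illustration:

```python
# Hypothetical four-point anchors with exacting, absolute descriptors.
# Only the "4" wording comes from the article; 1-3 are invented here.
DESCRIPTORS = {
    4: "Mastery: the salesperson never fails to deliver masterful service.",
    3: "Consistent: the salesperson reliably delivers good service.",
    2: "Uneven: the salesperson's service frequently falls short.",
    1: "Failing: the salesperson does not deliver acceptable service.",
}

def render_question(prompt):
    """Render a prompt with its fully described answer scale."""
    lines = [prompt]
    for score in sorted(DESCRIPTORS, reverse=True):
        lines.append(f"  {score} - {DESCRIPTORS[score]}")
    return "\n".join(lines)

print(render_question("Rate the service you received:"))
```

The point of storing a full sentence per score, rather than a bare "good"/"bad," is that the respondent must measure experience against absolute language like "never fails" before selecting a number.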
Use frequency as your basic metric. The two basic types of survey questions are those of frequency and those of extent. Frequency questions ask "How often?" Extent questions ask "How much?" Respondents provide more honest answers when responding to frequency questions than when responding to extent questions.
Respondents tend to "inflate" their responses when answering "How much?" questions. Conversely, Williamson explains, "When you're asked the more tangible question of 'How often did you get good service?'... you tend... to be more comfortable, and maybe even more objective."
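To make the distinction concrete (the question wording below is illustrative, not taken from any Beacon Group survey), the same topic can be phrased as an extent question or as a frequency question, with the frequency version anchored in countable events:

```python
# Illustrative pairing: each topic phrased two ways. The "frequency"
# phrasing asks "How often?" about concrete events; the "extent"
# phrasing asks "How much?" about a feeling, which invites inflation.
QUESTIONS = {
    "service": {
        "extent":    "How satisfied are you with our service?",
        "frequency": "How often did you receive good service this month?",
    },
}

# Frequency anchors stay countable; four points leave no midpoint.
FREQUENCY_ANCHORS = ["Never", "Rarely", "Often", "Always"]

def question_for(topic, prefer="frequency"):
    """Return the preferred phrasing for a topic (frequency by default)."""
    return QUESTIONS[topic][prefer]

print(question_for("service"))
```

Defaulting to the frequency phrasing reflects the article's advice: "How often?" ties the answer to tangible occurrences, while "How much?" invites an inflated gut response.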
By taking these three simple steps to ask better survey questions, organizations can reap greater objectivity and greater honesty -- and therefore dramatically improve the quality of their business intelligence.
It's true: there are many surveys that are not coded properly.
Regarding the 'Eradicate middle ground' section of the blog: this is especially important given diverse cultures. Some cultures are inclined to say things are okay because they tend to understate the negative when given the option.
It would be in a network's best interest to ask the right questions. The right questions should include something about renewing for another season (as you stated before). I can think of many shows that might have been renewed instead of cancelled if networks had the sense to ask the right questions of the right audience. It seems so much is decided without bothering to check with the audience. I know there must be more involved in these decisions but I can't help wondering what good could come from data gathered in a good survey.
Joe, great points. In today's create-web-surveys-on-the-fly mentality, you make an excellent point: developing a great survey that yields meaningful results requires skill and an understanding of research methods. Of late I am seeing so many biased surveys sent out that it is very scary companies are making decisions based on the results of surveys that are obviously very suspect.
Shawn, to use your journalism analogy a bit more, it is common wisdom now that articles for the web ought to be shorter than what you might be able to get away with in print. Why? People tend to have shorter attention spans when reading online.
Good point, Cordell. I myself find myself, when responding to surveys, personality tests, and so on, having to step back and say to myself: "No, that's the answer I *want* to be true, but THIS is the true answer."
I also found myself in a similar situation earlier today. I responded to a survey asking me about the most recent episode of House. One of the questions asked, "How excited are you about watching this season of House?" The choices were "Very Excited," "Somewhat Excited," and so on. I couldn't decide between those choices, because "Very" seemed to overstate it, and "Somewhat" seemed to understate it. Because "Very" was the most extreme answer, however, I was going to settle for "Somewhat."
Then I remembered the speculation that this season of House could be the last. Because I would like to see the series keep going, I answered "Very Excited," for fear that a less positive response would be taken into account when Fox decides whether to renew the show.
In my case, my "adjusted" response was because Fox wasn't asking the right question: "How much would you miss this show if we didn't renew it?"
Another thought, Joe, since you're the expert on surveys! I recently read several books on neuromarketing. One in particular, "How We Decide" by Jonah Lehrer, suggests that when we respond to surveys we tend to overthink our answers and don't really respond truthfully. Not that we are intentionally trying to be deceptive; it's just that when we think about what we're thinking, we're influenced by all kinds of exterior things. We know what we want, but we typically say otherwise.
This is really interesting. Joe, when you talk about frequency, I assume you mean how many respondents answered a certain way. What about multiple questions in the survey asked in different ways to confirm preferences? How common or useful is this?