Three Little Tricks for Boosting Survey Effectiveness

The inherent obstacle to good survey data is subjectivity: every respondent is different, with disparate backgrounds, motivations, and prejudices. Without objectivity, you have no benchmark, and survey data lacks meaning.

Additionally, respondents may, for their own reasons, respond outright dishonestly to surveys (especially customer surveys and workplace surveys).

Even more problematically, many surveys encourage subjective, untruthful responses because of how they are written.

But with some subtle trickery, organizations can force objective honesty upon otherwise subjective respondents -- and thereby ensure higher-quality data, says Doug Williamson, president and CEO of The Beacon Group, an organizational consultancy.

  1. Eradicate middle ground. "The first thing you do is eliminate the fence people sit on," Williamson says. Most surveys ask respondents to rate something on a five-point scale -- or, sometimes, a 10-point scale. "Statistically... there is a disproportionate number of respondents who pick the middle of the scale," he says. "That middle of the scale, that 'three out of five,' doesn't provide the organization with any valuable data."

    To keep people from selecting midpoints (e.g., threes in a five-point scale, fives in a 10-point scale, etc.), Williamson advises using a four-point scale, which has no midpoint. In his words: "You're either bad or really bad, or you're good or really good."

    The Beacon Group has found that in four-point surveys, respondents are more honest, more often alternating between the three and the four and between the one and the two. Consequently, respondents who would otherwise be disposed to blindly give all "perfect" scores become more comfortable giving -- and hence more inclined to give -- lower, more objective scores.
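For teams analyzing their own survey results, the midpoint problem is easy to quantify. Below is a minimal Python sketch (illustrative only; the function name and sample data are assumptions, not from the article) that measures how much of a five-point survey's responses pile up on the uninformative midpoint -- something a four-point scale, having no midpoint, avoids by design.

```python
from collections import Counter

def midpoint_share(responses, scale_max):
    """Fraction of responses landing on the scale's midpoint.

    Returns 0.0 for even-point scales, which have no integer midpoint.
    """
    if scale_max % 2 == 0:
        return 0.0
    midpoint = (scale_max + 1) // 2
    counts = Counter(responses)  # missing keys count as zero
    return counts[midpoint] / len(responses)

five_point = [3, 3, 4, 3, 2, 3, 5, 3, 3, 4]   # hypothetical ratings
four_point = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]   # hypothetical ratings

print(midpoint_share(five_point, 5))  # 0.6 -- most answers are fence-sitting threes
print(midpoint_share(four_point, 4))  # 0.0 -- a four-point scale has no midpoint
```

A high midpoint share is a warning sign that a question, as worded, is collecting little usable signal.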

  2. Sharpen definition. Theater revolutionary Konstantin Stanislavski famously said, "Generalization is the enemy of all art." It is also the enemy of analytics.

    Most surveys ask respondents to rate someone or something in a rather general way, with little to no specificity beyond "best" and "worst." This makes people feel better about haphazardly giving a rating based on a subjective feeling.

    To avoid bad data, Williamson advises, "You set a high standard on the descriptors, on the rating scale. You don't go namby-pamby." In other words, to get truly objective results, surveys must only offer answers that are so sharply defined as to impose thoughtful objectivity upon the respondent.

    In one of The Beacon Group's four-point scales, Williamson says, a typical definition for a "4" might be "mastery": "The salesperson never fails to deliver masterful service." The drama of seeing wonderfully descriptive absolutes like "mastery," "never," and "masterful" on a survey forces a respondent to step back mentally and scale responses to the exacting language of the questions and answers.

  3. Use frequency as your basic metric. The two basic types of survey questions are those of frequency and those of extent. Frequency questions ask "How often?" Extent questions ask "How much?" Respondents provide more honest answers when responding to frequency questions than when responding to extent questions.

    Respondents tend to "inflate" their responses when answering "How much?" questions. Conversely, Williamson explains, "When you're asked the more tangible question of 'How often did you get good service?'... you tend... to be more comfortable, and maybe even more objective."

By taking these three simple steps to ask better survey questions, organizations can reap greater objectivity and greater honesty -- and therefore dramatically improve the quality of their business intelligence.

Joe Stanganelli, Attorney & Marketer

Joe Stanganelli is founder and principal of Beacon Hill Law, a Boston-based general practice law firm.  His expertise on legal topics has been sought for several major publications, including U.S. News and World Report and Personal Real Estate Investor Magazine. 

Joe is also a communications consultant.  He has been working with social media for many years -- even in the days of local BBSs (one of which he served as Co-System Operator for), well before the term "social media" was invented.

From 2003 to 2005, Joe ran Grandpa George Productions, a New England entertainment and media production company. He has also worked as a professional actor, director, and producer.  Additionally, Joe is a produced playwright.

When he's not lawyering, marketing, or social-media-ing, Joe writes scripts, songs, and stories.

He also finds time to lose at bridge a couple of times a month.

Follow Joe on Twitter: @JoeStanganelli

Also, check out his blog.


Re: Survey effectiveness
  • 10/20/2011 8:33:08 PM

It's true, there are many surveys that are not coded properly. 

Regarding the 'Eradicate middle ground' section in the blog, this is especially important given diverse cultures.  Some cultures are inclined to say things are okay because they tend to understate the negative when given the option.

Re: Re: Re: data over time
  • 10/19/2011 5:57:17 PM

It would be in a network's best interest to ask the right questions.  The right questions should include something about renewing for another season (as you stated before).  I can think of many shows that might have been renewed instead of cancelled if networks had the sense to ask the right questions of the right audience.  It seems so much is decided without bothering to check with the audience.  I know there must be more involved in these decisions but I can't help wondering what good could come from data gathered in a good survey.

Re: Survey effectiveness
  • 10/19/2011 2:29:54 PM

@Maryam, can you give us an example or two (without naming names, of course, if you don't want to)?

Survey effectiveness
  • 10/19/2011 12:52:17 PM

Joe, great points. In today's create-web-surveys-on-the-fly mentality, you make an excellent point: developing a great survey that yields meaningful results requires skill and an understanding of research methods. Of late I am seeing so many biased surveys sent out that it is very scary that companies are making decisions based on the results of these obviously suspect surveys.

Re: Re: Re: data over time
  • 10/19/2011 10:41:40 AM

That takes a great deal of objectivity, Joe. Not everyone can distinguish between what they want to be true and what is true in a concrete sense.

Re: More specifics, fewer responses?
  • 10/18/2011 8:55:02 PM

Shawn, to use your journalism analogy a bit more, it is common wisdom now that articles for the web ought to be shorter than what you might be able to get away with in print. Why? People tend to have shorter attention spans when reading online.

Re: Re: Re: data over time
  • 10/18/2011 3:57:14 PM

Good point, Cordell.  When responding to surveys, personality tests, and so on, I find myself having to step back and say: "No, that's the answer I *want* to be true, but THIS is the true answer."

I also found myself in a similar situation earlier today.  I responded to a survey asking me about the most recent episode of House.  One of the questions asked, "How excited are you about watching this season of House?"  The choices were "Very Excited," "Somewhat Excited," and so on.  I couldn't decide between those choices, because "Very" seemed to overstate it, and "Somewhat" seemed to understate it.  Because "Very" was the most extreme answer, however, I was going to settle for "Somewhat."

Then I remembered that there is speculation that this season of House could be the last.  Because I would like to see the series keep going, I answered "Very excited," in fear of a less positive response being taken into account when Fox is deciding whether to renew the show.

In my case, my "adjusted" response was because Fox wasn't asking the right question: "How much would you miss this show if we didn't renew it?"

Re: Re: Re: data over time
  • 10/18/2011 3:52:07 PM

No, by frequency, I mean/Doug Williamson means: "How often?" As in, "How often did the clerk provide masterful service? Always, Sometimes, Seldom, or Never."

The same question asked a different way could be useful -- Myers-Briggs and similar tests employ this method.  That said, it doesn't do anything to stave off response inflation.

Re: Re: Re: data over time
  • 10/18/2011 3:48:45 PM

Another thought, Joe, since you're the expert on surveys!  I recently read several books on neuromarketing.  One in particular, "How We Decide" by Jonah Lehrer, suggests that when we respond to surveys we tend to overthink our answers and don't really respond truthfully.  Not that we are intentionally trying to be deceptive; it's just that when we think about what we're thinking, we're influenced by all kinds of exterior things.  We know what we want, but we typically say otherwise.

Re: Re: Re: data over time
  • 10/18/2011 3:43:02 PM

This is really interesting.  Joe, when you talk about frequency, I assume you mean how many respondents answered a certain way.  What about multiple questions in the survey asked in different ways to confirm preferences?  How common or useful is this?
