As source material, I'm using my recent experience taking a written survey on a television show about which I knew nothing -- including its existence. To start, the survey asked how often I watch the show, offering various frequencies as choices. Naturally, I selected, "Never."
The next question was, "Why did you say you never watch [the television show]?" Here, I could pick the "most correct" answer from several -- at least eight -- options. The first answer was, "Didn't know it was a television show."
I didn't know it was a television show. Again, I had never heard of it. So I promptly selected that first answer without bothering to read the rest.
I was just about to click through to the next page when my gaze happened to catch an option, much farther down the list, that said, "Never heard of it." Because that response was more accurate, I changed my answer accordingly.
This was serendipity, however. I was not particularly up for carefully reading every one of the many options for each of several questions (a symptom of the "survey fatigue" discussed here, I suppose). I easily could have missed the correct response and given the survey less accurate data.
Indeed, "never heard of it" is an even more likely response than "[heard of it but] didn't know it was a television show," but a person ignorant of the show's existence would be sorely tempted (as I was) to select the latter option if it were presented first.
If you want the best data from your written surveys, make sure people understand the questions you're asking -- which means ensuring that they read them in full. Keep this in mind when determining how to order the responses. To get the best data, make sure that the more likely catch-all response comes before the less likely qualifier.
Also, because you want to make sure people read what they're responding to (at least one major retailer asks survey respondents a special question, as a fellow blogger wrote about here, to see if they're paying attention), limit the number of possible responses. Precision is great, but any increase in the length of your survey is a tax on your respondents' time -- and thus potentially a tax against your return on investment.
As organizational consultant Doug Williamson mentioned in an AllAnalytics.com interview several months ago, more precise but intermediate "hedge" answers may be less helpful. He recommends infusing your survey responses with sharp definition while limiting the number of possible responses, forcing respondents to pick the best option. (For questions asking people to rate something on a scale, Williamson advises limiting the scale to four points.)
If, for whatever reason, you deem that none of this will work for you, you have an alternative: Allow for open responses and employ text analytics to categorize the answers you get (such as in this example).
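As a minimal sketch of that open-response approach, here is one of the simplest forms of text analytics: keyword-based categorization. The categories and keywords below are hypothetical examples, not from any real survey; a production system would likely use a proper text-classification library instead.

```python
# Minimal sketch: bucket open-ended survey responses by keyword matching.
# Categories and keywords are hypothetical examples for illustration only.

CATEGORIES = {
    "never heard of it": ["never heard", "didn't know it existed", "unaware"],
    "no interest": ["not interested", "boring", "don't care"],
    "no time": ["no time", "too busy"],
}

def categorize(response):
    """Return the first category whose keywords appear in the response,
    or "other" if nothing matches."""
    text = response.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

# Example open-ended responses to "Why do you never watch the show?"
responses = [
    "I'd never heard of the show before this survey.",
    "Honestly, I'm just too busy to watch much TV.",
    "It conflicts with another program I record.",
]

for r in responses:
    print(f"{categorize(r)}: {r}")
```

Responses that match no keyword fall into an "other" bucket, which you can review by hand -- often a good source of answer options you didn't think to offer.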
It all comes down to planning your survey efforts effectively (hint: plan backwards). Figure out to the penny how much each piece of information is worth to you. From that, determine which information is really worth finding out.
If you're spewing out survey questions, each with seven or more responses, you probably haven't done your homework. In that case, the answers probably aren't optimally ordered for encouraging understanding and accuracy, either.
The likely result of your haphazard selection of haphazardly ordered answers? Haphazard responses by people who just don't care.
What are your survey peeves? Share on the message board below.