Survey Again & Again for 'Just Right' Results

Taking customer surveys these days makes me feel like Goldilocks.

In a previous piece, Why One-Question Surveys Make for Bad Analytics, I described a time when I was on the phone with a very polite, well-spoken Netflix customer service representative who assured me that he would handle my problem. After the phone call, I took a customer response survey, gladly giving a "satisfied" rating. Later, I found out that the representative had handled the problem incorrectly -- making me far more dissatisfied about the problem (and Netflix) than when I had called originally.

In this case, Netflix surveyed me too soon.

In another instance, I received a phone call from my bank asking me detailed questions about my most recent teller service at my local branch. I had last been to the bank so long ago that I couldn't remember which of two visits (one mildly positive and the other very negative) was my most recent. I gave a few vague answers as best as I could, but ultimately I had to confess to the person on the other end of the line that I couldn't remember enough about my most recent customer service experience with the bank to honestly answer his questions one way or the other.

In that case, the bank surveyed me too late.

Customer surveys are like porridge. To be any good at all, they must be done just right -- including being given at just the right time.

Like Goldilocks, customers (and all human beings, really) are fickle. In this constantly fluctuating and changing world of ours, the way we feel about a company and our dealings with it while we are interacting with it may not reflect how we feel about it the next hour, day, week, or month. If a company waits to gauge our feelings, however, it risks surveying a customer who cannot remember -- and, consequently, who may provide no feedback, embellished feedback, or patently false feedback.

The problem is that nobody really knows when the best time to survey a customer is. The solution is therefore simple: Survey the customer multiple times for a more complete picture.

That is not to say one has to be overly obsessive about surveying. Maintaining a high participation rate is one of the keys to successful survey efforts, so you don't want to annoy your customers with overly frequent requests.

Alas, businesses are so afraid of scaring off respondents that customer survey innovation is stunted. Instead of finding ways to gather higher quality information from a customer survey, many companies are designing oversimplified, "executive summary" surveys and carelessly tossing them at consumers. Some broad strokes of information may be captured from such surveys, but action often can't be taken on the data.

A brief survey administered a few times (with minor modifications) at particularized intervals after an interaction can provide highly actionable data. Measuring how a customer's attitude changes (or doesn't) over time will give a clearer picture of the impact of certain types of positive and negative customer service experiences -- and, therefore, a better picture of the ROI of how various aspects of the customer service experience are managed and prioritized. This is far more useful and practicable than a single snapshot.

It is this quality of practicability that is the key to cutting-edge analytics. Only by having the most practicable data possible can you ensure that you're getting your business just right.

Joe Stanganelli, Attorney & Marketer

Joe Stanganelli is founder and principal of Beacon Hill Law, a Boston-based general practice law firm.  His expertise on legal topics has been sought for several major publications, including U.S. News and World Report and Personal Real Estate Investor Magazine. 

Joe is also a communications consultant.  He has been working with social media for many years -- even in the days of local BBSs (one of which he served as Co-System Operator for), well before the term "social media" was invented.

From 2003 to 2005, Joe ran Grandpa George Productions, a New England entertainment and media production company. He has also worked as a professional actor, director, and producer.  Additionally, Joe is a produced playwright.

When he's not lawyering, marketing, or social-media-ing, Joe writes scripts, songs, and stories.

He also finds time to lose at bridge a couple of times a month.

Follow Joe on Twitter: @JoeStanganelli

Also, check out his blog.


Re: Timing
  • 10/10/2011 9:47:37 AM

You're absolutely right, adhand. Imposing on people like that is not the way to go.

Re: Timing
  • 10/10/2011 1:03:10 AM

I feel that surveys should be targeted properly. You cannot run surveys by posting them on public sites and making them a hassle for all users (e.g., pop-up surveys). That way you will not get any information in the right manner. Targeting, or allowing people to opt into the survey if they really want to go through it, would be best.

  • 10/6/2011 12:57:06 AM

I agree, timing is important. I hate it when I go to a website and a popup immediately comes up asking if I want to take a survey. I just got there; how can I respond to a survey?

I wonder about the completeness of the data in terms of getting people to complete multiple surveys. I like the hospital approach of priming the respondent to reply to the more formal survey. But if you have a series of three surveys to get a more complete picture, what do you do when they don't complete the second and third surveys? Throw out the data? Segment it into a separate population?

Re: *Re: Budget impact
  • 10/5/2011 1:33:16 AM

Oh yes. They take a very serious look. Hospitals' survey results are also reported to Medicaid and Medicare and many other state and federal agencies. The scores are also shared and compared with hospitals around the nation.

Unlike other insurance programs, Medicare's insurance payments are based not only on the service provided, but also on the quality of care. As of 2009, Medicare will not pay for preventable matters that occur within a hospital. So hopefully all hospitals are paying more attention to this data.

Re: *Re: Budget impact
  • 10/4/2011 9:21:24 PM

Hi Seth,

I'm assuming from the way you put this that at one time the hospital did things differently. I'm wondering what made them decide to go to a double-survey approach, and whether they made any effort to measure (beyond anecdotally) the change in the quality of responses.

Re: *Re: Budget impact
  • 10/4/2011 2:43:20 PM

For us, the oral survey is unofficial and is one tool to root out issues before they become problems and to increase customer satisfaction on the spot. It's also easier for the patient because everything is fresh in their minds, and easier for us to discover the root of an issue since we can ask more follow-up questions. We don't ask the customer to rate us at this time, but rather ask more open-ended questions, make sure that their medications have been explained to them, and inspect the room.

I'm sure, yes, that some patients want to be polite and may not bring up issues that exist. At the same time, the fact that they are given the opportunity to do so helps increase patient satisfaction. Then there are those, of course, who make their feelings quite clear.

The follow-up written/multiple-choice survey is the official survey, which the patient will receive in the mail later and send back. If there is a negative remark, we do go back to the unofficial survey to see if there was something we could have done differently; however, the results are kept separate.

Re: *Re: Budget impact
  • 10/4/2011 2:27:09 PM

That's terrific, Seth.

What happens if there is a conflict/discrepancy between the two surveys (for instance, a favorable oral survey followed by an unfavorable written survey, and vice versa)?

Also, do you find that, based upon the subsequent responses in the written surveys, that the oral surveys' results are inflated in any way? (For instance, from the social pressure to respond favorably to the person asking face-to-face?)

Re: *Re: Budget impact
  • 10/4/2011 2:21:15 PM

At the hospital I work for, we survey twice. First is a brief oral survey, during which we inform patients that we are going to send them a more detailed written survey later. We've found this approach increases the response rate and yields more detailed responses.


Re: *Re: Budget impact
  • 10/4/2011 10:22:17 AM

Hi Joe,

It occurs to me that another problem is the question of what surveys really tell us. Today, responding to complaints quickly and proactively would seem to represent the best ROI. People are not shy about complaining when they feel mistreated (take this guy), and sometimes developing an effective way to "listen" for these comments -- as well as for positive comments spontaneously offered -- can be the best kind of data gathering there is.

*Re: Budget impact
  • 10/4/2011 9:50:04 AM

Hi, Ariella.

It's more than that, too. By being able to chart fluctuating customer sentiment, companies will be able to pinpoint which areas they need to improve and where the greatest ROI is to be had in terms of action.
