Taking customer surveys these days makes me feel like Goldilocks.
In a previous piece, "Why One-Question Surveys Make for Bad Analytics," I described a time when I was on the phone with a very polite, well-spoken Netflix customer service representative who assured me that he would handle my problem. After the call, I took a customer response survey and gladly gave a "satisfied" rating. Later, I found out that the representative had handled the problem incorrectly -- leaving me far more dissatisfied with the problem (and Netflix) than when I had originally called.
In this case, Netflix surveyed me too soon.
In another instance, I received a phone call from my bank asking me detailed questions about my most recent teller service at my local branch. I had last been to the bank so long ago that I couldn't remember which of two visits (one mildly positive and the other very negative) was my most recent. I gave a few vague answers as best as I could, but ultimately I had to confess to the person on the other end of the line that I couldn't remember enough about my most recent customer service experience with the bank to honestly answer his questions one way or the other.
In that case, the bank surveyed me too late.
Customer surveys are like porridge. To be any good at all, they must be done just right -- including being given at just the right time.
Like Goldilocks, customers (and all human beings, really) are fickle. In this ever-changing world of ours, the way we feel about a company while we are interacting with it may not reflect how we feel about it the next hour, day, week, or month. If a company waits to gauge our feelings, however, it risks surveying a customer who cannot remember -- and who, consequently, may provide no feedback, embellished feedback, or patently false feedback.
The problem is that nobody really knows when the best time to survey a customer is. The solution is therefore simple: Survey the customer multiple times for a more complete picture.
That is not to say one has to be obsessive about surveying. Maintaining a high participation rate is one of the keys to a successful survey effort, so you don't want to annoy your customers with overly frequent requests.
Alas, businesses are so afraid of scaring off respondents that customer survey innovation is stunted. Instead of finding ways to gather higher quality information from a customer survey, many companies are designing oversimplified, "executive summary" surveys and carelessly tossing them at consumers. Some broad strokes of information may be captured from such surveys, but action often can't be taken on the data.
A brief survey administered a few times (with minor modifications) at specific intervals after an interaction can provide highly actionable data. Measuring how a customer's attitude changes (or doesn't) over time gives a clearer picture of the impact of certain types of positive and negative customer service experiences -- and, therefore, a better picture of the ROI of how various aspects of the customer service experience are managed and prioritized. This is far more useful and practicable than a single snapshot.
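The wave-over-wave measurement described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation from the article: the data, the 1-to-5 satisfaction scale, and the two follow-up waves are all hypothetical assumptions made for the example.

```python
from statistics import mean

# Hypothetical survey data: each response records which customer answered,
# which follow-up wave it belongs to (e.g. wave 1 = one day after the
# interaction, wave 2 = one month after), and a 1-5 satisfaction score.
responses = [
    {"customer": "A", "wave": 1, "score": 5},
    {"customer": "A", "wave": 2, "score": 2},
    {"customer": "B", "wave": 1, "score": 4},
    {"customer": "B", "wave": 2, "score": 4},
]

def attitude_shift(responses):
    """Return each customer's score change from the first to the last wave."""
    by_customer = {}
    for r in responses:
        by_customer.setdefault(r["customer"], []).append((r["wave"], r["score"]))
    shifts = {}
    for customer, waves in by_customer.items():
        waves.sort()  # order responses by wave number
        shifts[customer] = waves[-1][1] - waves[0][1]
    return shifts

shifts = attitude_shift(responses)
# Customer A's satisfaction collapsed after the first survey (as in the
# Netflix story: satisfied on the call, dissatisfied once the fix failed),
# which a single early snapshot would have missed entirely.
print(shifts)                 # {'A': -3, 'B': 0}
print(mean(shifts.values()))  # average shift across customers: -1.5
```

A single-snapshot survey would have recorded customer A as a 5; only the second wave reveals the negative trajectory that the article argues is the actionable signal.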
It is this quality of practicability that is the key to cutting-edge analytics. Only by having the most practicable data possible can you ensure that you're getting your business just right.