26 October 2017

The problems with polling

Thomas Brennan-Siegert


Looking only at customer polling scores can make companies blinkered. So what can be done?

I really do hate my bank. I’m not going to name and shame them but you’ll just have to believe me when I say they are truly awful.

I hate the way that if there is the slightest indication of fraud they freeze my account and call me afterwards. I hate the way their app seems to be designed for neither right- nor left-handed people. I also hate the way they text me more often than my mum.

But most of all I hate the way I’ve been saying all these things consistently for just short of 19 years.

And herein lies the problem: I’ve whinged and whined to anybody who’s had the displeasure of accidentally mentioning anything related to the bank-who-shall-not-be-named. And yet I’ve made absolutely no attempt to change to a different one.

Don’t believe a word I say

The issue is that people – myself included – say one thing whilst doing something else entirely. If I had £1 for every Sunday morning I proclaimed I’d never drink again, I’d have enough to go clubbing this Saturday. This means that opinion polling, by itself, is useless.

Brexit is the perfect example. The vast majority of polls predicted a Remain win (a nod to our friends at Opinium who successfully forecast the correct result) and yet we all know the outcome.

Polling predicaments

The issue is that there are a huge number of factors influencing how people answer surveys. A few major ones include:

1. Cognitive dissonance

This is when a belief or attitude conflicts with a person’s behaviour. They then attempt to resolve the conflict by changing either their behaviour or their belief.

For example, if I hate my bank – which I’m not sure if I’ve mentioned – but have stayed with them for nearly 20 years then I might either switch banks or think ‘they must not be that bad’ or ‘I bet others are worse’.

2. Timing bias

The moment at which you ask people for their opinion has a massive effect on the response they give. My girlfriend recently bought a dress for a wedding. If you’d asked her what she thought about it straight after it arrived she would’ve given it 5/5.

Now, if you’d asked her two hours into the reception when the zip completely split, leaving her to wear jeans and my Batman t-shirt, I think the story would’ve been slightly different. My point is that opening up an account or buying insurance is generally a positive experience – the issues come later.

3. Experimenter bias

Who asks me the question might affect the answer I give. If my car insurer asks me what I think of them, I might give a different answer compared to when a friend asks me.

4. Response bias

Only certain types of people take part in surveys. Either they have a particular opinion, or the prize is big enough to entice the fence-sitters to take part. Mostly it’s the former.

The extremely positive customers will respond. The negative customers will generally avoid responding (I try to avoid contact with my bank at all costs so the last thing I’d do is a survey for them!) unless their opinion is extremely negative.

This tends to leave customer polling looking quite positive, unless you force people to respond. Then you’re asking them about a bank they might not have thought about and wouldn’t otherwise have commented on.

How long to look at customer data

So you’ve minimised all the biases and collected what you think is accurate and reliable data. Now comes the question of how long it’s relevant for. 

We’ve recently changed how we score customer polling data for our Customer Experience Ratings. We collect data for all the brands in our tables every six months. We’ve changed how we weight each wave of data: the most recent wave still accounts for the largest individual portion of the marks, but polling from the last year now accounts for 65% overall. Polling from the previous year accounts for 25%, and the final 10% is made up of data from three years ago. Any data collected before that is discarded.
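The re-weighting described above can be sketched in a few lines. Only the 65/25/10 split comes from this article; the bucket boundaries, function name, and 0–100 score scale are illustrative assumptions, not the actual ratings methodology.

```python
def weighted_poll_score(waves):
    """Combine polling waves into one score.

    `waves` is a list of (score, age_in_years) tuples. Weights mirror
    the split described in the article: the last year's polling counts
    for 65%, the previous year's for 25%, and older data (up to three
    years) for the final 10%. Bucket boundaries are assumptions.
    """
    # 0 = last year, 1 = previous year, 2 = older (but under three years)
    buckets = {0: [], 1: [], 2: []}
    for score, age in waves:
        if age < 1:
            buckets[0].append(score)
        elif age < 2:
            buckets[1].append(score)
        elif age < 3:
            buckets[2].append(score)
        # anything three years old or more is discarded

    weights = {0: 0.65, 1: 0.25, 2: 0.10}
    total = 0.0
    total_weight = 0.0
    for bucket, scores in buckets.items():
        if scores:  # skip empty waves so their weight is redistributed
            total += weights[bucket] * (sum(scores) / len(scores))
            total_weight += weights[bucket]
    return total / total_weight if total_weight else None
```

For example, waves scoring 80, 60 and 40 at ages of six months, eighteen months and thirty months combine to 0.65 × 80 + 0.25 × 60 + 0.10 × 40 = 71.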

This was a relatively minor change, but how far back customer data reaches varies massively from one ratings provider to another. For example, Google and Reevoo base their ratings on every rating that’s ever been given.

For 1st Central Car Insurance, that means a Google review from six years ago affects their score just as much as one left yesterday.

Meanwhile, Feefo say they’ve limited the ratings that appear on Google Ads to the last 12 months. But the ratings on providers’ websites aren’t limited, so they could be from years and years ago. These methods aren’t right or wrong; they’re just different, and something to factor in when looking at these ratings.

The solution

Think of research as a three-legged stool with each leg representing another piece of research you could look at.

The first leg is what people say. Ask them questions. Ask a range of people at a range of times. Talk to those who don’t seem interested in talking to you. And think carefully if it should be you asking the questions or someone else.

The second leg is what people or companies do: their behaviour. We look at transparency and complaints data because these are measurable indicators of what companies are doing. You might suggest we look at how many customers switch banks, but customers might be more influenced by pull factors, such as £125 for switching, than push factors. Looking at transparency is a specific measure of how the bank is behaving towards its potential customers.

The last leg is neuroscience – what people are thinking. Expensive and time-consuming but potentially hugely rewarding.

Poll with purpose

So survey often. Survey correctly. And don’t survey in isolation.

It’s all well and good that 98% of customers surveyed love you.

But when 2% of your customers are trashing you on Twitter, what people are doing matters far more than how many ticked a box.