Thursday, 14 May 2015

Polls Apart?

I'm not a great General Election watcher, but have been intrigued by the subsequent debate on why the opinion polls got it so wrong. Various reasons, explanations and excuses have been advanced – mostly blaming the public rather than the pollsters (it's all our fault, apparently) – for their failure to predict the Conservatives' overall majority. Apparently there's going to be an industry enquiry into this – presumably different from the enquiries into why previous General Election surveys also got it wrong.

Some reasons for inaccuracy are fairly obvious, although not necessarily well-publicised. For instance, we know about "don't-knows", but what about those who do know but won't say? Those you might call the "****-off"s, to put it bluntly. If you discount them, that immediately means you have a non-random sample – I’m not aware of any basis for assuming that this group divides its support in the same way as those who will answer.
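To see why this matters, here's a rough back-of-the-envelope simulation (the numbers are entirely made up for illustration): if one party's supporters are more likely to refuse to answer, a poll of only those who *will* answer systematically understates that party's support.

```python
# Illustrative simulation with hypothetical numbers: differential refusal
# turns the pool of answerers into a non-random sample of voters.
import random

random.seed(1)

TRUE_SHARE_A = 0.40        # assumed true support for "party A"
REFUSAL_RATE_A = 0.30      # assumed refusal rate among party A supporters
REFUSAL_RATE_OTHER = 0.10  # assumed refusal rate among everyone else

responses = []
for _ in range(100_000):
    supports_a = random.random() < TRUE_SHARE_A
    refusal = REFUSAL_RATE_A if supports_a else REFUSAL_RATE_OTHER
    if random.random() >= refusal:   # only those who answer enter the poll
        responses.append(supports_a)

poll_share_a = sum(responses) / len(responses)
print(f"true share: {TRUE_SHARE_A:.2f}, poll estimate: {poll_share_a:.3f}")
```

With these assumed rates the poll lands around 34% against a true 40% – a six-point miss from refusals alone, before any of the other problems below come into play.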

Secondly, there's the issue of "segmentation" – in other words, how the overall result breaks down amongst different voting groups and areas. A survey may be statistically sound at national level, but much less so when broken down across the 650 constituencies where the voting actually occurs. And whilst SNP gains and LibDem losses were largely predicted, the question of how former LibDem voters might switch their allegiance is a much tougher one, and certainly varied between constituencies.

More uncertain still is the impact of "survey fatigue". Getting a sufficient response rate is a perennial problem for surveys of all types, and anecdotally we know that some people will respond to surveys in the way that gets rid of them quickest. So doorstep campaigners are told "yes, I'll vote for you" no matter which party they represent. The extent to which this will affect independent opinion polls is unclear, but the “shy Tory” theory certainly implies people will mislead these pollsters as well.

The moral of all this is that numbers and percentages alone are of limited use unless you really understand what is changing (in this case, people's minds) to produce those numbers. The complex interaction of national, local and personal issues means that this will never be easy, and no one should pretend that it is. Unless someone can produce a really convincing Theory of Change (see previous blog!), then pre-election surveys will always carry a high risk of inaccuracy.

On which basis I will make a prediction: when we next have a General Election, the pre-election opinion polls will get it wrong again. Not because they haven't studied what went wrong this time, but because the factors that affect people's voting intentions, and what they tell opinion poll surveys, will change again.

And actually that's OK. We should after all be governed by real democracy, not what the opinion polls tell us.
