Poll Position: what went wrong for the general election pollsters?

Opinion pollsters don’t have an enviable job. We regularly work with surveys and we know that it is often an inexact science. Capturing how the public feel about a subject is not easy: it depends on finding the right people to talk to and asking them the right question, in the right setting, at the right time. Add in the fact that politics is an area where people are liable to change their minds at short notice, and that there will be a definitive answer on polling day against which your predictions will be mercilessly scrutinised, and you have to admire the bravery of those who put their forecasts out there!

With all that in mind, how did the UK pollsters do in advance of the recent general election? The BBC poll of polls on the eve of the election had the Tories on 34%, Labour 33%, UKIP 13%, Lib Dems 8% and Greens 5%. So on average the polls were close to spot on in estimating UKIP (13% of the national vote in the election), Lib Dem (8%) and Green (4%) vote shares. They were also right, for the most part, in predicting the SNP’s landslide in Scotland. Where they got it wrong (almost without exception) was in calling it a dead heat between the Conservatives and Labour – in the event the gap was wider than anyone had predicted, at 37% to 30%.

People like to talk a lot about margin of error when it comes to polls. If we combined the samples from the most recent surveys of eight of the biggest polling companies, we would be putting the question to at least 8,000 people, with a theoretical margin of error of just +/- 1%. The problem is that this figure assumes a perfect random sample of the British public, with no selection biases, answering perfectly truthfully – not something that is easy to achieve!
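
For the statistically curious, that +/- 1% comes from the standard margin-of-error formula for a simple random sample. The short Python sketch below is purely illustrative – the function and the figures are ours, not any polling company’s.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95% margin of error for a simple random sample of size n,
        # assuming a true proportion p (0.5 is the worst case).
        return z * math.sqrt(p * (1 - p) / n)

    print(f"n=1,000: +/- {margin_of_error(1000):.1%}")   # roughly +/- 3.1%
    print(f"n=8,000: +/- {margin_of_error(8000):.1%}")   # roughly +/- 1.1%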

So what went wrong? One possibility is simply that people changed their minds at the last minute, from Labour to Tory. This seems unlikely – the trends over the final six months consistently showed a dead heat or a narrow Labour lead. There was no evidence of a swing to the Tories, and little by way of a dramatic campaign event to suggest that people suddenly changed their minds in the final 24 hours. If the polls were accurate, such a last-minute shift would be very surprising indeed without a strong explanation for why so many people changed their minds at once.

More likely is that the polls consistently and systematically underestimated the Conservative vote share. This is the Shy Tory phenomenon – people reluctant to admit to voting for ‘the nasty party’ – and it has been observed in polling since the Thatcher years. Traditionally, polling companies have got around this by asking people how they voted last time (a more factual question) and reallocating some of the “don’t knows” to the party they backed previously. However, in 2010 the Shy Tory effect did not seem to exist – in fact the polls very slightly underestimated Labour’s vote share on the whole. This may have led some companies to overcorrect this time around, assuming that 2010 was the new normal rather than a one-off blip.
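
To make the reallocation idea concrete, here is a deliberately simplified Python sketch of how don’t-knows might be drifted back towards the party they previously voted for. The reallocation rate and the shares are invented for illustration – every polling house has its own, more sophisticated, recipe.

    def reallocate_dont_knows(shares, past_vote_of_dont_knows, rate=0.5):
        # shares: current vote-intention shares as fractions; whatever they
        # leave unaccounted for is treated as the don't-know group.
        # past_vote_of_dont_knows: how the don't-knows split last time.
        # rate: assumed fraction of don't-knows returning to their old party.
        dont_know = 1.0 - sum(shares.values())
        adjusted = dict(shares)
        for party, fraction in past_vote_of_dont_knows.items():
            adjusted[party] = adjusted.get(party, 0.0) + dont_know * rate * fraction
        return adjusted

    raw = {"Con": 0.32, "Lab": 0.32, "Other": 0.26}   # 10% don't know
    past = {"Con": 0.5, "Lab": 0.3, "Other": 0.2}     # illustrative split
    print(reallocate_dont_knows(raw, past))
    # Con gains 2.5 points, Labour 1.5 and Other 1 in this toy example.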

Another possibility is that the mechanisms polling companies use to allow for likelihood to vote misfired in a way that systematically overestimated the proportion of people who would actually turn out and vote for Labour. Polling companies rarely talk about unadjusted numbers – i.e. the raw share of the public who say they would vote for one party or another. The published figures are corrected for how likely a person says they are to vote, for demographics and for a number of other factors. Generally, the groups more likely to vote Conservative (older, better off) are also those more likely to vote at all. Any system that predicts turnout inaccurately can therefore be expected to bias the results, one way or the other.
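
Again, a stylised Python sketch may help show why this matters. The respondents and their likelihood scores below are made up, but they illustrate how weighting by stated likelihood to vote can shift the headline figures – here it narrows a raw Labour lead.

    def turnout_weighted_shares(respondents):
        # respondents: (party, likelihood to vote on a 0-10 scale) pairs.
        # Each answer counts in proportion to the stated likelihood.
        weighted = {}
        for party, likelihood in respondents:
            weighted[party] = weighted.get(party, 0.0) + likelihood / 10.0
        total = sum(weighted.values())
        return {party: w / total for party, w in weighted.items()}

    # Five invented respondents: the Labour leaners report lower certainty.
    sample = [("Con", 9), ("Con", 8), ("Lab", 9), ("Lab", 5), ("Lab", 4)]
    print(turnout_weighted_shares(sample))
    # Raw shares: Con 40%, Lab 60%. Weighted: roughly Con 49%, Lab 51%.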

Perhaps the most worrying aspect of the polls during this campaign was the fact that they tended to cluster together so closely around what turned out to be the wrong answer. It is a reminder that in polling, as in so many other areas of life, the crowd is not always right! Nate Silver and Harry Enten at FiveThirtyEight have written in the past about “herding” – the phenomenon where polls tend to follow close to their peers.
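
One rough way to test for herding, in the spirit of the FiveThirtyEight analysis, is to compare the spread of published polls with the spread you would expect from sampling error alone. The Python sketch below uses invented final-week leads; a spread well below the theoretical figure is a warning sign.

    import math
    import statistics

    def herding_check(leads, sample_size=1000, share=0.33):
        # leads: Con-minus-Lab leads from a run of polls, as fractions.
        observed = statistics.stdev(leads)
        # Expected spread of the lead under pure sampling error, for two
        # parties each on roughly `share` of the vote (the variance of the
        # difference of two multinomial shares simplifies to 2p/n).
        expected = math.sqrt(2 * share / sample_size)
        return observed, expected

    # Invented final-week leads, tightly clustered around a dead heat.
    leads = [0.00, 0.01, -0.01, 0.00, 0.01, 0.00, -0.01, 0.00]
    observed, expected = herding_check(leads)
    print(f"observed spread {observed:.3f} vs expected {expected:.3f}")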

There are a number of ways this could have happened. One is that polling companies adapted their internal adjustments so that their figures better matched other companies’ results; another is simply not publishing polls that don’t match the general trend. This is what Survation did with their last poll of the election, which saw them tantalisingly close to the overall result with Labour on 31% and the Conservatives on 37%. Ultimately their CEO Damian Lyons Lowe decided against publishing it because he thought the poll was too much of an outlier. The lesson is clear: companies should publish all of their polls and let the rest of the country decide what is and is not an outlier!

It is worth saying, however, that it is not simply the polling companies that got it wrong. All a poll aims to do is estimate the national vote share; predicting how that converts into seats is the job of other forecasters, and it is an important distinction.

In fact, most seat forecasters would still have overestimated Labour’s seat share even if they had been given the correct vote shares by the opinion polls. This is because the Conservatives did especially well in marginal seats, better even than you would expect from their national share of the vote. This is probably what tipped them over from a minority or coalition government (as predicted by the exit poll) into the narrow majority that they ultimately achieved. It underlines the importance of seat-level considerations, particularly in a first past the post system where the share of votes a party picks up can be wildly different from its final number of seats. While not every company has the resources of Lord Ashcroft to poll individual constituencies, it is worth thinking about how polls can more accurately reflect constituency-level voting.
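
The simplest baseline for turning vote shares into seats is uniform national swing: shift every party in every constituency by its national change in vote share and see who comes out on top. The Python sketch below uses invented constituency figures and an invented swing; real forecasters add far more refinement, and it was precisely in marginal seats like these that the Conservatives beat the uniform projection.

    def uniform_national_swing(constituencies, swing):
        # constituencies: previous vote shares per seat (party -> share).
        # swing: change in each party's national vote share since last time.
        # Every seat is shifted by the same amount - the crudest projection,
        # ignoring local factors entirely.
        seats = {}
        for previous in constituencies:
            projected = {party: share + swing.get(party, 0.0)
                         for party, share in previous.items()}
            winner = max(projected, key=projected.get)
            seats[winner] = seats.get(winner, 0) + 1
        return seats

    # Two invented marginals and an invented swing to Labour.
    previous_results = [
        {"Con": 0.40, "Lab": 0.39, "LD": 0.15},
        {"Con": 0.42, "Lab": 0.36, "LD": 0.16},
    ]
    print(uniform_national_swing(previous_results, {"Con": 0.005, "Lab": 0.02}))
    # On paper the first marginal flips to Labour; in 2015 the Conservatives
    # held many such seats, outperforming a uniform-swing projection.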

Rational analysis of what happened with the general election polls is hugely important and useful to all of us who work in research with the public. We will be watching with interest to see what we can learn from the election post-mortem. At the moment all we have are hunches, but once the facts are established, we will all be a little wiser.

Cian Murphy
 

Leave us your thoughts below on what went right or wrong for pollsters this election.
