Pollsters have not had an easy time of it in recent years. Eighteen months ago, I wrote an article trying to understand what went wrong with political polling after the 2015 general election, when no one succeeded in predicting the eventual Tory majority. During the summer, polling companies were roundly criticised for failing to call Brexit correctly, while more recently the public shock at the election of Donald Trump in the USA has been blamed on a failure of polling.
But how much of this criticism is fair? To take the most recent example, pollsters were hardly far off in predicting the popular vote in the USA. The New York Times' final polling average predicted Hillary Clinton to win the popular vote by about three points; in the event, she won it by about two. A miss of roughly one percentage point is quite accurate by historical standards – so why were people so shocked when they woke up on the 9th of November to find Trump had been elected president?
The dominant media narrative was poor at conveying uncertainty to the public. Although Clinton led consistently in the polls, her lead was narrow and there were many undecided voters right up to polling day – as it happens, these broke strongly for Trump and were a major factor in his election. In addition, few in the media acknowledged the possibility of Trump winning the Electoral College while losing the popular vote. Indeed, most pundits considered the opposite more likely, despite polling evidence that Trump was outperforming his national numbers in rust-belt swing states such as Michigan, Pennsylvania and Wisconsin – all three of which he ultimately won.
The quality of how a poll is interpreted is as important as the quality of how it is carried out. Based on exactly the same polling evidence, different models in the US made Clinton a favourite to different extents. The Huffington Post saw the polls and gave Clinton a 99% chance; 538 saw the same polls and gave her just a 70% chance, recognising a trend towards Trump in the final weeks and a high number of undecideds, both of which meant more uncertainty. While 1% seems far too low, 30% doesn't feel like an unreasonable estimate of where Trump stood going into election day. That is roughly the same as saying Trump would win the election if he rolled a five or a six on a single die – hardly a shocking occurrence.
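As a quick sanity check on that die-roll comparison, the following sketch (my own illustration, not from any polling model) simulates a large number of "elections" in which the underdog has a two-in-six chance, and confirms that such upsets occur about a third of the time:

```python
# Illustrative sketch: how often does a ~30% "upset" actually happen?
# Uses only the Python standard library; numbers here are for intuition,
# not a model of the real election.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRIALS = 100_000
P_UPSET = 2 / 6  # rolling a five or a six on a single die, ~33%

upsets = sum(random.random() < P_UPSET for _ in range(TRIALS))
print(f"Chance of upset per roll:  {P_UPSET:.3f}")
print(f"Observed upset rate:       {upsets / TRIALS:.3f}")
```

Over many repetitions the observed rate settles near one in three – a reminder that an event given a 30% probability is an entirely ordinary outcome, not a forecasting failure.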
This suggests that the problem, and the reason so many were shocked to see Trump elected, lay more in how polls were analysed and conveyed to the public than in how they were carried out. There is a word of caution here for organisations hoping to interpret research data: without the right expertise in house, it is easy to draw misleading conclusions.
Despite all this, these are still trying times for those conducting polls of the public. The fragmentation and digitalisation of media consumption, the decline of the landline (about half of US households no longer have one) and an increase in the sheer volume of research have all made it harder to find a truly representative sample of the general public. Those of us in the market research industry are aware of these challenges and work hard to ensure quality of response and spread of respondents for as representative a sample as possible.
Luckily, most of us are not predicting a binary outcome (Trump or Clinton, Leave or Remain), but trying to understand more about an audience. Where being one or two points out in a political poll can mean entirely missing the result of an election, this matters less in most market research. For us, it is just as important to look at comparisons (how does your brand awareness compare to others in your sector?) and trends (have more people heard of your campaign this wave than last?), which give us meaningful insights in their own right.
For all their challenges, political polls remain the best way to capture a snapshot of public opinion outside of an election. While we should be cautious in how we interpret them, they nevertheless remain the best method we have for predicting voting patterns. Similarly, research with donors and supporters may not be accurate to a precise percentage point, but it remains the best way of gaining an insight into how your key audiences think, feel and act. To cast aside polling and research after a couple of very narrow misses would be to throw the baby out with the bathwater. The question remains for those who criticise research – were their own predictions of the US election any closer than the polls?