Cobalt Sky

26 May 2017

Political Correctness

Rebecca Tolen

A few weeks ago it was announced that a snap election would take place in June of this year, bringing the general election forward by three years. Many are calling this a very smart political move by Theresa May: at the time of the announcement, the Conservatives were 24 points ahead of Labour in the polls, their highest share since 2008. Everyone will now be looking to the polls for insight into the likely outcome, but after the disastrous performance of the opinion polls at the 2015 general election, can we really trust what we are being told?

Looking back to the 2015 general election, the opinion polls had Labour and the Conservatives neck-and-neck, suggesting to voters that the result might be a hung parliament (as seen in 2010). However, once the polling stations had closed and vote counts started to come in, it became increasingly clear that the polls had, in fact, been very wrong. Instead of the close result we were expecting, the opinion polls had not only underestimated Conservative support but also overestimated Labour and Liberal Democrat support. The Conservatives won enough seats to form a majority government, and Labour was left with 26 fewer seats than before.

So what went wrong in 2015, and does this mean that we can no longer trust the pollsters? Post-election, an inquiry was commissioned to investigate the inaccuracies of the 2015 opinion polls.

One hypothesis was that a late swing among voters caused the surprise result; however, this was likely not the case. While a small late swing is almost expected in elections, the ‘swing’ we saw in 2015 between the final opinion polls and the actual results was Conservatives +3% and Labour -3%. Since even the final polls did not indicate a clear leader, this meant the Conservatives had effectively ‘gained’ a six-point lead overnight. A post-election re-contact survey showed only weak evidence of a late swing; it may have accounted for some proportion of the inaccuracies, but it was certainly not the primary cause of the error.
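To see why errors of this size compound in the headline figure, note that the lead is the difference between two shares, so a three-point miss on each party produces a six-point miss on the gap between them. A quick sketch in Python, using hypothetical round numbers rather than the actual 2015 polling figures:

    # Hypothetical round numbers: suppose the final polls showed a tie,
    # and the actual result was off by 3 points on each party.
    polled = {"Conservative": 34.0, "Labour": 34.0}
    actual = {"Conservative": 37.0, "Labour": 31.0}

    polled_lead = polled["Conservative"] - polled["Labour"]  # 0 points
    actual_lead = actual["Conservative"] - actual["Labour"]  # 6 points

    # Opposite-signed errors on the two parties add together in the lead,
    # so two 3-point misses become a 6-point error in the headline gap.
    print(f"Polled lead: {polled_lead:+.0f}; actual lead: {actual_lead:+.0f}")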

The overestimation of Labour support being caused by so-called ‘lazy Labour’ voters, those who tell pollsters they intend to vote Labour but do not turn out to vote, was found to contribute very little, if anything, to the polling errors. Likewise, the underestimation of Conservative support being caused by so-called ‘shy Tories’, those who intend to vote Conservative but tell pollsters they intend to vote for other parties, was found not to be a contributing factor.

Throughout the election campaign, phone polls showed a higher rate of Conservative support than online polls. This could be down to the nature of telephone data collection: landline and mobile use differ notably across demographic groups, which could skew a telephone sample towards an older demographic. Despite this, however, the inquiry found no systematic differences between polls conducted online and those conducted by telephone.

The report concluded that the primary cause was unrepresentative samples: the polls conducted pre-election systematically over-represented Labour supporters and under-represented Conservative supporters, and the weighting applied to the raw data did not lessen the impact of this. The report was, however, unable to pinpoint the exact variable that had led to this; in fact, it is assumed that there is no one variable the pollsters need to be looking at but rather a “complex multi-variable system”. To best combat this in the future, the report put forward twelve recommendations to British Polling Council members, including stating explicitly which variables were used to weight the data and clearly indicating where changes have been made to the statistical adjustment procedures applied to the raw data since the previously published poll.
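For readers unfamiliar with what ‘weighting the raw data’ involves, here is a minimal sketch of demographic weighting on a single variable, with invented population and sample figures purely for illustration. Real polls weight on many interacting variables at once, which is partly why the report speaks of a “complex multi-variable system”.

    # Minimal sketch of demographic weighting (all figures invented).
    # Known population shares for each age band, e.g. from the census.
    population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}

    # Shares observed in the raw poll sample: easier-to-reach younger
    # respondents are over-represented.
    sample_share = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}

    # Each respondent is weighted by population share / sample share,
    # so under-sampled groups count for more and over-sampled for less.
    weights = {band: population_share[band] / sample_share[band]
               for band in population_share}
    print(weights)  # {'18-34': 0.7, '35-54': 0.97..., '55+': 1.52}

    # The catch: if the people in a group whom a pollster can reach vote
    # differently from those it cannot reach, no demographic weight can
    # correct that, which is the unrepresentative-sample problem the
    # inquiry identified in 2015.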

So what can the industry and voters learn from these findings? Polls are not to be taken as gospel, but they remain the best available indication of the likely outcome of an election. With so many factors at play in predicting how people will vote, there will always be some level of uncertainty, owing to the nature of human behaviour. Perhaps this is what is most exciting about an election.

The full report and its twelve recommendations can be read here: http://eprints.ncrm.ac.uk/3789/1/Report_final_revised.pdf
