
Published December 19th, 2019

How We Predicted The 2019 General Election Results

Qriously CTO Abraham Muller and colleagues look back on the methodology behind Qriously’s impressively accurate election prediction

It’s been a pretty crazy couple of weeks for the team at Brandwatch Qriously.

On 12 December, the people of the UK voted in the General Election. That night we stayed up late, a little nervously, waiting to see if the predictions we’d shared with the world were correct. There was a lot of pressure after our successful 2017 General Election and Brexit predictions.

A lot of work and late nights went into the lead-up to that night, with a small, skilled team of data scientists digging into the data and developing a tight methodology to translate our mobile survey results into an accurate prediction for the UK election.

Developing a unique methodology with our tech

Brandwatch Qriously is a unique member of the British Polling Council, which is a big deal for us. We’re the only member that uses established mobile advertising infrastructure to collect the data behind an accurate prediction.

To reach our respondents, we bid on advertising spots in ad-supported mobile apps, and show a neutral survey question instead of an advert. When someone answers that question, they’ll be shown the expanded survey.

A representative sample means we can make accurate predictions by polling a tiny fraction of the population

Here’s Billiejoe Charlton, one of our data scientists, on how mobile allows Brandwatch Qriously to reach a varied demographic:

“The thing people worry about when they hear how Qriously works is that they will get a biased sample – “you’ll only get young people answering”, “older people don’t have smartphones” and so on.

“Firstly that’s largely untrue because we do get respondents across all sections of society. Secondly, we don’t just ask people about their political views. We also ask them about themselves – how old they are, whether they’re male or female, what qualifications they have, etcetera.

“So if there is a skew in the sample, we know about it and know how to account for it, so we can still fairly represent the views of the whole population.”

After all of the above, the prediction calculations are carried out using well-established techniques for scientific polling.

Here’s Peter Fairfax, another of our data scientists, on how the team accounts for demographic complexity with what some might call a small sample:

“People often ask how it’s possible that a poll of a couple of thousand people can accurately predict the behaviour of tens of millions of voters.

“Polling scientists tend to answer with a parable involving soup – essentially, “if you stir the soup properly (i.e. ensure a representative sample), you only need to taste a teaspoon to know if it’s too salty”. Conversely, you could guzzle ladle after ladle from the top of an unstirred pot and get a totally wrong idea of what the soup actually tastes like.

“In short, you absolutely need your data to represent the people whose behavior you’re trying to predict. This is why gigantic polls involving huge numbers of people can produce bad predictions – the extreme end of these unscientific polls can often be found on news sites and social media, leading to comments like “but your sample size is only two thousand people – I saw a poll with 100 times as many people and it shows a totally different result!”.

“Let’s take a more tangible example. If 50.6% of the adult population are women, 50.6% of the perfect sample would be women. If 34.2% of the population are university educated, then 34.2% of the perfect sample would be university educated.

“Because it’s hard to get those exact figures, we make small adjustments to our data to get the best representation. For example, if we find that our sample is actually 50% men and 50% women, we need the women to contribute slightly more than the men towards the final prediction.

“For the 2019 General Election prediction, we ensured our data was representative of the UK adult population in terms of age, gender, region, the type of constituency (counties versus boroughs) and education level.”
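The weighting adjustment Peter describes can be sketched as simple cell weighting: each respondent gets a weight equal to their demographic group’s population share divided by its sample share, so under-represented groups count for slightly more. This is a minimal illustration of the general technique, not Qriously’s actual pipeline, and the gender figures are the illustrative ones from the post.

```python
from collections import Counter

def cell_weights(sample, population_shares):
    """Map each demographic cell to a weight that corrects sample skew."""
    counts = Counter(sample)
    n = len(sample)
    # weight = population share / sample share for that cell
    return {cell: population_shares[cell] / (counts[cell] / n)
            for cell in counts}

# Suppose 50.6% of adults are women, but the sample came back 50/50.
sample = ["woman"] * 50 + ["man"] * 50
weights = cell_weights(sample, {"woman": 0.506, "man": 0.494})
# weights["woman"] ≈ 1.012 and weights["man"] ≈ 0.988, so each woman
# contributes slightly more than each man to the final prediction.
```

Real pollsters typically weight on several dimensions at once (age, gender, region, education), but the principle is the same as this one-dimensional case.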

Respondents were asked whether they were registered to vote, and how likely they were to vote on a scale of 0–10. Only those who answered 9 or 10 were included in the prediction.
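The likely-voter filter above amounts to a simple screen on two survey answers. A minimal sketch (the field names and records here are illustrative, not Qriously’s schema):

```python
# Keep only respondents who are registered and rate their
# likelihood of voting as 9 or 10 out of 10.
respondents = [
    {"registered": True,  "likelihood": 10, "vote": "Con"},
    {"registered": True,  "likelihood": 9,  "vote": "Lab"},
    {"registered": True,  "likelihood": 5,  "vote": "LD"},
    {"registered": False, "likelihood": 10, "vote": "Lab"},
]

likely_voters = [r for r in respondents
                 if r["registered"] and r["likelihood"] >= 9]
# Only the first two respondents pass the filter.
```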

The results

After all our work, this is how things went:

While we weren’t perfect, we got things pretty damn close.

  • For the Conservatives we predicted 43.2% compared to the actual result of 43.6% (an error of only 0.4 points)
  • For Labour we predicted 30.4% compared to the actual result of 32.2% (an error of 1.8 points)
  • For the Liberal Democrats we predicted 11.6% compared to the actual result of 11.5% (an error of only 0.1 points)
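The errors above are just the absolute difference between predicted and actual vote share, in percentage points:

```python
# Recomputing the headline errors from the figures above
# (predicted vs actual vote share, in percentage points).
results = {
    "Conservatives":     (43.2, 43.6),
    "Labour":            (30.4, 32.2),
    "Liberal Democrats": (11.6, 11.5),
}

errors = {party: round(abs(pred - actual), 1)
          for party, (pred, actual) in results.items()}
# errors == {"Conservatives": 0.4, "Labour": 1.8, "Liberal Democrats": 0.1}
```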

You can read the full post with our predictions, data downloads, and further election analysis here.

Looking forward

What’s coming for the team?

The 2020 US Election feels like a good place to focus our prediction efforts.
