Wading into the Europe debate

Over the weekend Opinium waded into the European Union referendum debate with our first poll of the campaign. Given the well-publicised differences between what online and telephone surveys are showing for how people will vote, it’s worth stating upfront that our survey was online and the results were very much in line with what other online polls have been showing.

The results we reported were almost as ‘unmassaged’ as it’s possible to get. The overall sample was weighted to our usual demographic and political criteria and while this, to an extent, takes into account what happened with the general election last year, the European issue cuts across party lines in a way that’s difficult to model or predict.

With that in mind, here is some of the context around the poll so we can see more than just the headline numbers.

Turnout

In a general election we have decades of turnout data to draw on when attempting to model how people will turn out, and we can still get it badly wrong (see 2015, general election, performance of polls in). For a one-off event like a referendum, though, we’re shooting in the dark to an extent. Will turnout be as high as at a general election, or even the 2014 Scottish independence referendum, or will it look more like the 2011 AV referendum, which sank without a trace? As the chart below shows, this matters enormously for the result.

Our headline number of 43% leave vs. 39% remain simply includes all GB adults without taking account of how likely they are to vote, if at all. As with general election questions, we also ask a likelihood-to-vote question using a 10-point scale where 10 is “would definitely vote” and 1 is “definitely would not vote”.

Here’s what happens when we apply stricter filters to that headline number.

[Chart: a turnout filter increases the lead for “Leave”]

“Leave” supporters are more likely to be certain to vote than “Remain” supporters which is not terribly surprising given the correlation between support for Brexit and age. The older age groups who are most likely to vote are also the most pro-Brexit.
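
For the mechanically minded, here is a rough sketch of how a likelihood-to-vote filter of this kind can be applied. The respondent data is invented purely for illustration and this is not our actual weighting code:

```python
# Illustrative sketch only: invented respondents, not Opinium data.
import pandas as pd

df = pd.DataFrame({
    "vote":       ["Leave", "Remain", "Leave", "Don't know", "Remain", "Leave"],
    "likelihood": [10, 6, 9, 3, 10, 8],  # 1 = definitely would not vote, 10 = would definitely vote
})

# Progressively stricter turnout filters: everyone, 6+, 8+, only the 10/10 certain.
for threshold in (1, 6, 8, 10):
    voters = df[df["likelihood"] >= threshold]
    shares = (voters["vote"].value_counts(normalize=True) * 100).round(1)
    print(f"Likelihood >= {threshold}:\n{shares}\n")
```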

How to deal with the “don’t knows”

One of the other features of online polls vs. telephone polls has been the relatively high number of “don’t know” (DK) responses. In the headline number for our poll, we had a DK figure of 18%. A BMG poll conducted a few days earlier had DKs of 14%, while recent ICM and YouGov polls have shown figures of 17% and 19% respectively. In contrast, the last ComRes telephone poll had 11% DK and Ipsos MORI had 10%.

One suggestion (in an interesting paper by Matt Singh and James Kanagasooriam) is that in telephone polls “don’t know” is not a prompted answer – i.e. interviewers do not tend to read it out after giving the options for “Remain” and “Leave” – meaning that some respondents do not realise it is an available option and therefore give one of the prompted answers instead. With online polls (such as ours), “don’t know” either is or isn’t offered as an option; when it is, it appears on screen alongside the other answers.

With that in mind, we asked what we’re calling a “nudge” question to those saying they don’t know. An early experiment with this kept the same options plus “don’t know” and asked how people would vote “if you were forced to choose”; unfortunately 70% simply repeated “don’t know”. However, with some slightly less declarative options and with “don’t know” changed to “no opinion”, we managed to get an indication of how most of these people are leaning.

The suggestion from the Singh / Kanagasooriam paper was that many of the equivalent people answering DK online would answer “remain” on the phone and our results bear that out to an extent. Here’s how the chart from earlier would look if we include our “nudged” respondents:

The gap narrows: 45% of our DKs either lean towards remaining or say that they are more likely to vote that way than to vote to leave, against 29% who lean the other way and 26% who really have no opinion. However, as one might expect, these people are generally less likely to vote, and so the gap re-appears as we apply stricter turnout filters.
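
To make the arithmetic concrete, this is roughly how the “nudged” answers can be folded back into the headline figures. The column names and data below are assumptions for illustration rather than our actual questionnaire variables:

```python
# Illustrative sketch only: invented data and assumed column names.
import pandas as pd

df = pd.DataFrame({
    "vote":  ["Leave", "Don't know", "Remain", "Don't know", "Don't know"],
    "nudge": [None,    "Lean Remain", None,    "Lean Leave",  "No opinion"],
})

def combined(row):
    """Keep the main answer where given; otherwise reallocate DKs by their lean."""
    if row["vote"] != "Don't know":
        return row["vote"]
    return {"Lean Remain": "Remain", "Lean Leave": "Leave"}.get(row["nudge"], "No opinion")

df["vote_with_nudge"] = df.apply(combined, axis=1)
print((df["vote_with_nudge"].value_counts(normalize=True) * 100).round(1))
```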

Social attitudes and weighting

The final thing to look at is another issue mentioned in the Singh / Kanagasooriam paper. With our political polling we weight the sample by demographic indicators as well as various political indicators. The political element takes the form of party propensity, which is based on asking people on a 10-point scale how they feel about each major party and assigning them to groups based on who they would, might, and definitely would not consider voting for. However, a referendum cuts across party lines in a way that makes that weighting insufficient. It’s all very well knowing that you have x% of Conservative sympathisers, because you can be reasonably confident you know which party they’ll support in a general election. What you can’t know is whether you have the right proportion of pro-EU or pro-Brexit Conservatives.
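
As a sketch of what that propensity grouping looks like in practice (the cut-points and party columns below are assumptions chosen for illustration, not our actual scheme):

```python
# Illustrative sketch only: assumed cut-points and invented scores.
import pandas as pd

scores = pd.DataFrame({
    "con": [9, 2, 6, 0],
    "lab": [1, 8, 5, 3],
    "lib": [4, 6, 2, 7],
})

def bucket(score):
    """Assign a 0-10 party feeling score to a propensity group."""
    if score >= 8:
        return "would consider"
    if score >= 4:
        return "might consider"
    return "would not consider"

propensity = scores.apply(lambda col: col.map(bucket))
print(propensity)
# Group sizes can then be weighted to targets, but none of this tells you
# whether a given Conservative sympathiser is pro-EU or pro-Brexit.
```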

Singh and Kanagasooriam found that some of the difference between online and telephone polls can be explained by online polls accessing a sample that is too socially conservative and telephone polls accessing a sample that is too socially liberal based on the answers to a series of questions about national identity, gender and racial equality. Here is the table from their paper comparing these results to the face-to-face British Election Study:

[Table: online samples are too socially conservative and telephone samples too socially liberal]

As we can see, online audiences are more likely to say that efforts to improve gender equality have gone too far, and much more likely to say that about efforts to improve racial equality. They are also more likely (within England) to say that they feel more English than British, which likely reflects the fact that people from ethnic minorities are generally more likely to identify as British than English.
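
One simple way to quantify a gap like this is to set the sample shares for a social-attitude question against the benchmark shares and derive a ratio adjustment. The figures below are placeholders rather than the actual survey or BES numbers:

```python
# Illustrative sketch only: placeholder shares, not real survey or BES figures.
import pandas as pd

benchmark = pd.Series({"gone too far": 0.30, "about right": 0.50, "not far enough": 0.20})
sample    = pd.Series({"gone too far": 0.40, "about right": 0.45, "not far enough": 0.15})

# Ratio weights that would bring the sample into line with the benchmark
# on this one question (ignoring interactions with other weighting targets).
weights = (benchmark / sample).round(2)
print(weights)
```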

When we tested this, we found that our numbers again tended to be in line with other online surveys, showing a more socially conservative sample than the telephone or face-to-face surveys. So which one is correct?

Singh and Kanagasooriam treat the BES as the ‘gold standard’: of the three methods, it is likely to be the closest to the actual state of public opinion. This makes some degree of sense given the cost, the resources available and the longstanding academic history of the BES, and their figures fall between the more socially conservative online samples and the more socially liberal telephone samples. However, I think it’s more realistic to assume that the “true” figure for many of these questions lies somewhere between the BES and the online surveys, given what we know about interviewer effect. Online research is supposed to be best for sensitive subjects, where respondents aren’t subconsciously trying to say something they think the interviewer will approve of, and for making people feel more comfortable saying things that may be unpopular. Long story short, if there is any area where interviewer effect is likely to kick in, surely it’s the subject of race and identity.

This is certainly something that the Opinium team are going to keep in mind for how we approach future EU referendum polling.