Before I jump in, let me be clear on two points:
1. This model has evolved over the past 3 years (though Stockwell Day will deny it has) and each step in it involved a lot of thought, testing, and consideration.
2. I’m completely 100% open to criticism and suggested improvements. I’m sure it can be improved.
That said, let’s jump in.
THE PROJECTION FRAME
At the basic level, this is a uniform swing projection using regional data (Atlantic Canada, Quebec, Ontario, the Prairies, Alberta, BC). If the Liberals go up by 5 points in Alberta, the model swings every riding up 5 points. If the NDP drops 8 points in Atlantic Canada, the model drops every riding down 8 points.
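As a sketch, the uniform swing step looks like this (the riding, the parties involved, and all the numbers are invented for illustration):

```python
def uniform_swing(riding_results, regional_swing):
    """Shift each party's riding share by its regional swing, in points."""
    return {party: share + regional_swing.get(party, 0.0)
            for party, share in riding_results.items()}

# last election's result in a hypothetical Ontario riding (percent):
riding = {"LPC": 38.0, "CPC": 35.0, "NDP": 18.0, "GPC": 9.0}

# the polls have the Liberals up 5 and the NDP down 3 in Ontario:
swing = {"LPC": +5.0, "NDP": -3.0}

projected = uniform_swing(riding, swing)
# the Liberals land at 43, the NDP at 15, everyone else is unchanged
```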
I experimented with geometric swings (including the variation 538 used for the UK election) and mixed models, but they simply didn’t work as well on any of the elections in my data set.
The reason for this is simple – if the Liberals swing up 5 points in Ontario, the geometric swing is going to give them almost all of their newfound support in rural areas, since that’s where there are the most votes available to swing to them. And that’s not what usually happens in reality.
Now, I’ll concede that in a WTF election like, say, 1993, when the political landscape of the country is dramatically changed, uniform swing may not work. I wouldn’t use this model to project the next Alberta election. But uniform swing is simply the best frame to build this model out of, especially since its deficiencies are easy to correct at the simulation stage.
THE BASE VOTE
Projection models generally use the most recent election as their base. The problem with this is that if a party has an unusually good or bad showing in a riding, that one result skews the data. It’s the same reason you use more than one year’s worth of data to project hockey or baseball stats – even though Aaron Hill hit 36 home runs last year, it’s foolish to expect him to repeat the feat.
In the political arena, there are prominent examples of this that stand out – the Greens won’t match their 2008 totals in Central Nova next election and, no matter how badly the campaign goes, the Liberals will exceed theirs. The NDP results from last election in Saanich Gulf Islands were obviously hurt by some naked truths that (hopefully) won’t be repeated in the next election.
But even on a more subtle level, parties get good candidates, candidates run bad campaigns, and local issues emerge. There needs to be a way to smooth these events out.
To test this, I used a regression model to “predict” the 2008 vote in each riding, based on the 2006 and 2004 riding results and each riding’s predicted vote based on demographics (click here for the lowdown on this). In all cases, these numbers were adjusted using uniform swing.
And the results were clear – the best model used all three predictors (2004, 2006, and the demographic regression).
So keeping the ratios the same, I’m using the following results to get the “base” vote for each riding:
2008 election: 38.7%
2006 election: 17.4%
2004 election: 7.8%
Demographic regression: 36.2%
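To make the blend concrete, here’s a minimal sketch using the weights above. The riding numbers are invented, and each input is assumed to already be uniform-swing adjusted:

```python
# weights from the regression, as quoted above
WEIGHTS = {"e2008": 0.387, "e2006": 0.174, "e2004": 0.078, "demo": 0.362}

def base_vote(v2008, v2006, v2004, demo):
    """Blend the three swing-adjusted election results with the demographic number."""
    return (WEIGHTS["e2008"] * v2008 + WEIGHTS["e2006"] * v2006
            + WEIGHTS["e2004"] * v2004 + WEIGHTS["demo"] * demo)

# a hypothetical Liberal riding: 41% in 2008, 36% in 2006, 39% in 2004,
# and a demographic regression that pegs them at 38%
print(round(base_vote(41.0, 36.0, 39.0, 38.0), 1))  # → 38.9
```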
OTHER FACTORS
Incumbency exists. It means something. It should be taken into account. I’m not going to argue the point any further, because every study run on Canadian politics has come to this conclusion, as has my own research. So I’m adjusting for incumbency based on the effect it had in the 2004, 2006, and 2008 elections.
By-elections are a different beast. They’re unpredictable, and it’s hard to say how good a gauge they are of future results. My research on them has been limited, but the numbers tell me the best prediction model weights the by-election result at 44% of the base. And who am I to disagree with the numbers?
The final bit of finessing I’ve used relates to the polling data (which I’ll talk about in a second – patience...). The Green Party has consistently underperformed its polling numbers at the provincial and federal level in Canadian elections. As a result, I’ve scaled the Green polling numbers back to 78.55% of their value. Just make sure the angry hate mail is written on recycled paper before you send it.
My spreadsheet is set up so that I can easily remove these correction factors or change their impact. But in each case, I’ve given them the impact the data tells me to.
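Here’s a toy version of these correction factors in one place. Only the 44% by-election weight and the 78.55% Green scaling come from the numbers above; the incumbency bonus and its 2-point size are placeholders, since the actual adjustment is fit to the 2004–2008 elections:

```python
GREEN_SCALE = 0.7855          # from the text: Greens get 78.55% of their polls
BYELECTION_WEIGHT = 0.44      # from the text: by-election counts for 44%

def adjust(base_pct, party, incumbent=False, byelection_pct=None):
    """Apply the three correction factors to a riding-level base vote."""
    v = base_pct
    if incumbent:
        v += 2.0  # placeholder incumbency bonus, in points
    if byelection_pct is not None:
        v = (1 - BYELECTION_WEIGHT) * v + BYELECTION_WEIGHT * byelection_pct
    if party == "GPC":
        v *= GREEN_SCALE
    return v

# a Green candidate with a base of 10% gets scaled back to roughly 7.9%
print(round(adjust(10.0, "GPC"), 1))
```

Because each factor is its own branch, removing one (as the spreadsheet allows) is just a matter of skipping that step.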
POLLING DATA
Right now, I’m taking the most recent poll from each polling company and assigning a weight to it based on the sample size and the company’s accuracy in provincial and federal elections over the past 5 years. Under this weighting system, the “best” poll is worth about twice as much as the “worst” poll.
This is the aspect of my projection model most likely to change in the coming months, and I’m open to suggestions. Things to consider are:
- Ekos releases massive amounts of data compared to other companies. They’ll interview 7,000 people a month – but is it “fair” to give that data seven times the weight of an n=1,000 poll from another company?
- Should weight be given based on the freshness of data? Is 3-week-old data worth as much as 3-day-old data? And if not, what’s the half-life of polling data? Does this change during an election campaign?
- Is it fair to judge the accuracy of a polling company on past election results? Right now, pollster accuracy is being based on 8-12 data points. Hardly a large sample.
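For illustration, here’s one hypothetical way to weight polls by sample size and past accuracy. The firms, the accuracy scores, and the square-root damping are all assumptions rather than the model’s actual formula – the text above only says the best poll ends up worth about twice the worst:

```python
import math

# invented firms: latest poll, sample size, and a past-accuracy score (1.0 = best)
polls = [
    {"firm": "A", "n": 2000, "accuracy": 1.00, "lpc": 33.0},
    {"firm": "B", "n": 1000, "accuracy": 0.85, "lpc": 30.0},
    {"firm": "C", "n": 1000, "accuracy": 0.70, "lpc": 36.0},
]

def weight(poll):
    # sqrt(n) keeps a 7,000-interview poll from swamping everyone else
    return math.sqrt(poll["n"]) * poll["accuracy"]

total = sum(weight(p) for p in polls)
lpc = sum(weight(p) * p["lpc"] for p in polls) / total
# firm A ends up worth about twice firm C, in line with the 2:1 spread above
```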
ADDING VARIANCE TO THE SIMULATION MODEL
Up to this point, I’ve described a very thorough uniform swing model. But a probabilistic model can do so much more. In most models, a seat the Liberals are projected to win by 1% counts just as much in their tally as a downtown Toronto seat they’re projected to win by 40%. Yet in reality, if we project them to win a seat by 1%, it’s basically a coin flip – it could go either way.
So we need to make the data messy. Unfortunately, I worry my explanation will be messy as well. But here goes.
The first step is to find the regional support for each party in a given election simulation. This is done using the margin of error on the polling data. If I have 1000 interviews from Atlantic Canada, then the Atlantic Canada data carries a margin of error of +/- 3.1%. So the numbers get simulated under a normal distribution accordingly. What that means is if the Liberals are polling at 35% in Atlantic Canada, in some of my sim elections they’ll come in at 37% for the province. In others, 32%. Most of the time, they’ll be close to 35% but we’re talking about 10,000 simulations here, so in some of these “elections”, they may very well get 31% or 40% in the province. That’s just how margins of error and variance work.
After that, we need to add some noise when transferring polling data from the regions down to the ridings. To do this, I looked at how regional shifts have carried through to the riding level in previous elections – for example, if the Liberals drop 8 points in Ontario, they won’t drop 8 points in every riding. They’ll fall by 4 in some and by 12 in others.
So variance is added, keeping that overall regional polling number the same. Based on my research, the variance gets larger when the change gets larger (i.e. if a party goes up by a lot, their gains are a lot less uniform at the riding level), but even if a party’s support is unchanged in a region, their support will still change at the riding level. In English: even if the Liberals are polling at the same level in Quebec now as they got last election, they’ll go up in some ridings and down in others – but it will even out.
I won’t go into the exact mechanics of this, but I’ve test-driven it numerous times and the program produces riding-level variance at the same level it should, based on what’s happened in the three previous elections.
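While the exact mechanics stay under the hood, here’s one way the riding-noise step could work: draw a swing per riding, then recenter so the regional number is preserved. The noise growing with the size of the swing follows the text; the specific scale is invented:

```python
import random

def riding_swings(regional_swing, n_ridings, rng):
    """Per-riding swings that still average back to the regional swing."""
    sd = 2.0 + 0.5 * abs(regional_swing)  # bigger shifts are less uniform
    raw = [rng.gauss(regional_swing, sd) for _ in range(n_ridings)]
    correction = regional_swing - sum(raw) / n_ridings
    return [s + correction for s in raw]

# the 106 Ontario ridings each get their own swing, but they average to -8
swings = riding_swings(-8.0, 106, random.Random(1))
```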
But there’s one more level of variance this doesn’t take into account: the polls being wrong. There are elections when pollsters overshoot the margin of error. We can have 10,000 interviews from 7 polling companies and still miss the bullseye by 3%. That’s not a knock on the pollsters – some people don’t vote, some people lie, and some just change their minds at the last minute. There’s no use pretending otherwise. Think of the 2004 Canadian election, when the electorate swung back to the Liberals on the last weekend.
So, I’ve gone back and looked at how much, beyond normal sample-size variance, pollsters have missed the mark by in Canadian provincial and federal elections. And I’ve built this into the very first step of the model. So even if we have reams of data showing the Tories at 35% nationally, in some of my sims they’ll “actually” be at 32%. In some, they may be at 37% or 38%. Again, the variance is added based on what we’ve observed in recent Canadian elections.
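A sketch of this industry-error layer – the 1.5-point standard deviation is an illustrative guess, not the figure the model actually uses:

```python
import random

def apply_industry_error(support_pct, rng, sd=1.5):
    """Shift a party's topline number by a shared 'the polls just missed' term."""
    return support_pct + rng.gauss(0.0, sd)

rng = random.Random(7)
sims = [apply_industry_error(35.0, rng) for _ in range(10_000)]
# in most sims the Tories start near 35%, but some runs put them at 32% or 38%
```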
RECAP - HOW IT WORKS
1. Public polling data is grouped together by region.
2. In every simulation, the polling data is adjusted based on sample size variance and on how often polling companies just “miss the mark”.
3. From this, every riding is simulated based on how the numbers tend to transfer from the regional level down to the riding level.
4. The riding simulations take into account past election results, demographics, incumbency, and by-elections.
Using this, my laptop simulates 10,000 elections. From this, I can calculate odds of a given party winning the election or a given seat changing hands.
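The four steps above can be stitched together in miniature like this (two parties, two ridings, and invented noise scales – a toy, not the real model):

```python
import random

def simulate_election(ridings, last_region, poll_region, n_sims=10_000, seed=0):
    """ridings: list of {party: last-election %}. Returns per-riding win counts."""
    rng = random.Random(seed)
    wins = [dict.fromkeys(r, 0) for r in ridings]
    for _ in range(n_sims):
        # steps 1-2: perturb the regional polling (sampling error plus
        # "miss the mark" error, lumped into one invented 2-point sd)
        region = {p: s + rng.gauss(0, 2.0) for p, s in poll_region.items()}
        for i, riding in enumerate(ridings):
            # step 3: riding result = last result + regional swing + riding noise
            sim = {p: riding[p] + (region[p] - last_region[p]) + rng.gauss(0, 3.0)
                   for p in riding}
            wins[i][max(sim, key=sim.get)] += 1
    return wins

ridings = [{"LPC": 45.0, "CPC": 40.0}, {"LPC": 41.0, "CPC": 40.0}]
last_region = {"LPC": 43.0, "CPC": 40.0}
poll_region = {"LPC": 40.0, "CPC": 41.0}
wins = simulate_election(ridings, last_region, poll_region, n_sims=2000)
# wins[i][party] / 2000 estimates the odds that party takes riding i
```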
As I said off the top, I’m open to suggestions on changes – I’m sure there are improvements that can be made.