Sunday, March 22, 2009

March Poll Dance

Here's what Canadians have been telling pollsters this March:

Nanos (March 13-18, n=1,000)
Lib 36%
CPC 33%
NDP 13%
BQ 10%
Green 8%

Angus Reid (March 10-11, n=1,000 online)
CPC 35%
Lib 31%
NDP 16%
BQ 10%
Green 7%

Strategic Counsel (March 5-8, n=1,000)
CPC 35%
Lib 31%
NDP 16%
Green 10%
BQ 9%

Decima (Feb 26-March 8, n=2,000)
Lib 33%
CPC 32%
NDP 14%
Green 10%
BQ 9%

Ipsos (Feb 24-March 5, n=1,000)
CPC 37%
Lib 33%
NDP 12%
BQ 10%
Green 8%


MEAN (change since February in brackets)
CPC 34.4% (+0.4%)
Lib 32.8% (+0.8%)
NDP 14.2% (-1.3%)
BQ 9.6% (+0.8%)
Green 8.6% (-0.7%)
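
For anyone who wants to check the arithmetic, here's a quick Python sketch that reproduces the MEAN row from the five polls above. It's just a plain, unweighted average (a fancier aggregate might weight by sample size or field dates), and the month-over-month changes would need February's averages, which aren't repeated here.

```python
# Reproduce the unweighted MEAN row from the five March polls listed above.
polls = {
    "Nanos":             {"CPC": 33, "Lib": 36, "NDP": 13, "BQ": 10, "Green": 8},
    "Angus Reid":        {"CPC": 35, "Lib": 31, "NDP": 16, "BQ": 10, "Green": 7},
    "Strategic Counsel": {"CPC": 35, "Lib": 31, "NDP": 16, "BQ": 9,  "Green": 10},
    "Decima":            {"CPC": 32, "Lib": 33, "NDP": 14, "BQ": 9,  "Green": 10},
    "Ipsos":             {"CPC": 37, "Lib": 33, "NDP": 12, "BQ": 10, "Green": 8},
}

for party in ["CPC", "Lib", "NDP", "BQ", "Green"]:
    mean = sum(poll[party] for poll in polls.values()) / len(polls)
    print(f"{party}: {mean:.1f}%")   # 34.4, 32.8, 14.2, 9.6, 8.6
```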

18 Comments:

  • I fail to understand how any poll can be accurate.

    You ask 1000 people what 33 million think and you get a lousy answer.

    Nanos (the poll queen) was dead wrong last election.

    Nanos always uses the media to influence the public.

    Not surprising it works, seeing how there are no debates about Canadian politics on Canadian TV.

    But lots of American ones.

    How is that possible?

    I guess question for 4 hours a week is sufficient. And then Canadians wonder why only 59 percent voted last fall.

    What say you?

    By Anonymous Anonymous, at 6:20 p.m.  

  • I meant to say *question PERIOD.

    TY.

    By Anonymous Anonymous, at 6:21 p.m.  

    Actually the Nanos poll the day before the election was almost perfect. Taking the average, as Calgary Grit has done, will give you a very close representation of what the population is thinking.

    By Blogger Scott, at 6:28 p.m.  

  • "I fail to understand how any poll can be accurate.

    You ask 1000 people what 33 million think and you get a lousy answer."

    Well, there is statistical methodology to back up the results, usually. But statistics don't "prove" anything, they merely serve as more or less reliable "indicators".

    One factor that affects reliability is the size of the sample relative to the population. Larger sample, higher degree of confidence in the result. Usually. There's also such a thing as a "rogue" poll, which is why survey and polling results always state a degree of confidence, "19 times out of 20". The twentieth time is the rogue result.

    In this instance, the fact that these results are roughly equivalent suggests that none of these polls are "rogue"; the odds of getting five rogue polls at once are vanishingly small.

    You're right, though, that 1000 is a pretty small sample. On the other hand, the population being sampled is not 33 million; it is however many people actually vote. Those under 18 can't vote, non-citizens can't vote, and a whole lot of people are disenchanted with the whole process and DON'T vote.

    By Blogger Party of One, at 8:38 p.m.  
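
To put some numbers behind the comment above: here's a rough sketch of the usual margin-of-error arithmetic, assuming a simple random sample and the worst case of a party sitting at 50% support, along with the odds of all five of this month's polls being rogue at once.

```python
import math

# Margin of error at 95% confidence ("19 times out of 20"), assuming a simple
# random sample and the worst case p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 1,000: +/- {margin_of_error(1000):.1%}")   # ~3.1%
print(f"n = 2,000: +/- {margin_of_error(2000):.1%}")   # ~2.2% -- Decima's sample

# "The odds of getting five rogue polls at once are vanishingly small": if each
# poll independently misses its interval 1 time in 20, all five missing at once is
print(f"P(all five rogue) = {0.05 ** 5:.1e}")   # ~3.1e-07
```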

  • It never hurts to go over the theory again.

    If you ask a very small random sample of the population about an issue, you'll get a reasonably accurate representation of how that random sample feels, but you won't be able to extrapolate very well because the sample isn't very representative, being so small.

    Get a larger sample, and you get a more informative result.

    The challenge for the polling firms is how to get a representative sample.

    Men and women might feel differently on a subject, so they want to measure the answers women give separately from the answers the men give.

    Different levels of education may also bias the result, so a "representative" sample will measure the respondents against the general population.

    Urban, suburban, and rural residents tend to vote differently, so pollsters will monitor each as they respond to the questions.

    The better pollsters might "normalize" results based on how their "random sample" represents the (voting) population. So, if the "random" sample had, say, 60% men and 40% women, they might re-weight the answers to better represent the general population. And so on for other axes.

    But at the end of it all, it's a lot like tossing a coin a thousand times, and extrapolating the results to represent a million coin tosses.

    By Blogger Paul, at 9:19 p.m.  
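
Paul's 60/40 example is easy to make concrete. Here's a small sketch of that re-weighting step; the group sizes and support numbers are invented purely for illustration.

```python
# Re-weighting ("normalizing") a lopsided sample back to the population's shape.
# All figures below are hypothetical.
sample = {
    "men":   {"n": 600, "support": 0.40},   # 60% of respondents, 40% back the party
    "women": {"n": 400, "support": 0.28},   # 40% of respondents, 28% back the party
}
population_share = {"men": 0.50, "women": 0.50}   # roughly 50/50 in the electorate

total = sum(group["n"] for group in sample.values())

# Raw estimate: every respondent counts equally, so men are over-represented.
raw = sum(group["n"] * group["support"] for group in sample.values()) / total

# Weighted estimate: each group counts according to its share of the population.
weighted = sum(population_share[name] * group["support"]
               for name, group in sample.items())

print(f"raw: {raw:.1%}, re-weighted: {weighted:.1%}")   # raw: 35.2%, re-weighted: 34.0%
```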

    The population size does not affect the precision of the sample very much. If you're interested in the methodology, I suggest you google "stratified random sampling" rather than demonstrating a clear lack of statistical knowledge.

    By Blogger JG, at 9:42 p.m.  

  • Oh, and for anyone curious, the Greens are no longer winning Quebec according to Strategic Counsel...

    By Blogger calgarygrit, at 10:08 p.m.  

  • Harper should have called the Coalition bluff and let them reign during these terrible economic times.

    By Blogger Robert Vollman, at 10:29 p.m.  

  • "I fail to understand how any poll can be accurate."

    There's a lot of math that goes into those numbers. When you see something like "accurate to within 3%, 19 times out of 20", that's not some number they came up with.

    You'd need to consult a stats textbook to see why, but it turns out you can get a pretty good answer from some small samples. Larger samples are still better, but it's not as big a difference as you might think. See Wikipedia for a (poor) explanation.

    The real problem with these surveys isn't the sample size; it's the sampling bias. If you phone only between 9 and 5, you're likely to get a disproportionate number of housewives or unemployed people.

    By Anonymous Anonymous, at 11:38 p.m.  
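
The sampling-bias point in the comment above is worth a quick illustration, because it's the one problem a bigger sample can't fix. A toy simulation, with made-up numbers for how daytime-reachable voters might differ from everyone else:

```python
import random

random.seed(1)

# Hypothetical electorate: 40% are reachable by a 9-to-5 phone call and lean 60/40
# for party A; the other 60% lean 45/55. Overall support for A is exactly 51%.
population = ([("day", "A")] * 24 + [("day", "B")] * 16 +
              [("evening", "A")] * 27 + [("evening", "B")] * 33) * 10_000

def support_for_A(sample):
    return sum(1 for _, vote in sample if vote == "A") / len(sample)

daytime_only = [person for person in population if person[0] == "day"]

print(f"true support:           {support_for_A(population):.1%}")      # 51.0%
print(f"n=1,000, 9-to-5 calls:  {support_for_A(random.sample(daytime_only, 1_000)):.1%}")
print(f"n=10,000, 9-to-5 calls: {support_for_A(random.sample(daytime_only, 10_000)):.1%}")
print(f"n=1,000, whole frame:   {support_for_A(random.sample(population, 1_000)):.1%}")
# The biased samples sit near 60% no matter how large n gets; only the unbiased
# frame lands near the true 51%.
```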

  • Actually the Nanos poll the day before the election was almost perfect.

    And the Nanos (SES) poll the day before the 2006 election was way off. It had the Conservatives and Liberals tied, IIRC.

    I agree that Anon's skepticism is misplaced, but I'm rather tired of this "Nanos is such an accurate predictor" meme.

    By Anonymous Anonymous, at 4:42 a.m.  

  • Harper should have called the Coalition bluff and let them reign during these terrible economic times.

    Talk about chess vs checkers... it certainly would've been interesting. And the Liberals would be able to say that all their Leaders become PM.

    By Anonymous Anonymous, at 8:52 a.m.  

  • I think the theory behind these polls is relatively sound - sure, there are problems because you can't reach everyone and most people hang up on polling companies, but the theory is good, and it usually works in practice.

    I guess the biggest issue is that no one really pays attention to politics between elections, so where the parties sit now may not necessarily predict how people will react during a campaign. But, at the very least, you see how things are shifting (or, in this case, not really shifting much, save for a slight dip in NDP support).

    By Blogger calgarygrit, at 9:39 a.m.  

    Yes, how people react during a campaign will matter. The Conservatives seem to be holding up despite the recession, but what about attack ads blaming them for job losses? The Liberals are rising and the NDP falling, but what if the NDP campaigns against the "Tarsands Twins"? And will the Greens now attack the Liberals instead of being an ally? The BQ numbers seem to be holding.

    By Blogger nuna d. above, at 2:49 p.m.  

  • "And the Nanos (SES) poll the day before the 2006 election was way off. It had the Conservatives and Liberals tied, IIRC."

    I seem to recall them having the numbers for the top 4 parties within 0.1% of the final results.

    By Anonymous Anonymous, at 1:47 a.m.  

  • Anon - Nanos' 3 day average was on the mark in 2006, although his final "one day" total was a bit off (as pointed out above).

    Then in 2008, his 3-day average was off (not bad, but not as good as Angus, Leger, and a few others), but his "one day" total was close to the mark.

    So, not surprisingly, he spun the better result each time.

    By Blogger calgarygrit, at 10:39 a.m.  

    Inbred CTV execs will probably air Ignatieff taking a shit on election eve. I'm losing faith in the political process and gaining faith in asymmetric warfare as a means to our survival.

    By Anonymous Anonymous, at 3:40 p.m.  

  • "Anon - Nanos' 3 day average was on the mark in 2006, although his final "one day" total was a bit off (as pointed out above).

    Then in 2008, his 3-day average was off (not bad, but not as good as Angus, Leger, and a few others), but his "one day" total was close to the mark.

    So, not surprisingly, he spun the better result each time."

    Except that the wording in the post above was 'way off' -- that's simply not true.

    Of course a pollster is going to spin things to their advantage, but it's rare we see any numbers from other pollsters that can even be spun, i.e. they're often so far off the mark, and on such a consistent basis, that they simply don't compete with Nanos (yes, I'm looking at you, Strategic Counsel).

    By Anonymous Anonymous, at 5:17 p.m.  

  • Except that the wording in the post above was 'way off' -- that's simply not true.

    IIRC, the final day of SES polling (Sunday, Jan 22) had the Conservatives and Liberals pretty much exactly tied, versus the actual 6.1% gap on election day. I would personally consider that "way off", but YMMV.

    I tried to track down the exact SES Jan 22 results, but came up empty. Interestingly though, multiple bloggers who linked to this PDF said that it has "daily results" showing a "dead heat", but there is no such showing in that document. Did SES edit their PDF after the fact to remove the daily results?

    By Anonymous Anonymous, at 10:30 p.m.  
