A list of training programs open to Canadian researchers is now posted on the MOSRCanada website. Please feel free to comment with links to other market research training and education opportunities. https://mosrcanada.com/training/

Poynter offers a free online course on polls and surveys, developed in partnership with three international opinion research organizations: the American Association for Public Opinion Research (AAPOR), the World Association for Social, Opinion and Market Research (ESOMAR) and the World Association for Public Opinion Research (WAPOR).

The full descriptions of these polling courses are available here:

 

How to report on trends in opinion polling

Many polls ask the same question of different samples at different times to measure change. Just as the margin of sampling error has to be taken into account when determining a difference in public opinion, it also matters when writing about a change in public opinion over time.

Determining whether a new poll shows a “real” or statistically significant change from past polls often involves complex calculations, but some simple rules apply to most comparisons (a rough sketch in code follows the list):

  • If a difference between two polls is smaller than either poll’s margin of sampling error, it is probably not significant and should not be classified as a “change” without asking researchers to test it or investigating further on your own.
  • If a difference between two polls is greater than 1.5 times the margin of sampling error for both, it is almost always significant and can be confidently classified as a “change.”
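
In code, those two rules of thumb look roughly like the sketch below. This is a minimal Python illustration, not part of the course; the function name and the reading of “smaller than either poll’s margin” as the larger of the two margins are our own assumptions.

    def classify_change(pct_old, moe_old, pct_new, moe_new):
        """Rough significance check for a change between two polls.

        Percentages and margins of sampling error are all in percentage
        points. Anything between the two thresholds should be treated as
        inconclusive and tested properly by the researchers.
        """
        diff = abs(pct_new - pct_old)
        larger_moe = max(moe_old, moe_new)

        # Rule 1: a difference smaller than the polls' margin of sampling
        # error is probably not significant (compared here against the
        # larger of the two margins, the more cautious reading).
        if diff < larger_moe:
            return "probably not a real change"

        # Rule 2: a difference greater than 1.5 times the margin of error
        # for both polls is almost always significant.
        if diff > 1.5 * larger_moe:
            return "almost certainly a real change"

        return "borderline: ask the pollster to run a significance test"

    # Example: support moves from 42% (±3.1 points) to 48% (±3.1 points).
    print(classify_change(42, 3.1, 48, 3.1))  # almost certainly a real change

For context, the more exact calculation is typically based on the margin of error of the difference itself: for two independent samples of similar size, it is roughly the square root of the sum of the two squared margins, or about 1.4 times a single poll’s margin, which is why 1.5 times works as a slightly conservative shorthand.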

 

Why you should be cautious about poll results

Numbers imply precision, so it can be tempting to accept poll results at face value.

Here are reasons journalists should be cautious about the numbers:

  • Polls are not predictions; they are snapshots of opinion at the time they were conducted.
  • Polls can report only on the questions that were asked. There may be other important issues that were not included in the poll. For example, polls may not ask why people feel the way they do.
  • Polls cannot tell us about sub-groups in the population that are very small in number.
  • If a poll was conducted with an unrepresentative sample, its results cannot be generalized to the population.

How to identify a “push poll”

It happens every election cycle. You’ll get a call that sounds like a political poll but is really a campaign tactic. Some calls are “push polls,” political telemarketing that attempts to create negative views of candidates or issues. Others are legitimate message-testing surveys, used by campaigns to see which types of messages will be most successful.

Here’s how you can tell the difference.

Push polls

  • Often ask only one or very few questions, all about a single candidate or a single issue
  • Usually ask questions that describe the candidate or the issue in strongly negative (or sometimes uniformly positive) terms
  • May not name the organization conducting the calls, or sometimes use a phony name
  • Do not ask for demographic information
  • Can give evasive answers when you ask for information about the survey
  • Usually call very large numbers of people, sometimes many thousands
  • Do not use a random sample
  • Rarely, if ever, report results

Message testing

  • Usually based on a random sample of voters
  • The number of calls is within the range of legitimate surveys, typically between 400 and 1,500 interviews (see the margin-of-error sketch after this list)
  • Usually contains more than a few questions, including demographic data
  • Will often share results on request
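
As rough context for those sample sizes, the 95 per cent margin of sampling error for a simple random sample can be approximated as 1.96 × √(0.25 / n), expressed in percentage points. The short Python sketch below applies that textbook formula to the 400-to-1,500 range; the worst-case assumption of a 50/50 split is ours, not the course’s.

    import math

    def moe_95(n):
        """Approximate 95% margin of sampling error, in percentage points,
        for a simple random sample of n interviews (worst case, p = 0.5)."""
        return 100 * 1.96 * math.sqrt(0.25 / n)

    for n in (400, 1500):
        print(f"n = {n}: about +/- {moe_95(n):.1f} points")
    # n = 400: about +/- 4.9 points
    # n = 1500: about +/- 2.5 points

In other words, samples of 400 to 1,500 interviews give margins of error of roughly ±5 to ±2.5 percentage points, the range legitimate public surveys typically report.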

 

Taken from Understanding and Interpreting Polls (International), a self-directed course at Poynter NewsU, developed in partnership with the American Association for Public Opinion Research (AAPOR), the World Association for Social, Opinion and Market Research (ESOMAR) and the World Association for Public Opinion Research (WAPOR).