Pre-election surveys in the Philippines are doing Filipinos more harm than good as survey firms employ flawed and misleading methodologies long discredited in more advanced democracies.
This was revealed by former Senator Francisco “Kit” Tatad in a presentation entitled “Philippine Pre-election Surveys Are Fatally Flawed” before the Fernandina media forum at Club Filipino Wednesday (17 February 2010).
“Local pollsters have used methodologies and techniques that are flawed and discredited, and have long been discarded in the US, where public opinion polling was invented and turned into a billion-dollar industry,” Tatad said.
Among the flawed practices Tatad cited among local pollsters involved in pre-election surveys, such as Pulse Asia, Social Weather Stations (SWS) and other lesser-known practitioners, are face-to-face interviewing, quota and cluster sampling, loaded and lengthy questionnaires, trial-heat polls and "pick 3" polling, all of which, he said, are stated in the methodologies these companies have made public.
Tatad likewise pointed out that Philippine pollsters have ignored standards for the professional and ethical practice of public opinion polling that are elsewhere regarded as sacred by polling associations and reputable pollsters.
“Professional standards are virtually non-existent in the local opinion polling industry,” the former Senator said. “There is no law regulating the conduct of opinion polling, and no professional association of pollsters either to set and enforce standards of conduct and standards of disclosure and to ensure ‘the reliability and validity of survey results.’”
Tatad further said that the media have been unwitting purveyors of dubious findings, to the detriment of the election campaign and the public.
“The public would have had a better appreciation and understanding of public opinion polling had the media been a little more critical and vigilant,” he said.
Tatad added that in the US, the media normally ask the following 20 questions before publishing the results of any opinion poll:
- Who did the poll?
- Who paid for the poll and why was it done?
- How many people were interviewed for the survey?
- How were those people chosen?
- What area (nation, state or region) or what group (teachers, lawyers, Democratic voters, etc.) were those people chosen from?
- Are the results based on the answers of all the people interviewed?
- Who should have been interviewed and was not? Or do response rates matter?
- When was the poll done?
- How were the interviews conducted?
- What about polls on the Internet or World Wide Web?
- What is the sampling error for the poll results?
- Who’s on first?
- What other kinds of factors can skew poll results?
- What questions were asked?
- In what order were the questions asked?
- What about “push polls”?
- What other polls have been done on this topic? Did they say the same thing? If they are different, why are they different?
- What about exit polls?
- What else needs to be included in the report of the poll?
- So I’ve asked the questions. The answers sound good. Should we report the results?
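As an aside, the sampling-error question above has a straightforward arithmetic answer for a properly drawn simple random sample. The sketch below uses the standard worst-case formula at 95% confidence; the 1,200-respondent figure is illustrative only and is not taken from Tatad's presentation:

```python
import math

def margin_of_error(sample_size, z=1.96):
    """Worst-case margin of error for a simple random sample at
    95% confidence (z = 1.96), using p = 0.5, the proportion
    that maximizes the variance p * (1 - p)."""
    return z * math.sqrt(0.25 / sample_size)

# A hypothetical national pre-election survey of 1,200 respondents:
moe = margin_of_error(1200)
print(f"±{moe * 100:.1f} percentage points")  # about ±2.8 points
```

Note that this textbook figure assumes probability sampling; the quota and cluster sampling that Tatad criticizes introduce design effects that inflate the true error beyond what the formula reports.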
“The reason for asking these questions is plain enough: there are good polls and bad polls,” Tatad pointed out. “It appears that every poll should be judged guilty until proven otherwise.”