Tatad Asks COMELEC Sanction vs Pulse Asia, SWS

April 28, 2010

Senatorial candidate Francisco “Kit” Tatad formally asked the Commission on Elections en banc this Wednesday to enforce the Fair Election Act (Republic Act No. 9006) against Pulse Asia, SWS and other polling firms for not complying with the law’s provisions on election surveys.

The former senator pointed out that the law requires polling firms and media organizations to fully disclose the identities of survey sponsors and to open their data and survey methods to candidates, political parties and the COMELEC whenever survey results are published or reported.

Tatad pointed out that Pulse Asia has repeatedly refused to disclose the sponsors of its surveys on the grounds that their contracts are “confidential.” “This is an election offense,” he said.

He also complained of the refusal of SWS to provide full information about its surveys, including questionnaires, methodologies and sponsorships, despite repeated written requests. “This is also an election offense,” he said.

Tatad said that the controversy over the surveys can only be resolved through immediate and decisive COMELEC action because the survey firms have become more brazen in releasing highly doubtful survey results.

He cited the unbelievable claim of SWS that it was able to conduct three nationwide surveys in the span of 28 days: one on March 19-22, a second on March 28-30, and a third on April 15-17. “This is impossible to do in our archipelagic and multilingual country,” he said. “Even one survey per month is a daunting task.”

He said all this only proves that the surveys are being manipulated for candidates who are paying for them. “All the contracts for the conduct of surveys should be disclosed so that the public will know who and what are really behind the surveys,” he said. #

Comelec Urged to Sanction Pulse Asia, SWS, etc.

April 23, 2010

SENATORIAL candidate Francisco “Kit” Tatad has challenged the Commission on Elections (Comelec) to sanction the opinion polling firms and media establishments for publishing and reporting pre-election surveys without disclosing their sponsors and other necessary information as required by Republic Act 9006, or the Fair Election Act of 2001.

At the same time, Tatad urged all presidential, vice presidential and senatorial candidates who are skeptical about the opinion polling of Social Weather Stations, Pulse Asia and other polling firms to demand that said firms open all their polling records for inspection, copying and verification, as authorized by law.

Section 5.2 of RA 9006 provides that  during the election period, any person, natural as well as juridical, candidate or organization who publishes a survey must likewise publish the necessary material information about  it  to enable the public to determine its reliability.

This information includes full disclosure of the name or names of the person, candidate, party or organization who commissioned or paid for the survey, and the survey methodology used, including the number of individual respondents and the areas from which they were selected, and the specific questions asked.

The law also provides  that the survey, together with the raw data gathered to support its conclusions, shall be available for inspection, copying and verification by the Comelec or by a registered political party or candidate or any Comelec-accredited citizens’ arm.

Tatad said the polling firms have been more interested in manufacturing public opinion for or against certain candidates than in measuring actual opinion. He was particularly critical of the polling firms’ practice of selling to candidates sponsorships of the surveys and the right to introduce their own questions, without disclosing the sponsors’ identities to the public.

Pulse Asia says the matter is covered by a “confidentiality” agreement.  But Tatad says the fair election law requires them to disclose the names of candidate sponsors of the surveys.

Moreover, “it is an election-related expense which every candidate is required to report to the Comelec. It is also part of the polling firm’s income which must be reported when paying taxes,” Tatad said.

The former senator said it is possible the pollsters “are selling not only the right to participate in the survey but also the right to appear in the ratings.” This could explain why some sitting senators who are widely ridiculed as completely useless Senate furniture are still rated as ‘popular’, while some nationally known, solid personalities do not figure in the charts at all.

Tatad was the first candidate to openly criticize  SWS and Pulse Asia for using “quota sampling” and “face-to-face interviews” after  these methods had been  abandoned in the United States, where over the years leading pollsters had made serious miscalls in the  presidential elections.

Several presidential candidates, notably Senators Richard Gordon and Jamby Madrigal, have since joined in the criticism of local pollsters and their surveys. Some Filipino statisticians and survey science experts have also weighed in on the issue. The tabloid press has run headlines and editorials about it, in stark contrast to the mainstream press, which has given it far less space.

Quoting American polling experts, Tatad also faulted the local pollsters for using the old hypothetical question: “If elections were held today, whom would you vote for, among the named candidates?” He said the question compels even the undecided to give a “top of the head” answer; that is why it is called a “forced-choice question,” which suppresses and distorts the real number of the “undecided.”

US expert David Moore, a former Gallup vice-president, says that the practice leads to survey firms manufacturing public opinion instead of measuring it.

Tatad has accused the polling firms of biasing their surveys to suppress the torrid expressions of popular support that PMP presidential candidate and former President Joseph Ejercito Estrada has been getting in all parts of the country.

“This is why in every mammoth rally we have I always ask the crowd, whether they had been surveyed by anyone, and whether they knew of anyone who knew anyone who knew anyone who had been surveyed at all.  The  answer to this is always a great no,” Tatad said.

Tatad questioned the ability of the polling firms, notably SWS,  to come up with nationwide surveys almost within one week of each other, when independent experts maintain that one such survey normally takes two to three months to finish.

In its latest survey, SWS put the size of “undecided” at four to five percent, although in 1998, it said that one out of every voters remained undecided until election day,  while in 2004, 12 percent remained undecided and another 12 percent uncommitted until election day.   No explanation was offered for the statistical change, he noted.

Tatad said he may have found a “smoking gun” to support his charge of manipulation when someone called a radio program (Karambola) on DWIZ in Manila last week to report that an SWS interviewer in Cebu was asking his “random respondents” to choose between Manny Villar and Noynoy Aquino for president, and that when he protested  there were ten candidates to choose from, he was told that the choice had been narrowed down to two.

“It’s really a crooked business,” Tatad said.  He recalled that in 1992, upon his election to the Senate as a pro-life candidate, Mahar Mangahas of SWS showed a senators’ workshop the alleged results of a survey showing that any senator who did not support the government’s family planning program (now called “reproductive health”) would not get reelected.

“Not only was I reelected with flying colors in 1995, I also became Senate majority leader to five Senate presidents. But that SWS presentation made a lasting impression on me, on how polling could be used to promote certain advocacies. Mangahas has not deviated from that course since. He is still playing the same ugly game,” added Tatad, whose pro-life work has expanded to the international scene.

“In the 2004 presidential elections, Mangahas came up with an execrable exit poll in Metro Manila  that showed Mrs. Arroyo leading her rival Fernando Poe Jr. all the way. The official Comelec count, however,  showed FPJ taking all of Metro Manila, except for Las Pinas,” Tatad said.

“Despite that scandalous incident, SWS continues to do pre-election polling as though its reputation had never been tarnished,” Tatad lamented. “In the US, the Literary Digest quietly folded up after it had erroneously predicted the defeat of President Franklin Delano Roosevelt to Alf Landon in 1936, and the House of Representatives as well as the US Social Science Research Council investigated Gallup, Roper and Crossley after they had unanimously but erroneously predicted that President Harry Truman would lose to Thomas Dewey in 1948,” he pointed out.  #


April 17, 2010

The following is a series of blog posts by Billy Almarinez, published in his Facebook notes. I am sharing them here because they give a revealing insight into those SWS and Pulse Asia surveys. You can likewise access the notes directly on his Facebook account.

1. Analysis of Pulse Asia and Social Weather Stations Survey Methods

by Billy Almarinez

27 March 2010

Sample Size – Part 1

Not only have I been studying and teaching statistics as a college instructor, but I have also been using it as a scientist and researcher for quite some time now. However, it was only during the early morning of March 26, 2010 that I thought of trying to scrutinize SWS and Pulse Asia surveys from a statistical standpoint. Although SWS and Pulse Asia never reveal how they actually conduct the surveys (aside from indicating the questions asked, the number of respondents, and the margin of error they set), and although they usually argue that their methods are “tried and tested,” I think it would not hurt if we try to take a look at how representative their survey results are of the entire population of registered voters, using another generally accepted and “tried and tested” method we use in statistics.

I’m talking about Slovin’s formula.

It is only from members of a sample (as respondents) that data would be obtained through a survey, since a census (or gathering data from the entire population) is not feasible for data gathering given a short span of time and limited resources. It is important, however, that the sample used be as representative of the population as possible, so that inferences derived from analysis of sample data may be more or less applicable to the whole population. This may be ensured by using appropriate sampling methods and using appropriate sample sizes.

In statistics, Slovin’s formula is a generally accepted way of how to determine the size appropriate for a sample to ensure better representation of the population of a known size. The formula may be expressed as follows:

n = N / (1 + Ne^2)

where:
n = sample size
N = population size
e = margin of error
Again, in the context of SWS and Pulse Asia surveys, these groups could argue that they use formulas other than Slovin’s formula in coming up with sample sizes of 2,100 and 1,800 respondents respectively (I did a review of news clips from the ABS-CBN News web site, and noted that these two figures are the most commonly used sample sizes of the two survey groups). However, if we use Slovin’s formula (which is, again, a generally accepted and commonly used method in statistics), rather alarming ideas may be derived (alarming, considering how some Filipinos base and defend their decisions on who to vote for on survey results).

Considering that both survey groups usually set the margin of error at plus or minus 2 percent (or 0.02), here is what Slovin’s formula says about the SWS and Pulse Asia sample sizes (anybody with a considerable aptitude in algebra may verify these):

  • SWS’s survey over 2,100 respondents, with margin of error set at 0.02, assumes a population composed of only 13,125 individuals. In the context of election-related surveys, that would point to the survey results being possibly representative of a population of 13,125 registered voters nationwide.
  • Pulse Asia’s survey over 1,800 respondents, with margin of error set at 0.02, assumes a population composed of only 6,429 individuals. In the context of election-related surveys, that would indicate that the survey results may be representative of a population of 6,429 registered voters nationwide.
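The two bullet points above can be checked by inverting Slovin's formula: given a sample size n and margin of error e, solve n = N / (1 + Ne^2) for N, which gives N = n / (1 − n·e^2). A minimal sketch in Python (the function names are my own, purely illustrative):

```python
def slovin_sample_size(N, e):
    """Slovin's formula: sample size n for a population of size N
    at margin of error e, i.e. n = N / (1 + N*e^2)."""
    return N / (1 + N * e ** 2)

def implied_population(n, e):
    """Invert Slovin's formula: the population size N for which a
    sample of n at margin of error e would be exactly adequate,
    i.e. N = n / (1 - n*e^2)."""
    return n / (1 - n * e ** 2)

# The SWS and Pulse Asia sample sizes at the usual +/-2% margin of error
for n in (2100, 1800):
    print(n, "respondents ->", round(implied_population(n, 0.02)))
# 2100 respondents -> 13125
# 1800 respondents -> 6429
```

Running this reproduces the 13,125 and 6,429 figures cited above.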

Now let’s see…. Based on Slovin’s formula, the SWS sample size seems to assume that there are only 13,125 registered voters, while the Pulse Asia sample size seems to assume that there are only 6,429 registered voters nationwide. How many registered voters are there in the country? Can anyone provide the actual population size of registered voters? Is there anybody reading this who knows anyone from COMELEC? I’m sure there’s a definite figure.

Well, with or without the actual figures from COMELEC, I believe 13,125 and 6,429 are gross underestimations of the actual number of registered voters in the country.

I’m not trying to disprove SWS or Pulse Asia here. Again, it is highly likely that they are using methods that do not include Slovin’s formula. However, here’s my point: before you believe that SWS and/or Pulse Asia survey results are what can actually be expected if elections were held then and there, think more than twice; from a statistical standpoint, it is also highly likely that the results do not really reflect what the entire Filipino electorate will actually and ultimately decide.

And that is not yet considering the sampling method employed by these survey groups.

Hence, to the SWS and Pulse Asia survey frontrunners and their supporters, I suggest you not keep your hopes too high, or you may end up disappointed if the actual results of the elections do not reflect the trends in those survey results. And to survey tailenders and their supporters, there may actually be valid bases for not giving much credence to these survey results. Quoting Sen. Gordon: “The real ‘survey’ is on May 10, 2010.”

‘Nuff said!




By Billy Almarinez

27 March 2010

Sampling Method – Part 2

Last time, I attempted to discuss the questionable sample sizes being employed by SWS and Pulse Asia in their surveys. Here, I am going to give my take on the sampling method.

Again, as an instructor of college statistics and methods of research, I am inclined to question the methodology used by SWS and Pulse Asia in their conduct of surveys. Why? Because it is dubious how these two survey firms come up with results every few weeks if they are actually using scientifically and statistically sound methodologies. The question is rooted mainly in their reluctance (for some reason or another) to disclose the details of the procedures they employ.

Scientific technical reports, like those that present the results of surveys, should include a detailed description of how sampling was carried out. Unfortunately, every time SWS and Pulse Asia release survey results, they provide findings without specifying in detail how they conducted the study. They indicate only the sample size used, the margin of error they set, and the question given to respondents; the sampling method and how the survey was actually conducted (i.e., how the questionnaires were distributed) remain undisclosed to the general public.

Recently, news reports (mainly from ABS-CBN and GMA 7) on the most recent Pulse Asia survey results indicate that the survey firm used a “multistage random sampling method”. What did Pulse Asia mean by that? And how did they actually determine who the respondents would be? It is very easy for a researcher to say that he/she used or is going to use a random sampling method, but conducting one is actually not that simple. It is not as simple as going out in the street and handing a survey form to somebody the researcher meets “randomly”. Such an activity is not a probability (random) sampling method, in which all members of the population are supposed to have an equal probability of being selected into the sample. In the case of surveys conducted via true random sampling, all members of the population of registered voters (including me and you) should have an equal chance of being selected as a respondent.

How should sampling for an election-related survey (like the ones Pulse Asia and SWS supposedly conduct) be carried out in order for the results to be valid and reflective of the characteristics of the population? Here’s my take, and my attempt to discuss why it is not as simple and as easy as Pulse Asia and SWS want us and gullible voters to believe:

  • Given that the population size of registered voters is actually known, the general sample size should be determined using a tried-and-tested, statistically and generally accepted formula like Slovin’s formula, which I attempted discussing in my previous entry.
  • Given the heterogeneity (i.e., differences) and at the same time homogeneity (i.e., similarities) in demographics inherent in Filipino voters, a multi-stage sampling method combining cluster and stratified sampling should be employed. This further complicates the methodology, since registered voters can be subdivided into homogeneous groups in many ways. Sub-grouping can be done by age range, economic status, occupation, and other demographic parameters. Complications further arise because the proportion of each stratum (homogeneous sub-group) and the proportion of elements in each cluster (which in this case is a geographic location) in the population should be reflected in the sample. For instance, if the sample size is determined to be 2,500 and 20 percent of the population is in Metro Manila, then 500 respondents should come from Metro Manila. And that is not yet considering the proportion of each stratum identified (for instance, the youth sector). To make things less highfaluting: in short, sampling is not as simple as it seems.
  • Identification of respondents is another complicated aspect, especially if random sampling is to be employed. The researcher should first have a list of names of all of the members of the population. In this case, a complete list of registered voters from the COMELEC is to be used. Elements of sub-samples (following cluster and stratified sampling) are to be identified from the voters list. Here, another complication exists in the fact that although the names in the list are usually arranged alphabetically and divided by precinct, they are not grouped according to demographic parameters like gender, age range, economic status, and others. Hence, the burden of grouping the names according to gender, or age range, or economic status, or other demographic variables lies in the researcher. And that would require painstaking and time-consuming effort. And then the prospective respondents (elements of the sample) are to be chosen randomly, first by assigning numbers to each member of the population in the voters list and then choosing numbers via lottery (via fishbowl or tambiolo) or by using a table of random numbers, or by systematic sampling (where every nth member is chosen, n being a random number).
  • Another tricky part is the distribution of survey questionnaires to the respondents selected from the voters list. The researcher should exhaust all possible means of making sure a survey questionnaire is given to the specific name that has been selected as a respondent. Since the COMELEC voters list also contains addresses of the registered voters, distribution of the questionnaires can be done either by sending them through mail or courier or by conducting actual house visits. Complications may further arise if: (a) the voter selected as a respondent has already transferred residence but has not updated his/her address; (b) the selected respondent is unavailable during the time a house visit was conducted by the researcher; or (c) the selected respondent declines to send back an accomplished questionnaire that has been received by courier or mail. The third case is very common in the conduct of surveys via courier or mail, hence a researcher usually sets a sample size that is greater than the one determined via Slovin’s formula for contingency purposes.
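The proportional-allocation and systematic-sampling steps described above can be made concrete with a rough sketch. The cluster shares below are hypothetical, echoing only the 2,500-respondent example above; none of the figures come from COMELEC or the survey firms:

```python
import random

def proportional_allocation(total_sample, cluster_shares):
    """Split a total sample across clusters in proportion to each
    cluster's share of the population, e.g. a cluster holding 20% of
    the voters contributes 20% of the respondents."""
    return {name: round(total_sample * share)
            for name, share in cluster_shares.items()}

def systematic_sample(voters, k):
    """Systematic sampling from a voters list: pick a random starting
    name, then take every n-th name thereafter."""
    assert len(voters) >= k, "voters list too small for requested sample"
    step = len(voters) // k
    start = random.randrange(step)
    return [voters[start + i * step] for i in range(k)]

# Hypothetical regional shares, illustrating the 2,500-respondent example
alloc = proportional_allocation(2500, {
    "Metro Manila": 0.20, "Rest of Luzon": 0.40,
    "Visayas": 0.20, "Mindanao": 0.20,
})
print(alloc)  # Metro Manila -> 500, as in the example above
```

Even this sketch assumes the hard part is already done: a complete, demographically annotated voters list to sample from, which is precisely the painstaking work the bullets above describe.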

You may ask: where am I going with all that technical hullabaloo I just presented? Well, let me put it in the way of another list of points:

  1. Surveys conducted by asking or giving out questionnaires to random passersby, or by visiting random households, do not follow statistically and scientifically sound sampling protocols, even if researchers do this in different locations. The burden of explaining whether or not this is the type of method employed by SWS or Pulse Asia lies with these survey firms, and unfortunately they decline to disclose the specifics of their protocols.
  2. Respondents have to be selected via systematic or simple random sampling from the complete list of registered voters. Just asking or giving out questionnaires to random passersby or visiting households at random will not suffice, because doing so is not really a probability or random sampling method: not all members of the population of registered voters would have an equal chance of being selected as a respondent.
  3. Do SWS and Pulse Asia select their respondents randomly from the COMELEC list to ensure that the people they get data from are actually registered voters? This is highly doubtful, because I learned from one Facebook user that a household helper under his employ was once selected as a respondent of a survey by one of the two firms (I can’t remember which), when in fact she wasn’t even a registered voter. This implies that Pulse Asia and SWS may be choosing respondents who are not actually registered voters, further pointing to the possibility that they do not select their respondents from the COMELEC list of registered voters. It is possible that they are merely handing out questionnaires to random passersby or visiting random households without employing a true random or probability sampling protocol.
  4. Since the burden of answering with clarity the question in the previous item lies with Pulse Asia and SWS, if they do not prove that they are conducting their surveys properly by using a statistically valid and scientifically sound methodology (as they haven’t done so with their reluctance to provide specifics on the methodology that they carried out), then it is but proper for them not to present the results of their survey as if they actually reflect the characteristics of the voting population. As in any scientific report like a thesis or a dissertation, validity of findings must be established by also establishing the validity and reliability of the methodology employed in the study.
  5. Given that a properly conducted survey would entail a huge amount of effort and would consume a considerable amount of time and resources, isn’t it very dubious how SWS and Pulse Asia seem to come up with results almost every month, whose trends vary very minimally? Take for instance the percentage ratings of Sen. Noynoy Aquino, Atty. Gibo Teodoro, Sen. Dick Gordon, Bro. Eddie Villanueva, Sen. Jamby Madrigal, Mr. Nick Perlas, and Coun. JC de los Reyes. Isn’t it suspicious how their ratings seem to have become almost static? Unless SWS and Pulse Asia are conducting their periodic surveys over the same respondents again and again, variations should be present in the results, but that is not what we are seeing, is it? Come to think of it, it is indeed much easier if surveys are conducted over the same people; much less hassle in sampling, and no need to go through all of the scientific and statistical hullabaloo I presented earlier, don’t you think? Pun intended, of course.

Given the points and arguments I have discussed herein, let me reiterate what I mentioned in my previous blog entry: To the SWS and Pulse Asia survey frontrunners and their supporters, I suggest you not keep your hopes too high, or you may end up disappointed if the actual results of the elections do not reflect the trends in those survey results. And to survey tailenders and their supporters, there may actually be valid bases for not giving much credence to these survey results.

Unfortunately, the alarming thing is that gullible Filipinos readily believe that Pulse Asia and SWS survey results are reflective of what the results of the election will be, and that gullible voters even tend to base their choice of candidates on these surveys. The minds of the Filipino electorate are being conditioned to believe that the results these surveys show are the same results that can be expected in the May 10 elections. Survey frontrunners hail and tout Pulse Asia and SWS as highly credible and almost infallible, and they and their spin doctors already presume and arrogantly declare that the presidential election will be merely a two-man competition. Their supporters are being conditioned to think that the only way they could lose is if they are cheated out of victory, so that they would have a reason to vehemently protest an outcome different from what they are expecting.

Again, quoting from Sen. Gordon, “The real ‘survey’ is on May 10, 2010” (Technically, it is not a survey but a census.) Do not be surprised should the results of the May 10 elections turn out to be different from what the results of Pulse Asia and SWS surveys indicate.

Comments and rebuttals are most welcome.

‘Nuff said!

Pulse Asia Evades Disclosure of Sponsors; Election Surveys Suppress Number of Undecided Voters; Insiders Blow Whistle on Local Pollsters

March 29, 2010

Survey Watch

Bulletin No. 3

6 March 2010


by Francisco S. Tatad

The release yesterday by Pulse Asia of the results of its February pre-election survey comes at a time when there is widespread public questioning of the methods and practices of opinion polling firms and the injurious effects of surveys on the election campaign. The criticisms that many of us have raised probably came too late to influence in any manner the way this recent Pulse Asia survey was done. But some omissions could have been rectified, and were not. We detail these objections in this bulletin, along with an important new research finding that should make the nation more skeptical of survey results.

The media and the public should not be lulled by the positional or percentage changes in the horse race into thinking that Pulse Asia has improved its methodology or that the malpractices have been corrected. They have not.



March 5, 2010

Because of the unduly large role being played by the polling agencies in choosing national and even local candidates, I am rerunning this article that first appeared during the 2004 presidential elections to help the reader gain a sober and intelligent perspective on the credibility of these agencies.

FST Documentary Service
All inquiries to Sen. Kit Tatad, Tel. No. 9283627



On May 10, 2004, after the counting of votes at the precincts, ABS-CBN began broadcasting a Quick Count. There was no attempt on the part of government to stop it. That would occur much later, when the Commission on Elections and the Department of Justice stopped ABC-Channel 5 from conducting its own count, and ordered the Daily Tribune not to carry ads containing hitherto unpublished election results supplied by the Opposition.

As of 4 a.m. of May 11, 2004, the first 1.224 million votes counted were distributed as follows:

NCR — 20.46 percent
Luzon — 39.60 percent
Visayas — 18.58 percent
Mindanao — 21.36 percent

These were shared as follows:

Candidate      National    NCR       Luzon     Visayas   Mindanao

FPJ            480,207     100,486   212,098    62,518   114,105
GMA            466,294      70,182   157,527   143,470    95,107
LACSON         202,929      56,954    80,452    18,655    46,858
ROCO            93,100      22,245    45,045    13,570    12,239
VILLANUEVA     102,249      27,102    40,973    13,254    20,920

At this time of day, Mr. Mahar Mangahas appeared at ABS-CBN to deliver the first results of the ABS-CBN/SWS exit poll. Based on 528 NCR respondents, this poll reported the following findings, as of 2:30 a.m.

GMA — 31 percent
FPJ — 23 percent
LACSON — 20 percent
VILLANUEVA — 10 percent
ROCO — 8 percent
NO ANSWER — 7 percent

Notice the big difference between the Quick Count’s figure and this one. Yet the votes of SWS’s 528 respondents tried to reverse the impact of the 276,969 NCR voters who had put FPJ ahead of GMA by at least 30,304 votes. In the SWS survey, FPJ was now 8 percent behind.

Mangahas did not find his NCR poll conclusive. In a text message to Ms. Susan Tagle, FPJ’s communications aide, received at 3:30 a.m. of May 11, 2004, Mangahas said:

“Frm Mahar: Sori 2 disappoint. She has lead, but inconclusiv sins many kept silent. Ds is ncr only. On d way to abs now. “

Despite such inconclusiveness, he submitted the results anyway.

Given the wide discrepancy between its own Quick Count and the SWS survey, ABS-CBN management did not quite know how to proceed. They could not possibly present to the public two sets of conflicting data and still claim any credibility. According to inside informants, Mr. Dong Puno could not decide, so they woke up Mr. Gaby Lopez at 5:30 a.m. But Mr. Lopez himself could see no way out either.

Someone then proposed that the Quick Count be made to support the thrust of the SWS survey, which would eventually show GMA leading FPJ nationwide. They proposed to revise the geographic distribution of the vote count, by increasing the percentage for the Visayas (where GMA was leading FPJ) and bringing down the percentage for NCR, Luzon and Mindanao (where she was trailing FPJ).

The proposed new distribution was as follows:

NCR — 18.38 percent
LUZON — 35.84 percent
VISAYAS — 26.05 percent
MINDANAO — 19.73 percent

This proposal was accepted, and Mr. Lopez immediately left for the United States.

At 7:18 a.m. same day, the Quick Count showed the following results:

Candidate      National    NCR       Luzon     Visayas   Mindanao

GMA            587,027      76,893   168,555   237,616   103,963
FPJ            562,976     105,646   229,033   102,709   125,588
LACSON         231,195      60,273    88,685    29,525    52,712
ROCO           109,902      24,270    47,925    24,150    13,557
VILLANUEVA     122,881      29,618    44,211    26,406    22,646

Just by readjusting the geographic distribution of the count, GMA was able to wipe out her 13,913-vote deficit in the 4 a.m. report and post a lead of 24,051 votes in the 7:18 a.m. count. This tends to show that GMA had posted an additional 37,964 votes on FPJ while he posted zero. In reality, neither the votes nor the margins in NCR, Luzon, Visayas and Mindanao changed; only the percentage distribution of the votes did. Perception of reality changed, but not reality itself.
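The swing in the margin can be computed directly from the national totals in the two Quick Count tables reproduced above, as printed:

```python
# National totals from the 4:00 a.m. and 7:18 a.m. Quick Count reports above
gma_4am, fpj_4am = 466_294, 480_207
gma_718, fpj_718 = 587_027, 562_976

deficit_4am = fpj_4am - gma_4am   # GMA trailing FPJ at 4:00 a.m.
lead_718 = gma_718 - fpj_718      # GMA leading FPJ at 7:18 a.m.
swing = deficit_4am + lead_718    # net change in the head-to-head margin

print(deficit_4am, lead_718, swing)  # 13913 24051 37964
```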

As of 7:18 a.m. of May 11, the Quick Count had counted a total 1,613,981 actual votes for all the presidential candidates. In this count, FPJ led GMA in NCR, Luzon, and Mindanao. The only place where GMA led FPJ was the Visayas whose percentage share of the count had been changed, from 18.58 to 26.05 percent.

At 1:00 p.m. of the same day, however, the SWS exit poll, using a base of 4,627 respondents, reported the following results:

Candidate (percent)   National   NCR   N/Cen. Luzon   So. Luzon   Visayas   Mindanao

GMA                      41      31         32            24         62        50
FPJ                      32      23         41            38         21        35
LACSON                    9      20          9            11          4         5
ROCO                      5       8          2            11          3         2
VILLANUEVA                5      10          5             6          2         4
NO ANSWER                 8       7         11             9          7         4

Notice that GMA now leads FPJ everywhere, except Luzon.

This trend has since migrated to the Namfrel count.

Namfrel has clearly adopted the same formula used by ABS-CBN and SWS. Simply by concentrating its Quick Count on areas where GMA has more votes than FPJ, while delaying the count in bigger areas where FPJ is leading GMA by wide margins, it is able to show GMA leading FPJ across the nation.

Thus, as of 4 p.m. on May 17, Namfrel had already counted 1.68 million, or 51.13 percent, of the 3.29 million votes of Region VII, while counting only 1.19 million of Metro Manila’s 6.9 million votes and an average of 22.12 percent for the other regions.

In Central Luzon, where FPJ’s advantage was never disputed, except in Mrs. Arroyo’s home province of Pampanga, the same Namfrel report showed Mrs. Arroyo leading FPJ, 693,461 to 252,794. The only possible explanation was that the votes were taken mostly, if not entirely, from Pampanga.

This manipulation of public perceptions would not by itself compromise the integrity of the data if there were no attempt to change the data as well. But it is precisely part of the operation to alter them. Once the trend is accepted, even the most militant tend to drop their guard and accept whatever follows, even if it is the product of invention or fraud.

The twin motu proprio orders of the Department of Justice and the Commission on Elections stopping the independent broadcast by ABC-Channel 5 of the election results, the refusal of the pro-Arroyo newspapers to carry paid KNP advertisements containing hitherto unreported election results, and now the Comelec order barring the Daily Tribune from carrying the same ads must all be seen in this light. They are an integral part of the effort to steal the elections.

In the past, such an attack on press freedom and on the people’s right to be informed on matters of public concern would have ignited a rebellion in the ranks of the press. Not now. The mainstream media, both print and broadcast, have chosen to look the other way while two of their own fight for their freedoms. This shows the depth of the sinful collaboration between so many media owners and the administration. It has become the gravest danger to the democratic system.

There’s something wrong about our surveys!

March 2, 2010
Written by J.A. de la Cruz / Coast-to-Coast
For the nth time, the local political survey firms Social Weather Stations (SWS) and Pulse Asia are under withering scrutiny. Not just by candidates and their political advisers, but by a growing number of observers, including members of academe, who are increasingly concerned about the methods, practices and, yes, the results of these surveys as purveyed by the firms and their adherents.

These sectors are coming around to the view that our local pollsters are doing a great disservice to the public and to the social-science profession by using flawed, even long-discarded, methods in their determination of public opinion, and then issuing the polling results in a skewed, helter-skelter way, without so much as offering the obligatory caveats about their work. They are also being taken to task for their adroit (some suggest deceptive) “marketing” operations, as they actively promote their “studies” and seek “sponsors” (“subscribers” is what these firms call them) to cover the costs of their operations.

The critics insist that these firms have taken a larger-than-life role in public life, promoting candidates and advocacies with hardly any accountability at all. They contend that, instead of becoming enhancers, the firms have become degraders of our democratic aspirations. Having taken root in our democratic discourse and playing such a key role in the shaping of public opinion, these firms should have their own operations scrutinized and subjected to the rigors of real, factual and scientific research, with no room for intervention of any kind from any source.

These concerns have become even more telling in the run-up to the May elections, as the country’s future gets so closely interlinked to the “polled” fortunes of the candidates. Instead of the candidates being scrutinized for their views, their past performance and their character and standing in private and public life, the public is fed with, at best, less-than-exemplary polling results. That these firms have had their own share of “boo-boos” in the past makes such a scrutiny even more necessary and urgent at this time.

Tatad’s view

Sen. Kit Tatad, who is seeking a comeback and has been a victim of flawed survey results in the past, has actively sought greater transparency and accountability on the part of SWS and Pulse Asia. In our regular Kapihan sa Sulô forum last Saturday, Tatad asked that these firms refrain from conducting surveys and purveying the results until they can clear themselves, as it were, of past mistakes and indiscretions. Said Tatad: “SWS should first explain its fatally flawed exit poll of the 2004 elections in Metro Manila before it conducts yet another opinion poll related to the May 10 elections. For its part, Pulse Asia should disclose to the public how many candidates have paid how much in order to participate in and benefit from its surveys. I believe this is the irreducible minimum ethical and professional requirement before the two firms resume their unrestrained effort to shape public perceptions on the next presidential elections. Our people have a right to make this demand in light of the far-from-exemplary record of the two firms and the unaccountable power they now seem to possess.

“On May 11, 2004, within hours of the close of balloting, SWS announced that the incumbent President Gloria-Macapagal Arroyo got 31 percent of the votes in Metro Manila as against opposition candidate Fernando Poe Jr., who reportedly got 23 percent. The exit poll, commissioned by ABS-CBN, was conducted in the homes of 528 voters in the National Capital Region [NCR]. However, when the official Commission on Elections count came, Mr. Poe took the NCR with 1,452,380 votes or 36.67 percent of the votes, while Mrs. Arroyo got 1,049,016 votes or 26.46 percent of the votes. Mr. Poe won in all Metro Manila cities and towns except Las Piñas, where he lost by a mere 1,876 votes.

“This gross misreading of the results of the 2004 presidential elections in Metro Manila was far more devastating than the costly error of the Literary Digest in predicting President Franklin Delano Roosevelt’s defeat at the hands of Alf Landon in 1936, and George Gallup’s, Archibald Crossley’s and Elmo Roper’s common error in predicting President Harry Truman’s defeat at the hands of Thomas Dewey in 1948. Why so? Because while the American pollsters had erred in their respective pre-election surveys, the best of which could never be completely free from error, SWS had messed up in an exit poll, where no professional pollster should.

“Similarly, I would ask Pulse Asia to make a full disclosure of the services it has sold to politicians who are eager to rate in its surveys. Contrary to what appears to be sound ethical practice, the firm has been inviting politicians to participate in its surveys at the rate of P400,000 per head, and to introduce ‘rider’ questions about their candidacies at P100,000 each. The politicians’ names have never been published, and neither have the ‘rider’ questions,” Tatad said. (Note: Actually, both firms and others conducting political surveys should make this disclosure).

Pulse Asia’s caveats

Indeed, these firms, being the leaders in the field, have a duty to make as much of their operations (and connections) open to the public. To his firm’s credit, Pulse Asia president Prof. Ronnie Holmes gamely answered questions about their polling methodology, their practices and, yes, their “subscribers” and other “clients.” Holmes noted that their methods and practices have adhered closely to the requirements of polling, and that they are an active member of the Philippine Social Science Council, the umbrella organization that ostensibly monitors its members’ operations as a largely self-regulatory body. He also noted that their records are open to public scrutiny and that they are open to discussing their undertakings, including “marketing” activities, with the media and other sectors.

He also agreed with our other guest, veteran journalist Yen Makabenta, that it may, indeed, be necessary to change the survey question on the most sought-after issue at this point (who one would vote for as president if the elections were held today), as such a question effectively suppresses the actual percentage of undecided voters. Quoting pollster David Moore, Makabenta noted that such a “vote choice” (forced-choice) question glosses over voter indecision, which is to be expected in an election campaign, as a good number of voters actually make their choice right at the precinct, or just days or hours before going to the polls. “The worst sin in poll reporting,” Moore noted, “was hedging,” the very thing a “forced choice” question is designed to prevent. He also noted that in the US, the undecided can range from a low of 20 percent to as high as 70 percent, depending on how far away the election is.

Curiously, with three months to go before the May elections, both SWS and Pulse Asia are reporting a very low “undecided” figure of only 2 to 4 percent, almost negligible by polling standards. Yet these results, which gloss over what is likely a huge “undecided” bloc, are reported as if cast in stone, bringing the candidates and their advisers to moments of ecstasy or exasperation, depending on which side one is on. In place of this skewed, if not totally discardable, question, Moore suggested a new one which, to my mind, better captures the opinion or sentiment of a respondent. Translated into the coming polls, it should read: “In the May election, who would you vote for as president, or haven’t you yet made up your mind?” And those who have made a decision should be asked a rider: “Is that a firm choice, or could you change your mind before Election Day?”

Moore’s point

In his book, The Opinion Makers, Moore makes the startling conclusion that pollsters “do not measure public opinion, they manufacture it.” He anchors this contention on the practice of polling firms to gloss over “voter indecision” during an election campaign. Moore notes:

“There is a crisis in public-opinion polling today, a silent crisis that no one wants to talk about. The problem lies not in the declining response rates and the increasing difficulty of obtaining a representative sample, though these are issues the polling industry has to address. The problem lies, rather, in the refusal of media polls to tell the truth about those surveyed and about the larger electorate. Rather than tell us the essential facts about the public, they feed us a fairy-tale picture of a completely rational, all-knowing and fully engaged citizenry. They studiously avoid reporting on widespread public apathy, indecision and ignorance. The net result is conflicting poll results and a distortion of public opinion that challenges the credibility of the whole polling enterprise. Nowhere is this more often the case than in election polling.”

So there. To those who have taken the polling firms as oracles, and their surveys as unvarnished truth about the public’s opinion of the various candidates, we can only say: caveat emptor. And let us move on to make those surveys truly reflective of the public pulse, not a skewed or, worse, manufactured one.

Tatad calls on media to be wary of fatally flawed pre-election surveys

February 17, 2010

Pre-election surveys in the Philippines are doing Filipinos more harm than good as survey firms employ flawed and misleading methodologies long discredited in more advanced democracies.

This was revealed by former Senator Francisco “Kit” Tatad in a presentation entitled “Philippine Pre-election Surveys Are Fatally Flawed” before the Fernandina media forum at Club Filipino Wednesday (17 February 2010).

“Local pollsters have used methodologies and techniques that are flawed and discredited, and have long been discarded in the US, where public opinion polling was invented and turned into a billion-dollar industry,” Tatad said.

Among the flawed practices Tatad cited in the pre-election surveys of local pollsters such as Pulse Asia, Social Weather Stations (SWS) and other less known practitioners are face-to-face interviewing, quota and cluster sampling, loaded and lengthy questionnaires, trial-heat polls and “pick 3” polling, all of which, he said, are stated in the methodologies made public by these companies.

Tatad likewise pointed out that Philippine pollsters have ignored standards for the professional and ethical practice of public opinion polling that are elsewhere regarded as sacred by polling associations and reputable pollsters.

“Professional standards are virtually non-existent in the local opinion polling industry,” the former senator said. “There is no law regulating the conduct of opinion polling, and no professional association of pollsters to set and enforce standards of conduct and disclosure and to ensure the reliability and validity of survey results.”

Tatad further said that the media have been unsuspecting purveyors of dubious findings, to the detriment of the election campaign and the public.

“The public would have had a better appreciation and understanding of public opinion polling had the media been a little more critical and vigilant,” he said.

Tatad added that in the US, the media normally ask the following 20 questions before publishing the results of any opinion poll:

  1. Who did the poll?
  2. Who paid for the poll and why was it done?
  3. How many people were interviewed for the survey?
  4. How were those people chosen?
  5. What area (nation, state or region) or what group (teachers, lawyers, Democratic voters, etc.) were those people chosen from?
  6. Are the results based on the answers of all the people interviewed?
  7. Who should have been interviewed and was not? Or do response rates matter?
  8. When was the poll done?
  9. How were the interviews conducted?
  10. What about polls on the Internet or World Wide Web?
  11. What is the sampling error for the poll results?
  12. Who’s on first?
  13. What other kinds of factors can skew poll results?
  14. What questions were asked?
  15. In what order were the questions asked?
  16. What about “push polls”?
  17. What other polls have been done on this topic? Did they say the same thing? If they are different, why are they different?
  18. What about exit polls?
  19. What else needs to be included in the report of the poll?
  20. So I’ve asked the questions. The answers sound good. Should we report the results?

“The reason for asking these questions is plain enough: there are good polls and bad polls,” Tatad pointed out. “It appears that every poll should be judged guilty until proven otherwise.”


See full presentation and related documents