Not One Voter at Erap’s Campaign Rallies Has Ever Been Surveyed – Tatad

March 29, 2010

AT every campaign rally of President Estrada and Pwersa ng Masang Pilipino — most recently in Cabanatuan City, Nueva Ecija — former Senator Kit Tatad always asks the thousands in the audience whether any of them has ever been surveyed by Pulse Asia, SWS, or the other firms conducting surveys in the current campaign.

After eight weeks of Pwersa rallies in Luzon, Visayas and Mindanao which millions have attended, not one voter has yet answered “Yes” to Tatad’s question.

Tatad asks who in the crowd has been surveyed during a Pwersa ng Masang Pilipino rally

When asked further whether they know of anyone who has been surveyed, the answer has also been in the negative.

In Cabanatuan City, Tatad told members of the media: “Where are these 1,800 voters that the survey firms have surveyed and whose opinions are supposed to represent our 50 million voters? Are they talking only to themselves and members of the families of their staff?”

The senatorial candidate said that there are more questions than answers coming from the polling firms, ever since he exposed last month that local pollsters are using discredited methodologies and questionable practices in their polling.

Tatad disclosed that he has written to Pulse Asia, SWS, TNS and Stratpolls for full details about their recent surveys with respect to sampling, methodology, questionnaire and sponsorship.

Tatad greets supporters in Cabanatuan

“After two weeks, only Pulse Asia has so far replied,” he said. “The others keep offering excuses. And Pulse Asia steadfastly declined to divulge the identity of sponsors on the grounds of confidentiality of its sponsorship agreements.”

Tatad contends that disclosure of the contracts is required under the election act because sponsorship of surveys is an election expense. And candidates are required by law to disclose all expenses to the Commission on Elections.

He said that if elected to the Senate, he will file a bill in Congress for the rationalization and enforcement of standards in public opinion polling in the Philippines.

“It’s time we have such a law,” he said. “If many advanced countries enforce regulations on election surveys to ensure the integrity of their democracy, even more should we protect ours from unscrupulous operators.” #


Pulse Asia Evades Disclosure of Sponsors; Election Surveys Suppress Number of Undecided Voters; Insiders Blow Whistle on Local Pollsters

March 29, 2010

Survey Watch

Bulletin No. 3

6 March 2010


by Francisco S. Tatad

The release yesterday by Pulse Asia of the results of its February preelection survey comes at a time of widespread public questioning of the methods and practices of opinion polling firms and of the injurious effects of surveys on the election campaign. The criticisms that many of us have raised probably came too late to influence in any manner the way this recent Pulse Asia survey was done. But some omissions could have been rectified and were not. We detail these objections in this bulletin, along with an important new research finding that should make the nation more skeptical of survey results.

The media and the public should not be lulled by the positional or percentage changes in the horse race into thinking that Pulse Asia has improved its methodology or corrected the malpractices. It has not.



March 5, 2010

Because of the unduly large role being played by the polling agencies in choosing national and even local candidates, I am rerunning this article that first appeared during the 2004 presidential elections to help the reader gain a sober and intelligent perspective on the credibility of these agencies.

FST Documentary Service
All inquiries to Sen. Kit Tatad, Tel. No. 9283627



On May 10, 2004, after the counting of votes at the precincts, ABS-CBN began broadcasting a Quick Count. There was no attempt on the part of government to stop it. That would occur much later, when the Commission on Elections and the Department of Justice stopped ABC-Channel 5 from conducting its own count, and ordered the Daily Tribune not to carry ads containing hitherto unpublished election results supplied by the Opposition.

As of 4 a.m. of May 11, 2004, the first 1.224 million votes counted were distributed as follows:

NCR — 20.46 percent
Luzon — 39.60 percent
Visayas — 18.58 percent
Mindanao — 21.36 percent

These were shared as follows:

Candidate     National   NCR       Luzon     Visayas   Mindanao
FPJ           480,207    100,486   212,098   62,518    114,105
GMA           466,294    70,182    157,527   143,470   95,107
LACSON        202,929    56,954    80,452    18,655    46,858
ROCO          93,100     22,245    45,045    13,570    12,239
VILLANUEVA    102,249    27,102    40,973    13,254    20,920

At this time of day, Mr. Mahar Mangahas appeared on ABS-CBN to deliver the first results of the ABS-CBN/SWS exit poll. Based on 528 NCR respondents, this poll reported the following findings as of 2:30 a.m.:

GMA — 31 percent
FPJ — 23 percent
LACSON — 20 percent
VILLANUEVA — 10 percent
ROCO — 8 percent
NO ANSWER — 7 percent

Notice the big difference between the Quick Count’s figures and this one. The responses of SWS’s 528 respondents were set against the 276,969 NCR voters already counted, who had put FPJ ahead of GMA by at least 30,304 votes. In the SWS survey, FPJ was now 8 percentage points behind.

Mangahas did not find his NCR poll conclusive. In a text message to Ms. Susan Tagle, FPJ’s communications aide, received at 3:30 a.m. of May 11, 2004, Mangahas said:

“Frm Mahar: Sori 2 disappoint. She has lead, but inconclusiv sins many kept silent. Ds is ncr only. On d way to abs now. “

Despite such inconclusiveness, he submitted the results anyway.

Given the wide discrepancy between its own Quick Count and the SWS survey, ABS-CBN management did not quite know how to proceed. They could not possibly present to the public two sets of conflicting data and still claim any credibility. According to inside informants, Mr. Dong Puno could not decide, so they woke up Mr. Gaby Lopez at 5:30 a.m. But Mr. Lopez himself could see no way out either.

Someone then proposed that the Quick Count be made to support the thrust of the SWS survey, which would eventually show GMA leading FPJ nationwide. They proposed to revise the geographic distribution of the vote count, by increasing the percentage for the Visayas (where GMA was leading FPJ) and bringing down the percentage for NCR, Luzon and Mindanao (where she was trailing FPJ).

The proposed new distribution was as follows:

NCR — 18.38 percent
LUZON — 35.84 percent
VISAYAS — 26.05 percent
MINDANAO — 19.73 percent

This proposal was accepted, and Mr. Lopez immediately left for the United States.

At 7:18 a.m. of the same day, the Quick Count showed the following results:

Candidate     National   NCR       Luzon     Visayas   Mindanao
GMA           587,027    76,893    168,555   237,616   103,963
FPJ           562,976    105,646   229,033   102,709   125,588
LACSON        231,195    60,273    88,685    29,525    52,712
ROCO          109,902    24,270    47,925    24,150    13,557
VILLANUEVA    122,881    29,618    44,211    26,406    22,646

Just by readjusting the geographic distribution of the count, GMA was able to wipe out her deficit of 22,903 votes at the 4 a.m. report, and post a lead of 24,051 votes at the 7:18 a.m. count. This tends to show that GMA had posted an additional 46,954 votes while FPJ posted zero. In reality, neither the votes nor the margins in NCR, Luzon, Visayas and Mindanao changed; only the percentage distribution of the votes did. Perception of reality changed, but not reality itself.

As of 7:18 a.m. of May 11, the Quick Count had counted a total 1,613,981 actual votes for all the presidential candidates. In this count, FPJ led GMA in NCR, Luzon, and Mindanao. The only place where GMA led FPJ was the Visayas whose percentage share of the count had been changed, from 18.58 to 26.05 percent.
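The “proposed new distribution” is arithmetically consistent with the 7:18 a.m. table: summing each regional column and dividing by the 1,613,981-vote total reproduces the four percentages exactly. A short Python check, with the figures copied from the table above:

```python
# Recompute the regional shares of the 7:18 a.m. Quick Count from the
# per-candidate table above (GMA, FPJ, Lacson, Roco, Villanueva rows).
votes = {
    "NCR":      [76_893, 105_646, 60_273, 24_270, 29_618],
    "Luzon":    [168_555, 229_033, 88_685, 47_925, 44_211],
    "Visayas":  [237_616, 102_709, 29_525, 24_150, 26_406],
    "Mindanao": [103_963, 125_588, 52_712, 13_557, 22_646],
}
totals = {region: sum(v) for region, v in votes.items()}
grand_total = sum(totals.values())
shares = {region: round(100 * t / grand_total, 2) for region, t in totals.items()}
print(grand_total)  # 1613981, the "actual votes" figure cited in the text
print(shares)       # {'NCR': 18.38, 'Luzon': 35.84, 'Visayas': 26.05, 'Mindanao': 19.73}
```

The percentages, in other words, were not measured independently; they are simply the shares implied by which regions’ votes had been fed into the count.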

At 1:00 p.m. of the same day, however, the SWS exit poll, using a base of 4,627 respondents, reported the following results:

Candidate     National   NCR    N/Cen. Luzon   So. Luzon   Visayas   Mindanao
GMA           41%        31%    32%            24%         62%       50%
FPJ           32         23     41             38          21        35
LACSON        9          20     9              11          4         5
ROCO          5          8      2              11          3         2
VILLANUEVA    5          10     5              6           2         4
NO ANSWER     8          7      11             9           7         4

Notice that GMA now leads FPJ everywhere, except Luzon.

This trend has since migrated to the Namfrel count.

Namfrel has clearly adopted the same formula used by ABS-CBN and SWS. Simply by concentrating its Quick Count on areas where GMA has more votes than FPJ, while delaying the count in bigger areas where FPJ is leading GMA by wide margins, it is able to show GMA leading FPJ across the nation.

Thus, as of 4 p.m. on May 17, Namfrel had already counted 1.68 million, or 51.13 percent, of the 3.29 million votes of Region VII, while counting only 1.19 million of Metro Manila’s 6.9 million votes, and an average of 22.12 percent for the other regions.

In Central Luzon, where FPJ’s advantage was never disputed, except in Mrs. Arroyo’s home province of Pampanga, the same Namfrel report showed Mrs. Arroyo leading FPJ, 693,461 to 252,794. The only possible explanation was that the votes were taken mostly, if not entirely, from Pampanga.

This manipulation of public perceptions would not by itself compromise the integrity of the underlying data, if there were no attempt to alter the data themselves. But it is precisely part of an operation to do so. Once the trend is accepted, even the most vigilant tend to drop their guard and accept whatever follows, even if it is the product of invention or fraud.

The twin motu proprio orders of the Department of Justice and the Commission on Elections stopping the independent broadcast by ABC-Channel 5 of the results of elections, and the refusal by the pro-Arroyo newspapers to carry paid advertisements by the KNP containing hitherto unreported results of the elections, followed now by the Comelec order to the Daily Tribune not to carry the same ads – all these must be seen in this light. They are an integral part of the effort to steal the elections.

In the past, such an attack on press freedom and on the people’s right to be informed on matters of public concern would have ignited a rebellion in the ranks of the press. Not now. The mainstream media, both print and broadcast, have chosen to look the other way while two of their own fight for their freedoms. This shows the depth of the sinful collaboration between so many media owners and the administration. It has become the gravest danger to the democratic system.

There’s something wrong about our surveys!

March 2, 2010
Written by J.A. de la Cruz / Coast-to-Coast
Monday, 01 March 2010 21:39
For the nth time, the local political survey firms Social Weather Stations (SWS) and Pulse Asia are under withering scrutiny. Not just by candidates and their political advisers, but by a growing number of observers, including members of academe, who are increasingly concerned about the methods, practices and, yes, the results of these surveys as purveyed by the firms and their adherents.

These sectors are coming around to the view that our local pollsters are doing a great disservice to the public and to the social-science profession by using flawed, even long-discarded, methods in their determination of public opinion, and then issuing the polling results in a skewed, helter-skelter way, without so much as offering the obligatory caveats about their work. They are also being taken to task for their adroit (some suggest deceptive) “marketing” operations as they actively promote their “studies” and seek “sponsors” (“subscribers” is what these firms call them) to cover the costs of their operations.

The critics insist that these firms have taken on a larger-than-life role in public life, promoting candidates and advocacies with hardly any accountability at all. They contend that, instead of becoming enhancers, they have become degraders of our democratic aspirations. Having taken root in our democratic discourse and playing such a key role in the shaping of public opinion, these firms’ own operations should now be scrutinized and subjected to the rigors of real, factual and scientific research, with no room for intervention of any kind from any source.

These concerns have become even more telling in the run-up to the May elections, as the country’s future gets so closely interlinked with the “polled” fortunes of the candidates. Instead of the candidates being scrutinized for their views, their past performance and their character and standing in private and public life, the public is fed with, at best, less-than-exemplary polling results. That these firms have had their own share of “boo-boos” in the past makes such a scrutiny even more necessary and urgent at this time.

Tatad’s view

Comebacking Sen. Kit Tatad, who has been a victim of flawed survey results in the past, has actively sought greater transparency and accountability on the part of SWS and Pulse Asia. In our regular Kapihan sa Sulô forum last Saturday, Tatad asked that these firms refrain from surveying and purveying the results in the meantime until they can clear themselves, as it were, from past mistakes and indiscretions. Said Tatad: “SWS should first explain its fatally flawed exit poll of the 2004 elections in Metro Manila before it conducts yet another opinion poll related to the May 10 elections. For its part, Pulse Asia should disclose to the public how many candidates have paid how much in order to participate in and benefit from its surveys. I believe this is the irreducible minimum ethical and professional requirement before the two firms resume their unrestrained effort to shape public perceptions on the next presidential elections. Our people have a right to make this demand in light of the far-from-exemplary record of the two firms and the unaccountable power they now seem to possess.

“On May 11, 2004, within hours of the close of balloting, SWS announced that the incumbent President Gloria-Macapagal Arroyo got 31 percent of the votes in Metro Manila as against opposition candidate Fernando Poe Jr., who reportedly got 23 percent. The exit poll, commissioned by ABS-CBN, was conducted in the homes of 528 voters in the National Capital Region [NCR]. However, when the official Commission on Elections count came, Mr. Poe took the NCR with 1,452,380 votes or 36.67 percent of the votes, while Mrs. Arroyo got 1,049,016 votes or 26.46 percent of the votes. Mr. Poe won in all Metro Manila cities and towns except Las Piñas, where he lost by a mere 1,876 votes.

“This gross misreading of the results of the 2004 presidential elections in Metro Manila was far more devastating than the costly error of the Literary Digest in predicting President Franklin Delano Roosevelt’s defeat at the hands of Alf Landon in 1936, and George Gallup’s, Archibald Crossley’s and Elmo Roper’s common error in predicting President Harry Truman’s defeat at the hands of Thomas Dewey in 1948. Why so? Because while the American pollsters had erred in their respective pre-election surveys, the best of which could never be completely free of error, SWS had messed up in an exit poll, where no professional pollster should.

“Similarly, I would ask Pulse Asia to make a full disclosure of the services it has sold to politicians who are eager to rate in its surveys. Contrary to what appears to be sound ethical practice, the firm has been inviting politicians to participate in its surveys at the rate of P400,000 per head, and to introduce ‘rider’ questions about their candidacies at P100,000 each. The politicians’ names have never been published, and neither have the ‘rider’ questions,” Tatad said. (Note: Actually, both firms and others conducting political surveys should make this disclosure).

Pulse Asia’s caveats

Indeed, these firms, being the leaders in the field, have a duty to make as much of their operations (and connections) open to the public. To his firm’s credit, Pulse Asia president Prof. Ronnie Holmes gamely answered questions about their polling methodology, their practices and, yes, their “subscribers” and other “clients.” Holmes noted that their methods and practices have adhered closely to the requirements of polling, and that they are an active member of the Philippine Social Science Council, the umbrella organization of this largely self-regulating field, which ostensibly monitors its members’ operations. He also noted that their records are open to public scrutiny and that they are open to discussions with the media and other sectors as far as their undertakings, including “marketing” activities, are concerned.

He also agreed with our other guest, veteran journalist Yen Makabenta, that it may, indeed, be necessary to change the survey question in the most sought-after issue at this point—who would one vote for president if the elections were held today—as such effectively suppresses the actual percentage of undecided. Quoting pollster David Moore, Makabenta noted that such “vote choice” (a forced choice) question glosses over voter indecision, which is likely in an election campaign as a good number of voters actually make their choice right at the precinct level or just days or hours before going to the polls. “The worst sin in poll reporting,” Moore noted, “was hedging”—which is what happens with a “forced choice” question. He also noted that in the US, the undecided can range from a low of 20 percent to as high as 70 percent—depending on how far away the election is.

Curiously, with three months to go before the May elections, both SWS and Pulse Asia are reporting very low “undecided,” i.e., from 2 percent to 4 percent only—almost negligible by polling standards. Yet, these results, which gloss over the huge “undecided,” are reported as if cast in stone, bringing the candidates and their advisers to moments of ecstasy or exasperation, depending on which side one is on. To avoid this skewed, if not totally discardable, question, Moore suggested a new question which, to my mind, better captures the opinion or sentiment of a respondent. Translated into the coming polls, it should read: “In the May election, who would you vote for president, or haven’t you yet made up your mind?” And to those who made a decision, a rider to be asked should be: “Is that a firm choice, or could you change your mind before Election Day?”
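Moore’s point about question wording can be illustrated with a toy example. The composition below is invented purely for illustration, not survey data: the same set of underlying answers yields a single-digit “undecided” under a forced-choice question, and a far larger one when respondents may say they have not made up their minds.

```python
# Hypothetical electorate of 100 respondents; the mix (35/30 committed,
# 15/12 leaners, 8 truly undecided) is invented for illustration only.
answers = (["A"] * 35 + ["B"] * 30 +
           ["leaning A"] * 15 + ["leaning B"] * 12 + ["undecided"] * 8)

def forced_choice(answer):
    # "Who would you vote for if the election were held today?"
    # Leaners are pressed into naming a candidate.
    return answer.split()[-1] if answer.startswith("leaning") else answer

def open_choice(answer):
    # "Who would you vote for, or haven't you made up your mind?"
    # Leaners and the undecided may all decline to commit.
    return answer if answer in ("A", "B") else "undecided"

forced = [forced_choice(a) for a in answers]
opened = [open_choice(a) for a in answers]
print("forced-choice undecided:", forced.count("undecided"), "%")  # 8 %
print("open-form undecided:   ", opened.count("undecided"), "%")   # 35 %
```

Under these assumed numbers, the forced-choice form reports 8 percent undecided and the open form 35 percent, from identical respondents; the 2-to-4-percent figures being reported locally are an artifact of the question form as much as of the electorate.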

Moore’s point

In his book The Opinion Makers, Moore draws the startling conclusion that pollsters “do not measure public opinion, they manufacture it.” He anchors this contention on polling firms’ practice of glossing over “voter indecision” during an election campaign. Moore notes:

“There is a crisis in public-opinion polling today, a silent crisis that no one wants to talk about. The problem lies not in the declining response rates and increasing difficulty in obtaining representative samples, though these are issues the polling industry has to address. The problem lies, rather, in the refusal of media polls to tell the truth about those surveyed and about the larger electorate. Rather than tell us the essential facts about the public, they feed us a fairy-tale picture of a completely rational, all-knowing and fully engaged citizenry. They studiously avoid reporting on widespread public apathy, indecision and ignorance. The net result is conflicting poll results and a distortion of public opinion that challenges the credibility of the whole polling enterprise. Nowhere is this more often the case than in election polling.”

So there. To those who have taken on the polling firms as oracles, and their surveys unvarnished truth on the public’s opinion of the various candidates, we can only say: caveat emptor. And let us move on to make those surveys truly reflective of the public pulse, not a skewed or, worse, manufactured one.

Poll survey firms on the dock

March 2, 2010


(The Philippine Star) Updated February 19, 2010 12:00 AM

Like many others out there, my perception of the ongoing election campaign has been in no small way influenced by the pre-election surveys that are sprouting from everywhere and are now being fed to us with increasing regularity. The danger of the campaign turning into a horse race has come to pass. Every day, we open the daily papers to check who’s up and who’s down in the latest survey.

So I took more than routine interest in the presentation made by former Senator Francisco S. Tatad at the Club Filipino last Wednesday, in which he called into question the accuracy and integrity of the work of local pollsters and warned of the manipulation of public opinion by political and commercial interests.

The core of his brief is that: (1) local pollsters have been using methodologies and techniques that are flawed and discredited, and were discarded some time ago by social-research experts and professional pollsters in the US and other advanced countries; (2) local pollsters have ignored the strict standards for the professional and ethical practice of public opinion polling that elsewhere are regarded as sacred and inviolate by polling associations and reputable pollsters; and (3) in reporting the survey results, media organizations have been unwitting and unsuspecting purveyors of dubious findings, to the detriment of the election campaign and the public.

To support his indictment, Kit Tatad cited the writings of research experts and professional pollsters that detailed current survey research methodology and its evolution from survey methods and practices that proved “fatally flawed and inaccurate.” High among these discarded methodologies and practices are face-to-face interviewing and quota sampling which produce “contaminated data” according to top US experts. Significantly, these methods and techniques are being used by local survey firms like SWS, Pulse Asia, and the new survey groups to measure Philippine public opinion.

Finally, he underscored the quite remarkable fact that, for a highly sensitive service, local pollsters operate without restraints from either the law or professional standards set by private associations. Unlike market research, which has MORI, political pollsters are pretty much on their own. This, according to Tatad, has resulted in glaring excesses.

At the heart of public opinion polling is an almost incredible proposition: that by getting the opinions and preferences of some 1,500 to 2,500 citizens, pollsters can accurately divine the opinions and preferences of our 94 million citizens or 48 million voters. It takes a leap of faith on our part to believe that there’s a science here somewhere that makes this possible, and that those who practice it in our midst are professional and responsible.
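The “science here somewhere” is classical sampling theory: for a genuinely random sample, the margin of error shrinks with the square root of the sample size and is essentially independent of the population size, which is why 1,500 respondents can, in principle, stand in for 48 million voters. A minimal sketch of the standard 95-percent margin-of-error formula follows; the whole argument, of course, collapses if the sample is not random, which is precisely what the critics dispute.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n random respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes mentioned in this compilation: the 528-respondent SWS
# exit poll and the 1,500-2,500 respondents of typical national surveys.
for n in (528, 1_500, 1_800, 2_500):
    print(f"n = {n}: +/- {100 * margin_of_error(n):.1f} percentage points")
```

For n = 528 this gives about ±4.3 percentage points, and for n = 1,800 about ±2.3 — figures that hold only under random sampling and that say nothing about bias from how respondents were chosen.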

Public opinion polling is mainly a US invention, and it is in America where it is most heavily practiced and most advanced. The story of the craft has been a journey of ups and downs, marked by spectacular failures and then by steady improvement of methodology and professionalism in recent decades. Among the instances of pollsters going wrong are the erroneous forecast of Dewey’s victory over Truman in 1948 by top US polling firms like Gallup, and the more recent 1992 failure to read John Major’s victory in the United Kingdom.

In grappling with the failures, public opinion polling has sought to rid itself of the methods and practices that tarnished its work and tried to be more scientific and rigorous. According to the literature cited by Tatad, this was the reason for discarding face-to-face interviewing and quota sampling. In the words of pollster Kenneth Warren, these methods were proven to be “fatally flawed and grossly inaccurate.”

Here then is the riddle: if these methods are now passé in the US and other advanced countries, why are they being used by Filipino pollsters to measure Philippine public opinion? The answer is unsatisfactory: in developing countries and the emerging democracies of Eastern Europe, public opinion polling is still in its infancy, and what is not acceptable for the US and the advanced world is deemed good enough for less developed countries.

The justification is that quality polling is hard to do in these countries because of poor communication, poor transport systems and prohibitive costs in reaching respondents. So pollsters had to cut corners and come up with “creative solutions.” But this is slammed by critics like David Kennamer, professor of mass communications at Virginia Commonwealth University, who says that these solutions are “indefensible from a methodological perspective.”

Tatad says that there would be no controversy if polling firms, in presenting their survey results, took care to inform the public of their basic limitations and the danger of errors. But instead the local poll surveys were transmogrified into a virtual dictator in the run-up to the May elections. The media, the candidates and the public treated the results as gospel truth. And local pollsters basked in the glow, instead of warning everyone not to be carried away.

Restraint and modesty are the rule for reputable pollsters in the US because theirs is not a science. According to Kenneth Warren, in his book In Defense of Public Opinion Polling: “As hard as professional pollsters may try, it is practically impossible to uphold ‘by the book’ approved polling techniques every step of the way. Contamination will inevitably creep in at each and every polling stage… Even the very best pollsters can only try to limit the extent of the contamination.”

At a time when we in the private sector are badgering our government and private firms to adopt best practice in their respective realms as a means to make ourselves more competitive in the world, the possibility that our polling firms are cutting corners is most disturbing. It’s one more black eye to add to the many already taken by the country.

The other side of the coin

It’s possible that local survey firms, particularly SWS and Pulse Asia, have something to say in response to Kit Tatad. I welcome their rebuttals. A friend of mine in the US, who is a noted pollster, said the following:

“The reason they do door-to-door is not to cut corners — it’s to be accurate, since many homes don’t have a landline. And they use this method all over the world. Just this year I used it in Central Europe and South America — there were phone options but they were inferior, so I opted for door-to-door. That’s also the method I used in two countries in Southeast Asia, and it has always been accurate. In the US, Yankelovich has continued to use door-to-door for some projects because they felt it was superior for some applications, like long interviews. I think in the last couple of years they may have stopped, but not because of quality — because of the safety of their interviewers.”

Meanwhile, let’s all treat the poll surveys floating around with a grain of salt. And let’s enjoy the fact that the contest is not over. Teodoro, Gordon, et al. have reason to persevere.

Tatad replies to John Nery’s Inquirer Column

March 1, 2010

Dear John Nery,

This refers to your column of Feb. 23. Regardless of what it says, it is the first free advertising I have received since I became a senatorial candidate on Dec. 1, 2009. But for my hectic campaign schedule, I would have thanked you sooner. Still, certain points need to be clarified.

On Feb. 17 at Café Fernandina, I presented a paper on our “fatally flawed political surveys.” It was an updated recap of what I had been saying since the pre-campaign polling began, long before I became a candidate, long before anyone ever filed a certificate of candidacy for the May elections, long before the official campaign rolled off.

These were some of my points: 1) that our local pollsters have been using quota sampling and face-to-face interviewing long after these were junked by reputable pollsters in the US, where opinion polling originated; 2) that these have produced “unrepresentative samples” that could not possibly yield any good results; 3) that the basic information about each survey—who sponsored it, who conducted it, what samples were used, what questions were asked, in what order they were asked, what the margin of error is, etc.—which should be published with every survey result, has never been published; 4) that politicians are allowed to ask their own questions in these surveys for P100,000 per question, on top of a P300,000 subscription fee; 5) that the media have routinely published the results without any critical analysis; 6) that the surveys have shaped voter preferences, even without further inputs.

My presentation offered more than enough room for an intelligent debate, in case of disagreement. Yet, instead of pointing out any errors in my brief, you chose to aim at my person, which is far from perfect, even without the added burden of imaginary misadventures and questionable quotes. Sad, to say the least.

There are at least 30 countries in the world today—including strong democracies like Italy and Canada—where one may not publish the results of a political survey within a certain period before an election, unless all the basic information about the survey is also published. In the US, newspapers have to satisfy themselves on some 20 questions (available online) before publishing any survey results. And the most reputable pollsters maintain that no preelection (or pre-campaign) survey may be taken at face value to predict the outcome of any election.

Some outstanding examples:

1) In 1936, the Literary Digest, then the leading US pollster, predicted that Alf Landon would defeat President Franklin Delano Roosevelt, who was running for a second term. FDR won, and the Digest folded not long thereafter.

2) In 1948, America’s leading pollsters—George Gallup, Archibald Crossley and Elmo Roper—predicted that President Harry Truman would be overwhelmed by Thomas Dewey. Thus the morning-after headline screamed: “Dewey Defeats Truman.” But Truman won. The pollsters were later investigated by the US Congress and the Social Science Research Council.

3) On January 8, 2008, eleven pollsters predicted that Barack Obama would win the Democratic primary in New Hampshire. But Hillary Clinton won.

4) Among the Republicans, Rudy Giuliani was generally touted as the frontrunner. But in November 2007, a special Gallup poll showed that no Republican candidate had more than 5 percent of the vote.

5) On May 11, 2004, in Metro Manila, an SWS exit poll showed President Gloria Macapagal Arroyo leading Fernando Poe Jr. 31 percent to 23 percent. The official count, however, gave FPJ 36.67 percent of the votes to GMA’s 26.46 percent. In principle no exit poll should make such a mistake.

SWS was never investigated by any council or Congress. Nor has SWS ever publicly apologized or explained why it erred. Was that error so trivial or is public memory so short that SWS should once again be polling voter preferences for any election, as though its credibility had never been tarnished?

Time and space do not permit me to go on. But I hope you will be gracious enough in the spirit of fairness to give this letter the same space you gave yours. Thank you very much.

Tatad: Local pollsters use discredited techniques

March 1, 2010

BY DAN MARIANO, Manila Times, Monday, 22 February 2010

When a candidate complains about survey results, people usually conclude that he must be a cellar-dweller in the opinion polls. And former Sen. Francisco “Kit” Tatad could be regarded as one such disgruntled aspirant.

A survey done by Social Weather Stations (SWS) earlier this month, for instance, placed Tatad at a “significant distance” from the leading pack of senatorial candidates. The casual observer is thus tempted to dismiss his complaints against surveys in general as just another case of “sour grapes.”

Tatad, however, is not your typical Aesopian fox badmouthing the fruit that is beyond his reach. What he presents is an obviously well-researched critique that should give the public—especially the news media—pause.

Surveys do tend to influence, if not how people vote, then certainly how they size up candidates.

Aspirants who do well at the start of the race tend to get more attention—and conceivably more financial support from quarters that see campaign contributions as wagers on future political accommodation and favors.

As experience tells us, this does not exactly enhance the democracy that we have been trying to build for the past several decades. Instead of making informed choices and wise decisions, the bulk of voters come under the deceptive spell of popularity.

During a media forum last week, Tatad described local surveys as “fatally flawed.” He said the outfits that conduct surveys in the Philippines use misleading methodologies, which have long been discredited in more advanced democracies.

“Local pollsters have used methodologies and techniques that are flawed and discredited, and have long been discarded in the US, where public opinion polling was invented and turned into a billion-dollar industry,” Tatad said at the Fernandina media forum last week.

These “flawed” methodologies include: face-to-face interviewing, quota and cluster sampling, loaded and lengthy questionnaires, trial heat polls and pick 3 polling, which are all stated in the methodologies disclosed by the pollsters.

Face-to-face interviewing is the standard method used by pollsters—such as SWS and Pulse Asia—for eliciting responses from survey participants. Respondents are tracked door-to-door and interviewed by the pollsters’ field personnel. They are asked to respond to the preset questionnaire and shown pictures of candidates.

In the past, personal or face-to-face interviewing was viewed as an appropriate method for conducting opinion surveys because it ostensibly allowed the pollster to select the “right” respondent to be interviewed.

After major flops, however—notably, the erroneous forecast of victory for New York Gov. Thomas E. Dewey over incumbent President Harry S. Truman in the 1948 US presidential race—this survey method was discarded, Tatad recalled.

Reputable pollsters in the United States have now totally abandoned face-to-face interviewing, he added.

Tatad cited experts Chava Frankfort-Nachmias and David Nachmias, who in Research Methods in the Social Sciences wrote: “The very flexibility that is the interviewer’s chief advantage leaves room for the interviewer’s personal influence and bias.”

Tatad also quoted pollster Kenneth Warren who in his book, In Defense of Public Opinion Polling, wrote: “The cons of door-to-door interviews far outweigh the pros . . . Because of the sensitivity or personal nature of some questions, interviewers, because they were placed in face-to-face situations, have admitted that they sometimes guessed or fudged responses . . . These problems are a major source of bias in personal interviews, causing significant contamination of the poll data.”

These methodological and practical problems, according to Warren, have doomed face-to-face interviews.

By 1980, nobody in the United States wanted to pay for surveys of this type, which were “fatally flawed and grossly inaccurate anyway.”

But as Tatad noted, this “seems to have had no persuasive effect on our local pollsters.”

Another glaring weakness in local surveys, according to Tatad, is the extensive and general use of quota sampling to create a supposedly “representative sample” of the Philippine population.

In quota sampling, survey respondents are picked from different types of people—such as by age, sex, religion, income—and various predetermined areas, like region of the country, as well as urban or rural.

“This method is the most familiar form of non-probability sampling. It is supposed to mirror the same proportions in the targeted survey populations, but doesn’t,” Tatad said.
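The distinction Tatad draws can be sketched in a few lines of code. This is a minimal illustration, not anyone’s actual field procedure: the population, regions, and quota figures below are invented for the example. The key difference is that under quota sampling the interviewer decides which individuals fill each quota, while under probability sampling every member of the frame has the same known chance of selection.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Invented population of 10,000 people tagged with a region.
population = [
    {"id": i, "region": random.choice(["NCR", "Luzon", "Visayas", "Mindanao"])}
    for i in range(10_000)
]

# Quota sampling: fixed per-stratum quotas are filled with whoever is
# easiest to reach -- the discretionary step that lets selection bias in.
quotas = {"NCR": 300, "Luzon": 600, "Visayas": 450, "Mindanao": 450}
quota_sample = []
for region, quota in quotas.items():
    candidates = [p for p in population if p["region"] == region]
    quota_sample.extend(candidates[:quota])  # first-come, not random

# Probability (simple random) sampling: equal, known selection chances,
# so the sampling error can actually be quantified.
random_sample = random.sample(population, 1800)

print(len(quota_sample), len(random_sample))  # both samples have n = 1,800
```

Both methods produce a sample of the same size; the difference is entirely in how the names get into it, which is why the quota sample’s error cannot be computed while the random sample’s can.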

Quota sampling, he added, proved to be an earthshaking failure in 1948 after three leading US pollsters—Gallup, Roper and Crossley—erroneously called the US presidential election in favor of Dewey instead of Truman.

In the United Kingdom, where quota sampling persisted, it was blamed for the pollsters’ failure to predict Prime Minister John Major’s victory in 1992.

Nowadays, reputable US pollsters rely almost exclusively on probability random sampling to create a “representative sample,” Tatad said.

“Why then do local pollsters continue to use quota sampling and face-to-face interviewing for their surveys?” Tatad asked. “Why haven’t they adopted probability random sampling, which has protected US opinion polls from using contaminated data?”

Tatad also said that local pollsters have ignored standards for professional and ethical practice of public opinion polling that elsewhere are regarded as sacred by polling associations and reputable pollsters.

“Professional standards are virtually nonexistent in the local opinion polling industry,” said Tatad, who is seeking a return to the Senate.

“No law regulating the conduct of opinion polling, and no professional association of pollsters either to set and enforce standards of conduct and standards of disclosure and ensure the reliability and validity of survey results,” he added.

Tatad further said that the news media have been an unsuspecting purveyor of dubious findings to the detriment of the election campaign and the public.

“The public would have had a better appreciation and understanding of public opinion polling had the media been a little more critical and vigilant,” he said.

Needless to say, Tatad’s critique requires a response of similar detail and authority from SWS, Pulse Asia and other local pollsters. Dismissing it as mere “sour grapes” will no longer suffice.