How to Spot a Bogus Poll

by Brad Edmondson

October 1996

American Demographics


Framing the Question

Opinion surveys can look convincing and be completely worthless. But asking four simple questions of any poll can separate the good numbers from the trash.

Politicians use opinion polls as verbal weapons in campaign ads. Journalists use them as props to liven up infotainment shows. Executives are more likely to pay attention to polls when the numbers support their decisions. But this isn't how polls are meant to be used. Opinion polls can be a good way to learn about the views Americans hold on important subjects, but only if you know how to cut through the contradictions and confusion.

Conducting surveys is difficult. It is especially difficult to take a meaningful survey of public opinion, because opinion is a subjective thing that can change rapidly from day to day. Poll questions sometimes produce conflicting or meaningless results, even when they are carefully written and presented by professional interviewers to scientifically chosen samples. That's why the best pollsters sweat the details on the order and wording of questions, and the way data are coded, analyzed, and tabulated.

Less scrupulous pollsters can also set up surveys that deliberately shade the truth. They do this by acting like trial lawyers: they ask leading questions, or they restrict their questions to people likely to give the desired response. In fact, pollsters can use dozens of obscure tricks to intentionally push the results of a survey in the desired direction. So the next time a poker-faced person tries to give you the latest news about how Americans feel, ask some pointed questions of your own.

Did You Ask the Right People?

In 1936, the editors of Literary Digest conducted a Presidential preference poll of more than 2 million Americans. The poll predicted that the Republican candidate, Alf Landon, would defeat Franklin Roosevelt. Landon's loss made the Digest history's most famous victim of sample bias.

The Digest mailed more than 10 million ballots to households listed in telephone books and automobile registration records. This method might create a relatively representative sample today, but in 1936, it substantially biased the sample toward those affluent enough to own cars and phones. The magazine's disgrace was made complete by young poll-takers like George Gallup and Elmo Roper, who used samples of a few thousand people to predict a Roosevelt win. Gallup and Roper carefully chose their samples to reflect a demographic cross-section of Americans, just as they do today.

The most amazing thing about this story is that some journalists and businesses in the 1990s still make the mistakes the Literary Digest made 60 years ago. Any journalist with half a pencil knows that only a scientifically chosen survey sample will represent the country's opinions. But the temptation to take a biased poll is great if you have a tight deadline and a small budget, as many news organizations do.

The 2 million who responded to the Literary Digest poll in 1936 were even more likely than the total sample base to be wealthy and Republican, typifying a common survey problem: nonresponse bias. Even when you start out with a representative sample, you could end up with a biased one. This is a risk all pollsters take, but some particular methods lend themselves to greater error. For example, readers of women's magazines are frequently asked to fill out surveys on weighty subjects like crime and sexual behavior. Not only do such polls ignore the opinions of nonreaders, they are biased toward readers who take the trouble to fill out and return the questionnaire, usually at their own expense.

Television news and entertainment shows get into the act by posting toll-free or even toll numbers that viewers can call to "vote" on an issue. These samples are not only biased, they are prone to "ballot-stuffing" by enthusiasts. In other words, viewers who call 12 times get 12 votes.

Poll results based on "convenience" samples can be wildly misleading, even if the sample sizes are huge. A call-in poll conducted by a television network in 1983 asked: "Should the United Nations continue to be based in the United States?" About 185,000 calls were received. Two-thirds said that the U.N. should move. At the same time, the network conducted a random-sample poll of 1,000 people, and only 28 percent said the U.N. should move.

Between 1989 and 1995, the Pew Research Center for The People & The Press in Washington, D.C., monitored the public's interest in 480 major news stories. Almost half of Americans paid little or no attention to these stories, and only one in four followed the average story very closely. Stories about wars and disasters were followed most closely, while those about celebrity scandals and politics finished last. When Pat Buchanan announced that he was running for President in 1991, for example, only 7 percent of Americans paid close attention to the story.

Conflicts make news. When journalists are trying to liven up a boring political story, they need angry, well-informed citizens like a fish needs water. This is one reason why older men may be quoted more often than other groups. Those aged 50 and older are more likely than younger adults to follow news stories "very closely," according to the Center, and men are more likely than women to follow stories about war, business, sports, and politics.

In the last decade, angry white men have dominated media programs designed to give ordinary people a chance to speak out in public. Two-thirds of regular listeners to political talk radio programs are men, according to a 1996 poll taken by Roper Starch Worldwide for the Media Studies Center in New York City. Republicans outnumber Democrats three to one in the talk-radio audience, and 89 percent of listeners are white, compared with the national average for voters of 83 percent. Three in five regular listeners to political talk radio perceive a liberal bias in the mainstream media, compared with one in five nonlisteners.

A multitude of reputable surveys have shown that most Americans generally believe that the country is headed in the wrong direction and that political leaders can't be trusted. But those who respond to convenience polls and call in to talk shows probably don't speak for most Americans.

What's the Margin?

A statistician and two friends are hunting for deer. They spot a buck. Friend number one takes a shot and hits a tree five feet to the left of the animal. Friend number two fires and hits a tree five feet to the right. The statistician exclaims, "We got him!"

No matter how carefully a survey sample is chosen, there will still be some margin of error. If you selected ten different sets of 1,000 people using the same rules and asked each group the same question, the results would not be identical. The difference between the results is sampling error. Statisticians know that the error is equally likely to be above or below the true mark, and that larger samples have smaller margins of error if they are properly drawn. They are also able to estimate the margin of error, or the amount by which the result could be above or below the truth. Sampling error will always exist unless you survey every member of a population. If you do that, you have conducted a census.
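The thought experiment above is easy to run for yourself. In this minimal simulation (the 52 percent "true" opinion share is an invented number, not from the article), we draw ten independent samples of 1,000 people from the same population and see that the ten estimates disagree; that spread is sampling error.

```python
import random

# Hypothetical setup: 52% of a large population holds some opinion.
# Draw ten independent samples of 1,000 people, as the text describes.
random.seed(7)
TRUE_SHARE = 0.52

results = []
for _ in range(10):
    # Each respondent agrees with probability TRUE_SHARE.
    sample = [random.random() < TRUE_SHARE for _ in range(1000)]
    results.append(100 * sum(sample) / 1000)

print([round(r, 1) for r in results])
# The ten estimates cluster around 52% but are not all identical;
# the differences among them are sampling error.
```

Running this a few times with different seeds makes the point vivid: no single sample hits the true figure exactly, but the estimates scatter symmetrically around it.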

Sampling error is one reason why two professionally conducted polls can show different results and both be correct. For example, the CNN/USA Today/Gallup Poll of January 5-7, 1996, showed that the proportion of Americans who approved of President Clinton's performance had dropped to 42 percent, from 51 percent on December 15-18, 1995. Polls conducted that same week by Washington Post/ABC and New York Times/CBS showed that his approval rating was 53 percent and 50 percent, respectively. This made for a few wild days at the White House, until the next Gallup survey showed a sudden rebound.

Reputable surveys report a margin of error (usually 3 or 4 percentage points) at a particular confidence level, typically 95 percent. This means that 5 percent of the time, or 1 time in 20, the poll's result will fall outside the stated margin. The other 95 percent of the time, it is within 3 or 4 percentage points of the "truth." This sort of inevitable statistical fluctuation explains the blip in the January Gallup poll.
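Where do those 3 or 4 points come from? The article doesn't show the arithmetic, but the standard textbook formula for a simple random sample gives a 95 percent margin of error of about 1.96 times the square root of p(1-p)/n, which is largest when opinion splits 50-50:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p with
    sample size n, in percentage points. p=0.5 is the worst case."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000), 1))  # 1,000-person sample: ~3.1 points
print(round(margin_of_error(600), 1))   # 600-person sample:   ~4.0 points
```

This is why the typical 1,000-person national poll reports a margin of "about 3 points," and why halving the sample does not double the error: the margin shrinks only with the square root of the sample size.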

Which Came First?

The order in which questions are asked can have a big effect on the results. In late May 1996, the CNN/USA Today/Gallup poll reported that 55 percent of Americans believe taxes can be cut and the federal deficit reduced at the same time, compared with 39 percent who do not believe this. The same week, New York Times/CBS reported a dead heat of 46 percent who believe and 46 percent who do not. This variance was way beyond the margin of error. The questions were almost identical in their wording. But the order of questions in the Gallup poll may have biased the results, according to Poll Watch, a publication of the Pew Research Center.

In the CBS poll, questions before the tax cut/deficit question were not related to the subject. But Gallup first asked respondents if they favor a tax cut. Then it asked those who did if they would still favor it even if it meant no reduction in the deficit. Then it asked all respondents if they believed both could be done at the same time. By that point, "some of Gallup's interviewees may have felt invested in the idea of a tax cut," says Poll Watch.

Most people want to appear consistent to others and to be consistent in their own minds. When a pollster asks a series of related questions, this desire can lead people to take positions they might not have taken if they were asked only one question. Neither ordering produces an obviously "correct" response, but the results are different. One way to handle this problem is to rotate the order of questions. Then the degree of difference due to question order can be described and interpreted. But not everyone pays heed to such fine distinctions.
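In practice, the rotation described above can be as simple as shuffling the related questions independently for each respondent, then comparing results across orderings. A rough sketch (the questions here are invented for illustration):

```python
import random

# Hypothetical battery of related questions, echoing the tax-cut example.
QUESTIONS = [
    "Do you favor a tax cut?",
    "Do you favor reducing the federal deficit?",
    "Can both be done at the same time?",
]

def rotated_questionnaire():
    """Return the battery in an independently shuffled order for one
    respondent, so order effects average out across the sample and can
    be measured by tabulating answers separately for each ordering."""
    order = QUESTIONS[:]      # copy, so the master list stays fixed
    random.shuffle(order)
    return order
```

Because every ordering is asked of a random subsample, a pollster can report how much the answers shift with position, rather than silently baking one ordering's bias into the headline number.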

What Was the Question?

"Do you want union officials, in effect, to decide how many municipal employees you, the taxpayer, must support?" Well, do you? This question, taken from an actual survey, is obviously biased. The results might make good propaganda for an anti-union group, but they are totally bogus as a poll. So before you pass a survey finding on to others, or even believe it yourself, be sure to look at the actual question.

Question wording is extremely subtle. In the hours after President Clinton's November 27, 1995, speech announcing that 20,000 U.S. troops would be sent to Bosnia as part of a NATO peace-keeping mission, three major news organizations took reaction polls. CNN/USA Today/Gallup found that 46 percent of Americans favored Clinton's plan, while 40 percent were opposed. CBS found that only 33 percent were in favor, and 58 percent were opposed. ABC said that 39 percent were in favor, and 57 percent were opposed.

The CNN poll probably found more support because it did not mention that the U.S. was sending 20,000 troops, says Poll Watch. CBS and ABC gave respondents the chance to react to that substantial number, which drove down their approval. In addition, CBS described the troops' mission as "enforcing the peace agreement," while ABC and CNN described the troops as part of "an international peace-keeping force." CBS's harsher wording may have contributed to its respondents' harsher judgment of the Clinton decision.

Sometimes words are problematic because they are too vague. In April 1996, the Pew Research Center asked which Presidential candidate was best described by the phrase: "shares my values." By this measure, Clinton beat Dole by 47 percent to 37 percent. But when CBS and the New York Times asked whether each candidate "shares the moral values most Americans try to live by," 70 percent said that Dole did, but only 59 percent said so of Clinton.

One way to sharpen the meaning of a poll on "values" is to ask an open-ended question, such as: "What 'values' do you share with the candidate?" The long list of responses generated by such a question can be entered and coded to provide the sample's average definition of the term. This step obviously takes more time and money. It produces a survey that is more precise, but harder to explain. If you were a television reporter and you had 20 seconds to describe the question and give the result, what would you do?

These are some of the most common reasons why polls that appear to be authoritative are, in fact, total trash. Other pitfalls are described in Polls and Surveys: Understanding What They Tell Us, a layperson's guide by Norman M. Bradburn and Seymour Sudman (San Francisco: Jossey-Bass, Inc., 1988). The Pew Research Center's Poll Watch newsletter does an excellent job of spotting and explaining poll gaffes: for more information, call Pew at (202) 293-3126.

When you're presented with a new survey or a used car, it helps to ask a few key questions before you buy. But for all their flaws, surveys are essential to the work of politicians, journalists, and businesspeople. "Yes, there are too many bad polls," writes Richard Morin, director of polling for the Washington Post. There are also "too many polls that report what people think but not why they think it."

At the same time, with lots of competing polls, it's easier to see errors and rapid shifts in public opinion. "Polling is a robust methodology," writes Morin. "A lot of little things can go wrong and the final result can still be right, or at least close enough."

1996 Intertec Publishing - A PRIMEDIA Company