
Frequently Asked Questions About Exit Polls
Answered by Steve Freeman and ElectionIntegrity.org
Original source: http://www.electionintegrity.org/about/faq.aspx
Why should we care about exit poll results?
When properly conducted, exit polls should predict election results with a high degree of reliability. Unlike telephone opinion polls, which ask people which candidate they intend to vote for, exit polls ask voters how they actually voted as they leave the polling place.

Are exit poll data better than other polling data?
Exit polls, properly conducted, can remove the main sources of error in pre-election surveys: there is no need to guess who will actually turn out, and no opinions can change between the interview and the vote. The difference between conducting a pre-election telephone poll and conducting an Election Day exit poll is like the difference between asking people what they intend to do and asking them what they have just done.

How do exit polls work?
There are two basic stages of an exit poll. The exit pollster begins by choosing precincts that serve the purpose of the poll. For example, if a pollster wants to cost-effectively project a winner, he or she may select “barometer” precincts which have effectively predicted past election winners. The second stage involves the surveys within precincts. On Election Day, one or two interviewers report to each selected precinct and ask a sample of voters, as they leave the polling place, to fill out a brief questionnaire. Voting preferences of absentee and early voters can be accounted for with telephone polls.

The 2004 US Presidential Election Exit Poll Discrepancy

What were the results of the 2004 US Presidential election exit poll?
The exit polls indicated a seven-percentage-point Kerry victory. According to the official count, Bush won by 3,000,000 votes. Had votes been cast as voters leaving the polling place said they voted, Kerry would have won by 6,000,000 votes nationwide and would have had a decisive electoral victory.

Why was the exit poll surprising? Seven percentage points doesn't sound like that much.

The exit poll figures reported by the pollsters who conducted the polls are different from yours. Why?
Several analyses of the exit polls have been conducted by the pollsters, myself, and many others, using different data and assumptions. But in order to understand the discrepancy between the exit-poll survey results and the official count, the best measure is the simplest rendering of the discrepancy within the precinct itself. Within-Precinct Disparity (WPD) is the difference between the way people said they voted as they exited the polling place and the official count in those same precincts. This is the simplest rendering of the data. As such, it is different from the many more complex analyses that we and others have performed, and it is not how Mitofsky and Lenski analyze the data. But in its simplicity, it is revealing and powerful.

Election Integrity Research and Analysis

Could the discrepancy between the exit poll results and the official count have been due to chance or random error?
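To make the Within-Precinct Disparity concrete, here is a minimal sketch in Python. The precinct numbers are invented for illustration, and the sign convention (positive means the exit poll was more favorable to Kerry than the official count) is an assumption, not taken from the pollsters' own definitions.

    # Minimal sketch of the Within-Precinct Disparity (WPD) idea: compare the
    # Kerry-Bush margin among exit-poll respondents in a precinct with the
    # Kerry-Bush margin in that precinct's official count.
    # All numbers below are hypothetical.

    def margin(kerry, bush):
        """Kerry minus Bush, as a percentage-point share of the two-party total."""
        total = kerry + bush
        return 100.0 * (kerry - bush) / total

    # One hypothetical precinct: what respondents said vs. what the count reported.
    exit_poll = {"kerry": 130, "bush": 120}
    official = {"kerry": 1180, "bush": 1220}

    wpd = margin(exit_poll["kerry"], exit_poll["bush"]) - margin(official["kerry"], official["bush"])
    print(f"Within-precinct disparity: {wpd:+.1f} percentage points")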
Are we saying that the discrepancy itself means that Kerry must have really won the election?
No. The evidence that casts doubt on the election results comes from diverse sources. The exit polls have never been cited as primary evidence of fraud, but only as a reason to take that primary evidence to heart. US Representative John Conyers, the ranking member of the House Judiciary Committee and author of the foreword to our book, says the discrepancy is "but one indicia or warning that something may have gone wrong -- either with the polling or with the election." The discrepancy is an undisputed fact. The question is "What caused it?"
There are only two possible explanations for the discrepancy: 1) far more Kerry voters than Bush voters agreed to fill out the questionnaires offered by pollsters, or 2) the votes were not counted as cast. In our book, we examine these two possible scenarios as thoroughly as possible.
The official NEP explanation that more Kerry voters than Bush voters agreed to fill out the questionnaires seems plausible. Why question this conclusion?
It is not a conclusion, but rather a presumption.
The pollsters merely asserted that this must be true without evidence or even a theory as to why it may be the case. The limited data that the pollsters present not only fail to substantiate the presumption, they undermine it entirely.
All independent indicators of poll participation suggest not lower but higher response rates among Bush voters. One of these is that response rates were higher, not lower, in precincts where Bush voters predominated than in precincts where Kerry voters predominated. In precincts where Bush got 80 percent or more of the vote, an average of 56 percent of the people who were approached agreed to take part in the poll, while in precincts where Kerry got 80 percent or more of the vote, a lower average of 53 percent were willing to be surveyed.
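As an illustration of the kind of check this involves, here is a minimal sketch in Python of comparing average completion rates between heavily Bush and heavily Kerry precincts. The precinct-level figures in the code are made up; the 56 percent and 53 percent averages cited above come from the pollsters' own precinct data, not from this sketch.

    # Sketch: compare average exit-poll completion rates in strongly partisan
    # precincts. Each tuple is (Bush share of official vote, completion rate
    # among voters approached). The values are invented placeholders.
    precincts = [
        (0.85, 0.57), (0.82, 0.55), (0.90, 0.56),  # hypothetical Bush strongholds
        (0.12, 0.52), (0.15, 0.54), (0.10, 0.53),  # hypothetical Kerry strongholds
    ]

    bush_strongholds = [rate for share, rate in precincts if share >= 0.80]
    kerry_strongholds = [rate for share, rate in precincts if share <= 0.20]

    def mean(values):
        return sum(values) / len(values)

    print(f"Mean completion rate where Bush >= 80%:  {mean(bush_strongholds):.1%}")
    print(f"Mean completion rate where Kerry >= 80%: {mean(kerry_strongholds):.1%}")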
How, then, do the exit polls indicate fraud?
There are more than a dozen indicators. I’ll mention just two of them.
First, there is no reason why exit polls should be more or less accurate in key states, but battleground status is a key corruption variable: if you are going to steal an election, you go after votes most vigorously where they are most needed. The discrepancy is significantly higher in the 11 swing states than in other states, and significantly higher yet in the three critical battleground states of Ohio, Florida, and Pennsylvania.
Second, in light of the charges that the 2000 election was not legitimate, the Bush/Cheney campaign would have wanted to prevail in the popular vote. If fraud were afoot, it would make sense that the president's men would steal votes in their strongholds, where the likelihood of detection is small. Lo and behold, the report provides data that strongly bolster this theory. In those precincts that went at least 80 percent for Bush, the average within-precinct error (WPE), the numerical difference between the exit poll margin and the official-count margin, was a whopping 10.0 percentage points. That means that in Bush strongholds, Kerry, on average, received only about two-thirds of the votes that the exit polls predicted. In contrast, in Kerry strongholds, exit polls matched the official count almost exactly (an average WPE of 0.3).
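To see how a 10-point WPE yields the "about two-thirds" figure, here is a short worked example in Python. The 15 percent exit-poll share for Kerry in an 80-percent-plus Bush precinct is an assumed round number, not a figure from the report; the point is that a 10-point error on the margin corresponds to roughly a 5-point drop in Kerry's share.

    # Worked example (hypothetical round numbers): a 10-point error on the
    # Kerry-Bush margin splits into roughly a 5-point drop for Kerry and a
    # 5-point gain for Bush.
    exit_poll_kerry_share = 0.15  # assumed exit-poll prediction in a Bush stronghold
    wpe_margin = 0.10             # average WPE reported for Bush strongholds

    official_kerry_share = exit_poll_kerry_share - wpe_margin / 2
    ratio = official_kerry_share / exit_poll_kerry_share

    print(f"Official Kerry share: {official_kerry_share:.0%}")                # about 10%
    print(f"Fraction of predicted Kerry vote actually counted: {ratio:.2f}")  # about 0.67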
Criticism and Validation of Our Research
Have your papers been peer reviewed?
Yes. There is no formal mechanism for papers like this (nor is there any good forum in which to publish them), but when I leave a "t" uncrossed in these papers, people write to the dean and demand my dismissal (actually, they do that anyway). The conclusions of the initial paper, in fact, have been accepted, and the "debate" has moved on.
The US Count Votes paper which I co-authored with 11 mathematicians, statisticians, and other social scientists was extensively peer reviewed.
Has evidence come to light since the publication of these pieces which would explain this exit poll discrepancy?
No such evidence has come to light. All indications are that, if the primary exit poll data were made available, they would conclusively show count corruption and identify where it occurred. Unless there is great public pressure or successful legal action, none of this primary exit poll data will be released.
Have there been any rebuttals to your analyses?
There are many "rebuttals." They come from every angle you can think of, and many you could never think of. They are easy to find on the web. Here are two examples:
http://www.counterpunch.org/landes03032005.html.
Intended to sow confusion? Counterpunch is supposedly one of the leading "alternative" media forums.
http://elections.ssrc.org/research/ExitPollReport031005.pdf.
This is a report by reputable academics at top universities, sponsored by a reputable foundation. Its purpose seems to be to justify (1) that exit poll results should never be released until they have been "corrected" to match the vote count, and (2) that the raw, uncorrected data should never be released at all, for methodological reasons that are not even sound methodology. (Many of us are appalled by the (lack of) election reporting, but academic commentary has been no better.)
What do the pollsters say?
Incredibly, Warren Mitofsky, the lead exit pollster, justified ignoring the vast preponderance of publicly available evidence we have presented by claiming that data the pollsters refuse to share “kill the fraud argument.”
The retort is a triple outrage. First, there is the dismissal of public data in favor of secret data. Second, this supposedly conclusive analysis is the work of an entrepreneur and a doctoral student hired by Mitofsky; no independent researchers or serious scholars have ever seen the data or the methods by which they reached this conclusion. Third, the data remain secret.
If there were indications of fraud, wouldn’t the pollsters be the first to say so? Wouldn't they want to defend their methods?
No, the last thing that Edison Media Research and Mitofsky International want to do is to imply fraud.
By minimizing the discrepancy and attributing it to polling factors, they were re-awarded one of the most prestigious and lucrative contracts in the polling world. The incentive of Warren Mitofsky was, in his own words, to "make this thing go away."
Lack of Transparency in the National Election Pool Exit Poll
Have you been able to obtain the "uncorrected" data from the polling consortium?
The data needed to fully investigate the integrity of the election have never been made available to independent researchers. Rather, they remain the property of the NEP consortium that commissioned the exit polls, which says the data cannot be released. Some data have been made available, but not the data that could be used to verify the validity of the election. In the future, it is unlikely that any media poll will even let us know about an exit poll discrepancy. (For this reason and more, we have undertaken to develop an independent exit poll.)
Why won't they release this data?
NEP pollsters claim that release could violate confidentiality agreements, i.e., that under some extreme circumstances one conceivably might be able to figure out how one unusual individual in an unusually homogeneous precinct may have said he or she voted.
The pollsters say they are protecting respondent anonymity – what’s wrong with that?
Protecting respondent anonymity is, of course, proper and ethical. It is highly improper and unethical, however, to use it as a pretext for failing to comply with the more fundamental ethical obligations of open data and protecting democracy.
The NEP claim of protecting respondent anonymity is a crock, for at least six reasons:
1. It is unclear that such identification would, in fact, be a realistic possibility.
2. Why would any researcher ever go to the trouble of doing this? Our intention, clearly, is to detect fraud, not to determine how a lone, obscure voter might have said he or she voted.
3. Even if, in extremely remote circumstances, someone might think he or she could identify a voter, what harm could it cause? Yet NEP would have us accept that a small, extremely hypothetical risk that a few individuals' confidentiality might be compromised, causing no apparent harm, outweighs the importance of an independent check on our nation's voting procedures and, very likely, evidence of a stolen election.
Even if this doesn’t persuade you, consider that:
4. Confidentiality could not be a concern in the vast majority of precincts that have even minimal demographic diversity. Why not release precinct identification for these data?
5. In those few precincts where some individual identification might conceivably be possible, NEP could simply have blurred the demographic data (a sketch of what that might look like appears at the end of this answer). Indeed, given the choice between precinct identifiers, which are critical to the investigation of fraud, and demographic data, not only is the relative importance plain as day, but demographic data make no sense at all on their own. After all, what is the point of trying to explain why voters purportedly voted as they did when we cannot even say how they voted?
6. Finally, consider that NEP denied this data to highly qualified and experienced independent academics from the nation's leading research institutions, many of whom have experience working with sensitive and national-security data, and who offered to work only onsite and to reimburse NEP for any additional costs incurred. Yet NEP has given the data to two individuals whose only qualifications seem to be an ability to promote the Mitofsky perspective:
• Elizabeth Liddle, a British doctoral student in an unrelated field, who has argued ubiquitously (4,000 posts, many of them very long, in one year on [the web discussion board] Democratic Underground and similar numbers on other sites) and extensively that the data, which she cannot share, indicate no fraud.
• Steve Hertzberg, a man with no record at all of either research or maintaining confidentiality, and whose background is not in research, polling, or political science, but in direct marketing.
It is clear that NEP’s primary concern is not respondent confidentiality, but rather control over the findings.
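Regarding point 5 above, here is a minimal sketch in Python of what "blurring" demographic data might look like while preserving the precinct identifiers needed to check the count. The field names, categories, and respondent record are hypothetical; NEP has described no such procedure, and this is only one way it could be done.

    # Sketch: coarsen or withhold demographic fields so no individual can be
    # singled out in a homogeneous precinct, while keeping the precinct identifier
    # and the reported vote, which are what an independent check of the count needs.
    def blur_respondent(record):
        """Return a copy of an exit-poll response with demographics coarsened."""
        blurred = dict(record)
        age = blurred.pop("age")
        blurred["age_band"] = "18-44" if age < 45 else "45+"  # wide age bands
        for field in ("race", "religion", "income"):          # drop identifying detail
            blurred[field] = "withheld"
        return blurred

    # Hypothetical respondent record.
    respondent = {"precinct_id": "OH-1234", "vote": "Kerry",
                  "age": 37, "race": "white", "religion": "none", "income": "40-50k"}
    print(blur_respondent(respondent))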