The Likely Voter Cutoff Model: What Is So Wrong About Getting It Right?

Jonathan D. Simon1

March 17, 2011


 

Logic tells us, and experience confirms, that political pollsters stay in business and prosper by predicting election outcomes accurately. Pollsters are now publicly ranked by various scorekeepers (see, e.g., the Fordham University 2008 ranking: http://www.fordham.edu/campus_resources/enewsroom/archives/archive_1453.asp) according to how brilliantly close or embarrassingly far off they turn out to be when the returns come in. A “Certificate of Methodological Purity” may make a nice wall ornament, but matters not at all when it comes to success within the highly competitive polling profession.

If election returns in competitive races were being systematically manipulated in one direction over a period of several biennial elections, we would expect pollsters to make the methodological adjustments necessary to match those returns. Indeed, it would be nothing short of professional suicide not to make those adjustments and turn whatever methodological handsprings were required to continue “getting elections right.”

In the computerized election era—where virtually every aspect of the vote counting process is privatized and concealed; where study after study, from Princeton to the GAO, has concluded that the vote counting computers are extremely vulnerable to manipulation; and where statistical analyses pointing to such manipulation have been reflexively dismissed, no matter how compelling—it may be that the methodological contortions required for pollsters to “get elections right” constitute the most powerful evidence that computer-based election fraud and theft are systemic and rampant.

Enter the Likely Voter Cutoff Model, or LVCM for short. Introduced by Gallup about 10 years ago (after Gallup came under the control of a right-wing Christianist heir), the LVCM has gathered adherents until it is now all but universally employed, albeit with certain fine-tuning variations. The LVCM uses a series of screening questions—about past voting history, residential stability, intention of voting, and the like—to qualify and disqualify respondents from the sample. The problem with surveying the population at large, or even registered voters, without screening for likelihood of voting is obvious: you wind up surveying a significant number of respondents whose answers register on the survey but who then don’t vote. If this didn’t-vote constituency has a partisan slant, it throws off the poll relative to the election results—generally to the left, since as you move to the right on the political spectrum the likelihood of voting rises.

But the problem with the LVCM as a corrective is that it far overshoots the mark: it eliminates from the sample individuals who will in fact cast a vote, and the respondents/voters so eliminated are acknowledged by all to be, as a group, to the left of those who remain, skewing the sample to the right. (A sounder methodology, employed for a time by the NYTimes/CBS poll, solves the participation problem by down-weighting, rather than eliminating, the responses of interviewees less likely to vote.) So the LVCM—which disproportionately eliminates members of the Democratic constituency, including many who will in fact go on to cast a vote, by falsely assigning them a zero percent chance of voting—should get honestly tabulated elections consistently wrong. It should over-predict the Republican/Right vote and under-predict the Democratic/Left vote, most often by an outcome-determinative 5-8% in competitive elections.
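To make the contrast concrete, here is a minimal sketch in Python of the arithmetic involved. Every number in it (the group sizes, turnout probabilities, within-group margins, and the cutoff threshold) is invented for illustration; these are assumptions, not polling data, and the sketch is not any pollster's actual model.

```python
# Illustration only: hypothetical numbers, not real polling data.
# Two groups of registered-voter respondents, distinguished by how
# likely each member is to actually cast a ballot.
groups = [
    # (respondents, turnout probability, R-minus-D margin in points)
    (600, 0.9,  +4.0),   # high-propensity respondents, leaning Republican
    (400, 0.5, -20.0),   # low-propensity respondents, leaning Democratic
]

def weighted_margin(rows):
    """Turnout-weighted R-minus-D margin across (weight, margin) rows."""
    total = sum(w for w, _ in rows)
    return sum(w * m for w, m in rows) / total

# Down-weighting (the NYTimes/CBS-style approach described above):
# keep every respondent, weight each group by its probability of voting.
downweighted = weighted_margin([(n * p, m) for n, p, m in groups])

# Hard cutoff (LVCM-style): the low-propensity group is treated as having
# a zero percent chance of voting and is dropped from the sample entirely.
cutoff = weighted_margin([(n, m) for n, p, m in groups if p >= 0.6])

print(f"Down-weighted poll margin: {downweighted:+.1f}")  # roughly D +2.5
print(f"Hard-cutoff poll margin:   {cutoff:+.1f}")        # R +4.0
# With these assumed numbers the cutoff sample reads about 6.5 points more
# Republican than the down-weighted estimate of the same interviews, because
# real but lower-propensity voters have been removed rather than down-weighted.
```

The point of the comparison is that both estimates are built from exactly the same interviews; only the treatment of the lower-propensity respondents differs, and that difference alone accounts for the rightward gap.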

Instead it performs brilliantly and has therefore been universally adopted by pollsters, no questions asked, not just in the run-up to elections as in the past, but now all year round, setting expectations not just for electoral outcomes but for broad political trends, contributing to perceptions of political mojo and driving political dynamics—rightward, of course. In fact, the most “successful” variants of the LVCM are now the ones that are strictest in limiting participation, including those that eliminate all respondents who cannot attest that they have voted in the three preceding biennial elections, cutting off a slew of young, poor, and transient voters. The impact of this exclusion in 2008 should have been particularly devastating, given the millions of new voters turned out by the Democrats. Instead the LVCM got 2008 just about right (we note in passing that an extraordinary 11th-hour Republican freefall, triggered by the collapse of Lehman Bros. and the subsequent economic crash, produced an Obama victory in the face of a “red shift”—votecounts more Republican and less Democratic than the exit polls—even greater than that measured in 2004). Pollster Scott Rasmussen, formerly a paid consultant to the 2004 Bush campaign, employs the LVCM most stringently to winnow the sample, eliminating more would-be Democratic voters than most if not all of his professional colleagues. A quick survey of his polls at www.rasmussenreports.com shows a nation unrecognizably canted to the right, and yet Rasmussen Reports was ranked “the most accurate national polling firm in the 2008 election” and close to the top in 2004 and 2006.

There is something very wrong with this picture, and very basic logic tells us that the methodological contortion known as the LVCM can get election results so consistently right only if those election results are consistently wrong—that is, shifted to the right in the darkness of cyberspace.

A moment to let that sink in, before adding that, if the LVCM shift is not enough to distort the picture and catch up with the “red-shifted” votecounts, polling (and exit polling) samples are also generally weighted by partisanship or Party ID. The problem with this is that these Party ID numbers are generally drawn from prior elections’ final exit polls—exit polls that were “adjusted” in virtually every case rightward to conform to votecounts that were to the right of the actual exit polls, the unshakable assumption being that the votecounts are gospel and the exit polls therefore wrong. In the process of “adjustment,” also known as “forcing,” the demographics (including Party ID, age, race, etc.) are dragged along for the ride and shift to the right. These then become the new benchmarks and baselines for current polling, shifting the samples to the right and enabling prior election manipulations to mask forensic/statistical evidence of current and future election manipulations.
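A simplified, purely hypothetical sketch of that carry-forward mechanism follows. The exit-poll shares, official counts, and Party ID breakdowns below are invented for illustration, and the “forcing” step is modeled simply as reweighting respondents by reported vote so that the candidate totals match the official count; it is a sketch of the idea, not of any network's actual adjustment procedure.

```python
# Illustration only: hypothetical exit-poll numbers, not real data.
# Respondents are grouped by reported vote; each group carries its own
# Party ID composition (shares of Dem, Rep, and Ind identifiers).

exit_poll_vote_share = {"D_candidate": 0.53, "R_candidate": 0.47}  # raw exit poll
official_count       = {"D_candidate": 0.49, "R_candidate": 0.51}  # reported votecount

party_id_by_vote = {
    "D_candidate": {"Dem": 0.70, "Rep": 0.20, "Ind": 0.10},
    "R_candidate": {"Dem": 0.15, "Rep": 0.75, "Ind": 0.10},
}

def party_id_mix(vote_share):
    """Overall Party ID composition implied by a given candidate-vote mix."""
    mix = {"Dem": 0.0, "Rep": 0.0, "Ind": 0.0}
    for candidate, share in vote_share.items():
        for pid, frac in party_id_by_vote[candidate].items():
            mix[pid] += share * frac
    return mix

raw_mix    = party_id_mix(exit_poll_vote_share)   # what the interviewers found
forced_mix = party_id_mix(official_count)         # after "forcing" to the count

print("Party ID, raw exit poll: ", {k: round(v, 3) for k, v in raw_mix.items()})
print("Party ID, after forcing: ", {k: round(v, 3) for k, v in forced_mix.items()})
# With these assumed numbers, forcing moves the Party ID mix roughly two
# points toward Republican identifiers and two points away from Democratic
# ones, even though not a single interview has changed.
```

Once that adjusted Party ID mix is treated as the benchmark for weighting the next cycle's samples, the shift is carried forward, which is the masking effect described above.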

To sum up, we have a right-shifting tunable fudge factor in the LVCM, now universally employed with great success to predict electoral outcomes, particularly when tuned to its highest degree of distortion. And we have the incorporation of past election manipulations into current polling samples, again pushing the results to the right. These methodological contortions and distortions could not be successful—in fact they would put the pollsters quickly out of business—absent a consistent concomitant distortion in the votecounts in competitive races.

Since polls and election outcomes are, after some shaky years following the advent of computerized vote counting, now in close agreement (though still not exit polls, which are weighted to false demographics but of course do not employ the LVCM, and therefore still come in consistently to the left of votecounts until they are “adjusted” rightward into conformity), everything looks just fine. But it is a consistency brought about by the polling profession’s imperative to find a way to mirror and predict votecounts (imagine, if you will, the professional fate of a pollster employing undistorted methodology, who insisted that his or her polls were right and both the official votecounts and all the other pollsters wrong!). It is a consistency, achieved without malice on the part of the pollsters, which almost certainly conceals the most horrific crime, with the most devastating consequences, of our lifetimes.

1 Jonathan D. Simon, JD, is Executive Director of Election Defense Alliance.
