by Jonathan Simon
December 16, 2014
Any comparative forensic analysis is only as “good” as its baselines. In Landslide Denied—our archetypal post-election comparative forensics study, in which the “red shift” (the rightward disparity between exit poll and votecount results) was identified and measured—a critical component of the analysis was to establish that the exit poll respondents accurately represented the electorate. We employed a meta-analysis of multiple measures of the demographics and political leanings of the electorate to demonstrate that the exit polls in question had not “oversampled” or over-represented Democratic or left-leaning voters (in fact any inaccuracy turned out to be in the opposite direction), and therefore that those polls constituted a valid baseline against which to measure the red-shifted votecounts. In Fingerprints Of Election Theft, we went further and removed all issues of sample bias from the equation by conducting a separate poll in which we asked the same set of respondents how they had voted in at least one competitive and one noncompetitive contest on their ballot. The noncompetitive contests, being presumptively unsuitable targets for rigging, thus served as the baselines for the competitive contests, and the relative disparities could be compared without concern about any net partisan tendencies of the respondent group.
More recently we have commented on the feedback loop that develops between election results and polling/sampling methodologies, such that consistently and unidirectionally shifted votecounts trigger, in both pre-election and exit polls, methodological adaptations that mirror those shifts. Approaching E2014, we observed that the near-universal use of the Likely Voter Cutoff Model (LVCM) in pre-election polling, and stratification to demographic and partisanship quanta derived from (rightward) adjusted prior-election exit polls in all polling, were methodological distortions that pushed both exit polls and pre-election polls significantly to the right, corroding our baselines and making forensic analysis much less likely to detect rightward shifts in the votecounts.
Indeed, given the rightward distortions of the adaptive polling methodologies, we noted that accurate polls in E2014 would serve as a red-flag signal of rightward manipulation of the votecounts. In effect, the LVCM and the adjusted-exit-poll-derived weightings constituted a rightward “pre-adjustment” of the polls, such that any rightward votecount manipulations of comparable magnitude would be “covered.”
It is against this backdrop that we present the E2014 polling and votecount data, recognizing that the adaptive polling methodologies which right-skewed our baselines would combine to reduce the magnitude of any red shift we measured and significantly mitigate the footprint of votecount manipulation in this election.
The tables that follow compare polling and votecount results, where polling data was available, for US Senate, gubernatorial, and US House elections. The exit polling numbers represent the first publicly posted values, prior to completion of the “adjustment” process, in the course of which the poll results are forced to congruity with the votecounts. The “red shift” represents the disparity between the votecount and exit poll margins. For this purpose, a margin is positive when the Democratic candidate’s total exceeds that of the Republican candidate. To calculate the red shift we subtract the votecount margin from the exit poll margin, so a positive red shift number represents a “red,” or rightward, shift between the exit poll and votecount results.
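The margin convention just described can be sketched in a few lines. The figures below are purely hypothetical, for illustration only; they do not come from the tables.

```python
def red_shift(exit_poll_dem, exit_poll_rep, count_dem, count_rep):
    """Red shift = exit poll margin (D - R) minus votecount margin (D - R).

    A positive result means the official count sits to the right of the poll.
    All inputs are percentage shares (e.g. 52.0 for 52%).
    """
    exit_margin = exit_poll_dem - exit_poll_rep
    count_margin = count_dem - count_rep
    return exit_margin - count_margin

# Hypothetical illustration: poll shows D 52 / R 48, count shows D 50 / R 50.
print(red_shift(52.0, 48.0, 50.0, 50.0))  # 4.0 points of red shift
```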
Because these “unadjusted” exit polls, which have not yet been tainted by the forcing process, are permanently removed from public websites, often within minutes of poll closing, they must be captured as screenshots or in free-standing HTML format prior to their disappearance. At Election Defense Alliance we archive these captures as part of our forensic operations.
To summarize the data presented in Tables 1 – 3:
· The US Senate red shift averaged 4.1% with a half dozen races presenting red shifts of over 7%. Of the 21 Senate elections that were exit polled, 19 were red-shifted.
· The gubernatorial red shift averaged 5.0% and 20 out of the 21 races were red-shifted.
· In US House elections, which are exit polled with an aggregate national sample, the red shift was 3.7%. This is the equivalent of approximately 2.9 million votes which, if taken away from the GOP winners of the closest elections, would have been sufficient to reverse the outcomes of 89 House races such that the Democrats would now hold a 119-seat (277 – 158) House majority.
· Although the thousands of state legislative contests are not exit-polled, it is fair to assume that the consistent red shift numbers that we found in the Senate, House, and gubernatorial contests would map onto these critical (as we have seen) down-ballot elections as well.
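The margin-to-votes conversion behind the House figure can be sketched as follows, assuming (hypothetically, as a round figure not stated in this article) a total national House vote of about 78 million in E2014:

```python
# Convert an aggregate red shift percentage into its vote equivalent.
total_house_votes = 78_000_000   # assumed round figure, for illustration only
red_shift_pct = 3.7 / 100        # aggregate House red shift

margin_disparity_votes = total_house_votes * red_shift_pct
print(round(margin_disparity_votes / 1e6, 1))  # ≈ 2.9 million margin-votes
```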
These red shift numbers, well outside applicable margins of sampling error, are egregious even by the dubious historical standards of the computerized voting era in America. Although it is an indirect measure of mistabulation, the red shift has been, with very few exceptions, pervasive throughout that era, and it is not reflective of the impact of any of the overt tactics of gerrymandering, voter suppression, or big money. It represents a very telling incongruity between how voters indicate that they voted and the official tabulation of those votes. While it is not “smoking gun” proof of targeted mistabulation, it is, in the magnitude and persistence we have witnessed over the past half-dozen biennial election cycles, just about impossible to explain without reference to such fraud. It is simply too much smoke for there not to be a fire.
We relied as well on pre-election polling averages as a corroborative baseline, and found that the red shifts from these predictions were comparable to, though somewhat smaller than, the exit poll–votecount red shifts (3.3% vs. 4.1% for the US Senate races; 3.5% vs. 5.0% for the gubernatorial races; and 3.3% for the Generic Congressional Ballot vs. 3.7% for the US House Aggregate Exit Poll). We suspect that these differences can be accounted for by the impact of the Likely Voter Cutoff Model in pre-election polling, which pushes samples even further right than does the use of prior elections’ adjusted exit poll demographics to weight the current exit poll sample, thereby further reducing the poll-votecount disparity.
The standard arguments have of course been put forward that all these exit polls (and pre-election polls) were “off,” that essentially every pollster in the business (and there are many), including the exit pollsters, overestimated the turnout of Democratic voters, which was “known” to be historically low because the official votecounts and a slew of unexpected Democratic defeats tell us it was. In response to this entirely tautological argument, there are two non-jibing realities to be considered. The first is that the sampling methodologies of the polls were already distorted to incorporate the anticipated low turnout of Democratic voters in off-year elections, a model grounded in the official votecounts of this century’s three previous suspect computerized midterm elections: E2002, E2006, and E2010. The second is what would have to be termed the apparent schizoid behavior of the E2014 electorate, in which—from county-level referenda in Wisconsin backing expanded access to healthcare and an end to corporate personhood, to state-level ballot proposals to raise the minimum wage across America (see Table 4)—voters approved, by wide margins, the very same progressive proposals that the Republican candidates they apparently elected had vehemently opposed.
 The sample size of the House poll exceeded 17,000 respondents, yielding a Margin Of Error (MOE) of less than 1%.
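As a rough check of that figure, the worst-case margin of error for a simple random sample can be computed as below. This is only an approximation: real exit polls use cluster designs whose effective MOE is somewhat larger.

```python
import math

def moe_95(n, p=0.5):
    """Worst-case 95% margin of error for a simple random sample of size n.

    Uses the normal approximation: MOE = 1.96 * sqrt(p * (1 - p) / n),
    maximized at p = 0.5.
    """
    return 1.96 * math.sqrt(p * (1 - p) / n)

# With the article's 17,000+ House respondents:
print(round(moe_95(17_000) * 100, 2))  # ≈ 0.75 percent, comfortably under 1%
```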
 Of course I am not suggesting that vote theft can be targeted with such infallible precision. But it would make no sense at all not to target vote theft to the closest races and shift enough votes to ensure narrow victories. When one couples the evidence of a nearly 3 million vote disparity with even a modestly successful targeting protocol, the result is easily sufficient to flip the balance of power in the US House.
 The Generic Congressional Ballot is a tracking poll that asks a national sample of respondents whether they intend to vote for the Democratic or Republican candidate for US House in their district.
The wide margins are significant because they tell us that, unlike the key contests for public office, these ballot propositions were well outside of smell-test rigging distance. Thus, even had defeating them been an ancillary component of a strategy that appears riveted on seizing full governmental power rather than scoring points on isolated issue battlefields, these ballot propositions would have failed any reasonable risk-reward test that might have been applied, and thus were left alone.
 As was the state of California, the one place in America where Democrats actually made US House gains in E2014. This perpetuates a pattern we have noted in several previous elections that may speak to the deterrence value of a well-designed audit protocol and a higher level of scrutiny from the (Democratic) Secretary of State’s office than is found in the vast majority of other states.
With so much not making sense about E2014 it seems hardly necessary to add that it makes no sense at all for an historically unpopular Congress to be shown such electoral love by the voters that exactly TWO (out of 222) incumbent members of the Republican House majority lost their seats on November 4, 2014, while the GOP strengthened its grip on the House by adding 12 seats to its overall majority, and of course took control of the US Senate, 31 governorships, and 68 out of 100 state legislative bodies.
It would seem to require magicianship of the highest (or lowest) order to pull these results from a hat known to contain a Congressional Approval rating in the single digits (See Table 5). In handing over vote counting to computers, neither the processes nor the programming of which we are permitted to observe, we have chosen to trust the magician, and we should not be at all surprised if for his next trick he makes our sovereignty disappear.