Joint Legislative Panel Hears of Widespread Failures in CA Primary

In response to widespread election breakdowns across California during the February 5th Primary Election, an unprecedented Joint Legislative Hearing heard more than five hours of testimony in Los Angeles on Friday, March 7, concerning error-ridden voter registration rolls that blocked eligible voters from voting, shortages of provisional ballots that disenfranchised thousands, lax or nonexistent ballot custody measures, and, most notoriously, a known and completely avoidable ballot design flaw that voided the candidate choices of 12,000 voters.

VIDEO: Tom Courbat, director of Save R Vote (and also EDA Coordinator for Election Monitoring) addressed the Joint Legislative hearing in Los Angeles, March 7, about problems he observed in Riverside County in the Feb. 5 primary election.

Let the People Count


Had Enough of "Faith-Based" Elections Entrusted to a Corporate Machine?

Tell Congress and the Media You Want Paper Ballots Counted By Hand in The Precincts

Click Here for a quick, one-click way to write your U.S. Senators, your Representative, and your regional newspaper all at once.

We provide a sample letter you can adapt to make your own. Copy, paste, alter, and add what you want to say.

Click here to Act

When you're done, you can click a link to see the messages other citizens have sent.

Want to do more?

Volunteer to hand-count ballots in your local precinct: Click here to Join the I Count Corps


Fingerprints Of Election Theft

Fingerprints Of Election Theft: Were Competitive Contests Targeted?

Comparison Between Exit Poll and Vote Count Disparities in Competitive vs. Noncompetitive Contests in Election 2006
Jonathan Simon, JD, Bruce O’Dell, Dale Tavris, PhD, Josh Mitteldorf, PhD1
Election Defense Alliance

Download PDF

Abstract

In this report, we describe results from a telephone poll conducted the night of the national election of November, 2006. The poll methodology was explicitly designed to detect partisan manipulation of the vote count, and to separate evidence for manipulation from poll sampling bias. Our premise was that politically motivated tampering would target races that were projected to be competitive, while the perpetrators would be less motivated to interfere in races that were not projected to be close. Designing our poll to be maximally sensitive to such a pattern, we selected 16 counties around the country where, of the three most prominent races (Governor, Senator or US House), there was at least one competitive contest and one noncompetitive contest. In our study, the responses of the same group of respondents were compared to official election results for pairs of races, one competitive and one noncompetitive. We used paired data analysis to compare discrepancies between poll and official count for these matched pairs. Our results revealed much larger discrepancies in competitive than in noncompetitive races (p<0.007), suggesting manipulation that consistently favored Republican candidates. We also found a linear relationship between the size of the pro-Republican disparity and the tightness of the election (p<0.000022). These results corroborate analyses published elsewhere, also suggesting significant vote manipulation in favor of Republican candidates in the November, 2006 election.


1 Jonathan Simon (http://www.electiondefensealliance.org/jonathan_simon) is Co-founder of Election Defense Alliance; Bruce O'Dell (http://www.electiondefensealliance.org/bruce_odell) and Dale Tavris (http://www.electiondefensealliance.org/about_dale_tavris) are Co-Coordinators of the EDA Data Analysis Working Group; Josh Mitteldorf (http://mathforum.org/~josh) is a frequent EDA project contributor and member of the Data Analysis Working Group.

Background

Recent American elections have been tabulated by computerized voting equipment that has been proven through independent investigation by qualified security experts to be wide open to systematic insider manipulation.2 This fact has been acknowledged in the mainstream American press, and indeed in government reports.3

Nevertheless, those who, taking the next logical step, gather and present evidence to suggest that at least some recent elections may have actually been compromised continue to be met with skepticism and indifference. In light of this skepticism, election forensics experts have endeavored to take the measure of recent elections from several complementary perspectives. Several methods by which systemic election theft can be perpetrated electronically and invisibly—and with high confidence of evading immediate detection—have been documented.4

With vote-counting software and hardware both ruled ‘proprietary’ and off-limits to inspection—and with limited access to, and the scheduled destruction of, paper election records, where they exist—direct proof of an electronically-altered election outcome may well be impossible.5

Yet although systematic electronic vote manipulation may well go undetected both during and after an election, it can still leave behind rather glaring mathematical ‘fingerprints’. And when multiple analytic methods find mathematical ‘fingerprints’ that are all consistent with the same pattern of apparent mistabulation, the case becomes very strong—at least for anyone willing to contemplate the evidence, even though the implications are profoundly disturbing.

In Landslide Denied: Exit Polls vs. Vote Count 2006,6 a study published shortly after the 2006 election (‘E2006’), authors Simon and O’Dell analyzed the nationwide discrepancy between official vote counts and the E2006 exit polls. They concluded that mistabulation of votes reduced the Democratic margin in total votes cast for the House of Representatives by a minimum of 4%, or 3 million votes.

Based on the official margins of House races, the authors further concluded that, accurately tabulated, E2006 would have been an epic landslide, netting the Democrats a very substantial number of additional seats in Congress. By examining in detail the 2006 US House exit poll data’s underlying demographic and voter-preference questions, the authors were able to confirm both the validity of the exit poll sample and the size of the official mistabulation.

Past comparisons between exit polls and official results have been questioned on the grounds that sampling bias may have played a role. By comparing the national sample’s responses to a variety of established demographic and voter-preference benchmarks, Landslide Denied established that the national exit poll certainly did not ‘oversample Democrats’.7

Landslide Denied also argued that the Republicans might have succeeded in holding on to the House and the Senate, but for the fact that they calibrated and engineered a theft based on pre-October polling numbers, which subsequently shifted dramatically further toward the Democrats in the final weeks before the election. If the election had been held a month earlier, the vote-shift evidenced by the exit poll discrepancy would have sufficed to keep the Republicans in power.

This analysis has not been rebutted or challenged, although its evidence and conclusions are clearly presented and quite straightforward. On the other hand, it has gone almost completely unreported.8

In the 2006 elections, the national House exit poll could provide, at most, an indication of aggregate mistabulation on a nationwide basis. Even so, in planning and preparing for forensic analysis of the 2006 elections, it was fair to assume that any damning evidence exit polls might provide would once again face skepticism in the press (as in ‘as usual, the exit polls oversampled Democrats and cannot be relied upon’), and among official voices of both political parties. Therefore, Election Defense Alliance sought to capture data from the 2006 election from a different and, we hoped, complementary angle.


2 See for example (http://brennancenter.org/dynamic/subpages/download_file_39288.pdf), (http://itpolicy.princeton.edu/voting/ts-paper.pdf), (http://www.sos.ca.gov/elections/elections_vsr.htm), (http://www.blackboxvoting.org/BBVtsxstudy.pdf), or (http://www.blackboxvoting.org/BBVreport.pdf).

3 See, e.g., Government Accountability Office, Oct. 2005, at (http://www.gao.gov/new.items/d05956.pdf).

4 See fn. 2.

5 To these difficulties we may add the simple-enough employment of self-deleting tabulation code, which would leave no trace of foul play even in the unlikely event inspection was permitted.

6 (http://tinyurl.com/y5fk4r).

7 The national sample that had allegedly ‘oversampled Democrats’ gave President Bush approval numbers at or above established benchmarks. Several other key indicators (such as racial composition, party ID, vote for President in 2004, and Congressional approval) all corroborated the fact that the sample leaned, if anything, to the right.

8 Landslide Denied was posted on the Election Defense Alliance website on 11/17/06, and simultaneously distributed through US Newswire to hundreds of media outlets. It was picked up by one, a passing reference in a small publication in North Dakota. Landslide Denied was also submitted for inclusion in the record of Senate Rules Committee hearings on election fraud and security. It was not accepted and no explanation was offered for its rejection.



Our Approach and Methodology

In order to counter the anticipated dismissal of 2006 national exit poll evidence on the basis of sample bias, we turned to an approach that would effectively remove sampling bias as a factor by measuring how the same sample of voters responded with respect to different electoral contests.

Our study was based on the premise that vote theft would be targeted to races that were within striking distance of a shift. We hypothesized that races that appeared close in the pre-election polls would be targeted for theft, while races that were projected to be landslides would not be corrupted.

We designed a study to compare pairs of competitive and non-competitive races in such a way that responses from the same polling respondents would be used for both. Therefore we selected counties in which we anticipated, based on pre-election polling, that there would be at least one competitive contest and at least one noncompetitive contest among the races for U.S. House, U.S. Senate, and the governorship of the state.9

We viewed contests decided by a margin smaller than 10% as ‘competitive’ and contests decided by a margin of 10% or greater as ‘noncompetitive’.10 All contests in each selected county were sampled by a single Election Night survey of actual voters (whether at-precinct, early, or absentee) conducted by telephone on our behalf by the polling firm Survey USA.

As a result, the same set of respondents was asked to indicate how they had voted in each of the contests within each selected county. This ‘apples-to-apples’ comparison, rather than any presumed freedom from bias in the samples themselves,11 provided the basis for our analysis.
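
A minimal Python sketch, using hypothetical margins rather than our survey data, makes the selection rule concrete: a contest is labeled by its winning margin against the 10% line, and a county enters the paired analysis only if both labels occur among its prominent contests.

```python
# Hypothetical sketch of the contest classification and county-selection
# rule; the margins below are illustrative, not actual 2006 results.
COMPETITIVE_THRESHOLD = 10.0  # winning margin, in percentage points

def classify(margin_pct: float) -> str:
    """Label a contest by its winning margin."""
    return "competitive" if margin_pct < COMPETITIVE_THRESHOLD else "noncompetitive"

# One county's three most prominent contests (hypothetical margins):
margins = {"Governor": 3.2, "US Senate": 20.5, "US House": 8.0}
labels = {office: classify(m) for office, m in margins.items()}

# The county enters the paired analysis only if it has at least one
# contest of each type:
qualifies = {"competitive", "noncompetitive"} <= set(labels.values())
print(labels, qualifies)
```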


9 Although hundreds of counties nationwide would have met this basic criterion, our selection was further constrained by budgetary considerations: with approximately $36,000 available for this project, the counties chosen had to be sufficiently small that the cost of obtaining the voter lists would not be prohibitive, and so that enough counties could be surveyed to generate a statistically meaningful number of data points for analysis. Altogether 19 counties were surveyed for this project, of which 16 turned out to meet the criterion of having at least one competitive and one noncompetitive contest. These 16 counties form the basis of our primary analysis.

10 Our ‘paired’ analysis of course necessitates a categorical line of demarcation. While 10% is a common-sense choice, others might be imagined. As will be seen below, the actual race margins tended to a bimodal distribution (mean margin for competitive races = 3.2%, mean margin for noncompetitive races = 20.5%), generally distant enough from the 10% line to remove any concern about its arbitrariness. In fact, the divider could have been placed at 9% or 8% without having any impact on our paired analysis.

11 In this type of survey, calls are placed on Election Night to all voters on the county registration lists, but only those respondents who indicate they actually cast a vote are included in the survey results. Response rates are typically quite low and there is no attempt to eliminate self-selection response bias (e.g., if Republicans or Democrats have a greater tendency to respond and are therefore over-represented) via stratification techniques. Such efforts are not necessary for our purposes because response bias does not adversely affect our comparison between competitive and noncompetitive races drawn from the same set of respondents.


Hypothesis

Our hypothesis was that, although there would of course be discrepancies between survey results and vote counts in most (if not all) contests, in the absence of vote-shifting foul play selectively targeted to competitive races there would be no statistically significant pattern of discrepancies by which competitive and noncompetitive contests could be distinguished.

Results

Table 1 below presents our core data for the 16 counties which had both competitive and noncompetitive contests. An expanded table—showing the actual winning margins of these contests, as well as the actual vote count and exit poll percentages within the sampled counties—is presented as Appendix 1. Reading from left to right, Table 1 presents the county surveyed, the office contested, whether that contest proved to be competitive or noncompetitive, the disparity between vote count and survey results in competitive and noncompetitive races respectively, and the difference within each county between the disparities found in competitive and noncompetitive races (using the mean disparity when there were two competitive or noncompetitive races within a county).

'Red Shift' and 'Blue Shift' Defined

We designate an official vote count more Republican than the survey results to be a ‘red shift,’ and an official vote count more Democratic than the survey results to be a ‘blue shift’. The right-hand column conveys the overall picture. A positive percentage in the right-hand column indicates that there was more of a red shift (or less of a blue shift) in competitive than in noncompetitive contests in that county. That is, a positive percentage indicates a net shift toward the Republican candidate in the competitive versus noncompetitive contest(s) within a given county.

An Individual County Example

To take Hardee County, Florida, as an example: the competitive contests were for Governor and US House, and the noncompetitive contest was for US Senate. The competitive contests exhibited red shifts of 7.5% and 8.0% respectively, meaning the official vote counts in Hardee in those races were 7.5% and 8.0% more Republican than the survey results, for an average of 7.75%. In the noncompetitive contest for US Senate we see a blue shift of 3.5%, meaning the official vote count was 3.5% more Democratic than the survey results. Overall, therefore, in Hardee County - as measured by the survey responses of precisely the same group of voters - the official vote counts in competitive contests were shifted by a net of 11.25% (that is, 7.75% + 3.5%) toward the Republican candidates, relative to the official vote count in the noncompetitive contest.
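
The Hardee County arithmetic can be reproduced in a few lines of Python (a sketch using the percentages quoted above):

```python
# Reproducing the Hardee County, FL computation described above.
# disparity = official Republican vote count % minus survey Republican %;
# positive values are 'red shifts', negative values are 'blue shifts'.
competitive_disparities = [7.5, 8.0]   # Governor and US House
noncompetitive_disparities = [-3.5]    # US Senate (a 3.5% blue shift)

mean = lambda xs: sum(xs) / len(xs)

# Net relative red shift in competitive contests for this county:
net_shift = mean(competitive_disparities) - mean(noncompetitive_disparities)
print(net_shift)  # 7.75 - (-3.5) = 11.25
```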

Sixteen-County Analysis

We find that a relative red shift toward the Republican candidate in competitive contests occurred in 11 of the 16 counties. Only four counties exhibited a relative blue shift away from the Republican candidate in competitive contests.12 One county exhibited no net shift, red or blue. More significantly, we found that for the 19 competitive contests, the average survey vs. vote count disparity was a red shift of 3.6%, and for the 20 noncompetitive races the average disparity was a blue shift of 1.7%. Competitive contests were therefore relatively more red-shifted by an average of 5.3% per contest.13

Statistical Significance of the Competitive-Race ‘Red Shift’

Employing the paired t-test (two-tailed) to evaluate the statistical significance of this result, we find it to be statistically significant at the p = 0.007 level, meaning that a difference of this size between disparities in competitive and noncompetitive contests would be expected by chance only seven times in 1,000.14 According to our hypothesis, the string of positive percentages in the right-hand column should not occur unless systematic election mistabulation is occurring—selectively, in competitive contests, and favoring Republican candidates. In the absence of targeted mistabulation, the mean value at the bottom of the right-hand column would be at or very close to zero.
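
For readers who wish to replicate the test: a paired t-test on the competitive/noncompetitive disparity pairs is equivalent to a one-sample t-test of the per-county differences against zero. The SciPy sketch below uses placeholder values standing in for the right-hand column of Table 1, not the study's actual data.

```python
# Sketch of the significance test; the 16 values are placeholders for
# the right-hand column of Table 1 (competitive minus noncompetitive
# disparity, one per county), not the study's actual data.
from scipy import stats

county_differences = [11.25, 5.0, 3.5, -2.0, 0.0, 7.5, 4.0, 6.5,
                      -1.5, 8.0, 2.5, 10.5, -3.0, 5.5, 9.0, 1.0]

# A paired t-test on the underlying pairs reduces to a one-sample
# t-test of the differences against a population mean of zero.
t_stat, p_two_tailed = stats.ttest_1samp(county_differences, popmean=0.0)

# One-tailed p value, if testing only for a net red shift (see fn. 14):
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2
print(t_stat, p_two_tailed, p_one_tailed)
```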


12 Interestingly, two of the four ‘net blue shift’ counties are located in Pennsylvania, a state which stood out in E2006 for bucking the red shift pattern in statewide US Senate races. While a total of 21 Senate races exhibited red shifts (mean = 4.2%), Pennsylvania, a state under Democratic administrative control, was one of only five states to exhibit a blue shift (2%) in its Senate race. At this point we can do little more than speculate about the possible effects of partisan administrative control upon both aggregate mistabulation and targeting patterns. See also, for example (http://kdka.com/topstories/local_story_311194635.html).

13 Because of the above-mentioned averaging within counties, the 16-county mean difference between disparities in competitive and noncompetitive contests was a slightly higher 5.47%.

14 A one-tailed t-test, justifiably employed if we are testing only for the likelihood of an overall competitive contest red shift, would yield a p value of 0.003, a 3-in-1,000 prospect of chance occurrence. It should also be noted that a regression analysis of magnitude/direction of shift relative to magnitude of contest margin yields an F value of 21.9, corresponding to a p value of p<0.000022, strongly corroborating the correlation found using the paired-testing approach. Such an analysis also dispenses with what some might consider an arbitrary dividing line between competitive and noncompetitive contests at a margin of 10%, necessary for the paired-test approach. The shift-margin correlation is powerful using either approach. Please see Appendix 2 for this analysis.


Discussion

We have already discussed the evidence for an aggregate mistabulation of votes in E2006 of a magnitude sufficient to alter the outcome of dozens of federal and statewide elections.15 The aggregate evidence is based on the quasi-official exit polls conducted by Edison Research and Mitofsky International (‘Edison/Mitofsky’) for the media consortium known as the National Election Pool (‘NEP’).

In Landslide Denied,16 it is shown not only that the NEP sample of the national electorate (i.e., the aggregate vote for all House races) was of a size that makes it a virtual impossibility that the 4% poll-vote discrepancy could occur as a result of chance or sampling error but also, more significantly, that the alleged political bias of the sample towards the Democrats did not exist, as proven by the demographics of the exit poll sample itself.

Yet whenever a direct comparison between poll results (whether pre-election, exit, or post-election) and official vote counts is made and a discrepancy is noted, it is, inexplicably, always the polls that the media chorus hastens to discount and dismiss. Demonstrating the lax standards of computer security and the inadequate procedural safeguards universally applied to our electronic voting systems seems to make no impression.

The present study was undertaken because we anticipated—correctly, as it turned out—that direct poll-vote comparisons, if they appeared to indicate outcome-determinative mistabulation, would likely face hasty dismissal, predictably on the grounds of sample bias. We therefore sought a methodology that would serve to eliminate any effect of sampling bias from the equation.17


15 In Landslide Denied (http://tinyurl.com/y5fk4r), the authors established a net shift to the Republican candidates for US House of Representatives of at least 3 million votes nationwide.

16 pp. 2 - 16.

17 Much of the analysis in E2004 focused on the astounding individual exit poll-vote count disparities that turned up in certain states and in the national popular vote. But some attention was also given to the telling distribution of disparities between states that were considered ‘battlegrounds’ on the one hand and ‘safe’ states on the other. It emerged that, of the 11 battleground states, 10 were red-shifted. It further emerged that, relative to their respective average MOEs (the battleground states were more heavily sampled than the safe states, which makes a shift of the same magnitude less likely to occur in a battleground state), the battleground states as a group were nearly three times as red shifted as the safe states. So in a sense, in E2004, there was already a rough but glaring comparative analysis of competitive and noncompetitive states, pointing strongly to targeted vote-shifting. The question raised was, if the exit poll-vote count disparity was caused by ‘reluctant Bush responders’, why did this very useful phenomenon (for which no evidence was ever presented) occur so disproportionately in competitive states; that is, why were Bush voters reluctant in Ohio and Florida (where it counted) but not in, say, Utah or Idaho (where it did not)? No cogent answer was ever given.



How Our Study Neutralizes the Impact of Sample Bias

In the vast majority of federal and state political contests, it is possible to ascertain well in advance of Election Day the degree to which the race will be competitive. It is therefore possible to target competitive contests for fraudulent manipulation in a timeframe that allows the necessary mechanisms to be selectively deployed18 (for example, tainted memory cards,19 or malicious code or code parameters installed under the guise of a legitimate software distribution).

We found that we could identify such targeting patterns using poll-vote comparisons from which sampling bias had been eliminated as a factor. In the 16 counties we studied, in the absence of fraud targeted to competitive contests, we would expect no particular correlation between poll-vote disparities and the competitiveness of the contests. Disparities would of course be expected, both as predicted by the statistical margin of error (‘MOE’) of each poll and as a result of any sampling bias independent of such pure statistical considerations.20

But, since we are not relying upon a direct poll-votecount comparison, but rather upon comparison between disparities, we are not concerned with the impact of either sampling error or sampling bias on the poll-votecount disparities which constitute our data set. Indeed sampling bias in any given county survey could be very substantial without affecting the validity of our competitive-noncompetitive comparison, because the same putatively biased set of respondents would be our benchmark for both competitive and noncompetitive contest votecounts.

Take, as an example, Van Buren County, Iowa. In this county the noncompetitive Governor’s race votecount margin was shifted 8% towards the Republican relative to the poll, a result that might be attributed to sampling bias (an oversampling of Democrats).

But in the same county, and with the same set of respondents, the competitive House race votecount margin was shifted 18.5% towards the Republican relative to the poll. We can see that sampling bias, whether or not it was in fact present, drops out of the equation entirely, because it would be equally present in both races (using the same set of respondents) and could not account for the 10.5% difference between the two shifts.
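
This cancellation can be written out explicitly. In the hypothetical Python sketch below, a constant bias term is added to both races' disparities and drops out of the difference; the "true" shift values are illustrative assumptions, not measured quantities.

```python
# Hypothetical demonstration that a constant sampling bias cancels out
# of the competitive-minus-noncompetitive comparison.
bias = 4.0  # suppose the sample overstates the Democratic share by 4 points

true_gov_shift = 4.0     # red shift the Governor's race would show unbiased
true_house_shift = 14.5  # red shift the House race would show unbiased

observed_gov = true_gov_shift + bias      # 8.0, as observed in Van Buren
observed_house = true_house_shift + bias  # 18.5, as observed in Van Buren

# The bias term is identical in both observations and cancels:
assert observed_house - observed_gov == true_house_shift - true_gov_shift
print(observed_house - observed_gov)  # 10.5
```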

Thus, in the absence of a competitive contest targeting pattern, disparities would be just about equally likely to occur, and equally likely to be in the “red” or “blue” direction, in competitive and noncompetitive contests alike.21 This is not what we found. We found a strong correlation between the competitiveness of a contest and the poll-vote disparity for the counties we surveyed. Competitive contest votecounts, taken as a group, were strongly red-shifted, with official vote counts more Republican than the poll results, as compared to noncompetitive contest vote counts. The goal of our study was not to identify particular contests, counties, or districts as having been targeted for rigging, but rather to determine whether there existed an overall pattern indicative of a targeting process, an indelible fingerprint of electoral manipulation. In this we succeeded, to a high level of statistical significance.


18 See http://brennancenter.org/dynamic/subpages/download_file_39288.pdf pages 37-39 for parameterized attacks on voting systems.

19 See http://itpolicy.princeton.edu/voting/ts-paper.pdf for attacks on voting systems via centrally-programmed memory cards.

20 It is important to understand the distinction between sampling error and sampling bias. Sampling error, generally reflected in a poll’s stated MOE, derives from the statistical chance that a fairly drawn sample (i.e., one drawn at random and without bias) will misrepresent the whole to some quantifiable, and usually very small, degree. Sampling bias, on the other hand, extends beyond any such purely statistical limitations to encompass any intentional or inadvertent biases in the sampling process that yield further misrepresentation. A classic example would be interviewers who ignore random selection instructions to choose respondents whom they know or who look more ‘like them’; another would be a differential response rate based on categorical receptivity to being interviewed or ownership of the technology (e.g., telephone, computer) used for the poll. Effects of sampling bias can be virtually eliminated by a thorough demographic weighting process such as that employed by the NEP prior to publication of their poll results. Such a process was not, however, necessary to the design of the current study, as explained in fn. 11.

21 “Just about equally” because the MOE decreases very slightly between a 50%-50% contest and a 75%-25% contest (the most competitive and least competitive ends of our spectrum of contests). At the 200–300 sample sizes we are primarily working with, the MOE decrease is about 1%. This minor variation had no quantitative impact on our analysis.
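
The roughly 1% figure in fn. 21 can be checked with the standard margin-of-error formula for a proportion; the sketch below assumes a 95% confidence level and takes n = 250 as a mid-range sample size.

```python
# Checking the MOE comparison in fn. 21 at the 95% confidence level.
import math

def moe(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error, in percentage points, for proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

n = 250  # mid-range of the 200-300 sample sizes cited above
print(moe(0.50, n))  # ~6.2 points for a 50%-50% contest
print(moe(0.75, n))  # ~5.4 points for a 75%-25% contest: about 1 point less
```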


Methodological Limitations

No discussion would be complete without a frank acknowledgement of our study’s limitations. We were compelled by budgetary considerations to select a small set of relatively small counties for our study. We could not afford to test any of the larger counties, where the cost of registration lists and survey completions would have been prohibitive. In applying our approach to future elections, in particular to 2008, we hope to significantly expand the number and scope of counties surveyed.

Should E2008 be as much a victim of targeted rigging as E2006 appears to have been, the expanded study we expect to undertake will expose and quantify the pattern to a ‘DNA-level’ of statistical certainty.

Or, put another way, it would appear that in light of political circumstances any effort to seize national control through manipulation of the vote counting in 2008 will have to be either of an aggregate magnitude that is truly shocking and so carries a high risk of exposure, or so well-targeted that the targeting pattern itself sticks out like a sore thumb. To deter or expose massive electoral subversion, both modes of attack must be anticipated and monitored.

Conclusion

Our study was modest in scope because of financial constraints, but it was tightly-focused in its design. The result shines a powerful triple beam into the dark corner of secret electronic vote-counting in American elections.

• First, it detects a clear pattern indicating a wholesale shift in tallied votes. This is consistent with our study of aggregate vote shifting presented in Landslide Denied.

• Second, it identifies the overall direction of the shift: in favor of Republican candidates, once again corroborating our aggregate findings in Landslide Denied.

• Third, it confirms the common-sense notion that any group with the will and ability to secretly manipulate vote tabulation would likely focus their efforts on changing the outcomes of close contests, where the power of electronic vote-shifting would be maximized through selective targeting, while at the same time minimizing the size of the aggregate shift—and the corresponding risk of discovery.

We found evidence, in Landslide Denied, of an aggregate net shift of 3 million votes nationwide from Democratic to Republican candidates for the US House. If one imagines those shifted votes distributed randomly and evenly across the 435 contests, it would amount to a net shift of just under 7000 votes per contest. If we apply this model by taking 3500 putatively shifted votes from each Republican candidate and transferring them back to the Democratic candidate (for a net shift of 7000 votes), it would reverse the outcome of 15 House contests in 2006. This is not an inconsiderable effect, as it would have given the Democrats a 30-seat greater margin (248 – 187).
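
The arithmetic behind the uniform-distribution model can be spelled out in a few lines (a Python sketch; the 3 million figure is the Landslide Denied estimate quoted above):

```python
# Arithmetic for the uniform-distribution model described above.
total_net_shift = 3_000_000   # net votes shifted nationwide (Landslide Denied)
house_contests = 435

net_shift_per_contest = total_net_shift / house_contests
print(net_shift_per_contest)  # ~6897: "just under 7000" per contest

# Moving one vote from candidate A to candidate B changes the margin by 2,
# so a net margin shift of ~7000 means ~3500 votes actually moved:
votes_moved = net_shift_per_contest / 2
print(votes_moved)  # ~3448, i.e. roughly the 3500 figure used above
```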

If, however, we target and apply those same 3 million shifted votes to the most competitive Republican victories, we find it would instead reverse the outcome of 112 contests, giving the Democrats an overwhelming 345 – 90 majority in the House. We naturally do not suggest that vote-shifting in 2006 was, or could be, targeted with such hindsight-aided precision. Our point is rather that targeting, even at the modest level of precision obtainable months in advance (from historical voting patterns and pre-election polling) can vastly increase the bottom-line effect of the covert shift of a given total number of votes or—conversely and more ominously—can enable a political control-shifting electoral manipulation that leaves only the smallest and all-but-undetectable fingerprint of aggregate mistabulation.22

In E2006, the explosive movement toward the Democrats in the month of October23 would have overwhelmed a rational targeting plan finalized during the pre-October period, after which the logistics of further deployment or recalibration of vote-shifting mechanisms would most likely have been prohibitively problematic.24 Such an extraordinary pre-election dynamic certainly cannot be counted on again to defeat attempts to seize political control via electoral manipulation. We submit that our findings regarding targeting in the present study, coupled with our earlier findings in Landslide Denied, sound an alarm for democracy, and make a compelling case for expanded monitoring of future elections.

We restate here the concluding sentences of Landslide Denied, as these latest findings only serve to increase the urgency of our warning: ‘The vulnerability is manifest; the stakes are enormous; the incentive is obvious; the evidence is strong and persistent. Any system so clearly at risk of interference and gross manipulation cannot and must not be trusted to tally the votes in any future elections.’

* * *


22 This is especially ominous in light of the fact that, in the absence of any effective system of intrinsic electoral audits, the only check mechanism of sufficient sensitivity and statistical power to effectively challenge the official numbers spit out by the computers is the demographically validated national exit poll (assuming that ‘unadjusted’ exit poll results are made available in 2008). But this check mechanism detects only an aggregate disparity. Targeted rigging allows the theft of both the Presidency and Congress with a footfall light enough to avoid setting off this sole remaining burglar alarm.

23 See Landslide Denied, pp. 13 - 15.

24 See Landslide Denied, Appendix 2. Although the vulnerabilities of vote-counting computers make it possible to shift (or delete or fabricate) virtually unlimited numbers of votes, the size of the footprint and the likelihood of detection of course increase accordingly. The logical vote-shifting algorithm therefore remains ‘take no more than you need’. A possible exception is the Presidential race, in which there is a rather compelling advantage to shifting enough votes nationwide to ensure a popular-vote victory, even though an Electoral College victory might be secured with a well-targeted fraction of those votes. A popular-vote victory—as reflected in the contrasting behavior of the Democratic candidates in 2000 and 2004—plays a major role in granting or denying a Presidential candidate the standing, in the media and in the court of public opinion, to challenge even quite egregious anomalies in decisive battleground states.


Appendix 1 - Expanded Table 1

Appendix 2 - Regression Analysis

The purpose of the regression analysis was to examine the correlation between vote margin and within-county exit poll-vote count disparity. We included in this analysis, as a separate data point, each of the 39 races in the 16 counties that served as the basis for our paired t-test analysis. This analysis represents a way of looking at the same data from a different angle, with two advantages and two disadvantages relative to the paired t-test analysis.

The disadvantages were:

1. The regression analysis does not completely eliminate bias as an explanation for our results (though it eliminates the great majority of potential bias), since some counties contributed data points to a noncompetitive race without being matched by a competitive race, or vice versa. The exact same population was therefore not used for competitive and noncompetitive races in this analysis. However, the two populations were very similar, and while the potential for a small amount of bias exists in this analysis, we see no reason to suspect that any is actually present.

2. The rationale for using the paired t-test was that competitive races were characterized by the potential for fraud, whereas there would be no reason to commit fraud in noncompetitive races. Under that assumption, the vote margins themselves would be unimportant, as long as the races could be characterized as competitive or noncompetitive. If this assumption were accurate, then an analysis that included the vote margins of the races would incorporate meaningless data, which could weaken the ability to detect meaningful differences between competitive and noncompetitive races.

The advantages were:

1. When analyzing continuous variables (which vote margins are), regression analysis generally provides more power to detect meaningful differences than t-tests, which do not make use of the continuous nature of the variable, but dichotomize it instead.

2. To the extent that it might have been difficult to ascertain prior to the election whether a race would be competitive or noncompetitive, it is reasonable to assume that the closer a race was presumed to be, the more likely it would be targeted for, and susceptible to, fraud.

The regression analysis provided an F value of 21.85, corresponding to a p value of p<0.000022. That means that the correlation between vote margin and within-county exit poll-vote count disparity was so strong that it would have occurred only about one out of 50,000 times on the basis of chance alone (see graph below).
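
A minimal SciPy sketch of such a regression follows. The (margin, disparity) pairs are placeholders, not the study's 39 data points; for a simple linear regression, the overall F statistic is recovered as the square of the slope's t statistic.

```python
# Sketch of the margin-vs-disparity regression; the data points are
# placeholders, not the 39 contests analyzed in the study.
import numpy as np
from scipy import stats

margin = np.array([3.2, 20.5, 8.0, 15.0, 2.1, 25.0])     # winning margin, %
disparity = np.array([7.5, -3.5, 8.0, -1.0, 6.0, -2.5])  # red (+) / blue (-) shift, %

res = stats.linregress(margin, disparity)
F = (res.slope / res.stderr) ** 2  # for simple regression, F = t^2 for the slope
print(F, res.pvalue)  # the study reports F = 21.85, p < 0.000022
```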


Appendix 3 – Survey USA Data Links

State  County       Link

MO     Henry           http://www.voterrollcall.com/client/PollReport.aspx?g=a6e072a1-a39e-4f6c-95e4-af1a0150bcac
MO     Cedar           http://www.voterrollcall.com/client/PollReport.aspx?g=cfd957af-bc6d-406e-b05e-23f025dd91a3
TN      Haywood      http://www.voterrollcall.com/client/PollReport.aspx?g=f5256fb4-48be-434f-a8ac-1e6c9c768e00
FL       Hardee         http://www.voterrollcall.com/client/PollReport.aspx?g=7bf59ee4-894f-43fd-9113-23bc4a8a21a8
FL       Okeechobee  http://www.voterrollcall.com/client/PollReport.aspx?g=aae0d44f-8fd7-426b-9186-cdd8d2222292
PA       Bradford       http://www.voterrollcall.com/client/PollReport.aspx?g=d3b628f5-5da3-42c7-96b9-350bc4fd11d2
PA       Wyoming      http://www.voterrollcall.com/client/PollReport.aspx?g=f04c2158-acee-4a6e-912f-14eef91303f0
MN      Mower          http://www.voterrollcall.com/client/PollReport.aspx?g=f065fa14-3452-4321-99dc-42fa8c48ee53
MN      Pipestone     http://www.voterrollcall.com/client/PollReport.aspx?g=6889cbbc-ade1-400e-a49e-c629be32bce0
OH      Adams         http://www.voterrollcall.com/client/PollReport.aspx?g=42f186df-1fdc-4f41-b5d6-b9b30026106d
GA      Jefferson      http://www.voterrollcall.com/client/PollReport.aspx?g=b962e036-0513-423b-9a5b-5d29892bf0c3
GA      Emanuel       http://www.voterrollcall.com/client/PollReport.aspx?g=dd565bbb-8dfc-4143-bd8c-016ac197203b
IA       Van Buren    http://www.voterrollcall.com/client/PollReport.aspx?g=b19fd14c-f493-406f-a18c-cf62dc1e1df6
IA       Jefferson      http://www.voterrollcall.com/client/PollReport.aspx?g=2b03ce9c-121a-45f4-a3d5-5453d177465d
NV      Humboldt     http://www.voterrollcall.com/client/PollReport.aspx?g=a36dfabf-2b31-4513-bc83-5b416056f84d
VA      Lancaster     http://www.voterrollcall.com/client/PollReport.aspx?g=70c3610b-c22e-49ed-b5a1-e102cf6ad4cf

Landslide Denied

Major Miscount in 2006 Election: Were 3 Million Votes "Misplaced"?
Read the Full Press Release
READ THE FULL REPORT
(as revised and expanded 7/15/07)

Study the Exit Poll Data

Election Defense Alliance, a national election integrity organization, issued an urgent call today for an investigation into the 2006 election results and a moratorium on deployment of all electronic voting equipment after analysis of national exit polling data indicated a major undercount of Democratic votes and an overcount of Republican votes in congressional races across the country. These findings are an alarming indictment of the American election system in which 80% of voters used electronic voting equipment.

As in 2004, the Exit Poll and the reported election results do not add up. Once again the media reflexively discredited the Exit Poll. But there are several objective yardsticks in the Exit Polls that establish their validity and expose the inaccuracy of the election returns. These findings are detailed in a paper published today on the EDA website.

Protect Your OWN Vote: Verify Your Registration!

Registered to Vote?
Are you Sure?
Resist the Purge!

Confirm Your Registration Status Online!
Click the links below to be sure you know the registration deadlines and rules for your state,
and that your name is on the voter registration roll

Voter Registration Deadline Dates by State

Details on Registration System in Your State

Check Your Registration Status at CanIVote.org

Take Action

Send This Page to a Friend


Click for Action of the Day


What Can You Do to Promote Election Integrity?


If you are asking, "But what can one person do?" this page is a good place to start.

You can click the "Take Action" tab on our main menu (at the top of this page). We post recommended actions as opportunities arise.

At the foot of this page, you will find the Take Action menu tree. Each link opens up to list additional practical, effective ways you can take action in your own community or from your desktop to restore integrity, transparency, and public accountability to U.S. electoral democracy. (Check the "Citizen Activism Tools" and "Action of the Day" links for updates.)

Other Easy Ways to Get Started

1.  Open an EDA web account

Click to register: http://www.ElectionDefenseAlliance.org/Join
With an EDA web account, you can add content to the site. Monitoring elections nationwide is a big job. We need you, the voters, observing, reporting, and proposing solutions.
If you're eager to do even more, check into the EDA Working Groups

2.  Subscribe for EDA news and announcements

Click to opt-in: EDA Subscribe
Generally, newsletters go out twice a month, with occasional region-specific event announcements in between.
You are always in control of your own subscription.

3.  Volunteer

Whatever kind of organizing work you enjoy doing, EDA needs it done -- whether that be events production, public speaking, video, volunteer recruitment, website-building, database programming, research, writing, graphics, publicity, or you name it. You decide what you want to do, and we'll give you the latitude to do it.

Tell us what you can help with:
Click over to the Volunteer Registry and design your own program.


We have some special volunteer roles; click
the IDEALIST.org logo in the left side column to view them.

If you have questions or suggestions you would like to discuss, call 877 375 3930. We'll get right back to you.

4. Donate

EDA is a sponsored project of International Humanities Center, a 501(c)(3) organization, so your donations are tax-deductible.

We welcome your one-time donations for general operating support, or for any particular EDA project you would like to specify.
Recurring monthly contributions are especially helpful. Please select a contribution level that is right for you.

Our donations page makes it easy for you to set up a contribution and automatically receive a record of your donation.

5. What Else?


Check the Action of the Day right now (and frequently thereafter) for new action updates. Send in yours, too.

Then click What You Can Do for many more options to choose from.

Last but not least,  
Send This Page to a Friend



Think Nationally, Act Locally

A core concept we have at EDA is that all elections are local.
There is no substitute for taking action in the precincts and counties where you live and vote.

Our goal at EDA is to build local-to-national collaboration among regional election integrity groups acting locally but also with a collective strategy for effecting national outcomes. This collaboration is not one-way and top-down, but two-way and interactive. Local groups have invaluable direct experience that can be synthesized and applied at a national level. A national group can develop research, legal, media, and fundraising capacities that are beyond the scope of smaller local groups.

We encourage you to get active with a local or regional election integrity group where you live -- and then put that experience to work at the national level as well, by becoming an active member of an EDA Working Group. Everything that you learn and do on the local level can be leveraged for double duty, working with EDA to build a collaborative national effort.

To find a state or regional election integrity group near you, see this directory:
http://electiondefensealliance.org/regional_election_integrity_organizations

If you know of additional groups not listed here, please send us their contact information.
