Field Hearing in New York City, May 7, 2007

 

United States House of Representatives

Committee on Oversight and Government Reform,

Subcommittee on Information Policy, Census, and National Archives

 

Statement of

Teresa Hommel

www.wheresthepaper.org

Chair, Task Force on Election Integrity, Community Church of New York

 

Limitations of Certification Testing, “Transparency,” and Current Standards

and What Congress Can Do

 

Standards and testing, the subject of this hearing, are only one part of what would be needed to make computerized elections capable of supporting election legitimacy and legitimate democratic government.

 

Standards and testing must be understood in a realistic, overarching context. Without that understanding, Congress cannot mandate proper use of computers in elections or exercise proper oversight of the small part that standards and testing represent.

 

For example, the Help America Vote Act of 2002 and the currently proposed HR811, the "Voter Confidence and Increased Accessibility Act of 2007," both focus on details, and fail to put those details into an overarching structure of proper management of computer use.[1]

 

In my comments below, I refer to all computerized voting systems as “computers” for simplicity, and specify Direct Recording Electronic voting machines (“DREs”) when referring only to them.

 

A. Limitations of certification testing

 

Standards and testing are useful to minimize malfunctions, but they cannot guarantee that a computer will work on election day.

 

1. Are the computers used on election day the same as those tested for certification? Election administrators do not know how to verify this. They do not know what equipment they have, how to verify that it is indeed what they believe they purchased, or how to verify that it has not been tampered with by maintenance personnel or other insiders with access to it. Election administrators are not demanding to verify their equipment, and no law requires them to do so. The law has actually allowed vendors to prevent verification of equipment by claiming that the products are “trade secrets.”

 

2. Computers are volatile, unlike, for example, mechanical lever machines, which are stable. After a lever machine is programmed, it won't change itself, no one can change it via remote communications, and no one can change its functionality by taking two seconds to insert a different memory card. Yet the law allows computers to be treated like mechanical devices, because it does not require pre- and post-election publicly observable verification of equipment.

 

3. Physical security is impossible during elections, since the computers used for voting are used by the public and managed by poll workers who are not computer literate. A dishonest person would have unlimited opportunities to tamper: he or she could simply walk into a poll site, claim to be a technician from the Board of Elections sent to “check the computer,” and tamper. DREs are often compared to ATMs, which most people use and trust. Yet banks routinely lose millions of dollars annually to theft through their ATMs, and this money is written off as “a cost of doing business.” A Google search for “ATM theft” yields over a million results.

 

4. There is a widespread belief in the field of elections that standards and testing can ensure correct and accurate computer function on election day. This belief is false. A test can show that a computer is capable of working today under the tested conditions, but no test can guarantee that the computer will work tomorrow under the same or different conditions. In the professional world, no installation tests its equipment once and then assumes that all future processing will be correct. Instead, it verifies continuously. I have been a computer professional for 40 years (since 1967). For the last 24 years I have been a short-term contractor and have worked for hundreds of major companies and governmental agencies. It is my personal observation that every professional installation verifies every transaction and has a staff that monitors the equipment twenty-four hours a day, seven days a week. In contrast, the law does not require election administrators to verify that their computers have worked properly. The problem is that vendors have assured legislators and election administrators that “the equipment is certified, therefore we can trust that it works properly.” Our law fails to require realistic verification.

 

5. The idea that you can trust a computer because that model has been certified rests on false assumptions, such as:

 

If a computer works today, it will work tomorrow.

If a computer works today, it will work the same way tomorrow.

If one computer works, another computer of the same make and model will also work.

If you buy a large number of computers, they will all work the same and none of them will be lemons.

 

6. Computer security cannot be guaranteed, and the assumption that computers used for voting will be secure is unrealistic. In spite of the continuous, routine verification of results in professional installations, as well as expensive security monitoring, the FBI Computer Crime Survey of 2005 reported that in that one year 87% of organizations had "security incidents," 64% lost money (showing that the incidents were not trivial), and 44% had intrusions from within their own organization (showing that insider tampering is common, and that outside hacking over the internet is only one small part of the security picture).[2] In this context, the idea that standards and certification testing can guarantee computer security is bizarrely inaccurate, yet it is widely held by public officials and election administrators.

 

B. Limitations of the term "transparency"

 

“Transparency” is an inadequate word because different people have different ideas of what it means. I urge everyone to use the term “understandable and observable” when speaking about elections.

 

Election legitimacy requires that ordinary non-technical citizens be able to appropriately observe the handling of votes and ballots, understand what they observe, and attest that procedures were proper and honest.

 

DREs produce and count electronic votes and ballots. When DREs are used, the votes that are counted for election tallies consist of invisible electrical charges inside computer circuits. This means that no voters or observers can understand or observe the votes and ballots.

 

Use of electronic votes forces investigation of election irregularities to focus on computers rather than votes and ballots, and to be performed by “experts” rather than average citizens. Use of electronic votes has also enabled the trade secret claims of vendors to prevent appropriate investigation.

 

DREs provide one or two placebos for voters to “verify” -- the computer screen and in some jurisdictions a voter-verifiable paper audit trail (“VVPAT”). Yet neither the screen nor the VVPAT is used to create initial tallies, and the law does not require meaningful verification of computer results to occur before initial tallies are announced. The tiny spot-checks that may be mandated under the name of “recount” or “audit” will allow unverified computer tallies of electronic votes to be used in the vast majority of cases.

 

The history of American election fraud provides many examples of dishonest people who marked and cast paper ballots "for" real or non-existent voters. DREs continue the tradition of this type of fraud but automate it and prevent subsequent opening of the ballot box to examine evidence. DREs force voters to turn over their ballots to be marked and cast by others.

 

C. Limitations of current standards

 

Even if a system is certified to the 2005 standards, this is not an indication of good quality, because the standards are themselves seriously flawed.[3]

 

1. The standards do not require computerized voting systems to provide a means for independent verification of vote recording, casting, or counting. In other words, systems that have been designed to be impossible to independently verify are legal under these standards.

 

2. The standards give the EAC blanket authority to violate any of the standards and approve any system whether or not it passes tests. From Vol. II, Appendix B5:

 

"Of note, any uncorrected deficiency that does not involve the loss or corruption of voting data shall not necessarily be cause for rejection. Deficiencies of this type may include failure to fully achieve the levels of performance specified in Volume I or failure to fully implement formal programs for quality assurance and configuration management described in Volume I, Sections 8 and 9. The nature of the deficiency is described in detail sufficient to support the recommendation either to accept or to reject the system. The recommendation is based on consideration of the probable effect the deficiency will have on safe and efficient system operation during all phases of election use."

 

The problem here is that no one can know in advance if a deficiency will "involve the loss or corruption of voting data."

 

3. The standards require a minimum Mean Time Between Failure (MTBF) of 163 hours. This allows roughly a 9% failure rate over a 15-hour election day (the arithmetic is sketched after the quotation below). From Vol. I, 4.3.3, Reliability:

 

The reliability of voting system devices shall be measured as Mean Time Between Failure (MTBF) for the system submitted for testing. MTBF is defined as the value of the ratio of operating time to the number of failures which have occurred in the specified time interval. A typical system operations scenario consists of approximately 45 hours of equipment operation, consisting of 30 hours of equipment set-up and readiness testing and 15 hours of elections operations. For the purpose of demonstrating compliance with this requirement, a failure is defined as any event which results in either the:

    • Loss of one or more functions

    • Degradation of performance such that the device is unable to perform its intended function for longer than 10 seconds

 

The MTBF demonstrated during certification testing shall be at least 163 hours.
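
The 9% figure follows directly from the numbers in the standard. The sketch below is illustrative only; it assumes a constant failure rate, the usual simplification behind MTBF figures, and is not language from the standard:

    # Illustrative arithmetic only: what a 163-hour minimum MTBF implies for a
    # 15-hour election day, assuming a constant failure rate.
    import math

    MTBF_HOURS = 163.0          # minimum MTBF required by the 2005 standards
    ELECTION_DAY_HOURS = 15.0   # election operations in the standards' scenario

    # Expected number of failures per machine during one election day
    expected_failures = ELECTION_DAY_HOURS / MTBF_HOURS            # about 0.092

    # Probability that a given machine fails at least once during the day,
    # under an exponential (constant-rate) failure model
    p_fail = 1 - math.exp(-ELECTION_DAY_HOURS / MTBF_HOURS)        # about 8.8%

    print(f"Expected failures per machine per 15-hour day: {expected_failures:.3f}")
    print(f"Chance a given machine fails at least once:    {p_fail:.1%}")

In other words, a fleet of machines that just meets the 163-hour minimum can be expected to produce roughly nine failures per hundred machines on every election day.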

 

For purposes of comparison, an ordinary incandescent light bulb has an MTBF of 1,000 hours, a DVD player has an MTBF of 40,000 hours (about 4.5 years), and a computer hard drive has an MTBF of 1,000,000 hours (about 114 years). Some computer scientists have speculated that more voting system failures are due to software than to hardware; in fact, both types of failure have occurred.

 

The Election Assistance Commission has not addressed this deficiency in the standards, although public comments called it to the Commission's attention in 2005. And once the standards are improved, it will take years for equipment in the field to improve as well.

 

The testing process in Vol. II, Appendix C.4, "Time-based Failure Testing Criteria," allows even higher rates of failure: 6 failures in 466 hours of testing, which is an MTBF of only about 78 hours.
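
The same arithmetic, applied to the Appendix C.4 criteria (again purely as an illustration under a constant-failure-rate assumption, not language from the standard), shows how much weaker they are:

    # Illustrative arithmetic only: the Appendix C.4 acceptance criteria
    # (up to 6 failures in 466 hours of testing), under the same
    # constant-failure-rate assumption as above.
    import math

    TEST_HOURS = 466.0
    ALLOWED_FAILURES = 6
    ELECTION_DAY_HOURS = 15.0

    implied_mtbf = TEST_HOURS / ALLOWED_FAILURES                   # about 78 hours
    expected_failures = ELECTION_DAY_HOURS / implied_mtbf          # about 0.19
    p_fail = 1 - math.exp(-ELECTION_DAY_HOURS / implied_mtbf)      # about 18%

    print(f"Implied MTBF: {implied_mtbf:.0f} hours")
    print(f"Expected failures per machine per 15-hour day: {expected_failures:.2f}")
    print(f"Chance a given machine fails at least once:    {p_fail:.1%}")

Under those criteria, roughly one machine in five or six could be expected to fail during a single 15-hour election day.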

 

D. What Congress Can Do

 

1. Define terms.

 

The Help America Vote Act of 2002 (“HAVA”) requires voting systems to "produce a permanent paper record with a manual audit capacity," yet these terms were never defined, and many jurisdictions currently use paperless DREs that cannot be meaningfully audited. New legislation should not be necessary to require a VVPAT and independent auditability.

 

The same failure to define terms appears in HR811, the “Voter Confidence and Increased Accessibility Act of 2007.” This bill requires recounts and audits but does not define the terms. This paves the way for meaningless and useless procedures, such as reprinting DRE or optical scanner tally reports, which will be called recounts or audits.

 

2. Require the opportunity for meaningful citizen observation, and prohibit use of equipment that prevents it.

 

Elections conducted in secret cannot support legitimate democratic government. Citizens must be able to observe and understand what they are observing.

 

If computers are used, citizens, including voters, election observers, and candidates, must be guaranteed access sufficient to observe closely and meaningfully. In a jurisdiction’s central tabulating location, this would require several cameras focused on the screens, keyboards, and mice of the central tabulators, with the continuous images displayed on large-screen TVs where citizens can watch.

 

Meaningful observation requires observers to understand what they are observing. Use of computers in elections means that observers would have to become computer experts, and be trained by their Board of Elections to understand all the procedures to be used. Even if voters, observers, and candidates designate hired experts to observe for them, these experts would need to be trained in the use of the equipment. It is likely that computers place an insurmountable barrier between observers and the handling of votes and ballots.

 

3. Require jurisdictions to provide backup emergency paper ballots.

 

Immediate resolution of computer problems is rarely possible on election day. For this reason emergency paper ballots must be on hand.

 

4. Require elections to be held again if voters are disenfranchised due to computer failure.

5. Mandate that voters have standing to sue when they are disenfranchised due to computer failures, and mandate remedies.

6. Mandate that candidates have standing to sue if their elections involved computer failures.

 

Allowing computer failure to disenfranchise voters will guarantee that computer failure occurs early and often.

 

If voters can’t vote because computers failed and the jurisdiction does not provide backup emergency paper ballots, elections must be held again.

 

Experience has taught us that after computer malfunctions call the outcome of an election into question, the controversy either cannot be resolved at all, or cannot be resolved soon enough to ensure that the election reflects the will of the voters. The concept of a “large margin of victory” is meaningless in computerized elections (since any margin of victory can be achieved by tampering), so remedies should not be tied to margin of victory.

 

Similarly, if computers handle voter registration lists at poll sites on election day ("electronic poll books"), the computer’s failure to find names of registered voters must be discouraged by providing penalties for the jurisdiction, and mandating standing and remedies for voters and candidates.

 

7. Prohibit secret certification testing.

 

Secrecy of certification testing does not serve the public, whether or not vendors claim that "disclosure would compromise security features." Computer scientists have repeatedly said that security cannot rely on people not knowing how a computer works (called “security by obscurity”).

 

8. Prohibit use of equipment that is subject to “trade secret” vendor claims.

 

Use of computers that are subject to trade secret claims has prevented investigation of election irregularities. The law must prohibit vendor claims of trade secrets, or prohibit the use of equipment that contains trade secret components. This must apply to voting and vote-tabulating computers as well as to electronic poll books and central voter registration systems.

 

E. MicroVote, New York State, and Ciber

 

The problem of inadequate federal and state testing has been an open secret, the details of which have been concealed by the secrecy of the process as well as the ignorance and complacency of state and local election officials. New York is the first state to have properly and independently overseen the work of our former state testing lab, Ciber.

 

More than three years ago, an interview with the executives of MicroVote, a voting machine vendor in the Midwest, revealed the basic flaws in our federal and state testing and certification processes.[4]

 

Bill Carson: Unfortunately the ITA (independent testing authority) has a limited scope in what they can test and check on the system. It is based on time and economics. For an independent test authority to absolutely, thoroughly test under all possible conditions that the device will operate properly they would have to spend, in my estimation, 10 times the amount of time and money as it took to develop it in the first place…. And the technology changes so rapidly, by the time they get done testing it, it's obsolete.

 

I-Team: So what do ITAs not test?

 

Carson: (Picks up electrical cord.) UL says that this will not shock you and it will not catch fire. They don't tell you that it actually works. That's beyond the scope of UL testing. Absolutely nothing will you see in the FEC requirements that this (puts hand on DRE voting machine) has to work. It has to have these functions. But it doesn't have to work.

 

I-Team: What about state certification testing?

 

Ries Jr.: We've been somewhat loosely monitored by the states. There's a lot of trust that the vendors are out for the best interest of the local jurisdictions. The states basically look at the federal qualification testing as being kind of the ultimate testing ground. As a vendor working with these independent testing authorities, they do a good job of following the test plans afforded to them by the vendors. They don't really go outside of those test plans. In the state of Indiana - and I'm not criticizing by any means - we just don't have the technical expertise to take these test result plans that the independent testing authorities provide them and really go through them in detail. Maybe it's just the leap of faith that the states feel that the federal testing authorities have done an adequate job and that they will adopt that product pursuant to state compliance.

 

I-Team: What about evaluation of equipment at the local level prior to a purchase? Do those buying or approving the purchase even know what questions to ask?

 

Ries Jr.: Local council, local commissioners typically don't get involved in the evaluation of equipment. And that's not a bad thing.

 

I-Team: Local jurisdictions conduct public tests of new voting equipment, but few members of the public actually attend. Why do you think that is?

 

Ries Jr.: I guess it's just a leap of faith and understanding that what we're doing is what we're presenting to the county. So there is a bit of uncertainty there. There has to be faith in their local election boards. It's one of those areas of a leap of faith. That you really do have to have a faith in your local jurisdiction, that they are conducting equitable elections in the best faith of the voters. The larger the jurisdiction, the more scrutiny should exist.

 

Failure to test computers used in elections thoroughly is unwise, given computer industry statistics. In 2000, a typical year, 72% of software projects were complete or partial failures, including 23% that were abandoned entirely after huge expenditures (and waste) of time and money. As for partial failures: if a computer system “partially” doesn’t work, it doesn’t work.[5]

 

F. Conclusion

 

Computerized election errors and fraud cannot be prevented, detected, or corrected by standards and testing. Meanwhile, the use of computers in elections has shifted the focus of discourse away from votes, voters, ballots, observers, poll workers, and candidates. When computers are used, the conversation is solely about computers. No one has more than circumstantial evidence of what might have happened to the votes and ballots.

 

In order to evaluate election integrity, then, everyone is forced to rely on computer experts. Statisticians are called in to determine confidence levels. The focus shifts from "open the ballot box, let me see the ballots" to "let my expert examine the computer."

 

Elections may be acceptable without verification if the procedures for handling votes and ballots are properly observed and understood, but when computers are used, the computers always need to be verified--that is the nature of the technology. The idea that computers need only be "verifiable" is wrong. Computers need to be verified. This is due to the difficulty of making them work correctly in the first place, and to the impossibility of maintaining security, especially against insider tampering.

 

Many citizens and election integrity activists oppose computerization of elections for two reasons:

 

First, the computers are being used without proper verification.

 

Second, the need for meaningful observation to support election legitimacy may be impossible to meet, given the difficulty of making all election observers sufficiently computer literate and of ensuring that all Boards of Elections provide large-screen TVs so observers can watch the use of central tabulators on election night.

 

I hope that the members of the Committee on Oversight and Government Reform, Subcommittee on Information Policy, Census, and National Archives, can carry these ideas forward, share them with other members of Congress, and improve any federal legislation that comes before Congress for a vote.

 

The United States has spent a great deal of money on unverifiable and shoddy computerized voting systems. It is better to take a financial loss, however, than to lose our democracy to expensive voting equipment that is wrongly designed and wrongly used.

 

____________

1. HR811 with embedded comments: http://www.wheresthepaper.org/HR811withCmt070225.htm

2. FBI Computer Crime Survey, 2005. Regarding the 87% with security incidents, it has been said, only half jokingly, that the other 13% haven’t noticed theirs yet. The FBI’s report was itself mysteriously “lost” from the FBI web site for several months, and I obtained a paper copy from the Bureau’s Houston office.

http://houston.fbi.gov/pressrel/2006/ho011906.htm

http://www.wheresthepaper.org/YahooNews060120FBI_MostCompaniesGetHacked.htm

Financial institutions with the most sophisticated computer security in the world have had massive losses. See USA Today, “40 Million Credit Card Holders May Be at Risk”:

http://www.usatoday.com/money/perfi/general/2005-06-19-breach-usat_x.htm?csp=34

3. This section was drawn from the work of Howard Stanislevic:

http://www.wheresthepaper.org/StanislevicAreStandardsSolvingTheProblems.pdf

http://www.wheresthepaper.org/StanislevicCertificationWhosMindingTheStore.pdf

http://www.wheresthepaper.org/StanislevicCiberFailures.pdf

http://www.wheresthepaper.org/StanislevicDRE_ReliabilityMTBF.pdf

http://www.wheresthepaper.org/StanislevicGapingHole.pdf

4. “An I-Team 8 Investigation: Excerpts from Interviews with MicroVote Executives.” Posted February 2004. http://www.wishtv.com/Global/story.asp?S=1647598&nav=0Ra7JXq2

or  http://www.wheresthepaper.org/iTeam01_20MicroVoteInterview.htm

5. “Why the Current Touch Screen Voting Fiasco Was Pretty Much Inevitable,” by Robert X. Cringely, December 4, 2003. http://www.pbs.org/cringely/pulpit/pulpit20031204.html