http://broadcatching.wordpress.com/2007/09/05/voting-machines-fatally-flawed/
http://www.voiceofthevoters.org/ for transcripts of all programs
Wednesday, September 5th, 2007...7:53
VOTING MACHINES “FATALLY FLAWED”
OpEdNews
Voice of the Voters: Transcript of Matt Blaze Interview
By Mary Ann Gould
TIME TO BAN DREs?????
THE PROBLEMS WITH ELECTRONIC VOTING, ESPECIALLY DRES – A
VIEW FROM THE CA TOP TO BOTTOM STUDY
Transcript of Matt Blaze and Mary Ann Gould on Voice of the Voters!
Dr. Matt Blaze of the University of Pennsylvania and leader
of the Sequoia source code review team for California’s Top to Bottom
Electronic Voting Investigation
August 8, 2007
MAG: Good evening, Dr. Blaze. We’re glad to have you here,
especially with all the notoriety that is going around the country about the
top-to-bottom study in California.
MB: Glad to be here.
MAG: I noticed on your blog, which is excellent, www.crypto.com, that you found significant, deeply rooted weaknesses in all three of the vendors' software. Then you went on to talk about the red teams and their finding that the built-in security mechanisms they were up against simply don't work properly.
MB: That’s right. I should start by telling you a little bit
about what we did, and what my role in it was. So, the California Secretary of
State, Debra Bowen, this Spring, put together a study of the electronic voting
technology that’s used in her state, that’s primarily four systems made by
Diebold, Sequoia, Hart, and ES&S. What she did was go to the University of California, to two professors, one at UC Berkeley, David Wagner, and another at UC Davis, Matt Bishop, and asked them to put together teams to review each of
these systems in various ways. And in particular, one of the teams was to
review the source code of the systems—the programs that run on the voting
computers, and on the vote-tallying computers back at the county elections
headquarters. And another team was to attempt to use any vulnerabilities that
were found to see if these could be exploited to interfere with the proper
tallying of votes or interfere in the election, in some way. Now, my role in
this was to lead the team that looked at the source code for one of the
systems, the Sequoia system, and our reports, the red team reports, and the
source code review reports were submitted to the state a few weeks ago, and
they’re up on the Secretary of State’s website. So, my role was to basically
look at the Sequoia system’s source code and see if there were any security
problems in it, to do a security review of the software. Now, after we finished, all the reports found problems particular to the various systems, but there was an overall similarity among them. (One of the four systems, ES&S, wasn't reviewed; they didn't submit their source code in time.) For the three systems that were reviewed, Diebold, Sequoia, and Hart, all of the teams that looked at them found that the software mechanisms that are intended to secure the systems can be defeated very, very easily. They just don't work very well at all.
Because of that, the red teams that were to try to penetrate these systems and
tamper with election results in a simulated environment had a relatively easy
time of it. They were able to succeed at almost everything they tried.
MAG: Now, you indicated that what you found, even in the
code alone, was far more pervasive and much more easily exploitable than you
had ever imagined it would be. What did you mean by that?
MB: That’s right. It would be unfair to expect any large
system to be completely perfect, and really nobody expects that any large
software project is going to be completely free of mistakes or bugs or even
little security problems. And in fact, election systems are designed with
procedures that are intended to tolerate a certain amount of weakness. So we
expected that we would find some things that would be wrong. What really
surprised me, and I think surprised all of us, was just how deeply rooted the
problems were. It wasn’t simply that there were some mechanisms that could be
beefed up or that weren’t as good as they could have been, but that every
single mechanism that was intended to stop somebody from doing something just
didn’t work or could be defeated very, very easily.
Now, two of the three systems, Hart and Sequoia, haven’t
really been studied that widely in the public literature, in the academic
literature; not much had been known about them before. But the Diebold system,
various versions of that have been studied by academics, by researchers, who
had found that there were problems. But even there, the problems that were
found by the Diebold team included some things that hadn’t been found before.
MAG: Well, Harri Hursti, on our program, had said two things: one, that there was an overall weakness in the architecture; and two, that basically the equipment he had looked at had not been built for quality.
MB: I’d say there really are two problems. This is really
another way of putting that. The first, as you said: there’s a problem with the
architecture, and by the architecture, what I mean is the design of the system.
Even if it were built absolutely perfectly, the way it was designed puts
security at a bit of a disadvantage. That is, the way these systems are
designed, if you compromise one component, one voting machine somewhere, it
becomes easier than it should be to interfere with the election results. The
architectures of the systems aren’t designed with enough built-in checks and
balances and built-in—essentially—mistrust of the possibility of mistakes to
tolerate the kinds of problems that come up in any system run by people. So you
can look at the overall design of these systems and tell right off the bat that
this design was not as good for security as it could be. But, compounding that
problem, when we actually went and looked inside these systems and looked at
the source code that runs them, not only is the design weak, but the
implementation itself is weak. The code has bugs in it, there are some
fundamental security weaknesses that could have been avoided by better
programming. So that makes that weak architecture that much worse, because the
weaknesses that you might be able to exploit are just all over the place.
MAG: How did these machines get certified?
MB: There’s a federal certification process in which the
design is submitted and the source code is submitted to what’s called an
independent testing authority, and they look at the code and make sure, and
they’re supposed to make sure, that the code is written according to certain
standards. They look at the actual machines and they test them. I frankly was
surprised that the systems we looked at had passed certification.
MAG: Then that’s my question. How did they get past that
certification?
MB: I think you’d have to ask the testing authorities. It
frankly baffles me.
MAG: Okay. Then we get to the bottom line, I guess. Are the
problems fixable, or do we have systems that might be fatally flawed?
MB: I think they’re fatally flawed, and that puts us in a
real bind. We can’t just postpone our elections until the technology is ready.
So we really have two problems: one, which in a lot of ways is the easier of
the two problems, is what do we do in the long term? How would we design a
good, secure election system for use in three to five years from now? And I
think there are a number of ways we might do that, and we can talk about them.
But we’re still left with the problem of what will we do in November and what
will we do in the primaries, and what do we do in the presidential election in
2008?
MAG: And those are very serious situations. First, I'd like to ask about DREs, direct recording electronic machines, or what many people call touch-screen machines: Even if we had a printer put on them, would that solve the problem?
MB: So there’s a concept with these touch-screen DRE voting
machines, a concept called a voter-verified paper trail. The idea here is that
votes are recorded electronically, but before you finalize casting your vote,
there’s a little printer, similar to a cash register receipt printer, next to
the machine, usually behind glass, that prints out the votes that the machine
is recording, all the different candidates in each race it thinks you voted
for. What you’re supposed to do is, before pressing the “Yes, I want to cast my
vote” on the touch-screen display, you should look at that voter-verified paper
trail print-out and confirm that it actually reflects your vote. At that point, it should print "Vote confirmed" and scroll the paper forward, and then the display on the screen will go blank and let the next person vote.
So this is intended to improve the reliability and the security of these
machines, because it means that there is now a paper record of what’s been
voted for, so if the electronic record is tampered with, or is lost, or is
challenged later on, you can go to these print-outs and count up the votes that
the machines printed out. Now, this does, in fact, prevent a number of ways of
attacking these machines, a number of types of vote tampering pretty well, but
they’re not perfect; they don’t solve the problem as well as we’d need them to,
and probably not well enough to use with the kinds of machines that we’ve seen
here. The first problem is that the paper trail produced by these printers only
gets counted if there’s an actual recount. It’s a very labor-intensive process
to go through all the voting machines and count up each of the tallies in each
of the races.
MAG: So on election night, what we get as a result has
nothing to do with these paper print-outs.
MB: That’s right. These are just secondary records that are
used only if there’s a recount of particular machines, so if there is no
recount, then these paper trails are never looked at. So somebody would have to
suspect there was a problem, or challenge the results of the election for these
paper trail records to even be taken into consideration. So that’s one
weakness. Another weakness is that we really don’t know that much about how
voters behave with these print-outs. We don’t know if people actually look at
them carefully, so if the machine is running software or firmware that’s trying
to cheat, it may be able to print out invalid choices right on the printer.
MAG: And I believe that has been found.
MB: So, the behavior of voters. Because, you know, the voter is looking at the screen to cast their ballot, and there's this little receipt printer, this little cash-register-type printer, on the side, and we don't really know if people look at it carefully enough to tell if their choices are accurately recorded. The other problem is that in these voting machines, the printer itself, many of its characteristics, is under the control of the software running on the voting machine. So a corrupted voting machine that has had bad software loaded into it might be able to print out the paper trail in a very misleading way, one that looks acceptable to the voter but in fact reflects a vote for someone else. For example, it could print out the correct candidates, but then print "cancelled" below them, and then print the candidates that the machine wants to vote for.
MAG: Hmm. Now, we also have the other option with the
opscan. Now that too is vulnerable. How would you compare the two?
MB: So, the optical scanning voting systems are a little
different. There, rather than voting on a touch-screen, you vote by filling out a piece of paper, one of these optically scanned forms where you mark with a pen or pencil a spot next to the candidate you want to vote for, so you actually use a paper ballot. You fill it out at the voting booth, which is just a booth; there's no actual voting machinery there, just a little booth that gives you privacy to fill out your ballot in. Then you take this ballot and feed it into a scanning device that sits on top of a ballot
box and basically the scanning device reads the marks you put on the ballot and
figures out who you voted for, records a tally for those candidates in those
races, and deposits your ballot in the ballot box. Then, at the end of the
election, the electronic results from the optical scanner and the paper ballots
are sent back to the election headquarters. Now, what we found in looking again
at all these systems is that it’s possible to tamper with the electronic
records of optically scanned ballots that are returned from the polling place
back to headquarters and change what results are recorded. So these systems, as
they’re implemented, are still vulnerable to tampering, but they at least have
the benefit that you still have the paper ballots that the voters voted on.
And, as long as the ballot boxes are adequately secured, and somebody is
watching them and they’re properly sealed, if you suspect there might have been
that kind of tampering, you can go back and count the paper ballots in a secure
place and find out who the voters intended to vote for.
MAG: Okay. Now, some people say that we can also solve the
problem by doing a one to three percent audit. Would that work? Are there some
problems that you’ve found?
MB: We didn’t look at auditing procedures in our study in
any particular detail, except the procedures as used in California, as they
might interact with some of the vulnerabilities that we found. So, I can tell
you what they do in California is automatically recount one percent of the
precinct results as a kind of safeguard, so one percent of the voting machines
will have their paper ballots (if they’re an optical scan system, or if there
are voter-verified paper trails) counted and matched against the electronic
results that were recorded in those machines. And, if there’s a mismatch, then
they know that there was some tampering with those particular machines. Now,
this is actually helpful for catching deep problems that affect all of the
machines. If, for example, the manufacturer of a voting machine included bad
software in every machine that was sent everywhere, the one percent recount
procedure would be likely to catch that because the fraud would be uniformly
distributed among all of the voting machines. But what this is not as good at
catching is targeted fraud where somebody goes to a particular precinct and
knows that there will be, for example, a lot of votes for the candidate they
don’t want to win, and arranges for those particular machines to run tampered
software, which as we showed could be very easily loaded in. The safeguards to
prevent that in software don't work nearly as well as they're intended to. And the one percent recount will only catch that if, by sheer luck, roughly a one-in-a-hundred chance, the machines that were tampered with happen to be selected for the audit.
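The arithmetic behind that point can be sketched out. What follows is a hypothetical illustration, not part of the California study or the interview: the function name and the precinct counts are made up, and the sketch simply computes the chance that a random audit of a fixed fraction of precincts includes at least one tampered precinct.

from math import comb

def detection_probability(total_precincts, audit_fraction, tampered_precincts):
    # Chance that at least one tampered precinct lands in the random audit sample.
    audited = round(total_precincts * audit_fraction)
    clean = total_precincts - tampered_precincts
    # Probability the audit sample is drawn entirely from untampered precincts
    # (math.comb returns 0 when the sample is larger than the clean pool).
    p_miss = comb(clean, audited) / comb(total_precincts, audited)
    return 1.0 - p_miss

# Fraud built into every machine: a one percent audit catches it essentially always.
print(detection_probability(1000, 0.01, 1000))   # 1.0
# Fraud targeted at just 3 of 1,000 precincts: caught only about 3% of the time.
print(detection_probability(1000, 0.01, 3))      # roughly 0.03

With these made-up numbers, fraud spread uniformly across every machine is detected with near certainty, while fraud confined to a handful of targeted precincts slips past the audit roughly 97 times out of 100, which is the gap described above.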
MAG: So we have a serious situation. We’ve got a system that
you’ve indicated is fatally flawed, the two systems available both have
problems; one from your point of view has the advantage, at least, of the voter
completing the ballot with their own hand, which could be counted. What, then,
can we do for 2008?
MB: Again, we’re in a real bind. I don’t envy the election
officials who are going to have to make some very hard decisions, coming up.
Now, one thing I should emphasize: we looked only at the software and the systems themselves. The red teams looked at the hardware as delivered and tried to tamper with it, using some of the problems that we discovered in the software. What we found was that the software and the hardware don't prevent tampering. But that's not the only set of security mechanisms in place in an election. The elections are also protected
by procedures and by physical security of the machines themselves. So what our
results tell you is that the security system depends entirely on those
procedures. Any security that we were relying on the machines to have or the
software to have, we shouldn’t assume it’s there; it’s fatally flawed. So what
we’re saying is all of the security in an election depends on the security
procedures and the protocols and the physical seals and the two-person control
by poll workers and election officials and people watching what’s going
on—that’s where all of the security comes in. Now, the problem that we have is
that those procedures were designed on the assumption that the machines were
offering a certain level of security to start with, but in fact they’re not. So
those procedures have to be thought out from the beginning very carefully, and
whether or not a practical set of procedures can be designed that actually adds
security, I’m not sure.
MAG: So you’re really saying that you could have the best
security procedures in the world, but if what they’re checking out has
problems, it may help a little bit, but you’re still left defenseless.
MB: You have the problem that an election is a logistically
very complex event. You may have a thousand polling places in a county, and
thousands of poll workers who get a few hours of training and have basically been hired to work just on Election Day. You may have half a dozen of them in any polling place, carrying out procedures that they do maybe once a year. The equipment has to be distributed to
these polling places; some of them are in lobbies of apartment buildings, in
school gyms, sometimes even in private homes. That equipment might be delivered
the night before. In some cases, it's sent home with the poll workers, who bring it to the polling place on the morning of Election Day, having basically had it in their homes overnight with completely unrestricted access to it. So building a physical security system that prevents anybody from tampering with equipment, in such a complicated event and with so many people involved, is going to be very hard.
MAG: Well, I understand the Secretary of State of California
is going to institute some changes, which may include in some places a hundred
percent count. Do you think we may have to do that for 2008?
MB: One of the things that the Secretary of State required
was that in many cases the DRE machines all have to have their paper trails
recounted—one hundred percent of them, not just one percent. That will
certainly prevent certain attacks that would otherwise not be detected with
just a one percent recount. They've also limited the Diebold and Sequoia DREs to just one per polling place, in order to accommodate voters with disabilities who can't use the optical scan ballots without assistance but who might be able to use the DRE machines. That is intended to reduce the scale of the problem and the number of people who'd have access to the machine throughout the day, to limit what would need to be protected, and to make it easier to do that hundred percent recount. These seem to me, frankly, like very sensible ways of mitigating this. What I'd be less
confident in saying is that this is going to give you a secure election, but
these seem like steps in the right direction. It’s certainly more secure than
not doing these things.
MAG: Now I’ll put you on the spot: Congress is apparently
finally waking up and is supposedly considering banning DREs and giving states
money to replace [them] with optical scan. Would you support that?
MB: From what we've seen, my opinion, and I'm speaking only for myself, is that it would make me feel a lot more comfortable with the security of these elections.
MAG: But you would still like to see a fair number of
procedural changes, as well.
MB: That’s right. We still need procedural changes, we still
need to look at the security of the optical scan ballots, but I think the most
serious problems we found, and most importantly, the ones that are hardest to
correct, once they’ve happened, are the problems with the DREs.
MAG: That raises the question, because you mentioned checks and balances, and that's pretty important: I'm wondering if you could ever design a DRE system that would meet that standard, because a DRE system, even with a printer, would never be a separate and independent system.
MB: The disadvantage of a DRE is that the voter's intentions are expressed by touching a screen, an ephemeral process, and at the end of it you're left with only the record produced by the machine. You're not left with something that the voter has produced themselves, so you don't know if it's an accurate reflection of what they actually intended. So, DREs start from a security disadvantage right there. Now, it's important not to confuse DREs with touch-screens.
MAG: Understood.
MB: This, I think, has been a source of considerable confusion on the issue, because people often equate the nice user interface of a touch-screen with the DRE itself. Many voters, particularly disabled voters, like touch-screens quite a bit because you can, for example, have assistive devices hooked up to them that will speak in different languages, or sip-and-puff interfaces for mobility-impaired voters, and so on. These are all very important considerations, but they don't actually require a DRE machine in order to accommodate these voters.
MAG: Do you think that we’re going to be faced in 2008 with
doing a lot more hand counting to give us any security?
MB: Well, I think if we want secure elections, with the
equipment at least that we looked at, we’re going to have no alternative.
MAG: Is there any reason for you to think, and here again, this is strictly your opinion, that the equipment you didn't examine, although it covers a large majority, would be that much different?
MB: Well, all we can do is speculate. We looked at three. Of
the three we looked at, all of them were very deeply and pervasively flawed.
Are the others any better? I suppose it’s possible that they are, but
unfortunately, they haven’t been looked at with the same kind of scrutiny.
MAG: So, how do you feel as a Pennsylvanian and living in
Philadelphia, where you have a Danaher machine which actually doesn’t even have
a print-out, and you will be going, unless there is a change, up to that
machine, entering your vote and not knowing where it went? How secure will you
feel in 2008 if we have no change?
MB: Well, I hope that the procedures that are put in place
in Philadelphia to prevent tampering are really sound.
MAG: But we still have that problem without any proof.
MB: That’s right.
MAG: Okay. Is there anything else that our audience should
know, and is there anything Congress should be aware of?
MB: Well, I think one of the things we need to recognize is
that these voting machines, the DREs, and the systems that count the votes, and
the optical scan systems, these are all computers. They don’t look like
personal computers, they don’t have the same keyboard and the same display, but
on the inside, they’re computers that run software, and they’re running very
complex software that performs a specialized task that only gets tested out a
few times a year, and may not be stress-tested in a hostile environment very
often in its life at all. Now, writing software that’s correct and that’s
secure is a very, very difficult problem. It’s really the fundamental problem
that computer science has been grappling with and has not succeeded in solving
for its whole history. So, building a secure voting system out of software is
already a very difficult problem, because designing software itself is a hard
problem. So scrutiny and skepticism are really the only safeguards we have
here.
MAG: And what about Congress? Do you think it is time they took another look at this?
MB: It’s a shame, and again, I’m speaking only for myself
here.
MAG: Understood.
MB: It’s a shame that after the 2000 election, with the
butterfly ballot and so on, there was a real national consensus that it was
important to make voting more reliable. I think everyone agreed with this very
important goal that we should modernize elections and make them as reliable as
possible. Unfortunately, we really rushed into buying equipment everywhere in
the country that really wasn’t ready, and I think the only way we are going to
solve this problem is by recognizing that we’ve got to do a careful design.
We’re going to be left with whatever equipment we buy, whatever systems are put
in place, we’re going to have them for a while, and this is something our
democracy vitally depends on, so this is worth doing right.
MAG: Well, I would invite all our listeners, in addition, to
play a game. I found that your Vendor Excuse Bingo is absolutely ingenious and
fantastic. Where could they find it?
MB: There’s a link to it on my blog. I should say, I don’t
want to make light of this, because this is very, very serious; but an
unfortunate property of vendors of software, whether it’s voting machines or
web servers, who have had their software exposed to scrutiny and discovered
that it’s not as secure as it should be is to deny and threaten and so on. So I’ve
put together a little bingo game with some of the common vendor responses to
these kinds of things that I think we’re likely to hear in the voting machine
case, but we often hear in computing in general.
MAG: And that website is?
MB: The website is www.crypto.com/blog.
MAG: Well, I want to thank you.
MB: And I should also say, if I can just interrupt very
quickly. Go to the source: the Secretary of State’s website in California has
all of our reports. We tried to make them as readable as possible.
MAG: And you did an excellent job and I think you did a
tremendous service for this country and thank you very much.
MB: Thank you.
Author's Bio: Mary Ann Gould is a founding member of the Pennsylvania-based Coalition for Voting Integrity.