
E-Voting Certification Gets Security Completely Backward

By Bruce Schneier | 08.09.07 | 2:00 AM

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People whom I consider security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source-code review. Serious flaws were discovered in all the machines, and as a result they were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting the documentation they needed. The fact that major security vulnerabilities were found in all the machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it's fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition, or SCADA, systems -- the embedded control systems found in infrastructure like fuel pipelines and power transmission facilities -- as well as against electronic badge-entry systems, MySpace and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems have all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system behind the US-Visit border control program is full of security holes. "Weaknesses existed in all control areas and computing device types reviewed," the report said. How exactly is this different from any large government database? I'm not surprised that the system is so insecure; I'm surprised that anyone is surprised.

We've been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or a similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who's more likely to be right?

It's all backward. Insecurity is the norm. If any system -- whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc. -- is ever built completely vulnerability-free, it'll be the first time in the history of mankind. It's not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn't make us any more secure. If vulnerabilities are so common, finding a few doesn't materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn't more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn't mean that there's one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.
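To make the point concrete, here is a generic sketch in C -- not code from any of the reviewed voting systems, and with made-up function names -- of what a classic buffer overflow and its patch look like. Fixing the one unchecked copy closes a single hole; it says nothing about all the other places the same careless pattern is likely repeated.

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: copies caller-supplied input into a fixed-size buffer
       with no length check -- the textbook stack buffer overflow. */
    void record_name_vulnerable(const char *input) {
        char name[16];
        strcpy(name, input);              /* overflows if input is 16+ characters */
        printf("Recorded: %s\n", name);
    }

    /* The "patched" version: the same function with a bounds check. One
       hole closed; nothing said about the rest of the code base. */
    void record_name_patched(const char *input) {
        char name[16];
        strncpy(name, input, sizeof(name) - 1);
        name[sizeof(name) - 1] = '\0';    /* guarantee null termination */
        printf("Recorded: %s\n", name);
    }

    int main(void) {
        record_name_patched("a perfectly ordinary string");   /* safely truncated */
        return 0;
    }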

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don't tell me that it's my job to find another vulnerability in the third patch; it's Diebold's job to convince me that it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of "assurance" in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn't use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

1. The system's security policy is internally consistent and reflects the requirements of the organization,
2. There are sufficient security functions to support the security policy,
3. The system functions to meet a desired set of properties and only those properties,
4. The functions are implemented correctly, and
5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I'm just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It's all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It's some of what Microsoft is trying to do with its Security Development Lifecycle, or SDL. It's the Department of Homeland Security's Build Security In program. It's what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It's what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don't care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it's cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it's long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It's time we stopped thinking backward and pretending that computers are secure until proven otherwise.

- - -

Bruce Schneier is the CTO of BT Counterpane and the author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World.