Entries Tagged "NIST"

GAO Report on Electronic Voting

The full report, dated September 2005, is 107 pages long. Here’s the “Results in Brief” section:

While electronic voting systems hold promise for a more accurate and efficient election process, numerous entities have raised concerns about their security and reliability, citing instances of weak security controls, system design flaws, inadequate system version control, inadequate security testing, incorrect system configuration, poor security management, and vague or incomplete voting system standards, among other issues. For example, studies found (1) some electronic voting systems did not encrypt cast ballots or system audit logs, and it was possible to alter both without being detected; (2) it was possible to alter the files that define how a ballot looks and works so that the votes for one candidate could be recorded for a different candidate; and (3) vendors installed uncertified versions of voting system software at the local level. It is important to note that many of the reported concerns were drawn from specific system makes and models or from a specific jurisdiction’s election, and that there is a lack of consensus among election officials and other experts on the pervasiveness of the concerns. Nevertheless, some of these concerns were reported to have caused local problems in federal elections—resulting in the loss or miscount of votes—and therefore merit attention.

Federal organizations and nongovernmental groups have issued recommended practices and guidance for improving the election process, including electronic voting systems, as well as general practices for the security and reliability of information systems. For example, in mid-2004, EAC issued a compendium of practices recommended by election experts, including state and local election officials. This compendium includes approaches for making voting processes more secure and reliable through, for example, risk analysis of the voting process, poll worker security training, and chain of custody controls for election day operations, along with practices that are specific to ensuring the security and reliability of different types of electronic voting systems. As another example, in July 2004, the California Institute of Technology and the Massachusetts Institute of Technology issued a report containing recommendations pertaining to testing equipment, retaining audit logs, and physically securing voting systems. In addition to such election-specific practices, numerous recommended practices are available that can be applied to any information system. For instance, we, NIST, and others have issued guidance that emphasizes the importance of incorporating security and reliability into the life cycle of information systems through practices related to security planning and management, risk management, and procurement. The recommended practices in these election-specific and information technology (IT) focused documents provide valuable guidance that, if implemented effectively, should help improve the security and reliability of voting systems.

Since the passage of HAVA in 2002, the federal government has begun a range of actions that are expected to improve the security and reliability of electronic voting systems. Specifically, after beginning operations in January 2004, EAC has led efforts to (1) draft changes to the existing federal voluntary standards for voting systems, including provisions related to security and reliability, (2) develop a process for certifying, decertifying, and recertifying voting systems, (3) establish a program to accredit the national independent testing laboratories that test electronic voting systems against the federal voluntary standards, and (4) develop a software library and clearinghouse for information on state and local elections and systems. However, these actions are unlikely to have a significant effect in the 2006 federal election cycle because the changes to the voluntary standards have not yet been completed, the system certification and laboratory accreditation programs are still in development, and the software library has not been updated or improved since the 2004 elections. Further, EAC has not defined tasks, processes, and time frames for completing these activities. As a result, it is unclear when the results will be available to assist state and local election officials. In addition to the federal government’s activities, other organizations have actions under way that are intended to improve the security and reliability of electronic voting systems. These actions include developing and obtaining international acceptance for voting system standards, developing voting system software in an open source environment (i.e., not proprietary to any particular company), and cataloging and analyzing reported problems with electronic voting systems.

To improve the security and reliability of electronic voting systems, we are recommending that EAC establish tasks, processes, and time frames for improving the federal voluntary voting system standards, testing capabilities, and management support available to state and local election officials.

EAC and NIST provided written comments on a draft of this report (see apps. V and VI). EAC commissioners agreed with our recommendations and stated that actions on each are either under way or intended. NIST’s director agreed with the report’s conclusions. In addition to their comments on our recommendations, EAC commissioners expressed three concerns with our use of reports produced by others to identify issues with the security and reliability of electronic voting systems. Specifically, EAC sought (1) additional clarification on our sources, (2) context on the extent to which voting system problems are systemic, and (3) substantiation of claims in the reports issued by others. To address these concerns, we provided additional clarification of sources where applicable. Further, we note throughout our report that many issues involved specific system makes and models or circumstances in the elections of specific jurisdictions. We also note that there is a lack of consensus on the pervasiveness of the problems, due in part to a lack of comprehensive information on what system makes and models are used in jurisdictions throughout the country. Additionally, while our work focused on identifying and grouping problems and vulnerabilities identified in issued reports and studies, where appropriate and feasible, we sought additional context, clarification, and corroboration from experts, including election officials, security experts, and key reports’ authors. EAC commissioners also expressed concern that we focus too much on the commission, and noted that it is one of many entities with a role in improving the security and reliability of voting systems. While we agree that EAC is one of many entities with responsibilities for improving the security and reliability of voting systems, we believe that our focus on EAC is appropriate, given its leadership role in defining voting system standards, in establishing programs both to accredit laboratories and to certify voting systems, and in acting as a clearinghouse for improvement efforts across the nation. EAC and NIST officials also provided detailed technical corrections, which we incorporated throughout the report as appropriate.

Posted on December 2, 2005 at 3:08 PM

NIST Hash Workshop Liveblogging (5)

The afternoon started with three brand new hash functions: FORK-256, DHA-256, and VSH. VSH (Very Smooth Hash) was the interesting one; it’s based on factoring and the discrete logarithm problem, like public-key encryption, and not on bit-twiddling like symmetric encryption. I have no idea if it’s any good, but it’s cool to see something so different.

I think we need different. So many of our hash functions look pretty much the same: MD4, MD5, SHA-0, SHA-1, RIPE-MD, HAVAL, SHA-256, SHA-512. And everything is basically a block cipher in Davies-Meyer mode. I want some completely different designs. I want hash functions based on stream ciphers. I want more functions based on number theory.
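
For anyone who hasn’t seen the construction, here’s a minimal sketch of what “a block cipher in Davies-Meyer mode” means: the message block is used as the cipher key, and the previous chaining value is encrypted and XORed back in. The sketch uses AES (via the third-party pycryptodome package) and a deliberately simplified padding scheme; it’s illustrative only, not a standardized or secure hash.

```python
# Illustrative sketch: a Davies-Meyer compression function built from a block
# cipher (AES-128 here), iterated over message blocks. This mirrors the general
# shape of MD5/SHA-style designs, but the padding is simplified and the result
# is NOT a secure or standardized hash -- it only shows the structure.
# Requires the third-party "pycryptodome" package for AES.
from Crypto.Cipher import AES

BLOCK = 16  # AES block and key size in bytes

def davies_meyer_compress(h: bytes, m: bytes) -> bytes:
    """H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}: the message block acts as the key."""
    e = AES.new(m, AES.MODE_ECB).encrypt(h)
    return bytes(a ^ b for a, b in zip(e, h))

def toy_hash(message: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    # Simplistic length-then-zero padding; real designs use Merkle-Damgard
    # strengthening with a careful encoding of the message length.
    padded = message + len(message).to_bytes(8, "big")
    padded += b"\x00" * (-len(padded) % BLOCK)
    h = iv
    for i in range(0, len(padded), BLOCK):
        h = davies_meyer_compress(h, padded[i:i + BLOCK])
    return h

print(toy_hash(b"hello world").hex())
```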

The final session was an open discussion about what to do next. There was much debate about how soon we need a new hash function, how long we should rely on SHA-1 or SHA-256, etc.

Hashing is hard. At an ultra-high, hand-waving level, it takes a lot more clock cycles per message byte to hash than it does to encrypt. No one has any theory to account for this, but it seems like the lack of any secrets in a hash function makes it a harder problem. This may be an artifact of our lack of knowledge, but I think there’s a grain of fundamental truth buried here.

And hash functions are used everywhere. Hash functions are the workhorse of cryptography; they’re sprinkled all over security protocols. They’re used all the time, in all sorts of weird ways, for all sorts of weird purposes. We cryptographers think of them as good hygiene, kind of like condoms.

So we need a fast answer for immediate applications.

We also need “SHA2,” whatever that will look like. And a design competition is the best way to get a SHA2. (Niels Ferguson pointed out that the AES process was the best cryptographic invention of the past decade.)

Unfortunately, we’re in no position to have an AES-like competition to replace SHA right now. We simply don’t know enough about designing hash functions. What we need is research, random research all over the map. Designs beget analyses beget designs beget analyses…. Right now we need a bunch of mediocre hash function designs. We need a posse of hotshot graduate students breaking them and making names for themselves. We need new tricks and new tools. Hash functions are a hot area of research right now, but anything we can do to stoke that will pay off in the future.

NIST is thinking of hosting another hash workshop right after Crypto next year. That would be a good thing.

I need to get to work on a hash function based on Phelix.

Posted on November 1, 2005 at 3:43 PM

NIST Hash Workshop Liveblogging (4)

This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.

Lots of interesting ideas, but no real discussion about trade-offs. But it’s the trade-offs that are important. It’s easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest’s dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I’m making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it’s the correct analysis here as well.

Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn’t be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we’ve just been too optimistic about how strong hash functions are.

The other thing we need to be doing is providing answers to developers. It’s not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They’ll need an answer in a year. They’ll need an answer in four years. Maybe the answers will be the same, and maybe they’ll be different. But if we don’t give them answers, they’ll make something up. They won’t wait for us.

And while it’s true that we don’t have any real theory of hash functions, and it’s true that anything we choose will be based partly on faith, we have no choice but to choose.

And finally, I think we need to stimulate research more. Whether it’s a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses…. We need a whole bunch of new hash functions to beat up; that’s how we’ll learn to design better ones.

Posted on November 1, 2005 at 11:19 AM

NIST Hash Workshop Liveblogging (3)

I continue to be impressed by the turnout at this workshop. There are lots of people here whom I haven’t seen in a long time. It’s like a cryptographers’ family reunion.

The afternoon was devoted to cryptanalysis papers. Nothing earth-shattering; a lot of stuff that’s real interesting to me and not very exciting to summarize.

The list of papers is here. NIST promises to put the actual papers online, but they make no promises as to when.

Right now there is a panel discussing how secure SHA-256 is. “How likely is SHA-256 to resist attack for the next ten years?” Some think it will be secure for that long, others think it will fall in five years or so. One person pointed out that if SHA-256 lasts ten years, it will be a world record for a hash function. The consensus is that any new hash function needs to last twenty years, though. It really seems unlikely that any hash function will last that long.

But the real issue is whether there will be any practical attacks. No one knows. Certainly there will be new cryptanalytic techniques developed, especially now that hash functions are a newly hot area for research. But will SHA-256 ever have an attack that’s faster than 2^80?

Everyone thinks that SHA-1 with 160 rounds is a safer choice than SHA-256 truncated to 160 bits. The devil you know, I guess.

Niels Ferguson, in a comment from the floor, strongly suggested that NIST publish whatever analysis on SHA-256 it has. Since this is most likely by the NSA and classified, it would be a big deal. But I agree that it’s essential for us to fully evaluate the hash function.

Tom Berson, in another comment, suggested that NIST not migrate to a single hash function, but certify multiple alternatives. This has the interesting side effect of forcing the algorithm agility issue. (We had this same debate regarding AES. Negatives are: 1) you’re likely to end up with a system that is only as strong as the weakest choice, and 2) industry will hate it.)

If there’s a moral out of the first day of this workshop, it’s that algorithm agility is an essential feature in any Internet protocol.

Posted on October 31, 2005 at 4:00 PM

NIST Hash Workshop Liveblogging (2)

In the morning we had a series of interesting papers: “Strengthening Digital Signatures via Randomized Hashing,” by Halevi and Krawczyk; “Herding Hash Functions and the Nostradamus Attack,” by Kelsey and Kohno; and “Collision-Resistant usage of MD5 and SHA-1 via Message Preprocessing,” by Szydlo and Yin. The first and third papers are suggestions for modifying SHA-1 to make it more secure. The second paper discusses some fascinating and cool, but still theoretical, attacks on hash functions.

The last session before lunch was a panel discussion: “SHA-1: Practical Security Implications of Continued Use.” The panel stressed that these are collision attacks and not pre-image attacks, and that many protocols simply don’t care. Collision attacks are important for digital signatures, but less so for other uses of hash functions. On the other hand, this difference is only understood by cryptographers; there are issues if the public believes that SHA-1 is “broken.”
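
To make the collision-versus-preimage point concrete, here’s a toy sketch: if an attacker can find two messages with the same digest, a signature computed over that digest vouches for both of them. The hash below is truncated to 32 bits so a birthday search finishes in seconds, and the “signature” is just an HMAC stand-in for a real signature scheme; none of this reflects an actual attack on SHA-1.

```python
# Toy illustration of why collisions matter for signatures. SHA-256 is
# truncated to 32 bits so a birthday-style collision search succeeds quickly;
# a real 160- or 256-bit hash makes this search infeasible. The "signature"
# is HMAC with a fixed key, standing in for a real signature scheme -- the
# point is only that it is computed over the digest, not the message.
import hashlib, hmac, os

def weak_digest(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()[:4]   # 32-bit digest: breakable toy

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, weak_digest(msg), hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)

# Birthday search: roughly 2^16 random messages until two share a digest.
seen = {}
while True:
    m = os.urandom(12)
    d = weak_digest(m)
    if d in seen and seen[d] != m:
        m1, m2 = seen[d], m
        break
    seen[d] = m

key = os.urandom(32)
sig = sign(key, m1)            # the victim signs m1...
print(verify(key, m2, sig))    # ...and the same signature verifies for m2
```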

Niels Ferguson pointed out that the big problem is MD5, which is still used everywhere. (Hell, DES is still everywhere.) It takes much longer to upgrade algorithms on the Internet than most people believe; Steve Bellovin says it takes about one year to get the change through the IETF, and another five to seven years to get it deployed. And that’s after we all figure out which algorithm they should use.

Georg Illies gave a perspective from Germany, where there is a digital-signature law in effect. In addition to the technology, there are legal considerations that make it harder to switch.

The panel seemed to agree that it’s still safe to use SHA-1 today, but that we need to start migrating to something better. It’s way easier to change algorithms when you’re not in the middle of a panic.

There was more talk about algorithm agility. This problem is larger than SHA. Our Internet protocols simply don’t have a secure methodology for migrating from one cryptographic algorithm to another.

Bottom line: Don’t use SHA-1 for anything new, and start moving away from it as soon as possible. To SHA-256, probably.
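
For developers, the switch itself is usually mechanical. Here’s a hedged sketch using Python’s hashlib, with the algorithm name kept as a parameter so the next migration is a one-line change instead of a code hunt:

```python
# Sketch of the "move to SHA-256" advice. Keeping the algorithm name as a
# configurable parameter makes the next migration far less painful.
import hashlib

HASH_ALG = "sha256"   # was "sha1"; configurable, not hard-coded

def digest(data: bytes, alg: str = HASH_ALG) -> bytes:
    return hashlib.new(alg, data).digest()

print(digest(b"example").hex())           # SHA-256 by default
print(digest(b"example", "sha1").hex())   # legacy path, for comparison only
```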

And now it’s lunchtime.

Posted on October 31, 2005 at 11:50 AM

NIST Hash Workshop Liveblogging (1)

I’m in Gaithersburg, MD, at the Cryptographic Hash Workshop hosted by NIST. I’m impressed by the turnout; a lot of the right people are here.

Xiaoyun Wang, the cryptographer who broke SHA-1, spoke about her latest results. They are the same results Adi Shamir presented in her name at Crypto this year: a time complexity of 2^63.

(I first wrote about Wang’s results here, and discussed their implications here. I wrote about results from Crypto here. Here are her two papers from Crypto: “Efficient Collision Search Attacks on SHA-0” and “Finding Collisions in the Full SHA-1.”)

Steve Bellovin is now talking about the problems associated with upgrading hash functions. He and his coauthor Eric Rescorla looked at S/MIME, TLS, IPSec (and IKE), and DNSSEC. Basically, these protocols can’t change algorithms overnight; it has to happen gradually, over the course of years. So the protocols need some secure way to “switch hit”: to use both the new and old hash functions during the transition period. This requires some sort of signaling, which the protocols don’t do very well. (Bellovin’s and Rescorla’s paper is here.)
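
One way to picture the signaling problem: tag every digest with an algorithm identifier, so old and new hashes can coexist on the wire during the transition and a verifier knows which function to recompute. The sketch below is a hypothetical format for illustration only; it is not the actual mechanism used by S/MIME, TLS, IPsec, or DNSSEC.

```python
# Hypothetical illustration of hash-algorithm agility: each digest carries an
# algorithm identifier, so both hashes can be used during a migration and the
# verifier knows which one it is checking. The code points are made up; this
# is not the real S/MIME, TLS, IPsec, or DNSSEC mechanism.
import hashlib

ALG_IDS = {1: "sha1", 2: "sha256"}   # made-up code points

def tagged_digest(alg_id: int, data: bytes) -> bytes:
    return bytes([alg_id]) + hashlib.new(ALG_IDS[alg_id], data).digest()

def verify_tagged(blob: bytes, data: bytes) -> bool:
    alg_id, digest = blob[0], blob[1:]
    if alg_id not in ALG_IDS:
        return False   # unknown algorithm: fail closed
    return hashlib.new(ALG_IDS[alg_id], data).digest() == digest

msg = b"protocol payload"
old, new = tagged_digest(1, msg), tagged_digest(2, msg)
print(verify_tagged(old, msg), verify_tagged(new, msg))   # both accepted during the transition
```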

Posted on October 31, 2005 at 9:02 AM

The Doghouse: Lexar LockTight

Do you think we should tell these people that SHA-1 is not an encryption algorithm?

Developed by Lexar, the new security solution is based on a 160-bit encryption technology and uses SHA-1 (Secure Hash Algorithm), a standard approved by the National Institute of Standards and Technology (NIST). The 160-bit encryption technology is among the most effective and widely accepted security solutions available.

This seems not to be a typo. They explain themselves in more detail here:

Lexar has provided us with the following explanation as to how data is protected on the LockTight cards: (we understand that the encryption is carried out on the communications layer between the card and camera/computer rather than the data itself).

“Lexar employs a unique strategy to protect data on LockTight cards. LockTight cards are always ‘locked.’ In other words no computer or camera can read or write data from/to a LockTight card until a critical authorization process takes place between the LockTight card and the host computer or host camera. This authorization process is where the 160-bit HMAC SHAH-1 encryption algorithm is employed.”
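
Reading between the lines, what they seem to be describing is HMAC-SHA-1 used for authentication, not encryption: the card and host share a key and prove possession of it in a challenge-response exchange, while the data itself is never encrypted. Here’s a generic sketch of what such an exchange looks like; the actual LockTight message flow isn’t public here, so the details below are assumptions.

```python
# Sketch of a generic HMAC-SHA-1 challenge-response, the kind of "authorization
# process" the quote seems to describe. It authenticates the host to the card
# (proof of a shared key); it does not encrypt any data. The message flow and
# field sizes are assumptions, not Lexar's actual protocol.
import hmac, hashlib, os

SHARED_KEY = os.urandom(20)   # provisioned into both card and host

def card_issue_challenge() -> bytes:
    return os.urandom(16)                      # card -> host: random nonce

def host_respond(challenge: bytes) -> bytes:   # host -> card: HMAC over the nonce
    return hmac.new(SHARED_KEY, challenge, hashlib.sha1).digest()

def card_check(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha1).digest()
    return hmac.compare_digest(expected, response)   # unlock on a match

nonce = card_issue_challenge()
print(card_check(nonce, host_respond(nonce)))   # True: the host knows the key
```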

Posted on October 3, 2005 at 8:22 AM

The Legacy of DES

The Data Encryption Standard, or DES, was a mid-’70s brainchild of the National Bureau of Standards: the first modern, public, freely available encryption algorithm. For over two decades, DES was the workhorse of commercial cryptography.

Over the decades, DES has been used to protect everything from databases in mainframe computers, to the communications links between ATMs and banks, to data transmissions between police cars and police stations. Whoever you are, I can guarantee that many times in your life, the security of your data was protected by DES.

Just last month, the former National Bureau of Standards—the agency is now called the National Institute of Standards and Technology, or NIST—proposed withdrawing DES as an encryption standard, signifying the end of the federal government’s most important technology standard, one more important than ASCII, I would argue.

Today, cryptography is one of the most basic tools of computer security, but 30 years ago it barely existed as an academic discipline. In the days when the Internet was little more than a curiosity, cryptography wasn’t even a recognized branch of mathematics. Secret codes were always fascinating, but they were pencil-and-paper codes based on alphabets. In the secret government labs during World War II, cryptography entered the computer era and became mathematics. But with no professors teaching it, and no conferences discussing it, all the cryptographic research in the United States was conducted at the National Security Agency.

And then came DES.

Back in the early 1970s, it was a radical idea. The National Bureau of Standards decided that there should be a free encryption standard. Because the agency wanted it to be non-military, they solicited encryption algorithms from the public. They got only one serious response—the Data Encryption Standard—from the labs of IBM. In 1976, DES became the government’s standard encryption algorithm for “sensitive but unclassified” traffic. This included things like personal, financial and logistical information. And simply because there was nothing else, companies began using DES whenever they needed an encryption algorithm. Of course, not everyone believed DES was secure.

When IBM submitted DES as a standard, no one outside the National Security Agency had any expertise to analyze it. The NSA made two changes to DES: It tweaked the algorithm, and it cut the key size by more than half.

The strength of an algorithm is based on two things: how good the mathematics is, and how long the key is. A sure way of breaking an algorithm is to try every possible key. Modern algorithms have a key so long that this is impossible; even if you built a computer out of all the silicon atoms on the planet and ran it for millions of years, you couldn’t do it. So cryptographers look for shortcuts. If the mathematics are weak, maybe there’s a way to find the key faster: “breaking” the algorithm.
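
To put rough numbers on that, here’s a back-of-the-envelope calculation comparing a 56-bit DES key with longer keys; the trial rate is an assumption chosen only to show the scale of the gap, not a precise estimate of anyone’s hardware.

```python
# Back-of-the-envelope brute-force arithmetic for the key-length argument.
# The trial rate is an assumed figure for a large, dedicated cracking effort;
# the point is the scale of the gap between 56 and 128 bits, not precision.
RATE = 10**12                 # assumed key trials per second
SECONDS_PER_YEAR = 31_557_600

for bits in (56, 112, 128):
    keyspace = 2 ** bits
    years = keyspace / RATE / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: {keyspace:.3e} keys, "
          f"about {years:.3e} years at {RATE:.0e} trials/sec")
```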

The NSA’s changes caused outcry among the few who paid attention, both regarding the “invisible hand” of the NSA—the tweaks were not made public, and no rationale was given for the final design—and the short key length.

But with the outcry came research. It’s not an exaggeration to say that the publication of DES created the modern academic discipline of cryptography. The first academic cryptographers began their careers by trying to break DES, or at least trying to understand the NSA’s tweak. And almost all of the encryption algorithms—public-key cryptography, in particular—can trace their roots back to DES. Papers analyzing different aspects of DES are still being published today.

By the mid-1990s, it became widely believed that the NSA was able to break DES by trying every possible key. This ability was demonstrated in 1998, when a $220,000 machine was built that could brute-force a DES key in a few days. In 1985, the academic community proposed a DES variant with the same mathematics but a longer key, called triple-DES. This variant had been used in more secure applications in place of DES for years, but it was time for a new standard. In 1997, NIST solicited an algorithm to replace DES.

The process illustrates the complete transformation of cryptography from a secretive NSA technology to a worldwide public technology. NIST once again solicited algorithms from the public, but this time the agency got 15 submissions from 10 countries. My own algorithm, Twofish, was one of them. And after two years of analysis and debate, NIST chose a Belgian algorithm, Rijndael, to become the Advanced Encryption Standard.

It’s a different world in cryptography now than it was 30 years ago. We know more about cryptography, and have more algorithms to choose among. AES won’t become a ubiquitous standard in the same way that DES did. But it is finding its way into banking security products, Internet security protocols, even computerized voting machines. A NIST standard is an imprimatur of quality and security, and vendors recognize that.

So, how good is the NSA at cryptography? They’re certainly better than the academic world. They have more mathematicians working on the problems, they’ve been working on them longer, and they have access to everything published in the academic world, while they don’t have to make their own results public. But are they a year ahead of the state of the art? Five years? A decade? No one knows.

It took the academic community two decades to figure out that the NSA “tweaks” actually improved the security of DES. This means that back in the ’70s, the National Security Agency was two decades ahead of the state of the art.

Today, the NSA is still smarter, but the rest of us are catching up quickly. In 1999, the academic community discovered a weakness in another NSA algorithm, SHA, that the NSA claimed to have discovered only four years previously. And just last week there was a published analysis of the NSA’s SHA-1 that demonstrated weaknesses that we believe the NSA didn’t know about at all.

Maybe now we’re just a couple of years behind.

This essay was originally published on CNet.com.

Posted on October 6, 2004 at 6:05 PM
