Blog: September 2010 Archives

Wiretapping the Internet

On Monday, The New York Times reported that President Obama will seek sweeping laws enabling law enforcement to more easily eavesdrop on the internet. Technologies are changing, the administration argues, and modern digital systems aren’t as easy to monitor as traditional telephones.

The government wants to force companies to redesign their communications systems and information networks to facilitate surveillance, and to provide law enforcement with back doors that enable them to bypass any security measures.

The proposal may seem extreme, but—unfortunately—it’s not unique. Just a few months ago, the governments of the United Arab Emirates, Saudi Arabia and India threatened to ban BlackBerry devices unless the company made eavesdropping easier. China has already built a massive internet surveillance system to better control its citizens.

Formerly reserved for totalitarian countries, this wholesale surveillance of citizens has moved into the democratic world as well. Countries like Sweden, Canada and the United Kingdom are debating or passing laws giving their police new powers of internet surveillance, in many cases requiring communications system providers to redesign products and services they sell. More are passing data retention laws, forcing companies to retain customer data in case their customers need to be investigated later.

Obama isn’t the first U.S. president to seek expanded digital eavesdropping. The 1994 CALEA law required phone companies to build eavesdropping capabilities into their digital phone switches to better facilitate FBI wiretaps. Since 2001, the National Security Agency has built substantial eavesdropping systems within the United States.

These laws are dangerous, both for citizens of countries like China and citizens of Western democracies. Forcing companies to redesign their communications products and services to facilitate government eavesdropping reduces privacy and liberty; that’s obvious. But the laws also make us less safe. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in.

Any surveillance system invites both criminal appropriation and government abuse. Function creep is the most obvious abuse: New police powers, enacted to fight terrorism, are already used in situations of conventional nonterrorist crime. Internet surveillance and control will be no different.

Official misuses are bad enough, but the unofficial uses are far more worrisome. An infrastructure conducive to surveillance and control invites surveillance and control, both by the people you expect and the people you don’t. Any surveillance and control system must itself be secured, and we’re not very good at that. Why does anyone think that only authorized law enforcement will mine collected internet data or eavesdrop on Skype and IM conversations?

These risks are not theoretical. After 9/11, the National Security Agency built a surveillance infrastructure to eavesdrop on telephone calls and e-mails within the United States. Although procedural rules stated that only non-Americans and international phone calls were to be listened to, actual practice didn’t always match those rules. NSA analysts collected more data than they were authorized to and used the system to spy on wives, girlfriends and famous people like former President Bill Clinton.

The most serious known misuse of a telecommunications surveillance infrastructure took place in Greece. Between June 2004 and March 2005, someone wiretapped more than 100 cell phones belonging to members of the Greek government—the prime minister and the ministers of defense, foreign affairs and justice—and other prominent people. Ericsson built this wiretapping capability into Vodafone’s products, but enabled it only for governments that requested it. Greece wasn’t one of those governments, but some still unknown party—a rival political group? organized crime?—figured out how to surreptitiously turn the feature on.

Surveillance infrastructure is easy to export. Once surveillance capabilities are built into Skype or Gmail or your BlackBerry, it’s easy for more totalitarian countries to demand the same access; after all, the technical work has already been done.

Western companies such as Siemens, Nokia and Secure Computing built Iran’s surveillance infrastructure, and U.S. companies like L-1 Identity Solutions helped build China’s electronic police state. The next generation of worldwide citizen control will be paid for by countries like the United States.

We should be embarrassed to export eavesdropping capabilities. Secure, surveillance-free systems protect the lives of people in totalitarian countries around the world. They allow people to exchange ideas even when the government wants to limit free exchange. They power citizen journalism, political movements and social change. For example, Twitter’s anonymity saved the lives of Iranian dissidents—anonymity that many governments want to eliminate.

Yes, communications technologies are used by both the good guys and the bad guys. But the good guys far outnumber the bad guys, and it’s far more valuable to make sure they’re secure than it is to cripple them on the off chance it might help catch a bad guy. It’s like the FBI demanding that no automobiles drive above 50 mph, so they can more easily pursue getaway cars. It might or might not work—but, regardless, the cost to society of the resulting slowdown would be enormous.

It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers say, these systems cost too much and put us all at greater risk.

This essay previously appeared on CNN.com, and was a rewrite of a 2009 op-ed on MPR News Q—which itself was based in part on a 2007 Washington Post op-ed by Susan Landau.

Three more articles.

Posted on September 30, 2010 at 6:02 AM • 97 Comments

Cultural Cognition of Risk

This is no surprise:

The people behind the new study start by asking a pretty obvious question: “Why do members of the public disagree—sharply and persistently—about facts on which expert scientists largely agree?” (Elsewhere, they refer to the “intense political contestation over empirical issues on which technical experts largely agree.”) In this regard, the numbers from the Pew survey are pretty informative. Ninety-seven percent of the members of the American Association for the Advancement of Science accept the evidence for evolution, but at least 40 percent of the public thinks that major differences remain in scientific opinion on this topic. Clearly, the scientific community isn’t succeeding in making the public aware of its opinion.

According to the new study, this isn’t necessarily the fault of the scientists, though. The authors favor a model, called the cultural cognition of risk, which “refers to the tendency of individuals to form risk perceptions that are congenial to their values.” This wouldn’t apply directly to evolution, but would to climate change: if your cultural values make you less likely to accept the policy implications of our current scientific understanding, then you’ll be less likely to accept the science.

But, as the authors note, opponents of a scientific consensus often try to claim to be opposing it on scientific, rather than cultural grounds. “Public debates rarely feature open resistance to science,” they note, “the parties to such disputes are much more likely to advance diametrically opposed claims about what the scientific evidence really shows.” To get there, those doing the arguing must ultimately be selective about what evidence and experts they accept—they listen to, and remember, those who tell them what they want to hear. “The cultural cognition thesis predicts that individuals will more readily recall instances of experts taking the position that is consistent with their cultural predisposition than ones taking positions inconsistent with it,” the paper suggests.

[…]

So, it’s not just a matter of the public not understanding the expert opinions of places like the National Academies of Science; they simply discount the expertise associated with any opinion they’d rather not hear.

Here’s the paper.

Posted on September 28, 2010 at 6:33 AM • 102 Comments

Isolating Terrorist Cells as a Security Countermeasure

It’s better to try to isolate parts of a terrorist network than to attempt to destroy it as a whole, at least according to this model:

Vos Fellman explains how terrorist networks are “typical of the structures encountered in the study of conflict, in that they possess multiple, irreducible levels of complexity and ambiguity.”

“This complexity is compounded by the covert activities of terrorist networks where key elements may remain hidden for extended periods of time and the network itself is dynamic,” adds Vos Fellman, an expert in mathematical modeling and strategy. The nature of a dynamic network is akin to the robust Internet but contrasts starkly with the structure of the armed forces or homeland security systems, which tend to be centralized and hierarchical.

Vos Fellman has used network analysis, agent-based simulation, and dynamic NK Boolean fitness landscapes to try to understand the complexities of terrorist networks. In particular, he has focused on how long-term operational and strategic planning might be undertaken so that tactics which appear to offer immediate impact are avoided if they cause little long-term damage to the terrorist network.

Vos Fellman’s computer simulations of terrorist networks suggest that isolation rather than removal could be the key to successfully defeating them.
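The result is easy to get a feel for with a toy simulation. The sketch below is my own illustration, not Vos Fellman's model: the network, the healing rule, and every parameter are invented. It compares deleting a cell's best-connected members (after which the dynamic network keeps recruiting and heals) with severing, and continuing to police, the cell's links to the outside:

```python
# Toy comparison: "remove the hubs" vs. "isolate the cell" on a dynamic
# network that heals itself. All structure and parameters are invented.
import random

random.seed(7)
N, CELL, STEPS = 80, set(range(20)), 25

def random_network():
    nbrs = {v: set() for v in range(N)}
    for v in range(N):
        for u in random.sample(range(N), 3):
            if u != v:
                nbrs[v].add(u)
                nbrs[u].add(v)
    return nbrs

def cell_reach(nbrs, cell):
    # Size of the largest connected component containing a cell member.
    best = 0
    for start in cell & set(nbrs):
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(nbrs[v] - comp)
        best = max(best, len(comp))
    return best

def heal(nbrs, cell, cordon):
    # The dynamic part: agents keep recruiting new contacts. A maintained
    # cordon blocks new links that would cross the isolation boundary.
    for _ in range(5):
        u, v = random.sample(list(nbrs), 2)
        if cordon and ((u in cell) != (v in cell)):
            continue
        nbrs[u].add(v)
        nbrs[v].add(u)

# Strategy 1: remove the cell's ten best-connected members, then let the
# network evolve. The survivors reconnect to the wider network.
g, cell = random_network(), set(CELL)
for v in sorted(cell, key=lambda w: len(g[w]), reverse=True)[:10]:
    for u in g[v]:
        if u in g:
            g[u].discard(v)
    del g[v]
    cell.discard(v)
for _ in range(STEPS):
    heal(g, cell, cordon=False)
print("reach after removal:", cell_reach(g, cell))

# Strategy 2: sever every link between the cell and the outside, and
# keep the cordon in place while the network evolves.
g = random_network()
for v in CELL:
    for u in list(g[v]):
        if u not in CELL:
            g[v].discard(u)
            g[u].discard(v)
for _ in range(STEPS):
    heal(g, CELL, cordon=True)
print("reach after isolation:", cell_reach(g, CELL))
```

The exact numbers mean nothing; the point is that transient damage to a self-healing network washes out, while maintained isolation does not.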

Posted on September 27, 2010 at 12:00 PM • 35 Comments

New Attack Against ASP.NET

It’s serious:

The problem lies in the way that ASP.NET, Microsoft’s popular Web framework, implements the AES encryption algorithm to protect the integrity of the cookies these applications generate to store information during user sessions. A common mistake is to assume that encryption protects the cookies from tampering so that if any data in the cookie is modified, the cookie will not decrypt correctly. However, there are a lot of ways to make mistakes in crypto implementations, and when crypto breaks, it usually breaks badly.

“We knew ASP.NET was vulnerable to our attack several months ago, but we didn’t know how serious it is until a couple of weeks ago. It turns out that the vulnerability in ASP.NET is the most critical amongst other frameworks. In short, it totally destroys ASP.NET security,” said Thai Duong, who along with Juliano Rizzo, developed the attack against ASP.NET.

Here’s a demo of the attack, and the Microsoft Security Advisory. More articles. The theory behind this attack is here.
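The underlying flaw is a classic padding oracle: the application leaks whether a forged cookie decrypted to validly padded plaintext, and that single bit, asked a few thousand times, is enough to decrypt the cookie byte by byte. Here is a minimal sketch of the idea in Python (assuming the third-party cryptography package; the oracle is a local function standing in for the error responses a real server leaks, and this is an illustration of the flaw class, not Rizzo and Duong's exploit):

```python
# Minimal CBC padding-oracle sketch. Requires "pip install cryptography".
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY, BLOCK = os.urandom(16), 16

def pad(data):
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def unpad(data):
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]

def encrypt(plaintext):
    iv = os.urandom(BLOCK)
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    return iv + enc.update(pad(plaintext)) + enc.finalize()

def padding_oracle(ciphertext):
    # Stands in for a server that reveals, via an error page or timing,
    # whether decryption produced valid padding. That one bit is the leak.
    iv, body = ciphertext[:BLOCK], ciphertext[BLOCK:]
    dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
    try:
        unpad(dec.update(body) + dec.finalize())
        return True
    except ValueError:
        return False

def recover_block(prev_block, target_block):
    # Recover one plaintext block with oracle queries only -- no key used.
    inter = bytearray(BLOCK)            # intermediate decryption state
    for pad_val in range(1, BLOCK + 1):
        pos = BLOCK - pad_val
        for guess in range(256):
            forged = bytearray(os.urandom(BLOCK))
            forged[pos] = guess
            for i in range(pos + 1, BLOCK):
                forged[i] = inter[i] ^ pad_val
            if padding_oracle(bytes(forged) + target_block):
                if pad_val == 1:        # rule out an accidental longer pad
                    forged[pos - 1] ^= 0xFF
                    if not padding_oracle(bytes(forged) + target_block):
                        continue
                inter[pos] = guess ^ pad_val
                break
    return bytes(i ^ p for i, p in zip(inter, prev_block))

ct = encrypt(b"session=admin; role=user")
blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
print(unpad(b"".join(recover_block(blocks[i], blocks[i + 1])
                     for i in range(len(blocks) - 1))))
```

The general defense is to authenticate the ciphertext (encrypt-then-MAC) and reject any modified cookie before the padding is ever examined.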

EDITED TO ADD (9/27): Three blog posts from Scott Guthrie.

EDITED TO ADD (9/28): There’s a patch.

EDITED TO ADD (10/13): Two more articles.

Posted on September 27, 2010 at 6:51 AM • 37 Comments

Real-Time NSA Eavesdropping

In an article about Bob Woodward’s new book, Obama’s Wars, this is listed as one of the book’s “disclosures”:

A new capability developed by the National Security Agency has dramatically increased the speed at which intercepted communications can be turned around into useful information for intelligence analysts and covert operators. “They talk, we listen. They move, we observe. Given the opportunity, we react operationally,” then-Director of National Intelligence Mike McConnell explained to Obama at a briefing two days after he was elected president.

Eavesdropping is easy. Getting actual intelligence into the hands of the people who can use it is hard. It sounds as if the NSA has advanced capabilities to automatically sift through massive amounts of electronic communications and find the few bits worth relaying to intelligence officers.

Posted on September 24, 2010 at 1:23 PM • 27 Comments

Evercookies

Extremely persistent browser cookies:

evercookie is a javascript API available that produces extremely persistent cookies in a browser. Its goal is to identify a client even after they’ve removed standard cookies, Flash cookies (Local Shared Objects or LSOs), and others.

evercookie accomplishes this by storing the cookie data in several types of storage mechanisms that are available on the local browser. Additionally, if evercookie has found the user has removed any of the types of cookies in question, it recreates them using each mechanism available.

Specifically, when creating a new cookie, it uses the following storage mechanisms when available:

  • Standard HTTP Cookies
  • Local Shared Objects (Flash Cookies)
  • Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out
  • Storing cookies in Web History (seriously. see FAQ)
  • HTML5 Session Storage
  • HTML5 Local Storage
  • HTML5 Global Storage
  • HTML5 Database Storage via SQLite

And the arms race continues….
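The mechanism is simple redundancy plus self-repair. A toy sketch in Python (standing in for the JavaScript; each Store object below models one of the storage mechanisms listed above) shows the respawning logic:

```python
# Toy model of evercookie's respawning logic -- a Python stand-in for the
# JavaScript API, with each Store modeling one browser storage mechanism.
class Store:
    def __init__(self, name):
        self.name, self.value = name, None

stores = [Store(n) for n in ("http_cookie", "flash_lso", "png_cache",
                             "web_history", "local_storage")]

def set_id(value):
    for s in stores:                  # write the identifier everywhere
        s.value = value

def get_id():
    # Read from any surviving store, then repair every cleared one.
    surviving = next((s.value for s in stores if s.value is not None), None)
    if surviving is not None:
        for s in stores:
            if s.value is None:
                s.value = surviving   # the respawn step
    return surviving

set_id("user-8317")
stores[0].value = None                # user clears standard cookies...
stores[4].value = None                # ...and HTML5 local storage
print(get_id())                       # 'user-8317'
print(all(s.value == "user-8317" for s in stores))  # True: respawned
```

Deleting the identifier for good means clearing every mechanism in the same session, before any page read triggers the repair step. That asymmetry is the whole point.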

EDITED TO ADD (9/24): WARNING—When you visit this site, it stores an evercookie on your machine.

Posted on September 23, 2010 at 11:48 AM • 65 Comments

Details Removed from Book at Request of U.S. Department of Defense

From the AFP:

A publisher has agreed to remove US intelligence details from a memoir by a former army officer in Afghanistan after the Pentagon raised last-minute objections, officials said Friday.

The book, “Operation Dark Heart,” had been printed and prepared for release in August but St. Martin’s Press will now issue a revised version of the spy memoir after negotiations with the Pentagon, US and company officials said.

In an unusual step, the Defense Department has agreed to reimburse the company for the cost of the first printing, spokesman Colonel Dave Lapan told AFP.

The original manuscript “contained classified information which had not been properly reviewed” by the military and US spy agencies, he said.

St. Martin’s Press will destroy copies from the first printing with Pentagon representatives observing “to ensure it’s done in accordance with our standards,” Lapan said.

The second, revised edition would be ready by the end of next week, said the author’s lawyer, Mark Zaid.

EDITED TO ADD (9/30): An analysis of the redacted material—obtained by comparing the two versions—is amusing.

Posted on September 23, 2010 at 7:19 AM • 29 Comments

The Stuxnet Worm

It’s impressive:

The Stuxnet worm is a “groundbreaking” piece of malware so devious in its use of unpatched vulnerabilities, so sophisticated in its multipronged approach, that the security researchers who tore it apart believe it may be the work of state-backed professionals.

“It’s amazing, really, the resources that went into this worm,” said Liam O Murchu, manager of operations with Symantec’s security response team.

“I’d call it groundbreaking,” said Roel Schouwenberg, a senior antivirus researcher at Kaspersky Lab. In comparison, other notable attacks, like the one dubbed Aurora that hacked Google’s network and those of dozens of other major companies, were child’s play.

EDITED TO ADD (9/22): Here’s an interesting theory:

By August, researchers had found something more disturbing: Stuxnet appeared to be able to take control of the automated factory control systems it had infected – and do whatever it was programmed to do with them. That was mischievous and dangerous.

But it gets worse. Since reverse engineering chunks of Stuxnet’s massive code, senior US cyber security experts confirm what Mr. Langner, the German researcher, told the Monitor: Stuxnet is essentially a precision, military-grade cyber missile deployed early last year to seek out and destroy one real-world target of high importance – a target still unknown.

The article speculates that the target is Iran’s Bushehr nuclear power plant, but there’s not much in the way of actual evidence to support that.

Some more articles.

Posted on September 22, 2010 at 6:25 AM • 134 Comments

Prepaid Electricity Meter Fraud

New attack:

Criminals across the UK have hacked the new keycard system used to top up pre-payment energy meters and are going door-to-door, dressed as power company workers, selling illegal credit at knock-down prices.

The pre-paid power meters use a key system. Normally people visit a shop to put credit on their key, which they then take home and slot into their meter.

The conmen have cracked the system and can go into people’s houses and put credit on their machine using a hacked key. If the householder uses this credit, it can be detected the next time they top up their key legitimately.

The system detects the fraud, in that it shows up on audit at a later time. But by then, the criminals are long gone. Clever.
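A toy model (my own assumptions, not the actual UK metering protocol) shows why an offline-acceptance, audit-later design behaves exactly this way:

```python
# Toy model of offline credit with deferred reconciliation. Invented for
# illustration; not the real keycard protocol.
issued = set()                 # the utility's ledger of credit it sold

def shop_sell(token_id, amount):
    issued.add(token_id)
    return {"id": token_id, "amount": amount}

def meter_accept(meter_log, token):
    # Offline: the meter can only check that a token is well-formed.
    # It has no link back to the ledger, so forged credit is accepted.
    meter_log.append(token)
    return token["amount"]

def audit(meter_log):
    # Reconciliation happens only when the key next visits a real shop,
    # long after the door-to-door conman has moved on.
    return [t for t in meter_log if t["id"] not in issued]

log = []
meter_accept(log, shop_sell("A1", 20))         # legitimate top-up
meter_accept(log, {"id": "X9", "amount": 50})  # conman's cloned credit
print("fraud visible only at audit:", audit(log))
```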

It gets worse:

Conmen sell people the energy credit and then warn them that if they go back to official shops they will end up being charged for the energy they used illegally.

They then trap people and ratchet up the sales price to customers terrified they will have to pay twice—something Scottish Power confirmed is starting to happen here in Scotland.

Posted on September 21, 2010 at 1:42 PM • 66 Comments

Statistical Distribution of Combat Wounds to the Head

This is interesting:

The study, led by physician Yuval Ran, looked at Israeli combat deaths from 2000 to 2004 and tracked where bullet entries appeared on the skull, finding that the lower back of the skull (occipital region) and the front temple areas (anterior-temporal regions) were most likely.

I’m not sure it’s useful, but it is interesting.

Posted on September 20, 2010 at 1:58 PM • 45 Comments

Four Irrefutable Security Laws

This list is from Malcolm Harkins, Intel’s chief information security officer, and it’s a good one (from a talk at Forrester’s Security Forum):

  1. Users want to click on things.
  2. Code wants to be wrong.
  3. Services want to be on.
  4. Security features can be used to harm.

His dig at open source software is just plain dumb, though:

Harkins cited mobile apps: “What kind of security do we think is in something that sells for 99 cents? Not much.”

Posted on September 20, 2010 at 6:20 AM • 33 Comments

Questioning Terrorism Policy

Worth reading:

…what if we chose to accept the fact that every few years, despite all reasonable precautions, some hundreds or thousands of us may die in the sort of ghastly terrorist attack that a democratic republic cannot 100-percent protect itself from without subverting the very principles that make it worth protecting?

Is this thought experiment monstrous? Would it be monstrous to refer to the 40,000-plus domestic highway deaths we accept each year because the mobility and autonomy of the car are evidently worth that high price? Is monstrousness why no serious public figure now will speak of the delusory trade-off of liberty for safety that Ben Franklin warned about more than 200 years ago? What exactly has changed between Franklin’s time and ours? Why now can we not have a serious national conversation about sacrifice, the inevitability of sacrifice—either of (a) some portion of safety or (b) some portion of the rights and protections that make the American idea so incalculably precious?

Posted on September 18, 2010 at 6:05 AM • 42 Comments

Master HDCP Key Cracked

The master key for the High-Bandwidth Digital Content Protection standard—the encryption that protects digital video as it travels between devices such as set-top boxes and digital televisions—has been cracked and published. (Intel confirmed that the key is real.) The ramifications are unclear:

But even if the code is real, it might not immediately foster piracy as the cracking of CSS on DVDs did more than a decade ago. Unlike CSS, which could be implemented in software, HDCP requires custom hardware. The threat model for Hollywood, then, isn’t that a hacker could use the master key to generate a DeCSS-like program for HD, but that shady hardware makers, perhaps in China, might eventually create and sell black-market HDCP cards that would allow the free copying of protected high-def content.
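To see why one leaked master key breaks every device, it helps to know that HDCP's key exchange is linear algebra: each device's private keys are its public key-selection vector multiplied by a secret symmetric master matrix, and two devices agree on a key by summing private keys at the positions set in the other's vector. A toy version in Python (tiny invented parameters; the real scheme uses 40-element vectors and arithmetic mod 2^56):

```python
# Toy HDCP-style key agreement. Parameters are deliberately tiny; the
# structure (symmetric master matrix, linear private keys) is the point.
import random

random.seed(0)
N, MOD = 6, 2**16            # real HDCP: N = 40, arithmetic mod 2**56

# The licensing authority's secret: a symmetric master matrix M.
M = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        M[i][j] = M[j][i] = random.randrange(MOD)

def issue_device():
    # Public selection vector (like a KSV, with a fixed number of 1 bits)
    # and private key vector: simply M times the public vector.
    v = [0] * N
    for i in random.sample(range(N), N // 2):
        v[i] = 1
    priv = [sum(M[i][j] * v[j] for j in range(N)) % MOD for i in range(N)]
    return v, priv

def shared_key(my_priv, their_vector):
    # Sum your private keys at the positions set in the other's vector.
    return sum(k * b for k, b in zip(my_priv, their_vector)) % MOD

va, ka = issue_device()
vb, kb = issue_device()
# Both sides compute va^T * M * vb; M symmetric means the keys match.
assert shared_key(ka, vb) == shared_key(kb, va)
print("agreed key:", hex(shared_key(ka, vb)))
# Because priv = M * v is linear in M, the (public, private) pairs from
# roughly N devices give enough equations to solve for M itself -- which
# is why a published master key compromises the whole system.
```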

Posted on September 17, 2010 at 1:57 PM • 41 Comments

Automatic Document Declassification

DARPA is looking for something that can automatically declassify documents:

I’ll be honest: I’m not exactly sure what kind of technological solution you can build to facilitate declassification. From the way the challenge is structured, it sounds like a semantic-search problem: Plug in keywords that help you comb through deserts of stored information in the bowels of the Pentagon and the intelligence community, and figure out whether the results of the fishing expedition can be tossed out from the depths onto dry land in accordance with declassification policies. But that’s a matter of building an algorithm, something that might be too, well, quotidian for Darpa.

Posted on September 17, 2010 at 10:15 AM • 13 Comments

DHS Still Worried About Terrorists Using Internet Surveillance

Profound analysis from the Department of Homeland Security:

Detailed video obtained through live Web-based camera feeds combined with street-level and direct overhead imagery views from Internet imagery sites allow terrorists to conduct remote surveillance of multiple potential targets without exposing themselves to detection.

Cameras, too.

Remember, anyone who searches for anything on the Internet may be a terrorist. Report him immediately.

Posted on September 16, 2010 at 6:34 AM • 58 Comments

Not Answering Questions at U.S. Customs

Interesting story:

I was detained last night by federal authorities at San Francisco International Airport for refusing to answer questions about why I had travelled outside the United States.

The end result is that, after waiting for about half an hour and refusing to answer further questions, I was released—because U.S. citizens who have produced proof of citizenship and a written customs declaration are not obligated to answer questions.

Posted on September 14, 2010 at 12:58 PM • 113 Comments

Parental Fears vs. Realities

From NPR:

Based on surveys Barnes collected, the top five worries of parents are, in order:

  1. Kidnapping
  2. School snipers
  3. Terrorists
  4. Dangerous strangers
  5. Drugs

But how do children really get hurt or killed?

  1. Car accidents
  2. Homicide (usually committed by a person who knows the child, not a stranger)
  3. Abuse
  4. Suicide
  5. Drowning

Why such a big discrepancy between worries and reality? Barnes says parents fixate on rare events because they internalize horrific stories they hear on the news or from a friend without stopping to think about the odds the same thing could happen to their children.

No surprise to any regular reader of this blog.

Posted on September 8, 2010 at 6:06 AM • 82 Comments

Consumerization and Corporate IT Security

If you’re a typical wired American, you’ve got a bunch of tech tools you like and a bunch more you covet. You have a cell phone that can easily text. You’ve got a laptop configured just the way you want it. Maybe you have a Kindle for reading, or an iPad. And when the next new thing comes along, some of you will line up on the first day it’s available.

So why can’t work keep up? Why are you forced to use an unfamiliar, and sometimes outdated, operating system? Why do you need a second laptop, maybe an older and clunkier one? Why do you need a second cell phone with a new interface, or a BlackBerry, when your phone already does e-mail? Or a second BlackBerry tied to corporate e-mail? Why can’t you use the cool stuff you already have?

More and more companies are letting you. They’re giving you an allowance and allowing you to buy whatever laptop you want, and to connect into the corporate network with whatever device you choose. They’re allowing you to use whatever cell phone you have, whatever portable e-mail device you have, whatever you personally need to get your job done. And the security office is freaking out.

You can’t blame them, really. Security is hard enough when you have control of the hardware, operating system and software. Lose control of any of those things, and the difficulty goes through the roof. How do you ensure that the employee devices are secure, and have up-to-date security patches? How do you control what goes on them? How do you deal with the tech support issues when they fail? How do you even begin to manage this logistical nightmare? Better to dig your heels in and say “no.”

But security is on the losing end of this argument, and the sooner it realizes that, the better.

The meta-trend here is consumerization: cool technologies show up for the consumer market before they’re available to the business market. Every corporation is under pressure from its employees to allow them to use these new technologies at work, and that pressure is only getting stronger. Younger employees simply aren’t going to stand for using last year’s stuff, and they’re not going to carry around a second laptop. They’re either going to figure out ways around the corporate security rules, or they’re going to take another job with a more trendy company. Either way, senior management is going to tell security to get out of the way. It might even be the CEO, who wants to get to the company’s databases from his brand new iPad, driving the change. Either way, it’s going to be harder and harder to say no.

At the same time, cloud computing makes this easier. More and more, employee computing devices are nothing more than dumb terminals with a browser interface. When corporate e-mail is all webmail, when corporate documents are all on Google Docs, and when all the specialized applications have a web interface, it’s easier to allow employees to use any up-to-date browser. It’s what companies are already doing with their partners, suppliers, and customers.

Also on the plus side, technology companies have woken up to this trend and—from Microsoft and Cisco on down to the startups—are trying to offer security solutions. Like everything else, it’s a mixed bag: some of them will work and some of them won’t, most of them will need careful configuration to work well, and few of them will get it right. The result is that we’ll muddle through, as usual.

Security is always a tradeoff, and security decisions are often made for non-security reasons. In this case, the right decision is to sacrifice security for convenience and flexibility. Corporations want their employees to be able to work from anywhere, and they’re going to have to loosen control over the tools they allow in order to get it.

This essay first appeared as the second half of a point/counterpoint with Marcus Ranum in Information Security Magazine. You can read Marcus’s half here.

Posted on September 7, 2010 at 7:25 AM • 63 Comments

Terrorism Entrapment

Back in 2007, I wrote an essay, “Portrait of the Modern Terrorist as an Idiot,” where I said:

The JFK Airport plotters seem to have been egged on by an informant, a twice-convicted drug dealer. An FBI informant almost certainly pushed the Fort Dix plotters to do things they wouldn’t have ordinarily done. The Miami gang’s Sears Tower plot was suggested by an FBI undercover agent who infiltrated the group. And in 2003, it took an elaborate sting operation involving three countries to arrest an arms dealer for selling a surface-to-air missile to an ostensible Muslim extremist. Entrapment is a very real possibility in all of these cases.

Over on Salon, Stephan Salisbury has an essay on FBI entrapment and domestic terrorism plots. It’s well worth reading.

Posted on September 6, 2010 at 7:24 AM • 46 Comments

UAE Man-in-the-Middle Attack Against SSL

Interesting:

Who are these certificate authorities? At the beginning of Web history, there were only a handful of companies, like Verisign, Equifax, and Thawte, that made near-monopoly profits from being the only providers trusted by Internet Explorer or Netscape Navigator. But over time, browsers have trusted more and more organizations to verify Web sites. Safari and Firefox now trust more than 60 separate certificate authorities by default. Microsoft’s software trusts more than 100 private and government institutions.

Disturbingly, some of these trusted certificate authorities have decided to delegate their powers to yet more organizations, which aren’t tracked or audited by browser companies. By scouring the Net for certificates, security researchers have uncovered more than 600 groups who, through such delegation, are now also automatically trusted by most browsers, including the Department of Homeland Security, Google, and Ford Motors—and a UAE mobile phone company called Etisalat.

In 2005, a company called CyberTrust—which has since been purchased by Verizon—gave Etisalat, the government-connected mobile company in the UAE, the right to verify that a site is valid. Here’s why this is trouble: Since browsers now automatically trust Etisalat to confirm a site’s identity, the company has the potential ability to fake a secure connection to any site Etisalat subscribers might visit using a man-in-the-middle scheme.
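You can see the scale of the problem on your own machine. A few lines of Python (using the standard library's ssl module and your operating system's trust store; counts and names vary by platform) list the roots your software will believe about any site on the Web:

```python
# List the certificate authorities your platform trusts by default.
# Any one of them -- or anyone they delegate to -- can vouch for any site.
import ssl

ctx = ssl.create_default_context()    # loads the system's default roots
roots = ctx.get_ca_certs()
print(len(roots), "trusted roots")
for cert in roots[:5]:
    # 'subject' is a tuple of relative distinguished names.
    names = dict(rdn[0] for rdn in cert["subject"])
    print(" -", names.get("organizationName") or names.get("commonName"))
```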

EDITED TO ADD (9/14): EFF has gotten involved.

Posted on September 3, 2010 at 6:27 AM • 58 Comments

Successful Attack Against a Quantum Cryptography System

Clever:

Quantum cryptography is often touted as being perfectly secure. It is based on the principle that you cannot make measurements of a quantum system without disturbing it. So, in theory, it is impossible for an eavesdropper to intercept a quantum encryption key without disrupting it in a noticeable way, triggering alarm bells.

Vadim Makarov at the Norwegian University of Science and Technology in Trondheim and his colleagues have now cracked it. “Our hack gave 100% knowledge of the key, with zero disturbance to the system,” he says.

[…]

The cunning part is that while blinded, Bob’s detector cannot function as a ‘quantum detector’ that distinguishes between different quantum states of incoming light. However, it does still work as a ‘classical detector’—recording a bit value of 1 if it is hit by an additional bright light pulse, regardless of the quantum properties of that pulse.

That means that every time Eve intercepts a bit value of 1 from Alice, she can send a bright pulse to Bob, so that he also receives the correct signal, and is entirely unaware that his detector has been sabotaged. There is no mismatch between Eve and Bob’s readings because Eve sends Bob a classical signal, not a quantum one. As quantum cryptographic rules no longer apply, no alarm bells are triggered, says Makarov.

“We have exploited a purely technological loophole that turns a quantum cryptographic system into a classical system, without anyone noticing,” says Makarov.

Makarov and his team have demonstrated that the hack works on two commercially available systems: one sold by ID Quantique (IDQ), based in Geneva, Switzerland, and one by MagiQ Technologies, based in Boston, Massachusetts. “Once I had the systems in the lab, it took only about two months to develop a working hack,” says Makarov.

Just because something is secure in theory doesn’t mean it’s secure in practice. Or, to put it more cleverly: in theory, theory and practice are the same; but in practice, they’re very different.
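The arithmetic of the attack is easy to check with a stylized simulation (a toy model of the mechanism described above, not Makarov's optical setup). Bob's blinded detector clicks only when his basis happens to match Eve's, which looks like ordinary channel loss; every bit that survives sifting is then one Eve measured in the right basis:

```python
# Stylized BB84 intercept-resend with blinded detectors: zero error rate,
# full key knowledge for Eve. A toy model, not the actual optics.
import random

random.seed(42)
PULSES = 20000
alice_bits, bob_bits, eve_bits = [], [], []

for _ in range(PULSES):
    a_bit, a_basis = random.randrange(2), random.randrange(2)
    # Eve measures in a random basis; the wrong basis gives a random bit.
    e_basis = random.randrange(2)
    e_bit = a_bit if e_basis == a_basis else random.randrange(2)
    # Eve re-sends a bright classical pulse. The blinded detector is no
    # longer quantum: it registers Eve's bit if Bob's basis matches hers,
    # and nothing at all (indistinguishable from loss) otherwise.
    b_basis = random.randrange(2)
    if b_basis != e_basis:
        continue                      # no click; just looks like loss
    if a_basis == b_basis:            # sifting: keep matching bases
        alice_bits.append(a_bit)
        bob_bits.append(e_bit)        # Bob faithfully records Eve's bit
        eve_bits.append(e_bit)

qber = sum(a != b for a, b in zip(alice_bits, bob_bits)) / len(alice_bits)
known = sum(e == a for e, a in zip(eve_bits, alice_bits)) / len(alice_bits)
print("sifted bits:", len(alice_bits))
print("error rate seen by Alice and Bob:", qber)   # 0.0 -- no alarm
print("fraction of key Eve knows:", known)         # 1.0
```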

The paper is here.

Posted on September 2, 2010 at 1:46 PM • 26 Comments

Cyber-Offense is the New Cyber-Defense

This is beyond stupid:

The Pentagon is contemplating an aggressive approach to defending its computer systems that includes preemptive actions such as knocking out parts of an adversary’s computer network overseas—but it is still wrestling with how to pursue the strategy legally.

The department is developing a range of weapons capabilities, including tools that would allow “attack and exploitation of adversary information systems” and that can “deceive, deny, disrupt, degrade and destroy” information and information systems, according to Defense Department budget documents.

But officials are reluctant to use the tools until questions of international law and technical feasibility are resolved, and that has proved to be a major challenge for policymakers. Government lawyers and some officials question whether the Pentagon could take such action without violating international law or other countries’ sovereignty.

“Some” officials are questioning it. The rest are trying to ignore the issue.

I wrote about this back in 2007.

Posted on September 2, 2010 at 7:33 AM • 44 Comments

Wanted: Skein Hardware Help

As part of NIST’s SHA-3 selection process, people have been implementing the candidate hash functions on a variety of hardware and software platforms. Our team implemented Skein in Intel’s 32 nm ASIC process and got some impressive performance results (presentation and paper). Several other groups have implemented Skein in FPGA and ASIC, and have seen significantly poorer performance. We need help understanding why.

For example, a group led by Brian Baldwin at the Claude Shannon Institute for Discrete Mathematics, Coding and Cryptography implemented all the second-round candidates in FPGA (presentation and paper). Skein performance was terrible, but when they checked their code, they found an error. Their corrected performance comparison (presentation and paper) has Skein performing much better and in the top ten.

We suspect that the adders in all the designs may not be properly optimized, although there may be other performance issues. If we can at least identify (or possibly even fix) the slowdowns in the design, it would be very helpful, both for our understanding and for Skein’s hardware profile. Even if we find that the designs are properly optimized, that would also be good to know.
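For context on why the adders matter so much: every Threefish round is built from the MIX operation, and its critical path is a 64-bit modular addition, so an unoptimized carry chain slows the entire design. Here is a sketch of MIX, written in Python for readability (rotation amounts vary by round and word position; 14 is one of the actual Threefish-256 constants):

```python
# The Threefish MIX step: one 64-bit add, a rotate, and an XOR. In
# hardware, the adder's carry chain dominates the round latency.
MASK = 2**64 - 1

def rotl(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK

def mix(x0, x1, r):
    y0 = (x0 + x1) & MASK    # the addition on the critical path
    y1 = rotl(x1, r) ^ y0
    return y0, y1

print([hex(w) for w in mix(0x0123456789abcdef, 0xfedcba9876543210, 14)])
```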

A group at George Mason University led by Kris Gaj implemented all the second-round candidates in FPGA (presentation, paper, and much longer paper). Skein had the worst performance of any of the implementations. We’re looking for someone who can help us understand the design, and determine if it can be improved.

Another group, led by Stefan Tillich at the University of Bristol, implemented all the candidates in 180 nm custom ASIC (presentation and paper). Here, Skein is one of the worst performers. We’re looking for someone who can help us understand what this group did.

Three other groups—one led by Patrick Schaumont of Virginia Tech (presentation and paper), another led by Shin’ichiro Matsuo at National Institute of Information and Communications Technology in Japan (presentation and paper), and a third led by Luca Henzen at ETH Zurich (paper with appendix, and conference version)—implemented the SHA-3 candidates. Again, we need help understanding why their Skein performance numbers are so different from ours.

We’re looking for people with FPGA and ASIC skills to work with the Skein team. We don’t have money to pay anyone; co-authorship on a paper (and a Skein polo shirt) is our primary reward. Please send me e-mail if you’re interested.

Posted on September 1, 2010 at 1:17 PM • 42 Comments

More Skein News

Skein is my new hash function. Well, “my” is an overstatement; I’m one of the eight designers. It was submitted to NIST for their SHA-3 competition, and one of the 14 algorithms selected to advance to the second round. Here’s the Skein paper; source code is here. The Skein website is here.

Last week was the Second SHA-3 Candidate Conference. Lots of people presented papers on the candidates: cryptanalysis papers, implementation papers, performance comparisons, etc. There were two cryptanalysis papers on Skein. The first was by Kerry McKay and Poorvi L. Vora (presentation and paper). They tried to extend linear cryptanalysis to groups of bits to attack Threefish (the block cipher inside Skein). It was a nice analysis, but it didn’t get very far at all.

The second was a fantastic piece of cryptanalysis by Dmitry Khovratovich, Ivica Nikolić, and Christian Rechberger. They used a rotational rebound attack (presentation and paper) to mount a “known-key distinguisher attack” on 57 out of 72 Threefish rounds faster than brute force. It’s a new type of attack—some go so far as to call it an “observation”—and the community is still trying to figure out what it means. It only works if the attacker can manipulate both the plaintexts and the keys in a structured way. Against 57-round Threefish, it requires 2^503 work—barely better than brute force. And it only distinguishes reduced-round Threefish from a random permutation; it doesn’t actually recover any key bits.

Even with the attack, Threefish has a good security margin. Also, the attack doesn’t affect Skein. But changing one constant in the algorithm’s key schedule makes the attack impossible. NIST has said they’re allowing second-round tweaks, so we’re going to make the change. It won’t affect any performance numbers or invalidate any other cryptanalytic results—and after the change, the best known attack covers only 33 of 72 rounds.
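For the curious, here is where that one constant lives. A sketch of the Threefish-256 key schedule in Python (following the Skein paper; the value shown is the tweaked constant, which replaced 0x5555555555555555):

```python
# Threefish-256 key schedule sketch. The parity constant below is the
# post-tweak value; the pre-tweak constant was 0x5555555555555555.
MASK = 2**64 - 1
C240 = 0x1BD11BDAA9FC1A22          # the one constant that changed

def subkeys(key, tweak, rounds=72):
    k = list(key)                   # four 64-bit key words
    k.append(C240 ^ k[0] ^ k[1] ^ k[2] ^ k[3])   # parity word
    t = [tweak[0], tweak[1], tweak[0] ^ tweak[1]]
    out = []
    for s in range(rounds // 4 + 1):   # one subkey every four rounds
        out.append([
            k[s % 5],
            (k[(s + 1) % 5] + t[s % 3]) & MASK,
            (k[(s + 2) % 5] + t[(s + 1) % 3]) & MASK,
            (k[(s + 3) % 5] + s) & MASK,
        ])
    return out

print(subkeys([1, 2, 3, 4], [5, 6])[0])   # first subkey
```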

Our update on Skein, which we presented at the conference, is here. All the other papers and presentations are here. (My 2008 essay on SHA-3 is here, and my 2009 update is here.) The second-round algorithms are: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein. You can find details on all of them, as well as the current state of their cryptanalysis, here. NIST will select approximately five algorithms to go on to the third round by the end of the year.

In other news, we’re once again making Skein polo shirts available to the public. Those of you who attended either of the two SHA-3 conferences might have noticed the stylish black Skein polo shirts worn by the Skein team. Anyone who wants one is welcome to buy it, at cost. Details (with photos) are here. All orders must be received before October 1, and we’ll have all the shirts made in one batch.

Posted on September 1, 2010 at 6:01 AM • 20 Comments
