Blog: September 2009 Archives

Immediacy Affects Risk Assessments

New experiment demonstrates what we already knew:

That’s because people tend to view their immediate emotions, such as their perceptions of threats or risks, as more intense and important than their previous emotions.

In one part of the study focusing on terrorist threats, using materials adapted from the U.S. Department of Homeland Security, Van Boven and his research colleagues presented two scenarios to people in a college laboratory depicting warnings about traveling abroad to two countries.

Participants were then asked to report which country seemed to have greater terrorist threats. Many of them reported that the country they last read about was more dangerous.

“What our study has shown is that when people learn about risks, even in very rapid succession where the information is presented to them in a very clear and vivid way, they still respond more strongly to what is right in front of them,” Van Boven said.

[…]

Human emotions stem from a very old system in the brain, Van Boven says. When it comes to reacting to threats, real or exaggerated, it goes against the grain of thousands of years of evolution to just turn off that emotional reaction. It’s not something most people can do, he said.

“And that’s a problem, because people’s emotions are fundamental to their judgments and decisions in everyday life,” Van Boven said. “When people are constantly being bombarded by new threats or things to be fearful of, they can forget about the genuinely big problems, like global warming, which really need to be dealt with on a large scale with public support.”

In today’s 24-hour society, talk radio, the Internet and extensive media coverage of the “threat of the day” only exacerbate the trait of focusing on our immediate emotions, he said.

“One of the things we know about how emotional reactions work is they are not very objective, so people can get outraged or become fearful of what might actually be a relatively minor threat,” Van Boven said. “One worry is some people are aware of these kinds of effects and can use them to manipulate our actions in ways that we may prefer to avoid.”

[…]

“If you’re interested in having an informed citizenry you tell people about all the relevant risks, but what our research shows is that is not sufficient because those things still happen in sequence and people will still respond immediately to whatever happens to be in front of them,” he said. “In order to make good decisions and craft good policies we need to know how people are going to respond.”

Posted on September 30, 2009 at 1:17 PM • 10 Comments

The Doghouse: Crypteto

Crypteto has a 49,152-bit symmetric key:

The most important issue of any encryption product is the ‘bit key strength’. To date the strongest known algorithm has a 448-bit key. Crypteto now offers a 49,152-bit key. This means that for every extra 1 bit increase that Crypteto has over its competition makes it 100% stronger. The security and privacy this offers is staggering.

Yes, every key bit doubles an algorithm’s strength against brute-force attacks. But it’s hard to find any real meaning in a work factor of 2^49,152.

Coupled with this truly remarkable breakthrough Crypteto does not compromise on encryption speed. In the past, incremental key strength improvements have effected the speed that data is encrypted. The usual situation was that for every 1 bit increase in key strength there was a consequent reduction in encryption speed by 50%.

That’s not even remotely true. It’s not at all obvious how key length is related to encryption speed. Blowfish has the same speed, regardless of key length. AES-192 is about 20% slower than AES-128, and AES-256 is about 40% slower. Threefish, the block cipher inside Skein, encrypts data at 7.6 clock cycles/byte with a 256-bit key, 6.1 clock cycles/byte with a 512-bit key, and 6.5 clock cycles/byte with a 1024-bit key. I’m not claiming that Threefish is secure and ready for commercial use—at any key length—but there simply isn’t a chance that encryption speed will drop by half for every key bit added.

This is a fundamental asymmetry of cryptography, and it’s important to get right. The cost to encrypt is linear as a function of key length, while cost to break is geometric. It’s one of the reasons why, of all the links in a security chain, cryptography is the strongest.
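
To make the asymmetry concrete, here’s a minimal benchmark sketch (mine, not Crypteto’s; it assumes the pycryptodome library, and the exact numbers will vary with your hardware). Throughput barely moves as the key grows, while the brute-force work factor doubles with every added bit:

```python
# Rough illustration: AES throughput is nearly flat across key sizes,
# while the brute-force work factor is 2^keybits. Requires pycryptodome.
import time
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

data = get_random_bytes(1 << 24)  # 16 MiB of random plaintext

for bits in (128, 192, 256):
    key = get_random_bytes(bits // 8)
    cipher = AES.new(key, AES.MODE_ECB)  # ECB only to measure raw block speed
    start = time.perf_counter()
    cipher.encrypt(data)
    elapsed = time.perf_counter() - start
    print(f"AES-{bits}: {len(data) / elapsed / 1e6:6.0f} MB/s, "
          f"brute-force work factor 2^{bits}")
```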

Normally I wouldn’t bother with this kind of thing, but they explicitly asked me to comment:

But Hawthorne Davies has overcome this issue. By offering an algorithm with an unequalled key strength of 49,152 bits, we are able to encrypt and decrypt data at speeds in excess of 8 megabytes per second. This means that the aforementioned Gigabyte of data would take 2 minutes 13 seconds. If Bruce Schneier, the United State’s foremost cryptologist, were to increase his Blowfish 448 bit encryption algorithm to Blowfish 49152, he would be hard pressed to encrypt one Gigabyte in 4 hours.

[…]

We look forward to receiving advice and encouragement from the good Dr. Schneier.

I’m not a doctor of anything, but sure. Read my 1999 essay on snake-oil cryptography:

Warning Sign #5: Ridiculous key lengths.

Jaws Technology boasts: “Thanks to the JAWS L5 algorithm’s statistically unbreakable 4096 bit key, the safety of your most valued data files is ensured.” Meganet takes the ridiculous a step further: “1 million bit symmetric keys—The market offer’s [sic] 40-160 bit only!!”

Longer key lengths are better, but only up to a point. AES will have 128-bit, 192-bit, and 256-bit key lengths. This is far longer than needed for the foreseeable future. In fact, we cannot even imagine a world where 256-bit brute force searches are possible. It requires some fundamental breakthroughs in physics and our understanding of the universe. For public-key cryptography, 2048-bit keys have the same sort of property; longer is meaningless.

Think of this as a sub-example of Warning Sign #4: if the company doesn’t understand keys, do you really want them to design your security product?

Or read what I wrote about symmetric key lengths in 1996, in Applied Cryptography (pp. 157–8):

One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)

Given that k = 1.38×10^-16 erg/°Kelvin, and that the ambient temperature of the universe is 3.2°Kelvin, an ideal computer running at 3.2°K would consume 4.4×10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

Now, the annual energy output of our sun is about 1.21×10^41 ergs. This is enough to power about 2.7×10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn’t have the energy left over to perform any useful calculations with this counter.

But that’s just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.

Ten years later, there is still no reason to use anything more than a 256-bit symmetric key. I gave the same advice in 2003, in Practical Cryptography (pp. 65–6). Even a mythical quantum computer won’t be able to brute-force that large a keyspace. (Public keys are different, of course—see Table 2.2 of this NIST document for recommendations.)
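
The arithmetic in that passage is easy to check. A quick back-of-the-envelope sketch in plain Python, using the constants quoted above (the small rounding differences are mine):

```python
import math

k = 1.38e-16        # Boltzmann constant, ergs per kelvin (as quoted above)
T = 3.2             # temperature of the cosmic background, kelvin
per_bit = k * T     # minimum energy per bit change: ~4.4e-16 ergs

sun_year = 1.21e41  # annual energy output of the sun, ergs
changes = sun_year / per_bit
print(math.log2(changes))        # ~187 -> a 187-bit counter
print(math.log2(changes * 32))   # ~192 -> Dyson sphere for 32 years

supernova = 1e51                 # ergs released by a typical supernova
print(math.log2(supernova / per_bit))  # ~220 -> the 219-bit counter above, within rounding
```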

Of course, in the real world there are smarter attacks than brute-force keysearch. The whole point of cipher cryptanalysis is to find shortcuts to brute-force search (like this attack on AES). But a 49,152-bit key is just plain stupid.

EDITED TO ADD (9/30): Now this is funny:

Some months ago I sent individual emails to each of seventeen experts in cryptology, all with the title of Doctor or Professor. My email was a first announcement to the academic world of the TOUAREG Encryption Algorithm, which, somewhat unusually, has a session key strength of over 49,000 bits and yet runs at 3 Megabytes per second. Bearing in mind that the strongest version of BLOWFISH has a session key of 448 bits and that every additional bit doubles the task of key-crashing, I imagined that my announcement would create more than a mild flutter of interest.

Much to his surprise, no one responded.

Here’s some more advice: my 1998 essay, “Memo to the Amateur Cipher Designer.” Anyone can design a cipher that he himself cannot break. It’s not even hard. So when you tell a cryptographer that you’ve designed a cipher that you can’t break, his first question will be “who the hell are you?” In other words, why should the fact that you can’t break a cipher be considered evidence of the cipher’s security?

If you want to design algorithms, start by breaking the ones out there. Practice by breaking algorithms that have already been broken (without peeking at the answers). Break something no one else has broken. Break another. Get your breaks published. When you have established yourself as someone who can break algorithms, then you can start designing new algorithms. Before then, no one will take you seriously.

EDITED TO ADD (9/30): I just did the math. An encryption speed of 8 megabytes per second on a 3.33 GHz CPU translates to about 400 clock cycles per byte. This is much, much slower than any of the AES finalists ten years ago, or any of the SHA-3 second round candidates today. It’s kind of embarrassingly slow, really.
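
For anyone who wants to reproduce that figure, the arithmetic is one line (the 3.33 GHz clock is the assumption stated above):

```python
clock_hz = 3.33e9             # 3.33 GHz CPU
throughput = 8e6              # claimed 8 megabytes per second
print(clock_hz / throughput)  # ~416 clock cycles per byte
```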

Posted on September 30, 2009 at 5:52 AM • 115 Comments

The Problem of Vague Laws

The average American commits three felonies a day: the title of a new book by Harvey Silverglate. More specifically, the problem is the intersection of vague laws and fast-moving technology:

Technology moves so quickly we can barely keep up, and our legal system moves so slowly it can’t keep up with itself. By design, the law is built up over time by court decisions, statutes and regulations. Sometimes even criminal laws are left vague, to be defined case by case. Technology exacerbates the problem of laws so open and vague that they are hard to abide by, to the point that we have all become potential criminals.

Boston civil-liberties lawyer Harvey Silverglate calls his new book “Three Felonies a Day,” referring to the number of crimes he estimates the average American now unwittingly commits because of vague laws. New technology adds its own complexity, making innocent activity potentially criminal.

[…]

In 2001, a man named Bradford Councilman was charged in Massachusetts with violating the wiretap laws. He worked at a company that offered an online book-listing service and also acted as an Internet service provider to book dealers. As an ISP, the company routinely intercepted and copied emails as part of the process of shuttling them through the Web to recipients.

The federal wiretap laws, Mr. Silverglate writes, were “written before the dawn of the Internet, often amended, not always clear, and frequently lagging behind the whipcrack speed of technological change.” Prosecutors chose to interpret the ISP role of momentarily copying messages as they made their way through the system as akin to impermissibly listening in on communications. The case went through several rounds of litigation, with no judge making the obvious point that this is how ISPs operate. After six years, a jury found Mr. Councilman not guilty.

Other misunderstandings of the Web criminalize the exercise of First Amendment rights. A Saudi student in Idaho was charged in 2003 with offering “material support” to terrorists. He had operated Web sites for a Muslim charity that focused on normal religious training, but was prosecuted on the theory that if a user followed enough links off his site, he would find violent, anti-American comments on other sites. The Internet is a series of links, so if there’s liability for anything in an online chain, it would be hard to avoid prosecution.

EDITED TO ADD (10/12): Audio interview with Harvey Silverglate.

Posted on September 29, 2009 at 1:08 PM • 41 Comments

Predicting Characteristics of People by the Company They Keep

Turns out “gaydar” can be automated:

Using data from the social network Facebook, they made a striking discovery: just by looking at a person’s online friends, they could predict whether the person was gay. They did this with a software program that looked at the gender and sexuality of a person’s friends and, using statistical analysis, made a prediction. The two students had no way of checking all of their predictions, but based on their own knowledge outside the Facebook world, their computer program appeared quite accurate for men, they said. People may be effectively “outing” themselves just by the virtual company they keep.

This sort of thing can be generalized:

The work has not been published in a scientific journal, but it provides a provocative warning note about privacy. Discussions of privacy often focus on how to best keep things secret, whether it is making sure online financial transactions are secure from intruders, or telling people to think twice before opening their lives too widely on blogs or online profiles. But this work shows that people may reveal information about themselves in another way, and without knowing they are making it public. Who we are can be revealed by, and even defined by, who our friends are: if all your friends are over 45, you’re probably not a teenager; if they all belong to a particular religion, it’s a decent bet that you do, too. The ability to connect with other people who have something in common is part of the power of social networks, but also a possible pitfall. If our friends reveal who we are, that challenges a conception of privacy built on the notion that there are things we tell, and things we don’t.
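
As a toy illustration of that kind of inference (not the students’ actual method, just a sketch with made-up attribute labels), a simple majority vote over your friends’ attributes already makes a decent predictor:

```python
from collections import Counter

def predict_attribute(friend_attributes):
    """Guess a person's attribute as the most common value among their friends."""
    if not friend_attributes:
        return None
    return Counter(friend_attributes).most_common(1)[0][0]

# If all your friends are over 45, you're probably not a teenager.
print(predict_attribute(["over-45", "over-45", "teen", "over-45"]))  # over-45
```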

EDITED TO ADD (9/29): Better information from the MIT Newspaper.

Posted on September 29, 2009 at 7:13 AM • 34 Comments

Unauthentication

In computer security, a lot of effort is spent on the authentication problem. Whether it’s passwords, secure tokens, secret questions, image mnemonics, or something else, engineers are continually coming up with more complicated—and hopefully more secure—ways for you to prove you are who you say you are over the Internet.

This is important stuff, as anyone with an online bank account or remote corporate network knows. But a lot less thought and work have gone into the other end of the problem: how do you tell the system on the other end of the line that you’re no longer there? How do you unauthenticate yourself?

My home computer requires me to log out or turn my computer off when I want to unauthenticate. This works for me because I know enough to do it, but lots of people just leave their computers on and running when they walk away. As a result, many office computers are left logged in when people go to lunch, or when they go home for the night. This, obviously, is a security vulnerability.

The most common way to combat this is by having the system time out. I could have my computer log me out automatically after a certain period of inactivity—five minutes, for example. Getting it right requires some fine tuning, though. Log the person out too quickly, and he gets annoyed; wait too long before logging him out, and the system could be vulnerable during that time. My corporate e-mail server logs me out after 10 minutes or so, and I regularly get annoyed at my corporate e-mail system.
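
The mechanism itself is trivial; the hard part is choosing the timeout. A minimal sketch (hypothetical names; IDLE_LIMIT is the knob you end up fighting over):

```python
import time

IDLE_LIMIT = 5 * 60  # seconds of inactivity before automatic logout -- the tuning knob

class Session:
    def __init__(self):
        self.authenticated = True
        self.last_activity = time.monotonic()

    def touch(self):
        """Call on every keystroke, click, or request."""
        self.last_activity = time.monotonic()

    def expire_if_idle(self):
        """Call periodically; unauthenticates the session after IDLE_LIMIT of inactivity."""
        if self.authenticated and time.monotonic() - self.last_activity > IDLE_LIMIT:
            self.authenticated = False  # user must log in again
```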

Some systems have experimented with a token: a USB authentication token that has to be plugged in for the computer to operate, or an RFID token that logs people out automatically when the token moves more than a certain distance from the computer. Of course, people will be prone to just leave the token plugged in to their computer all the time; but if you attach it to their car keys or the badge they have to wear at all times when walking around the office, the risk is minimized.

That’s expensive, though. A research project used a Bluetooth device, like a cellphone, and measured its proximity to a computer. The system could be programmed to lock the computer if the Bluetooth device moved out of range.

Some systems log people out after every transaction. This wouldn’t work for computers, but it can work for ATMs. The machine spits my card out before it gives me my cash, or just requires a card swipe, and makes sure I take it out of the machine. If I want to perform another transaction, I have to reinsert my card and enter my PIN a second time.

There’s a physical analogue that everyone can explain: door locks. Does your door lock behind you when you close the door, or does it remain unlocked until you lock it? The first instance is a system that automatically logs you out, and the second requires you to log out manually. Both types of locks are sold and used, and which one you choose depends on both how you use the door and who you expect to try to break in.

Designing systems for usability is hard, especially when security is involved. Almost by definition, making something secure makes it less usable. Choosing an unauthentication method depends a lot on how the system is used as well as the threat model. You have to balance increasing security with pissing the users off, and getting that balance right takes time and testing, and is much more an art than a science.

This essay originally appeared on ThreatPost.

Posted on September 28, 2009 at 1:34 PM • 42 Comments

Ass Bomber

Nobody tell the TSA, but last month someone tried to assassinate a Saudi prince by exploding a bomb stuffed in his rectum. He pretended to be a repentant militant, when in fact he was a Trojan horse:

The resulting explosion ripped al-Asiri to shreds but only lightly injured the shocked prince—the target of al-Asiri’s unsuccessful assassination attempt.

Other news articles are here, and here are two blog posts.

For years, I have made the joke about Richard Reid: “Just be glad that he wasn’t the underwear bomber.” Now, sadly, we have an example of one.

Lewis Page, an “improvised-device disposal operator tasked in support of the UK mainland police from 2001-2004,” pointed out that this isn’t much of a threat for three reasons: 1) you can’t stuff a lot of explosives into a body cavity, 2) detonation is, um, problematic, and 3) the human body can stifle an explosion pretty effectively (think of someone throwing himself on a grenade to save his friends).

But who ever accused the TSA of being rational?

Posted on September 28, 2009 at 6:19 AM • 116 Comments

Friday Squid Blogging: 20-Foot Squid Caught in the Gulf of Mexico

First one sighted in the Gulf since 1954:

The new specimen, weighing 103 pounds, was found during a preliminary survey of the Gulf during which scientists hope to identify the types of fish and squid that sperm whales feed on.

The squid, like other deep catches, was dead when brought to the surface because the animals can’t survive the rapid changes in water depth as they are hauled in. Scientists at the Smithsonian Museum in Washington are studying the specimen further to determine its exact species.

Giant squid can grow up to 40 feet in length, and because scientists know so little about them, they’re not sure if the Gulf specimen is a full-grown adult, Epperson said.

Posted on September 25, 2009 at 1:04 PM • 7 Comments

Texas Instruments Signing Keys Broken

Texas Instruments’ calculators use RSA digital signatures to authenticate any updates to their operating system. Unfortunately, their signing keys are too short: 512 bits. Earlier this month, a collaborative effort factored the moduli and published the private keys. Texas Instruments responded with DMCA threats against websites that published the keys, but it’s too late.
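
Once the modulus is factored, recovering the signing key is a few lines of arithmetic. A sketch (the public exponent values here are assumptions for illustration; TI’s actual exponents may differ):

```python
def signing_key_from_factors(p, q, e=65537):
    """Given the two primes of a factored RSA modulus, recover the private exponent."""
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse of e mod phi (Python 3.8+)

# Tiny textbook-sized demo values -- the real TI moduli are 512 bits:
p, q = 61, 53
d = signing_key_from_factors(p, q, e=17)
assert pow(pow(42, d, p * q), 17, p * q) == 42  # sign, then verify, round-trips
```

With the private exponent in hand, anyone can sign operating-system images that the calculators will accept.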

So far, we have the operating-system signing keys for the TI-92+, TI-73, TI-89, TI-83+/TI-83+ Silver Edition, Voyage 200, TI-89 Titanium, and the TI-84+/TI-84 Silver Edition, and the date-stamp signing key for the TI-73, Explorer, TI-83 Plus, TI-83 Silver Edition, TI-84 Plus, TI-84 Silver Edition, TI-89, TI-89 Titanium, TI-92 Plus, and the Voyage 200.

Moral: Don’t assume that just because your application is obscure, or because there’s no obvious financial incentive for breaking it, your too-short keys won’t be broken.

Posted on September 25, 2009 at 6:17 AM • 49 Comments

Sears Spies on its Customers

It’s not just hackers who steal financial and medical information:

Between April 2007 and January 2008, visitors to the Kmart and Sears web sites were invited to join an “online community” for which they would be paid $10 with the idea they would be helping the company learn more about their customers. It turned out they learned a lot more than participants realized or that the feds thought was reasonable.

To join the “My SHC Community,” users downloaded software that ended up grabbing some members’ prescription information, emails, bank account data and purchases on other sites.

Reminds me of the 2005 Sony rootkit, which—oddly enough—is in the news again too:

After purchasing an Anastacia CD, the plaintiff played it in his computer but his anti-virus software set off an alert saying the disc was infected with a rootkit. He went on to test the CD on three other computers. As a result, the plaintiff ended up losing valuable data.

Claiming for his losses, the plaintiff demanded 200 euros for 20 hours wasted dealing with the virus alerts and another 100 euros for 10 hours spent restoring lost data. Since the plaintiff was self-employed, he also claimed for loss of profits and in addition claimed 800 euros which he paid to a computer expert to repair his network after the infection. Added to this was 185 euros in legal costs making a total claim of around 1,500 euros.

The judge’s assessment was that the CD sold to the plaintiff was faulty, since he should be able to expect that the CD could play on his system without interfering with it.

The court ordered the retailer of the CD to pay damages of 1,200 euros.

Posted on September 24, 2009 at 6:37 AM • 31 Comments

Monopoly Sets for WWII POWs: More Information

I already blogged about this; there’s more information in this new article:

Included in the items the German army allowed humanitarian groups to distribute in care packages to imprisoned soldiers, the game was too innocent to raise suspicion. But it was the ideal size for a top-secret escape kit that could help spring British POWs from German war camps.

The British secret service conspired with the U.K. manufacturer to stuff a compass, small metal tools, such as files, and, most importantly, a map, into cut-out compartments in the Monopoly board itself.

Posted on September 23, 2009 at 1:43 PM • 29 Comments

Eliminating Externalities in Financial Security

This is a good thing:

An Illinois district court has allowed a couple to sue their bank on the novel grounds that it may have failed to sufficiently secure their account, after an unidentified hacker obtained a $26,500 loan on the account using the customers’ user name and password.

[…]

In February 2007, someone with a different IP address than the couple gained access to Marsha Shames-Yeakel’s online banking account using her user name and password and initiated an electronic transfer of $26,500 from the couple’s home equity line of credit to her business account. The money was then transferred through a bank in Hawaii to a bank in Austria.

The Austrian bank refused to return the money, and Citizens Financial insisted that the couple be liable for the funds and began billing them for it. When they refused to pay, the bank reported them as delinquent to the national credit reporting agencies and threatened to foreclose on their home.

The couple sued the bank, claiming violations of the Electronic Funds Transfer Act and the Fair Credit Reporting Act, claiming, among other things, that the bank reported them as delinquent to credit reporting agencies without telling the agencies that the debt in question was under dispute and was the result of a third-party theft. The couple wrote 19 letters disputing the debt, but began making monthly payments to the bank for the stolen funds in late 2007 following the bank’s foreclosure threats.

In addition to these claims, the plaintiffs also accused the bank of negligence under state law.

According to the plaintiffs, the bank had a common law duty to protect their account information from identity theft and failed to maintain state-of-the-art security standards. Specifically, the plaintiffs argued, the bank used only single-factor authentication for customers logging into its server (a user name and password) instead of multi-factor authentication, such as combining the user name and password with a token the customer possesses that authenticates the customer’s computer to the bank’s server or dynamically generates a single-use password for logging in.

As I’ve previously written, this is the only way to mitigate this kind of fraud:

Fraudulent transactions have nothing to do with the legitimate account holders. Criminals impersonate legitimate users to financial institutions. That means that any solution can’t involve the account holders. That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

They can’t claim that the user must keep his password secure or his machine virus free. They can’t require the user to monitor his accounts for fraudulent activity, or his credit reports for fraudulently obtained credit cards. Those aren’t reasonable requirements for most users. The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions. They’ve pushed most of the actual costs onto the merchants. And almost no security centers around trying to authenticate the cardholder.

It’s an important security principle: ensure that the person who has the ability to mitigate the risk is responsible for the risk. In this case, the account holders had nothing to do with the security of their account. They could not audit it. They could not improve it. The bank, on the other hand, has the ability to improve security and mitigate the risk, but because they pass the cost on to their customers, they have no incentive to do so. Litigation like this has the potential to fix the externality and improve security.

Posted on September 23, 2009 at 7:13 AM • 58 Comments

Quantum Computer Factors the Number 15

This is an important development:

Shor’s algorithm was first demonstrated in a computing system based on nuclear magnetic resonance—manipulating molecules in a solution with strong magnetic fields. It was later demonstrated with quantum optical methods but with the use of bulk components like mirrors and beam splitters that take up an unwieldy area of several square meters.

Last year, the Bristol researchers showed they could miniaturize this optical setup, building a quantum photonic circuit on a silicon chip mere millimeters square. They replaced mirrors and beam splitters with waveguides that weave their way around the chip and interact to split, reflect, and transmit light through the circuit. They then injected photons into the waveguides to act as their qubits.

Now they’ve put their circuit to work: Using four photons that pass through a sequence of quantum logic gates, the optical circuit helped find the prime factors of the number 15. While the researchers showed that it was possible to solve for the factors, the chip itself didn’t just spit out 5 and 3. Instead, it came up with an answer to the “order-finding routine,” the “computationally hard” part of Shor’s algorithm that requires a quantum calculation to solve the problem in a reasonable amount of time, according to Jeremy O’Brien, a professor of physics and electrical engineering at the University of Bristol. The researchers then finished the computation using an ordinary computer to finally come up with the correct factors.
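
The quantum circuit’s only job is that order-finding step; the rest of Shor’s algorithm is classical and easy to sketch (a toy illustration, not the Bristol group’s code):

```python
from math import gcd

def factors_from_order(N, a, r):
    """Classical post-processing of Shor's algorithm: given the order r of a
    modulo N (the part the quantum circuit computes), derive factors of N."""
    if r % 2:
        raise ValueError("need an even order; pick a different a")
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

# For N = 15 and a = 7, the order is 4, so the factors fall out classically:
print(factors_from_order(15, 7, 4))  # (3, 5)
```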

I’ve written about quantum computing (and quantum cryptography):

Several groups are working on designing and building a quantum computer, which is fundamentally different from a classical computer. If one were built—and we’re talking science fiction here—then it could factor numbers and solve discrete-logarithm problems very quickly. In other words, it could break all of our commonly used public-key algorithms. For symmetric cryptography it’s not that dire: A quantum computer would effectively halve the key length, so that a 256-bit key would be only as secure as a 128-bit key today. Pretty serious stuff, but years away from being practical.

Here’s a really good essay on the realities of quantum computing and its potential effects on public-key cryptography.

Posted on September 22, 2009 at 2:00 PM • 34 Comments

Hacking Two-Factor Authentication

Back in 2005, I wrote about the failure of two-factor authentication to mitigate banking fraud:

Here are two new active attacks we’re starting to see:

  • Man-in-the-Middle attack. An attacker puts up a fake bank website and entices user to that website. User types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.
  • Trojan attack. Attacker gets Trojan installed on user’s computer. When user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.

See how two-factor authentication doesn’t solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.

Here’s an example:

The theft happened despite Ferma’s use of a one-time password, a six-digit code issued by a small electronic device every 30 or 60 seconds. Online thieves have adapted to this additional security by creating special programs—real-time Trojan horses—that can issue transactions to a bank while the account holder is online, turning the one-time password into a weak link in the financial security chain. “I think it’s a broken model,” Ferrari says.

Of course it’s a broken model. We have to stop trying to authenticate the person; instead, we need to authenticate the transaction:

One way to think about this is that two-factor authentication solves security problems involving authentication. The current wave of attacks against financial systems are not exploiting vulnerabilities in the authentication system, so two-factor authentication doesn’t help.

Security is always an arms race, and you could argue that this situation is simply the cost of treading water. The problem with this reasoning is it ignores countermeasures that permanently reduce fraud. By concentrating on authenticating the individual rather than authenticating the transaction, banks are forced to defend against criminal tactics rather than the crime itself.

Credit cards are a perfect example. Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card’s existence isn’t even verified. The credit card companies spend their security dollar authenticating the transaction, not the cardholder.
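
In code, “authenticate the transaction” means binding the authenticator to the transaction details themselves, so a man-in-the-middle who changes the payee or the amount invalidates it. A minimal sketch (hypothetical field names; a real system also needs a trusted display for the user and replay protection):

```python
import hashlib
import hmac

def transaction_tag(key: bytes, payer: str, payee: str, amount_cents: int, nonce: str) -> str:
    """Compute a short authenticator over the transaction details, to be
    confirmed on a device the attacker doesn't control."""
    message = f"{payer}|{payee}|{amount_cents}|{nonce}".encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()[:8]

# The bank and the customer's token both compute the tag; it only verifies
# if the payee and amount the user saw are the ones the bank received.
print(transaction_tag(b"shared-secret", "alice", "bob", 2650000, "tx-0001"))
```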

More on mitigating identity theft.

Posted on September 22, 2009 at 6:39 AM • 98 Comments

Inferring Friendship from Location Data

Interesting:

For nine months, Eagle’s team recorded data from the phones of 94 students and staff at MIT. By using Bluetooth technology and phone masts, they could monitor the movements of the participants, as well as their phone calls. Their main goal with this preliminary study was to compare data collected from the phones with subjective self-report data collected through traditional survey methodology.

The participants were asked to estimate their average spatial proximity to the other participants, whether they were close friends, and to indicate how satisfied they were at work.

Some intriguing findings emerged. For example, the researchers could predict with around 95 per cent accuracy who was friends with whom by looking at how much time participants spent with each other during key periods, such as Saturday nights.

According to the abstract:

Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction.
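
A toy version of that co-location signal (not the paper’s model; the Saturday-evening window and the threshold are arbitrary choices of mine):

```python
from collections import Counter

def predict_friendships(proximity_events, threshold=5):
    """proximity_events: iterable of (person_a, person_b, datetime) Bluetooth sightings.
    Flag pairs who are repeatedly co-located on Saturday evenings as probable friends."""
    counts = Counter()
    for a, b, when in proximity_events:
        if when.weekday() == 5 and when.hour >= 19:  # Saturday night
            counts[frozenset((a, b))] += 1
    return {pair for pair, n in counts.items() if n >= threshold}
```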

We all leave data shadows everywhere we go, and maintaining privacy is very hard. Here’s the EFF writing about locational privacy.

EDITED TO ADD (10/12): More information.

Posted on September 21, 2009 at 1:41 PM • 15 Comments

Terrorist Havens

Good essay on “terrorist havens”—like Afghanistan—and why they’re not as big a worry as some maintain:

Rationales for maintaining the counterinsurgency in Afghanistan are varied and complex, but they all center on one key tenet: that Afghanistan must not be allowed to again become a haven for terrorist groups, especially al-Qaeda.

[…]

The debate has largely overlooked a more basic question: How important to terrorist groups is any physical haven? More to the point: How much does a haven affect the danger of terrorist attacks against U.S. interests, especially the U.S. homeland? The answer to the second question is: not nearly as much as unstated assumptions underlying the current debate seem to suppose. When a group has a haven, it will use it for such purposes as basic training of recruits. But the operations most important to future terrorist attacks do not need such a home, and few recruits are required for even very deadly terrorism. Consider: The preparations most important to the Sept. 11, 2001, attacks took place not in training camps in Afghanistan but, rather, in apartments in Germany, hotel rooms in Spain and flight schools in the United States.

In the past couple of decades, international terrorist groups have thrived by exploiting globalization and information technology, which has lessened their dependence on physical havens.

By utilizing networks such as the Internet, terrorists’ organizations have become more network-like, not beholden to any one headquarters. A significant jihadist terrorist threat to the United States persists, but that does not mean it will consist of attacks instigated and commanded from a South Asian haven, or that it will require a haven at all. Al-Qaeda’s role in that threat is now less one of commander than of ideological lodestar, and for that role a haven is almost meaningless.

Posted on September 21, 2009 at 6:46 AM • 50 Comments

Modifying the Color-Coded Threat Alert System

I wrote about the DHS’s color-coded threat alert system in 2003, in Beyond Fear:

The color-coded threat alerts issued by the Department of Homeland Security are useless today, but may become useful in the future. The U.S. military has a similar system; DEFCON 1-5 corresponds to the five threat alert levels: Green, Blue, Yellow, Orange, and Red. The difference is that the DEFCON system is tied to particular procedures; military units have specific actions they need to perform every time the DEFCON level goes up or down. The color-alert system, on the other hand, is not tied to any specific actions. People are left to worry, or are given nonsensical instructions to buy plastic sheeting and duct tape. Even local police departments and government organizations largely have no idea what to do when the threat level changes. The threat levels actually do more harm than good, by needlessly creating fear and confusion (which is an objective of terrorists) and anesthetizing people to future alerts and warnings. If the color-alert system became something better defined, so that people know exactly what caused the levels to change, what the change means, and what actions they need to take in the event of a change, then it could be useful. But even then, the real measure of effectiveness is in the implementation. Terrorist attacks are rare, and if the color-threat level changes willy-nilly with no obvious cause or effect, then people will simply stop paying attention. And the threat levels are publicly known, so any terrorist with a lick of sense will simply wait until the threat level goes down.

Of course, the codes never became useful. There were never any actions associated with them. And we now know that their primary use was political. They were, and remain, a security joke.

This is what I wrote in 2004:

The DHS’s threat warnings have been vague, indeterminate, and unspecific. The threat index goes from yellow to orange and back again, although no one is entirely sure what either level means. We’ve been warned that the terrorists might use helicopters, scuba gear, even cheap prescription drugs from Canada. New York and Washington, D.C., were put on high alert one day, and the next day told that the alert was based on information years old. The careful wording of these alerts allows them not to require any sound, confirmed, accurate intelligence information, while at the same time guaranteeing hysterical media coverage. This headline-grabbing stuff might make for good movie plots, but it doesn’t make us safer.

This kind of behavior is all that’s needed to generate widespread fear and uncertainty. It keeps the public worried about terrorism, while at the same time reminding them that they’re helpless without the government to defend them.

It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People respond to the warning, and there is a discrete period when their lives are markedly different. They feel there was a usefulness to the higher alert mode, even if nothing came of it.

It’s quite another to tell people to remain on alert, but not to alter their plans. According to scientists, California is expecting a huge earthquake sometime in the next 200 years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for 200 years. It goes against human nature. Residents of California have the same level of short-term fear and long-term apathy regarding the threat of earthquakes that the rest of the nation has developed regarding the DHS’s terrorist threat alert.

A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Even worse, it echoes the very tactics of the terrorists. There are two basic ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

Finally, in 2009, the DHS is considering changes to the system:

A proposal by the Homeland Security Advisory Council, unveiled late Tuesday, recommends removing two of the five colors, with a standard state of affairs being a “guarded” Yellow. The Green “low risk of terrorist attacks” might get removed altogether, meaning stay prepared for your morning subway commute to turn deadly at any moment.

That’s right, according to the DHS the problem was too many levels. I hope you all feel safer now.

Here are some more whimsical designs, but I want the whole thing to be ditched. And it should be easy to ditch; no one thinks it has any value. Unfortunately, if the Obama Administration can’t make this simple change, I don’t think they have the political will to make any of the harder changes we need.

Posted on September 18, 2009 at 6:45 AM • 44 Comments

Printing Police Handcuff Keys

Using a 3D printer. Impressive.

At the end of the day he talked the officers into trying the key on their handcuffs and … it did work! At least the Dutch Police now knows there is a plastic key on the market that will open their handcuffs. A plastic key undetectable by metal detectors….

EDITED TO ADD (10/12): Additional comments from the author.

Posted on September 16, 2009 at 9:00 AM • 42 Comments

Skein News

Skein is one of the 14 SHA-3 candidates chosen by NIST to advance to the second round. As part of the process, NIST allowed the algorithm designers to implement small “tweaks” to their algorithms. We’ve tweaked the rotation constants of Skein. This change does not affect Skein’s performance in any way.

The revised Skein paper contains the new rotation constants, as well as information about how we chose them and why we changed them, the results of some new cryptanalysis, plus new IVs and test vectors. Revised source code is here.

The latest information on Skein is always here.

Tweaks were due today, September 15. Now the SHA-3 process moves into the second round. According to NIST’s timeline, they’ll choose a set of final round candidate algorithms in 2010, and then a single hash algorithm in 2012. Between now and then, it’s up to all of us to evaluate the algorithms and let NIST know what we want. Cryptanalysis is important, of course, but so is performance.

Here’s my 2008 essay on SHA-3. The second-round algorithms are: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein. You can find details on all of them, as well as the current state of their cryptanalysis, here.

In other news, we’re making Skein shirts available to the public. Those of you who attended the First Hash Function Candidate Conference in Leuven, Belgium, earlier this year might have noticed the stylish black Skein polo shirts worn by the Skein team. Anyone who wants one is welcome to buy it, at cost. Details (with photos) are here. All orders must be received before 1 October, and then we’ll have all the shirts made in one batch.

Posted on September 15, 2009 at 6:10 AM • 27 Comments

Robert Sawyer’s Alibis

Back in 2002, science fiction author Robert J. Sawyer wrote an essay about the trade-off between privacy and security, and came out in favor of less privacy. I disagree with most of what he said, and have written pretty much the opposite essay—and others on the value of privacy and the future of privacy—several times since then.

The point of this blog entry isn’t really to debate the topic, though. It’s to reprint the opening paragraph of Sawyer’s essay, which I’ve never forgotten:

Whenever I visit a tourist attraction that has a guest register, I always sign it. After all, you never know when you’ll need an alibi.

Since I read that, whenever I see a tourist attraction with a guest register, I do the same thing. I sign “Robert J. Sawyer, Toronto, ON”—because you never know when he’ll need an alibi.

EDITED TO ADD (9/15): Sawyer’s essay now has a preface, which states that he wrote it to promote a book of his:

The following was written as promotion for my science-fiction novel Hominids, and does not necessarily reflect the author’s personal views.

In the comments below, though, Sawyer says that the essay does not reflect his personal views. So I’m not sure about the waffling on the essay page.

I am completely surprised that Sawyer’s essay was fictional. For years I thought that he meant what he wrote, that it was a non-fiction essay written for a non-fiction publication. He has other essays on his website; I have no idea if any of those reflect his personal views. The whole thing makes absolutely no sense to me.

Posted on September 14, 2009 at 7:24 AM • 127 Comments

Eighth Anniversary of 9/11

On September 30, 2001, I published a special issue of Crypto-Gram discussing the terrorist attacks. I wrote about the novelty of the attacks, airplane security, diagnosing intelligence failures, the potential of regulating cryptography—because it could be used by the terrorists—and protecting privacy and liberty. Much of what I wrote is still relevant today:

Appalled by the recent hijackings, many Americans have declared themselves willing to give up civil liberties in the name of security. They’ve declared it so loudly that this trade-off seems to be a fait accompli. Article after article talks about the balance between privacy and security, discussing whether various increases of security are worth the privacy and civil-liberty losses. Rarely do I see a discussion about whether this linkage is a valid one.

Security and privacy are not two sides of a teeter-totter. This association is simplistic and largely fallacious. It’s easy and fast, but less effective, to increase security by taking away liberty. However, the best ways to increase security are not at the expense of privacy and liberty.

It’s easy to refute the notion that all security comes at the expense of liberty. Arming pilots, reinforcing cockpit doors, and teaching flight attendants karate are all examples of security measures that have no effect on individual privacy or liberties. So are better authentication of airport maintenance workers, or dead-man switches that force planes to automatically land at the closest airport, or armed air marshals traveling on flights.

Liberty-depriving security measures are most often found when system designers failed to take security into account from the beginning. They’re Band-aids, and evidence of bad security planning. When security is designed into a system, it can work without forcing people to give up their freedoms.

[…]

There are copycat criminals and terrorists, who do what they’ve seen done before. To a large extent, this is what the hastily implemented security measures have tried to prevent. And there are the clever attackers, who invent new ways to attack people. This is what we saw on September 11. It’s expensive, but we can build security to protect against yesterday’s attacks. But we can’t guarantee protection against tomorrow’s attacks: the hacker attack that hasn’t been invented, or the terrorist attack yet to be conceived.

Demands for even more surveillance miss the point. The problem is not obtaining data, it’s deciding which data is worth analyzing and then interpreting it. Everyone already leaves a wide audit trail as we go through life, and law enforcement can already access those records with search warrants. The FBI quickly pieced together the terrorists’ identities and the last few months of their lives, once they knew where to look. If they had thrown up their hands and said that they couldn’t figure out who did it or how, they might have a case for needing more surveillance data. But they didn’t, and they don’t.

More data can even be counterproductive. The NSA and the CIA have been criticized for relying too much on signals intelligence, and not enough on human intelligence. The East German police collected data on four million East Germans, roughly a quarter of their population. Yet they did not foresee the peaceful overthrow of the Communist government because they invested heavily in data collection instead of data interpretation. We need more intelligence agents squatting on the ground in the Middle East arguing the Koran, not sitting in Washington arguing about wiretapping laws.

People are willing to give up liberties for vague promises of security because they think they have no choice. What they’re not being told is that they can have both. It would require people to say no to the FBI’s power grab. It would require us to discard the easy answers in favor of thoughtful answers. It would require structuring incentives to improve overall security rather than simply decreasing its costs. Designing security into systems from the beginning, instead of tacking it on at the end, would give us the security we need, while preserving the civil liberties we hold dear.

Some broad surveillance, in limited circumstances, might be warranted as a temporary measure. But we need to be careful that it remain temporary, and that we do not design surveillance into our electronic infrastructure. Thomas Jefferson once said: “Eternal vigilance is the price of liberty.” Historically, liberties have always been a casualty of war, but a temporary casualty. This war—a war without a clear enemy or end condition—has the potential to turn into a permanent state of society. We need to design our security accordingly.

Posted on September 11, 2009 at 6:26 AM • 31 Comments

File Deletion

File deletion is all about control. This used to not be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn’t care about whether the file could be recovered or not, and a file erase program—I use BCWipe for Windows—if you wanted to ensure no one could ever recover the file.

As we move more of our data onto cloud computing platforms such as Gmail and Facebook, and closed proprietary platforms such as the Kindle and the iPhone, deleting data is much harder.

You have to trust that these companies will delete your data when you ask them to, but they’re generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies’ backup systems. Gmail explicitly says this in its privacy notice.

Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you’re not in control of the computers that are storing the data.

This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people’s Kindles entirely.

Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an email, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one—not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you—will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won’t be able to read it.

The details are complicated, but Vanish breaks the data’s decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks—machines constantly join and leave—to make the data disappear. Unlike previous programs that supported file deletion, this one doesn’t require you to trust any company, organisation, or website. It just happens.
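
The key-splitting idea itself is easy to sketch. Vanish uses a threshold scheme, so only a subset of the pieces is needed to rebuild the key; even the simplest all-or-nothing XOR split, though, shows why the key evaporates as peer-to-peer nodes churn (a toy illustration, not the Vanish code):

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int):
    """Split a key into n random-looking shares; all n are needed to rebuild it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))
    return shares

key = os.urandom(16)               # the data's encryption key
shares = split_key(key, 8)         # scatter these across peer-to-peer nodes
assert reduce(xor, shares) == key  # recombining works while the shares still exist
# Once enough nodes holding shares leave the network, the key -- and the data -- is gone.
```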

Of course, Vanish doesn’t prevent the recipient of an email or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle’s deletion feature doesn’t prevent people from copying a book’s files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it’s a good demonstration of how control affects file deletion. And while it’s a step in the right direction, it’s also new and therefore deserves further security analysis before being adopted on a wide scale.

We’ve lost the control of data on some of the computers we own, and we’ve lost control of our data in the cloud. We’re not going to stop using Facebook and Twitter just because they’re not going to delete our data when we ask them to, and we’re not going to stop using Kindles and iPhones because they may delete our data when we don’t want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.

Now we need something that will protect our data when a large corporation decides to delete it.

This essay originally appeared in The Guardian.

EDITED TO ADD (9/30): Vanish has been broken, paper here.

Posted on September 10, 2009 at 6:08 AM • 63 Comments

NSA Intercepts Used to Convict Liquid Bombers

Three of the UK liquid bombers were convicted Monday. NSA-intercepted e-mail was introduced as evidence in the trial:

The e-mails, several of which have been reprinted by the BBC and other publications, contained coded messages, according to prosecutors. They were intercepted by the NSA in 2006 but were not included in evidence introduced in a first trial against the three last year.

That trial resulted in the men being convicted of conspiracy to commit murder; but a jury was not convinced that they had planned to use soft drink bottles filled with liquid explosives to blow up seven trans-Atlantic planes—the charge for which they were convicted this week in a second trial.

According to Channel 4, the NSA had previously shown the e-mails to their British counterparts, but refused to let prosecutors use the evidence in the first trial, because the agency didn’t want to tip off an alleged accomplice in Pakistan named Rashid Rauf that his e-mail was being monitored. U.S. intelligence agents said Rauf was al Qaeda’s director of European operations at the time and that the bomb plot was being directed by Rauf and others in Pakistan.

The NSA later changed its mind and allowed the evidence to be introduced in the second trial, which was crucial to getting the jury conviction. Channel 4 suggests the NSA’s change of mind occurred after Rauf, a Briton born of Pakistani parents, was reportedly killed last year by a U.S. drone missile that struck a house where he was staying in northern Pakistan.

Although British prosecutors were eager to use the e-mails in their second trial against the three plotters, British courts prohibit the use of evidence obtained through interception. So last January, a U.S. court issued warrants directly to Yahoo to hand over the same correspondence.

It’s unclear if the NSA intercepted the messages as they passed through internet nodes based in the U.S. or intercepted them overseas.

EDITED TO ADD (9/9): Just to be sure, this has nothing to do with any illegal warrantless wiretapping the NSA has done over the years; the wiretap used to intercept these e-mails was obtained with a FISA warrant.

Posted on September 9, 2009 at 10:10 AM29 Comments

The Global Illicit Economy

Interesting video:

A new class of global actors is playing an increasingly important role in globalization: smugglers, warlords, guerrillas, terrorists, gangs, and bandits of all stripes. Since the end of the Cold War, the global illicit economy has consistently grown at twice the rate of the licit global economy. Increasingly, illicit actors will represent not just an economic but a political force. As globalization hollows out traditional nation-states, what will fill the power vacuum in slums and hinterlands will be informal non-state governance structures. These zones will be globally connected, effectively run by local gangs, religious leaders, or quasi-tribal organizations—organizations that will govern without aspiring to statehood.

Malware is one of Nils Gilman’s examples, at about the nine-minute mark.

The seven rules of the illicit global economy (he seems to use “illicit” and “deviant” interchangeably in the talk):

  1. Perfectly legitimate forms of demand can produce perfectly deviant forms of supply.
  2. Uneven global regulatory structures create arbitrage opportunities for deviant entrepreneurs.
  3. Pathways for legitimate globalization are always also pathways for deviant globalization.
  4. Once a deviant industry professionalizes, crackdowns merely promote innovation.
  5. States themselves undermine the distinction between legitimate and deviant economics.
  6. Unchecked, deviant entrepreneurs will overtake the legitimate economy.
  7. Deviant globalization presents an existential challenge to state legitimacy.

Posted on September 8, 2009 at 7:12 AM53 Comments

David Kilcullen on Security and Insurgency

Very interesting hour-long interview.

Australian-born David Kilcullen was the senior advisor to US General David Petraeus during his time in Iraq, advising on counterinsurgency. The implementation of his strategies is now regarded as a major turning point in the war.

Here, in a fascinating discussion with human rights lawyer Julian Burnside at the Melbourne Writers’ Festival, he talks about the ethics and tactics of contemporary warfare.

Posted on September 7, 2009 at 7:33 AM15 Comments

Subpoenas as a Security Threat

Blog post from Ed Felten:

Usually when the threat model mentions subpoenas, the bigger threats in reality come from malicious intruders or insiders. The biggest risk in storing my documents on CloudCorp’s servers is probably that somebody working at CloudCorp, or a contractor hired by them, will mess up or misbehave.

So why talk about subpoenas rather than intruders or insiders? Perhaps this kind of talk is more diplomatic than the alternative. If I’m talking about the risks of Gmail, I might prefer not to point out that my friends at Google could hire someone who is less than diligent, or less than honest. If I talk about subpoenas as the threat, nobody in the room is offended, and the security measures I recommend might still be useful against intruders and insiders. It’s more polite to talk about data losses that are compelled by a mysterious, powerful Other—in this case an Anonymous Lawyer.

Politeness aside, overemphasizing subpoena threats can be harmful in at least two ways. First, we can easily forget that enforcement of subpoenas is often, though not always, in society’s interest. Our legal system works better when fact-finders have access to a broader range of truthful evidence. That’s why we have subpoenas in the first place. Not all subpoenas are good—and in some places with corrupt or evil legal systems, subpoenas deserve no legitimacy at all—but we mustn’t lose sight of society’s desire to balance the very real cost imposed on the subpoena’s target and affected third parties, against the usefulness of the resulting evidence in administering justice.

The second harm is to security. To the extent that we focus on the subpoena threat, rather than the larger threats of intruders and insiders, we risk finding “solutions” that fail to solve our biggest problems. We might get lucky and end up with a solution that happens to address the bigger threats too. We might even design a solution for the bigger threats, and simply use subpoenas as a rhetorical device in explaining our solution—though it seems risky to mislead our audience about our motivations. If our solution flows from our threat model, as it should, then we need to be very careful to get our threat model right.

Posted on September 4, 2009 at 6:18 AM24 Comments

"The Cult of Schneier"

If there’s actually a cult out there, I want to hear about it. In an essay by that name, John Viega writes about the dangers of relying on Applied Cryptography to design cryptosystems:

But, after many years of evaluating the security of software systems, I’m incredibly down on using the book that made Bruce famous when designing the cryptographic aspects of a system. In fact, I can safely say I have never seen a secure system come out the other end, when that is the primary source for the crypto design. And I don’t mean that people forget about the buffer overflows. I mean, the crypto is crappy.

My rule for software development teams is simple: Don’t use Applied Cryptography in your system design. It’s fine and fun to read it, just don’t build from it.

[…]

The book talks about the fundamental building blocks of cryptography, but there is no guidance on things like putting together all the pieces to create a secure, authenticated connection between two parties.

Plus, in the nearly 13 years since the book was last revised, our understanding of cryptography has changed greatly. There are things in it that were thought to be true at the time that turned out to be very false….

I agree. And, to his credit, Viega points out that I agree:

But in the introduction to Bruce Schneier’s book, Practical Cryptography, he himself says that the world is filled with broken systems built from his earlier book. In fact, he wrote Practical Cryptography in hopes of rectifying the problem.

This is all true.

Designing a cryptosystem is hard. Just as you wouldn’t give a person—even a doctor—a brain-surgery instruction manual and then expect him to operate on live patients, you shouldn’t give an engineer a cryptography book and then expect him to design and implement a cryptosystem. The patient is unlikely to survive, and the cryptosystem is unlikely to be secure.

Even worse, security doesn’t provide immediate feedback. A dead patient on the operating table tells the doctor that maybe he doesn’t understand brain surgery just because he read a book, but an insecure cryptosystem works just fine. It’s not until someone takes the time to break it that the engineer might realize that he didn’t do as good a job as he thought. Remember: Anyone can design a security system that he himself cannot break. Even the experts regularly get it wrong. The odds that an amateur will get it right are extremely low.

For those who are interested, a second edition of Practical Cryptography will be published in early 2010, renamed Cryptography Engineering and featuring a third author: Tadayoshi Kohno.

EDITED TO ADD (9/16): Commentary.

Posted on September 3, 2009 at 1:56 PM63 Comments

Real-World Access Control

Access control is difficult in an organizational setting. On one hand, every employee needs enough access to do his job. On the other hand, every time you give an employee more access, there’s more risk: he could abuse that access, or lose information he has access to, or be socially engineered into giving that access to a malfeasant. So a smart, risk-conscious organization will give each employee the exact level of access he needs to do his job, and no more.

Over the years, there’s been a lot of work put into role-based access control. But despite the large number of academic papers and high-profile security products, most organizations don’t implement it—at all—with the predictable security problems as a result.
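To make the idea concrete, here is a minimal sketch of RBAC in Python. The role and permission names are hypothetical, and real products layer role hierarchies, constraints, and provisioning workflows on top of this; but at its core, a role is just a named set of permissions, and an access check is set membership.

    # Toy role-based access control: roles are named permission sets.
    ROLE_PERMISSIONS = {
        "claims_clerk":   {"read_claim", "update_claim"},
        "claims_manager": {"read_claim", "update_claim", "approve_claim"},
        "auditor":        {"read_claim", "read_audit_log"},
    }

    USER_ROLES = {
        "alice": {"claims_clerk"},
        "bob":   {"claims_manager", "auditor"},
    }

    def is_allowed(user, permission):
        """A user may perform an action if any of their roles grants it."""
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    print(is_allowed("alice", "approve_claim"))  # False -- clerks can't approve
    print(is_allowed("bob", "approve_claim"))    # True  -- managers can

Even in this toy version, the access check is the easy part. The hard part is keeping the role and user tables accurate as jobs change, which is exactly where organizations fail.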

Regularly we read stories of employees abusing their database access-control privileges for personal reasons: medical records, tax records, passport records, police records. NSA eavesdroppers spy on their wives and girlfriends. Departing employees take corporate secrets with them.

A spectacular access control failure occurred in the UK in 2007. An employee of Her Majesty’s Revenue & Customs had to send a couple of thousand sample records from a database of all children in the country to the National Audit Office. But it was easier for him to copy the entire database of 25 million people onto a couple of disks and put them in the mail than it was to select out just the records needed. Unfortunately, the disks got lost in the mail, and the story was a huge embarrassment for the government.

Eric Johnson at Dartmouth’s Tuck School of Business has been studying the problem, and his results won’t startle anyone who has thought about it at all. RBAC is very hard to implement correctly. Organizations generally don’t even know who has what role. The employee doesn’t know, the boss doesn’t know—and these days the employee might have more than one boss—and senior management certainly doesn’t know. There’s a reason RBAC came out of the military; in that world, command structures are simple and well-defined.

Even worse, employees’ roles change all the time—Johnson chronicled one business group of 3,000 people that made 1,000 role changes in just three months—and it’s often not obvious what information an employee needs until he actually needs it. And information simply isn’t that granular. Just as it’s much easier to give someone access to an entire file cabinet than to only the particular files he needs, it’s much easier to give someone access to an entire database than only the particular records he needs.

This means that organizations either over-entitle or under-entitle employees. But since getting the job done is more important than anything else, organizations tend to over-entitle. Johnson estimates that 50 percent to 90 percent of employees are over-entitled in large organizations. In the uncommon instance where an employee needs access to something he normally doesn’t have, there’s generally some process for him to get it. And access is almost never revoked once it’s been granted. In large formal organizations, Johnson was able to predict how long an employee had worked there based on how much access he had.

Clearly, organizations can do better. Johnson’s current work involves building access-control systems with easy self-escalation, audit to make sure that power isn’t abused, violation penalties (Intel, for example, issues “speeding tickets” to violators), and compliance rewards. His goal is to implement incentives and controls that manage access without making people too risk-averse.
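As a hypothetical sketch of what self-escalation with an audit trail might look like (the function and field names are mine, not Johnson’s): access is granted immediately so work isn’t blocked, but every escalation is recorded, with a stated reason, for later review.

    import datetime

    audit_log = []  # in practice: an append-only, tamper-evident store

    def self_escalate(user, resource, reason):
        """Grant access right away, but record who, what, when, and why."""
        audit_log.append({
            "user": user,
            "resource": resource,
            "reason": reason,
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return True  # never block up front; abuse is caught in review instead

    # Later, an auditor samples the log and issues "speeding tickets" for
    # escalations that lack a plausible business reason.
    self_escalate("alice", "customer_db/record/4711", "working support ticket 982")
    for entry in audit_log:
        print(entry)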

In the end, a perfect access control system just isn’t possible; organizations are simply too chaotic for it to work. And any good system will allow a certain number of access control violations, if they’re made in good faith by people just trying to do their jobs. The “speeding ticket” analogy is better than it looks: we post limits of 55 miles per hour, but generally don’t start ticketing people unless they’re going over 70.

This essay previously appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus’s response here—after you answer some nosy questions to get a free account.

Posted on September 3, 2009 at 12:54 PM26 Comments

The History of One-Time Pads and the Origins of SIGABA

Blog post from Steve Bellovin:

It is vital that the keystream values (a) be truly random and (b) never be reused. The Soviets got that wrong in the 1940s; as a result, the U.S. Army’s Signal Intelligence Service was able to read their spies’ traffic in the Venona program. The randomness requirement means that the values cannot be generated by any algorithm; they really have to be random, and created by a physical process, not a mathematical one.

A consequence of these requirements is that the key stream must be as long as the data to be encrypted. If you want to encrypt a 1 megabyte file, you need 1 megabyte of key stream that you somehow have to share securely with the recipient. The recipient, in turn, has to store this data securely. Furthermore, both the sender and the recipient must ensure that they never, ever reuse the key stream. The net result is that, as I’ve often commented, “one-time pads are theoretically unbreakable, but practically very weak. By contrast, conventional ciphers are theoretically breakable, but practically strong.” They’re useful for things like communicating with high-value spies. The Moscow-Washington hotline used them, too. For ordinary computer usage, they’re not particularly practical.
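Mechanically, a one-time pad is nothing more than an XOR of the message with a keystream of the same length. Here is a minimal sketch in Python, using the operating system’s random-number generator as a stand-in for a true physical source, just to show that all the difficulty lives in generating, distributing, and never reusing the pad:

    import os

    def otp_encrypt(plaintext):
        """Return (pad, ciphertext). The pad must be as long as the message,
        truly random, kept secret, and never reused."""
        pad = os.urandom(len(plaintext))  # stand-in for a physical randomness source
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
        return pad, ciphertext

    def otp_decrypt(pad, ciphertext):
        return bytes(c ^ k for c, k in zip(ciphertext, pad))

    pad, ct = otp_encrypt(b"ATTACK AT DAWN")
    assert otp_decrypt(pad, ct) == b"ATTACK AT DAWN"
    # To send a 1 MB file, you first have to share 1 MB of pad securely --
    # which is exactly the key-distribution problem described above.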

I wrote about one-time pads, and their practical insecurity, in 2002:

What a one-time pad system does is take a difficult message security problem—that’s why you need encryption in the first place—and turn it into a just-as-difficult key distribution problem. It’s a “solution” that doesn’t scale well, doesn’t lend itself to mass-market distribution, is singularly ill-suited to computer networks, and just plain doesn’t work.

[…]

One-time pads may be theoretically secure, but they are not secure in a practical sense. They replace a cryptographic problem that we know a lot about solving—how to design secure algorithms—with an implementation problem we have very little hope of solving.

Posted on September 3, 2009 at 5:36 AM94 Comments

The Exaggerated Fears of Cyber-War

Good article, which basically says our policies are based more on fear than on reality.

On cyber-terrorism:

So why is there so much concern about “cyber-terrorism”? Answering a question with a question: who frames the debate? Much of the data are gathered by ultra-secretive government agencies—which need to justify their own existence—and cyber-security companies—which derive commercial benefits from popular anxiety. Journalists do not help. Gloomy scenarios and speculations about cyber-Armageddon draw attention, even if they are relatively short on facts.

Politicians, too, deserve some blame, as they are usually quick to draw parallels between cyber-terrorism and conventional terrorism—often for geopolitical convenience—while glossing over the vast differences that make military metaphors inappropriate. In particular, cyber-terrorism is anonymous, decentralized, and even more detached than ordinary terrorism from physical locations. Cyber-terrorists do not need to hide in caves or failed states; “cyber-squads” typically reside in multiple geographic locations, which tend to be urban and well-connected to the global communications grid. Some might still argue that state sponsorship (or mere toleration) of cyber-terrorism could be treated as casus belli, but we are yet to see a significant instance of cyber-terrorists colluding with governments. All of this makes talk of large-scale retaliation impractical, if not irresponsible, but also understandable if one is trying to attract attention.

Much of the cyber-security problem, then, seems to be exaggerated: the economy is not about to be brought down, data and networks can be secured, and terrorists do not have the upper hand.

On cyber-war:

Putting these complexities aside and focusing just on states, it is important to bear in mind that the cyber-attacks on Estonia and especially Georgia did little damage, particularly when compared to the physical destruction caused by angry mobs in the former and troops in the latter. One argument about the Georgian case is that cyber-attacks played a strategic role by thwarting Georgia’s ability to communicate with the rest of the world and present its case to the international community. This argument both overestimates the Georgian government’s reliance on the Internet and underestimates how much international PR—particularly during wartime—is done by lobbyists and publicity firms based in Washington, Brussels, and London. There is, probably, an argument to be made about the vast psychological effects of cyber-attacks—particularly those that disrupt ordinary economic life. But there is a line between causing inconvenience and causing human suffering, and cyber-attacks have not crossed it yet.

The real risk isn’t cyber-war or cyber-terrorism, it’s cyber-crime.

Posted on September 2, 2009 at 7:40 AM33 Comments

Hacking Swine Flu

Interesting:

So how many bits are in this instance of H1N1? The raw number of bits, by my count, is 26,022; the actual number of coding bits is approximately 25,054—I say approximately because the virus does the equivalent of self-modifying code to create two proteins out of a single gene in some places (pretty interesting stuff actually), so it’s hard to say what counts as code and what counts as incidental non-executing NOP sleds that are required for self-modifying code.

So it takes about 25 kilobits—3.2 kbytes—of data to code for a virus that has a non-trivial chance of killing a human. This is more efficient than a computer virus, such as MyDoom, which rings in at around 22 kbytes.

It’s humbling that I could be killed by 3.2 kbytes of genetic data. Then again, with 850 Mbytes of data in my genome, there’s bound to be an exploit or two.
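The arithmetic checks out, assuming two bits per RNA base and 1 kbyte = 1,024 bytes:

    raw_bits = 26_022
    bases = raw_bits // 2            # about 13,000 nucleotides
    kbytes = raw_bits / 8 / 1024     # about 3.2 kbytes, as quoted above

    print(bases)                  # 13011
    print(round(kbytes, 1))       # 3.2
    print(round(22 / kbytes, 1))  # MyDoom, at ~22 kbytes, is roughly 7x larger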

Posted on September 1, 2009 at 1:13 PM50 Comments

Matthew Weigman

Fascinating story of a 16-year-old blind phone phreaker.

One afternoon, not long after Proulx was swatted, Weigman came home to find his mother talking to what sounded like a middle-aged male. The man introduced himself as Special Agent Allyn Lynd of the FBI’s cyber squad in Dallas, which investigates hacking and other computer crimes. A West Point grad, Lynd had spent 10 years combating phreaks and hackers. Now, with Proulx’s cooperation, he was aiming to take down Stuart Rosoff and the Wrecking Crew—and he wanted Weigman’s help.

Lynd explained that Rosoff, Roberson and other party-liners were being investigated in a swatting conspiracy. Because Weigman was a minor, however, he would not be charged—as long as he cooperated with the authorities. Realizing that this was a chance to turn his life around, Weigman confessed his role in the phone assaults.

Weigman’s auditory skills had always been central to his exploits, the means by which he manipulated the phone system. Now he gave Lynd a first-hand display of his powers. At one point during the visit, Lynd’s cellphone rang. “I can’t talk to you right now,” the agent told the caller. “I’m out doing something.” When he hung up, Weigman turned to him from across the room. “Oh,” the kid asked, “is that Billy Smith from Verizon?”

Lynd was stunned. William Smith was a fraud investigator with Verizon who had been working with him on the swatting case. Weigman not only knew all about the man and his role in the investigation, but he had identified Smith simply by hearing his Southern-accented voice on the cellphone—a sound which would have been inaudible to anyone else in the room. Weigman then shocked Lynd again, rattling off the names of a host of investigators working for other phone companies. Matt, it turned out, had spent weeks identifying phone-company employees, gaining their trust and obtaining confidential information about the FBI investigation against him. Even the phone account in his house, he revealed to Lynd, had been opened under the name of a telephone-company investigator. Lynd had rarely seen anything like it—even from cyber gangs who tried to hack into systems at the White House and the FBI. “Weigman flabbergasted me,” he later testified.

Posted on September 1, 2009 at 6:21 AM32 Comments
