Schneier on Security
A blog covering security and security technology.
July 2009 Archives
Friday Squid Blogging: Spicy Squid on a Stick
Snake Oil Salesman
In cryptography, we've long used the term "snake oil" to refer to crypto systems with good marketing hype and little actual security. It's the phrase I generalized into "security theater."
Well, it turns out that there really is a snake oil salesman.
Eve Ensler on Security
Interesting TED talk by Eve Ensler on security. She doesn't use any of the terms, but in the beginning she's echoing a lot of the current thinking about evolutionary psychology and how it relates to security.
More fearmongering. The headline is "Terrorists could use internet to launch nuclear attack: report." The subhead: "The risk of cyber-terrorism escalating to a nuclear strike is growing daily, according to a study." In the article:
The claims come in a study commissioned by the International Commission on Nuclear Non-proliferation and Disarmament (ICNND), which suggests that under the right circumstances, terrorists could break into computer systems and launch an attack on a nuclear state triggering a catastrophic chain of events that would have a global impact.
Note the weasel words: the study "suggests that under the right circumstances." We're "leaving open the possibility." The report "outlines a number of potential threats and situations" where the bad guys could "make a nuclear attack more likely."
Another New AES Attack
A new and very impressive attack against AES has just been announced.
Over the past couple of months, there have been two (the second blogged about here) new cryptanalysis papers on AES. The attacks presented in the papers are not practical -- they're far too complex, they're related-key attacks, and they're against larger-key versions and not the 128-bit version that most implementations use -- but they are impressive pieces of work all the same.
This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is much more devastating. It is a completely practical attack against ten-round AES-256:
They also describe an attack against 11-round AES-256 that requires 2^70 time -- almost practical.
These new results greatly improve on the Biryukov, Khovratovich, and Nikolic papers mentioned above, and a paper I wrote with six others in 2000, where we describe a related-key attack against 9-round AES-256 (then called Rijndael) in 2^224 time. (This again proves the cryptographer's adage: attacks always get better, they never get worse.)
By any definition of the term, this is a huge result.
There are three reasons not to panic:
Not much comfort there, I agree. But it's what we have.
Cryptography is all about safety margins. If you can break n rounds of a cipher, you design it with 2n or 3n rounds. What we're learning is that the safety margin of AES is much less than previously believed. And while there is no reason to scrap AES in favor of another algorithm, NIST should increase the number of rounds of all three AES variants. At this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds. Or maybe even more; we don't want to be revising the standard again and again.
And for new applications I suggest that people don't use AES-256. AES-128 provides more than enough security margin for the foreseeable future. But if you're already using AES-256, there's no reason to change.
The paper I have is still a draft. It is being circulated among cryptographers, and should be online in a couple of days. I will post the link as soon as I have it.
UPDATED TO ADD (8/3): The paper is public.
Risks of Cloud Computing
Excellent essay by Jonathan Zittrain on the risks of cloud computing:
The cloud, however, comes with real dangers.
Here's me on cloud computing.
iPhone Encryption Useless
Interesting, although I want some more technical details.
...the new iPhone 3GS' encryption feature is "broken" when it comes to protecting sensitive information such as credit card numbers and social-security digits, Zdziarski said.
New Real Estate Scam
Nigerian scammers find homes listed for sale on these public search sites, copy the pictures and listings verbatim, and then post the information onto Craigslist under available housing rentals, without the consent or knowledge of Craigslist, who has been notified.
Large Signs a Security Risk
A large sign saying "United States" at a border crossing was deemed a security risk:
Yet three weeks ago, less than a month after the station opened, workers began prying the big yellow letters off the building's facade on orders from Customs and Border Protection. The plan is to dismantle the rest of the sign this week.
Swiss Security Problem: Storing Gold
Seems like the Swiss may be running out of secure gold storage. If this is true, it's a real security issue. You can't just store the stuff behind normal locks. Building secure gold storage takes time and money.
I am reminded of a related problem the EU had during the transition to the euro: where to store all the bills and coins before the switchover date. There wasn't enough vault space in banks, because the vast majority of currency is in circulation. It's a similar problem, although the EU banks could solve theirs with lots of guards, because it was only a temporary problem.
Tips for Staying Safe Online
This is funny:
Tips for Staying Safe Online
Those must be some pretty nasty attachments.
Base Rate Fallacy
Nice description of the base rate fallacy.
Friday Squid Blogging: Humboldt Squid Invasion
Thousands of jumbo flying squid, aggressive 5-foot-long sea monsters with razor-sharp beaks and toothy tentacles, have invaded the shallow waters off San Diego, spooking scuba divers and washing up dead on beaches.
One diver described how one of the rust-coloured creatures ripped the buoyancy aid and light from her chest, and grabbed her with its tentacles.
...a powerful, outsize squid that features eight snakelike arms lined with suckers full of nasty little teeth, a razor-sharp beak that can rapidly rip flesh into bite-size chunks, and an unrelenting hunger. It's called the Humboldt, or jumbo, squid, and it's not the sort of calamari you're used to forking off your dinner plate. This squid grows to seven feet or more and perhaps a couple hundred pounds. It has a rep as the outlaw biker of the marine world: intelligent and opportunistic, a stone-cold cannibal willing to attack divers with a seemingly deliberate hostility.
That article is a really fun read.
Info on cooking them.
SHA-3 Second Round Candidates Announced
NIST has announced the 14 SHA-3 candidates that have advanced to the second round: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein.
In February, I chose my favorites: Arirang, BLAKE, Blue Midnight Wish, ECHO, Grøstl, Keccak, LANE, Shabal, and Skein. Of the ones NIST eventually chose, I am most surprised to see CubeHash and most surprised not to see LANE.
Social Security Numbers are Not Random
Information about an individual's place and date of birth can be exploited to predict his or her Social Security number (SSN). Using only publicly available information, we observed a correlation between individuals' SSNs and their birth data and found that for younger cohorts the correlation allows statistical inference of private SSNs. The inferences are made possible by the public availability of the Social Security Administration's Death Master File and the widespread accessibility of personal information from multiple sources, such as data brokers or profiles on social networking sites. Our results highlight the unexpected privacy consequences of the complex interactions among multiple data sources in modern information economies and quantify privacy risks associated with information revelation in public forums.
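The core of the inference is simple: SSNs were assigned roughly sequentially within a state, so public (birth date, SSN) pairs from the Death Master File bracket the likely SSN range of anyone born between two known records. Here is a toy sketch of that idea; all the records and numbers below are made up for illustration, not drawn from the paper.

```python
from datetime import date

# Hypothetical DMF-style records for one state: (birth date, SSN), sorted by date.
records = [
    (date(1985, 1, 10), 123451000),
    (date(1985, 3, 2),  123451400),
    (date(1985, 6, 20), 123452100),
]

def predict_ssn_range(birth, records):
    """Return the (low, high) SSN interval implied by the bracketing records."""
    for (d1, s1), (d2, s2) in zip(records, records[1:]):
        if d1 <= birth <= d2:
            return s1, s2
    return None  # birth date falls outside the known records

# Someone born 1985-02-01 in this state likely has an SSN in a ~400-number window:
print(predict_ssn_range(date(1985, 2, 1), records))
```

With enough DMF records, the window shrinks to the point where the last four digits are the only real unknown, which is exactly what makes statistical guessing feasible.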
I don't see any new insecurities here. We already know that Social Security Numbers are not secrets. And anyone who wants to steal a million SSNs is much more likely to break into one of the gazillion databases out there that store them.
The Twitter Attack
Excellent article detailing the Twitter attack.
Mapping Drug Use by Testing Sewer Water
Scientists from Oregon State University, the University of Washington and McGill University partnered with city workers in 96 communities, including Pendleton, Hermiston and Umatilla, to gather samples on one day, March 4, 2008. The scientists then tested the samples for evidence of methamphetamine, cocaine and ecstasy, or MDMA.
These techniques can detect drug usage at individual houses. It's just a matter of where you take your samples.
Verifiable Dismantling of Nuclear Bombs
Cryptography has zero-knowledge proofs, where Alice can prove to Bob that she knows something without revealing it to Bob. Here's something similar from the real world. It's a research project to allow weapons inspectors from one nation to verify the disarming of another nation's nuclear weapons without learning any weapons secrets in the process, such as the amount of nuclear material in the weapon.
"Distributed Security: A New Model of Law Enforcement," Susan W. Brenner and Leo L. Clarke.
It's from 2005, but I've never seen it before.
Friday Squid Blogging: Bottled Water Plus Squid
Only in Japan:
Bandai toy company from Japan has finally realized that bottles of water just aren't cute. As Japan is the cute capital of the world, this just wouldn't do. To fix the problem, they developed these adorable floating squids that can be added to any bottle of water. Thank god for Japanese innovation. Of course, they're only available in Japan, but at least they're affordable at only $6 each.
Pepper Spray–Equipped ATMs
South Africa takes its security seriously. Here's an ATM that automatically squirts pepper spray into the face of "people tampering with the card slots."
Sounds cool, but these kinds of things are all about false positives:
But the mechanism backfired in one incident last week when pepper spray was inadvertently inhaled by three technicians who required treatment from paramedics.
Privacy Salience and Social Networking Sites
Reassuring people about privacy makes them more, not less, concerned. It's called "privacy salience," and Leslie John, Alessandro Acquisti, and George Loewenstein -- all at Carnegie Mellon University -- demonstrated this in a series of clever experiments. In one, subjects completed an online survey consisting of a series of questions about their academic behavior -- "Have you ever cheated on an exam?" for example. Half of the subjects were first required to sign a consent warning -- designed to make privacy concerns more salient -- while the other half did not. Also, subjects were randomly assigned to receive either a privacy confidentiality assurance, or no such assurance. When the privacy concern was made salient (through the consent warning), people reacted negatively to the subsequent confidentiality assurance and were less likely to reveal personal information.
In another experiment, subjects completed an online survey where they were asked a series of personal questions, such as "Have you ever tried cocaine?" Half of the subjects completed a frivolous-looking survey -- "How BAD are U??" -- with a picture of a cute devil. The other half completed the same survey with the title "Carnegie Mellon University Survey of Ethical Standards," complete with a university seal and official privacy assurances. The results showed that people who were reminded about privacy were less likely to reveal personal information than those who were not.
Privacy salience does a lot to explain social networking sites and their attitudes towards privacy. From a business perspective, social networking sites don't want their members to exercise their privacy rights very much. They want members to be comfortable disclosing a lot of data about themselves.
Joseph Bonneau and Soeren Preibusch of Cambridge University have been studying privacy on 45 popular social networking sites around the world. (You may not have realized that there are 45 popular social networking sites around the world.) They found that privacy settings were often confusing and hard to access; Facebook, with its 61 privacy settings, is the worst. To understand some of the settings, they had to create accounts with different settings so they could compare the results. Privacy tends to increase with the age and popularity of a site. General-use sites tend to have more privacy features than niche sites.
But their most interesting finding was that sites consistently hide any mentions of privacy. Their splash pages talk about connecting with friends, meeting new people, sharing pictures: the benefits of disclosing personal data.
It's the Carnegie Mellon experimental result in the real world. Users care about privacy, but don't really think about it day to day. The social networking sites don't want to remind users about privacy, even if they talk about it positively, because any reminder will result in users remembering their privacy fears and becoming more cautious about sharing personal data. But the sites also need to reassure those "privacy fundamentalists" for whom privacy is always salient, so they have very strong pro-privacy rhetoric for those who take the time to search them out. The two different marketing messages are for two different audiences.
Social networking sites are improving their privacy controls as a result of public pressure. At the same time, there is a counterbalancing business pressure to decrease privacy; watch what's going on right now on Facebook, for example. Naively, we should expect companies to make their privacy policies clear to allow customers to make an informed choice. But the marketing need to reduce privacy salience will frustrate market solutions to improve privacy; sites would much rather obfuscate the issue than compete on it as a feature.
This essay originally appeared in the Guardian.
Laptop Security while Crossing Borders
Last year, I wrote about the increasing propensity for governments, including the U.S. and Great Britain, to search the contents of people's laptops at customs. What we know is still based on anecdote, as no country has clarified the rules about what their customs officers are and are not allowed to do, and what rights people have.
Companies and individuals have dealt with this problem in several ways, from keeping sensitive data off laptops traveling internationally, to storing the data -- encrypted, of course -- on websites and then downloading it at the destination. I have never liked either solution. I do a lot of work on the road, and need to carry all sorts of data with me all the time. It's a lot of data, and downloading it can take a long time. Also, I like to work on long international flights.
There's another solution, one that works with whole-disk encryption products like PGP Disk (I'm on PGP's advisory board), TrueCrypt, and BitLocker: Encrypt the data to a key you don't know.
It sounds crazy, but stay with me. Caveat: Don't try this at home if you're not very familiar with whatever encryption product you're using. Failure results in a bricked computer. Don't blame me.
Step One: Before you board your plane, add another key to your whole-disk encryption (it'll probably mean adding another "user") -- and make it random. By "random," I mean really random: Pound the keyboard for a while, like a monkey trying to write Shakespeare. Don't make it memorable. Don't even try to memorize it.
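If you'd rather not trust keyboard-pounding to be random, a few lines of code will generate a genuinely random passphrase. This is just one sketch using Python's CSPRNG; any cryptographically secure random generator will do.

```python
import secrets
import string

# ~256 bits of entropy: 43 characters drawn from a 64-symbol alphabet (6 bits each).
alphabet = string.ascii_letters + string.digits + "+/"
key = "".join(secrets.choice(alphabet) for _ in range(43))
print(key)  # paste this as the new "user" passphrase -- then never look at it again
```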
Technically, this key doesn't directly encrypt your hard drive. Instead, it encrypts the key that is used to encrypt your hard drive -- that's how the software allows multiple users.
So now there are two different users named with two different keys: the one you normally use, and some random one you just invented.
Step Two: Send that new random key to someone you trust. Make sure the trusted recipient has it, and make sure it works. You won't be able to recover your hard drive without it.
Step Three: Burn, shred, delete or otherwise destroy all copies of that new random key. Forget it. If it was sufficiently random and non-memorable, this should be easy.
Step Four: Board your plane normally and use your computer for the whole flight.
Step Five: Before you land, delete the key you normally use.
At this point, you will not be able to boot your computer. The only key remaining is the one you forgot in Step Three. There's no need to lie to the customs official; you can even show him a copy of this article if he doesn't believe you.
Step Six: When you're safely through customs, get that random key back from your confidant, boot your computer and re-add the key you normally use to access your hard drive.
And that's it.
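The whole protocol can be modeled in a few lines. This is a toy model of the key-slot scheme, NOT real cryptography: a SHA-256-derived XOR pad stands in for the proper key-wrapping that products like PGP Disk, TrueCrypt, and BitLocker actually perform, and the slot names are invented for illustration.

```python
import hashlib
import secrets

def wrap(disk_key, passphrase):
    # Toy key wrap: XOR the disk key with a hash of the passphrase.
    pad = hashlib.sha256(passphrase.encode()).digest()
    return bytes(a ^ b for a, b in zip(disk_key, pad))

def unwrap(slot, passphrase):
    return wrap(slot, passphrase)  # XOR is its own inverse

disk_key = secrets.token_bytes(32)  # the key that actually encrypts the disk
slots = {"me": wrap(disk_key, "my normal passphrase")}

# Step One: add a random key you never memorize.
random_pass = secrets.token_urlsafe(32)
slots["escrow"] = wrap(disk_key, random_pass)
# Steps Two and Three: send random_pass to your confidant, destroy your copy.

# Step Five: before landing, delete your normal slot.
del slots["me"]

# The disk key is now recoverable only with the confidant's copy:
assert unwrap(slots["escrow"], random_pass) == disk_key
```

The point the model makes concrete: deleting a slot doesn't re-encrypt the disk, it just removes one wrapped copy of the disk key, which is why adding and deleting "users" is fast.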
This is by no means a magic get-through-customs-easily card. Your computer might be impounded, and you might be taken to court and compelled to reveal who has the random key.
But the purpose of this protocol isn't to prevent all that; it's just to deny any possible access to your computer to customs. You might be delayed. You might have your computer seized. (This will cost you any work you did on the flight, but -- honestly -- at that point that's the least of your troubles.) You might be turned back or sent home. But when you're back home, you have access to your corporate management, your personal attorneys, your wits after a good night's sleep, and all the rights you normally have in whatever country you're now in.
This procedure not only protects you against the warrantless search of your data at the border, it also allows you to deny a customs official your data without having to lie or pretend -- which itself is often a crime.
Now the big question: Who should you send that random key to?
Certainly it should be someone you trust, but -- more importantly -- it should be someone with whom you have a privileged relationship. Depending on the laws in your country, this could be your spouse, your attorney, your business partner or your priest. In a larger company, the IT department could institutionalize this as a policy, with the help desk acting as the key holder.
You could also send it to yourself, but be careful. You don't want to e-mail it to your webmail account, because then you'd be lying when you tell the customs official that there is no possible way you can decrypt the drive.
You could put the key on a USB drive and send it to your destination, but there are potential failure modes. It could fail to get there in time to be waiting for your arrival, or it might not get there at all. You could airmail the drive with the key on it to yourself a couple of times, in a couple of different ways, and also fax the key to yourself ... but that's more work than I want to do when I'm traveling.
If you only care about the return trip, you can set it up before you return. Or you can set up an elaborate one-time pad system, with identical lists of keys with you and at home: Destroy each key on the list you have with you as you use it.
Remember that you'll need to have full-disk encryption, using a product such as PGP Disk, TrueCrypt or BitLocker, already installed and enabled to make this work.
I don't think we'll ever get to the point where our computer data is safe when crossing an international border. Even if countries like the U.S. and Britain clarify their rules and institute privacy protections, there will always be other countries that will exercise greater latitude with their authority. And sometimes protecting your data means protecting your data from yourself.
This essay originally appeared on Wired.com.
Data Leakage Through Power Lines
The NSA has known about this for decades:
Security researchers found that poor shielding on some keyboard cables means useful data can be leaked about each character typed.
Poor Man's Steganography
Hide files inside pdf documents: "embed a file in a PDF document and corrupt the reference, thereby effectively making the embedded file invisible to the PDF reader."
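A rough sketch of the family of tricks this belongs to: append an indirect stream object to a PDF but never reference it from the document's object tree, so conforming readers simply ignore it. (This is a simpler cousin of the corrupt-the-reference technique described; the object number 999 and the /HiddenPayload marker below are arbitrary choices of mine.)

```python
def hide_in_pdf(pdf_bytes, payload, marker=b"/HiddenPayload"):
    # Build an unreferenced stream object and splice it in just before %%EOF.
    obj = (b"999 0 obj\n<< " + marker + b" true /Length " +
           str(len(payload)).encode() + b" >>\nstream\n" + payload +
           b"\nendstream\nendobj\n")
    eof = pdf_bytes.rfind(b"%%EOF")
    return pdf_bytes[:eof] + obj + pdf_bytes[eof:]

def extract_from_pdf(pdf_bytes, marker=b"/HiddenPayload"):
    # Recover the payload by scanning for the marker's stream body.
    start = pdf_bytes.find(b"stream\n", pdf_bytes.find(marker)) + len(b"stream\n")
    return pdf_bytes[start:pdf_bytes.find(b"\nendstream", start)]
```

Anyone inspecting the raw bytes will find it immediately, of course; the "invisibility" is only to the PDF reader's rendering, which is the whole point of the poor-man's label.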
Gaze Tracking Software Protecting Privacy
Interesting use of gaze tracking software to protect privacy:
Chameleon uses gaze-tracking software and camera equipment to track an authorized reader's eyes to show only that one person the correct text. After a 15-second calibration period in which the software essentially "learns" the viewer's gaze patterns, anyone looking over that user's shoulder sees dummy text that randomly and constantly changes.
How effective this is will mostly be a usability problem, but I like the idea of a system detecting if anyone else is looking at my screen.
EDITED TO ADD (7/14): A demo.
North Korean Cyberattacks
To hear the media tell it, the United States suffered a major cyberattack last week. Stories were everywhere. "Cyber Blitz hits U.S., Korea" was the headline in Thursday's Wall Street Journal. North Korea was blamed.
Where were you when North Korea attacked America? Did you feel the fury of North Korea's armies? Were you fearful for your country? Or did your resolve strengthen, knowing that we would defend our homeland bravely and valiantly?
My guess is that you didn't even notice, that -- if you didn't open a newspaper or read a news website -- you had no idea anything was happening. Sure, a few government websites were knocked out, but that's not alarming or even uncommon. Other government websites were attacked but defended themselves, the sort of thing that happens all the time. If this is what an international cyberattack looks like, it hardly seems worth worrying about at all.
Politically motivated cyber attacks are nothing new. We've seen UK vs. Ireland. Israel vs. the Arab states. Russia vs. several former Soviet Republics. India vs. Pakistan, especially after the nuclear bomb tests in 1998. China vs. the United States, especially in 2001 when a U.S. spy plane collided with a Chinese fighter jet. And so on and so on.
The big one happened in 2007, when the government of Estonia was attacked in cyberspace following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial. The networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and -- in many cases -- shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.
It was hyped as the first cyberwar, but after two years there is still no evidence that the Russian government was involved. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were angry over the statue incident.
Poke at any of these international incidents, and what you find are kids playing politics. Last Wednesday, South Korea's National Intelligence Service admitted that it didn't actually know that North Korea was behind the attacks: "North Korea or North Korean sympathizers in the South" was what it said. Once again, it'll be kids playing politics.
This isn't to say that cyberattacks by governments aren't an issue, or that cyberwar is something to be ignored. The constant attacks by Chinese nationals against U.S. networks may not be government-sponsored, but it's pretty clear that they're tacitly government-approved. Criminals, from lone hackers to organized crime syndicates, attack networks all the time. And war expands to fill every possible theater: land, sea, air, space, and now cyberspace. But cyberterrorism is nothing more than a media invention designed to scare people. And for there to be a cyberwar, there first needs to be a war.
Israel is currently considering attacking Iran in cyberspace, for example. If it tries, it'll discover that attacking computer networks is an inconvenience to the nuclear facilities it's targeting, but doesn't begin to substitute for bombing them.
This is the face of cyberwar: easily preventable attacks that, even when they succeed, only a few people notice. Even this current incident is turning out to be a sloppily modified five-year-old worm that no modern network should still be vulnerable to.
Securing our networks doesn't require some secret advanced NSA technology. It's the boring network security administration stuff we already know how to do: keep your patches up to date, install good anti-malware software, correctly configure your firewalls and intrusion-detection systems, monitor your networks. And while some government and corporate networks do a pretty good job at this, others fail again and again.
This essay originally appeared on the Minnesota Public Radio website.
Strong Web Passwords
Interesting paper from HotSec '07: "Do Strong Web Passwords Accomplish Anything?" by Dinei Florêncio, Cormac Herley, and Baris Coskun.
ABSTRACT: We find that traditional password advice given to users is somewhat dated. Strong passwords do nothing to protect online users from password stealing attacks such as phishing and keylogging, and yet they place considerable burden on users. Passwords that are too weak of course invite brute-force attacks. However, we find that relatively weak passwords, about 20 bits or so, are sufficient to make brute-force attacks on a single account unrealistic so long as a "three strikes" type rule is in place. Above that minimum it appears that increasing password strength does little to address any real threat. If a larger credential space is needed it appears better to increase the strength of the user IDs rather than the passwords. For large institutions this is just as effective in deterring bulk guessing attacks and is a great deal better for users. For small institutions there appears little reason to require strong passwords for online accounts.
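The 20-bit claim is easy to sanity-check with back-of-the-envelope arithmetic: with a three-strikes lockout, an online guesser gets only a handful of tries per account.

```python
# How likely is an online guesser to hit a ~20-bit password under a
# three-strikes lockout? (Assumes guesses are uniform over the space.)
guesses_allowed = 3
password_bits = 20
space = 2 ** password_bits            # 1,048,576 possible passwords

p_success = guesses_allowed / space
print(f"P(success per account) = {p_success:.2e}")   # about 2.86e-06

# Even spraying guesses across 100 accounts, the attacker expects
# only ~0.0003 compromises:
print(100 * p_success)
```

That's why the paper argues the real threats are phishing and keylogging, against which extra password entropy buys nothing.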
Friday Squid Blogging: Humboldt Squid Caught Off Seattle
They're still moving North.
Lost Suitcases in Airport Restrooms
Want to cause chaos at an airport? Leave a suitcase in the restroom:
Three incoming flights from London were cancelled and about 150 others were delayed for up to three hours, while the army's bomb squad carried out its investigation, before giving the all-clear at about 5pm.
Oddest quote is from a police spokesperson:
"Inquiries are under way to establish how the luggage came to be located within the toilets."
My guess is that someone left it there.
I'd suggest this as a good denial-of-service attack, but certainly there is a video camera recording of the person bringing the suitcase into the airport. The article says it was left in the "domestic arrivals area." I don't know if that's inside airport security or not.
Making an Operating System Virus Free
Commenting on Google's claim that Chrome was designed to be virus-free, I said:
Bruce Schneier, the chief security technology officer at BT, scoffed at Google's promise. "It's an idiotic claim," Schneier wrote in an e-mail. "It was mathematically proved decades ago that it is impossible -- not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible -- to create an operating system that is immune to viruses."
What I was referring to, although I couldn't think of his name at the time, was Fred Cohen's 1986 Ph.D. thesis where he proved that it was impossible to create a virus-checking program that was perfect. That is, it is always possible to write a virus that any virus-checking program will not detect.
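Cohen's result rests on a diagonal argument, which can be sketched in a few lines: given any claimed perfect detector, construct a program that asks the detector about itself and does the opposite of the verdict, so the detector is necessarily wrong about that program. The two stand-in detectors below are just illustrations.

```python
def make_adversary(detector):
    # Build a program that inverts whatever the detector predicts about it.
    def program():
        if detector(program):
            return "benign"   # flagged as a virus, so it behaves harmlessly
        return "spread"       # declared clean, so it acts as a virus
    return program

# No matter what the detector says, its verdict contradicts the behavior:
for detector in (lambda prog: True, lambda prog: False):
    prog = make_adversary(detector)
    assert detector(prog) == (prog() == "benign")  # verdict is always wrong
```

The real proof works over programs-as-data rather than closures, but the self-reference trick is the same one Cohen used.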
This reaction to my comment is accurate:
That seems to us like he's picking on the semantics of Google's statement just a bit. Google says that users "won't have to deal with viruses," and Schneier is noting that it's simply not possible to create an OS that can't be taken down by malware. While that may be the case, it's likely that Chrome OS is going to be arguably more secure than the other consumer operating systems currently in use today. In fact, we didn't take Google's statement to mean that Chrome OS couldn't get a virus EVER; we just figured they meant it was a lot harder to get one on their new OS - didn't you?
When I said that I had not seen Google's statement. I was responding to what the reporter was telling me on the phone. So yes, I jumped on the reporter's claim about Google's claim. I did try to temper my comment:
Redesigning an operating system from scratch, "[taking] security into account all the way up and down," could make for a more secure OS than ones that have been developed so far, Schneier said. But that's different from Google's promise that users won't have to deal with viruses or malware, he added.
To summarize, there is a lot that can be done in an OS to reduce the threat of viruses and other malware. If the Chrome team started from scratch and took security seriously all through the design and development process, they have the potential to develop something really secure. But I don't know if they did.
NSA Building Massive Data Center in Utah
The years-in-the-making project, which may cost billions over time, got a $181 million start last week when President Obama signed a war spending bill in which Congress agreed to pay for primary construction, power access and security infrastructure. The enormous building, which will have a footprint about three times the size of the Utah State Capitol building, will be constructed on a 200-acre site near the Utah National Guard facility's runway.
The ATM Vulnerability You Won't Hear About
The talk has been pulled from the BlackHat conference:
Barnaby Jack, a researcher with Juniper Networks, was to present a demonstration showing how he could jackpot a popular ATM brand by exploiting a vulnerability in its software.
"The vulnerability Barnaby was to discuss has far reaching consequences, not only to the affected ATM vendor, but to other ATM vendors and--ultimately--the public," wrote Brendan Lewis, director of corporate social media relations for Juniper in a statement posted to the company's official blog last week. "To publicly disclose the research findings before the affected vendor could properly mitigate the exposure would have potentially placed their customers at risk. That is something we don't want to see happen."
Homomorphic Encryption Breakthrough
Last month, IBM made some pretty brash claims about homomorphic encryption and the future of security. I hate to be the one to throw cold water on the whole thing -- as cool as the new discovery is -- but it's important to separate the theoretical from the practical.
Homomorphic cryptosystems are ones where mathematical operations on the ciphertext have regular effects on the plaintext. A normal symmetric cipher -- DES, AES, or whatever -- is not homomorphic. Assume you have a plaintext P, and you encrypt it with AES to get a corresponding ciphertext C. If you multiply that ciphertext by 2, and then decrypt 2C, you get random gibberish instead of P. If you got something else, like 2P, that would imply some pretty strong nonrandomness properties of AES and no one would trust its security.
The RSA algorithm is different. Encrypt P to get C, multiply C by the encryption of 2, and then decrypt the product -- and you get 2P. That's a homomorphism: perform a mathematical operation on the ciphertexts, and that operation is reflected in the plaintext. The RSA algorithm is homomorphic with respect to multiplication, something that has to be taken into account when evaluating the security of a security system that uses RSA.
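The multiplicative homomorphism is easy to demonstrate with textbook RSA and deliberately tiny, insecure numbers (real deployments add padding such as OAEP, which destroys this property):

```python
# Textbook-RSA demo: Enc(a) * Enc(b) mod n decrypts to a * b,
# so multiplying a ciphertext by Enc(2) doubles the plaintext.
p, q, e = 61, 53, 17
n = p * q                            # 3233
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (2753)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

P = 5
C = enc(P)
doubled = (C * enc(2)) % n    # operate on ciphertexts only...
assert dec(doubled) == 2 * P  # ...and the plaintext was doubled
```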
This isn't anything new. RSA's homomorphism was known in the 1970s, and other algorithms that are homomorphic with respect to addition have been known since the 1980s. But what has eluded cryptographers is a fully homomorphic cryptosystem: one that is homomorphic under both addition and multiplication and yet still secure. And that's what IBM researcher Craig Gentry has discovered.
This is a bigger deal than might appear at first glance. Any computation can be expressed as a Boolean circuit: a series of additions and multiplications. Your computer consists of a zillion Boolean circuits, and you can run programs to do anything on your computer. This algorithm means you can perform arbitrary computations on homomorphically encrypted data. More concretely: if you encrypt data in a fully homomorphic cryptosystem, you can ship that encrypted data to an untrusted person and that person can perform arbitrary computations on that data without being able to decrypt the data itself. Imagine what that would mean for cloud computing, or any outsourcing infrastructure: you no longer have to trust the outsourcer with the data.
Unfortunately -- you knew that was coming, right? -- Gentry’s scheme is completely impractical. It uses something called an ideal lattice as the basis for the encryption scheme, and both the size of the ciphertext and the complexity of the encryption and decryption operations grow enormously with the number of operations you need to perform on the ciphertext -- and that number needs to be fixed in advance. And converting a computer program, even a simple one, into a Boolean circuit requires an enormous number of operations. These aren't impracticalities that can be solved with some clever optimization techniques and a few turns of Moore's Law; this is an inherent limitation in the algorithm. In one article, Gentry estimates that performing a Google search with encrypted keywords -- a perfectly reasonable simple application of this algorithm -- would increase the amount of computing time by about a trillion. Moore’s law calculates that it would be 40 years before that homomorphic search would be as efficient as a search today, and I think he’s being optimistic with even this most simple of examples.
Despite this, IBM’s PR machine has been in overdrive about the discovery. Its press release makes it sound like this new homomorphic scheme is going to rewrite the business of computing: not just cloud computing, but "enabling filters to identify spam, even in encrypted email, or protecting information contained in electronic medical records." Maybe someday, but not in my lifetime.
This is not to take anything away from Gentry or his discovery. Visions of a fully homomorphic cryptosystem have been dancing in cryptographers' heads for thirty years. I never expected to see one. It will be years before a sufficient number of cryptographers examine the algorithm that we can have any confidence that the scheme is secure, but -- practicality be damned -- this is an amazing piece of work.
Spanish Police Foil Remote-Controlled Zeppelin Jailbreak
Sometimes movie plots actually happen:
...three people have been arrested after police discovered their plan to free a drug trafficker from an island prison using a 13-foot airship carrying night goggles, climbing gear and camouflage paint.
EDITED TO ADD (7/14): Another story, with more detail.
Court Limits on TSA Searches
This is good news:
A federal judge in June threw out seizure of three fake passports from a traveler, saying that TSA screeners violated his Fourth Amendment rights against unreasonable search and seizure. Congress authorizes TSA to search travelers for weapons and explosives; beyond that, the agency is overstepping its bounds, U.S. District Court Judge Algenon L. Marbley said.
I wrote about this a couple of weeks ago:
...Obama should mandate that airport security be solely about terrorism, and not a general-purpose security checkpoint to catch everyone from pot smokers to deadbeat dads.
Why People Don't Understand Risks
Yesterday's Minneapolis Star Tribune had the front-page headline: "Co-sleeping kills about 20 infants each year." (The headline in the web article is different.) The only problem is, in either case, there's no additional information with which to make sense of the statistic.
How many infants don't die each year? How many infants die each year in separate beds? Is the death rate for co-sleepers greater or less than the death rate for separate-bed sleepers? Without this information, it's impossible to know whether this statistic is good or bad.
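A tiny worked example, with entirely hypothetical numbers (the article supplies none), shows why the raw count of 20 deaths is meaningless without denominators:

```python
# Hypothetical numbers just to illustrate the base-rate point -- these
# are NOT real statistics about infant sleeping.
co_deaths, co_sleeping_infants = 20, 100_000
sep_deaths, sep_sleeping_infants = 80, 800_000

co_rate = co_deaths / co_sleeping_infants     # 2.0 per 10,000
sep_rate = sep_deaths / sep_sleeping_infants  # 1.0 per 10,000
# With these made-up denominators, co-sleeping looks riskier even
# though it accounts for fewer total deaths; flip the denominators and
# the conclusion flips too. The count alone tells you nothing.
```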
But the media rarely provides context for the data. The story is in the aftermath of an incident where a baby was accidentally smothered in his sleep.
Oh, and that 20-infants-per-year number is for Minnesota only. No word as to whether the situation is better or worse in other states.
More Low-Tech Security Solutions
Anti-theft lunch bags, for those who have a problem with their lunches being stolen.
Only works until the thief figures it out, though.
Pocketless Trousers to Protect Against Bribery
I wonder if it will work.
Nepal's anti-corruption authority has come up with a novel solution to rampant bribe-taking at the country's only international airport -- the pocketless trouser.
Terrorist Risk of Cloud Computing
I don't even know where to begin on this one:
As we have seen in the past with other technologies, while cloud resources will likely start out decentralized, as time goes by and economies of scale take hold, they will start to collect into mega-technology hubs. These hubs could, at the end of this cycle, number in the low single digits and carry most of the commerce and data for a nation like ours. Elsewhere, particularly in Europe, those hubs could handle several nations' public and private data.
It's only been eight years, and this author thinks that the 9/11 attacks "took down a major portion of the U.S. infrastructure." That's just plain ridiculous. I was there (in the U.S., not in New York). The government, the banks, the power system, commerce everywhere except lower Manhattan, the Internet, the water supply, the food supply, and every other part of the U.S. infrastructure I can think of worked just fine during and after the attacks. The New York Stock Exchange was up and running in a few days. Even the piece of our infrastructure that was the most disrupted -- the airplane network -- was up and running in a week. I think the author of that piece needs to travel to somewhere on the planet where major portions of the infrastructure actually get disrupted, so he can see what it's like.
No less ridiculous is the main point of the article, which seems to imply that terrorists will someday decide that disrupting people's Lands' End purchases will be more attractive than killing them. Okay, that was a caricature of the article, but not by much. Terrorism is an attack against our minds, using random death and destruction as a tactic to cause terror in everyone. To even suggest that data disruption would cause more terror than nuclear fallout completely misunderstands terrorism and terrorists.
And anyway, any e-commerce, banking, etc. site worth anything is backed up and dual-homed. There are lots of risks to our data networks, but physically blowing up a data center isn't high on the list.
Friday Squid Blogging: Office Squid
The Pros and Cons of Password Masking
Usability guru Jakob Nielsen opened up a can of worms when he made the case for unmasking passwords in his blog. I chimed in that I agreed. Almost 165 comments on my blog (and several articles, essays, and many other blog posts) later, the consensus is that we were wrong.
I was certainly too glib. Like any security countermeasure, password masking has value. But like any countermeasure, password masking is not a panacea. And the costs of password masking need to be balanced with the benefits.
The cost is accuracy. When users don't get visual feedback from what they're typing, they're more prone to make mistakes. This is especially true with character strings that have non-standard characters and capitalization. This has several ancillary costs:
The benefits of password masking are more obvious:
I believe that shoulder surfing isn't nearly the problem it's made out to be. One, lots of people use their computers in private, with no one looking over their shoulders. Two, personal handheld devices are used very close to the body, making shoulder surfing all that much harder. Three, it's hard to quickly and accurately memorize a random non-alphanumeric string that flashes on the screen for a second or so.
This is not to say that shoulder surfing isn't a threat. It is. And, as many readers pointed out, password masking is one of the reasons it isn't more of a threat. And the threat is greater for those who are not fluent computer users: slow typists and people who are likely to choose bad passwords. But I believe that the risks are overstated.
Password masking is definitely important on public terminals with short PINs. (I'm thinking of ATMs.) The value of the PIN is large, shoulder surfing is more common, and a four-digit PIN is easy to remember in any case.
And lastly, this problem largely disappears on the Internet on your personal computer. Most browsers include the ability to save and then automatically populate password fields, making the usability problem go away at the expense of another security problem (the security of the password becomes the security of the computer). There's a Firefox plugin that gets rid of password masking. And programs like my own Password Safe allow passwords to be cut and pasted into applications, also eliminating the usability problem.
One approach is to make it a configurable option. High-risk banking applications could turn password masking on by default; other applications could turn it off by default. Browsers in public locations could turn it on by default. I like this, but it complicates the user interface.
A reader mentioned BlackBerry's solution, which is to display each character briefly before masking it; that seems like an excellent compromise.
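The idea is simple enough to sketch: mask everything except the most recently typed character. (This is a hypothetical helper to illustrate the display logic, not any real BlackBerry API; a real implementation would also re-mask the last character after a short delay.)

```python
# BlackBerry-style compromise: show only the most recent character.
def masked_display(typed, reveal_last=True):
    if not typed:
        return ""
    if reveal_last:
        return "*" * (len(typed) - 1) + typed[-1]
    return "*" * len(typed)
```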
I, for one, would like the option. I cannot type complicated WEP keys into Windows -- twice! What's the deal with that? -- without making mistakes. I cannot type my rarely used and very complicated PGP keys without making a mistake unless I turn off password masking. That's what I was reacting to when I said "I agree."
So was I wrong? Maybe. Okay, probably. Password masking definitely improves security; many readers pointed out that they regularly use their computer in crowded environments, and rely on password masking to protect their passwords. On the other hand, password masking reduces accuracy and makes it less likely that users will choose secure and hard-to-remember passwords. I will concede that the password masking trade-off is more beneficial than I thought in my snap reaction, but also that the answer is not nearly as obvious as we have historically assumed.
The Insecurity of Secrecy
Good essay -- "The Staggering Cost of Playing it 'Safe'" -- about the political motivations for terrorist security policy.
Senator Barbara Boxer has led an effort to at least put together a public database of ash storage sites so that people can judge the risk to the areas where they live. However, even this effort has been blocked not by coal companies or utilities, but by the DHS. How could it possibly be a national security interest to cover up the location of material that's "not toxic or anything?" It's not. In fact, even if the ash turns out to be as bad as its worst critics fear, blocking the database is far more dangerous than revealing the location of these sites. Not only has there not been any threat against these sites by terrorists, and no workable scenario by which they might cause a problem, coal slurry impoundments are already failing with regularity, dousing parts of America with millions of gallons of this material. It doesn't take terrorists to make this happen.
Information Leakage from Keypads
Can anyone guess the entry codes for these door locks?
There are 10,000 possible four-digit codes, but you only have to try 24 on these keypads. The first is most likely 1986 or 1968. The second is almost certainly 1234.
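Where the 24 comes from: if wear marks reveal which four keys are used, and the code uses four distinct digits, only 4! = 24 orderings remain. A quick sketch (the worn digits here are an assumption for illustration):

```python
from itertools import permutations

# Wear marks narrow 10,000 four-digit codes down to 4! = 24 orderings.
worn_keys = "1689"  # hypothetical worn digits on the first lock
candidates = {"".join(p) for p in permutations(worn_keys)}
# len(candidates) == 24; "1986" and "1968" are both in the set
```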
More Security Countermeasures from the Natural World
The plant Caladium steudneriifolium pretends to be ill so mining moths won't eat it.
She believes that the plant essentially fakes being ill, producing variegated leaves that mimic those that have already been damaged by mining moth larvae. That deters the moths from laying any further larvae on the leaves, as the insects assume the previous caterpillars have already eaten most of the leaves' nutrients.
Cabbage aphids arm themselves with chemical bombs:
Its body carries two reactive chemicals that only mix when a predator attacks it. The injured aphid dies. But in the process, the chemicals in its body react and trigger an explosion that delivers lethal amounts of poison to the predator, saving the rest of the colony.
M. melanotarsa is a jumping spider that protects itself from predators (like other jumping spiders) by resembling an ant. Earlier this month, Ximena Nelson and Robert Jackson showed that they bolster this illusion by living in silken apartment complexes and travelling in groups, mimicking not just the bodies of ants but their social lives too.
My previous post about security stories from the insect world.
MD6 Withdrawn from SHA-3 Competition
We suggest that MD6 is not yet ready for the next SHA-3 round, and we also provide some suggestions for NIST as the contest moves forward.
Basically, the issue is that in order for MD6 to be fast enough to be competitive, the designers have to reduce the number of rounds down to 30-40, and at those rounds, the algorithm loses its proofs of resistance to differential attacks.
Thus, while MD6 appears to be a robust and secure cryptographic hash algorithm, and has much merit for multi-core processors, our inability to provide a proof of security for a reduced-round (and possibly tweaked) version of MD6 against differential attacks suggests that MD6 is not ready for consideration for the next SHA-3 round.
EDITED TO ADD (7/1): This is a very classy withdrawal, as we expect from Ron Rivest -- especially given the fact that there are no attacks on it, while other algorithms have been seriously broken and their submitters keep trying to pretend that no one has noticed.
EDITED TO ADD (7/6): From the MD6 website:
We are not withdrawing our submission; NIST is free to select MD6 for further consideration in the next round if it wishes. But at this point MD6 doesn't meet our own standards for what we believe should be required of a SHA-3 candidate, and we suggest that NIST might do better looking elsewhere. In particular, we feel that a minimum "ticket of admission" for SHA-3 consideration should be a proof of resistance to basic differential attacks, and we don't know how to make such a proof for a reduced-round MD6.
New Attack on AES
There's a new cryptanalytic attack on AES that is better than brute force:
Abstract. In this paper we present two related-key attacks on the full AES. For AES-256 we show the first key recovery attack that works for all the keys and has complexity 2^119, while the recent attack by Biryukov-Khovratovich-Nikolic works for a weak key class and has higher complexity. The second attack is the first cryptanalysis of the full AES-192. Both our attacks are boomerang attacks, which are based on the recent idea of finding local collisions in block ciphers and enhanced with the boomerang switching techniques to gain free rounds in the middle.
In an e-mail, the authors wrote:
We also expect that a careful analysis may reduce the complexities. As a preliminary result, we think that the complexity of the attack on AES-256 can be lowered from 2^119 to about 2^110.5 data and time.
Agreed. While this attack is better than brute force -- and some cryptographers will describe the algorithm as "broken" because of it -- it is still far, far beyond our capabilities of computation. The attack is, and probably forever will be, theoretical. But remember: attacks always get better, they never get worse. Others will continue to improve on these numbers. While there's no reason to panic, no reason to stop using AES, no reason to insist that NIST choose another encryption standard, this will certainly be a problem for some of the AES-based SHA-3 candidate hash functions.
EDITED TO ADD (7/14): An FAQ.
Security, Group Size, and the Human Brain
If the size of your company grows past 150 people, it's time to get name badges. It's not that larger groups are somehow less secure, it's just that 150 is the cognitive limit to the number of people a human brain can maintain a coherent social relationship with.
An edited version of this essay, without links, appeared in the July/August 2009 issue of IEEE Security & Privacy.