Schneier on Security
A blog covering security and security technology.
August 2011 Archives
This job can't be fun:
This Public Affairs Specialist position is located in the Office of Strategic Communications and Public Affairs (SCPA), Transportation Security Administration (TSA), Department of Homeland Security (DHS). If selected for this position, you will serve as the Press Secretary and senior representative/liaison working with Federal and stakeholder partners. You will utilize your expert knowledge and mastery of advanced public affairs principles, concepts, regulations, practices, analytical methods, and techniques (internet, print, TV, and radio) on a variety of transportation security and TSA related issues.
The posting expires today, so you don't have much time. If you apply for and get the job, please continue to post here under a pseudonym. And if there's a file on how to deal with me, I'd be really interested in seeing a copy.
Social networking sites make it very difficult, if not impossible, to have undercover police officers:
"The results found that 90 per cent of female officers were using social media compared with 81 per cent of males."
There's another side to this issue as well. Social networking sites can help undercover officers with their backstory, by building a fictional history. Some of this might require help from the company that owns the social networking site, but that seems like a reasonable request by the police.
I am in the middle of reading Diego Gambetta's book Codes of the Underworld: How Criminals Communicate. He talks about the lengthy vetting process organized crime uses to vet new members -- often relying on people who knew the person since birth, or people who served time with him in jail -- to protect against police informants. I agree that social networking sites can make undercover work even harder, but it's gotten pretty hard even without that.
It's actually pretty good.
Also note that the site is redesigning its privacy settings. As we learned from Microsoft, nothing motivates a company to improve its security like competition.
We finally have some, even though the company isn't talking:
So just how well crafted was the e-mail that got RSA hacked? Not very, judging by what F-Secure found.
It's hard to know how serious this really is:
The screenshots appear as B-roll footage in the documentary for six seconds -- between 11:04 and 11:10 minutes -- showing custom-built Chinese software apparently launching a cyber-attack against the main website of the Falun Gong spiritual practice, by using a compromised IP address belonging to a United States university. As of Aug. 22 at 1:30pm EDT, in addition to YouTube, the whole documentary is available on the CCTV website.
The industry is in decline:
A generation ago, most of the island's 10,000 residents worked in the squid industry, either as sellers like Kim or as farmer-fishermen who toiled in the fields each winter and went to sea during summer.
As before, use the comments to this post to write about and discuss security stories that don't have their own post.
This is a picture of a pair of wire cutters secured to a table with a wire.
Someone isn't thinking this through....
Nice essay on the problems with talking about cyberspace risks using "Cold War" metaphors:
The problem with threat inflation and misapplied history is that there are extremely serious risks, but also manageable responses, from which they steer us away. Massive, simultaneous, all-encompassing cyberattacks on the power grid, the banking system, transportation networks, etc. along the lines of a Cold War first strike or what Defense Secretary Leon Panetta has called the "next Pearl Harbor" (another overused and ill-suited analogy) would certainly have major consequences, but they also remain completely theoretical, and the nation would recover. In the meantime, a real national security danger is being ignored: the combination of online crime and espionage that's gradually undermining our finances, our know-how and our entrepreneurial edge. While would-be cyber Cold Warriors stare at the sky and wait for it to fall, they're getting their wallets stolen and their offices robbed.
Ross Anderson is the first person I heard comparing today's cybercrime threats to global piracy in the 19th century.
John Mueller and his students analyze the 33 cases of attempted [EDITED TO ADD: Islamic extremist] terrorism in the U.S. since 9/11. So few of them are actually real, and so many of them were created or otherwise facilitated by law enforcement.
The death toll of all these is fourteen: thirteen at Ft. Hood and one in Little Rock. I think it's fair to add to this the 2002 incident at Los Angeles Airport where a lone gunman killed two people at the El Al ticket counter, so that's sixteen deaths in the U.S. to terrorism in the past ten years.
Given the credible estimate that we've spent $1 trillion on anti-terrorism security (this does not include our many foreign wars), that's $62.5 billion per life [EDITED: lost]. Is there any other risk that we are even remotely as crazy about?
Note that everyone who died was shot with a gun. No Islamic extremist has been able to successfully detonate a bomb in the U.S. in the past ten years, not even a Molotov cocktail. (In the U.K. there has been only one successful terrorist bombing in the last ten years: the 2005 London Underground attacks.) And almost all of the 33 incidents (34 if you add LAX) have been lone actors, with no ties to al Qaeda.
I remember the government fear mongering after 9/11. How there were hundreds of sleeper cells in the U.S. How terrorism would become the new normal unless we implemented all sorts of Draconian security measures. You'd think that -- if this were even remotely true -- we would have seen more attempted terrorism in the U.S. over the past decade.
And I think arguments like "the government has secretly stopped lots of plots" don't hold any water. Just look at the list, and remember how the Bush administration would hype even the most tenuous terrorist incident. Stoking fear was the policy. If the government stopped any other plots, they would have made as much of a big deal of them as they did of these 33 incidents.
EDITED TO ADD (8/26): According to the State Department's recent report, fifteen American private citizens died in terrorist attacks in 2010: thirteen in Afghanistan and one each in Iraq and Uganda. Worldwide, 13,186 people died from terrorism in 2010. These numbers pale even in comparison to things that aren't very risky.
Also, look at Table 3 on page 16. The risk of dying in the U.S. from terrorism is substantially less than the risk of drowning in your bathtub, the risk of a home appliance killing you, or the risk of dying in an accident caused by a deer. Remember that more people die every month in automobile crashes than died in 9/11.
EDITED TO ADD (8/26): Looking over the incidents again, some of them would make pretty good movie plots. The point of my "movie-plot threat" phrase is not that terrorist attacks are never like that, but that concentrating defensive resources against them is pointless because 1) there are too many of them and 2) it is too easy for the terrorists to change tactics or targets.
EDITED TO ADD (9/1): As was pointed out here, I accidentally typed "lives saved" when I meant to type "lives lost." I corrected that, above. We generally have a regulatory safety goal of $1M-$10M per life saved. In order for the $100B we have spent per year on counterterrorism to be worth it, it would need to have saved 10,000 lives per year.
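The back-of-the-envelope arithmetic is easy to check (a sketch; the dollar figures are the estimates quoted above, not new data):

```python
# Cost-per-life arithmetic from the figures cited above.
total_spent = 1e12           # ~$1 trillion on anti-terrorism security since 9/11
deaths = 16                  # U.S. terrorism deaths counted above
cost_per_life = total_spent / deaths
print(cost_per_life)         # $62.5 billion spent per life lost

# Compare against the regulatory safety goal of roughly $1M-$10M per life saved.
annual_spend = 100e9         # ~$100 billion per year on counterterrorism
goal_per_life = 10e6         # upper end of the regulatory range
lives_needed = annual_spend / goal_per_life
print(lives_needed)          # 10,000 lives would need to be saved per year
```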
Nick Helm won an award for the funniest joke at the Edinburgh Fringe Festival:
Nick Helm: "I needed a password with eight characters so I picked Snow White and the Seven Dwarves."
Note that two other jokes were about security:
Tim Vine: "Crime in multi-storey car parks. That is wrong on so many different levels."
The security problems associated with moving $12B in gold from London to Venezuela.
It seems to me that Chávez has four main choices here. He can go the FT’s route, and just fly the gold to Caracas while insuring each shipment for its market value. He can go the Spanish route, and try to transport the gold himself, perhaps making use of the Venezuelan navy. He could attempt the mother of all repo transactions. Or he could get clever.
Any other ideas?
Essay by George Ledin on the security risks of not teaching students malware.
Researchers from UCSD pointed thermal cameras at plastic and metal ATM PIN pads to test how effectively they could be used to steal PINs. The thermal cams didn't work against metal pads, but on plastic pads the success rate of detecting all the digits was 80% after 10 seconds and 60% after 45 seconds. If you think about your average ATM trip, that's a pretty wide window and an embarrassingly high success rate for thieves to take advantage of.
"When the user types on the soft keyboard on her smartphone (especially when she holds her phone by hand rather than placing it on a fixed surface), the phone vibrates. We discover that keystroke vibration on touch screens are highly correlated to the keys being typed."
Worried about someone hacking your implanted medical devices? Here's a signal-jamming device you can wear.
Sleeve cameras aren't new, but they're now smaller than ever and the cheaters are getting more sophisticated:
In January, at the newly opened $4-billion Cosmopolitan casino in Las Vegas, a gang called the Cutters cheated at baccarat. Before play began, the dealer offered one member of the group a stack of eight decks of cards for a pre-game cut. The player probably rubbed the stack for good luck, at the same instant riffling some of the corners of the cards underneath with his index finger. A small camera, hidden under his forearm, recorded the order.
James Fallows has a nice debunking of a movie-plot threat.
I thought this was an interesting read.
Long essay on the value of pseudonymity. From the conclusions:
Here lies the huge irony in this discussion. Persistent pseudonyms aren't ways to hide who you are. They provide a way to be who you are. You can finally talk about what you really believe; your real politics, your real problems, your real sexuality, your real family, your real self. Much of the support for "real names" comes from people who don't want to hear about controversy, but controversy is only a small part of the need for pseudonyms. For most of us, it's simply the desire to be able to talk openly about the things that matter to every one of us who uses the Internet. The desire to be judged -- not by our birth, not by our sex, and not by who we work for -- but by what we say.
This is, of course, a response to the Google+ names policy.
Nice essay on the danger of too much security:
The great lie of the war on terror is not that we can sacrifice a little liberty for greater security. It is that fear can be eliminated, and that all we need to do to improve our society is defeat terrorism, rather than look at the other causes of our social, economic, and political anxiety. That is the great seduction of fear: It allows us to do nothing. It is easier to find new threats than new possibilities.
Any institution delegated with the task of preventing terrorism has a dilemma: they can either do their best to prevent terrorism, or they can do their best to make sure they're not blamed for any terrorist attacks. I've talked about this dilemma for a while now, and it's nice to see some research results that demonstrate its effects.
A. Peter McGraw, Alexander Todorov, and Howard Kunreuther, "A Policy Maker's Dilemma: Preventing Terrorism or Preventing Blame," Organizational Behavior and Human Decision Processes, 115 (May 2011): 25-34.
Abstract: Although anti-terrorism policy should be based on a normative treatment of risk that incorporates likelihoods of attack, policy makers' anti-terror decisions may be influenced by the blame they expect from failing to prevent attacks. We show that people's anti-terror budget priorities before a perceived attack and blame judgments after a perceived attack are associated with the attack's severity and how upsetting it is but largely independent of its likelihood. We also show that anti-terror budget priorities are influenced by directly highlighting the likelihood of the attack, but because of outcome biases, highlighting the attack's prior likelihood has no influence on judgments of blame, severity, or emotion after an attack is perceived to have occurred. Thus, because of accountability effects, we propose policy makers face a dilemma: prevent terrorism using normative methods that incorporate the likelihood of attack or prevent blame by preventing terrorist attacks the public find most blameworthy.
Think about this with respect to the TSA. Are they doing their best to mitigate terrorism, or are they doing their best to ensure that if there's a terrorist attack the public doesn't blame the TSA for missing it?
It's almost time for a deluge of "Ten Years After 9/11" essays. Here's Steven Pinker:
The discrepancy between the panic generated by terrorism and the deaths generated by terrorism is no accident. Panic is the whole point of terrorism, as the root of the word makes clear: "Terror" refers to a psychological state, not an enemy or an event. The effects of terrorism depend completely on the psychology of the audience.
"Biclique Cryptanalysis of the Full AES," by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger.
Abstract. Since Rijndael was chosen as the Advanced Encryption Standard, improving upon 7-round attacks on the 128-bit key variant or upon 8-round attacks on the 192/256-bit key variants has been one of the most difficult challenges in the cryptanalysis of block ciphers for more than a decade. In this paper we present a novel technique of block cipher cryptanalysis with bicliques, which leads to the following results:
This is what I wrote about AES in 2009. I still agree with my advice:
Cryptography is all about safety margins. If you can break n rounds of a cipher, you design it with 2n or 3n rounds. What we're learning is that the safety margin of AES is much less than previously believed. And while there is no reason to scrap AES in favor of another algorithm, NIST should increase the number of rounds of all three AES variants. At this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds. Or maybe even more; we don't want to be revising the standard again and again.
The advice about AES-256 was because of a 2009 attack, not this result.
Again, I repeat the saying I've heard came from inside the NSA: "Attacks always get better; they never get worse."
A prison in Brazil uses geese as part of its alarm system.
There's a long tradition of this. Circa 400 BC, alarm geese alerted a Roman citadel to a Gaul attack.
Nice essay by Christopher Soghoian on why cell phone and Internet providers need to enable security options by default.
Really interesting research.
Search-redirection attacks combine several well-worn tactics from black-hat SEO and web security. First, an attacker identifies high-visibility websites (e.g., at universities) that are vulnerable to code-injection attacks. The attacker injects code onto the server that intercepts all incoming HTTP requests to the compromised page and responds differently based on the type of request: Requests from search-engine crawlers return a mix of the original content, along with links to websites promoted by the attacker and text that makes the website appealing to drug-related queries.
And the paper.
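The cloaking step described above can be sketched in a few lines. Everything here -- the crawler signatures, the pharma.example URL, the function names -- is hypothetical, for illustration only, not taken from the paper:

```python
# Illustrative sketch of the "cloaking" logic in a search-redirection attack:
# injected code on a compromised server answers search-engine crawlers,
# search-referred visitors, and direct visitors differently.

CRAWLER_SIGNATURES = ("googlebot", "bingbot", "slurp")  # hypothetical list

def handle_request(user_agent: str, referer: str, original_page: str) -> str:
    ua = user_agent.lower()
    if any(sig in ua for sig in CRAWLER_SIGNATURES):
        # Crawlers see the original content plus attacker-promoted links,
        # so the compromised page ranks well for drug-related queries.
        return original_page + '<a href="http://pharma.example">cheap meds</a>'
    if "google." in referer or "bing." in referer:
        # Visitors arriving from a search engine get redirected to the
        # attacker's storefront.
        return "REDIRECT http://pharma.example"
    # Direct visitors (e.g., the site's own operator) see the page
    # unchanged, which helps the compromise go unnoticed.
    return original_page
```

The key design point is that the server's response depends on who appears to be asking, which is why the site owner typically never sees anything wrong.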
A couple of weeks ago Wired reported the discovery of a new, undeletable, web cookie:
Researchers at U.C. Berkeley have discovered that some of the net’s most popular sites are using a tracking service that can’t be evaded -- even when users block cookies, turn off storage in Flash, or use browsers’ “incognito” functions.
The Wired article was very short on specifics, so I waited until one of the researchers -- Ashkan Soltani -- wrote up more details. He finally did, in a quite technical essay:
Here's an interview with me from the Homeland Security News Wire.
Publication is still scheduled for the end of February -- in time for the RSA Conference -- assuming I finish the manuscript in time.
EDITED TO ADD (8/12): The cover was inspired by a design by Luke Fretwell. He sent me an unsolicited cover design, which I liked and sent to my publisher. They liked the general idea, but refined it into the cover you see. Luke has a blog post on the exchange, which includes a picture of his cover.
The African crested rat applies tree poison to its fur to make itself more deadly.
The researchers made their discovery after presenting a wild-caught crested rat with branches and roots of the Acokanthera tree, whose bark includes the toxin ouabain.
This seems like a really bad idea:
...the Transportation Security Administration began a program Tuesday allowing pilots to skirt the security-screening process. The TSA has deployed approximately 500 body scanners to airports nationwide in a bid to prevent terrorists from boarding domestic flights, but pilots don't have to go through the controversial nude body scanners or other forms of screening. They don't have to be patted down or go through metal detectors. Their carry-on bags are not searched.
I agree that it doesn't make sense to screen pilots, that they're at the controls of the plane and can crash it if they want to. But the TSA isn't in a position to screen pilots; all they can decide to do is to not screen people who are in pilot uniforms with pilot IDs. And it's far safer to just screen everybody than to trust that TSA agents will be able to figure out who is a real pilot and who is someone just pretending to be a pilot.
I wrote about this in 2006.
"Why (Special Agent) Johnny (Still) Can’t Encrypt: A Security Analysis of the APCO Project 25 Two-Way Radio System," by Sandy Clark, Travis Goodspeed, Perry Metzger, Zachary Wasserman, Kevin Xu, and Matt Blaze.
Abstract: APCO Project 25 (“P25”) is a suite of wireless communications protocols used in the US and elsewhere for public safety two-way (voice) radio systems. The protocols include security options in which voice and data traffic can be cryptographically protected from eavesdropping. This paper analyzes the security of P25 systems against both passive and active adversaries. We found a number of protocol, implementation, and user interface weaknesses that routinely leak information to a passive eavesdropper or that permit highly efficient and difficult to detect active attacks. We introduce new selective subframe jamming attacks against P25, in which an active attacker with very modest resources can prevent specific kinds of traffic (such as encrypted messages) from being received, while emitting only a small fraction of the aggregate power of the legitimate transmitter. We also found that even the passive attacks represent a serious practical threat. In a study we conducted over a two year period in several US metropolitan areas, we found that a significant fraction of the “encrypted” P25 tactical radio traffic sent by federal law enforcement surveillance operatives is actually sent in the clear, in spite of their users’ belief that they are encrypted, and often reveals such sensitive data as the names of informants in criminal investigations.
I've heard Matt talk about this project several times. It's great work, and a fascinating insight into the usability problems of encryption in the real world.
It's kind of like a covert channel.
Nohl's group found a number of problems with GPRS. First, he says, lax authentication rules could allow an attacker to set up a fake cellular base station and eavesdrop on information transmitted by users passing by. In some countries, they found that GPRS communications weren't encrypted at all. When they were encrypted, Nohl adds, the ciphers were often weak and could be either broken or decoded with relatively short keys that were easy to guess.
I'm a big fan of taxonomies, and this -- from Carnegie Mellon -- seems like a useful one:
The taxonomy of operational cyber security risks, summarized in Table 1 and detailed in this section, is structured around a hierarchy of classes, subclasses, and elements. The taxonomy has four main classes:
There's a security story from biology I've used a few times: plants that use chemicals to call in airstrikes by wasps on the herbivores attacking them. This is a new variation: a species of orchid that emits the same signals as a trick, to get pollinated.
An article from Salon -- lots of interesting research.
My previous blog post on the topic.
The German Federal Criminal Police (the “Bundeskriminalamt” or BKA for short) recently warned consumers about a new Windows malware strain that waits until the victim logs in to his bank account. The malware then presents the customer with a message stating that a credit has been made to his account by mistake, and that the account has been frozen until the errant payment is transferred back.
I've been using the phrase "arms race" to describe the world's militaries' rush into cyberspace for a couple of years now. Here's a good article on the topic that uses the same phrase.
I just can't make this stuff up:
A report of a severed hand found at an Oahu seabird sanctuary has turned out to be dried squid.
Remember: if you see something, say something.
I admit I don't pay much attention to pencil-and-paper ciphers, so I knew nothing about the Zodiac cipher. Seems it has finally been broken:
The Zodiac Killer was a serial killer who preyed on couples in Northern California in the years between 1968 and 1970. Of his seven confirmed victims, five died. More victims and attacks are suspected.
Code and solution -- with photos -- here.
EDITED TO ADD (8/5): Solution seems to be a hoax.
I'm not surprised:
The weekly Welt am Sonntag, quoting a police report, said 35 percent of the 730,000 passengers checked by the scanners set off the alarm more than once despite being innocent.
However, this surprised me:
The European parliament backed on July 6 the deployment of body scanners at airports, but on condition that travellers have the right to refuse to walk through the controversial machines.
I was told in Amsterdam that there was no option. I either had to walk through the machines, or not fly.
Here's a story about full-body scanners that are overly sensitive to sweaty armpits.
Two items on hacking lotteries. The first is about someone who figured out how to spot winners in a scratch-off tic-tac-toe-style game, and a daily-draw-style game where expected payout can exceed the ticket price. The second is about someone who has won the lottery four times, with speculation that she had advance knowledge of where and when certain jackpot-winning scratch-off tickets would be sold.
EDITED TO ADD (8/13): The Boston Globe has a story on how to make money on Massachusetts' Cash WinFall.
Seems that the one-time pad was not first invented by Vernam:
He could plainly see that the document described a technique called the one-time pad fully 35 years before its supposed invention during World War I by Gilbert Vernam, an AT&T engineer, and Joseph Mauborgne, later chief of the Army Signal Corps.
It seems that Vernam was not aware of Miller's work, and independently invented the one-time pad.
The article is in the context of the big Facebook lawsuit, but the part about identifying people by their writing style is interesting:
Recently, a team of computer scientists at Concordia University in Montreal took advantage of an unusual set of data to test another method of determining e-mail authorship. In 2003, the Federal Energy Regulatory Commission, as part of its investigation into Enron, released into the public domain hundreds of thousands of employee e-mails, which have become an important resource for forensic research. (Unlike novels, newspapers or blogs, e-mails are a private form of communication and aren’t usually available as a sizable corpus for analysis.)
It seems reasonable that we have a linguistic fingerprint, although 1) there are far fewer of them than finger fingerprints, and 2) they're easier to fake. It's probably not much of a stretch to take software that "identifies bundles of linguistic features, hundreds in all" and use the data to automatically modify my writing to look like someone else's.
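A minimal sketch of how such stylometric matching might work, comparing texts by the relative frequency of common function words (real systems use hundreds of features; this toy version uses a handful, and all names are illustrative):

```python
# Toy stylometric comparison: profile each text by function-word frequencies,
# then compare profiles with cosine similarity. A higher score suggests the
# same author; real systems use hundreds of features, not eight.
import math

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "is"]

def profile(text: str) -> list:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    total = max(len(words), 1)
    return [words.count(w) / total for w in FUNCTION_WORDS]

def similarity(a: list, b: list) -> float:
    """Cosine similarity between two profiles, in [0, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

To attribute an anonymous e-mail, you would compare its profile against profiles built from each candidate author's known writing and take the best match. The same machinery points at the countermeasure mentioned above: shift your word frequencies toward someone else's profile and the match degrades.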
Eventually, it will work. You'll be able to wear a camera that will automatically recognize someone walking towards you, and an earpiece that will relay who that person is and maybe something about him. None of the technologies required to make this work are hard; it's just a matter of getting the error rate down low enough for it to be a useful system. And there have been a number of recent research results and news stories that illustrate what this new world might look like.
The police want this sort of system. I already blogged about MORIS, an iris-scanning technology that several police forces in the U.S. are using. The next step is the face-scanning glasses that the Brazilian police claim they will be wearing at the 2014 World Cup.
A small camera fitted to the glasses can capture 400 facial images per second and send them to a central computer database storing up to 13 million faces.
In the future, this sort of thing won't be limited to the police. Facebook has recently embarked on a major photo tagging project, and already has the largest collection of identified photographs in the world outside of a government. Researchers at Carnegie Mellon University have combined the public part of that database with a camera and face-recognition software to identify students on campus. (The paper fully describing their work is under review and not online yet, but slides describing the results can be found here.)
Of course, there are false positives -- as there are with any system like this. That's not a big deal if the application is a billboard with face-recognition serving different ads depending on the gender and age -- and eventually the identity -- of the person looking at it, but is more problematic if the application is a legal one.
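A quick base-rate calculation shows why false positives dominate at this scale. All of the numbers below are assumptions chosen for illustration, not figures from any deployed system:

```python
# Base-rate arithmetic for a face-recognition system scanning crowds.
# Even a small false-positive rate swamps the rare true matches.

scans_per_day = 1_000_000    # faces scanned in a day (assumed)
targets_present = 10         # actual wanted persons among them (assumed)
false_positive_rate = 0.001  # 0.1% of innocents misidentified (assumed)
true_positive_rate = 0.99    # 99% of targets correctly flagged (assumed)

false_alarms = (scans_per_day - targets_present) * false_positive_rate
true_hits = targets_present * true_positive_rate
precision = true_hits / (true_hits + false_alarms)
print(f"false alarms/day: {false_alarms:.0f}, precision: {precision:.3%}")
# About 1,000 innocent people flagged for every ~10 real hits: any given
# flag is almost certainly wrong, which is tolerable for ad targeting but
# not for legal consequences.
```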
In Boston, someone erroneously had his driver's license revoked:
It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.
The problem is less with the system, and more with the guilty-until-proven-innocent way in which the system is used.
Kaprielian said the Registry gives drivers enough time to respond to the suspension letters and that it is the individual's "burden" to clear up any confusion. She added that protecting the public far outweighs any inconvenience Gass or anyone else might experience.
EDITED TO ADD (8/3): Here's a system embedded in a pair of glasses that automatically analyzes and relays micro-facial expressions. The goal is to help autistic people who have trouble reading emotions, but you could easily imagine this sort of thing becoming common. And what happens when we start relying on these computerized systems and ignoring our own intuition?
EDITED TO ADD: CV Dazzle is camouflage from face detection.
Embedded system vulnerabilities in prisons:
Some of the same vulnerabilities that the Stuxnet superworm used to sabotage centrifuges at a nuclear plant in Iran exist in the country’s top high-security prisons, according to security consultant and engineer John Strauchs, who plans to discuss the issue and demonstrate an exploit against the systems at the DefCon hacker conference next week in Las Vegas.
This seems like a minor risk today; Stuxnet was a military-grade effort, and beyond the reach of your typical criminal organization. But that can only change, as people study and learn from the reverse-engineered Stuxnet code and as hacking PLCs becomes more common.
As we move from mechanical, or even electro-mechanical, systems to digital systems, and as we network those digital systems, this sort of vulnerability is going to only become more common.
Abstract: Over the last two decades FPGAs have become central components for many advanced digital systems, e.g., video signal processing, network routers, data acquisition and military systems. In order to protect the intellectual property and to prevent fraud, e.g., by cloning an FPGA or manipulating its content, many current FPGAs employ a bitstream encryption feature. We develop a successful attack on the bitstream encryption engine integrated in the widespread Virtex-II Pro FPGAs from Xilinx, using side-channel analysis. After measuring the power consumption of a single power-up of the device and a modest amount of off-line computation, we are able to recover all three different keys used by its triple DES module. Our method allows extracting secret keys from any real-world device where the bitstream encryption feature of Virtex-II Pro is enabled. As a consequence, the target product can be cloned and manipulated at will of the attacker. Also, more advanced attacks such as reverse engineering or the introduction of hardware Trojans become potential threats. As part of the side-channel attack, we were able to deduce certain internals of the hardware encryption engine. To our knowledge, this is the first attack against the bitstream encryption of a commercial FPGA reported in the open literature.
Interesting paper: "Science Fiction Prototyping and Security Education: Cultivating Contextual and Societal Thinking in Computer Security Education and Beyond," by Tadayoshi Kohno and Brian David Johnson.
Abstract: Computer security courses typically cover a breadth of technical topics, including threat modeling, applied cryptography, software security, and Web security. The technical artifacts of computer systems -- and their associated computer security risks and defenses -- do not exist in isolation, however; rather, these systems interact intimately with the needs, beliefs, and values of people. This is especially true as computers become more pervasive, embedding themselves not only into laptops, desktops, and the Web, but also into our cars, medical devices, and toys. Therefore, in addition to the standard technical material, we argue that students would benefit from developing a mindset focused on the broader societal and contextual issues surrounding computer security systems and risks. We used science fiction (SF) prototyping to facilitate such societal and contextual thinking in a recent undergraduate computer security course. We report on our approach and experiences here, as well as our recommendations for future computer security and other computer science courses.