CRYPTO-GRAM

July 15, 2011
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1107.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.
In this issue:
- Man Flies with Someone Else’s Ticket and No Legal ID
- News
- Court Ruling on “Reasonable” Electronic Banking Security
- Protecting Private Information on Smart Phones
- Schneier News
- Yet Another “People Plug in Strange USB Sticks” Story
Man Flies with Someone Else’s Ticket and No Legal ID

Last week, I got a bunch of press calls about Olajide Oluwaseun Noibi, who flew from New York to Los Angeles using an expired ticket in someone else’s name and a university ID. They all wanted to know what this says about airport security.
It says that airport security isn’t perfect, and that people make mistakes. But it’s not something that anyone should worry about. It’s not like Noibi figured out a new hole in the airport security system, one that he was able to exploit repeatedly. He got lucky. He got real lucky. It’s not something a terrorist can build a plot around.
I’m even less concerned because I’ve never thought the photo ID check had any value. Noibi was screened, just like any other passenger. Even the TSA blog makes this point: “In this case, TSA did not properly authenticate the passenger’s documentation. That said, it’s important to note that this individual received the same thorough physical screening as other passengers, including being screened by advanced imaging technology (body scanner).”
It seems the TSA itself regularly downplays the value of the photo ID check. This is from a Q&A about Secure Flight, its new system to match passengers with watch lists:
Q: This particular “layer” isn’t terribly effective. If this “layer” of security can be circumvented by anyone with a printer and a word processor, this doesn’t seem to be a terribly useful “layer” … especially looking at the amount of money being expended on this particular “layer”. It might be that this money could be more effectively spent on other “layers”.

A: TSA uses layers of security to ensure the security of the traveling public and the Nation’s transportation system. Secure Flight’s watchlist name matching constitutes only one security layer of the many in place to protect aviation. Others include intelligence gathering and analysis, airport checkpoints, random canine team searches at airports, federal air marshals, federal flight deck officers and more security measures both visible and invisible to the public.

Each one of these layers alone is capable of stopping a terrorist attack. In combination their security value is multiplied, creating a much stronger, formidable system. A terrorist who has to overcome multiple security layers in order to carry out an attack is more likely to be pre-empted, deterred, or to fail during the attempt.
Yes, the answer says that they need to spend millions to ensure that terrorists with a viable plot also need a computer, but you can tell that their heart isn’t in it. “Checkpoints! Dogs! Air marshals! Ignore the stupid photo ID requirement.”
Noibi is an embarrassment for the TSA and for Virgin America, both of which are supposed to catch this kind of thing. But I’m not worried about the security risk, and neither is the TSA.
TSA blog on ID checking:
News

Fourth Security and Human Behavior (SHB 2011) workshop.
Threat models colliding at movie-theater projectors.
Interesting essay on the decline of al Qaeda:
Similar article from The Economist:
Excellent satire: horse “no ride” list
New paper from the RAND Corporation: “Assessing the Security Benefits of a Trusted Traveler Program in the Presence of Attempted Attacker Exploitation and Compromise”:
The life cycle of cryptographic hash functions:
Good paper: “Sex, Lies and Cyber-crime Surveys,” Dinei Florencio and Cormac Herley, Microsoft Research. I’ve been complaining about our reliance on self-reported statistics for cyber-crime.
Nice article on Firesheep in action.
Many of our informal security systems involve convincing others to do what we want them to. Here’s a theory that says human reasoning evolved not as a tool to better understand the world or solve problems, but to win arguments and persuade other humans.
Details of an insider attack against M&A information. The attacker only looked at document titles, so as not to trigger any audit records.
National Security Agency (NSA) SIGINT Reporter’s Style and Usage Manual, 2010.
People assisting a hostage taker via his Facebook page.
Selling a good reputation on eBay:
There’s some great data on common iPhone passwords. I’m sure the results also apply to banking PINs.
This is a really weird story about the Chinese army developing an online first-person shooter game:
Unsurprisingly, the U.S. military is funding research in secure chips.
A really interesting essay comparing the IRA and al Qaeda.
The evolution of organized crime in Ireland in the face of increased security:
Nice article on the history of Stuxnet.
Interesting research: insurgent groups exhibit learning curves.
Interview with Evgeny Kaspersky.
This creates far more security risks than it solves: “The city council in Cedar Falls, Iowa has absolutely crossed the line. They voted 6-1 in favor of expanding the use of lock boxes on commercial property. Property owners would be forced to place the keys to their businesses in boxes outside their doors so that firefighters, in that one-in-a-million chance, would have easy access to get inside.”
We in the computer security world have been here before, over ten years ago.
Indiana University of Pennsylvania is offering a Master of Science in Strategic Studies in Weapons of Mass Destruction.
Court Ruling on “Reasonable” Electronic Banking Security

One of the pleasant side effects of being too busy to write longer blog posts is that — if I wait long enough — someone else writes what I would have wanted to.
The ruling in the Patco Construction vs. People’s United Bank case is important, because the judge basically ruled that the bank’s substandard security was good enough — and Patco is stuck paying for the fraud that resulted from that substandard security. The details are important, and Brian Krebs has written an excellent summary.
Protecting Private Information on Smart Phones

AppFence is a technology — with a working prototype — that protects personal information on smart phones. It does this by either substituting innocuous information in place of sensitive information or blocking attempts by the application to send the sensitive information over the network.
The significance of systems like AppFence is that they have the potential to change the balance of power in privacy between mobile application developers and users. Today, application developers get to choose what information an application will have access to, and the user faces a take-it-or-leave-it proposition: users must either grant all the permissions requested by the application developer or abandon installation. Take-it-or-leave-it offers may make it easier for applications to obtain access to information that users don’t want applications to have. Many applications take advantage of this to gain access to users’ device identifiers and location for behavioral tracking and advertising. Systems like AppFence could make it harder for applications to access these types of information without more explicit consent and cooperation from users.
The problem is that the mobile OS providers might not like AppFence. Google probably doesn’t care, but Apple is one of the biggest consumers of iPhone personal information. Right now, the prototype only works on Android, because it requires flashing the phone. In theory, the technology can be made to work on any mobile OS, but good luck getting Apple to agree to it.
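AppFence’s two mechanisms — feeding the app innocuous substitutes for sensitive values, and refusing network sends that carry sensitive data — can be sketched conceptually. This is not AppFence’s actual code or API; the class and names below are hypothetical, just a minimal model of the idea:

```python
# Conceptual sketch (NOT AppFence's real implementation) of its two
# privacy mechanisms: data shadowing and exfiltration blocking.

SHADOW_DEVICE_ID = "0000000000000000"  # innocuous stand-in value

class PrivacyProxy:
    """Hypothetical mediator between an app and sensitive phone data."""

    def __init__(self, real_device_id, shadow=True):
        self.real_device_id = real_device_id
        self.shadow = shadow        # mechanism 1: shadowing on/off
        self.tainted = set()        # values released from sensitive sources

    def get_device_id(self):
        if self.shadow:
            # Mechanism 1: the app never sees the real identifier.
            return SHADOW_DEVICE_ID
        # If the real value is released, remember it as tainted.
        self.tainted.add(self.real_device_id)
        return self.real_device_id

    def send_over_network(self, payload):
        # Mechanism 2: block any send that contains a tainted value.
        if any(t in payload for t in self.tainted):
            return False  # blocked
        return True       # allowed

# Shadowing: the app gets a fake ID, so sending it leaks nothing.
p = PrivacyProxy("35-209900-176148-1", shadow=True)
assert p.get_device_id() == SHADOW_DEVICE_ID

# Blocking: with shadowing off, a send containing the real ID is refused,
# while an innocuous payload still goes through.
p = PrivacyProxy("35-209900-176148-1", shadow=False)
real_id = p.get_device_id()
assert p.send_over_network("id=" + real_id) is False
assert p.send_over_network("score=42") is True
```

The real system does this below the application, via OS-level taint tracking, so the app cannot opt out — which is exactly why it shifts the balance of power toward the user.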
Schneier News

Blog post on the potential title and cover for my next book.
The title decision was made last week. It’s “Liars and Outliers: How Security Holds Society Together.”
Interview with me from Infosecurity magazine:
Yet Another “People Plug in Strange USB Sticks” Story

I’m really getting tired of stories like this: “Computer disks and USB sticks were dropped in parking lots of government buildings and private contractors, and 60% of the people who picked them up plugged the devices into office computers. And if the drive or CD had an official logo on it, 90% were installed.”
Of *course* people plugged in USB sticks and computer disks. It’s like “75% of people who picked up a discarded newspaper on the bus read it.” What else are people supposed to do with them?
And this is not the right response: “Mark Rasch, director of network security and privacy consulting for Falls Church, Virginia-based Computer Sciences Corp., told Bloomberg: ‘There’s no device known to mankind that will prevent people from being idiots.’”
Maybe it would be the right response if 60% of people tried to play the USB sticks like ocarinas, or tried to make omelettes out of the computer disks. But not if they plugged them into their computers. That’s what they’re for.
People get USB sticks all the time. The problem isn’t that people are idiots, or that they should somehow know that a USB stick found on the street is automatically bad while a USB stick given away at a trade show is automatically good. The problem is that the OS trusts random USB sticks. The problem is that the OS will automatically run a program that can install malware from a USB stick. The problem is that it isn’t safe to plug a USB stick into a computer.
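And that OS behavior is a default, not a law of nature. On Windows systems of this era, for example, AutoRun can be disabled for all drive types with a documented registry policy (a config fragment, run from an administrator prompt; the value 0xFF covers every drive type):

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDriveTypeAutoRun /t REG_DWORD /d 0xFF /f
```

That one setting removes the attack the study exploited, without requiring a single user to get smarter.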
Quit blaming the victim. They’re just trying to get by.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2011 by Bruce Schneier.