October 15, 2009
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0910.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
In this issue:
Nobody tell the TSA, but last month someone tried to assassinate a Saudi prince by exploding a bomb stuffed in his rectum. He pretended to be a repentant militant, when in fact he was a Trojan horse: "The resulting explosion ripped al-Asiri to shreds but only lightly injured the shocked prince -- the target of al-Asiri's unsuccessful assassination attempt."
For years, I have made the joke about Richard Reid: "Just be glad that he wasn't the underwear bomber." Now, sadly, we have an example of one.
Lewis Page, an "improvised-device disposal operator tasked in support of the UK mainland police from 2001-2004," pointed out that this isn't much of a threat for three reasons: 1) you can't stuff a lot of explosives into a body cavity, 2) detonation is, um, problematic, and 3) the human body can stifle an explosion pretty effectively (think of someone throwing himself on a grenade to save his friends).
But who ever accused the TSA of being rational?
Printing police handcuff keys using a 3D printer:
Good essay on "terrorist havens" -- like Afghanistan -- and why they're not as big a worry as some maintain.
Back in 2005, I wrote about the failure of two-factor authentication to mitigate banking fraud. We're now seeing attacks that bypass that security measure.
Quantum computer factors the number 15. It's an important development, but don't give up on public-key cryptography just yet.
This is a good thing: "An Illinois district court has allowed a couple to sue their bank on the novel grounds that it may have failed to sufficiently secure their account, after an unidentified hacker obtained a $26,500 loan on the account using the customers' user name and password." As I've previously written, this is the only way to mitigate this kind of fraud. It's an important security principle: ensure that the person who has the ability to mitigate the risk is responsible for the risk. In this case, the account holders had nothing to do with the security of their account. They could not audit it. They could not improve it. The bank, on the other hand, has the ability to improve security and mitigate the risk, but because they pass the cost on to their customers, they have no incentive to do so. Litigation like this has the potential to fix the externality and improve security.
Sears spies on its customers; it's not just hackers who steal financial and medical information.
A stick figure guide to AES.
Predicting characteristics of people by the company they keep:
The average American commits three felonies a day: the title of a new book by Harvey Silverglate. More specifically, the problem is the intersection of vague laws and fast-moving technology.
Immediacy affects risk assessments:
During a daring bank robbery in Sweden that involved a helicopter, the criminals disabled a police helicopter by placing a package with the word "bomb" near the helicopter hangar, thus engaging the full caution/evacuation procedure while they escaped. This attack worked, even though the police had been warned.
Reproducing keys from distant and angled photographs:
Proving a computer program's correctness:
Security theater in New York for the U.N. General Assembly:
Moving hippos in a post-9/11 world:
There's a Trojan horse out there that not only makes transactions in your name from your bank accounts, but alters your online bank statements so you won't notice the money transfers. If there's a moral here, it's that banks can't rely on the customer to detect fraud. But we already knew that.
You'd think this would be an obvious piece of advice: don't let hacker inmates reprogram the prison's computers. But, then again, this is the same prison that gave a lockpicking inmate access to the prison's keys. What's next: inmate sharpshooters in charge of the prison's gun locker?
Detecting forged signatures using pen pressure and angle:
Earlier this month, DHS Secretary Janet Napolitano said that the U.S. needed to hire 1,000 cybersecurity experts over the next three years. Bob Cringely doubts that there even are 1,000 cybersecurity experts out there to hire. I suppose it depends on what she means by "experts."
Pigs defeating RFID-enabled feeding systems:
Using wi-fi to "see" through walls:
Wi-fi blocking paint:
Good essay by David Dittrich: "Malware to crimeware: How far have they gone, and how do we catch up?"
The current state of P versus NP:
In computer security, a lot of effort is spent on the authentication problem. Whether it's passwords, secure tokens, secret questions, image mnemonics, or something else, engineers are continually coming up with more complicated -- and hopefully more secure -- ways for you to prove you are who you say you are over the Internet.
This is important stuff, as anyone with an online bank account or remote corporate network knows. But a lot less thought and work have gone into the other end of the problem: how do you tell the system on the other end of the line that you're no longer there? How do you unauthenticate yourself?
My home computer requires me to log out or turn my computer off when I want to unauthenticate. This works for me because I know enough to do it, but lots of people just leave their computers on and running when they walk away. As a result, many office computers are left logged in when people go to lunch, or when they go home for the night. This, obviously, is a security vulnerability.
The most common way to combat this is by having the system time out. I could have my computer log me out automatically after a certain period of inactivity -- five minutes, for example. Getting it right requires some fine tuning, though. Log the person out too quickly, and he gets annoyed; wait too long before logging him out, and the system could be vulnerable during that time. My corporate e-mail server logs me out after 10 minutes or so, and it annoys me regularly.
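The timeout logic itself is simple to sketch. Here's a minimal, hypothetical inactivity lock in Python -- the class name, the timeout value, and the injectable clock are all illustrative, not any particular product's design:

```python
import time

class IdleLock:
    """Toy sketch: lock a session after a period of inactivity."""

    def __init__(self, timeout_seconds, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock            # injectable for testing
        self.last_activity = clock()
        self.locked = False

    def touch(self):
        """Record user activity, resetting the idle timer."""
        if not self.locked:
            self.last_activity = self.clock()

    def poll(self):
        """Called periodically; locks the session once the timeout elapses."""
        if self.clock() - self.last_activity >= self.timeout:
            self.locked = True
        return self.locked
```

In a real system, `poll()` would run on a timer and the timeout would be tuned per deployment -- which is exactly the fine-tuning problem described above.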
Some systems have experimented with a token: a USB authentication token that has to be plugged in for the computer to operate, or an RFID token that logs people out automatically when the token moves more than a certain distance from the computer. Of course, people will be prone to just leave the token plugged in to their computer all the time; but if you attach it to their car keys or the badge they have to wear at all times when walking around the office, the risk is minimized.
That's expensive, though. A research project used a Bluetooth device, like a cell phone, and measured its proximity to a computer. The system could be programmed to lock the computer if the Bluetooth device moved out of range.
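The proximity idea can be sketched the same way. This hypothetical Python snippet assumes some external source of Bluetooth signal-strength (RSSI) readings -- more negative means farther away -- and adds hysteresis, requiring several consecutive weak readings, so a momentary radio glitch doesn't lock the screen:

```python
class ProximityLock:
    """Toy sketch: lock when a paired device's signal stays weak.

    The threshold and streak length are illustrative; real systems
    would calibrate both to the radio environment.
    """

    def __init__(self, threshold_dbm=-70, consecutive=3):
        self.threshold = threshold_dbm
        self.needed = consecutive
        self.weak_streak = 0

    def reading(self, rssi_dbm):
        """Feed one RSSI sample; returns True when the screen should lock."""
        if rssi_dbm < self.threshold:
            self.weak_streak += 1     # device seems far away
        else:
            self.weak_streak = 0      # device is close again; reset
        return self.weak_streak >= self.needed
```

The hysteresis is the interesting design choice: it trades a few seconds of exposure for not constantly locking people out of their own machines.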
Some systems log people out after every transaction. This wouldn't work for computers, but it can work for ATMs. The machine either spits my card out before it gives me my cash -- making sure I take it with me -- or just requires a quick swipe in the first place. If I want to perform another transaction, I have to reinsert my card and enter my PIN a second time.
There's a physical analogue that everyone understands: door locks. Does your door lock behind you when you close the door, or does it remain unlocked until you lock it? The first instance is a system that automatically logs you out, and the second requires you to log out manually. Both types of locks are sold and used, and which one you choose depends on both how you use the door and who you expect to try to break in.
Designing systems for usability is hard, especially when security is involved. Almost by definition, making something secure makes it less usable. Choosing an unauthentication method depends a lot on how the system is used as well as the threat model. You have to balance increasing security with pissing the users off, and getting that balance right takes time and testing, and is much more an art than a science.
This essay originally appeared on ThreatPost.
This is just silly:
Beaver Stadium is a terrorist target. It is most likely the No. 1 target in the region. As such, it deserves security measures commensurate with such a designation, but is the stadium getting such security?
When the stadium is not in use it does not mean it is not a target. It must be watched constantly. An easy solution is to assign police officers there 24 hours a day, seven days a week. This is how a plot to destroy the Brooklyn Bridge was thwarted -- police presence. Although there are significant costs to this, the costs pale in comparison if the stadium is destroyed or damaged.
The idea is to create omnipresence, which is a belief in everyone's minds (terrorists and pranksters included) that the stadium is constantly being watched so that any attempt would be futile.
Actually, the Brooklyn Bridge plot failed because the plotters were idiots and the plot -- cutting through cables with blowtorches -- was dumb. That, and the all-too-common police informant who egged the plotters on.
But never mind that. Beaver Stadium is Pennsylvania State University's football stadium, and this article argues that it's a potential terrorist target that needs 24/7 police protection.
The problem with that kind of reasoning is that it makes no sense. As I said in an article that will appear in "New Internationalist":
To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it's impossible to defend every place against everything, and it's impossible to predict which tactic and target terrorists will try next.
Defending individual targets only makes sense if the number of potential targets is few. If there are seven terrorist targets and you defend five of them, you seriously reduce the terrorists' ability to do damage. But if there are a million terrorist targets and you defend five of them, the terrorists won't even notice. I tend to dislike security measures that merely cause the bad guys to make a minor change in their plans.
And the expense would be enormous. Add up these secondary terrorist targets -- stadiums, theaters, churches, schools, malls, office buildings, anyplace where a lot of people are packed together -- and the number is probably around 200,000, including Beaver Stadium. Guarding a single post around the clock takes about five officers once you account for shifts, weekends, and vacations, so that's 1,000,000 policemen. At an encumbered cost of $100,000 per policeman per year, probably a low estimate, that's a total annual cost of $100B. (That's about what we're spending each year in Iraq.) On the other hand, hiring one out of every 300 Americans to guard our nation's infrastructure would solve our unemployment problem. And since policemen get health care, our health care problem as well. Just make sure you don't accidentally hire a terrorist to guard against terrorists -- that would be embarrassing.
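The back-of-the-envelope arithmetic works out like this (the five-officers-per-post figure is my assumption for what 24/7 staffing actually requires):

```python
targets = 200_000           # stadiums, theaters, malls, schools, ...
officers_per_post = 5       # assumed: round-the-clock coverage with shifts
cost_per_officer = 100_000  # encumbered annual cost, in dollars

officers = targets * officers_per_post
annual_cost = officers * cost_per_officer

print(officers)             # 1,000,000 full-time guards
print(annual_cost)          # $100,000,000,000 per year
```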
The whole idea is nonsense. As I've been saying for years, what works is investigation, intelligence, and emergency response:
We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn't make arbitrary assumptions about the next terrorist act. We need to spend more money on intelligence and investigation: identifying the terrorists themselves, cutting off their funding, and stopping them regardless of what their plans are. We need to spend more money on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy and how it helps or hinders terrorism.
Beaver Stadium piece:
Terrorists as idiots:
My essay on investigation, intelligence, and emergency response:
I'm speaking at Information Security Decisions in Chicago on October 21.
I'm speaking at the ISF Annual World Congress in Vancouver on November 2.
I'm speaking at the Gartner Identity and Access Management Conference in San Diego on November 9.
I'm speaking at the Internet Governance Forum in Sharm el-Sheikh, Egypt, on November 15.
Texas Instruments' calculators use RSA digital signatures to authenticate any updates to their operating system. Unfortunately, their signing keys are too short: 512 bits. Earlier this month, a collaborative effort factored the moduli and published the private keys. Texas Instruments responded with DMCA takedown threats against websites that published the keys, but it's too late.
So far, we have the operating-system signing keys for the TI-92+, TI-73, TI-89, TI-83+/TI-83+ Silver Edition, Voyage 200, TI-89 Titanium, and the TI-84+/TI-84 Silver Edition, and the date-stamp signing key for the TI-73, Explorer, TI-83 Plus, TI-83 Silver Edition, TI-84 Plus, TI-84 Silver Edition, TI-89, TI-89 Titanium, TI-92 Plus, and the Voyage 200.
Moral: Don't assume that just because your application is obscure, or there's no obvious financial incentive to attack it, your cryptography won't be broken if you use too-short keys.
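To see why key length is the whole game here, consider a toy version of the attack in Python. The primes below are absurdly small so that trial division finds the factors instantly; the real 512-bit moduli took a distributed factoring effort, but the key-recovery math afterward is exactly this one-liner:

```python
# Toy RSA keypair with tiny textbook primes (illustration only).
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private signing exponent (Python 3.8+)

# Step 1 of the attack: factor the public modulus.
def factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("no nontrivial factor found")

fp, fq = factor(n)

# Step 2: rederive the private key from the factors.
recovered_d = pow(e, -1, (fp - 1) * (fq - 1))
assert recovered_d == d

# The attacker can now sign anything the device will accept.
msg = 1234
forged_sig = pow(msg, recovered_d, n)
assert pow(forged_sig, e, n) == msg   # forgery verifies under the public key
```

Trial division obviously doesn't scale, and the TI moduli fell to far heavier machinery. But once the modulus is factored, recovering the signing key really is a single modular inverse -- which is why 512 bits was never enough margin.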
Two entries this time.
Both are entertaining to read.
It's over 2,000 pages, so it'll take time to make any sense of. According to Ross Anderson, who's given it a quick look over, "it seems to be the bureaucratic equivalent of spaghetti code: a hodgepodge of things written by people from different backgrounds, and with different degrees of clue, in different decades."
The computer security stuff starts at page 1,531.
There are thousands of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.