August 15, 2012
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1208.html>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively comment section. An RSS feed is available.
In this issue:
- Overreaction and Overly Specific Reactions to Rare Risks
- Yet Another Risk of Storing Everything in the Cloud
- Schneier News
- Sexual Harassment at DefCon (and Other Hacker Cons)
- Police Sting Operation Yields No Mobile Phone Thefts
- Remote Scanning Technology
Horrific events, such as the massacre in Aurora, can be catalysts for social and political change. Sometimes it seems that they're the only catalyst; recall how drastically our policies toward terrorism changed after 9/11 despite how moribund they were before.
The problem is that fear can cloud our reasoning, causing us to overreact and to focus too narrowly on the specifics. The key is to steer our desire for change during that time of fear.
Our brains aren't very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.
There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.
If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.
Novelty plus dread plus a good story equals overreaction.
And who are the major storytellers these days? Television and the Internet. So when news programs and sites endlessly repeat the story from Aurora, with interviews with those in the theater, interviews with the families, and commentary by anyone who has a point to make, we start to think this is something to fear, rather than a rare event that almost never happens and isn't worth worrying about. In other words, reading five stories about the same event feels somewhat like five separate events, and that skews our perceptions.
We see the effects of this all the time.
We fear being murdered, kidnapped, raped and assaulted by strangers, when it's far more likely that the perpetrator of such offenses is a relative or a friend. We worry about airplane crashes and rampaging shooters instead of automobile crashes and domestic violence -- both of which are far more common and far, far more deadly.
Our greatest recent overreaction to a rare event was our response to the terrorist attacks of 9/11. I remember then-Attorney General John Ashcroft giving a speech in Minnesota -- where I live -- in 2003 in which he claimed that the fact there were no new terrorist attacks since 9/11 was proof that his policies were working. I remember thinking: "There were no terrorist attacks in the two years preceding 9/11, and you didn't have any policies. What does that prove?"
What it proves is that terrorist attacks are very rare, and perhaps our national response wasn't worth the enormous expense, loss of liberty, attacks on our Constitution and damage to our credibility on the world stage. Still, overreacting was the natural thing for us to do. Yes, it was security theater and not real security, but it made many of us feel safer.
The rarity of events such as the Aurora massacre doesn't mean we should ignore any lessons it might teach us. Because people overreact to rare events, they're useful catalysts for social introspection and policy change. The key here is to focus not on the details of the particular event but on the broader issues common to all similar events.
Installing metal detectors at movie theaters doesn't make sense -- there's no reason to think the next crazy gunman will choose a movie theater as his venue, and how effectively would a metal detector deter a lone gunman anyway? -- but understanding the reasons why the United States has so many gun deaths compared with other countries does. The particular motivations of alleged killer James Holmes aren't relevant -- the next gunman will have different motivations -- but the general state of mental health care in the United States is.
Even with this, the most important lesson of the Aurora massacre is how rare these events actually are. Our brains are primed to believe that movie theaters are more dangerous than they used to be, but they're not. The riskiest part of the evening is still the car ride to and from the movie theater, and even that's very safe.
But wear a seat belt all the same.
EDITED TO ADD: I almost added that Holmes wouldn't have been stopped by a metal detector. He walked into the theater unarmed and left through a back door, which he propped open so he could return armed. And while there was talk about installing metal detectors in movie theaters, I have not heard of any theater actually doing so. But AMC movie theaters have announced a "no masks or costumes policy" as a security measure.
Comparing violence by country:
AMC's "no masks or costumes" policy:
Normally, companies instruct their employees not to resist hijackers. But Hainan Airlines did the opposite; they rewarded crewmembers for resisting. "Two safety officers and the chief purser got cash and property worth 4m yuan ($628,500; £406,200) each. The rest got assets worth 2.5m yuan each." That's a lot of money, especially in China. I'm sure it will influence future decisions by crew, and even passengers, about resisting terrorist attacks.
Opaque plastic that surveillance cameras can see through:
An antidote to the American cycle of threat, fear, and overspending in response to terrorism is this article about Norway on the first anniversary of its terrorist massacre. Key excerpt here:
This is a really interesting research paper on implicit passwords: something your unconscious mind remembers but your conscious mind doesn't know.
The Slashdot post is a nice summary:
The system isn't very realistic -- people aren't going to spend 45 minutes learning their passwords and a few minutes authenticating themselves -- but I really like the direction this research is going.
Handcuffs pose a particular key management problem. Officers need to be able to unlock handcuffs locked by another officer, so they're all designed to be opened by a standard set of keys. This system only works if the bad guys can't get a copy of the key, and modern handcuff manufacturers go out of their way to make it hard for regular people to get copies of the key. At the recent HOPE conference, someone made copies of these keys using a 3D printer.
Interesting comment from my blog:
And a blog comment from the presenter:
Cybercriminals are using commercial spamflooding services to distract their victims during key moments of a cyberattack. Clever, but in retrospect kind of obvious.
I don't know the context, but these two comic strips sum up my latest book, Liars and Outliers, nicely.
There have been a few hoax bomb threats in Detroit recently (Windsor tunnel, US-Canada bridge, Tiger Stadium). The good news is that police learned; during the third one, they didn't close down the threatened location.
This TED talk trots out the usual fear-mongering that technology leads to terrorism. The facts are basically correct, but there are no counterbalancing facts, and the conclusions are all one-sided. I'm not impressed with the speaker's crowdsourcing solution, either. Sure, crowdsourcing is a great tool for a lot of problems, but it's not the single thing that's going to protect us from technological crimes. If I didn't know better, I would say it was a propaganda video.
Hacking tool disguised as a power strip.
We already know you can wear fake irises to fool a scanner into thinking you're not you, but this is the first fake iris you can use for impersonation: to fool a scanner into thinking you're someone else.
Stratfor has an interesting article on terrorism and soft targets.
The new thing about the Aurora shooting wasn't the weaponry but the armor.
Wired has an interesting and comprehensive profile on Eugene Kaspersky. Especially note Kaspersky Lab's work to uncover U.S. cyberespionage against Iran, Kaspersky's relationship with Russia's state security services, and the story of the kidnapping of Kaspersky's son, Ivan.
Kaspersky responded (not kindly) to the article, and the author responded to the response.
The attack only works sometimes, but it does allow access to millions of hotel rooms worldwide that are secured by Onity brand locks. Basically, you can read the unit's key out of the power port on the bottom of the lock, and then feed it back to the lock to authenticate an open command using the same power port.
In a long article about insecurities in gun safes, there's this great paragraph: "Unfortunately, manufacturers and consumers are deceived and misled into a false sense of security by electronic credentials, codes, and biometrics. We have seen this often, even with high security locks. Our rule: electrons do not open doors; mechanical components do. If you can compromise the mechanisms then all the credentials, encryption, fingerprint readers, and other gizmos and gimmicks mean nothing." In other words, security is only as strong as the weakest link.
DefCon 19 talk on the security of gun safes.
The Verified Voting Foundation has released a comprehensive state-by-state report on electronic voting: http://www.countingvotes.org/
Some things never change. Thirteen years ago, Mudge and I published a paper breaking Microsoft's PPTP protocol and the MS-CHAP authentication system. I haven't been paying attention, but I presume it's been fixed and improved over the years. Well, it's been broken again.
My old paper:
Interesting article on using risk-limiting audits to determine whether an election's results are likely to be valid. The risk, in this case, is the chance of a false negative: an incorrect outcome being deemed valid. The risk level determines the extent of the audit.
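The sequential logic behind a ballot-polling risk-limiting audit can be sketched in a few lines. This is a generic BRAVO-style illustration under my own assumptions, not the specific method from the article: ballots are sampled until a likelihood ratio comparing the reported outcome against a tie exceeds 1/risk_limit.

```python
def bravo_audit(reported_winner_share, sampled_ballots, risk_limit=0.1):
    """Sketch of a BRAVO-style ballot-polling risk-limiting audit.

    reported_winner_share: the winner's reported vote share p (must be > 0.5)
    sampled_ballots: booleans, True = sampled ballot shows a vote for the winner
    risk_limit: the audit's chance of wrongly confirming a bad outcome

    Returns (confirmed, ballots_examined).
    """
    p = reported_winner_share
    threshold = 1.0 / risk_limit   # stop once the evidence ratio exceeds this
    ratio = 1.0                    # likelihood ratio: reported result vs. a tie
    examined = 0
    for for_winner in sampled_ballots:
        examined += 1
        ratio *= (p / 0.5) if for_winner else ((1 - p) / 0.5)
        if ratio >= threshold:
            return True, examined  # outcome confirmed; audit can stop early
    return False, examined         # not confirmed; escalate toward a full recount
```

With a reported 60/40 race and a 10% risk limit, a run of 13 sampled ballots for the winner is enough to stop; a closer reported margin forces a much larger sample, which is exactly the risk-level-to-audit-extent relationship described above.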
Ohio State University Law Professor Peter Swire testifies before Congress on the inadequacy of industry self-regulation to protect privacy.
Chinese gang sells fake professional certifications: "...the reason it could charge up to 10,000 yuan (£1,000) per certificate, was that it could hack the relevant government site and tamper with the back-end database to ensure that the fake cert's name and registration number appeared legitimate." The gang made £30M before being arrested.
This is the latest in the arms race between spoofing GPS signals and detecting spoofed GPS signals.
Unfortunately, the countermeasures all seem to be patent pending.
Rolling Stone magazine writes about computer security.
It's a virus that plays AC/DC, so it makes sense. Surreal, though.
Original post on the F-Secure blog:
Iran denies the story:
Sure, stories like this one about an 11-year-old bypassing airport security are great fun, but I don't think it's much of a security concern. Terrorists can't build a plot around random occasional security failures.
Some termites blow themselves up to expel invaders from their nest.
The U.S. and China are talking about cyberweapons. Stuart Baker calls them "proxy talks" because they're not government to government, but it's a start.
This is kind of a rambling essay on the need to spend more on infrastructure, but I was struck by this paragraph: "Here's a news flash: There are some events that no society can afford to be prepared for to the extent that we have come to expect. Some quite natural events -- hurricanes, earthquakes, tsunamis, derechos -- have such unimaginable power that the destruction they wreak will always take days, or weeks, or months to fix. No society can afford to harden the infrastructure that supports it to make that infrastructure immune to such destructive forces." Add terrorism to that list and it sounds like something I would say. Sometimes it makes more sense to spend money on mitigation than it does to spend it on prevention.
In Liars and Outliers, I talk a lot about social norms and when people follow them. A group of researchers used survival data from shipwrecks to measure it. Basically, "women and children" first is a lie; it's more like "every man for himself."
Good post on failures in password security, not because it picks on Tesco but because it's filled with good advice on how not to do it wrong.
I'm late writing about this one. Cryptocat is a web-based encrypted chat application. After Wired published a pretty fluffy profile on the program and its author, security researcher Chris Soghoian wrote an essay criticizing the unskeptical coverage. Ryan Singel, the editor (not the writer) of the Wired piece, responded by defending the original article and attacking Soghoian.
At this point, I would have considered writing a long essay explaining what's wrong with the whole concept behind Cryptocat, and echoing my complaints about the dangers of uncritically accepting the security claims of people and companies that write security software, but Patrick Ball did a great job:
CryptoCat is one of a whole class of applications that rely on what's called "host-based security". The most famous tool in this group is Hushmail, an encrypted e-mail service that takes the same approach. Unfortunately, these tools are subject to a well-known attack. I'll detail it below, but the short version is that if you use one of these applications, your security depends entirely on the security of the host. This means that in practice, CryptoCat is no more secure than Yahoo chat, and Hushmail is no more secure than Gmail. More generally, your security in a host-based encryption system is no better than having no crypto at all.
Sometimes it's nice to come in late.
A hacker can social-engineer his way into your cloud storage and delete everything you have.
It turns out, a billing address and the last four digits of a credit card number are the only two pieces of information anyone needs to get into your iCloud account. Once supplied, Apple will issue a temporary password, and that password grants access to iCloud.
Apple tech support confirmed to me twice over the weekend that all you need to access someone's AppleID is the associated e-mail address, a credit card number, the billing address, and the last four digits of a credit card on file.
Here's how a hacker gets that information.
First you call Amazon and tell them you are the account holder, and want to add a credit card number to the account. All you need is the name on the account, an associated e-mail address, and the billing address. Amazon then allows you to input a new credit card. (Wired used a bogus credit card number from a website that generates fake card numbers that conform with the industry's published self-check algorithm.) Then you hang up.
Next you call back, and tell Amazon that you've lost access to your account. Upon providing a name, billing address, and the new credit card number you gave the company on the prior call, Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account. This allows you to see all the credit cards on file for the account -- not the complete numbers, just the last four digits. But, as we know, Apple only needs those last four digits. We asked Amazon to comment on its security policy, but it didn't have anything to share by press time.
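The "self-check algorithm" referred to above is presumably the Luhn checksum that payment card numbers satisfy. It's an error-detection code, not a security measure, which is why plausible-looking numbers are trivial to generate. A minimal sketch:

```python
def luhn_valid(card_number: str) -> bool:
    """Check a card number against the Luhn checksum.

    Walking right to left, double every second digit (subtracting 9
    from any result over 9) and require the total to be divisible by 10.
    """
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

Any digit string that passes this check looks superficially valid to a form that does no further verification, which is all the attack needed.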
And it's also worth noting that one wouldn't have to call Amazon to pull this off. Your pizza guy could do the same thing, for example. If you have an AppleID, every time you call Pizza Hut, you're giving the 16-year-old on the other end of the line all he needs to take over your entire digital life.
The victim here is a popular technology journalist, so he got a level of tech support that's not available to most of us. I believe this will increasingly become a problem, and that cloud providers will need better and more automated solutions.
Update: Apple has changed its policy and stopped taking phone-based password reset requests, pretty much as a result of this incident, and has beefed up security:
I will be speaking (four times) and doing a book signing at the DragonCon science fiction convention in Atlanta, GA, on August 31 to September 2.
News articles about me:
Video interviews with me:
Excellent blog post by Valerie Aurora about sexual harassment at the DefCon hackers conference. Aside from the fact that this is utterly reprehensible behavior by the perpetrators involved, this is a real problem for our community.
The response of "this is just what hacker culture is, and changing it will destroy hackerdom" is just plain wrong. When swaths of the population don't attend DefCon because they're not comfortable there or fear being assaulted, we all suffer. A lot.
Finally, everyone at DEFCON benefits from more women attending. Women "hackers" -- in the creative technologist sense -- are everywhere, and many of them are brilliant, interesting, and just plain good company (think Limor Fried, Jeri Ellsworth, and Angela Byron). Companies recruiting for talent get access to the full range of qualified applicants, not just the ones who can put up with a programmer atmosphere. We get more and better talks on a wider range of subjects. Conversations are more fun. Conferences and everyone at them loses when amazing women don't attend.
When you say, "Women shouldn't go to DEFCON if they don't like it," you are saying that women shouldn't have all of the opportunities that come with attending DEFCON: jobs, education, networking, book contracts, speaking opportunities -- or else should be willing to undergo sexual harassment and assault to get access to them. Is that really what you believe?
And in case you're thinking this is just a bunch of awkward geeks trying to flirt, here are one person's (KC's) DefCon stories:
Like the man who drunkenly tried to lick my shoulder tattoo. Like the man who grabbed my hips while I was waiting for a drink at the EFF party. Like the man who tried to get me to show him my tits so he could punch a hole in a card that, when filled, would net him a favor from one of the official security staff (I do not have words for how slimy it is that the official security staff were in charge of what was essentially a competition to get women to show their boobs). Or lastly, the man who, without prompting, interrupted my conversation and asked me if I'd like to come back to his room for a "private pillowfight party." "You know," he said. "Just a bunch of girls having a pillowfight....fun!" When I asked him how many men would be standing around in a circle recording this event, he quickly assured me that "no one would be taking video! I swear!"
Aurora writes that DefCon is no different from other hacker cons. But some people I talked with at DefCon this year said the contrary: that DefCon is worse than other hacker cons. We speculated about possible reasons: it's so large (13,000 people were at DefCon 20), it's in Las Vegas (with all the sexual context that implies), and it's nobody's home turf. I don't know. Certainly the problem is rampant in geek culture.
Aurora also mentions the "Red/Yellow Card project" by KC, another hacker: warning cards that can be handed out in response to harassing behavior. The cards are great, and a very hackerish sort of solution to the problem. She gave me a complete set -- there's also a green card for good behavior -- and I have been showing them to people since I returned. I haven't heard any stories about them being given out to harassers, but I suspect they would be more effective if they were given out by observers rather than by the harassed. (Bystanders play a large role in normalizing harassing behavior, and similarly play a large role in preventing it.)
Of course, the countermove by harassers would be to collect the cards as kind of a game. Yes, that would reduce the sting of the cards. No, that doesn't make them a bad idea. Still, a better idea is a strong anti-harassment policy from the cons themselves.
Model con policy:
Police in Hastings, in the UK, outfitted mobile phones with tracking devices and left them in bars and restaurants, hoping to catch mobile phone thieves in the act. But no one stole them: "Nine premises were visited in total and officers were delighted that not one of the bait phones was 'stolen'. In fact, on nearly every occasion good hearted members of the public handed them to bar or security staff."
I'm not sure about the headline: "Operation Mobli deters mobile phone thieves in Hastings."
There are two things going on here. One, people are generally nice and will return property to its rightful owner. Two, it's hard for the average person to profit from a stolen cell phone. He already has a cell phone that's assigned to his phone number. He doesn't really know if he can sell a random phone, especially one assigned to the number of someone who had her phone stolen. Yes, professional phone thieves know what to do, but what are the odds that one of those is dining out in Hastings on a particular night?
I don't know if this is real or fantasy: "Within the next year or two, the U.S. Department of Homeland Security will instantly know *everything* about your body, clothes, and luggage with a new laser-based molecular scanner fired from 164 feet (50 meters) away. From traces of drugs or gun powder on your clothes to what you had for breakfast to the adrenaline level in your body -- agents will be able to get any information they want without even touching you."
The meta-point is less about this particular technology, and more about the arc of technological advancements in general. All sorts of remote surveillance technologies -- facial recognition, remote fingerprint recognition, RFID/Bluetooth/cell phone tracking, license plate tracking -- are becoming possible, cheaper, smaller, more reliable, etc. It's wholesale surveillance, something I wrote about back in 2004.
We're at a unique time in the history of surveillance: the cameras are everywhere, and we can still see them. Fifteen years ago, they weren't everywhere. Fifteen years from now, they'll be so small we won't be able to see them. Similarly, all the debates we've had about national ID cards will become moot as soon as these surveillance technologies are able to recognize us without us even knowing it.
Remote fingerprint recognition:
Me on wholesale surveillance:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Liars and Outliers," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2012 by Bruce Schneier.