Blog: August 2012 Archives
Shelly C. McArdle, Heather Rosoff, Richard S. John (2012), "The Dynamics of Evolving Beliefs, Concerns, Emotions, and Behavioral Avoidance Following 9/11: A Longitudinal Analysis of Representative Archival Samples," Risk Analysis, v. 32, pp. 744–761.
Abstract: September 11 created a natural experiment that enables us to track the psychological effects of a large-scale terror event over time. The archival data came from 8,070 participants of 10 ABC and CBS News polls collected from September 2001 until September 2006. Six questions investigated emotional, behavioral, and cognitive responses to the events of September 11 over a five-year period. We found that heightened responses after September 11 dissipated and reached a plateau at various points in time over a five-year period. We also found that emotional, cognitive, and behavioral reactions were moderated by age, sex, political affiliation, and proximity to the attack. Both emotional and behavioral responses returned to a normal state after one year, whereas cognitively-based perceptions of risk were still diminishing as late as September 2006. These results provide insight into how individuals will perceive and respond to future similar attacks.
A reader sent me this photo of a shared lock. It's at the gate of a large ranch outside of Victoria, Texas. Multiple padlocks secure the device, but when a single padlock is removed, the center pin can be fully lifted and the gate can be opened. The point is to allow multiple entities (oil and gas, hunting parties, ranch supervisors, etc.) access without the issues of key distribution that would arise if it were just a single lock. On the other hand, the gate is only as secure as the weakest padlock.
EDITED TO ADD (9/14): A less elegant way to do the same thing.
A slightly different implementation of same idea: removal of any one lock allows locking bar to retract from pole and gate to open. And an interesting comment from someone who deals with this in his work.
In May, neuroscientist and popular author Sam Harris and I debated the issue of profiling Muslims at airport security. We each wrote essays, then went back and forth on the issue. I don't recommend reading the entire discussion; we spent 14,000 words talking past each other. But what's interesting is how our debate illustrates the differences between a security engineer and an intelligent layman. Harris was uninterested in the detailed analysis required to understand a security system and unwilling to accept that security engineering is a specialized discipline with a body of knowledge and relevant expertise. He trusted his intuition.
Many people have researched how intuition fails us in security: Paul Slovic and Bill Burns on risk perception, Daniel Kahneman on cognitive biases in general, Rick Wash on folk computer-security models. I've written about the psychology of security, and Daniel Gardner has written more. Basically, our intuitions are based on things like antiquated fight-or-flight models, and these increasingly fail in our technological world.
This problem isn't unique to computer security, or even security in general. But this misperception about security matters now more than it ever has. We're no longer asking people to make security choices only for themselves and their businesses; we need them to make security choices as a matter of public policy. And getting it wrong has increasingly bad consequences.
Computers and the Internet have collided with public policy. The entertainment industry wants to enforce copyright. Internet companies want to continue freely spying on users. Law enforcement wants its own laws imposed on the Internet: laws that make surveillance easier, prohibit anonymity, mandate the removal of objectionable images and texts, and require ISPs to retain data about their customers' Internet activities. Militaries want laws regarding cyber weapons, laws enabling wholesale surveillance, and laws mandating an Internet kill switch. "Security" is now a catch-all excuse for all sorts of authoritarianism, as well as for boondoggles and corporate profiteering.
Cory Doctorow recently spoke about the coming war on general-purpose computing. I talked about it in terms of the entertainment industry and Jonathan Zittrain discussed it more generally, but Doctorow sees it as a much broader issue. Preventing people from copying digital files is only the first skirmish; just wait until the DEA wants to prevent chemical printers from making certain drugs, or the FBI wants to prevent 3D printers from making guns.
I'm not here to debate the merits of any of these policies, but instead to point out that people will debate them. Elected officials will be expected to understand security implications, both good and bad, and will make laws based on that understanding. And if they aren't able to understand security engineering, or even accept that there is such a thing, the result will be ineffective and harmful policies.
So what do we do? We need to establish security engineering as a valid profession in the minds of the public and policy makers. This is less about certifications and (heaven forbid) licensing, and more about perception -- and cultivating a security mindset. Amateurs produce amateur security, which costs more in dollars, time, liberty, and dignity while giving us less -- or even no -- security. We need everyone to know that.
We also need to engage with real-world security problems, and apply our expertise to the variety of technical and socio-technical systems that affect broader society. Everything involves computers, and almost everything involves the Internet. More and more, computer security is security.
Finally, and perhaps most importantly, we need to learn how to talk about security engineering to a non-technical audience. We need to convince policy makers to follow a logical approach instead of an emotional one -- an approach that includes threat modeling, failure analysis, searching for unintended consequences, and everything else in an engineer's approach to design. Powerful lobbying forces are attempting to force security policies on society, largely for non-security reasons, and sometimes in secret. We need to stand up for security.
A shorter version of this essay appeared in the September/October 2012 issue of IEEE Security & Privacy.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
A surprisingly sensible list.
E. Why are you penalizing the 95% for the 5%? You don't do this in other areas of discipline at school. Even though you know some students will use their voices or bodies inappropriately in school, you don't ban everyone from speaking or moving. You know some students may show up drunk to the prom, yet you don't cancel the prom because of a few rule breakers. Instead, you assume that most students will act appropriately most of the time and then you enforce reasonable expectations and policies for the occasional few that don't. To use a historical analogy, it's the difference between DUI-style policies and flat-out Prohibition (which, if you recall, failed miserably). Just as you don't put entire schools on lockdown every time there's a fight in the cafeteria, you need to stop penalizing entire student bodies because of statistically-infrequent, worst-case scenarios.
G. The 'online predators will prey on your schoolchildren' argument is a false bogeyman, a scare tactic that is fed to us by the media, politicians, law enforcement, and computer security vendors. The number of reported incidents in the news of this occurring is zero.
H. Federal laws do not require your draconian filtering. You can't point the finger somewhere else. You have to own it yourself.
I. Students and teachers rise to the level of the expectations that you have for them. If you expect the worst, that's what you'll get.
J. Schools that 'loosen up' with students and teachers find that they have no more problems than they did before. And, often, they have fewer problems because folks aren't trying to get around the restrictions.
K. There's a difference between a teachable moment and a punishable moment. Lean toward the former as much as possible.
O. Schools with mindsets of enabling powerful student learning usually block much less than those that don't. Their first reaction is 'how can we make this work?' rather than 'we need to keep this out.'
The screaming fear in your stomach before you give a speech to 12 kids in the fifth grade is precisely the same fear a presidential candidate feels before the final debate. The fight-or-flight reflex that speeds up your heart when you're about to get a speeding ticket you don't deserve isn't very different than the chemical reaction in the brain of an accused (but innocent) murder suspect when the jury walks in.
Bigger stakes can't lead to more fear.
And, in an interesting glitch, more fear often tricks us into thinking we're dealing with bigger stakes.
Finally, someone takes a look at the $1 trillion number government officials are quoting as the cost of cybercrime. While it's a good figure to scare people, it doesn't have much of a basis in reality.
EDITED TO ADD (9/14): Older research debunking cybercrime surveys.
1. Probability neglect – people sometimes don’t consider the probability of the occurrence of an outcome, but focus on the consequences only.
2. Consequence neglect – just like probability neglect, sometimes individuals neglect the magnitude of outcomes.
3. Statistical neglect – instead of subjectively assessing small probabilities and continuously updating them, people choose to use rules-of-thumb (if any heuristics), which can introduce systematic biases in their decisions.
4. Solution neglect – choosing an optimal solution is not possible when one fails to consider all of the solutions.
5. External risk neglect – in making decisions, individuals or groups often consider the cost/benefits of decisions only for themselves, without including externalities, sometimes leading to significant negative outcomes for others.
Gallup has the results:
Despite recent negative press, a majority of Americans, 54%, think the U.S. Transportation Security Administration is doing either an excellent or a good job of handling security screening at airports. At the same time, 41% think TSA screening procedures are extremely or very effective at preventing acts of terrorism on U.S. airplanes, with most of the rest saying they are somewhat effective.
My first reaction was that people who don't fly -- and don't interact with the TSA -- are more likely to believe it is doing a good job. That's not true.
Just over half of Americans report having flown at least once in the past year. These fliers have a slightly better opinion of the job TSA is doing than those who haven't flown. Fifty-seven percent of those who have flown at least once and 57% of the smaller group who have flown at least three times have an excellent or good opinion of the TSA's job performance. That compares with 52% of those who have not flown in the past year.
There is little difference in opinions about the effectiveness of TSA's screening procedures by flying status; between 40% and 42% of non-fliers, as well as of those who have flown at least once and those who have flown at least three times, believe the procedures are at least very effective.
Younger Americans have significantly more positive opinions of the TSA than those who are older. These differences may partly reflect substantial differences in flying frequency, with 60% of 18- to 29-year-olds reporting having flown within the last year, compared with 33% of those 65 years and older.
Anyone want to try to explain these numbers?
Simson Garfinkel writes that the iPhone has such good security that the police can't use it for forensics anymore:
Technologies the company has adopted protect Apple customers' content so well that in many situations it's impossible for law enforcement to perform forensic examinations of devices seized from criminals. Most significant is the increasing use of encryption, which is beginning to cause problems for law enforcement agencies when they encounter systems with encrypted drives.
"I can tell you from the Department of Justice perspective, if that drive is encrypted, you're done," Ovie Carroll, director of the cyber-crime lab at the Computer Crime and Intellectual Property Section in the Department of Justice, said during his keynote address at the DFRWS computer forensics conference in Washington, D.C., last Monday. "When conducting criminal investigations, if you pull the power on a drive that is whole-disk encrypted you have lost any chance of recovering that data."
Yes, I believe that full-disk encryption -- whether Apple's FileVault or Microsoft's BitLocker (I don't know what the iOS system is called) -- is good; but its security is only as good as the user is at choosing a good password.
The iPhone always supported a PIN lock, but the PIN wasn't a deterrent to a serious attacker until the iPhone 3GS. Because those early phones didn't use their hardware to perform encryption, a skilled investigator could hack into the phone, dump its flash memory, and directly access the phone's address book, e-mail messages, and other information. But now, with Apple's more sophisticated approach to encryption, investigators who want to examine data on a phone have to try every possible PIN. Examiners perform these so-called brute-force attacks with special software, because the iPhone can be programmed to wipe itself if the wrong PIN is provided more than 10 times in a row. This software must be run on the iPhone itself, limiting the guessing speed to 80 milliseconds per PIN. Trying all four-digit PINs therefore requires no more than 800 seconds, a little more than 13 minutes. However, if the user chooses a six-digit PIN, the maximum time required would be 22 hours; a nine-digit PIN would require 2.5 years, and a 10-digit pin would take 25 years. That's good enough for most corporate secrets—and probably good enough for most criminals as well.
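The brute-force times quoted above are easy to reproduce; a quick back-of-the-envelope sketch, assuming the article's stated rate of one guess per 80 milliseconds:

```python
# Worst-case time to try every numeric PIN of a given length,
# at the on-device rate of 80 ms per guess quoted in the article.
SECONDS_PER_GUESS = 0.080

def worst_case(digits):
    """Seconds to exhaust all PINs of the given length."""
    return (10 ** digits) * SECONDS_PER_GUESS

for d in (4, 6, 9, 10):
    s = worst_case(d)
    print(f"{d}-digit PIN: {s:,.0f} s, about {s / (86400 * 365):.2f} years")
```

A four-digit PIN works out to 800 seconds and a ten-digit PIN to roughly 25 years, matching the figures above.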
Leaving aside the user practice questions -- my guess is that very few users, even those with something to hide, use a ten-digit PIN -- could this possibly be true? In the introduction to Applied Cryptography, almost 20 years ago, I wrote: "There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files."
Since then, I've learned two things: 1) there are a lot of gradients to kid sister cryptography, and 2) major government cryptography is very hard to get right. It's not the cryptography; it's everything around the cryptography. I said as much in the preface to Secrets and Lies in 2000:
Cryptography is a branch of mathematics. And like all mathematics, it involves numbers, equations, and logic. Security, palpable security that you or I might find useful in our lives, involves people: things people know, relationships between people, people and how they relate to machines. Digital security involves computers: complex, unstable, buggy computers.
Mathematics is perfect; reality is subjective. Mathematics is defined; computers are ornery. Mathematics is logical; people are erratic, capricious, and barely comprehensible.
If, in fact, we've finally achieved something resembling this level of security for our computers and handheld computing devices, this is something to celebrate.
But I'm skeptical.
Slashdot has a thread on the article.
This is an extraordinary (and gut-wrenching) first-person account of what it's like to staff an Israeli security checkpoint. It shows how power corrupts: how it's impossible to make humane decisions in such a circumstance.
Japanese researchers are attempting to film the elusive giant squid.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
This is pretty funny:
- Moving red laser beams scare away potential intruders
- Laser beams move along floor and wall 180 degrees
- Easy to install, 110v comes on automatically w/timer
Watch the video. This is not an alarm, and it doesn't do anything other than the laser light show. But, as the product advertisement says, "perception can be an excellent deterrent to crime." Although this only works if the product isn't very successful -- or widely known.
You must remember, though you will not understand, that all laws weaken in a small and hidden community where there is no public opinion. When a man is absolutely alone in a Station he runs a certain risk of falling into evil ways. This risk is multiplied by every addition to the population up to twelve -- the Jury number. After that, fear and consequent restraint begin, and human action becomes less grotesquely jerky.
Interesting commentary on how reputational pressure scales. If I had found this quote last year, I would have included it in my book.
This is an analysis of Apple's disk encryption program, FileVault 2, that first appeared in the Lion operating system. Short summary: they couldn't break it. (Presumably, the version in Mountain Lion isn't any different.)
Excellent blog post by Valerie Aurora about sexual harassment at the DefCon hackers conference. Aside from the fact that this is utterly reprehensible behavior by the perpetrators involved, this is a real problem for our community.
The response of "this is just what hacker culture is, and changing it will destroy hackerdom" is just plain wrong. When swaths of the population don't attend DefCon because they're not comfortable there or fear being assaulted, we all suffer. A lot.
Finally, everyone at DEFCON benefits from more women attending. Women "hackers" -- in the creative technologist sense -- are everywhere, and many of them are brilliant, interesting, and just plain good company (think Limor Fried, Jeri Ellsworth, and Angela Byron). Companies recruiting for talent get access to the full range of qualified applicants, not just the ones who can put up with a brogrammer atmosphere. We get more and better talks on a wider range of subjects. Conversations are more fun. Conferences and everyone at them loses when amazing women don't attend.
When you say, "Women shouldn't go to DEFCON if they don't like it," you are saying that women shouldn't have all of the opportunities that come with attending DEFCON: jobs, education, networking, book contracts, speaking opportunities -- or else should be willing to undergo sexual harassment and assault to get access to them. Is that really what you believe?
And in case you're thinking this is just a bunch of awkward geeks trying to flirt, here are one person's DefCon stories:
Like the man who drunkenly tried to lick my shoulder tattoo. Like the man who grabbed my hips while I was waiting for a drink at the EFF party. Like the man who tried to get me to show him my tits so he could punch a hole in a card that, when filled, would net him a favor from one of the official security staff (I do not have words for how slimy it is that the official security staff were in charge of what was essentially a competition to get women to show their boobs). Or lastly, the man who, without prompting, interrupted my conversation and asked me if I'd like to come back to his room for a "private pillowfight party." "You know," he said. "Just a bunch of girls having a pillowfight....fun!" When I asked him how many men would be standing around in a circle recording this event, he quickly assured me that "no one would be taking video! I swear!"
Aurora writes that DefCon is no different from other hacker cons. I had some conversations with people at DefCon this year to the contrary, saying that DefCon is worse than other hacker cons. We speculated about possible reasons: it's so large (13,000 people were at DefCon 20), it's in Las Vegas (with all the sexual context that implies), and it's nobody's home turf. I don't know. Certainly the problem is rampant in geek culture.
Aurora also mentions the "Red/Yellow Card project" by KC, another hacker: warning cards that can be handed out in response to harassing behavior. The cards are great, and a very hackerish sort of solution to the problem. She gave me a complete set -- there's also a green card for good behavior -- and I have been showing them to people since I returned. I haven't heard any stories about them being given out to harassers, but I suspect they would be more effective if they were given out by observers rather than by the harassed. (Bystanders play a large role in normalizing harassing behavior, and similarly play a large role in preventing it.)
Of course, the countermove by harassers would be to collect the cards as kind of a game. Yes, that would reduce the sting of the cards. No, that doesn't make them a bad idea. Still, a better idea is a strong anti-harassment policy from the cons themselves. Here's a good model.
Liars and Outliers has been out since late February, and while it's selling great, I'd like it to sell better. So I have a special offer for my regular readers. People in the U.S. can buy a signed copy of the book for $11, Media Mail postage included. (Yes, I'm selling the book at a loss.) People in other countries can buy it for $26, postage included. This is significantly cheaper than Amazon's discount price.
My only request is that, after you read the book, you post a review about it somewhere. On your blog, on Amazon, on -- I suppose -- Twitter. Just let people know about it.
Order yours here. This price is only good for the first 100 people who respond, so please act quickly.
EDITED TO ADD (8/15): First 300 people; the response has been so overwhelming.
EDITED TO ADD (8/17): This offer has expired.
In Liars and Outliers, I talk a lot about social norms and when people follow them. This research uses survival data from shipwrecks to measure it.
The authors argue that shipwrecks can actually tell us a fair bit about human behavior, since everyone stuck on a sinking ship has to do a bit of cost-benefit analysis. People will weigh their options -- which will generally involve helping others at great risk to themselves -- amidst a backdrop of social norms and, at least in case of the Titanic, direct orders from authority figures. "This cost-benefit logic is fundamental in economic models of human behavior," the authors write, suggesting that a shipwreck could provide a real-world test of ideas derived from controlled experiments.
Eight ideas, to be precise. That's how many hypotheses the authors lay out, ranging from "women have a survival advantage in shipwrecks" to "women are more likely to survive on British ships, given the UK's strong sense of gentility." They tested them using a database of ship sinkings that encompasses over 15,000 passengers and crew, and provides information on everything from age and sex to whether the passenger had a first-class ticket.
For the most part, the lessons provided by the Titanic simply don't hold. Excluding the two disasters mentioned above, crew members had a survival rate of over 60 percent, far higher than any other group analyzed. (Although they didn't consistently survive well -- in about half the wrecks, there was no statistical difference between crew and passengers). Rather than going down with the ship, captains ended up coming in second, with just under half surviving. The authors offer a number of plausible reasons for crew survival, including better fitness, a thorough knowledge of the ship that's sinking, and better training for how to handle emergencies. In any case, however, they're not clearly or consistently sacrificing themselves to save their passengers.
At the other end of the spectrum, nearly half the children on the Titanic survived, but figures for the rest of the shipwrecks were down near 15 percent. About a quarter of women survived other sinkings, but roughly three times that made it through the Titanic alive. If you exclude the Titanic, female survival was 18 percent, or about half the rate at which males came through alive.
What about social factors? Having the captain order "women and children first" did boost female survival, but only by about 10 percentage points. Most of the other ideas didn't pan out. For example, the speed of sinking, which might give the crew more time to get vulnerable passengers off first, made no difference whatsoever to female survival. Neither did the length of voyage, which might give passengers more time to get to know both the boat and each other. The fraction of passengers that were female didn't seem to make a difference either.
One social factor that did play a role was price of ticket: "there is a class gradient in survival benefitting first class passengers." Another is being on a British ship, where (except with the Titanic) women actually had lower rates of survival.
Paper here (behind a paywall):
Abstract: Since the sinking of the Titanic, there has been a widespread belief that the social norm of “women and children first” (WCF) gives women a survival advantage over men in maritime disasters, and that captains and crew members give priority to passengers. We analyze a database of 18 maritime disasters spanning three centuries, covering the fate of over 15,000 individuals of more than 30 nationalities. Our results provide a unique picture of maritime disasters. Women have a distinct survival disadvantage compared with men. Captains and crew survive at a significantly higher rate than passengers. We also find that: the captain has the power to enforce normative behavior; there seems to be no association between duration of a disaster and the impact of social norms; women fare no better when they constitute a small share of the ship’s complement; the length of the voyage before the disaster appears to have no impact on women’s relative survival rate; the sex gap in survival rates has declined since World War I; and women have a larger disadvantage in British shipwrecks. Taken together, our findings show that human behavior in life-and-death situations is best captured by the expression "every man for himself."
I'm late writing about this one. Cryptocat is a web-based encrypted chat application. After Wired published a pretty fluffy profile on the program and its author, security researcher Chris Soghoian wrote an essay criticizing the unskeptical coverage. Ryan Singel, the editor (not the writer) of the Wired piece, responded by defending the original article and attacking Soghoian.
At this point, I would have considered writing a long essay explaining what's wrong with the whole concept behind Cryptocat, and echoing my complaints about the dangers of uncritically accepting the security claims of people and companies that write security software, but Patrick Ball did a great job:
CryptoCat is one of a whole class of applications that rely on what's called "host-based security". The most famous tool in this group is Hushmail, an encrypted e-mail service that takes the same approach. Unfortunately, these tools are subject to a well-known attack. I'll detail it below, but the short version is if you use one of these applications, your security depends entirely on the security of the host. This means that in practice, CryptoCat is no more secure than Yahoo chat, and Hushmail is no more secure than Gmail. More generally, your security in a host-based encryption system is no better than having no crypto at all.
Sometimes it's nice to come in late.
EDITED TO ADD (8/14): As a result of this, CryptoCat is moving to a browser plug-in model.
This is kind of a rambling essay on the need to spend more on infrastructure, but I was struck by this paragraph:
Here's a news flash: There are some events that no society can afford to be prepared for to the extent that we have come to expect. Some quite natural events -- hurricanes, earthquakes, tsunamis, derechos -- have such unimaginable power that the destruction they wreak will always take days, or weeks, or months to fix. No society can afford to harden the infrastructure that supports it to make that infrastructure immune to such destructive forces.
Add terrorism to that list and it sounds like something I would say. Sometimes it makes more sense to spend money on mitigation than it does to spend it on prevention.
They were able to hack into government websites:
The gang’s USP, and the reason it could charge up to 10,000 yuan (£1,000) per certificate, was that it could hack the relevant government site and tamper with the back-end database to ensure that the fake cert’s name and registration number appeared legitimate.
The gang made £30M before being arrested.
A hacker can social-engineer his way into your cloud storage and delete everything you have.
It turns out, a billing address and the last four digits of a credit card number are the only two pieces of information anyone needs to get into your iCloud account. Once supplied, Apple will issue a temporary password, and that password grants access to iCloud.
Apple tech support confirmed to me twice over the weekend that all you need to access someone's AppleID is the associated e-mail address, a credit card number, the billing address, and the last four digits of a credit card on file.
Here's how a hacker gets that information.
First you call Amazon and tell them you are the account holder, and want to add a credit card number to the account. All you need is the name on the account, an associated e-mail address, and the billing address. Amazon then allows you to input a new credit card. (Wired used a bogus credit card number from a website that generates fake card numbers that conform with the industry's published self-check algorithm.) Then you hang up.
Next you call back, and tell Amazon that you've lost access to your account. Upon providing a name, billing address, and the new credit card number you gave the company on the prior call, Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account. This allows you to see all the credit cards on file for the account -- not the complete numbers, just the last four digits. But, as we know, Apple only needs those last four digits. We asked Amazon to comment on its security policy, but didn't have anything to share by press time.
And it's also worth noting that one wouldn't have to call Amazon to pull this off. Your pizza guy could do the same thing, for example. If you have an AppleID, every time you call Pizza Hut, you're giving the 16-year-old on the other end of the line all he needs to take over your entire digital life.
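The "self-check algorithm" Wired mentions is presumably the Luhn checksum, which was designed to catch transcription errors, not to prove a card is real. A minimal sketch:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, reduce
    two-digit results by subtracting 9, and require a sum divisible by 10."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # canonical Luhn test number -> True
```

This is why a randomly generated but Luhn-consistent number sails through a client-side check: the checksum verifies arithmetic, not existence.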
The victim here is a popular technology journalist, so he got a level of tech support that's not available to most of us. I believe this will increasingly become a problem, and that cloud providers will need better and more automated solutions.
EDITED TO ADD (8/13): Apple has changed its policy and stopped taking password reset requests over the phone, pretty much as a result of this incident.
EDITED TO ADD (8/17): A follow on story about how he recovered all of his data.
Ohio State University Law Professor Peter Swire testifies before Congress on the inadequacy of industry self-regulation to protect privacy.
Interesting article on using risk-limiting auditing to determine whether an election's results are likely to be valid. The risk, in this case, is the chance of a false negative: the election being deemed valid when it is not. The risk level determines the extent of the audit.
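At its core, a ballot-polling risk-limiting audit is a sequential likelihood-ratio test: keep sampling ballots until the evidence for the reported outcome crosses a threshold set by the risk limit. A toy BRAVO-style sketch for a two-candidate race (the function, sample, and margin here are illustrative, not from the article):

```python
def ballot_polling_audit(sample, reported_winner_share, risk_limit=0.05):
    """Sequentially sample ballots ('W' = reported winner, 'L' = loser) and
    update a likelihood ratio T. Confirm the outcome once T >= 1/risk_limit;
    otherwise the audit is inconclusive and would escalate."""
    T = 1.0
    for n, ballot in enumerate(sample, start=1):
        if ballot == "W":
            T *= reported_winner_share / 0.5
        else:
            T *= (1 - reported_winner_share) / 0.5
        if T >= 1 / risk_limit:
            return True, n           # outcome confirmed at this risk limit
    return False, len(sample)        # not confirmed; escalate the audit

# A fixed sample that is 60% winner votes confirms quickly at a 5% risk limit.
confirmed, n = ballot_polling_audit(["W", "W", "W", "L", "W"] * 8, 0.6)
print(confirmed, n)
```

The attraction is that a comfortable margin needs only a small hand count, while a close race forces a larger one, with the false-negative risk bounded throughout.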
Some things never change. Thirteen years ago, Mudge and I published a paper breaking Microsoft's PPTP protocol and the MS-CHAP authentication system. I haven't been paying attention, but I presume it's been fixed and improved over the years. Well, it's been broken again.
ChapCrack can take captured network traffic that contains a MS-CHAPv2 network handshake (PPTP VPN or WPA2 Enterprise handshake) and reduce the handshake's security to a single DES (Data Encryption Standard) key.
This DES key can then be submitted to CloudCracker.com -- a commercial online password cracking service that runs on a special FPGA cracking box developed by David Hulton of Pico Computing -- where it will be cracked in under a day.
The CloudCracker output can then be used with ChapCrack to decrypt an entire session captured with WireShark or other similar network sniffing tools.
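The reason the handshake collapses to a single DES search is structural: MS-CHAPv2 pads the 16-byte NT hash to 21 bytes and splits it into three 7-byte DES keys, each of which encrypts the same 8-byte challenge. The third key contains only two unknown bytes, and the first two can be tested in a single pass over the keyspace. A back-of-the-envelope sketch (the trial rate is an assumed figure for illustration, not a CloudCracker specification):

```python
# Why MS-CHAPv2 collapses to one DES search: the 16-byte NT hash is
# padded to 21 bytes and split into three 7-byte DES keys, all of
# which encrypt the same 8-byte challenge. The third key has only
# 2 unknown bytes, and one keyspace sweep tests the first two at once.

naive_work = 3 * 2**56        # searching each DES key independently
actual_work = 2**56 + 2**16   # one shared sweep plus the trivial third key

# Hypothetical rate of 10^12 DES trials/second for an FPGA cluster
# (an assumed figure for illustration only).
rate = 1e12
hours = actual_work / rate / 3600
print(f"effective keyspace ~2^{actual_work.bit_length() - 1}, "
      f"about {hours:.0f} hours at the assumed rate")
```

At that assumed rate the search finishes in under a day, which matches the claim above; a slower box just stretches the timeline, it doesn't change the 2^56 bound.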
It seems that quantum computers might use superconducting quantum interference devices (SQUIDs).
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
In a long article about insecurities in gun safes, there's this great paragraph:
Unfortunately, manufacturers and consumers are deceived and misled into a false sense of security by electronic credentials, codes, and biometrics. We have seen this often, even with high security locks. Our rule: electrons do not open doors; mechanical components do. If you can compromise the mechanisms then all the credentials, encryption, fingerprint readers, and other gizmos and gimmicks mean nothing.
In other words, security is only as strong as the weakest link.
EDITED TO ADD (8/13): DefCon 19 talk on the security of gun safes.
Horrific events, such as the massacre in Aurora, can be catalysts for social and political change. Sometimes it seems that they're the only catalyst; recall how drastically our policies toward terrorism changed after 9/11 despite how moribund they were before.
The problem is that fear can cloud our reasoning, causing us to overreact and to focus too narrowly on the specifics. The key is to steer our desire for change wisely during that time of fear.
Our brains aren't very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.
There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.
If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.
Novelty plus dread plus a good story equals overreaction.
And who are the major storytellers these days? Television and the Internet. So when news programs and sites endlessly repeat the story from Aurora, with interviews with those in the theater, interviews with the families, and commentary by anyone who has a point to make, we start to think this is something to fear, rather than a rare event that almost never happens and isn't worth worrying about. In other words, reading five stories about the same event feels somewhat like five separate events, and that skews our perceptions.
We see the effects of this all the time.
We fear being murdered, kidnapped, raped, and assaulted by strangers, when it's far more likely that the perpetrator of such offenses will be a relative or a friend. We worry about airplane crashes and rampaging shooters instead of automobile crashes and domestic violence -- both of which are far more common and far, far more deadly.
Our greatest recent overreaction to a rare event was our response to the terrorist attacks of 9/11. I remember then-Attorney General John Ashcroft giving a speech in Minnesota -- where I live -- in 2003 in which he claimed that the fact there were no new terrorist attacks since 9/11 was proof that his policies were working. I remember thinking: "There were no terrorist attacks in the two years preceding 9/11, and you didn't have any policies. What does that prove?"
What it proves is that terrorist attacks are very rare, and perhaps our national response wasn't worth the enormous expense, loss of liberty, attacks on our Constitution and damage to our credibility on the world stage. Still, overreacting was the natural thing for us to do. Yes, it was security theater and not real security, but it made many of us feel safer.
The rarity of events such as the Aurora massacre doesn't mean we should ignore any lessons it might teach us. Because people overreact to rare events, they're useful catalysts for social introspection and policy change. The key here is to focus not on the details of the particular event but on the broader issues common to all similar events.
Installing metal detectors at movie theaters doesn't make sense -- there's no reason to think the next crazy gunman will choose a movie theater as his venue, and how effectively would a metal detector deter a lone gunman anyway? -- but understanding the reasons why the United States has so many gun deaths compared with other countries does. The particular motivations of alleged killer James Holmes aren't relevant -- the next gunman will have different motivations -- but the general state of mental health care in the United States is.
Even with this, the most important lesson of the Aurora massacre is how rare these events actually are. Our brains are primed to believe that movie theaters are more dangerous than they used to be, but they're not. The riskiest part of the evening is still the car ride to and from the movie theater, and even that's very safe.
But wear a seat belt all the same.
EDITED TO ADD: I almost added that Holmes wouldn't have been stopped by a metal detector. He walked into the theater unarmed and left through a back door, which he propped open so he could return armed. And while there was talk about installing metal detectors in movie theaters, I have not heard of any theater actually doing so. But AMC movie theaters have announced a "no masks or costumes policy" as a security measure.
A year ago, EPIC sued the TSA over full body scanners (I was one of the plaintiffs), demanding that they follow their own rules and ask for public comment. The court agreed, and ordered the TSA to do that. In response, the TSA has done nothing. Now, a year later, the court has again ordered the TSA to answer EPIC's position.
This is an excellent time to add your name to the petition asking the TSA to do what they're supposed to do, and what the court ordered them to do: take public comments on full body scanners. The petition has almost 17,000 signatures. If we get 25,000 by August 9th, the government will respond. I doubt they'll capitulate, but it will be a press event that will put even more pressure on the TSA. So please sign the petition. (Here is my first post about it.)
The attack only works sometimes, but it does allow access to millions of hotel rooms worldwide that are secured by Onity brand locks. Basically, you can read the unit's key out of the power port on the bottom of the lock, and then feed it back to the lock to authenticate an open command using the same power port.
Wired has an interesting and comprehensive profile on Eugene Kaspersky. Especially note Kaspersky Lab's work to uncover US cyberespionage against Iran, Kaspersky's relationship with Russia's state security services, and the story of the kidnapping of Kaspersky's son, Ivan.
The new thing about the Aurora shooting wasn't the weaponry, but the armor:
What distinguished Holmes wasn't his offense. It was his defense. At Columbine, Harris and Klebold did their damage in T-shirts and cargo pants. Cho and Loughner wore sweatshirts. Hasan was gunned down in his Army uniform.
Holmes' outfit blew these jokers away. He wore a ballistic helmet, a ballistic vest, ballistic leggings, a throat protector, a groin protector, and tactical gloves. He was so well equipped that if anyone in that theater had tried what the National Rifle Association recommends -- drawing a firearm to stop the carnage -- that person would have been dead meat. Holmes didn't just kill a dozen people. He killed the NRA's answer to gun violence.
Essentially, Holmes has called the NRA's bluff. It may be true that the best way to stop a bad guy with a gun is a good guy with a gun. But the best way to stop a good guy with a gun is a bad guy with body armor. And judging from Holmes' vest receipt, he wasn't even buying the serious stuff.
The NRA bases its good-guy approach on a well-substantiated military doctrine: deterrence. By arming myself with a weapon that can hurt you, I discourage you from attacking me. For many years, this doctrine averted war between the United States and the Soviet Union. Each side feared mutually assured destruction. What broke the deadlock wasn't a weapon. It was a shield: strategic missile defense. The Soviets understood that a system capable of shooting down their nuclear missiles would, by removing their power to deter us, free us to attack. The best offense, it turns out, is a good defense.
That's what Holmes figured out. Defense, not offense, is the next stage of the gun-violence arms race. Equipping citizens with concealed weapons doesn't stop bad guys. It just pushes them to the next level. The next level is body armor. And unlike missile defense, which has proved to be complicated and disappointing, body armor is relatively simple.
EDITED TO ADD (8/2): Seems that the amount of body armor Holmes wore was exaggerated.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient, an IBM Company.