June 15, 2008
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0806.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- The War on Photography
- Crossing Borders with Laptops and PDAs
- E-Mail After the Rapture
- Fax Signatures
- The War on T-Shirts
- Schneier/BT News
- More on Airplane Seat Cameras
- How to Sell Security
- Comments from Readers
What is it with photographers these days? Are they really all terrorists, or does everyone just think they are?
Since 9/11, there has been an increasing war on photography. Photographers have been harassed, questioned, detained, arrested or worse, and declared to be unwelcome. We’ve been repeatedly told to watch out for photographers, especially suspicious ones. Clearly any terrorist is going to first photograph his target, so vigilance is required.
Except that it’s nonsense. The 9/11 terrorists didn’t photograph anything. Nor did the London transport bombers, the Madrid subway bombers, or the liquid bombers arrested in 2006. Timothy McVeigh didn’t photograph the Oklahoma City Federal Building. The Unabomber didn’t photograph anything; neither did shoe-bomber Richard Reid. Photographs aren’t being found amongst the papers of Palestinian suicide bombers. The IRA wasn’t known for its photography. Even those manufactured terrorist plots that the US government likes to talk about—the Ft. Dix terrorists, the JFK airport bombers, the Miami 7, the Lackawanna 6—no photography.
Given that real terrorists, and even wannabe terrorists, don’t seem to photograph anything, why is it such pervasive conventional wisdom that terrorists photograph their targets? Why are our fears so great that we have no choice but to be suspicious of any photographer?
Because it’s a movie-plot threat.
A movie-plot threat is a specific threat, vivid in our minds like the plot of a movie. You remember them from the months after the 9/11 attacks: anthrax spread from crop dusters, a contaminated milk supply, terrorist scuba divers armed with almanacs. Our imaginations run wild with detailed and specific threats, from the news, and from actual movies and television shows. These movie plots resonate in our minds and in the minds of others we talk to. And many of us get scared.
Terrorists taking pictures is a quintessential detail in any good movie. Of course it makes sense that terrorists will take pictures of their targets. They have to do reconnaissance, don’t they? We need 45 minutes of television action before the actual terrorist attack—90 minutes if it’s a movie—and a photography scene is just perfect. It’s our movie-plot terrorists that are photographers, even if the real-world ones are not.
The problem with movie-plot security is it only works if we guess the plot correctly. If we spend a zillion dollars defending Wimbledon and terrorists blow up a different sporting event, that’s money wasted. If we post guards all over the Underground and terrorists bomb a crowded shopping area, that’s also a waste. If we teach everyone to be alert for photographers, and terrorists don’t take photographs, we’ve wasted money and effort, and taught people to fear something they shouldn’t.
And even if terrorists did photograph their targets, the math doesn’t make sense. Billions of photographs are taken by honest people every year, 50 billion by amateurs alone in the US. And the national monuments you imagine terrorists taking photographs of are the same ones tourists like to take pictures of. If you see someone taking one of those photographs, the odds are infinitesimal that he’s a terrorist.
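The base-rate arithmetic can be made concrete. A back-of-envelope sketch in Python (the terrorist photo count is a pure assumption, chosen to be absurdly generous):

```python
# Base-rate check: even granting terrorists far more reconnaissance
# photography than the evidence supports, the chance that a given
# monument photographer is a terrorist is vanishingly small.
# The terrorist figure below is an illustrative assumption, not data.

honest_photos_per_year = 50_000_000_000  # US amateur photos, per the essay
terrorist_recon_photos = 1_000           # wildly generous assumption

p_terrorist_given_photo = terrorist_recon_photos / (
    honest_photos_per_year + terrorist_recon_photos
)

print(f"P(terrorist | photo) ~ {p_terrorist_given_photo:.1e}")  # ~2e-08
```

Even a thousandfold error in the assumption leaves the odds negligible.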
Of course, it’s far easier to explain the problem than it is to fix it. Because we’re a species of storytellers, we find movie-plot threats uniquely compelling. A single vivid scenario will do more to convince people that photographers might be terrorists than all the data I can muster to demonstrate that they’re not.
Fear aside, there aren’t many legal restrictions on what you can photograph from a public place that’s already in public view. If you’re harassed, it’s almost certainly a law enforcement official, public or private, acting way beyond his authority. There’s nothing in any post-9/11 law that restricts your right to photograph.
This is worth fighting. Search "photographer rights" on Google and download one of the several wallet documents that can help you if you get harassed; I found one each for the UK, US, and Australia. Don't cede your right to photograph in public. Don't propagate the terrorist-photographer story. Remind anyone who tries to stop you that prohibiting photography was something we used to ridicule about the USSR. Eventually sanity will be restored, but it may take a while.
Incidents and anti-photography campaigns:
Fake terrorist plots in the US:
Data on photographs in the US:
A comment from someone who trains security guards:
This essay originally appeared in The Guardian:
Last month a US court ruled that border agents can search your laptop, or any other electronic device, when you're entering the country. They can take your computer and download its entire contents, or keep it for several days. Customs and Border Protection has not published any rules about the practice, and others and I have written a letter to Congress urging it to investigate and regulate it.
But the U.S. is not alone. British customs agents search laptops for pornography. And there are reports on the internet of this sort of thing happening at other borders, too. You might not like it, but it’s a fact. So how do you protect yourself?
Encrypting your entire hard drive, something you should certainly do for security in case your computer is lost or stolen, won’t work here. The border agent is likely to start this whole process with a “please type in your password”. Of course you can refuse, but the agent can search you further, detain you longer, refuse you entry into the country and otherwise ruin your day.
You’re going to have to hide your data. Set a portion of your hard drive to be encrypted with a different key – even if you also encrypt your entire hard drive – and keep your sensitive data there. Lots of programs allow you to do this. I use PGP Disk. TrueCrypt is also good, and free.
While customs agents might poke around on your laptop, they’re unlikely to find the encrypted partition. (You can make the icon invisible, for some added protection.) And if they download the contents of your hard drive to examine later, you won’t care.
Be sure to choose a strong encryption password. Details are too complicated for a quick tip, but basically anything easy to remember is easy to guess. Unfortunately, this isn’t a perfect solution. Your computer might have left a copy of the password on the disk somewhere, and (as I also describe at the above link) smart forensic software will find it.
So your best defense is to clean up your laptop. A customs agent can’t read what you don’t have. You don’t need five years’ worth of e-mail and client data. You don’t need your old love letters and those photos (you know the ones I’m talking about). Delete everything you don’t absolutely need. And use a secure file erasure program to do it. While you’re at it, delete your browser’s cookies, cache and browsing history. It’s nobody’s business what websites you’ve visited. And turn your computer off—don’t just put it to sleep—before you go through customs; that deletes other things. Think of all this as the last thing to do before you stow your electronic devices for landing.

Some companies now give their employees forensically clean laptops for travel, and have them download any sensitive data over a virtual private network once they’ve entered the country. They send any work back the same way, and delete everything again before crossing the border to go home. This is a good idea if you can do it.
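As a sketch of what a secure file erasure program does, here is a toy Python version. It only illustrates the overwrite-then-delete idea; real tools (shred, srm, and the like) ship with long caveat lists about filesystem subtleties this ignores:

```python
import os
import tempfile

def toy_secure_wipe(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before unlinking it.

    Toy illustration only: journaling filesystems, SSD wear-leveling,
    swap, and backups can all keep copies this never touches.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)                    # rewind and overwrite in place
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())         # push the overwrite to disk
    os.remove(path)

# Demo: wipe a throwaway file.
fd, path = tempfile.mkstemp()
os.write(fd, b"five years' worth of e-mail")
os.close(fd)
toy_secure_wipe(path)
print(os.path.exists(path))  # False
```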
If you can’t, consider putting your sensitive data on a USB drive or even a camera memory card: even 16GB cards are reasonably priced these days. Encrypt it, of course, because it’s easy to lose something that small. Slip it in your pocket, and it’s likely to remain unnoticed even if the customs agent pokes through your laptop. If someone does discover it, you can try saying: “I don’t know what’s on there. My boss told me to give it to the head of the New York office.” If you’ve chosen a strong encryption password, you won’t care if he confiscates it.
Lastly, don’t forget your phone and PDA. Customs agents can search those too: e-mails, your phone book, your calendar. Unfortunately, there’s nothing you can do here except delete things.
I know this all sounds like work, and that it’s easier to just ignore everything here and hope you don’t get searched. Today, the odds are in your favor. But new forensic tools are making automatic searches easier and easier, and the recent US court ruling is likely to embolden other countries. It’s better to be safe than sorry.
Addendum: Many people have pointed out to me that I advise people to lie to a government agent. That is, of course, illegal in the U.S. and probably most other countries—and probably not the best advice for me to be on record as giving. So be sure you clear your story first with both your boss and the New York office.
This essay originally appeared in The Guardian:
Terrorists attacking via air conditioners:
There is a random-number bug in Debian Linux, and it’s been there since September 2006. It’s a big deal. Random numbers are used everywhere in cryptography, for both short- and long-term security. And, as we’ve seen here, security flaws in random number generators are really easy to accidentally create and really hard to discover after the fact. Back when the NSA was routinely weakening commercial cryptography, their favorite technique was reducing the entropy of the random number generator.
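To see why low entropy is so devastating, here is a toy Python model loosely inspired by the Debian bug, in which the process ID was essentially the only entropy left: a "strong" 128-bit key derived from a 15-bit seed falls to exhaustive search in a single loop. (The key-derivation function here is invented for illustration, not the actual OpenSSL code path.)

```python
import random

# Toy model of a crippled key generator: all randomness comes from the
# process ID, which on Linux at the time had at most 32,768 values.

def toy_keygen(pid: int) -> int:
    rng = random.Random(pid)     # stand-in for a PRNG seeded only by PID
    return rng.getrandbits(128)  # looks like a strong 128-bit key

victim_key = toy_keygen(4321)    # the attacker never learns the PID...

# ...but doesn't need to: the whole keyspace fits in one loop.
recovered_pid = next(
    pid for pid in range(32768) if toy_keygen(pid) == victim_key
)
print(recovered_pid)  # 4321
```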
An airplane hijacker—a real one, someone with actual airplane hijacking experience—was working somewhere related to Heathrow Airport.
Airlines are profiting from the TSA rules on photo IDs. If a traveler’s ID doesn’t match his ticket, the airline charges him $100 to change the ticket. This makes absolutely no sense. If things were sensible, the TSA employee who checks the ticket against the ID would make the determination if the names were the same. Instead, the passenger is forced to go back to the airline, which, for a fee, changes the name on the ticket to match the ID. This latter system is no more secure. If anything, it’s less secure. But rules are rules, so it’s what has to happen.
A related comic:
Spying on computer monitors off reflective objects:
Fascinating study, conducted by the Cultural Cognition Project at Yale Law School, on risk and culture:
There is a battle going on between BlackBerry and the Indian government over encryption keys and the ability to eavesdrop on e-mail traffic:
Great article on surveillance in China from Rolling Stone:
The UK wants to monitor all phone calls and e-mails—to prevent terrorism, of course.
A nasal spray of oxytocin increases trust for strangers. Although if you’re going to allow someone to spray something up your nose, you probably trust him already.
Here is the text and video of Dan Geer’s remarks at Source Boston 2008, basically a L0pht reunion with friends. He talks about security, monoculture, metrics, evolution, etc. Meandering, but interesting.
It’s not that we didn’t think it was possible to track people by their mobile phones; it was just a matter of when they started doing it.
Spray-on explosives detector:
Interesting stuff on the built-in Windows command-line security tools:
Nolan Bushnell claims that trusted computing will end piracy. Now that’s funny.
Jared Diamond writes about vengeance and human nature:
Bletchley Park may close due to lack of funds:
Electronic Crime Scene Investigation: A Guide for First Responders, Second Edition, National Institute of Justice, U.S. Department of Justice, April 2008.
This article claims that the Chinese People’s Liberation Army was behind, among other things, the August 2003 blackout. This is all so much nonsense I don’t even know where to begin. I’ve already written about the blackout: the computer failures were caused by Blaster. Of course, large-scale power outages are never one thing. They’re a small problem that cascades into a series of ever-bigger problems. But the triggering problem was those power lines.
Debunking from Wired:
Me on the blackout:
This video is priceless. A Washington, DC, news crew goes down to Union Station to interview someone from Amtrak about people who have been stopped from taking pictures, even though there’s no policy against it. As the Amtrak spokesperson is explaining that there is no policy against photography, a guard comes up and tries to stop them from filming, saying it is against the rules.
The Center for American Progress published its paper on identification and identification technologies: “The ID Divide: Addressing the Challenges of Identification and Authentication in American Society.” I was one of the participants in the project that created this paper, and it’s worth reading. Among other things, the paper identifies six principles for identification systems: 1) achieve real security or other goals, 2) accuracy, 3) inclusion, 4) fairness and equality, 5) effective redress mechanisms, and 6) equitable financing for systems.
This is a clever micro-deposit scam: “Michael Largent, 22, of Plumas Lake, California, allegedly exploited a loophole in a common procedure both companies follow when a customer links his brokerage account to a bank account for the first time. To verify that the account number and routing information is correct, the brokerages automatically send small ‘micro-deposits’ of between two cents to one dollar to the account, and ask the customer to verify that they’ve received it. Largent allegedly used an automated script to open 58,000 online brokerage accounts, linking each of them to a handful of online bank accounts, and accumulating thousands of dollars in micro-deposits.”
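A rough estimate of the take, using the figures in the article (the deposits-per-account count and the uniform-spread average are assumptions for illustration):

```python
# Back-of-envelope estimate of the micro-deposit haul.
# Deposits were "between two cents to one dollar"; assume a uniform
# spread, so the average is the midpoint. Deposits per account is a
# guess based on "linking each of them to a handful of bank accounts."

accounts = 58_000
deposits_per_account = 2          # assumption
avg_deposit = (0.02 + 1.00) / 2   # $0.51, assuming a uniform spread

expected_haul = accounts * deposits_per_account * avg_deposit
print(f"${expected_haul:,.0f}")   # tens of thousands of dollars
```

Pennies at a time, but automation makes the pennies add up.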
And this is a clever museum theft:
Researchers from the University of Washington have demonstrated how lousy the MPAA/RIAA/etc. tactics are by successfully framing printers on their network. These printers, which can’t download anything, received nine takedown notices:
Great fear-mongering product: a subway emergency kit.
Sikhs can carry knives on airplanes in India. How airport security is supposed to recognize a Sikh passenger is not explained.
Buses are having defenses installed against terrorists who want to reenact the movie Speed:
FakeTV is a burglary prevention device that simulates a television.
The TSA has a new photo ID requirement: people who refuse to show ID on principle will not be allowed to fly, but people who claim to have lost their ID will. I feel well-protected against terrorists who can’t lie.
I don’t think any further proof is needed that the ID requirement has nothing to do with security, and everything to do with control.
I can’t figure this story out. Kaspersky Lab is launching an international distributed effort to crack a 1024-bit RSA key used by the Gpcode Virus. From their website: “We estimate it would take around 15 million modern computers, running for about a year, to crack such a key.” What are they smoking at Kaspersky? We’ve never factored a 1024-bit number—at least, not outside any secret government agency—and it’s likely to require a lot more than 15 million computer years of work. The current factoring record is a 1023-bit number, but that was a special number that’s easier to factor than a product-of-two-primes number used in RSA. Breaking that Gpcode key will take a lot more mathematical prowess than you can reasonably expect to find by asking nicely on the Internet. You’ve got to understand the current best mathematical and computational optimizations of the Number Field Sieve, and cleverly distribute the parts that can be distributed. You can’t just post the products and hope for the best.
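The gap between Kaspersky's estimate and reality is easy to ballpark with the heuristic running time of the General Number Field Sieve. A quick Python sketch comparing a 1024-bit modulus with RSA-200, the 663-bit general-number factoring record from 2005 (constants and memory costs are ignored, so treat the ratio only as an order-of-magnitude guide):

```python
import math

def gnfs_work(bits: int) -> float:
    """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)], constants ignored."""
    ln_n = bits * math.log(2)
    return math.exp(
        (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    )

# Estimated effort for a 1024-bit RSA modulus relative to the 663-bit
# RSA-200 factorization: tens of thousands of times the work, by this
# crude estimate -- and RSA-200 itself took years of CPU time.
ratio = gnfs_work(1024) / gnfs_work(663)
print(f"~{ratio:.1e}x harder")
```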
Top secret UK government al Qaeda documents left on a London train. Oops.
It’s easy to laugh at the You’ve Been Left Behind site, which purports to send automatic e-mails to your friends after the Rapture:
“The unsaved will be ‘left behind’ on earth to go through the ‘tribulation period’ after the ‘Rapture’…. We have made it possible for you to send them a letter of love and a plea to receive Christ one last time. You will also be able to give them some help in living out their remaining time. In the encrypted portion of your account you can give them access to your banking, brokerage, hidden valuables, and powers of attorneys’ (you won’t be needing them any more, and the gift will drive home the message of love). There won’t be any bodies, so probate court will take 7 years to clear your assets to your next of Kin. 7 years of course is all the time that will be left. So, basically the Government of the AntiChrist gets your stuff, unless you make it available in another way.”
But what if the creator of this site isn’t as scrupulous as he implies he is? What if he uses all of that account information, passwords, safe combinations, and whatever *before* any rapture? And even if he is an honest true believer, this seems like a mighty juicy target for any would-be identity thief.
And—if you’re curious—this is how the triggering mechanism works:
“We have set up a system to send documents by the email, to the addresses you provide, 6 days after the ‘Rapture’ of the Church. This occurs when 3 of our 5 team members scattered around the U.S fail to log in over a 3 day period. Another 3 days are given to fail safe any false triggering of the system.”
The site claims that the data can be encrypted, but it looks like the encryption key is stored on the server with the data.
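The trigger they describe is a classic dead-man's switch. A minimal sketch of the quoted 3-of-5, 3-day logic (the names and dates are invented):

```python
from datetime import datetime, timedelta

# Dead-man's switch per the quoted description: fire when 3 of the 5
# team members have not logged in for 3 days. (The site then waits a
# further 3 days as a fail-safe before sending anything.)

LOGIN_TIMEOUT = timedelta(days=3)
QUORUM = 3

def rapture_detected(last_logins: dict[str, datetime], now: datetime) -> bool:
    overdue = sum(
        1 for last in last_logins.values() if now - last > LOGIN_TIMEOUT
    )
    return overdue >= QUORUM

now = datetime(2008, 6, 15)
logins = {
    "alice": now - timedelta(days=4),
    "bob":   now - timedelta(days=5),
    "carol": now - timedelta(days=1),
    "dave":  now - timedelta(hours=6),
    "eve":   now - timedelta(days=4),
}
print(rapture_detected(logins, now))  # True: three members are overdue
```

Note the failure modes: a server outage, or three members simultaneously on vacation, triggers it just as surely as the Rapture does.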
Here’s a similar site, run by atheists so they can guarantee that they’ll be left behind to deliver all the messages:
Aren’t fax signatures the weirdest thing? It’s trivial to cut and paste—with real scissors and glue—anyone’s signature onto a document so that it’ll look real when faxed. There is so little security in fax signatures that it’s mind-boggling that anyone accepts them.
Yet people do, all the time. I’ve signed book contracts, credit card authorizations, nondisclosure agreements and all sorts of financial documents—all by fax. I even have a scanned file of my signature on my computer, so I can virtually cut and paste it into documents and fax them directly from my computer without ever having to print them out. What in the world is going on here?
And, more importantly, why are fax signatures still being used after years of experience? Why aren’t there many stories of signatures forged through the use of fax machines?
The answer comes from looking at fax signatures not as an isolated security measure, but in the context of the larger system. Fax signatures work because signed faxes exist within a broader communications context.
In a 2003 paper, “Economics, Psychology, and Sociology of Security,” Professor Andrew Odlyzko looks at fax signatures and concludes: “Although fax signatures have become widespread, their usage is restricted. They are not used for final contracts of substantial value, such as home purchases. That means that the insecurity of fax communications is not easy to exploit for large gain. Additional protection against abuse of fax insecurity is provided by the context in which faxes are used. There are records of phone calls that carry the faxes, paper trails inside enterprises and so on. Furthermore, unexpected large financial transfers trigger scrutiny. As a result, successful frauds are not easy to carry out by purely technical means.”
He’s right. Thinking back, there really aren’t ways in which a criminal could use a forged document sent by fax to defraud me. I suppose an unscrupulous consulting client could forge my signature on an non-disclosure agreement and then sue me, but that hardly seems worth the effort. And if my broker received a fax document from me authorizing a money transfer to a Nigerian bank account, he would certainly call me before completing it.
Credit card signatures aren’t verified in person, either—and I can already buy things over the phone with a credit card—so there are no new risks there, and Visa knows how to monitor transactions for fraud. Lots of companies accept purchase orders via fax, even for large amounts of stuff, but there’s a physical audit trail, and the goods are shipped to a physical address—probably one the seller has shipped to before. Signatures are kind of a business lubricant: mostly, they help move things along smoothly.
Except when they don’t.
On October 30, 2004, Tristian Wilson was released from a Memphis jail on the authority of a forged fax message. It wasn’t even a particularly good forgery. It wasn’t on the standard letterhead of the West Memphis Police Department. The name of the policeman who signed the fax was misspelled. And the time stamp on the top of the fax clearly showed that it was sent from a local McDonald’s.
The success of this hack has nothing to do with the fact that it was sent over by fax. It worked because the jail had lousy verification procedures. They didn’t notice any discrepancies in the fax. They didn’t notice the phone number from which the fax was sent. They didn’t call and verify that it was official. The jail was accustomed to getting release orders via fax, and just acted on this one without thinking. Would it have been any different had the forged release form been sent by mail or courier?
Yes, fax signatures always exist in context, but sometimes they are the linchpin within that context. If you can mimic enough of the context, or if those on the receiving end become complacent, you can get away with mischief.
Arguably, this is part of the security process. Signatures themselves are poorly defined. Sometimes a document is valid even if not signed: A person with both hands in a cast can still buy a house. Sometimes a document is invalid even if signed: The signer might be drunk, or have a gun pointed at his head. Or he might be a minor. Sometimes a valid signature isn’t enough; in the United States there is an entire infrastructure of “notary publics” who officially witness signed documents. When I started filing my tax returns electronically, I had to sign a document stating that I wouldn’t be signing my income tax documents. And banks don’t even bother verifying signatures on checks less than $30,000; it’s cheaper to deal with fraud after the fact than prevent it.
Over the course of centuries, business and legal systems have slowly sorted out what types of additional controls are required around signatures, and in which circumstances.
Those same systems will be able to sort out fax signatures, too, but it’ll be slow. And that’s where there will be potential problems. Already fax is a declining technology. In a few years it’ll be largely obsolete, replaced by PDFs sent over e-mail and other forms of electronic documentation. In the past, we’ve had time to figure out how to deal with new technologies. Now, by the time we institutionalize these measures, the technologies are likely to be obsolete.
What that means is people are likely to treat fax signatures—or whatever replaces them—exactly the same way as paper signatures. And sometimes that assumption will get them into trouble.
But it won’t cause social havoc. Wilson’s story is remarkable mostly because it’s so exceptional. And even he was rearrested at his home less than a week later. Fax signatures may be new, but fake signatures have always been a possibility. Our legal and business systems need to deal with the underlying problem—false authentication—rather than focus on the technology of the moment. Systems need to defend themselves against the possibility of fake signatures, regardless of how they arrive.
This essay originally appeared on Wired.com:
Another fake fax story: “Federal Jury Convicts N.Y. Attorney of Faking Judge’s Order.”
London Heathrow security stopped someone from boarding a plane for wearing a Transformers T-shirt showing a cartoon gun.
It’s easy to laugh and move on. How stupid can these people be, we wonder. But there’s a more important security lesson here. Security screening is hard, and every false threat the screeners watch out for makes it more likely that real threats slip through. At a party the other night, someone told me about the time he accidentally brought a large knife through airport security. The screener pulled his bag aside, searched it, and pulled out a water bottle.
It’s not just the water bottles and the T-shirts and the gun jewelry—this kind of thing actually makes us all less safe.
Keeping gun jewelry off airplanes:
An audio of Schneier’s talk at the Weisman Art Museum in Minneapolis on March 27.
A video of Schneier’s talk at the Hack-in-the-Box conference in Dubai on April 16.
A Q&A from CSO Magazine:
An article on Schneier from The Star in Malaysia:
A tall tale about Schneier: “Bruce Schneier and the King of the Crabs.”
An amusing Schneier motivational poster:
Schneier is speaking at the Supernova conference in San Francisco on June 17:
Schneier is speaking at the Interdisciplinary Studies in Information Security at Monte Verita in Switzerland.
I already blogged this once: an airplane-seat camera system that tries to detect terrorists before they leap up and do whatever they were planning on doing. Amazingly enough, the EU is “testing” this system.
This pegs the stupid meter. All it will do is false alarm. No one has any idea what sorts of facial characteristics are unique to terrorists. And how in the world are they “testing” this system without any real terrorists? In any case, what happens when the alarm goes off? How exactly is a ten-second warning going to save people?
Sure, you can invent a terrorist tactic where a system like this, assuming it actually works, saves people—but that’s the very definition of a movie-plot threat. How about we spend this money on something that’s effective in more than just a few carefully chosen scenarios?
It’s a truism in sales that it’s easier to sell someone something he wants than a defense against something he wants to avoid. People are reluctant to buy insurance, or home security devices, or computer security anything. It’s not that they never buy these things, but it’s an uphill struggle.
The reason is psychological. And it’s the same dynamic when it’s a security vendor trying to sell its products or services, a CIO trying to convince senior management to invest in security, or a security officer trying to implement a security policy with her company’s employees.
It’s also true that the better you understand your buyer, the better you can sell.
First, a bit about Prospect Theory, the underlying theory behind the newly popular field of behavioral economics. Prospect Theory was developed by Daniel Kahneman and Amos Tversky in 1979 (Kahneman went on to win a Nobel Prize for this and other similar work) to explain how people make trade-offs that involve risk. Before this work, economists had a model of “economic man,” a rational being who makes trade-offs based on some logical calculation. Kahneman and Tversky showed that real people are far more subtle and ornery.
Here’s an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500 or a 50% chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500 or a 50% chance of losing $1,000.
These two trade-offs are very similar, and traditional economics predicts that whether you’re contemplating a gain or a loss doesn’t make a difference: People make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn’t affect the mathematics and therefore shouldn’t affect the results. This is traditional economics, and it’s called Utility Theory.
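The symmetry is easy to check: each risky option has exactly the same expected value as its sure counterpart, which is why Utility Theory says framing shouldn't matter. A quick simulation (trial count arbitrary):

```python
import random

random.seed(0)

# Each coin-flip option pays the full amount half the time and nothing
# otherwise, so its expected value equals the corresponding sure thing:
# the 50% shot at $1,000 averages out to the sure $500, for gains and
# losses alike. Utility Theory therefore predicts identical choices.

def risky(amount: float, trials: int = 100_000) -> float:
    """Average outcome of a 50/50 gamble on `amount` vs. nothing."""
    total = sum(amount if random.random() < 0.5 else 0 for _ in range(trials))
    return total / trials

print(round(risky(1000)))   # ~500: matches the sure $500 gain
print(round(risky(-1000)))  # ~-500: matches the sure $500 loss
```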
But Kahneman’s and Tversky’s experiments contradicted Utility Theory. When faced with a gain, about 85% of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70% chose the risky larger loss over the sure smaller loss.
This experiment, repeated again and again by many researchers, across ages, genders, cultures and even species, has always yielded the same result, and it rocked economics. Directly contradicting the traditional idea of “economic man,” Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or “A bird in the hand is worth two in the bush.” And two, a sure loss is worse than a chance at a greater loss, or “Run away and live to fight another day.” Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50% chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.
This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the “Asian Disease Experiment” for an almost surreal example. Describing the same policy choice in different ways—either as “200 lives saved out of 600” or “400 lives lost out of 600”—yields wildly different risk reactions.
Evolutionarily, the bias makes sense. It’s a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad; both can result in death, and the best option is to risk everything for the chance at no loss at all.
How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It’s a choice between a small sure loss—the cost of the security product—and a large risky loss: for example, the results of an attack on one’s network. Of course there’s a lot more to the sale. The buyer has to be convinced that the product works, and he has to understand the threats against him and the risk that something bad will happen. But all things being equal, buyers would rather take the chance that the attack won’t happen than suffer the sure loss that comes from purchasing the security product.
Security sellers know this, even if they don’t understand why, and are continually trying to frame their products in positive results. That’s why you see slogans with the basic message, “We take care of security so you can focus on your business,” or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.
One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they’re willing to do almost anything to make that feeling go away; lots of other psychological research supports that. Any burglar alarm salesman will tell you that people buy only after they’ve been robbed, or after one of their neighbors has been robbed. And the fears stoked by 9/11, and the politics surrounding 9/11, have fueled an entire industry devoted to counterterrorism. When emotion takes over like that, people are much less likely to think rationally.
Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service. Your car comes with safety and security features built in; they’re not sold separately. Same with your house. And it should be the same with computers and networks. Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for. Security shouldn’t be a separate policy for employees to follow but part of overall IT policy.
Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.
This essay originally appeared in CIO:
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.
Copyright (c) 2008 by Bruce Schneier.