Schneier on Security
A blog covering security and security technology.
May 2008 Archives
Electronic Crime Scene Investigation: A Guide for First Responders, Second Edition, National Institute of Justice, U.S. Department of Justice, April 2008.
Mostly basic stuff.
But, despite an impressive contribution to the war effort, the Bletchley Park site, now a museum, faces a bleak future unless it can secure funding to keep its doors open and its numerous exhibits from rotting away.
Anybody out there want to help put together a major contribution?
EDITED TO ADD (5/30): Yes, I am willing to be a focal point for donations. But I'm hoping for some major donors.
Jared Diamond on vengeance and human nature:
This question of state government's recent origins, and, conversely, of its long failure to originate throughout most of human history, is a fundamental concern for social scientists. Until fifty-five hundred years ago, there were no state governments anywhere in the world. Even as late as 1492, all of North America, sub-Saharan Africa, Australia, New Guinea, and the Pacific islands, and most of Central and South America didn't have states and instead operated under simpler forms of societal organization (chiefdoms, tribes, and bands). Today, though, the whole world map is divided into states. Of course, most of that extension of state government has involved existing states from elsewhere imposing their government on stateless societies, as happened in New Guinea. But the first state in world history, at least, must have arisen de novo, and we now know that states arose independently in many parts of the world. How did it happen?
Ha ha ha ha. Famous last words from Atari founder Nolan Bushnell:
"There is a stealth encryption chip called a TPM that is going on the motherboards of most of the computers that are coming out now," he pointed out
"TPM" stands for "Trusted Platform Module." It's a chip that is probably already in your computer and may someday be used to enforce security: both your security, and the security of software and media companies against you. The system is complicated, and while it will prevent some attacks, there are lots of ways to hack it. (I've written about TPM here, and here when Microsoft called it Palladium. Ross Anderson has some good stuff here.)
William Trogler and his team at the University of California, San Diego, made a silafluorene-fluorene copolymer to identify nitrogen-containing explosives. It is the first of its kind to act as a switchable sensor with picogram (10^-12 g) detection limits, and is reported in the Royal Society of Chemistry's Journal of Materials Chemistry.
Not that we didn't think it was possible:
The surveillance mechanism works by monitoring the signals produced by mobile handsets and then locating the phone by triangulation -- measuring the phone's distance from three receivers.
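As a sketch of how that works (strictly speaking it's trilateration, since it uses distances rather than angles), here is a minimal Python example; the receiver coordinates and the exact-distance assumption are invented for illustration:

    import math

    def trilaterate(p1, d1, p2, d2, p3, d3):
        # Subtracting the circle equations (x-xi)^2 + (y-yi)^2 = di^2
        # pairwise leaves two linear equations in (x, y).
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        a, b = 2 * (x2 - x1), 2 * (y2 - y1)
        c = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
        d, e = 2 * (x3 - x1), 2 * (y3 - y1)
        f = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
        det = a * e - b * d  # zero if the receivers are collinear
        return ((c * e - b * f) / det, (a * f - c * d) / det)

    # Hypothetical receivers, and a phone at (3, 4) measured exactly:
    towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    phone = (3.0, 4.0)
    dists = [math.dist(phone, t) for t in towers]
    print(trilaterate(towers[0], dists[0], towers[1], dists[1],
                      towers[2], dists[2]))  # -> (3.0, 4.0), up to float rounding

Real systems, of course, get noisy distance estimates and solve a least-squares version of the same equations.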
Seems to me that the point of sale is a pretty obvious place to match the location of an anonymous person with an identity.
At the end of the day, however, we are facing a much bigger, more metaphysical question than the ones I have so far posed. That I can pose many others is of no consequence; either you are sick of them by now or you are scribbling down your own as I speak. The bigger question is this -- how much security do we want?
Okay; this'll be fun. What's the most creative abuse for this that you can think of?
Previous studies have shown that participants in "trust games" took greater risks with their money after inhaling the hormone via a nasal spray.
It's a truism in sales that it's easier to sell someone something he wants than a defense against something he wants to avoid. People are reluctant to buy insurance, or home security devices, or computer security anything. It's not that they never buy these things, but it's an uphill struggle.
The reason is psychological. And it's the same dynamic when it's a security vendor trying to sell its products or services, a CIO trying to convince senior management to invest in security, or a security officer trying to implement a security policy with her company's employees.
It's also true that the better you understand your buyer, the better you can sell.
First, a bit about Prospect Theory, the underlying theory behind the newly popular field of behavioral economics. Prospect Theory was developed by Daniel Kahneman and Amos Tversky in 1979 (Kahneman went on to win a Nobel Prize for this and other similar work) to explain how people make trade-offs that involve risk. Before this work, economists had a model of "economic man," a rational being who makes trade-offs based on some logical calculation. Kahneman and Tversky showed that real people are far more subtle and ornery.
Here's an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500 or a 50 percent chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500 or a 50 percent chance of losing $1,000.
These two trade-offs are very similar, and traditional economics predicts that whether you're contemplating a gain or a loss doesn't make a difference: People make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn't affect the mathematics and therefore shouldn't affect the results. This is traditional economics, and it's called Utility Theory.
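The arithmetic behind that prediction is easy to check. A minimal Python sketch confirms that both pairs of alternatives have identical expected values, so a purely rational calculator should be indifferent to the framing:

    def expected_value(outcomes):
        # outcomes: list of (probability, dollar value) pairs
        return sum(p * v for p, v in outcomes)

    sure_gain  = [(1.0, 500)]
    risky_gain = [(0.5, 1000), (0.5, 0)]
    sure_loss  = [(1.0, -500)]
    risky_loss = [(0.5, -1000), (0.5, 0)]

    print(expected_value(sure_gain), expected_value(risky_gain))  # 500.0 500.0
    print(expected_value(sure_loss), expected_value(risky_loss))  # -500.0 -500.0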
But Kahneman and Tversky's experiments contradicted Utility Theory. When faced with a gain, about 85 percent of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70 percent chose the risky larger loss over the sure smaller loss.
This experiment, repeated again and again by many researchers across ages, genders, cultures, and even species, has always yielded the same result, and it rocked economics. Directly contradicting the traditional idea of "economic man," Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or "A bird in the hand is worth two in the bush." And two, a sure loss is worse than a chance at a greater loss, or "Run away and live to fight another day." Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50 percent chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.
This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the "Asian Disease Experiment" for an almost surreal example. Describing the same policy choice in different ways--either as "200 lives saved out of 600" or "400 lives lost out of 600"-- yields wildly different risk reactions.
Evolutionarily, the bias makes sense. It's a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there's a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor's edge between starvation and reproduction, any loss of food -- whether small or large -- can be equally bad; both can result in death, so the best option is to risk everything for the chance at no loss at all.
How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It's a choice between a small sure loss -- the cost of the security product -- and a large risky loss: for example, the results of an attack on one's network. Of course there's a lot more to the sale. The buyer has to be convinced that the product works, and he has to understand the threats against him and the risk that something bad will happen. But all things being equal, buyers would rather take the chance that the attack won't happen than suffer the sure loss that comes from purchasing the security product.
Security sellers know this, even if they don't understand why, and are continually trying to frame their products in positive results. That's why you see slogans with the basic message, "We take care of security so you can focus on your business," or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.
One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they're willing to do almost anything to make that feeling go away; lots of other psychological research supports that. Any burglar alarm salesman will tell you that people buy only after they've been robbed, or after one of their neighbors has been robbed. And the fears stoked by 9/11, and the politics surrounding 9/11, have fueled an entire industry devoted to counterterrorism. When emotion takes over like that, people are much less likely to think rationally.
Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service. Your car comes with safety and security features built in; they're not sold separately. Same with your house. And it should be the same with computers and networks. Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for. Security shouldn't be a separate policy for employees to follow but part of overall IT policy.
Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.
This essay originally appeared in CIO.
And a video from my talk at the Hack-in-the-Box conference in Dubai on April 16.
Great article from Rolling Stone.
RIM encrypts e-mail between BlackBerry devices and the server with 256-bit AES encryption. The Indian government doesn't like this at all; they want to snoop on the data. RIM's response was basically: That's not possible. The Indian government's counter was: Then we'll ban BlackBerries. After months of threats, it looks like RIM is giving in to Indian demands and handing over the encryption keys.
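To see why the dispute centers on the keys themselves, here is a minimal sketch of the underlying property, using Python's third-party cryptography package and AES-GCM purely for illustration (RIM's actual protocol details aren't in this post): whoever relays the traffic without the key sees only ciphertext.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # held by the endpoints, not the carrier
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"quarterly numbers attached", None)

    # The network in the middle sees only (nonce, ciphertext); recovering
    # the plaintext requires the 256-bit key itself.
    print(AESGCM(key).decrypt(nonce, ciphertext, None))

With a design like that, there is no middle ground: either the operator gets the key, or the operator reads nothing.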
EDITED TO ADD (5/27): News:
BlackBerry vendor Research-In-Motion (RIM) said it cannot hand over the message encryption key to the government as its security structure does not allow any ‘third party’ or even the company to read the information transferred over its network.
EDITED TO ADD (7/2): Looks like they have resolved the impasse.
The Second National Risk and Culture Study, conducted by the Cultural Cognition Project at Yale Law School.
And from the conclusion:
In the information age, we all have a data shadow.
We leave data everywhere we go. It's not just our bank accounts and stock portfolios, or our itemized bills, listing every credit card purchase and telephone call we make. It's automatic road-toll collection systems, supermarket affinity cards, ATMs and so on.
It's also our lives. Our love letters and friendly chat. Our personal e-mails and SMS messages. Our business plans, strategies and offhand conversations. Our political leanings and positions. And this is just the data we interact with. We all have shadow selves living in the data banks of hundreds of corporations and information brokers -- information about us that is both surprisingly personal and uncannily complete -- except for the errors that we can neither see nor correct.
What happens to our data happens to ourselves.
This shadow self doesn't just sit there: It's constantly touched. It's examined and judged. When we apply for a bank loan, it's our data that determines whether or not we get it. When we try to board an airplane, it's our data that determines how thoroughly we get searched -- or whether we get to board at all. If the government wants to investigate us, they're more likely to go through our data than they are to search our homes; for a lot of that data, they don't even need a warrant.
Who controls our data controls our lives.
It's true. Whoever controls our data can decide whether we can get a bank loan, on an airplane or into a country. Or what sort of discount we get from a merchant, or even how we're treated by customer support. A potential employer can, illegally in the U.S., examine our medical data and decide whether or not to offer us a job. The police can mine our data and decide whether or not we're a terrorist risk. If a criminal can get hold of enough of our data, he can open credit cards in our names, siphon money out of our investment accounts, even sell our property. Identity theft is the ultimate proof that control of our data means control of our life.
We need to take back our data.
Our data is a part of us. It's intimate and personal, and we have basic rights to it. It should be protected from unwanted touch.
We need a comprehensive data privacy law. This law should protect all information about us, and not be limited merely to financial or health information. It should limit others' ability to buy and sell our information without our knowledge and consent. It should allow us to see information about us held by others, and correct any inaccuracies we find. It should prevent the government from going after our information without judicial oversight. It should enforce data deletion, and limit data collection, where necessary. And we need more than token penalties for deliberate violations.
This is a tall order, and it will take years for us to get there. It's easy to do nothing and let the market take over. But as we see with things like grocery store club cards and click-through privacy policies on websites, most people either don't realize the extent to which their privacy is being violated or don't have any real choice. And businesses, of course, are more than happy to collect, buy, and sell our most intimate information. But the long-term effects of this on society are toxic; we give up control of ourselves.
This essay originally appeared on Wired.com.
EDITED TO ADD (5/21): A rebuttal.
At Saarland University, researchers trained a $500 telescope on a teapot near a computer monitor 5 meters away, professor Michael Backes told IDG. The reflected images are tiny but amazingly clear: the teapot yielded readable images of 12-point Word documents from a distance of 5 meters (16 feet). From 10 meters, they were able to read 18-point fonts. With a $27,500 Dobson telescope, they could get the same quality of images at 30 meters.
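As a rough plausibility check, here is a back-of-envelope diffraction-limit calculation in Python. The stroke width and the 20x demagnification by the curved teapot surface are my own assumptions, not figures from the paper:

    WAVELENGTH = 550e-9  # green light, in meters

    def min_aperture(feature_m, distance_m):
        # Rayleigh criterion: smallest aperture (in meters) that resolves
        # a feature subtending feature_m / distance_m radians.
        return 1.22 * WAVELENGTH * distance_m / feature_m

    # Strokes of 12-point text are roughly 0.5 mm wide on screen; assume
    # the convex reflection shrinks them by about 20x.
    stroke = 0.5e-3 / 20
    for dist_m in (5, 10, 30):
        print(f"{dist_m} m: aperture >= {min_aperture(stroke, dist_m) * 100:.0f} cm")

Under those assumptions the required aperture grows from about 13 cm at 5 meters to about 80 cm at 30 meters, which matches the trend in the reporting: a small, cheap telescope suffices close up, and a much larger instrument is needed farther away.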
Here's the paper:
Before 9/11, airlines and security personnel -- and I use the term "security personnel" loosely -- might have let a nickname or even a maiden name on a ticket slide. No longer. If you have the wrong name on your ticket, you're probably grounded. And there are two reasons for this: security and greed.
There are two things to get pissed off about here. One, the airlines profiting off a TSA rule. And two, a TSA rule that requires them to ignore what is obvious.
EDITED TO ADD (5/28): To add some more detail here, the rule makes absolutely no sense. If this were sensible, the TSA employee who checks the ticket against the ID would make the determination of whether the names were the same. Instead, the passenger is forced to go back to the airline, which, for a fee, changes the name on the ticket to match the ID. This latter system is no more secure. If anything, it's less secure. But rules are rules, so it's what has to happen.
An airplane hijacker -- a real one, someone with actual airplane hijacking experience -- was working at Heathrow Airport.
EDITED TO ADD (5/19): Or maybe he wasn't working at the airport itself. Anyone have any more information?
This is a big deal:
On May 13th, 2008 the Debian project announced that Luciano Bello found an interesting vulnerability in the OpenSSL package they were distributing. The bug in question was caused by the removal of the following lines of code from md_rand.c:

    MD_Update(&m,buf,j);
    [ .. ]
    MD_Update(&m,buf,j); /* purify complains */
Random numbers are used everywhere in cryptography, for both short- and long-term security. And, as we've seen here, security flaws in random number generators are really easy to accidentally create and really hard to discover after the fact. Back when the NSA was routinely weakening commercial cryptography, their favorite technique was reducing the entropy of the random number generator.
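The practical consequence of those missing lines was that the process ID became essentially the only entropy reaching the generator, leaving only about 32,768 possible output streams. Here is a toy Python analogy -- not OpenSSL's actual code -- of why that is fatal:

    import hashlib

    def toy_keygen(pid):
        # Stand-in for a PRNG whose seed has collapsed to just the PID.
        return hashlib.sha256(f"seed:{pid}".encode()).digest()

    # A victim generates a "key" with some PID unknown to the attacker...
    victim_key = toy_keygen(12345)

    # ...and the attacker recovers it by enumerating every possible PID.
    recovered = next(pid for pid in range(1, 32768)
                     if toy_keygen(pid) == victim_key)
    print(recovered)  # -> 12345

That is the whole attack: when the seed space is small enough to enumerate, every key the generator ever produced is effectively public.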
Only $15. Plus shipping, of course.
From the DHS and the FBI, a great movie-plot threat:
It is possible to introduce chemical or biological agents directly into external air-intakes or internal air-circulation systems. Unless the building has carbon filters (or the equivalent), volatile chemical agents would not be stopped and would enter the building unattenuated.
I'm sure glad my government is working on this stuff.
Last month a US court ruled that border agents can search your laptop, or any other electronic device, when you're entering the country. They can take your computer and download its entire contents, or keep it for several days. Customs and Border Protection has not published any rules regarding this practice, and I and others have written a letter to Congress urging it to investigate and regulate this practice.
But the US is not alone. British customs agents search laptops for pornography. And there are reports on the internet of this sort of thing happening at other borders, too. You might not like it, but it's a fact. So how do you protect yourself?
Encrypting your entire hard drive, something you should certainly do for security in case your computer is lost or stolen, won't work here. The border agent is likely to start this whole process with a "please type in your password". Of course you can refuse, but the agent can search you further, detain you longer, refuse you entry into the country and otherwise ruin your day.
You're going to have to hide your data. Set a portion of your hard drive to be encrypted with a different key - even if you also encrypt your entire hard drive - and keep your sensitive data there. Lots of programs allow you to do this. I use PGP Disk. TrueCrypt is also good, and free.
While customs agents might poke around on your laptop, they're unlikely to find the encrypted partition. (You can make the icon invisible, for some added protection.) And if they download the contents of your hard drive to examine later, you won't care.
Be sure to choose a strong encryption password. Details are too complicated for a quick tip, but basically anything easy to remember is easy to guess. (My advice is here.) Unfortunately, this isn't a perfect solution. Your computer might have left a copy of the password on the disk somewhere, and (as I also describe at the above link) smart forensic software will find it.
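One way to follow that advice is a passphrase of randomly chosen words, whose strength you can actually quantify. A minimal sketch, with a placeholder word list (a real diceware-style list has thousands of entries; 7,776 words gives roughly 12.9 bits per word):

    import math
    import secrets

    # Placeholder list -- use a real diceware-style list of several
    # thousand words in practice.
    WORDS = ["correct", "horse", "battery", "staple",
             "teapot", "ferry", "marshal", "cipher"]

    def passphrase(n_words, words=WORDS):
        return " ".join(secrets.choice(words) for _ in range(n_words))

    n = 6
    print(passphrase(n))
    print(f"~{n * math.log2(len(WORDS)):.0f} bits with this toy list")

The point of the design is that the words are chosen by a cryptographic random source, not by you; memorability comes from the words, and strength comes from the count.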
So your best defence is to clean up your laptop. A customs agent can't read what you don't have. You don't need five years' worth of email and client data. You don't need your old love letters and those photos (you know the ones I'm talking about). Delete everything you don't absolutely need. And use a secure file erasure program to do it (there's a sketch of what such a program involves below). While you're at it, delete your browser's cookies, cache and browsing history. It's nobody's business what websites you've visited. And turn your computer off - don't just put it to sleep - before you go through customs; that deletes other things. Think of all this as the last thing to do before you stow your electronic devices for landing.

Some companies now give their employees forensically clean laptops for travel, and have them download any sensitive data over a virtual private network once they've entered the country. They send any work back the same way, and delete everything again before crossing the border to go home. This is a good idea if you can do it.
If you can't, consider putting your sensitive data on a USB drive or even a camera memory card: even 16GB cards are reasonably priced these days. Encrypt it, of course, because it's easy to lose something that small. Slip it in your pocket, and it's likely to remain unnoticed even if the customs agent pokes through your laptop. If someone does discover it, you can try saying: "I don't know what's on there. My boss told me to give it to the head of the New York office." If you've chosen a strong encryption password, you won't care if he confiscates it.
Lastly, don't forget your phone and PDA. Customs agents can search those too: emails, your phone book, your calendar. Unfortunately, there's nothing you can do here except delete things.
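To make the secure-deletion advice above concrete, here is a bare-bones Python sketch of what a file shredder does: overwrite the bytes in place, then unlink. It's illustrative only -- journaling filesystems, SSD wear-leveling, and backups can keep copies an overwrite never touches, which is why dedicated tools (and full-disk encryption) exist:

    import os

    def shred(path, passes=3):
        # Overwrite the file in place with random bytes, then delete it.
        size = os.path.getsize(path)
        with open(path, "r+b", buffering=0) as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining > 0:
                    chunk = min(remaining, 1 << 20)  # 1 MB at a time
                    f.write(os.urandom(chunk))
                    remaining -= chunk
                os.fsync(f.fileno())
        os.remove(path)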
I know this all sounds like work, and that it's easier to just ignore everything here and hope you don't get searched. Today, the odds are in your favour. But new forensic tools are making automatic searches easier and easier, and the recent US court ruling is likely to embolden other countries. It's better to be safe than sorry.
This essay originally appeared in The Guardian.
EDITED TO ADD (5/18): Many people have pointed out to me that I advise people to lie to a government agent. That is, of course, illegal in the U.S. and probably most other countries -- and probably not the best advice for me to be on record as giving. So be sure you clear your story first with both your boss and the New York office.
Ten years ago I started Crypto-Gram. It was a monthly newsletter written entirely by me. No guest columns. No advertising. Nothing but me writing about security, published the 15th of the month every month. Now, 120 issues later, none of that has changed.
I started Crypto-Gram because I had a lot to say about security, and book-length commentaries were too slow and too infrequent. Sure, I was writing the occasional column in the occasional magazine, but those were also too slow and infrequent. Crypto-Gram was supposed to be my personal voice on security, sent directly to those who wanted to read it.
I originally thought about charging for Crypto-Gram. I knew of several newsletters that funded themselves through subscription fees, and figured that a couple of hundred subscribers at $150 or so would sustain it very nicely. I don't remember why I decided not to -- did someone convince me, or did I figure it out myself? -- but it was easily the smartest decision I made about this newsletter. If I'd charged money for the thing, no one would have read it. Since I didn't, lots of people subscribed.
There were 457 subscribers by the end of the first day. After that, circulation climbed slowly and steadily. Here are the totals for May of each year:
Those numbers hide a lot of readers, like the tens of thousands who read Crypto-Gram via the Web. I also know of people who forward my newsletter to hundreds of others. There are many foreign translations that have their own subscription lists. These days I estimate that I have about 25,000 newsletter readers not included in those numbers.
I have no idea where the initial batch of subscribers came from. Nor do I remember how people subscribed before the webpage form was done. I do remember my first big burst of subscribers, though. It was following my special issue after 9/11. I wrote something short for the September issue, but I found that I couldn't stop writing. Two weeks later, I published a special issue on the terrorist attacks. Readers forwarded that issue again and again, and I ended up with many new subscribers as a result.
Reader comments began earlier, in December 1998. I found I was getting some really intelligent comments from my readers -- especially those that disagreed with me -- and I wanted to publish some of them. Some of the disagreements were nasty. In October 1998, I started a column called "The Doghouse," where I made fun of snake-oil security products. Some of the companies didn't like being so characterized, and sent me threatening legal letters.
Turns out that publishing those sorts of threats as letters to Crypto-Gram was the best defense, even though my lawyers always discouraged it. None of these incidents ever went past the threatening stage, even though court papers were occasionally drafted.
Over the years, Crypto-Gram's focus has changed. Initially, it was all cryptography. Then, more computer and network security. Then -- especially after 9/11 -- more general security: terrorism, airplanes, ID cards, voting machines, and so on. And now, more economics and psychology of security. My career has been a progression from the specific to the general, and Crypto-Gram has generalized to reflect that.
The next big change to Crypto-Gram came in October 2004. I had been reading about blogging, and wondered for several months if switching Crypto-Gram over to blog format was a good idea or not. Again, it was about speed and frequency. I found that others were commenting on security stories faster, and that by the time Crypto-Gram would come out, people had already linked to other stories. A blog would allow me to get my commentary out even faster, and to be part of the initial discussions.
I went back and forth. Several people advised me to change, that blogging was the format of the future. I was skeptical, preferring to push my newsletter into my readers' mailboxes every month. I sent a survey to 400 of my subscribers -- 200 random subscribers and 200 people who had subscribed within the past month -- asking what they preferred. My eventual solution was the second smartest thing I did with this newsletter: to do both.
The Schneier on Security blog started out as Crypto-Gram entries, delivered daily. And the early blog entries looked a lot like Crypto-Gram articles, with links at the end. Over the following months I learned more about the blogging style, and the entries started looking more like blog entries. Now the blog is primary, and on the 15th of every month I take the previous month's blog entries and reconfigure them into Crypto-Gram format. Even today, most readers prefer to receive Crypto-Gram in their e-mail box every month -- even if they also read the blog online.
These days, I like both. I like the immediacy of the blog, and I like the e-mail format of Crypto-Gram. And even after ten years, I still like the writing.
People often ask me where I find the time to do all of that writing. It's an odd question for me, because it's what I enjoy doing. I find time at home, on airplanes, in hotel rooms, everywhere. Writing isn't a chore -- okay, maybe sometimes it is -- it's something that relaxes me. I enjoy putting my ideas down in a coherent narrative flow. And there's nothing that pleases me more than the fact that people read it.
The best fan mail I get from a reader says something like: "You changed the way I think." That's what I want to do. I want to change the way you think about security. I want to change the way you think about threats, and risk, and trade-offs, about security products and services, about security rhetoric in politics. It matters less if you agree with me or disagree, only that you're thinking differently.
Thank you. Thank you on this 10th anniversary issue. Thank you, long-time readers. Thank you, new readers. Thank you for continuing to read what I have to write. This is still a lot of fun -- and interesting and thought provoking -- for me. I hope it continues to be interesting, thought provoking, and fun for you.
On April 7 -- seven days late -- I announced the Third Annual Movie-Plot Threat Contest:
For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn't worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.
On May 7, I posted five semi-finalists out of the 327 blog comments:
Sadly, two of those five were above the 150-word limit. Out of the three remaining, I (with the help of my readers) have chosen a winner.
Presenting the winner of the Third Annual Movie-Plot Threat Contest, Aaron Massey:
Many Americans were shocked to hear the results of the research trials regarding heavy metals and toothpaste conducted by the New England Journal of Medicine, which FDA is only now attempting to confirm. This latest scare comes after hundreds of deaths were linked to toothpaste contaminated with diethylene glycol, a potentially dangerous chemical used in antifreeze.
Aaron wins, well, nothing really, except the fame and glory afforded by this blog. So give him some fame and glory. Congratulations.
The standard way to take control of someone else's computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it's still how most modern malware works.
Vulnerabilities are software mistakes--mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don't get patched, so the Internet is filled with known, exploitable vulnerabilities.
New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?
Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.
Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent--or protect against--those failures. Most software vulnerabilities don't ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.
People without the mindset sometimes think they can design security products, but they can't. And you see the results all over society--in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of "security" on their teams, but it wasn't someone who thought like an attacker.
This mindset is difficult to teach, and may be something you're born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities--again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others' algorithms and protocols. Good software security experts find vulnerabilities in others' code. Good airport security designers figure out new ways to subvert airport security. And so on.
This is so important that when someone shows me a security design by someone I don't know, my first question is, "What has the designer broken?" Anyone can design a security system that he cannot break. So when someone announces, "Here's my security system, and I can't break it," your first reaction should be, "Who are you?" If he's someone who has broken dozens of similar systems, his system is worth looking at. If he's never broken anything, the chance is zero that it will be any good.
Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible--and more and less legal--ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn't whether it's ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.
This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus's half here.
Actually, I think this is a fine idea -- as long as they only use computers that they legally own.
I have to admit, I'm kind of curious myself.
An intelligent personalized agent monitors, regulates, and advises a user in decision-making processes for efficiency or safety concerns. The agent monitors an environment and present characteristics of a user and analyzes such information in view of stored preferences specific to one of multiple profiles of the user. Based on the analysis, the agent can suggest or automatically implement a solution to a given issue or problem. In addition, the agent can identify another potential issue that requires attention and suggests or implements action accordingly. Furthermore, the agent can communicate with other users or devices by providing and acquiring information to assist in future decisions. All aspects of environment observation, decision assistance, and external communication can be flexibly limited or allowed as desired by the user.
Note that Bill Gates and Ray Ozzie are co-inventors.
Definitely a good way to look at it:
Fear, in other words, is a tax, and al-Qaeda and its ilk have done better at extracting it from Americans than the Internal Revenue Service. Think about the extra half-hour millions of airline passengers waste standing in security lines; the annual cost in lost work hours runs into the billions. Add to that the freight delays at borders, ports and airports, the cost of checking money transfers as well as goods in transit, the wages for beefed-up security forces around the world. And that doesn't even attempt to put a price tag on the compression of civil liberties or the loss of human dignity from being groped in full public view by Transportation Security Administration personnel at the airport or from having to walk barefoot through the metal detector, holding up your beltless pants. This global transaction tax represents the most significant victory of Terror International to date.
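The first of those claims is easy to sanity-check with a back-of-envelope calculation; both input figures below are rough assumptions of mine, not numbers from the article:

    # Rough sanity check of the "billions" claim.
    enplanements_per_year = 700e6   # approximate annual U.S. enplanements
    extra_hours_each = 0.5          # the half-hour in security lines
    value_of_time = 20.0            # dollars per hour, a common rough figure

    cost = enplanements_per_year * extra_hours_each * value_of_time
    print(f"${cost / 1e9:.1f} billion per year")  # -> $7.0 billion per year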
In Beyond Fear I wrote:
Security is a tax on the honest.
EDITED TO ADD (4/10): Link fixed.
Last month I gave a talk at InfoSecurity Europe in London. The title was "Reconceptualizing Security," or maybe "The Theater of Security," and it is a follow-on to my work on the psychology of security. I haven't yet written this work up, but you can listen to or watch my talk.
I don't know what I think of Sweet Dreams Security.
A handy guide:
A service called World Tracker lets you use data from cell phone towers and GPS systems to pinpoint anyone’s exact whereabouts, any time — as long as they’ve got their phone on them.
Excellent article, chronicling the surveillance debate from the mid-1980s until today. Don't expect good coverage of the current debate, however: the legality of the NSA's recent domestic eavesdropping program, and the legality of the assistance provided by the telcos.
Remember the two men who were exhibiting "unusual behavior" on a Washington-state ferry last summer?
The agency's Seattle field office, along with the Washington Joint Analytical Center, was still seeking the men's identities and whereabouts Wednesday as ferry service was temporarily shut down when a suspicious package was found in a ferry bathroom and taken away by authorities.
Turns out they were tourists, not terrorists:
Turns out the men, both citizens of a European Union nation, were captivated by the car-carrying capacity of local ferries.
A month ago I announced the Third Annual Movie-Plot Threat Contest:
For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn't worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.
Submissions are in. The blog entry has 327 comments. I've read them all, and here are the semi-finalists:
Cast your vote; I'll announce the winner on the 15th.
Seems obvious to me:
"I reject the notion that Al Qaeda is waiting for 'the big one' or holding back an attack," Sheehan writes. "A terrorist cell capable of attacking doesn't sit and wait for some more opportune moment. It's not their style, nor is it in the best interest of their operational security. Delaying an attack gives law enforcement more time to detect a plot or penetrate the organization."
I've ordered Sheehan's book, Crush the Cell: How to Defeat Terrorism Without Terrorizing Ourselves.
Massive investment in CCTV cameras to prevent crime in the UK has failed to have a significant impact, despite billions of pounds spent on the new technology, a senior police officer piloting a new database has warned. Only 3% of street robberies in London were solved using CCTV images, despite the fact that Britain has more security cameras than any other country in Europe.
As many as 400 of the unaccounted for laptops belong to the department’s Anti-Terrorism Assistance Program, according to officials familiar with the findings.
Bet you anything those laptops weren't encrypted.
On April 27, 2007, Estonia was attacked in cyberspace. Following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial, the networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and -- in many cases -- shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.
It was hyped as the first cyberwar: Russia attacking Estonia in cyberspace. But nearly a year later, evidence that the Russian government was involved in the denial-of-service attacks still hasn't emerged. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were pissed off over the statue incident.
You know you've got a problem when you can't tell a hostile attack by another nation from bored kids with an axe to grind.
Separating cyberwar, cyberterrorism and cybercrime isn't easy; these days you need a scorecard to tell the difference. It's not just that it’s hard to trace people in cyberspace, it's that military and civilian attacks -- and defenses -- look the same.
The traditional term for technology the military shares with civilians is "dual use." Unlike hand grenades and tanks and missile targeting systems, dual-use technologies have both military and civilian applications. Dual-use technologies used to be exceptions; even things you'd expect to be dual use, like radar systems and toilets, were designed differently for the military. But today, almost all information technology is dual use. We both use the same operating systems, the same networking protocols, the same applications, and even the same security software.
And attack technologies are the same. The recent spurt of targeted hacks against U.S. military networks, commonly attributed to China, exploit the same vulnerabilities and use the same techniques as criminal attacks against corporate networks. Internet worms make the jump to classified military networks in less than 24 hours, even if those networks are physically separate. The Navy Cyber Defense Operations Command uses the same tools against the same threats as any large corporation.
Because attackers and defenders use the same IT technology, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the "equities issue," and it can be summarized as follows: When a military discovers a vulnerability in a dual-use technology, they can do one of two things. They can alert the manufacturer and fix the vulnerability, thereby protecting both the good guys and the bad guys. Or they can keep quiet about the vulnerability and not tell anyone, thereby leaving the good guys insecure but also leaving the bad guys insecure.
The equities issue has long been hotly debated inside the NSA. Basically, the NSA has two roles: eavesdrop on their stuff, and protect our stuff. When both sides use the same stuff, the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff.
In the 1980s and before, the tendency of the NSA was to keep vulnerabilities to themselves. In the 1990s, the tide shifted, and the NSA was starting to open up and help us all improve our security defense. But after the attacks of 9/11, the NSA shifted back to the attack: vulnerabilities were to be hoarded in secret. Slowly, things in the U.S. are shifting back again.
So now we're seeing the NSA help secure Windows Vista and releasing their own version of Linux. The DHS, meanwhile, is funding a project to secure popular open source software packages, and across the Atlantic the UK’s GCHQ is finding bugs in PGPDisk and reporting them back to the company. (NSA is rumored to be doing the same thing with BitLocker.)
I'm in favor of this trend, because my security improves for free. Whenever the NSA finds a security problem and gets the vendor to fix it, our security gets better. It's a side-benefit of dual-use technologies.
But I want governments to do more. I want them to use their buying power to improve my security. I want them to offer countrywide contracts for software, both security and non-security, that have explicit security requirements. If these contracts are big enough, companies will work to modify their products to meet those requirements. And again, we all benefit from the security improvements.
The only example of this model I know about is a U.S. government-wide procurement competition for full-disk encryption, but this can certainly be done with firewalls, intrusion detection systems, databases, networking hardware, even operating systems.
When it comes to IT technologies, the equities issue should be a no-brainer. The good uses of our common hardware, software, operating systems, network protocols, and everything else vastly outweigh the bad uses. It's time that the government used its immense knowledge and experience, as well as its buying power, to improve cybersecurity for all of us.
This essay originally appeared on Wired.com.
I just received the second edition of Ross Anderson's Security Engineering in the mail. It's beautiful.
This is the best book on the topic there is, and I recommend it to everyone working in this field -- and not just because I wrote the foreword. You can download the preface and six chapters. (You can also download the entire first edition.)
This isn't my Password Safe. This is PasswordSafe.com. Password Safe is an open-source application that lives on your computer and encrypts your passwords. PasswordSafe.com lets you store your passwords on their server. They promise not to look at them.
Can I trust PasswordSafe.com?
This week, on a writing blog called Elephant Words, every story is based on this squid image. Click forward on the blog entries to see the fiction.
(It is certainly colossal: 1,089 pounds and 26 feet long.)
EDITED TO ADD (5/9): More.
Two weeks ago I was interviewed on Dutch radio. The introduction and questions are in Dutch, but my answers are in English.
Three weeks ago I was interviewed on Anti War Radio. It was an odd interview, starting from my essay "Portrait of the Modern Terrorist as an Idiot" and then meandering into the role of government versus corporations in security.
I'm not trying to brag. It's just easier for me if these links are all in one place so I can search for them later.
In 1994, I published my second book, Protect Your Macintosh. You've probably never heard of it; it died a quiet and lonely death.
Going through some boxes, I found a dozen copies of the book: first and, I think, only printing. I'm willing to send one to anyone who wants one for $5 postage. (That's in the U.S. If you're elsewhere, we'll figure out postage.) Please let me know via e-mail if you're interested.
And I can assure you that, fourteen years later, there's absolutely nothing of practical value in the book. This offer should only interest collectors. And even them, not that much.
I also have seven copies of my third book, E-Mail Security, from 1995, which also has nothing in it of any practical value anymore. Again, $5 for postage.
EDITED TO ADD (5/3): Sold out; sorry.
If this weren't so sad, it would be funny:
The problem with federal air marshals' (FAM) names matching those of suspected terrorists on the no-fly list has persisted for years, say air marshals familiar with the situation.
Seriously -- if these people can't get their names off the list, what hope do the rest of us have? Not that the no-fly list has any real value, anyway.
Snarky, but basically correct:
3. Male Family Members and Friends (Especially if they are drunk and you are young and foreign-born.)
A nice essay on security trade-offs:
The mismatch between the resources devoted to fighting organised crime compared with those directed towards counter-terrorism is unnerving. Government says that there are millions of pounds in police budgets that should be devoted to dealing with organised crime. In truth, only a handful of British police forces know how to tackle it. The ridiculous Victorian patchwork of shire constabularies means that most are too small to tackle serious criminality that doesn't recognise country, never mind county, borders.