Crypto-Gram

November 15, 2010

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-1011.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.


In this issue:


Crowdsourcing Surveillance

Internet Eyes is a U.K. startup designed to crowdsource digital surveillance. People pay a small fee to become a “Viewer.” Once they do, they can log onto the site and view live anonymous feeds from surveillance cameras at retail stores. If they notice someone shoplifting, they can alert the store owner. Viewers get rated on their ability to differentiate real shoplifting from false alarms, can win 1000 pounds if they detect the most shoplifting in some time interval, and otherwise get paid a wage that most likely won’t cover their initial fee.

Although the system makes some nod towards privacy, groups like Privacy International oppose it for fostering a culture of citizen spies. More fundamentally, though, I don’t think the system will work. Internet Eyes is primarily relying on voyeurism to compensate its Viewers. But most of what goes on in a retail store is incredibly boring. Some of it is actually voyeuristic, and very little of it is criminal. The incentives just aren’t there for Viewers to do more than peek, and there’s no obvious way to discourage them from siding with the shoplifter and just watching the scenario unfold.

This isn’t the first time groups have tried to crowdsource surveillance camera monitoring. Texas’s Virtual Border Patrol tried the same thing: deputizing the general public to monitor the Texas-Mexico border. It ran out of money last year, and was widely criticized as a joke.

This system suffered the same problems as Internet Eyes—not enough incentive to do a good job, boredom because crime is the rare exception—as well as the fact that false alarms were very expensive to deal with.

Both of these systems remind me of the one time this idea was conceptualized correctly. Invented in 2003 by my friend and colleague Jay Walker, US HomeGuard also tried to crowdsource surveillance camera monitoring. But this system focused on one very specific security concern: people in no-man’s areas. These are areas between fences at nuclear power plants or oil refineries, border zones, areas around dams and reservoirs, and so on: areas where there should never be anyone.

The idea is that people would register to become “spotters.” They would get paid a decent wage (that and patriotism were the incentives), receive a stream of still photos, and be asked a very simple question: “Is there a person or a vehicle in this picture?” If a spotter clicked “yes,” the photo—and the camera—would be referred to whatever professional response the camera owner had set up.

HomeGuard would monitor the monitors in two ways. One, by regularly sending stored, known photos to people to verify that they were paying attention. And two, by sending live photos to multiple spotters and correlating the results, and to many more spotters if one of them claimed to have spotted a person or vehicle.
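
To make that two-pronged check concrete, here is a minimal Python sketch of spotter quality control: interleave photos with known answers to score each spotter, and require a quorum of spotters to confirm a live alarm before referring it onward. The class names, error rate, and quorum size are my own illustrative assumptions, not details of HomeGuard's actual design.

import random

# Hypothetical sketch of "monitoring the monitors." Spotters are scored on
# photos with known answers, and a live alarm is referred onward only if a
# quorum of additional spotters agrees. Names and thresholds are invented.

KNOWN_PHOTOS = {"test-001": False, "test-002": True}  # id -> person/vehicle present?

class Spotter:
    def __init__(self, name):
        self.name, self.correct, self.total = name, 0, 0

    def answer(self, truth):
        # Simulate a mostly attentive viewer with a 5% error rate.
        return truth if random.random() > 0.05 else not truth

def score_on_known_photo(spotter, photo_id):
    truth = KNOWN_PHOTOS[photo_id]
    spotter.total += 1
    spotter.correct += int(spotter.answer(truth) == truth)

def confirm_alarm(spotters, truth, quorum=3):
    # Escalate a flagged live photo to several spotters; refer it to the
    # camera owner's responders only if enough of them agree.
    return sum(1 for s in spotters if s.answer(truth)) >= quorum

spotters = [Spotter("spotter-%d" % i) for i in range(5)]
for s in spotters:
    for photo_id in KNOWN_PHOTOS:
        score_on_known_photo(s, photo_id)
print(["%s: %d/%d" % (s.name, s.correct, s.total) for s in spotters])
print("alarm confirmed:", confirm_alarm(spotters, truth=True))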

Just knowing that there’s a person or a vehicle in a no-man’s area is only the first step in a useful response, and HomeGuard envisioned a bunch of enhancements to the rest of that system. Flagged photos could be sent to the digital phones of patrolling guards, cameras could be controlled remotely by those guards, and speakers in the cameras could issue warnings. Remote citizen spotters were only useful for that first step, looking for a person or a vehicle in a photo that shouldn’t contain any. Only real guards at the site itself could tell an intruder from the occasional maintenance person.

Of course the system isn’t perfect. A would-be infiltrator could sneak past the spotters by holding a bush in front of him, or disguising himself as a vending machine. But it does fill in a gap in what fully automated systems can do, at least until image processing and artificial intelligence get significantly better.

HomeGuard never got off the ground. There was never any good data about whether spotters were more effective than motion sensors as a first level of defense. But more importantly, Walker says that the politics surrounding homeland security money post-9/11 was just too great to penetrate, and that as an outsider he couldn’t get his ideas heard. Today, probably, the patriotic fervor that gripped so many people post-9/11 has dampened, and he’d probably have to pay his spotters more than he envisioned seven years ago. Still, I thought it was a clever idea then and I still think it’s a clever idea—and it’s an example of how to do surveillance crowdsourcing correctly.

Making the system more general runs into all sorts of problems. An amateur can spot a person or vehicle pretty easily, but is much harder-pressed to notice a shoplifter. The privacy implications of showing random people pictures of no-man’s-lands are minimal, while a busy store is another matter—stores have enough individuality to be identifiable, as do people. Public photo tagging will even allow the process to be automated. And, of course, there’s the normalization of a spy-on-your-neighbor surveillance society, where it’s perfectly reasonable to watch each other on cameras just in case one of us does something wrong.

This essay first appeared in ThreatPost.
http://threatpost.com/en_us/s/…
Internet Eyes:
http://interneteyes.co.uk/
http://www.bbc.co.uk/news/uk-11460897

Opposition to Internet Eyes:
http://www.disinfo.com/2010/10/…
Virtual Border Patrol:
http://homelandsecuritynewswire.com/…
http://thelede.blogs.nytimes.com/2009/03/26/…
http://immigrationclearinghouse.org/…
US HomeGuard:
http://www.csoonline.com/article/218490/…
http://www.wired.com/wired/archive/11.06/start.html?…
http://dissidentvoice.org/Articles6/…
http://www.wired.com/wired/archive/11.06/start.html?…

Disguising yourself as a vending machine:
http://laughlines.blogs.nytimes.com/2007/10/20/…


Internet Quarantines

Last month, Scott Charney of Microsoft proposed that infected computers be quarantined from the Internet. Using a public health model for Internet security, the idea is that infected computers spreading worms and viruses are a risk to the greater community and thus need to be isolated. Internet service providers would administer the quarantine, and would also clean up and update users’ computers so they could rejoin the greater Internet.

This isn’t a new idea. Already there are products that test computers trying to join private networks, and only allow them access if their security patches are up-to-date and their antivirus software certifies them as clean. Computers denied access are sometimes shunned to a limited-capability sub-network where all they can do is download and install the updates they need to regain access. This sort of system has been used with great success at universities and end-user-device-friendly corporate networks. They’re happy to let you log in with any device you want—this is the consumerization trend in action—as long as your security is up to snuff.
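
As a rough illustration of the admission decision such products make, here is a minimal Python sketch; the field names, patch level, and network labels are invented for illustration and don't correspond to any particular vendor's product.

# Minimal sketch of a network-admission check: fully patched and clean
# machines join the production network, everything else is shunted to a
# limited remediation subnet. All names and values are illustrative.

REQUIRED_PATCH_LEVEL = 42          # hypothetical minimum patch level
PRODUCTION_VLAN = "corp"
REMEDIATION_VLAN = "quarantine"    # can only reach the update servers

def admit(device):
    """Return the network segment a connecting device should be placed on."""
    patched = device.get("patch_level", 0) >= REQUIRED_PATCH_LEVEL
    clean = device.get("antivirus_clean", False)
    return PRODUCTION_VLAN if (patched and clean) else REMEDIATION_VLAN

laptop = {"owner": "alice", "patch_level": 41, "antivirus_clean": True}
print(admit(laptop))               # -> "quarantine": download updates, try again
laptop["patch_level"] = 42
print(admit(laptop))               # -> "corp"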

Charney’s idea is to do that on a larger scale. To implement it we have to deal with two problems. There’s the technical problem of making the quarantine work in the face of malware designed to evade it, and the social problem of ensuring that people don’t have their computers unduly quarantined. Understanding the problems requires us to understand quarantines in general.

Quarantines have been used to contain disease for millennia. In general several things need to be true for them to work. One, the thing being quarantined needs to be easily recognized. It’s easier to quarantine a disease if it has obvious physical characteristics: fever, boils, etc. If there aren’t any obvious physical effects, or if those effects don’t show up while the disease is contagious, a quarantine is much less effective.

Similarly, it’s easier to quarantine an infected computer if that infection is detectable. As Charney points out, his plan is only effective against worms and viruses that our security products recognize, not against those that are new and still undetectable.

Two, the separation has to be effective. The leper colonies on Molokai and Spinalonga both worked because it was hard for the quarantined to leave. Quarantined medieval cities worked less well because it was too easy to leave, or—when the diseases spread via rats or mosquitoes—because the quarantine was targeted at the wrong thing.

Computer quarantines have been generally effective because the users whose computers are being quarantined aren’t sophisticated enough to break out of the quarantine, and find it easier to update their software and rejoin the network legitimately.

Three, only a small section of the population must need to be quarantined. The solution works only if it’s a minority of the population that’s affected, either with physical diseases or computer diseases. If most people are infected, overall infection rates aren’t going to be slowed much by quarantining. Similarly, a quarantine that tries to isolate most of the Internet simply won’t work.

Four, the benefits must outweigh the costs. Medical quarantines are expensive to maintain, especially if people are being quarantined against their will. Determining who to quarantine is either expensive (if it’s done correctly) or arbitrary, authoritarian, and abuse-prone (if it’s done badly). It could even be both. The value to society must be worth it.

It’s the last point that Charney and others emphasize. If Internet worms were only damaging to the infected, we wouldn’t need a societally imposed quarantine like this. But they’re damaging to everyone else on the Internet, spreading and infecting others. At the same time, we can implement systems that quarantine cheaply. The value to society far outweighs the cost.

That makes sense, but once you move quarantines from isolated private networks to the general Internet, the nature of the threat changes. Imagine an intelligent and malicious infectious disease: that’s what malware is. The current crop of malware ignores quarantines because they’re few and far enough between not to affect its effectiveness.

If we tried to implement Internet-wide—or even countrywide—quarantining, worm-writers would start building in ways to break the quarantine. So instead of nontechnical users not bothering to break quarantines because they don’t know how, we’d have technically sophisticated virus-writers trying to break quarantines. Implementing the quarantine at the ISP level would help, and if the ISP monitored computer behavior, not just specific virus signatures, it would be somewhat effective even in the face of evasion tactics. But evasion would be possible, and we’d be stuck in another computer security arms race. This isn’t a reason to dismiss the proposal outright, but it is something we need to think about when weighing its potential effectiveness.
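
For a flavor of what behavior-based (rather than signature-based) detection at an ISP could look like, here is a toy Python sketch that flags customers by outbound-mail fan-out. The feature and threshold are invented for illustration; a real deployment would need whitelists, far better features, and an appeals process.

# Toy sketch of behavior-based bot flagging: count how many distinct hosts
# each customer IP contacts on TCP port 25 (outbound mail) within an hour.
# The threshold is an invented illustration, not an operational value.

SMTP_FANOUT_THRESHOLD = 500

def suspect_bots(flow_records):
    """flow_records: iterable of (src_ip, dst_ip, dst_port) tuples from one hour."""
    destinations = {}
    for src, dst, port in flow_records:
        if port == 25:
            destinations.setdefault(src, set()).add(dst)
    return [src for src, dsts in destinations.items()
            if len(dsts) > SMTP_FANOUT_THRESHOLD]

flows = [("10.0.0.7", "203.0.%d.%d" % (i // 200, i % 200), 25) for i in range(1200)]
flows.append(("10.0.0.8", "198.51.100.5", 443))
print(suspect_bots(flows))         # -> ['10.0.0.7']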

Additionally, there’s the problem of who gets to decide which computers to quarantine. It’s easy on a corporate or university network: the owners of the network get to decide. But the Internet doesn’t have that sort of hierarchical control, and denying people access without due process is fraught with danger. What are the appeal mechanisms? The audit mechanisms? Charney proposes that ISPs administer the quarantines, but there would have to be some central authority that decided what degree of infection would be sufficient to impose the quarantine. Although this is being presented as a wholly technical solution, it’s these social and political ramifications that are the most difficult to determine and the easiest to abuse.

Once we implement a mechanism for quarantining infected computers, we create the possibility of quarantining them in all sorts of other circumstances. Should we quarantine computers that don’t have their patches up to date, even if they’re uninfected? Might there be a legitimate reason for someone to avoid patching his computer? Should the government be able to quarantine someone for something he said in a chat room, or a series of search queries he made? I’m sure we don’t think it should, but what if that chat and those queries revolved around terrorism? Where’s the line?

Microsoft would certainly like to quarantine any computers it feels are not running legal copies of its operating system or applications software. The music and movie industry will want to quarantine anyone it decides is downloading or sharing pirated media files—they’re already pushing similar proposals.

A security measure designed to keep malicious worms from spreading over the Internet can quickly become an enforcement tool for corporate business models. Charney addresses the need to limit this kind of function creep, but I don’t think it will be easy to prevent; it’s an enforcement mechanism just begging to be used.

Once you start thinking about implementation of quarantine, all sorts of other social issues emerge. What do we do about people who need the Internet? Maybe VoIP is their only phone service. Maybe they have an Internet-enabled medical device. Maybe their business requires the Internet to run. The effects of quarantining these people would be considerable, even potentially life-threatening. Again, where’s the line?

What do we do if people feel they are quarantined unjustly? Or if they are using nonstandard software unfamiliar to the ISP? Is there an appeals process? Who administers it? Surely not a for-profit company.

Public health is the right way to look at this problem. This conversation—between the rights of the individual and the rights of society—is a valid one to have, and this solution is a good possibility to consider.

There are some applicable parallels. We require drivers to be licensed and cars to be inspected not because we worry about the danger of unlicensed drivers and uninspected cars to themselves, but because we worry about their danger to other drivers and pedestrians. The small number of parents who don’t vaccinate their kids have already caused minor outbreaks of whooping cough and measles among the greater population. We all suffer when someone on the Internet allows his computer to get infected. How we balance that with individuals’ rights to maintain their own computers as they see fit is a discussion we need to start having.

This essay previously appeared on Forbes.com.
http://www.forbes.com/2010/11/10/…

Charney’s proposal:
http://blogs.technet.com/b/microsoft_on_the_issues/…
http://www.bbc.co.uk/go/news/technology-11483008/…
http://news.cnet.com/8301-27080_3-10462649-245.html

Proposals to cut off file sharers:
http://news.bbc.co.uk/2/hi/7240234.stm
http://www.zeropaid.com/news/9114/…


News

Researchers are working on a way to fingerprint telephone calls. The system can be used to differentiate telephone calls from your bank from telephone calls from someone in Nigeria pretending to be from your bank. Unless your bank is outsourcing its customer support to Nigeria, of course.
http://www.theregister.co.uk/2010/10/06/…
http://www.gatech.edu/newsroom/release.html?nid=61428

Former Denver Broncos quarterback on hiding in plain sight.
http://sportsillustrated.cnn.com/vault/article/…
Was the software used in the Predator drones pirated?
http://www.fastcompany.com/1695219/…
http://www.theregister.co.uk/2010/09/24/cia_netezza/
The obvious joke is that this is what you get when you go with the low bidder, but it doesn’t have to be that way. And there’s nothing special about this being a government procurement; any bespoke IT procurement needs good contractual oversight.

I am the program chair for the next Workshop on the Economics of Information Security, WEIS 2011, which is to be held next June in Washington, DC. Submissions are due at the end of February. Please forward and repost the call for papers.
http://weis2011.econinfosec.org/
http://weis2011.econinfosec.org/cfpart.html

Electronic car-lock denial-of-service attack:
https://www.schneier.com/blog/archives/2010/10/…

Security hole in FaceTime for Mac.
http://arstechnica.com/apple/news/2010/10/…
It’s been fixed.
http://www.electronista.com/articles/10/10/22/…
Here’s a long list of declassified NSA documents. These items are not online; they’re at the National Archives and Records Administration in College Park, MD. You can either ask for copies by mail under FOIA (at 75 cents per page) or come in person. There, you can read and scan them for free, or photocopy them for about 20 cents a page.
http://www.nsa.gov/public_info/declass/entries.shtml

Seymour Hersh on cyberwar, from The New Yorker.
http://www.newyorker.com/reporting/2010/11/01/…

Firesheep is a new Firefox plugin that makes it easy for you to hijack other people’s social network connections. Basically, Facebook authenticates clients with cookies. If someone is using a public WiFi connection, the cookies are sniffable. Firesheep uses WinPcap to capture and display the authentication information for accounts it sees, allowing you to hijack the connection. To protect against this attack, you have to encrypt your entire session under TLS—not just the initial authentication. Or stop logging in to Facebook from public networks.
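
A minimal sketch of the server-side half of that fix: keep the whole session under TLS, mark the session cookie Secure so it is never sent in the clear, and tell browsers to stay on HTTPS. Flask is used purely as an example framework here; the header and cookie flags are standard, but everything else is illustrative rather than Facebook's actual setup.

from flask import Flask, make_response

app = Flask(__name__)

@app.after_request
def enforce_tls(resp):
    # HSTS: tell browsers to use HTTPS for every future request to this site.
    resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return resp

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    # Secure: the cookie is only ever sent over TLS, so it can't be sniffed
    # on open WiFi. HttpOnly: it isn't readable by injected JavaScript.
    resp.set_cookie("session", "opaque-random-token", secure=True, httponly=True)
    return resp

if __name__ == "__main__":
    # In production this would sit behind a real TLS front end; "adhoc" needs pyOpenSSL.
    app.run(ssl_context="adhoc")
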
http://codebutler.github.com/firesheep/
http://techcrunch.com/2010/10/25/firesheep/
http://windowssecrets.com/2010/11/04/…
http://www.shortestpathfirst.net/2010/10/29/…
Old—but recently released—document discussing the bugging of the Russian embassy in 1940. The document also mentions bugging the embassies of France, Germany, Italy, and Japan.
http://www.scribd.com/doc/39557615/…
New Orleans is scrapping its surveillance cameras because they’re not worth it.
http://www.nola.com/politics/index.ssf/2010/10/…
http://topics.nola.com/tag/crime%20cameras/index.html

Good blog post on the militarization of the Internet.
http://scrawford.net//…

Halloween and the irrational fear of stranger danger:
http://online.wsj.com/article/…
Also this:
http://www.theatlantic.com/food/archive/2010/10/…
Wondermark comments:
http://wondermark.com/567/

This is an interesting paper about control fraud. It’s by William K. Black, the Executive Director of the Institute for Fraud Prevention. “Individual ‘control frauds’ cause greater losses than all other forms of property crime combined. They are financial super-predators.” Black is talking about control fraud by both heads of corporations and heads of state, so that’s almost certainly a true statement. His main point, though, is that our legal systems don’t do enough to discourage control fraud.
https://www.schneier.com/blog/archives/2010/11/…

Dan Geer on “Cybersecurity and National Policy.”
http://www.harvardnsj.com/2010/04/…

Last month the police arrested Farooque Ahmed for plotting a terrorist attack on the D.C. Metro system. However, it’s not clear how much of the plot was his idea and how much was the idea of some paid FBI informants.
http://www.salon.com/news/politics/war_room/2010/10/…
Of course, the police are now using this fake bomb plot to justify random bag searching in the Metro.
http://www.wtopnews.com/?nid=25&sid=2097181
It’s a dumb idea:
https://www.schneier.com/blog/archives/2005/07/…
This is the problem with thoughtcrime. Entrapment is much too easy.
https://www.schneier.com/blog/archives/2010/09/…
Much the same thing was written in The Economist blog.
http://www.economist.com/s/democracyinamerica/…
The business of botnets can be lucrative.
http://www.networkworld.com/news/2010/…
Paper on the market price of bots:
http://www.icsi.berkeley.edu/cgi-bin/pubs/…

Good article on security options for the Washington Monument. I like the suggestion of closing it until we’re ready to accept that there is always risk.
http://www.washingtonpost.com/wp-dyn/content/…
More information on the decision process:
http://www.washingtonpost.com/wp-dyn/content/…
“A Social Network Approach to Understanding an Insurgency”
http://www.netscience.usma.edu/publications/…

“Bulletproof” service providers: ISPs that are effectively immune to takedown notices and offer services to illegitimate website operators.
http://krebsonsecurity.com/2010/11/…

Camouflaging test cars from competitors and the press:
http://www.nytimes.com/2010/11/07/automobiles/…

Long article on convicted hacker Albert Gonzalez from The New York Times Magazine.
http://www.nytimes.com/2010/11/14/magazine/…


Cargo Security

The New York Times writes: “Despite the increased scrutiny of people and luggage on passenger planes since 9/11, there are far fewer safeguards for packages and bundles, particularly when loaded on cargo-only planes.”

Well, of course. We’ve always known this. We’ve not worried about terrorism on cargo planes because it isn’t very terrorizing. Packages aren’t people. If a passenger plane blows up, it affects a couple of hundred people. If a cargo plane blows up, it just affects the crew.

Cargo that is loaded onto passenger planes should be subjected to the same level of security as passenger luggage. Cargo that is loaded onto cargo planes should be treated no differently from cargo loaded into ships, trains, trucks, and the trunks of cars.

Of course: now that the media is talking about cargo security, we have to “do something.” (Something must be done. This is something. Therefore, we must do it.) But if we’re so scared that we have to devote resources to this kind of terrorist threat, we’ve well and truly lost.

Also note: the plot—it’s still unclear how serious it was—wasn’t uncovered by any security screening, but by intelligence gathering. The Washington Post writes: “Intelligence officials were onto the suspected plot for days, officials said. The packages in England and Dubai were discovered after Saudi Arabian intelligence picked up information related to Yemen and passed it on to the U.S., two officials said.”

This is how you fight terrorism: not by defending against specific threats, but through intelligence, investigation, and emergency response.

New York Times article:
http://www.nytimes.com/2010/10/30/us/30cargo.html

Washington Post article:
http://www.washingtonpost.com/wp-dyn/content/…
My essay on intelligence, investigation, and emergency response:
http://www.schneier.com/essay-292.html


Changes in Airplane Security

1. The TSA is banning toner cartridges over 16 ounces, because that’s what the Yemeni bombers used. There’s some impressive magical thinking going on here.

2. Because people need to remove their belts before going into full-body scanners, the TSA is making us remove our belts even when we’re not going through full-body scanners. European airports have made us remove our belts for years. My normal tactic is to pull my shirt tails out of my pants and over my belt. Then I flash my waist and tell them I’m not wearing a belt. It doesn’t set off the metal detector, so they don’t notice.

3. Now the terrorists have really affected me personally: they’re forcing us to turn off airplane WiFi. No, it’s not that the Yemeni package bombs had a WiFi triggering mechanism—they seem to have had a cell phone triggering mechanism, dubious at best—but we can *imagine* an Internet-based triggering mechanism. Put together a sloppy and unsuccessful package bomb with an imagined triggering mechanism, and you have a *new and dangerous threat* that—even though it was a threat ever since the first airplane got WiFi capability—must be immediately dealt with right now.

Please, let’s not ever tell the TSA about timers. Or altimeters.

Belts:
http://www.salon.com/technology/ask_the_pilot/2010/…
Toner cartridges:
http://www.msnbc.msn.com/id/40072889/ns/…

In-flight WiFi:
http://www.newscientist.com/article/dn19665
http://gizmodo.com/5679794/…
Using a cell phone to detonate a plane bomb:
http://www.wired.com/dangerroom/2010/11/…
While we’re talking about the TSA, be sure to opt out of the full-body scanners.
http://www.theatlantic.com/national/archive/2010/10/…
And remember your sense of humor when a TSA officer slips white powder into your suitcase and then threatens you with arrest.
http://www.thesmokinggun.com/documents/stupid/…


Young Man in “Old Man” Mask Boards Plane in Hong Kong

It’s kind of an amazing story. A young Asian man used a rubber mask to disguise himself as an old Caucasian man and, with a passport photo that matched his disguise, got through all customs and airport security checks and onto a plane to Canada.

The fact that this sort of thing happens occasionally doesn’t surprise me. It’s human nature that we miss this sort of thing. I wrote about it in Beyond Fear (pages 153-4):

No matter how much training they get, airport screeners routinely
miss guns and knives packed in carry-on luggage. In part, that’s
the result of human beings having developed the evolutionary
survival skill of pattern matching: the ability to pick out
patterns from masses of random visual data. Is that a ripe fruit
on that tree? Is that a lion stalking quietly through the grass?
We are so good at this that we see patterns in anything, even if
they’re not really there: faces in inkblots, images in clouds, and
trends in graphs of random data. Generating false positives helped
us stay alive; maybe that wasn’t a lion that your ancestor saw,
but it was better to be safe than sorry. Unfortunately, that
survival skill also has a failure mode. As talented as we are at
detecting patterns in random data, we are equally terrible at
detecting exceptions in uniform data. The quality-control
inspector at Spacely Sprockets, staring at a production line
filled with identical sprockets looking for the one that is
different, can’t do it. The brain quickly concludes that all the
sprockets are the same, so there’s no point paying attention. Each
new sprocket confirms the pattern. By the time an anomalous
sprocket rolls off the assembly line, the brain simply doesn’t
notice it. This psychological problem has been identified in
inspectors of all kinds; people can’t remain alert to rare events,
so they slip by.

A customs officer spends hours looking at people and comparing their faces with their passport photos. They do it on autopilot. Will they catch someone in a rubber mask that looks like their passport photo? Probably, but certainly not all the time.

And yes, this is a security risk, but it’s not a big one. Because while—occasionally—a gun can slip through a metal detector or a masked man can slip through customs, it doesn’t happen reliably. So the bad guys can’t build a plot around it.

http://www.cnn.com/2010/WORLD/americas/11/04/…
http://i2.cdn.turner.com/cnn/2010/images/11/04/…

Commentary from my blog about what actually happened:
https://www.schneier.com/blog/archives/2010/11/…
Beyond Fear:
http://www.schneier.com/book-beyondfear.html


Schneier News

I’m speaking at the 11th Annual Security Conference & Exhibition in Washington DC on Nov 16.
http://events.1105govinfo.com/events/…
I’m speaking at Paranoia 2010 in Oslo on Nov 23.
http://paranoia.watchcom.no/

I’m speaking at ClubHack 2010 in Pune, India on Dec 4.
http://clubhack.com/2010/

My TED talk. Okay, it’s not TED. It’s one of the independent regional TED events: TEDxPSU. My talk was “Reconceptualizing Security,” a condensation of the hour-long talk into 18 minutes.
http://www.youtube.com/watch?v=CGd_M_CpeDI

I was interviewed last week at RSA Europe.
https://365.rsaconference.com/community/connect/…


Kahn, Diffie, Clark, and Me at Bletchley Park

Last Saturday, I visited Bletchley Park to speak at the Annual ACCU Security Fundraising Conference. They had a stellar lineup of speakers this year, and I was pleased to be a part of the day.

Talk #1: “The Art of Forensic Warfare,” Andy Clark. Riffing on Sun Tzu’s “The Art of War,” Clark discussed the war—the back and forth—between cyber attackers and cyber forensics. This isn’t to say that we’re at war, but today’s attacker tactics are increasingly sophisticated and warlike. Additionally, the pace is greater, the scale of impact is greater, and the subjects of attack are broader. To defend ourselves, we need to be equally sophisticated and—possibly—more warlike.

Clark drew parallels from some of the chapters of Sun Tzu’s book combined with examples of the work at Bletchley Park. Laying plans: when faced with an attacker—especially one of unknown capabilities, tactics, and motives—it’s important to both plan ahead and plan for the unexpected. Attack by stratagem: increasingly, attackers are employing complex and long-term strategies; defenders need to do the same. Energy: attacks increasingly start off simple and get more complex over time; while it’s easier to detect primary attacks, secondary techniques tend to be more subtle and harder to detect. Terrain: modern attacks take place across a very broad range of terrain, including hardware, OSs, networks, communication protocols, and applications. The business environment under attack is another example of terrain, equally complex. The use of spies: not only human spies, but also keyloggers and other embedded eavesdropping malware. There’s a great World War II double-agent story about Eddie Chapman, codenamed ZIGZAG.

Talk #2: “How the Allies Suppressed the Second Greatest Secret of World War II,” David Kahn. This talk is from Kahn’s article of the same name, published in the Oct 2010 issue of “The Journal of Military History.” The greatest secret of World War II was the atom bomb; the second greatest secret was that the Allies were reading the German codes. But while there was a lot of public information in the years after World War II about Japanese codebreaking and its value, there was almost nothing about German codebreaking. Kahn discussed how this information was suppressed, and how historians writing World War II histories never figured it out. No one imagined as large and complex an operation as Bletchley Park; it was the first time in history that something like this had ever happened. Most of Kahn’s time was spent in a very interesting Q&A about the history of Bletchley Park and World War II codebreaking.

Talk #3: “DNSSec, A System for Improving Security of the Internet Domain Name System,” Whitfield Diffie. Whit talked about three watersheds in modern communications security. The first was the invention of the radio. Pre-radio, the most common communications security device was the code book. This was no longer enough when radio caused the amount of communications to explode. In response, inventors took the research on Vigenère ciphers and automated it. This automation led to an explosion of designs and an enormous increase in complexity—and the rise of modern cryptography.

The second watershed was shared computing. Before the 1960s, the security of computers was the physical security of computer rooms. Timesharing changed that. The result was computer security, a much harder problem than cryptography. Computer security is primarily the problem of writing good code. But writing good code is hard and expensive, so functional computer security is primarily the problem of dealing with code that isn’t good. Networking—and the Internet—isn’t just an expansion of computing capacity. The real difference is how cheap it is to set up communications connections. Setting up these connections requires naming: both IP addresses and domain names. Security, of course, is essential for this all to work; DNSSec is a critical part of that.

The third watershed is cloud computing, or whatever you want to call the general trend of outsourcing computation. Google is a good example. Every organization uses Google search all the time, which probably makes it the most valuable intelligence stream on the planet. How can you protect yourself? You can’t, just as you can’t whenever you hand over your data for storage or processing—you just have to trust your outsourcer. There are two solutions. The first is legal: an enforceable contract that protects you and your data. The second is technical, but mostly theoretical: homomorphic encryption that allows you to outsource computation of data without having to trust that outsourcer.
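
As a toy illustration of that second, technical approach: textbook RSA happens to be multiplicatively homomorphic, so an outsourcer can multiply two encrypted values without ever learning them. This is only a flavor of the idea, not a practical scheme: textbook RSA isn't semantically secure, and the systems Diffie has in mind are far more general. But the arithmetic below does run as written.

# Toy multiplicative homomorphism with textbook RSA (not secure; for flavor only).

n = 3233           # = 61 * 53, a toy modulus
e, d = 17, 2753    # matching toy public/private exponents

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
product_of_ciphertexts = (enc(a) * enc(b)) % n   # computed without knowing a or b
print(dec(product_of_ciphertexts))               # -> 42, which is a * b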

Diffie’s final point is that we’re entering an era of unprecedented surveillance possibilities. It doesn’t matter if people encrypt their communications, or if they encrypt their data in storage. As long as they have to give their data to other people for processing, that data can be eavesdropped on. Of course the methods will change, but the result will be an enormous trove of information about everybody.

Talk #4: “Reconceptualizing Security,” me. It was similar to previous essays and talks.

Annual ACCU Security Fundraising Conference:
http://www.bletchleypark.org.uk/calendar/…
Bletchley Park:
http://www.bletchleypark.org.uk/content/museum.rhtm

News coverage:
http://blogs.wsj.com/tech-europe/2010/11/06/…
The Art of War:
http://www.chinapage.com/sunzi-e.html

Eddie Chapman book:
http://www.amazon.com/exec/obidos/ASIN/0307353419/…

The Journal of Military History:
http://www.smh-hq.org/jmh/jmhvols/contents.html

Essay and video similar to my talk:
https://www.schneier.com/blog/archives/2008/04/…
http://www.youtube.com/watch?v=CGd_M_CpeDI


Changing Passwords

How often should you change your password? I get asked that question a lot, usually by people annoyed at their employer’s or bank’s password expiration policy: people who finally memorized their current password and are realizing they’ll have to write down their new password. How could that possibly be more secure, they want to know.

The answer depends on what the password is used for.

The downside of changing passwords is that it makes them harder to remember. And if you force people to change their passwords regularly, they’re more likely to choose easy-to-remember—and easy-to-guess—passwords than they are if they can use the same passwords for many years. So any password-changing policy needs to be chosen with that consideration in mind.

The primary reason to give an authentication credential—not just a password, but any authentication credential—an expiration date is to limit the amount of time a lost, stolen, or forged credential can be used by someone else. If a membership card expires after a year, then if someone steals that card he can at most get a year’s worth of benefit out of it. After that, it’s useless.

This becomes less important when the credential contains a biometric—even a photograph—or is verified online. It’s much less important for a credit card or passport to have an expiration date, now that they’re not so much bearer documents as just pointers to a database. If, for example, the credit card database knows when a card is no longer valid, there’s no reason to put an expiration date on the card. But the expiration date does mean that a forgery is only good for a limited length of time.

Passwords are no different. If a hacker gets your password either by guessing or stealing it, he can access your network as long as your password is valid. If you have to update your password every quarter, that significantly limits the utility of that password to the attacker.

At least, that’s the traditional theory. It assumes a passive attacker, one who will eavesdrop over time without alerting you that he’s there. In many cases today, though, that assumption no longer holds. An attacker who gets the password to your bank account by guessing or stealing it isn’t going to eavesdrop. He’s going to transfer money out of your account—and then you’re going to notice. In this case, it doesn’t make a lot of sense to change your password regularly—but it’s vital to change it immediately after the fraud occurs.

Someone committing espionage in a private network is more likely to be stealthy. But he’s also not likely to rely on the user credential he guessed or stole; he’s going to install backdoor access or create his own account. Here again, forcing network users to regularly change their passwords is less important than forcing everyone to change their passwords immediately after the spy is detected and removed—you don’t want him getting in again.

Social networking sites are somewhere in the middle. Most of the criminal attacks against Facebook users use the accounts for fraud. “Help! I’m in London and my wallet was stolen. Please wire money to this account. Thank you.” Changing passwords periodically doesn’t help against this attack, although—of course—change your password as soon as you regain control of your account. But if your kid sister has your password—or the tabloid press, if you’re that kind of celebrity—they’re going to listen in until you change it. And you might not find out about it for months.

So in general: you don’t need to regularly change the password to your computer or online financial accounts (including the accounts at retail sites); definitely not for low-security accounts. You should change your corporate login password occasionally, and you need to take a good hard look at your friends, relatives, and paparazzi before deciding how often to change your Facebook password. But if you break up with someone you’ve shared a computer with, change them all.

Two final points. One, this advice is for login passwords. There’s no reason to change any password that is a key to an encrypted file. Just keep the same password as long as you keep the file, unless you suspect it’s been compromised. And two, it’s far more important to choose a good password for the sites that matter—don’t worry about sites you don’t care about that nonetheless demand that you register and choose a password—in the first place than it is to change it. So if you have to worry about something, worry about that. And write your passwords down, or use a program like Password Safe.

This essay originally appeared on DarkReading.com.
http://www.darkreading.com/blog/archives/2010/11/…

Choosing good passwords:
https://www.schneier.com/blog/archives/2007/01/…

Password Safe:
http://www.schneier.com/passsafe.html

Microsoft Research says the same thing:
http://www.pcmag.com/article2/0,2817,2362692,00.asp

“The Security of Modern Password Expiration: An Algorithmic Framework and Empirical Analysis.”
http://www.cs.unc.edu/~yinqian/papers/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2010 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.