December 15, 2010
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1012.html>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively comment section. An RSS feed is available.
In this issue:
A short history of airport security: We screen for guns and bombs, so the terrorists use box cutters. We confiscate box cutters and corkscrews, so they put explosives in their sneakers. We screen footwear, so they try to use liquids. We confiscate liquids, so they put PETN bombs in their underwear. We roll out full-body scanners, even though they wouldn't have caught the Underwear Bomber, so they put a bomb in a printer cartridge. We ban printer cartridges over 16 ounces -- the level of magical thinking here is amazing -- and they're going to do something else.
This is a stupid game, and we should stop playing it.
It's not even a fair game. It's not that the terrorist picks an attack and we pick a defense, and we see who wins. It's that we pick a defense, and then the terrorists look at our defense and pick an attack designed to get around it. Our security measures only work if we happen to guess the plot correctly. If we get it wrong, we've wasted our money. This isn't security; it's security theater.
There are two basic kinds of terrorists. The first is the sloppy planner, like the guy who crashed his plane into the Internal Revenue Service building in Austin. He's going to be sloppy and stupid, and even pre-9/11 airplane security is going to catch him. The second is the well-planned, well-financed, and much rarer sort of plot. Do you really expect the T.S.A. screeners, who are busy confiscating water bottles and making people take off their belts -- and now doing uncomfortable pat-downs -- to stop them?
Of course not. Airport security is the last line of defense, and it's not a very good one. What works is investigation and intelligence: security that works regardless of the terrorist tactic or target. Yes, the target matters too; all this airport security is only effective if the terrorists target airports. If they decide to bomb crowded shopping malls instead, we've wasted our money.
That being said, airplanes require a special level of security for several reasons: they're a favored terrorist target; their failure characteristics mean more deaths than a comparable bomb on a bus or train; they tend to be national symbols; and they often fly to foreign countries where terrorists can operate with more impunity.
But all that can be handled with pre-9/11 security. Exactly two things have made airplane travel safer since 9/11: reinforcing the cockpit door, and convincing passengers they need to fight back. Everything else has been a waste of money. Add screening of checked bags and airport workers and we're done. Take all the rest of the money and spend it on investigation and intelligence.
Immediately after the Christmas Day Underwear Bomber's plot failed, Homeland Security Secretary Janet Napolitano called airplane security a success. She was pilloried in the press and quickly backpedaled, but I think it was one of the most sensible things said on the subject. Plane lands safely, terrorist in custody, nobody injured except the terrorist: what more do people want out of a security success?
Look at what succeeded. Because even pre-9/11 security screened for obvious bombs, Abdulmutallab had to construct a far less reliable bomb than he would have otherwise. Instead of using a timer or a plunger or a reliable detonation mechanism, as would any commercial user of PETN, Abdulmutallab had to resort to an ad hoc and much more inefficient detonation mechanism involving a syringe, 20 minutes in the lavatory, and setting his pants on fire. As a result, his actions came to the notice of the other passengers, who subdued him.
Neither the full-body scanners nor the enhanced pat-downs are making anyone safer. They're more a result of politicians and government appointees capitulating to a public that demands that "something must be done," even when nothing should be done; and a government bureaucracy that is more concerned about the security of their careers if they fail to secure against the last attack than about what happens if they fail to anticipate the next one.
Why terrorist attacks are so rare:
"Stop the Panic on Air Security":
This essay first appeared on the New York Times "Room for Debate" blog:
Organizers of National Opt Out Day -- the Wednesday before Thanksgiving, when air travelers were urged to opt out of the full-body scanners at security checkpoints and instead submit to full-body pat-downs -- were outfoxed by the TSA. The government pre-empted the protest by turning off the machines in most airports during the Thanksgiving weekend. Everyone went through the metal detectors, just as before.
Now that Thanksgiving is over, the machines are back on and the "enhanced" pat-downs have resumed. I suspect that more people would prefer to have naked images of themselves seen by TSA agents in another room than to be intimately touched by a TSA agent right in front of them.
But now, the TSA is in a bind. Regardless of whatever lobbying came before, or whatever former DHS officials had a financial interest in these scanners, the TSA has spent billions on those scanners, claiming they're essential. But because people can opt out, the alternate manual method must be equally effective; otherwise, the terrorists could just opt out. If they make the pat-downs less invasive, it would be the same as admitting the scanners aren't essential. Senior officials would get fired over that.
So not counting inconsequential modifications to demonstrate they're "listening," the pat-downs will continue. And they'll continue for everyone: children, abuse survivors, rape survivors, urostomy bag wearers, people in wheelchairs. It has to be that way; otherwise, the terrorists could simply adapt. They'd hide their explosives on their children or in their urostomy bags. They'd recruit rape survivors, abuse survivors, or seniors. They'd dress as pilots. They'd sneak their PETN through airport security using the very type of person who isn't being screened.
And PETN is what the TSA is looking for these days. That's pentaerythritol tetranitrate, the plastic explosive that both the Shoe Bomber and the Underwear Bomber attempted but failed to detonate. It's what was mailed from Yemen. It's in Iraq and Afghanistan. Guns and traditional bombs are passé; PETN is the terrorist tool of the future.
The problem is that no scanners or puffers can detect PETN; only swabs and dogs work. What the TSA hopes is that they will detect the bulge if someone is hiding a wad of it on their person. But they won't catch PETN hidden in a body cavity. That doesn't have to be as gross as you're imagining; you can hide PETN in your mouth. A terrorist can go through the scanners a dozen times with bits in his mouth each time, and assemble a bigger bomb on the other side. Or he can roll it thin enough to be part of a garment, and sneak it through that way. These tricks aren't new. In the days after the Underwear Bomber was stopped, a scanner manufacturer admitted that the machines might not have caught him.
So what's next? Strip searches? Body cavity searches? TSA Administrator John Pistole said there would be no body cavity searches for now, but his reasons make no sense. He said that the case widely reported as being a body cavity bomb might not actually have been. While that appears to be true, what does that have to do with future bombs? He also said that even body cavity bombs would need "external initiators" that the TSA would be able to detect.
Do you think for a minute that the TSA can detect these external initiators? Do you think that if a terrorist took a laptop -- or better yet, a less-common piece of electronics gear -- and removed the insides and replaced them with a timer, a pressure sensor, a simple contact switch, or a radio frequency switch, the TSA guy behind the X-ray machine monitor would detect it? How about if those components were distributed over a few trips through airport security? On the other hand, if we believe the TSA can magically detect these external initiators so effectively that they make body-cavity searches unnecessary, why do we need the full-body scanners?
Either PETN is a danger that must be searched for, or it isn't. Pistole was being either ignorant or evasive.
Once again, the TSA is covering their own asses by implementing security-theater measures to prevent the previous attack while ignoring any threats of future attacks. It's the same thinking that caused them to ban box cutters after 9/11, screen shoes after Richard Reid, limit liquids after that London gang, and -- I kid you not -- ban printer cartridges over 16 ounces after they were used to house package bombs from Yemen. They act like the terrorists are incapable of thinking creatively, while the terrorists repeatedly demonstrate that they can always come up with a new approach that circumvents the old measures.
On the plus side, PETN is very hard to get to explode. The pre-9/11 screening procedures, looking for obvious guns and bombs, forced the terrorists to build inefficient fusing mechanisms. We saw this when Abdulmutallab, the Underwear Bomber, used bottles of liquid and a syringe and 20 minutes in the bathroom to assemble his device, then set his pants on fire -- and still failed to ignite his PETN-filled underwear. And when he failed, the passengers quickly subdued him.
The truth is that exactly two things have made air travel safer since 9/11: reinforcing cockpit doors and convincing passengers they need to fight back. The TSA should continue to screen checked luggage. They should start screening airport workers. And then they should return airport security to pre-9/11 levels and let the rest of their budget be used for better purposes. Investigation and intelligence is how we're going to prevent terrorism, on airplanes and elsewhere. It's how we caught the liquid bombers. It's how we found the Yemeni printer-cartridge bombs. And it's our best chance at stopping the next serious plot.
Because if a group of well-planned and well-funded terrorist plotters makes it to the airport, the chance is pretty low that those blue-shirted crotch-groping water-bottle-confiscating TSA agents are going to catch them. The agents are trying to do a good job, but the deck is so stacked against them that their job is impossible. Airport security is the last line of defense, and it's not a very good one.
We have a job here, too, and it's to be indomitable in the face of terrorism. The goal of terrorism is to terrorize us: to make us afraid, and make our government do exactly what the TSA is doing. When we react out of fear, the terrorists succeed even when their plots fail. But if we carry on as before, the terrorists fail -- even when their plots succeed.
National Opt Out Day:
TSA pre-empted National Opt Out Day:
John Pistole interview:
Saudi butt bomb:
The TSA banning printer cartridges over 16 ounces:
Investigation and intelligence:
"Our Reaction is the Real Security Failure":
This essay originally appeared on The Atlantic website.
New biometric: eye movements instead of eye structures.
The U.S. government receives a lot of unsolicited terrorism tips. Adding them all up, it "receives between 8,000 and 10,000 pieces of information per day, fingering just as many different people as potential threats. They also get information about 40 supposed plots against the United States or its allies daily."
Excellent essay on airplane terrorism twenty years ago.
I collected lots and lots of links from the first few days of the TSA backscatter X-ray backlash.
Another piece of the Stuxnet puzzle:
How to spoof your location on Facebook with your BlackBerry.
Interesting story of the withdrawal of the A5/2 encryption algorithm from GSM phones.
Causing terror on the cheap: turns out this has been bin Ladin's plan all along.
From a study on zoo security: "Among other measures, the scientists recommend not allowing animals to walk freely within the zoo grounds, and ensuring there is a physical barrier marking the zoo boundaries, and preventing individuals from escaping through drains, sewers or any other channels." Isn't all that sort of obvious?
I agree with Glenn Greenwald. I don't know if Mohamed Osman Mohamud is an actual terrorist that the FBI arrested, or if it's another case of entrapment.
Jeffrey Rosen opines on the constitutionality of full-body scanners.
Teen risk reduction strategies on social networking sites: super-logoff and wall scrubbing.
The U.S. Federal Trade Commission released its privacy report: "Protecting Consumer Privacy in an Era of Rapid Change." Among other things, they're recommending a "Do Not Track" mechanism to govern consumer information collection.
How the State of New Jersey is storing road salt needs to be kept secret from the terrorists. This seems not to be a joke.
Sex attack caught on CCTV camera:
Interesting profile of Evan Kohlmann.
Realistic facemasks are getting too realistic.
Open-source digital forensics.
Securing the Washington Monument from terrorism has turned out to be a surprisingly difficult job. The concrete fence around the building protects it from attacking vehicles, but there's no visually appealing way to house the airport-level security mechanisms the National Park Service has decided are a must for visitors. It is considering several options, but I think we should close the monument entirely. Let it stand, empty and inaccessible, as a monument to our fears.
An empty Washington Monument would serve as a constant reminder to those on Capitol Hill that they are afraid of the terrorists and what they could do. They're afraid that by speaking honestly about the impossibility of attaining absolute security or the inevitability of terrorism -- or that some American ideals are worth maintaining even in the face of adversity -- they will be branded as "soft on terror." And they're afraid that Americans would vote them out of office if another attack occurred. Perhaps they're right, but what has happened to leaders who aren't afraid? What has happened to "the only thing we have to fear is fear itself"?
An empty Washington Monument would symbolize our lawmakers' inability to take that kind of stand -- and their inability to truly lead.
Some of them call terrorism an "existential threat" against our nation. It's not. Even the events of 9/11, as horrific as they were, didn't make an existential dent in our nation. Automobile-related fatalities -- at 42,000 per year, more deaths each month, on average, than 9/11 -- aren't, either. It's our reaction to terrorism that threatens our nation, not terrorism itself. The empty monument would symbolize the empty rhetoric of those leaders who preach fear and then use that fear for their own political ends.
The day after Umar Farouk Abdulmutallab failed to blow up a Northwest jet with a bomb hidden in his underwear, Homeland Security Secretary Janet Napolitano said "The system worked." I agreed. Plane lands safely, terrorist in custody, nobody injured except the terrorist. Seems like a working system to me. The empty monument would represent the politicians and press who pilloried her for her comment, and Napolitano herself, for backing down.
The empty monument would symbolize our war on the unexpected -- our overreaction to anything different or unusual -- our harassment of photographers, and our probing of airline passengers. It would symbolize our "show me your papers" society, rife with ID checks and security cameras. As long as we're willing to sacrifice essential liberties for a little temporary safety, we should keep the Washington Monument empty.
Terrorism isn't a crime against people or property. It's a crime against our minds, using the death of innocents and destruction of property to make us fearful. Terrorists use the media to magnify their actions and further spread fear. And when we react out of fear, when we change our policy to make our country less open, the terrorists succeed -- even if their attacks fail. But when we refuse to be terrorized, when we're indomitable in the face of terror, the terrorists fail -- even if their attacks succeed.
We can reopen the monument when every foiled or failed terrorist plot causes us to praise our security, instead of redoubling it. When the occasional terrorist attack succeeds, as it inevitably will, we accept it, as we accept the murder rate and automobile-related death rate; and redouble our efforts to remain a free and open society.
The grand reopening of the Washington Monument will not occur when we've won the war on terror, because that will never happen. It won't even occur when we've defeated al Qaeda. Militant Islamic terrorism has fractured into small, elusive groups. We can reopen the Washington Monument when we've defeated our fears, when we've come to accept that placing safety above all other virtues cedes too much power to government and that liberty is worth the risks, and that the price of freedom is accepting the possibility of crime.
I would proudly climb to the top of a monument to those ideals.
Washington Monument security options:
I don't have a lot to say about WikiLeaks, but I do want to make a few points.
1. Encryption isn't the issue here. Of course the cables were encrypted, for transmission. Then they were received and decrypted, and -- so it seems -- put into an archive on SIPRNet, where lots of people had access to them.
2. Secrets are only as secure as the least trusted person who knows them. The more people who know a secret, the more likely it is to be made public.
3. I'm not surprised that these cables were available to so many people. We know that access control is hard, and that it's impossible to know beforehand what information someone will need to do their job. What is surprising is that no audit logs were kept recording who accessed all these cables. That seems like a no-brainer.
4. This has little to do with WikiLeaks. WikiLeaks is just a website. The real story is that "least trusted person" who decided to violate his security clearance and make these cables public. In the 1970s he would have mailed them to a newspaper. Today he uses WikiLeaks. Tomorrow he will have his choice of a dozen similar websites. If WikiLeaks didn't exist, he could have put them up on BitTorrent.
5. I think the government is learning what the music and movie industries were forced to learn years ago: it's easy to copy and distribute digital files. That's what's different between the 1970s and today. Amassing and releasing that many documents was hard in the paper and photocopier era; it's trivial in the Internet era. And just as the music and movie industries are going to have to change their business models for the Internet era, governments are going to have to change their secrecy models. I don't know what those new models will be, but they will be different.
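The missing audit trail in point 3 is conceptually trivial to provide. Here is a minimal Python sketch of a document store that records every read -- a toy illustration with invented names, not a description of any real government system:

```python
from datetime import datetime, timezone

class AuditedStore:
    """Toy document store that logs every read -- an illustration of
    an access audit trail, not any real classified-network mechanism."""

    def __init__(self, documents):
        self._documents = documents   # doc_id -> contents
        self._log = []                # (timestamp, user, doc_id)

    def read(self, user, doc_id):
        # Record who accessed what, and when, before returning the document.
        self._log.append((datetime.now(timezone.utc).isoformat(), user, doc_id))
        return self._documents[doc_id]

    def accesses_by(self, user):
        """Every (timestamp, user, doc_id) entry attributable to one user."""
        return [entry for entry in self._log if entry[1] == user]

store = AuditedStore({"cable-001": "example cable text"})
store.read("analyst_a", "cable-001")
store.read("analyst_b", "cable-001")
print(len(store.accesses_by("analyst_b")))  # 1 -- the read left a trace
```

Logging doesn't prevent a leak, but it changes the deterrence calculus: the "least trusted person" knows that mass downloads are attributable after the fact.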
Role-based access control:
The world is gearing up for cyberwar. The U.S. Cyber Command became operational in November. NATO has enshrined cyber security among its new strategic priorities. The head of Britain's armed forces said recently that boosting cyber capability is now a huge priority for the UK. And we know China is already engaged in broad cyber espionage attacks against the west. So how can we control a burgeoning cyber arms race?
We may already have seen early versions of cyberwars in Estonia and Georgia, possibly perpetrated by Russia. It's hard to know for certain, not only because such attacks are often impossible to trace, but because we have no clear definitions of what a cyberwar actually is.
Do the 2007 attacks against Estonia, traced to a young Russian man living in Tallinn and no one else, count? What about a virus from an unknown origin, possibly targeted at an Iranian nuclear complex? Or espionage from within China, but not specifically directed by its government? To such questions one must add even more basic issues, like when a cyberwar is understood to have begun, and how it ends. When even cyber security experts can't answer these questions, it's hard to expect much from policymakers.
We can set parameters. It is obviously not an act of war just to develop digital weapons targeting another country. Using cyber attacks to spy on another nation is a grey area, which gets greyer still when a country penetrates information networks, just to see if it can do so. Penetrating such networks and leaving a back door open, or even leaving logic bombs behind to be used later, is a harder case -- yet the US and China are doing this to each other right now.
And what about when one country deliberately damages the economy of another, as one of the WikiLeaks cables shows that a member of China's politburo did against Google in January 2010? Definitions and rules are hard not just because the tools of war have changed, but because cyberspace puts them into the hands of a broader group of people. Previously only the military had weapons. Now anyone with sufficient computer skills can take matters into their own hands.
There are more basic problems too. When a nation is attacked in a regular conflict, a variety of military and civil institutions respond. The legal framework for this depends on two things: the attacker and the motive. But when you're attacked on the internet, those are precisely the two things you don't know. We don't know if Georgia was attacked by the Russian government, or just some hackers living in Russia. In spite of much speculation, we don't know the origin, or target, of Stuxnet. We don't even know if last July 4's attacks against US and South Korean computers originated in North Korea, China, England, or Florida.
When you don't know, it's easy to get it wrong, and to retaliate against the wrong target or for the wrong reason. That means it is easy for things to get out of hand. So while it is legitimate for nations to build offensive and defensive cyberwar capabilities, we also need to think now about what can be done to limit the risk of cyberwar.
A first step would be a hotline between the world's cyber commands, modeled after similar hotlines among nuclear commands. This would at least allow governments to talk to each other, rather than guess where an attack came from. More difficult, but more important, are new cyberwar treaties. These could stipulate a no first use policy, outlaw unaimed weapons, or mandate weapons that self-destruct at the end of hostilities. The Geneva Conventions need to be updated too.
Cyber weapons beg to be used, so limits on stockpiles, and restrictions on tactics, are a logical end point. International banking, for instance, could be declared off-limits. Whatever the specifics, such agreements are badly needed. Enforcement will be difficult, but that's not a reason not to try. It's not too late to reverse the cyber arms race currently under way. Otherwise, it is only a matter of time before something big happens: perhaps by the rash actions of a low level military officer, perhaps by a non-state actor, perhaps by accident. And if the target nation retaliates, we could actually find ourselves in a cyberwar.
This essay was originally published in the Financial Times (free registration required for access, or search on Google News).
In November, I gave a talk on cyberwar and cyberconflict at the Institute for International and European Affairs in Dublin. Here's the video.
I was interviewed about full body scanners in -- of all places -- Popular Mechanics.
Yesterday, NIST announced the five hash functions to advance to the third (and final) round in the SHA-3 selection process: BLAKE, Grøstl, JH, Keccak, and Skein. Not really a surprise; my predictions -- which I did not publish -- listed ECHO instead of JH, but correctly identified the other four. (Most of the predictions I saw guessed BLAKE, Grøstl, Keccak, and Skein, but differed on the fifth.)
NIST will publish a report that explains its rationale for selecting the five it did.
Next is the Third SHA-3 Candidate Conference, which will probably be held in March 2012 in Washington, DC, in conjunction with FSE 2012. NIST will then pick a single algorithm to become SHA-3.
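None of the finalists ship in standard libraries yet, but the core property the competition is stress-testing -- change one input character and roughly half the output bits change -- can be demonstrated with any deployed hash. A sketch using SHA-256 from Python's hashlib, purely as a stand-in:

```python
import hashlib

def bit_difference(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"crypto-gram").digest()
d2 = hashlib.sha256(b"crypto-grab").digest()  # one character changed
print(bit_difference(d1, d2), "of", len(d1) * 8, "bits differ")  # about half
```

Any secure hash -- including all five finalists -- should show this avalanche behavior; a design that doesn't is broken by definition.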
More information about Skein and the SHA-3 selection process.
Version 1.3 of the Skein paper, which discusses the new constant to defeat the Khovratovich-Nikolić-Rechberger attack.
A new analysis of Skein.
And if you ordered a Skein polo shirt in September, they've been shipped.
In 2003, a group of security experts -- myself included -- published a paper saying that 1) software monocultures are dangerous and 2) Microsoft, being the largest creator of monocultures out there, is the most dangerous. Marcus Ranum responded with an essay that basically said we were full of it. Now, eight years later, Marcus and I thought it would be interesting to revisit the debate.
The basic problem with a monoculture is that it's all vulnerable to the same attack. The Irish Potato Famine of 1845-9 is perhaps the most famous monoculture-related disaster. The Irish planted only one variety of potato, and the genetically identical potatoes succumbed to a rot caused by Phytophthora infestans. Compare that with the diversity of potatoes traditionally grown in South America, each one adapted to the particular soil and climate of its home, and you can see the security value in heterogeneity.
Similar risks exist in networked computer systems. If everyone is using the same operating system or the same applications software or the same networking protocol, and a security vulnerability is discovered in that OS or software or protocol, a single exploit can affect everyone. This is the problem of large-scale Internet worms: many have affected millions of computers on the Internet.
If our networking environment weren't homogeneous, a single worm couldn't do so much damage. We'd be more like South America's potato crop than Ireland's. Conclusion: monoculture is bad; embrace diversity or die along with everyone else.
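That naive conclusion can be captured in a toy model: every host runs one software variant, and a single exploit compromises every host running the exploited variant. (This is a deliberate oversimplification -- the flaws discussed next apply to it, too.)

```python
import random

def infected_fraction(population, num_variants, exploited, rng):
    """Toy monoculture model: assign each host one software variant at
    random; a single exploit owns every host on the exploited variant."""
    hosts = [rng.randrange(num_variants) for _ in range(population)]
    return sum(1 for v in hosts if v == exploited) / population

rng = random.Random(0)
mono = infected_fraction(10_000, 1, 0, rng)      # everyone runs the same code
diverse = infected_fraction(10_000, 20, 0, rng)  # twenty variants in use
print(mono, round(diverse, 2))  # 1.0 versus roughly 0.05
```

In the pure monoculture, one exploit reaches everyone; with twenty variants, it reaches only the slice running the vulnerable one.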
This analysis makes sense as far as it goes, but suffers from three basic flaws. The first is the assumption that our IT monoculture is as simple as the potato's. When the particularly virulent Storm worm hit, it affected only 1 to 10 million of its billion-plus possible victims. Why? Because some computers were running updated antivirus software, or were within locked-down networks, or whatever. Two computers might be running the same OS or applications software, but they'll be inside different networks with different firewalls and IDSs and router policies, they'll have different antivirus programs and different patch levels and different configurations, and they'll be in different parts of the Internet connected to different servers running different services. As Marcus pointed out back in 2003, they'll be a little bit different themselves. That's one of the reasons large-scale Internet worms don't infect everyone -- as well as the network's ability to quickly develop and deploy patches, new antivirus signatures, new IPS signatures, and so on.
The second flaw in the monoculture analysis is that it downplays the cost of diversity. Sure, it would be great if a corporate IT department ran half Windows and half Linux, or half Apache and half Microsoft IIS, but doing so would require more expertise and cost more money. It wouldn't cost twice the expertise and money -- there is some overlap -- but there are significant economies of scale that result from everyone using the same software and configuration. A single operating system locked down by experts is far more secure than two operating systems configured by sysadmins who aren't so expert. Sometimes, as Mark Twain said: "Put all your eggs in one basket, and then guard that basket!"
The third flaw is that you can only get a limited amount of diversity by using two operating systems, or routers from three vendors. South American potato diversity comes from hundreds of different varieties. Genetic diversity comes from millions of different genomes. In monoculture terms, two is little better than one. Even worse, since a network's security is primarily the minimum of the security of its components, a diverse network is less secure because it is vulnerable to attacks against any of its heterogeneous components.
Some monoculture is necessary in computer networks. As long as we have to talk to each other, we're all going to have to use TCP/IP, HTML, PDF, and all sorts of other standards and protocols that guarantee interoperability. Yes, there will be different implementations of the same protocol -- and this is a good thing -- but that won't protect you completely. You can't be too different from everyone else on the Internet, because if you were, you couldn't be on the Internet.
Species basically have two options for propagating their genes: the lobster strategy and the avian strategy. Lobsters lay 5,000 to 40,000 eggs at a time, and essentially ignore them. Only a minuscule percentage of the hatchlings live to be four weeks old, but that's sufficient to ensure gene propagation; from every 50,000 eggs, an average of two lobsters is expected to survive to legal size. Conversely, birds produce only a few eggs at a time, then spend a lot of effort ensuring that most of the hatchlings survive. In ecology, this is known as r/K selection theory. In either case, each of those offspring varies slightly genetically, so if a new threat arises, some of them will be more likely to survive. But even so, extinctions happen regularly on our planet; neither strategy is foolproof.
Our IT infrastructure is a lot more like a bird than a lobster. Yes, monoculture is dangerous and diversity is important. But investing time and effort in ensuring our current infrastructure's survival is even more important.
Ranum's original rebuttal:
This essay was originally published in Information Security, and is the first half of a point/counterpoint with Marcus Ranum. You can read his response there as well.
A recent essay reminded me of an older essay, both by people who write student term papers for hire.
There are several services that do automatic plagiarism detection -- basically, comparing phrases from the paper with general writings on the Internet and even caches of previously written papers -- but detecting this kind of custom plagiarism work is much harder.
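The phrase comparison these services do can be sketched as word n-gram overlap. This toy version (the value of n and the example texts are arbitrary choices for illustration) flags verbatim copying -- and shows why custom-written work sails right past it:

```python
def ngrams(text, n=5):
    """The set of overlapping word n-grams -- the 'phrases' a detector compares."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(paper, source, n=5):
    """Fraction of the paper's n-grams also present in the source corpus."""
    p = ngrams(paper, n)
    return len(p & ngrams(source, n)) / len(p) if p else 0.0

source = "security is a process not a product and it must be designed in from the start"
copied = "as some say security is a process not a product and it must be designed in from the start"
original = "my essay argues that airport screening trades liberty for the appearance of safety"
print(round(overlap(copied, source), 2), overlap(original, source))  # high vs. zero
```

A ghostwritten paper is, by construction, the second case: none of its phrases appear anywhere else, so phrase matching scores it as clean.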
I can think of three ways to deal with this:
1. Require all writing to be done in person, and proctored. Obviously this won't work for larger pieces of writing like theses.
2. Semantic analysis in an attempt to fingerprint writing styles. It's by no means perfect, but it is possible to detect if a piece of writing looks nothing like a student's normal writing style.
3. In-person quizzes on the writing. If a professor sits down with the student and asks detailed questions about the writing, he can pretty quickly determine if the student understands what he claims to have written.
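Option 2 can be sketched as comparing function-word frequencies -- a crude stylometric fingerprint. (The word list and example sentences below are toy choices; real stylometry uses far richer feature sets.)

```python
from collections import Counter
from math import sqrt

# A handful of common function words -- the toy "fingerprint" features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "was", "it"]

def style_vector(text):
    """Relative frequency of each function word in the text."""
    counts = Counter(text.lower().split())
    total = max(sum(counts.values()), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two style vectors (0 if either is empty)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

known = "the cat sat in the sun and it was warm in the garden"
submitted = "the dog ran in the park and it was cold in the morning"
similarity = cosine(style_vector(known), style_vector(submitted))
print(round(similarity, 2))  # close to 1.0 -- similar fingerprint
```

The content words differ entirely, yet the fingerprints match; a submitted paper whose fingerprint diverges sharply from a student's known writing is the signal option 2 looks for.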
The real issue is proof. Most colleges and universities are unwilling to pursue this without solid proof -- the lawsuit risk is just too great -- and in these cases the only real proof is self-incrimination.
Fundamentally, this is a problem of misplaced economic incentives. As long as the academic credential is worth more to a student than the knowledge gained in getting that credential, there will be an incentive to cheat.
Related note: anyone remember my personal experience with plagiarism from 2005?
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2010 by Bruce Schneier.