Crypto-Gram

July 15, 2010

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-1007.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.


In this issue:
      The Threat of Cyberwar Has Been Grossly Exaggerated
      Internet Kill Switch
      News
      Third SHB Workshop
      Schneier News
      Data at Rest vs. Data in Motion
      Reading Me

The Threat of Cyberwar Has Been Grossly Exaggerated

There’s a power struggle going on in the U.S. government right now.

It’s about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.

“The United States is fighting a cyberwar today, and we are losing,” said former NSA director—and current cyberwar contractor—Mike McConnell. “Cyber 9/11 has happened over the last ten years, but it happened slowly so we don’t see it,” said former National Cyber Security Division director Amit Yoran. Richard Clarke, whom Yoran replaced, wrote an entire book hyping the threat of cyberwar.

General Keith Alexander, the current commander of the U.S. Cyber Command, hypes it every chance he gets. This isn’t just rhetoric of a few over-eager government officials and headline writers; the entire national debate on cyberwar is plagued with exaggerations and hyperbole.

Googling those names and terms—as well as “cyber Pearl Harbor,” “cyber Katrina,” and even “cyber Armageddon”—gives some idea how pervasive these memes are. Prefix “cyber” to something scary, and you end up with something really scary.

Cyberspace has all sorts of threats, day in and day out. Cybercrime is by far the largest: fraud, through identity theft and other means, extortion, and so on. Cyber-espionage is another, both government- and corporate-sponsored. Traditional hacking, without a profit motive, is still a threat. So is cyber-activism: people, most often kids, playing politics by attacking government and corporate websites and networks.

These threats cover a wide variety of perpetrators, motivations, tactics, and goals. You can see this variety in what the media has mislabeled as “cyberwar.” The attacks against Estonian websites in 2007 were simple hacking attacks by ethnic Russians angry at anti-Russian policies; these were denial-of-service attacks, a normal risk in cyberspace and hardly unprecedented.

A real-world comparison might be if an army invaded a country, then all got in line in front of people at the DMV so they couldn’t renew their licenses. If that’s what war looks like in the 21st century, we have little to fear.

Similar attacks against Georgia, which accompanied an actual Russian invasion, were also probably the responsibility of citizen activists or organized crime. A series of power blackouts in Brazil was caused by criminal extortionists—or was it sooty insulators? China is engaging in espionage, not war, in cyberspace. And so on.

One problem is that there’s no clear definition of “cyberwar.” What does it look like? How does it start? When is it over? Even cybersecurity experts don’t know the answers to these questions, and it’s dangerous to broadly apply the term “war” unless we know a war is going on.

Yet recent news articles have claimed that China declared cyberwar on Google, that Germany attacked China, and that a group of young hackers declared cyberwar on Australia. (Yes, cyberwar is so easy that even kids can do it.) Clearly we’re not talking about real war here, but a rhetorical war: like the war on terror.

We have a variety of institutions that can defend us when attacked: the police, the military, the Department of Homeland Security, various commercial products and services, and our own personal or corporate lawyers. The legal framework for any particular attack depends on two things: the attacker and the motive. Those are precisely the two things you don’t know when you’re being attacked on the Internet. We saw this on July 4 last year, when U.S. and South Korean websites were attacked by unknown perpetrators from North Korea—or perhaps England. Or was it Florida?

We surely need to improve our cybersecurity. But words have meaning, and metaphors matter. There’s a power struggle going on for control of our nation’s cybersecurity strategy, and the NSA and DoD are winning. If we frame the debate in terms of war, if we accept the military’s expansive cyberspace definition of “war,” we feed our fears.

We reinforce the notion that we’re helpless—what person or organization can defend itself in a war?—and others need to protect us. We invite the military to take over security, and to ignore the limits on power that often get jettisoned during wartime.

If, on the other hand, we use the more measured language of cybercrime, we change the debate. Crime fighting requires both resolve and resources, but it’s done within the context of normal life. We willingly give our police extraordinary powers of investigation and arrest, but we temper these powers with a judicial system and legal protections for citizens.

We need to be prepared for war, and a Cyber Command is just as vital as an Army or a Strategic Air Command. And because kid hackers and cyber-warriors use the same tactics, the defenses we build against crime and espionage will also protect us from more concerted attacks. But we’re not fighting a cyberwar now, and the risks of a cyberwar are no greater than the risks of a ground invasion. We need peacetime cyber-security, administered within the myriad structures of public and private security institutions we already have.

This essay previously appeared on CNN.com.
http://www.cnn.com/2010/OPINION/07/07/…

Hyperbole:
http://www.washingtonpost.com/wp-dyn/content/…
http://www.wired.com/threatlevel/2009/06/cyberthreat/
http://www.amazon.com/exec/obidos/ASIN/0061962236/…
http://www.wired.com/dangerroom/2010/04/…
http://www.computerworld.com/s/article/9174682/…
http://www.wired.com/dangerroom/2010/04/…
http://www.salon.com/news/opinion/glenn_greenwald/…
http://www.guardian.co.uk/technology/2010/mar/04/…
http://www.wired.com/threatlevel/2008/01/…
http://www.businessweek.com/the_thread/techbeat/…
http://www.wired.com/threatlevel/2009/04/…
http://www.computerworld.com/s/article/9173967/…
http://thehill.com/opinion/op-ed/…
http://techcrunch.com/2007/10/18/…
http://news.softpedia.com/news/…
http://www.independent.co.uk/news/world/australasia/…
http://www.schneier.com/essay-280.html
http://www.wired.com/dangerroom/2010/05/…

Cyberattacks:
http://www.wired.com/threatlevel/2007/08/…
http://www.csoonline.com/article/443579/…
http://www.csoonline.com/article/499778/…
http://www.cbsnews.com/stories/2009/11/06/60minutes/…
http://www.wired.com/threatlevel/2009/11/…
http://www.schneier.com/essay-227.html

Good article:
http://www.economist.com/node/16481504?…

Earlier this month, I participated in a debate: “The Cyberwar Threat has been Grossly Exaggerated.” Marc Rotenberg of EPIC and I were for the motion; Mike McConnell and Jonathan Zittrain were against. We lost.

We lost fair and square, for a bunch of reasons—we didn’t present our case very well, and Jonathan Zittrain is a way better debater than we were—but basically the vote came down to the definition of “cyberwar.” If you believed in an expansive definition of cyberwar, one that encompassed a lot more types of attacks than traditional war, then you voted against the motion. If you believed in a limited definition of cyberwar, one that is a subset of traditional war, then you voted for it.

http://intelligencesquaredus.org/index.php/…
http://intelligencesquaredus.org/wp-content/uploads/…
http://www.vimeo.com/12464156
http://finance.yahoo.com/news/…
http://www.npr.org/templates/story/story.php?…
http://www.circleid.com/posts/…
http://www.businesswire.ca/portal/site/ca-fr/…
http://jldugger.livejournal.com/38537.html
http://www.darkreading.com/security/vulnerabilities/…
http://www.theatlantic.com/science/archive/2010/06/…

Last month the Senate Homeland Security Committee held hearings on “Protecting Cyberspace as a National Asset: Comprehensive Legislation for the 21st Century.” Unfortunately, the DHS is getting hammered at these hearings, and the NSA is consolidating its power.
http://hsgac.senate.gov/public/index.cfm?…

North Korea was probably not responsible for last year’s cyberattacks. Good thing we didn’t retaliate.
http://www.networkworld.com/news/2010/…
http://www.scmagazineus.com/…


Internet Kill Switch

Last month, Sen. Joe Lieberman, I-Conn., introduced a bill that might—we’re not really sure—give the president the authority to shut down all or portions of the Internet in the event of an emergency. It’s not a new idea. Sens. Jay Rockefeller, D-W.Va., and Olympia Snowe, R-Maine, proposed the same thing last year, and some argue that the president can already do something like this. If this or a similar bill ever passes, the details will change considerably and repeatedly. So let’s talk about the idea of an Internet kill switch in general.

It’s a bad one.

Security is always a trade-off: costs versus benefits. So the first question to ask is: What are the benefits? There is only one possible use of this sort of capability, and that is in the face of a warfare-caliber enemy attack. It’s the primary reason lawmakers are considering giving the president a kill switch. They know that shutting off the Internet, or even isolating the U.S. from the rest of the world, would cause damage, but they envision a scenario where not doing so would cause even more.

That reasoning is based on several flawed assumptions.

The first flawed assumption is that cyberspace has traditional borders, and we could somehow isolate ourselves from the rest of the world using an electronic Maginot Line. We can’t.

Yes, we can cut off almost all international connectivity, but there are lots of ways to get out onto the Internet: satellite phones, obscure ISPs in Canada and Mexico, long-distance phone calls to Asia.

The Internet is the largest communications system mankind has ever created, and it works because it is distributed. There is no central authority. No nation is in charge. Plugging all the holes isn’t possible.

Even if the president ordered all U.S. Internet companies to block, say, all packets coming from China, or restrict non-military communications, or just shut down access in the greater New York area, it wouldn’t work. You can’t figure out what packets do just by looking at them; if you could, defending against worms and viruses would be much easier.

And packets that come with return addresses are easy to spoof. Remember the cyberattack July 4, 2009, that probably came from North Korea, but might have come from England, or maybe Florida? On the Internet, disguising traffic is easy. And foreign cyberattackers could always have dial-up accounts via U.S. phone numbers and make long-distance calls to do their misdeeds.
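The point about return addresses is structural: the source field of an IPv4 packet is just bytes the sender writes in, with nothing that authenticates them. Here is a minimal sketch in Python (the addresses are made-up documentation addresses, and the sketch only builds the header bytes rather than transmitting them, which would require raw-socket privileges):

```python
import struct
import socket

def checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:
        total = (total >> 16) + (total & 0xFFFF)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int = 0) -> bytes:
    """Build a 20-byte IPv4 header with an arbitrary ("spoofed") source
    address. Nothing in the header authenticates src -- that is the point."""
    ver_ihl = (4 << 4) | 5              # IPv4, 5 * 4 = 20-byte header
    total_len = 20 + payload_len
    header = struct.pack("!BBHHHBBH4s4s",
                         ver_ihl, 0, total_len,
                         0x1234, 0,             # identification, flags/fragment
                         64, socket.IPPROTO_UDP,
                         0,                     # checksum placeholder
                         socket.inet_aton(src),
                         socket.inet_aton(dst))
    csum = checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:]

hdr = build_ipv4_header("203.0.113.7", "198.51.100.9")
print(socket.inet_ntoa(hdr[12:16]))  # the receiver sees whatever source we wrote
```

The receiver’s only evidence of origin is bytes 12-15 of the header, which the sender chose freely; that is why attributing an attack to North Korea, England, or Florida is so hard.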

The second flawed assumption is that we can predict the effects of such a shutdown. The Internet is the most complex machine mankind has ever built, and shutting down portions of it would have all sorts of unforeseen ancillary effects.

Would ATMs work? What about the stock exchanges? Which emergency services would fail? Would trucks and trains be able to route their cargo? Would airlines be able to route their passengers? How much of the military’s logistical system would fail?

That’s to say nothing of the variety of corporations that rely on the Internet to function, let alone the millions of Americans who would need to use it to communicate with their loved ones in a time of crisis.

Even worse, these effects would spill over internationally. The Internet is international in complex and surprising ways, and it would be impossible to ensure that the effects of a shutdown stayed domestic and didn’t cause similar disasters in countries we’re friendly with.

The third flawed assumption is that we could build this capability securely. We can’t.

Once we engineered a selective shutdown switch into the Internet, and implemented a way to do what Internet engineers have spent decades making sure never happens, we would have created an enormous security vulnerability. We would make the job of any would-be terrorist intent on bringing down the Internet much easier.

Computer and network security is hard, and every Internet system we’ve ever created has security vulnerabilities. It would be folly to think this one wouldn’t as well. And given how unlikely the risk is, any actual shutdown would be far more likely to be a result of an unfortunate error or a malicious hacker than of a presidential order.

But the main problem with an Internet kill switch is that it’s too coarse a hammer.

Yes, the bad guys use the Internet to communicate, and they can use it to attack us. But the good guys use it, too, and the good guys far outnumber the bad guys.

Shutting the Internet down, either the whole thing or just a part of it, even in the face of a foreign military attack, would do far more damage than it could possibly prevent. And it would hurt others whom we don’t want to hurt.

For years we’ve been bombarded with scare stories about terrorists wanting to shut the Internet down. They’re mostly fairy tales, but they’re scary precisely because the Internet is so critical to so many things.

Why would we want to terrorize our own population by doing exactly what we don’t want anyone else to do? And a national emergency is precisely the worst time to do it.

Just implementing the capability would be very expensive; I would rather see that money going toward securing our nation’s critical infrastructure from attack.

Defending his proposal, Sen. Lieberman pointed out that China has this capability. It’s debatable whether or not it actually does, but it’s actively pursuing the capability because the country cares less about its citizens.

Here in the U.S., it is both wrong and dangerous to give the president the power and ability to commit Internet suicide and terrorize Americans in this way.

This essay was originally published on AOL.com News.
http://www.aolnews.com/opinion/article/…
http://www.opencongress.org/bill/111-s3480/show
http://www.pcmag.com/article2/0,2817,2365393,00.asp
http://www.networkworld.com/columnists/2009/…
http://www.engadget.com/2010/06/24/…

Text of bill:
http://www.govtrack.us/congress/billtext.xpd?…


News

Dating recordings by power line fluctuations:
http://www.theregister.co.uk/2010/06/01/enf_met_police/

In at least three U.S. states, it is illegal to film an active-duty policeman:
https://www.schneier.com/blog/archives/2010/06/…

Doesn’t the DHS have anything better to do than patrol the U.S./Canada border?
http://www.americanthinker.com//2010/06/…

Hot dog security:
https://www.schneier.com/blog/archives/2010/06/…

The Atlantic on stupid terrorists:
http://www.theatlantic.com/magazine/archive/2010/05/…
Reminds me of my own “Portrait of the Modern Terrorist as an Idiot”:
http://www.schneier.com/essay-174.html

Security risks of remote printing to an e-mail address:
https://www.schneier.com/blog/archives/2010/06/…

AT&T’s iPad security breach:
https://www.schneier.com/blog/archives/2010/06/…

Cheating on tests, by the teachers:
http://www.nytimes.com/2010/06/11/education/…

Buying an ATM skimmer:
http://krebsonsecurity.com/2010/06/…
http://krebsonsecurity.com/2010/06/…

The New York Times Room for Debate blog did the topic: “Do We Tolerate Too Many Traffic Deaths?”
http://roomfordebate.blogs.nytimes.com/2010/05/27/…

In an article on using terahertz rays (is that different from terahertz radar?) to detect biological agents, we find this quote: “High-tech, low-tech, we can’t afford to overlook any possibility in dealing with mass casualty events…. You need multiple methods of detection and response. Terrorism comes in many forms; you have to see, smell, taste and analyze everything.” He’s got it completely backwards. I think we can easily afford not to do what he’s saying, and can’t afford to do it. The technology to detect traces of chemical and biological agents is neat, though. And I am very much in favor of research along these lines.
http://www.globalsecuritynewswire.org/gsn/…

Popsicle machines as a security threat:
https://www.schneier.com/blog/archives/2010/06/…

Long, but interesting, profile of WikiLeaks’s Julian Assange.
http://www.newyorker.com/reporting/2010/06/07/…
http://www.guardian.co.uk/media/2010/jun/21/…
http://www.abc.net.au/tv/bigideas/stories/2010/06/…
http://www.huffingtonpost.com/2010/06/11/…

This is only peripherally related, but Bradley Manning—an American soldier—has been arrested for leaking classified documents to WikiLeaks.
http://www.wired.com/threatlevel/2010/06/leak/
http://www.csmonitor.com/USA/Military/2010/0607/…
http://www.huffingtonpost.com/2010/06/07/…
http://news.bbc.co.uk/2/hi/technology/10265430.stm
http://www.washingtonian.com/articles/people/…
http://www.theatlanticwire.com/opinions/view/…
http://abcnews.go.com/Politics/Media/…
http://motherjones.com/mojo/2010/06/…
http://en.wikinews.org/wiki/…

The TacSat-3 “hyperspectral” spy satellite is operational.
http://www.theregister.co.uk/2010/06/11/…

Security trade-offs in crayfish:
http://www.sciencedaily.com/releases/2010/06/…
It’s not that this surprises anyone; it’s that researchers can now try to figure out the exact brain processes that enable the crayfish to make these decisions.

Hacker scare story: “10 Everyday Items Hackers Are Targeting Right Now.”
http://www.foxnews.com/scitech/2010/06/11/…
And Richard Clarke thinks hackers can set your printer on fire.
http://www.amazon.com/exec/obidos/ASIN/0061962236/…

This rant about baby terrorists, from Congressman Louie Gohmert of Texas, is about as dumb as it gets:
https://www.schneier.com/blog/archives/2010/06/…

Space terrorism? Yes, space terrorism. This article, by someone at the European Space Policy Institute, hypes a terrorist threat I’ve never seen hyped before. The author waves a bunch of scare stories around, concludes that “the threat of ‘Space Terrorism’ is both real and latent,” and then talks about countermeasures. Certainly securing our satellites is a good idea, but this is just silly.
http://www.espi.or.at/images/stories/dokumente/…

Cryptography success story from Brazil. The moral, of course, is to choose a strong key and to encrypt the entire drive, not just key files.
http://www.theregister.co.uk/2010/06/28/…

Cryptography failure story from the Russian spies. “Ricci said the steganographic program was activated by pressing control-alt-E and then typing in a 27-character password, which the FBI found written down on a piece of paper during one of its searches.”
http://news.cnet.com/8301-13578_3-20009101-38.html
http://www.computerworld.com/s/article/9178762/…
http://www.darkreading.com/insiderthreat/security/…

Vigilant citizens: 1950 vs. today:
https://www.schneier.com/blog/archives/2010/07/…

Secret stash: hiding objects in everyday objects.
http://yitingcheng.webs.com/psecretstash2010.htm

Tracking location based on water isotope ratios:
http://news.sciencemag.org/sciencenow/2010/06/…
http://pubs.acs.org/stoken/presspac/presspac/full/…

From the National Academies in 2009: “Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities.” It’s 390 pages.
http://books.nap.edu/openbook.php?…

“Don’t Commit Crime”: the sign is from a gas station in the U.K.
https://www.schneier.com/blog/archives/2010/07/…

This is a really interesting philosophical essay: “Does Surveillance Make Us Morally Better?”
http://www.philosophynow.org/issue79/79westacott.htm

Long and interesting article on the Toronto 18, a terrorist cell arrested in 2006. Lots of stuff I had not read before.
http://www3.thestar.com/static/toronto18/index.html

The measures used to prevent cheating during college tests remind me of casino security measures.
http://www.nytimes.com/2010/07/06/education/…

TSA blocks access to websites with “controversial opinions.” I wonder if my blog counts.
http://www.cbsnews.com/…
The TSA reversed itself. Or, at least, they now claim that isn’t what they meant.
http://www.cbsnews.com/…

Serial killers are now terrorists. Try to keep up.
https://www.schneier.com/blog/archives/2010/07/…

The Chaocipher is a mechanical encryption algorithm invented in 1918. No one was able to reverse-engineer the algorithm, given sets of plaintexts and ciphertexts—at least, not publicly. On the other hand, I don’t know how many people tried, or even knew about the algorithm. I’d never heard of it before now. Anyway, for the first time, the algorithm has been revealed. Of course, it’s not able to stand up to computer cryptanalysis.
http://www.ciphermysteries.com/2010/07/03/…
http://www.mountainvistasoft.com/chaocipher/…
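For the curious, the revealed algorithm is simple enough to sketch in code. The Python below is my reading of the published description—two 26-letter disks, a ciphertext alphabet on the left and a plaintext alphabet on the right, each permuted after every letter so the substitution drifts chaotically. The starting alphabets come from the published worked example; treat this as an illustration, not a reference implementation.

```python
# A sketch of the revealed Chaocipher algorithm. Zenith = index 0,
# nadir = index 13; both disks are permuted after each letter.
LEFT  = "HXUCZVAMDSLKPEFJRIGTWOBNYQ"   # ciphertext alphabet
RIGHT = "PTLNBQDEOYSFAVZKGJRIHWXUMC"   # plaintext alphabet

def _permute(left: str, right: str, idx: int) -> tuple[str, str]:
    # Left disk: rotate the just-used ciphertext letter to the zenith,
    # then move the letter at index 2 down to the nadir.
    left = left[idx:] + left[:idx]
    left = left[:2] + left[3:14] + left[2] + left[14:]
    # Right disk: rotate the just-used plaintext letter to the zenith,
    # step one position further, then do the same index-2-to-nadir move.
    right = right[idx + 1:] + right[:idx + 1]
    right = right[:2] + right[3:14] + right[2] + right[14:]
    return left, right

def chaocipher(text: str, decipher: bool = False) -> str:
    left, right = LEFT, RIGHT
    out = []
    for ch in text:
        if decipher:
            idx = left.index(ch)     # ciphertext is looked up on the left disk
            out.append(right[idx])
        else:
            idx = right.index(ch)    # plaintext is looked up on the right disk
            out.append(left[idx])
        left, right = _permute(left, right, idx)
    return "".join(out)
```

Deciphering runs the same permutations in lockstep, so `chaocipher(chaocipher(msg), decipher=True)` returns `msg`.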

Hemingway authentication scheme from 1955, intended as humor:
https://www.schneier.com/blog/archives/2010/07/…

On an Android phone, it’s easy to access someone else’s voicemail by spoofing the caller ID. This isn’t new; what is new is that many people now have easy access to caller ID spoofing. The spoofing only works for voicemail accounts that don’t have a password set up, but AT&T’s default is no password.
http://news.slashdot.org/story/10/06/29/1840241/…

Burglar detection through video analytics:
https://www.schneier.com/blog/archives/2010/07/…

Random numbers from quantum noise. It’s not that we need more ways to get random numbers, but the research is interesting.
http://www.technologyreview.com//arxiv/25355/?…

I don’t think it’s a good idea to give Russian intelligence the source code to Windows 7.
http://www.zdnet.co.uk/news/security/2010/07/08/…


Third SHB Workshop

Last month I attended SHB 2010, the Third Interdisciplinary Workshop on Security and Human Behaviour, at Cambridge University. This is a two-day gathering of computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security—organized by Ross Anderson, Alessandro Acquisti, and me.

SHB 2010:
http://www.cl.cam.ac.uk/~rja14/shb10/

The program:
http://www.cl.cam.ac.uk/~rja14/shb10/schedule10.html

Ross Anderson’s summaries of the talks and discussions:
http://www.lightbluetouchpaper.org/2010/06/28/…

The first SHB workshop:
https://www.schneier.com/blog/archives/2008/06/…

The second SHB workshop:
https://www.schneier.com/blog/archives/2009/06/…


Schneier News

None this month. Summers are always slow.


Data at Rest vs. Data in Motion

For a while now, I’ve pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data—data at rest—it was viewed as a form of communication. In “Applied Cryptography,” I described encrypting stored data in this way: “a stored message is a way for someone to communicate with himself through time.” Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are immediate and instantaneous. Encryption keys can be ephemeral, and systems like the STU-III telephone can be designed such that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a “computer” that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it’s primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people’s brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let’s take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn’t make any sense. The whole point of storing credit card numbers on a website is so they’re accessible—so each time I buy something, I don’t have to type the number in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.
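The futility is easy to demonstrate with a toy sketch. Everything below is invented for illustration: the “cipher” is a deliberately simple SHA-256 counter-mode keystream (not a vetted cipher—the argument doesn’t depend on the cipher’s strength), and the card number is a standard fake test number. An attacker who compromises the server gets the key along with the ciphertext, and the encryption buys nothing.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256 counter-mode keystream.
    For illustration only -- not a real cipher."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The website must decrypt on every purchase, so the key lives on the
# same machine (or at least the same network) as the database.
key = b"key-on-the-same-box"
encrypted_db = keystream_xor(key, b"4111111111111111")  # fake test card number

# An attacker who gets into that machine copies both...
stolen_key, stolen_db = key, encrypted_db
# ...and decrypts at leisure: access to the website equals access to the data.
recovered = keystream_xor(stolen_key, stolen_db)
```

Swapping in a real cipher changes nothing about the failure mode; only moving the key off the reachable network would.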

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet’s infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that—I’m being approximate here), but increases the attacker’s workload exponentially. For many years, we have exploited that mathematical imbalance.
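The arithmetic behind that imbalance can be made concrete. The numbers below are generic brute-force counts, not tied to any particular cipher, and the guess rate is a deliberately generous assumption:

```python
# Brute-force keyspace for a k-bit key: the defender's cost grows roughly
# linearly in k, while the attacker's search grows as 2**k.
def keyspace(bits: int) -> int:
    return 2 ** bits

one_more_bit = keyspace(129) // keyspace(128)         # one added bit doubles the search
doubled_length = keyspace(256) == keyspace(128) ** 2  # doubling the length squares it

# Even at an assumed (and generous) 10**12 guesses per second, exhausting
# a 128-bit keyspace takes on the order of 10**19 years.
years = keyspace(128) / 1e12 / (3600 * 24 * 365)
```

This is the mathematical head start the defender enjoys in cryptography—and exactly what general computer security lacks.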

Computer security is much more balanced. There’ll be a new attack, and a new defense, and a new attack, and a new defense. It’s an arms race between attacker and defender. And it’s a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we’re stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn’t try to break PGP; they simply installed a keyboard sniffer on the target’s computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can’t rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.
http://www.darkreading.com/blog/archives/2010/06/…

As several readers have pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That’s a harder hacking problem, and this is why credit card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it’s a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer security sense, not in a cryptography sense.


Reading Me

The number of different ways to read my essays, commentaries, and links has grown recently. Here’s the rundown:

You can read my writings daily on my blog.
http://www.schneier.com/

These are reprinted on my Facebook page.
http://www.facebook.com/bruce.schneier

They are also reprinted on my LiveJournal feed.
http://syndicated.livejournal.com/bruce_schneier/

You can follow them on Twitter.
http://twitter.com/schneierblog/

You can subscribe to the RSS feed:
http://www.schneier.com//index.rdf

Or you can subscribe to the alternative RSS feed, if you prefer excerpts instead of full text:
http://www.schneier.com//index.xml

Finally, you can read the same writing aggregated once a month and e-mailed directly to you: Crypto-Gram.
http://www.schneier.com/crypto-gram.html

I think that about covers it for useful distribution formats right now.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2010 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.