December 15, 2012
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1212.html>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively comment section. An RSS feed is available.
In this issue:
- Feudal Security
- E-Mail Security in the Wake of Petraeus
- Squids on the "Economist" Cover
- Schneier News
- IT for Oppression
- Dictators Shutting Down the Internet
- Book Review: "Against Security"
Feudal Security

It's a feudal world out there.
Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether... for Facebook.
These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them -- or to a particular one we don't like. Or we can spread our allegiance around. But either way, it's becoming increasingly difficult to not pledge allegiance to at least one of them.
Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.
Of course, I'm romanticizing here; European history was never this simple, and the description is based on stories of that time, but that's the general model.
And it's this model that's starting to permeate computer security today.
Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.
This model is breaking, largely due to two developments:
1. New Internet-enabled devices where the vendor maintains more control over the hardware and software than we do -- like the iPhone and Kindle; and
2. Services where the host maintains our data for us -- like Flickr and Hotmail.
Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.
We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we've lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we would manage by ourselves; Apple does a great job keeping malware out of its iPhone app store.
In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won't be exposed to hackers, criminals, and malware. We trust that governments won't be allowed to illegally spy on us.
Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don't know what sort of security methods they're using, or how they're configured. We mostly can't install our own security products on iPhones or Android phones; we certainly can't install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates -- iPhone, for example -- but we rarely know what they're about or whether they'll break anything else. (On the Kindle, we don't even have that freedom.)
I'm not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.
Feudalism is good for the individual, for small startups, and for medium-sized businesses that can't afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.
For large organizations, however, it's more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They've been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don't allow vassals to audit them, even if those vassals are themselves large and powerful.
Yet feudal security isn't without its risks.
Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.
Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off -- again, like serfs -- to rival lords...or turn us in to the authorities.
Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn't do to each other).
Today's internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.
This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.
Like everything else in security, it's a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.
Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent -- or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems -- it's time we step in, in our role as governments (both national and international), to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.
A version of this essay was originally published on Wired.com.
E-Mail Security in the Wake of Petraeus

I've been reading lots of articles discussing how little e-mail and Internet privacy we actually have in the U.S. Here's one:
The FBI obliged -- apparently obtaining subpoenas for Internet Protocol logs, which allowed them to connect the sender's anonymous Google Mail account to others accessed from the same computers, accounts that belonged to Petraeus biographer Paula Broadwell. The bureau could then subpoena guest records from hotels, tracking the WiFi networks, and confirm that they matched Broadwell's travel history. None of this would have required judicial approval -- let alone a Fourth Amendment search warrant based on probable cause.
While we don't know the investigators' other methods, the FBI has an impressive arsenal of tools to track Broadwell's digital footprints -- all without a warrant. On a mere showing of "relevance," they can obtain a court order for cell phone location records, providing a detailed history of her movements, as well as all people she called. Little wonder that law enforcement requests to cell providers have exploded -- with a staggering 1.3 million demands for user data just last year, according to major carriers.
An order under this same weak standard could reveal all her e-mail correspondents and Web surfing activity. With the rapid decline of data storage costs, an ever larger treasure trove is routinely retained for ever longer time periods by phone and Internet companies.
Had the FBI chosen to pursue this investigation as a counterintelligence inquiry rather than a cyberstalking case, much of that data could have been obtained without even a subpoena. National Security Letters, secret tools for obtaining sensitive financial and telecommunications records, require only the say-so of an FBI field office chief.
While the details of this investigation that have leaked thus far provide us all a fascinating glimpse into the usually sensitive methods used by FBI agents, this should also serve as a warning, by demonstrating the extent to which the government can pierce the veil of communications anonymity without ever having to obtain a search warrant or other court order from a neutral judge.
The guest lists from hotels, IP login records, as well as the creative request to email providers for "information about other accounts that have logged in from this IP address" are all forms of data that the government can obtain with a subpoena. There is no independent review, no check against abuse, and further, the target of the subpoena will often never learn that the government obtained data (unless charges are filed, or, as in this particular case, government officials eagerly leak details of the investigation to the press). Unfortunately, our existing surveillance laws really only protect the "what" being communicated; the government's powers to determine "who" communicated remain largely unchecked.
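The cross-account technique in the quote above -- linking an anonymous account to others that logged in from the same IP addresses -- is, at bottom, a simple log join. A minimal sketch, with entirely hypothetical account names, addresses, and log records:

```python
# Toy sketch of IP-overlap correlation: given login records of
# (account, source IP), find which other accounts share source IPs
# with a target account. All data here is hypothetical.
from collections import defaultdict

def correlate_accounts(logins, target):
    """Return accounts that logged in from any IP the target used."""
    ips_by_account = defaultdict(set)
    for account, ip in logins:
        ips_by_account[account].add(ip)
    target_ips = ips_by_account[target]
    return {acct for acct, ips in ips_by_account.items()
            if acct != target and ips & target_ips}

logins = [
    ("anon123", "198.51.100.7"),   # anonymous webmail account
    ("anon123", "203.0.113.9"),
    ("jdoe", "203.0.113.9"),       # personal account, same hotel Wi-Fi
    ("unrelated", "192.0.2.55"),
]
print(correlate_accounts(logins, "anon123"))  # -> {'jdoe'}
```

No warrant is needed for any of this; each step is just a subpoena for logs followed by a set intersection.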
The EFF tries to explain the relevant laws. Summary: they're confusing, and they don't protect us very much.
My favorite quote is from the "New York Times":
Marc Rotenberg, executive director of the Electronic Privacy Information Center in Washington, said the chain of unexpected disclosures was not unusual in computer-centric cases.
"It's a particular problem with cyberinvestigations -- they rapidly become open-ended because there's such a huge quantity of information available and it's so easily searchable," he said, adding, "If the C.I.A. director can get caught, it's pretty much open season on everyone else."
And a day later:
"If the director of central intelligence isn't able to successfully keep his emails private, what chance do I have?" said Kurt Opsahl, a senior staff attorney at the Electronic Frontier Foundation, a digital-liberties advocacy group.
In more words:
But there's another, more important lesson to be gleaned from this tale of a biographer run amok. Broadwell's debacle confirms something that some privacy experts have been warning about for years: Government surveillance of ordinary citizens is now cheaper and easier than ever before. Without needing to go before a judge, the government can gather vast amounts of information about us with minimal expenditure of manpower. We used to be able to count on a certain amount of privacy protection simply because invading our privacy was hard work. That is no longer the case. Our always-on, Internet-connected, cellphone-enabled lives are an open door to Big Brother.
Remember that this problem is bigger than Petraeus. The FBI goes after electronic records all the time:
In Google's semi-annual transparency report released Tuesday, the company stated that it received 20,938 requests from governments around the world for its users' private data in the first six months of 2012. Nearly 8,000 of those requests came from the U.S. government, and 7,172 of them were fulfilled to some degree, an increase of 26% from the prior six months, according to Google's stats.
So what's the answer? Would they have been safe if they'd used Tor or a regular old VPN? Silent Circle? Something else? A "New York Times" article attempts to give advice; this is the article's most important caveat:
DON'T MESS UP. It is hard to pull off one of these steps, let alone all of them all the time. It takes just one mistake -- forgetting to use Tor, leaving your encryption keys where someone can find them, connecting to an airport Wi-Fi just once -- to ruin you.
"Robust tools for privacy and anonymity exist, but they are not integrated in a way that makes them easy to use," Mr. Blaze warned. "We've all made the mistake of accidentally hitting 'Reply All.' Well, if you're trying to hide your e-mails or account or I.P. address, there are a thousand other mistakes you can make."
In the end, Mr. Kaminsky noted, if the F.B.I. is after your e-mails, it will find a way to read them. In that case, any attempt to stand in its way may just lull you into a false sense of security.
Some people think that if something is difficult to do, "it has security benefits, but that's all fake -- everything is logged," said Mr. Kaminsky. "The reality is if you don't want something to show up on the front page of 'The New York Times,' then don't say it."
The real answer is to rein in the FBI, of course:
If we don't take steps to rein in the burgeoning surveillance state now, there's no guarantee we'll even be aware of the ways in which control is exercised through this information architecture. We will all remain exposed but the extent of our exposure, and the potential damage done to democracy, is likely to remain invisible.
More in the "Financial Times":
"Hopefully this [case] will be a wake-up call for Congress that the Stored Communications Act is old and busted," Mr Fakhoury says.
I don't see any chance of that happening anytime soon.
News

Research into one VM stealing crypto keys from another VM running on the same hardware.
Jamming 4G cell networks is easy.
I noticed this security theater quote in an article about how increased security and a general risk aversion is harming US diplomatic missions: "Barbara Bodine, who was the U.S. ambassador to Yemen during the Qaeda bombing of the U.S.S. Cole in 2000, told me she believes that much of the security American diplomats are forced to travel with is counterproductive. 'There's this idea that if we just throw more security guys at the problem, it will go away,' she said. 'These huge convoys they force you to travel in, with a bristling personal security detail, give you the illusion of security, not real security. They just draw a lot of attention and make you a target. It's better to fly under the radar.'" It's a good article overall.
Can anyone make heads or tails of this story? Anonymous claims that it sabotaged Karl Rove's secret software designed to hack the Ohio vote.
Remember that Ohio was not the deciding state in the election. Neither was Florida nor Virginia. It was Colorado. So even if there were this magic election-stealing software running in Ohio, it wouldn't have made any difference.
For my part, I'd like a little -- you know -- evidence.
Great story about the decryption of a secret society's documents from the 1740s.
Good article on the different ways the police can eavesdrop on cell phone calls.
"Recommendations to Prevent Catastrophic Threats," by the Federation of American Scientists, 9 November 2012. It's twelve specific sets of recommendations for twelve specific threats.
The Naval Postgraduate School's Center for Homeland Defense and Security is running its sixth annual essay competition. There are cash prizes.
Stewart Baker, Orin Kerr, and Eugene Volokh on the legality of hackback.
Some of the confetti at the Macy's Thanksgiving Day Parade in New York consisted of confidential documents from the Nassau County Police Department, shredded sideways.
Good article on the psychology of IT security trade-offs. I agree with the conclusion that the solution isn't to convince people to make better choices, but to change the IT architecture so that it's easier to make better choices.
Amusing post on the plausibility of the evil plans from the various James Bond movies.
There's a whole book on this.
There's also an interview with the author.
Advances in attacking ATMs: cash traps and card traps are the new thing.
Another historical cipher, this one from the 1600s, has been cracked:
I usually don't think of pairing comedy with cryptography, but: "Robin Ince and Brian Cox are joined on stage by comedian Dave Gorman, author and Enigma Machine owner Simon Singh and Bletchley Park enthusiast Dr Sue Black as they discuss secret science, code-breaking and the extraordinary achievements of the team working at Bletchley during WW II."
I have no idea if this $3,000 ATM skimmer is real. If I had to guess, I would say no.
Yet another way two-factor authentication has been bypassed:
"The National Cyber Security Framework Manual" is now available as a free PDF download. It's by the NATO Cooperative Cyber Defense Center of Excellence in Tallinn. A paper copy will be published in January.
Information about drone flights over the U.S.
Forensic advances in detecting edited audio.
QR code scams:
How censorship works in North Korea:
Ross Anderson recalls the history of security economics:
Squids on the "Economist" Cover

Four squids on the cover of this week's "Economist" represent the four massive (and intrusive) data-driven Internet giants: Google, Facebook, Apple, and Amazon.
Interestingly, these are the same four companies I've been listing as the new corporate threat to the Internet.
The first of three pillars propping up this outside threat is big data collectors, which in addition to Apple and Google, Schneier identified as Amazon and Facebook. (Notice Microsoft didn't make the cut.) The goal of their data collection is for marketers to be able to make snap decisions about the product tastes, credit worthiness, and employment suitability of millions of people. Often, this information is fed into systems maintained by governments.
Notice that Microsoft didn't make the "Economist's" cut either.
I gave that talk at the RSA Conference in February of this year.
IT for Oppression

I've been thinking a lot about how information technology, and the Internet in particular, is becoming a tool for oppressive governments. As Evgeny Morozov describes in his great book "The Net Delusion: The Dark Side of Internet Freedom", repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, and propaganda. And they're getting really good at it.
For a lot of us who imagined that the Internet would spark an inevitable wave of Internet freedom, this has come as a bit of a surprise. But it turns out that information technology is not just a tool for freedom-fighting rebels under oppressive governments; it's also a tool for those oppressive governments. Basically, IT magnifies power; the more power you have, the more it can be magnified in IT.
I think we got this wrong -- anyone remember John Perry Barlow's 1996 manifesto? -- because, like most technologies, IT technologies are first used by the more agile individuals and groups outside the formal power structures. In the same way criminals can make use of a technological innovation faster than the police can, dissidents in countries all over the world were able to make use of Internet technologies faster than governments could. Unfortunately, and inevitably, governments have caught up.
This is the "security gap" I talk about in the closing chapters of "Liars and Outliers."
I thought about all these things as I read this article on how the Syrian government hacked into the computers of dissidents:
The cyberwar in Syria began with a feint. On Feb. 8, 2011, just as the Arab Spring was reaching a crescendo, the government in Damascus suddenly reversed a long-standing ban on websites such as Facebook, Twitter, YouTube, and the Arabic version of Wikipedia. It was an odd move for a regime known for heavy-handed censorship; before the uprising, police regularly arrested bloggers and raided Internet cafes. And it came at an odd time. Less than a month earlier demonstrators in Tunisia, organizing themselves using social networking services, forced their president to flee the country after 23 years in office. Protesters in Egypt used the same tools to stage protests that ultimately led to the end of Hosni Mubarak's 30-year rule. The outgoing regimes in both countries deployed riot police and thugs and tried desperately to block the websites and accounts affiliated with the revolutionaries. For a time, Egypt turned off the Internet altogether.
Syria, however, seemed to be taking the opposite tack. Just as protesters were casting about for the means with which to organize and broadcast their messages, the government appeared to be handing them the keys.
The first documented attack in the Syrian cyberwar took place in early May 2011, some two months after the start of the uprising. It was a clumsy one. Users who tried to access Facebook in Syria were presented with a fake security certificate that triggered a warning on most browsers. People who ignored it and logged in would be giving up their user name and password, and with them, their private messages and contacts.
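The fake-certificate attack described above works only if the client accepts a certificate it can't verify. One defense that goes beyond ordinary CA checks is certificate pinning: comparing the SHA-256 fingerprint of the presented certificate against a known-good value recorded over a clean connection. A minimal sketch of the idea; the certificate bytes and pin below are entirely made up, not real Facebook data:

```python
# Sketch of certificate pinning: a client keeps the SHA-256
# fingerprint of the server certificate it saw on a known-clean
# connection, and refuses any connection presenting a different one.
# The "certificates" here are placeholder bytes, not real DER data.
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate's raw (DER) bytes."""
    return hashlib.sha256(cert_der).hexdigest()

def pin_ok(cert_der: bytes, expected_pin: str) -> bool:
    """True only if the presented cert matches the recorded pin."""
    return fingerprint(cert_der) == expected_pin

real_cert = b"--placeholder: the site's genuine certificate--"
mitm_cert = b"--placeholder: certificate injected by the attacker--"

pin = fingerprint(real_cert)   # recorded on a known-clean connection

print(pin_ok(real_cert, pin))  # -> True: proceed
print(pin_ok(mitm_cert, pin))  # -> False: warn the user and abort
```

A browser warning is exactly this check failing; the Syrian attack counted on users clicking through it anyway.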
I dislike this being called a "cyberwar," but that's my only complaint with the article.
There are no easy solutions here, especially because technologies that defend against one of those three things -- surveillance, censorship, and propaganda -- often make one of the others easier. But this is an important problem to solve if we want the Internet to be a vehicle of freedom and not control.
This is a good 90-minute talk about how governments have tried to block Tor:
Dictators Shutting Down the Internet

Excellent article: "How to Shut Down Internets."
First, he describes what just happened in Syria. Then:
Egypt turned off the internet by using the Border Gateway Protocol trick, and also by switching off DNS. This has a similar effect to throwing bleach over a map. The location of every street and house in the country is blotted out. All the Egyptian ISPs were, and probably still are, government licensees. It took nothing but a short series of phone calls to effect the shutdown.
There are two reasons why these shutdowns happen in this manner. The first is that these governments wish to black out activities like, say, indiscriminate slaughter. That much is obvious. The second is sometimes not so obvious. These governments intend to turn the internet back on. Deep down, they believe they will be in their seats the next month and have the power to turn it back on. They believe they will win. It is the arrogance of power: they take their future for granted, and need only hide from the world the corpses it will be built on.
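The "Border Gateway Protocol trick" described above is route withdrawal: the country's networks stop being announced, so the rest of the world's routers simply have nowhere to send the traffic. A toy model of that effect, using longest-prefix matching over a route table; the prefixes and next-hop names are illustrative only:

```python
# Toy model of BGP route withdrawal: a router does longest-prefix
# matching over announced routes. Withdrawing a country's prefixes
# doesn't touch its networks directly -- it just leaves everyone
# else with no route to them. Prefixes below are documentation
# ranges, not real allocations.
import ipaddress

class Router:
    def __init__(self):
        self.routes = {}  # ip_network -> next hop

    def announce(self, prefix, next_hop):
        self.routes[ipaddress.ip_network(prefix)] = next_hop

    def withdraw(self, prefix):
        self.routes.pop(ipaddress.ip_network(prefix), None)

    def lookup(self, addr):
        """Longest-prefix match; None means destination unreachable."""
        ip = ipaddress.ip_address(addr)
        matches = [net for net in self.routes if ip in net]
        if not matches:
            return None
        return self.routes[max(matches, key=lambda net: net.prefixlen)]

r = Router()
r.announce("203.0.113.0/24", "upstream-1")
r.announce("198.51.100.0/24", "upstream-2")
print(r.lookup("203.0.113.8"))   # -> 'upstream-1'
r.withdraw("203.0.113.0/24")     # the "shutdown": the route vanishes
print(r.lookup("203.0.113.8"))   # -> None
```

That is why a handful of phone calls to licensed ISPs suffices: whoever controls the announcements controls reachability.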
Cory Doctorow asks: "Why would a basket-case dictator even allow his citizenry to access the Internet in the first place?" and "Why not shut down the Internet the instant trouble breaks out?" The reason is that the Internet is a valuable tool for social control. Dictators can use the Internet for surveillance and propaganda as well as censorship, and they only resort to extreme censorship when the value of that outweighs the value of doing all three in some sort of totalitarian balance.
Related: an article on the countries most vulnerable to an Internet shutdown, based on their connectivity architecture.
Book Review: "Against Security"

"Against Security: How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger," by Harvey Molotch, Princeton University Press, 278 pages, $35.
Security is both a feeling and a reality, and the two are different things. People can feel secure when they're actually not, and they can be secure even when they believe otherwise.
This discord explains much of what passes for our national discourse on security policy. Security measures often are nothing more than security theater, making people feel safer without actually increasing their protection.
A lot of psychological research has tried to make sense out of security, fear, risk, and safety. But however fascinating the academic literature is, it often misses the broader social dynamics. New York University's Harvey Molotch helpfully brings a sociologist's perspective to the subject in his new book "Against Security."
Molotch delves deeply into a few examples and uses them to derive general principles. He starts "Against Security" with a mundane topic: the security of public restrooms. It's a setting he knows better than most, having authored "Toilet: The Public Restroom and the Politics of Sharing" (New York University Press) in 2010. It turns out the toilet is not a bad place to begin a discussion of the sociology of security.
People fear various things in public restrooms: crime, disease, embarrassment. Different cultures either ignore those fears or address them in culture-specific ways. Many public lavatories, for example, have no-touch flushing mechanisms, no-touch sinks, no-touch towel dispensers, and even no-touch doors, while some Japanese commodes play prerecorded sounds of water running, to better disguise the embarrassing tinkle.
Restrooms have also been places where, historically and in some locations, people could do drugs or engage in gay sex. Sen. Larry Craig (R-Idaho) was arrested in 2007 for soliciting sex in the bathroom at the Minneapolis-St. Paul International Airport, suggesting that such behavior is not a thing of the past. To combat these risks, the managers of some bathrooms -- men's rooms in American bus stations, in particular -- have taken to removing the doors from the toilet stalls, forcing everyone to defecate in public to ensure that no one does anything untoward (or unsafe) behind closed doors.
Subsequent chapters discuss security in subways, at airports, and on airplanes; at Ground Zero in lower Manhattan; and after Hurricane Katrina in New Orleans. Each of these chapters is an interesting sociological discussion of both the feeling and reality of security, and all of them make for fascinating reading. Molotch has clearly done his homework, conducting interviews on the ground, asking questions designed to elicit surprising information.
Molotch demonstrates how complex and interdependent the factors that comprise security are. Sometimes we implement security measures against one threat, only to magnify another. He points out that more people have died in car crashes since 9/11 because they were afraid to fly -- or because they didn't want to deal with airport security -- than died during the terrorist attacks. Or to take a more prosaic example, special "high-entry" subway turnstiles make it much harder for people to sneak in for a free ride but also make platform evacuations much slower in the case of an emergency.
The common thread in "Against Security" is that effective security comes less from the top down and more from the bottom up. Molotch's subtitle telegraphs this conclusion: "How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger." It's the word *ambiguous* that's important here. When we don't know what sort of threats we want to defend against, it makes sense to give the people closest to whatever is happening the authority and the flexibility to do what is necessary. In many of Molotch's anecdotes and examples, the authority figure -- a subway train driver, a policeman -- has to break existing rules to provide the security needed in a particular situation. Many security failures are exacerbated by a reflexive adherence to regulations.
Molotch is absolutely right to home in on this kind of individual initiative and resilience as a critical source of true security. Current U.S. security policy is overly focused on specific threats. We defend individual buildings and monuments. We defend airplanes against certain terrorist tactics: shoe bombs, liquid bombs, underwear bombs. These measures have limited value because the number of potential terrorist tactics and targets is much greater than the ones we have recently observed. Does it really make sense to spend a gazillion dollars just to force terrorists to switch tactics? Or drive to a different target? In the face of modern society's ambiguous dangers, it is flexibility that makes security effective.
We get much more bang for our security dollar by not trying to guess what terrorists are going to do next. Investigation, intelligence, and emergency response are where we should be spending our money. That doesn't mean mass surveillance of everyone or the entrapment of incompetent terrorist wannabes; it means tracking down leads -- the sort of thing that caught the 2006 U.K. liquid bombers. They chose their tactic specifically to evade established airport security at the time, but they were arrested in their London apartments well before they got to the airport on the strength of other kinds of intelligence.
In his review of "Against Security" in "Times Higher Education," aviation security expert Omar Malik takes issue with the book's seeming trivialization of the airplane threat and Molotch's failure to discuss terrorist tactics. "Nor does he touch on the multitude of objects and materials that can be turned into weapons," Malik laments. But this is precisely the point. Our fears of terrorism are wildly out of proportion to the actual threat, and an analysis of various movie-plot threats does nothing to make us safer.
In addition to urging people to be more reasonable about potential threats, Molotch makes a strong case for optimism and kindness. Treating every air traveler as a potential terrorist and every Hurricane Katrina refugee as a potential looter is dehumanizing. Molotch argues that we do better as a society when we trust and respect people more. Yes, the occasional bad thing will happen, but 1) it happens less often, and is less damaging, than you probably think, and 2) individuals naturally organize to defend each other. This is what happened during the evacuation of the Twin Towers and in the aftermath of Katrina before official security took over. Those in charge often do a worse job than the common people on the ground.
While that message will please skeptics of authority, Molotch sees a role for government as well. In fact, many of his lessons are primarily aimed at government agencies, to help them design and implement more effective security systems. His final chapter is invaluable on that score, discussing how we should focus on nurturing the good in most people -- by giving them the ability and freedom to self-organize in the event of a security disaster, for example -- rather than focusing solely on the evil of the very few. It is a hopeful yet realistic message for an irrationally anxious time. Whether those government agencies will listen is another question entirely.
This review was originally published at reason.com.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Liars and Outliers," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2012 by Bruce Schneier.