Blog: December 2012 Archives

Terms of Service as a Security Threat

After the Instagram debacle, in which the company changed its terms of service to give itself greater rights over user photos and then reversed itself after a user backlash, it’s worth thinking about the security threat stemming from terms of service in general.

As cloud computing becomes the norm, as Internet security becomes more feudal, these terms of service agreements define what our service providers can do, both with the data we post and with the information they gather about how we use their service. The agreements are very one-sided—most of the time, we’re not even paying customers of these providers—and can change without warning. And, of course, none of us ever read them.

Here’s one example. Prezi is a really cool presentation system. While you can run presentations locally, it’s basically cloud-based. Earlier this year, I was at a CISO Summit in Prague, and one of the roundtable discussions centered on services like Prezi. CISOs were worried that sensitive company information was leaking out of the company and being stored insecurely in the cloud. My guess is that they would have been much more worried if they had read Prezi’s terms of use:

With respect to Public User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content on and in connection with the manufacture, sale, promotion, marketing and distribution of products sold on, or in association with, the Service, or for purposes of providing you with the Service and promoting the same, in any medium and by any means currently existing or yet to be devised.

With respect to Private User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content solely for purposes of providing you with the Service.

Those paragraphs sure sound like Prezi can do anything it wants, including start a competing business, with any presentation I post to its site. (Note that Prezi’s human-readable—but not legally binding—terms of use document makes no mention of this.) Yes, I know Prezi doesn’t currently intend to do that, but things change, companies fail, assets get bought, and what matters in the end is what the agreement says.

I don’t mean to pick on Prezi; it’s just an example. How many more of these Trojan horses are hiding in commonly used cloud-provider agreements, both from providers that companies decide to use as a matter of policy and from providers that employees use in violation of policy, for reasons of convenience?

Posted on December 31, 2012 at 6:44 AM • 47 Comments

I Seem to Be a Verb

From “The Insider’s TSA Dictionary”:

Bruce Schneiered: (V, ints) When a passenger uses logic in order to confound and perplex an officer into submission. Ex: “A TSA officer took my Swiss army knife, but let my scissors go. I then asked him wouldn’t it be more dangerous if I were to make my scissors into two blades, or to go into the bathroom on the secure side and sharpen my grandmother’s walking stick with one of the scissor blades into a terror spear. Then after I pointed out that all of our bodies contain a lot more than 3.4 ounces of liquids, the TSA guy got all pissed and asked me if I wanted to fly today. I totally Schneirered [sic] his ass.”

Supposedly the site is by a former TSA employee. I have no idea if that’s true.

Posted on December 28, 2012 at 12:34 PM • 19 Comments

Becoming a Police Informant in Exchange for a Lighter Sentence

Fascinating article.

Snitching has become so commonplace that in the past five years at least 48,895 federal convicts—one of every eight—had their prison sentences reduced in exchange for helping government investigators, a USA TODAY examination of hundreds of thousands of court cases found. The deals can chop a decade or more off of their sentences.

How often informants pay to acquire information from brokers such as Watkins is impossible to know, in part because judges routinely seal court records that could identify them. It almost certainly represents an extreme result of a system that puts strong pressure on defendants to cooperate. Still, Watkins’ case is at least the fourth such scheme to be uncovered in Atlanta alone over the past 20 years.

Those schemes are generally illegal because the people who buy information usually lie to federal agents about where they got it. They also show how staggeringly valuable good information has become—prices ran into tens of thousands of dollars, or up to $250,000 in one case, court records show.

There are all sorts of complexities and unintended consequences in this system. This is just a small part of it:

The risks are obvious. If the government rewards paid-for information, wealthy defendants could potentially buy early freedom. Because such a system further muddies the question of how informants—already widely viewed as untrustworthy—know what they claim to know, “individual cases can be undermined and the system itself is compromised,” U.S. Justice Department lawyers said in a 2010 court filing.

Plea bargaining is illegal in many countries precisely because of the perverse incentives it sets up. I talk about this more in Liars and Outliers.

Posted on December 28, 2012 at 6:37 AM • 17 Comments

Breaking Hard-Disk Encryption

The newly announced ElcomSoft Forensic Disk Decryptor can decrypt BitLocker, PGP, and TrueCrypt. And it’s only $300. How does it work?

Elcomsoft Forensic Disk Decryptor acquires the necessary decryption keys by analyzing memory dumps and/or hibernation files obtained from the target PC. You’ll thus need to get a memory dump from a running PC (locked or unlocked) with encrypted volumes mounted, via a standard forensic product or via a FireWire attack. Alternatively, decryption keys can also be derived from hibernation files if a target PC is turned off.
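ElcomSoft doesn’t publish its internals, but the underlying trick is well documented in the cold-boot-attack literature (for example, the aeskeyfind tool): scan memory for byte strings that are consistent with an AES key schedule, since mounted volumes keep their keys expanded in RAM. Here’s a minimal, unoptimized sketch of that idea for AES-128; real tools are far faster and tolerate bit errors in decayed memory:

```python
# Sketch of the key-schedule scan from the cold-boot-attack literature
# (aeskeyfind et al.); ElcomSoft's actual implementation is not public.
# A 16-byte string followed by exactly the 160 bytes it would expand to
# under the AES-128 key schedule is almost certainly a live key.

def _rotl8(x, n):
    return ((x << n) | (x >> (8 - n))) & 0xFF

def _aes_sbox():
    # Generate the AES S-box: GF(2^8) inverse plus the affine transform.
    sbox = [0] * 256
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p *= 3
        q = (q ^ (q << 1)) & 0xFF                              # q /= 3
        q = (q ^ (q << 2)) & 0xFF
        q = (q ^ (q << 4)) & 0xFF
        if q & 0x80:
            q ^= 0x09
        sbox[p] = (q ^ _rotl8(q, 1) ^ _rotl8(q, 2) ^
                   _rotl8(q, 3) ^ _rotl8(q, 4) ^ 0x63)
        if p == 1:
            break
    sbox[0] = 0x63  # zero has no inverse; defined separately
    return sbox

SBOX = _aes_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key_128(key):
    """AES-128 key schedule: 16 key bytes -> 176 bytes of round keys."""
    words = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(words[i - 1])
        if i % 4 == 0:  # RotWord, SubWord, and Rcon every fourth word
            t = [SBOX[t[1]], SBOX[t[2]], SBOX[t[3]], SBOX[t[0]]]
            t[0] ^= RCON[i // 4 - 1]
        words.append([a ^ b for a, b in zip(words[i - 4], t)])
    return bytes(b for w in words for b in w)

def scan_dump(path):
    data = open(path, 'rb').read()
    for off in range(len(data) - 176):
        if expand_key_128(data[off:off + 16]) == data[off:off + 176]:
            print(f'possible AES-128 key at {off:#x}: '
                  f'{data[off:off + 16].hex()}')
```

Finding a key is only half the job: BitLocker, PGP, and TrueCrypt each wrap and use their volume keys differently, so a real tool also has to know what to do with the key once found.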

This isn’t new. I wrote about AccessData doing the same thing in 2007:

Even so, none of this might actually matter. AccessData sells another program, Forensic Toolkit, that, among other things, scans a hard drive for every printable character string. It looks in documents, in the Registry, in e-mail, in swap files, in deleted space on the hard drive … everywhere. And it creates a dictionary from that, and feeds it into PRTK.

And PRTK breaks more than 50 percent of passwords from this dictionary alone.
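AccessData hasn’t published the details either, but the harvesting step is easy to sketch: pull every run of printable characters out of a disk image and deduplicate the result into a wordlist. Something like this (real tools also parse the Registry, mail stores, and slack space in a structure-aware way):

```python
import mmap
import re
import sys

def harvest_strings(image_path, wordlist_path, min_len=6):
    """Extract every printable-ASCII run of at least min_len bytes from a
    raw disk image and write the deduplicated set as a cracking dictionary."""
    with open(image_path, 'rb') as f, \
            mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as data:
        # memory-map the image so a multi-GB file isn't loaded into RAM
        words = set(re.findall(rb'[\x20-\x7e]{%d,}' % min_len, data))
    with open(wordlist_path, 'wb') as out:
        out.write(b'\n'.join(sorted(words)))

if __name__ == '__main__':
    harvest_strings(sys.argv[1], sys.argv[2])  # e.g. disk.img wordlist.txt
```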

It’s getting harder and harder to maintain good file security.

Posted on December 27, 2012 at 1:02 PM • 56 Comments

Public Shaming as a Security Measure

In Liars and Outliers, I talk a lot about the more social forms of security. One of them is reputational. This post is about that squishy sociological security measure: public shaming as a way to punish bigotry (and, by extension, to reduce the incidence of bigotry).

It’s a pretty rambling post, first listing some of the public shaming sites, then trying to figure out whether they’re a good idea or not, and finally coming to the conclusion that shaming doesn’t do very much good and—in many cases—unjustly rewards the shamer.

I disagree with a lot of this. I do agree with:

I do think that shame has a role in the way we control our social norms. Shame is a powerful tool, and it’s something that we use to keep our own actions in check all the time. The source of that shame varies immensely. Maybe we are shamed before God, or our parents, or our boss.

But I disagree with the author’s insistence that “shame, ultimately, has to come from ourselves. We cannot be forced to feel shame.” While technically it’s true, operationally it’s not. Shame comes from others’ reactions to our actions. Yes, we feel it inside—but it originates from our lifelong inculcation into the norms of our social group. And throughout the history of our species, social groups have used shame to effectively punish those who violate social norms. No one wants a bad reputation.

It’s also true that we all have defenses against shame. One of them is to have an alternate social group for whom the shameful behavior is not shameful at all. Another is to simply not care what the group thinks. But none of this makes shame a less valuable tool of societal pressure.

Like all forms of security that society uses to control its members, shame is both useful and valuable. And I’m sure it is effective against bigotry. It might not be obvious how to deploy it effectively in the international and sometimes anonymous world of the Internet, but that’s another discussion entirely.

Posted on December 27, 2012 at 6:21 AM • 32 Comments

Hackers Use Backdoor to Break System

Industrial control system comes with a backdoor:

Although the system was password protected in general, the backdoor through the IP address apparently required no password and allowed direct access to the control system. “[Th]e published backdoor URL provided the same level of access to the company’s control system as the password-protected administrator login,” said the memo.

The only security protecting this backdoor is secrecy. Of course, that never lasts:

Hackers broke into the industrial control system of a New Jersey air conditioning company earlier this year, using a backdoor vulnerability in the system, according to an FBI memo made public this week.

Posted on December 26, 2012 at 6:05 AM • 10 Comments

Phishing via Twitter

Interesting firsthand phishing story:

A few nights ago, I got a Twitter direct message (DM) from a friend saying that someone was saying nasty things about me, with a link. The link was a shortened (t.co) link, so it was hard to see exactly what it pointed to. I followed the link on my cell phone, and got to a website that certainly looked legit, and I was foolish enough to login. Pwnd. A few minutes later, my Twitter account was spewing tweetspam about the latest pseudo-scientific weight loss fad.
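One cheap defense the story suggests: resolve a shortened link’s destination before opening it in a browser that’s logged in to anything. Most shorteners, t.co included, answer a plain HTTP request with a redirect, so a few lines of Python (using the requests library; the link below is a made-up placeholder) will show you the whole chain:

```python
import requests

def expand(short_url):
    """Follow redirects without rendering anything; return the full chain
    so you can see where a shortened link really goes before clicking."""
    resp = requests.head(short_url, allow_redirects=True, timeout=10)
    return [r.url for r in resp.history] + [resp.url]

if __name__ == '__main__':
    for hop in expand('https://t.co/example'):  # hypothetical link
        print(hop)
```

Some shorteners vary their behavior by User-Agent, and even a HEAD request touches the attacker’s server, so this is a convenience, not a guarantee.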

Posted on December 24, 2012 at 6:31 AM • 25 Comments

This Week's Overreactions

Schools go into lockdown over a thermometer, a car backfiring, a bank robbery a few blocks away, a student alone in a gym, a neighbor on the street, and some vague unfounded rumors. And one high-school kid was arrested for drawing pictures of guns. Everywhere else, post-traumatic stupidity syndrome. (It’s not a new phrase—Google shows hits back to 2001—but it’s new to me. It reminds me of this.) I think of it as: “Something must be done. This is something. Therefore, we must do it.”

I’m not going to write about the Newtown school massacre. I wrote this earlier this year after the Aurora shooting, which was a rewrite of this about the 2007 Virginia Tech shootings. I feel as if I’m endlessly repeating myself. This essay, also from 2007, on the anti-terrorism “War on the Unexpected,” is also relevant. Just remember, we’re the safest we’ve been in 40 years.

Posted on December 21, 2012 at 12:12 PM • 54 Comments

Amazon Replacement-Order Scam

Clever:

Chris Cardinal discovered someone running such a scam on Amazon using his account: the scammer contacted Amazon pretending to be Chris, supplying his billing address (this is often easy to guess by digging into things like public phone books, credit reports, or domain registration records). Then the scammer secured the order numbers of items Chris recently bought on Amazon. In a separate transaction, the scammer reported that the items were never delivered and requested replacement items to be sent to a remailer/freight forwarder in Portland.

The scam hinged on the fact that Gmail addresses are “dot-blind” (foo@gmail.com is the same as f.oo@gmail.com), but Amazon treats them as separate addresses. This let the scammer run support chats and other Amazon transactions that weren’t immediately apparent to Chris.
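The aliasing rule is trivial to state in code, which is also how a service would close the hole: canonicalize addresses before treating them as distinct identities. A minimal sketch (the dot and plus-tag rules here are Gmail-specific):

```python
def canonical_gmail(address):
    """Canonicalize a Gmail address: dots in the local part are ignored and
    anything after '+' is a user-chosen tag. Other providers treat dots as
    significant, so the rule must be applied per-domain."""
    local, _, domain = address.lower().partition('@')
    if domain in ('gmail.com', 'googlemail.com'):
        local = local.split('+', 1)[0].replace('.', '')
    return local + '@' + domain

assert canonical_gmail('f.oo@gmail.com') == canonical_gmail('foo@gmail.com')
assert canonical_gmail('f.oo@example.com') != canonical_gmail('foo@example.com')
```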

Details here:

If you’ve used Amazon.com at all, you’ll notice something very quickly: they require your password. For pretty much anything. Want to change an address? Password. Add a billing method? Password. Check your order history? Password. Amazon is essentially very secure as a web property. But as you can see from my chat transcript above, the CSR team falls like dominoes with just a few simple data points and a little bit of authoritative prying.

[…]

It’s clear that there’s a scam going on and it’s probably going largely unnoticed. It doesn’t cost the end user anything, except perhaps suspicion if they ever have a legitimate fraud complaint. But it’s also highlighting that Amazon is entirely too lax with their customer support team. I was told by my rep earlier today that all you need is the name, email address, and billing address and they pretty much can let you do what you need to do. They’re unable to add payment methods or place new orders, or review existing payment methods, but they are able to read back order numbers and process refund/replacement requests.

There’s a great deal of potential for fraud here. For one thing, it would be dirt simple for me to get and receive a second camera for free. That’s the sort of thing you’re really only going to be able to pull off once a year or so, but still, they sent it basically no questions asked. (It was delivered Fedex Smartpost, which means handed off to the USPS, so perhaps the lack of tracking custody contributes to their willingness to push the replacement.) Why Amazon’s reps were willing to assign the replacement shipment to a different address is beyond me. I was told it’s policy to only issue them to the original address, but some clever social engineering (“I’m visiting family in Oregon, can you ship it there?”, for instance) will get around that.

EDITED TO ADD (1/14): Comments from the original author of the piece.

Posted on December 21, 2012 at 6:20 AM • 21 Comments

China Now Blocking Encryption

The “Great Firewall of China” is now able to detect and block encryption:

A number of companies providing “virtual private network” (VPN) services to users in China say the new system is able to “learn, discover and block” the encrypted communications methods used by a number of different VPN systems.

China Unicom, one of the biggest telecoms providers in the country, is now killing connections where a VPN is detected, according to one company with a number of users in China.
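Neither report says how the detection works, but VPN protocols are straightforward to fingerprint passively. For example, an OpenVPN session over UDP opens with a client hard-reset packet whose first byte encodes a fixed opcode, which a deep-packet-inspection box can match. Here is a toy illustration of that kind of check (not a claim about the Great Firewall’s actual internals, which aren’t public):

```python
# OpenVPN-over-UDP fingerprint: the first payload byte of a session is
# (opcode << 3) | key_id, and a fresh client handshake uses opcode 7
# (P_CONTROL_HARD_RESET_CLIENT_V2) with key_id 0, i.e. the byte 0x38.
OPENVPN_CLIENT_RESET_V2 = 0x38

def looks_like_openvpn_handshake(udp_payload: bytes) -> bool:
    """First-packet heuristic only. A production classifier would also
    track the server's reply, packet sizes, and timing, and might follow
    up with active probing before killing a flow."""
    return len(udp_payload) > 0 and udp_payload[0] == OPENVPN_CLIENT_RESET_V2
```

This is also why obfuscation layers that wrap VPN traffic to look like ordinary TLS exist: they remove exactly these stable byte patterns.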

EDITED TO ADD (1/14): Some interesting blog comments from an American living and working in China.

Posted on December 20, 2012 at 6:32 AM • 54 Comments

Information-Age Law Enforcement Techniques

This is an interesting blog post:

Buried inside a recent United Nations Office on Drugs and Crime report titled Use of Internet for Terrorist Purposes one can carve out details and examples of law enforcement electronic surveillance techniques that are normally kept secret.

[…]

Point 280: International members of the guerilla group Revolutionary Armed Forces of Colombia (FARC) communicated with their counterparts hiding messages inside images with steganography and sending the emails disguised as spam, deleting Internet browsing cache afterwards to make sure that the authorities would not get hold of the data. Spanish and Colombian authorities cooperated to break the encryption keys and successfully deciphered the messages.

[…]

Point 198: It explains how an investigator can circumvent Truecrypt plausible deniability feature (hidden container), advising computer forensics investigators to take into consideration during the computer analysis to check if there is any missing volume of data.

[…]

Point 210: Explains how Remote Administration Trojans (RATs) can be introduced into a suspects computer to collect data or control his computer and it makes reference to hardware and software keyloggers as well as packet sniffers.

There’s more at the above link. Here’s the final report.
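For a sense of how low the bar is for the steganography in Point 280: the textbook version hides a message in the least-significant bits of an image’s pixels, which pass unnoticed because they’re visually indistinguishable from noise. A minimal sketch using the Pillow library (the report doesn’t say what tool FARC actually used, and plain LSB embedding like this is easily caught by statistical tests):

```python
from PIL import Image

def embed(cover_path, message, out_path):
    """Hide `message` (bytes) in the least-significant bit of each pixel
    channel, with a 4-byte length prefix so it can be found again."""
    img = Image.open(cover_path).convert('RGB')
    flat = [channel for pixel in img.getdata() for channel in pixel]
    payload = len(message).to_bytes(4, 'big') + message
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(flat):
        raise ValueError('cover image too small for message')
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit
    img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    img.save(out_path, 'PNG')  # must be lossless; JPEG destroys the bits

def extract(stego_path):
    flat = [ch for px in Image.open(stego_path).convert('RGB').getdata()
            for ch in px]
    def read_bytes(n, bit_offset):
        bits = [flat[i] & 1 for i in range(bit_offset, bit_offset + 8 * n)]
        return bytes(int(''.join(map(str, bits[i:i + 8])), 2)
                     for i in range(0, 8 * n, 8))
    length = int.from_bytes(read_bytes(4, 0), 'big')
    return read_bytes(length, 32)  # message starts after the 32-bit prefix
```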

Posted on December 19, 2012 at 6:47 AM • 30 Comments

Book Review: Against Security

Against Security: How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger, by Harvey Molotch, Princeton University Press, 278 pages, $35.

Security is both a feeling and a reality, and the two are different things. People can feel secure when they’re actually not, and they can be secure even when they believe otherwise.

This discord explains much of what passes for our national discourse on security policy. Security measures often are nothing more than security theater, making people feel safer without actually increasing their protection.

A lot of psychological research has tried to make sense out of security, fear, risk, and safety. But however fascinating the academic literature is, it often misses the broader social dynamics. New York University’s Harvey Molotch helpfully brings a sociologist’s perspective to the subject in his new book Against Security.

Molotch delves deeply into a few examples and uses them to derive general principles. He starts Against Security with a mundane topic: the security of public restrooms. It’s a setting he knows better than most, having authored Toilet: The Public Restroom and the Politics of Sharing (New York University Press) in 2010. It turns out the toilet is not a bad place to begin a discussion of the sociology of security.

People fear various things in public restrooms: crime, disease, embarrassment. Different cultures either ignore those fears or address them in culture-specific ways. Many public lavatories, for example, have no-touch flushing mechanisms, no-touch sinks, no-touch towel dispensers, and even no-touch doors, while some Japanese commodes play prerecorded sounds of water running, to better disguise the embarrassing tinkle.

Restrooms have also been places where, historically and in some locations, people could do drugs or engage in gay sex. Sen. Larry Craig (R-Idaho) was arrested in 2007 for soliciting sex in the bathroom at the Minneapolis-St. Paul International Airport, suggesting that such behavior is not a thing of the past. To combat these risks, the managers of some bathrooms—men’s rooms in American bus stations, in particular—have taken to removing the doors from the toilet stalls, forcing everyone to defecate in public to ensure that no one does anything untoward (or unsafe) behind closed doors.

Subsequent chapters discuss security in subways, at airports, and on airplanes; at Ground Zero in lower Manhattan; and after Hurricane Katrina in New Orleans. Each of these chapters is an interesting sociological discussion of both the feeling and reality of security, and all of them make for fascinating reading. Molotch has clearly done his homework, conducting interviews on the ground, asking questions designed to elicit surprising information.

Molotch demonstrates how complex and interdependent the factors that comprise security are. Sometimes we implement security measures against one threat, only to magnify another. He points out that more people have died in car crashes since 9/11 because they were afraid to fly—or because they didn’t want to deal with airport security—than died during the terrorist attacks. Or to take a more prosaic example, special “high-entry” subway turnstiles make it much harder for people to sneak in for a free ride but also make platform evacuations much slower in the case of an emergency.

The common thread in Against Security is that effective security comes less from the top down and more from the bottom up. Molotch’s subtitle telegraphs this conclusion: “How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger.” It’s the word ambiguous that’s important here. When we don’t know what sort of threats we want to defend against, it makes sense to give the people closest to whatever is happening the authority and the flexibility to do what is necessary. In many of Molotch’s anecdotes and examples, the authority figure—a subway train driver, a policeman—has to break existing rules to provide the security needed in a particular situation. Many security failures are exacerbated by a reflexive adherence to regulations.

Molotch is absolutely right to home in on this kind of individual initiative and resilience as a critical source of true security. Current U.S. security policy is overly focused on specific threats. We defend individual buildings and monuments. We defend airplanes against certain terrorist tactics: shoe bombs, liquid bombs, underwear bombs. These measures have limited value because the number of potential terrorist tactics and targets is much greater than the ones we have recently observed. Does it really make sense to spend a gazillion dollars just to force terrorists to switch tactics? Or drive to a different target? In the face of modern society’s ambiguous dangers, it is flexibility that makes security effective.

We get much more bang for our security dollar by not trying to guess what terrorists are going to do next. Investigation, intelligence, and emergency response are where we should be spending our money. That doesn’t mean mass surveillance of everyone or the entrapment of incompetent terrorist wannabes; it means tracking down leads—the sort of thing that caught the 2006 U.K. liquid bombers. They chose their tactic specifically to evade established airport security at the time, but they were arrested in their London apartments well before they got to the airport on the strength of other kinds of intelligence.

In his review of Against Security in Times Higher Education, aviation security expert Omar Malik takes issue with the book’s seeming trivialization of the airplane threat and Molotch’s failure to discuss terrorist tactics. “Nor does he touch on the multitude of objects and materials that can be turned into weapons,” Malik laments. But this is precisely the point. Our fears of terrorism are wildly out of proportion to the actual threat, and an analysis of various movie-plot threats does nothing to make us safer.

In addition to urging people to be more reasonable about potential threats, Molotch makes a strong case for optimism and kindness. Treating every air traveler as a potential terrorist and every Hurricane Katrina refugee as a potential looter is dehumanizing. Molotch argues that we do better as a society when we trust and respect people more. Yes, the occasional bad thing will happen, but 1) it happens less often, and is less damaging, than you probably think, and 2) individuals naturally organize to defend each other. This is what happened during the evacuation of the Twin Towers and in the aftermath of Katrina before official security took over. Those in charge often do a worse job than the common people on the ground.

While that message will please skeptics of authority, Molotch sees a role for government as well. In fact, many of his lessons are primarily aimed at government agencies, to help them design and implement more effective security systems. His final chapter is invaluable on that score, discussing how we should focus on nurturing the good in most people—by giving them the ability and freedom to self-organize in the event of a security disaster, for example—rather than focusing solely on the evil of the very few. It is a hopeful yet realistic message for an irrationally anxious time. Whether those government agencies will listen is another question entirely.

This review was originally published at reason.com.

Posted on December 14, 2012 at 12:24 PM • 12 Comments

Detecting Edited Audio

Interesting development in forensic analysis:

Comparing the unique pattern of the frequencies on an audio recording with a database that has been logging these changes for 24 hours a day, 365 days a year provides a digital watermark: a date and time stamp on the recording.

Philip Harrison, from JP French Associates, another forensic audio laboratory that has been logging the hum for several years, says: “Even if [the hum] is picked up at a very low level that you cannot hear, we can extract this information.”

[…]

It is a technique known as Electric Network Frequency (ENF) analysis, and it is helping forensic scientists to separate genuine, unedited recordings from those that have been tampered with.

Dr Harrison said: “We can extract [the hum] and compare it with the database – if it is a continuous recording, it will all match up nicely.

“If we’ve got some breaks in the recording, if it’s been stopped and started, the profiles won’t match or there will be a section missing. Or if it has come from two different recordings looking as if it is one, we’ll have two different profiles within that one recording.”
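The signal processing behind this is surprisingly accessible. The mains hum sits at a nominal 50 Hz (60 Hz in North America) but wanders slightly with grid load; band-pass the recording around that frequency, estimate the dominant frequency in short windows, and you have a profile to slide along the reference database. A rough sketch with NumPy/SciPy (the window and band parameters are illustrative, not anyone’s production settings):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def enf_profile(wav_path, mains=50.0, window_s=1.0):
    """Estimate the mains-hum frequency once per window; the resulting
    series over time is the recording's ENF profile."""
    rate, audio = wavfile.read(wav_path)
    audio = audio.astype(float)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)            # mix down to mono
    # isolate a narrow band around the nominal mains frequency
    sos = butter(4, [mains - 1.0, mains + 1.0], btype='bandpass',
                 fs=rate, output='sos')
    hum = sosfiltfilt(sos, audio)
    win = int(rate * window_s)
    profile = []
    for start in range(0, len(hum) - win + 1, win):
        seg = hum[start:start + win] * np.hanning(win)
        # zero-pad the FFT for finer frequency resolution in the band
        spectrum = np.abs(np.fft.rfft(seg, n=8 * win))
        freqs = np.fft.rfftfreq(8 * win, 1.0 / rate)
        band = (freqs >= mains - 1.0) & (freqs <= mains + 1.0)
        profile.append(freqs[band][np.argmax(spectrum[band])])
    return np.array(profile)

def best_match_offset(recording, database):
    """Slide the recording's profile along the reference database; the
    offset with the smallest error dates the recording. A splice shows up
    as two halves that match the database at inconsistent offsets."""
    n = len(recording)
    errors = [np.mean((database[i:i + n] - recording) ** 2)
              for i in range(len(database) - n + 1)]
    return int(np.argmin(errors))
```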

Posted on December 12, 2012 at 12:59 PM • 32 Comments

The National Cyber Security Framework Manual

This book is available as a free pdf download:

The National Cyber Security Framework Manual provides detailed background information and in-depth theoretical frameworks to help the reader understand the various facets of National Cyber Security, according to different levels of public policy formulation. The four levels of government—political, strategic, operational and tactical/technical—each have their own perspectives on National Cyber Security, and each is addressed in individual sections within the Manual. Additionally, the Manual gives examples of relevant institutions in National Cyber Security, from top-level policy coordination bodies down to cyber crisis management structures and similar institutions.

It’s by the NATO Cooperative Cyber Defence Centre of Excellence in Tallinn. A paper copy will be published in January.

Posted on December 11, 2012 at 1:03 PM • 0 Comments

Dictators Shutting Down the Internet

Excellent article: “How to Shut Down Internets.”

First, he describes what just happened in Syria. Then:

Egypt turned off the internet by using the Border Gateway Protocol trick, and also by switching off DNS. This has a similar effect to throwing bleach over a map. The location of every street and house in the country is blotted out. All the Egyptian ISPs were, and probably still are, government licensees. It took nothing but a short series of phone calls to effect the shutdown.

There are two reasons why these shutdowns happen in this manner. The first is that these governments wish to black out activities like, say, indiscriminate slaughter. That much is obvious. The second is sometimes not so obvious. These governments intend to turn the internet back on. Deep down, they believe they will be in their seats the next month and have the power to turn it back on. They believe they will win. It is the arrogance of power: they take their future for granted, and need only hide from the world the corpses it will be built on.

Cory Doctorow asks: “Why would a basket-case dictator even allow his citizenry to access the Internet in the first place?” and “Why not shut down the Internet the instant trouble breaks out?” The reason is that the Internet is a valuable tool for social control. Dictators can use the Internet for surveillance and propaganda as well as censorship, and they only resort to extreme censorship when the value of that outweighs the value of doing all three in some sort of totalitarian balance.

Related: Two articles on the countries most vulnerable to an Internet shutdown, based on their connectivity architecture.

Posted on December 11, 2012 at 6:08 AM • 28 Comments

Bypassing Two-Factor Authentication

Yet another way two-factor authentication has been bypassed:

For a user to fall prey to Eurograbber, he or she must first be using a computer infected with the trojan. This was typically done by luring the user onto a malicious web page via a round of unfortunate web surfing or email phishing attempts. Once infected, the trojan would monitor that computer’s web browser for banking sessions. When a user visited a banking site, Eurograbber would inject JavaScript and HTML markup into their browser, prompting the user for their phone number under the guise of a “banking software security upgrade”. This is also the key to Eurograbber’s ability to bypass two-factor authentication.

It’s amazing that I wrote about this almost eight years ago. Here’s another example of the same sort of failure.

Posted on December 10, 2012 at 1:04 PM • 40 Comments

Squids on the Economist Cover

Four squids on the cover of this week’s Economist represent the four massive (and intrusive) data-driven Internet giants: Google, Facebook, Apple, and Amazon.

Interestingly, these are the same four companies I’ve been listing as the new corporate threat to the Internet.

The first of three pillars propping up this outside threat are big data collectors, which in addition to Apple and Google, Schneier identified as Amazon and Facebook. (Notice Microsoft didn’t make the cut.) The goal of their data collection is for marketers to be able to make snap decisions about the product tastes, credit worthiness, and employment suitability of millions of people. Often, this information is fed into systems maintained by governments.

Notice that Microsoft didn’t make the Economist’s cut either.

I gave that talk at the RSA Conference in February of this year. The link in the article is from another conference the week before, where I test-drove the talk.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on December 7, 2012 at 4:04 PM • 44 Comments

Roger Williams' Cipher Cracked

Another historical cipher, this one from the 1600s, has been cracked:

Senior math major Lucas Mason-Brown, who has done the majority of the decoding, said his first instinct was to develop a statistical tool. The 21-year-old from Belmont, Mass., used frequency analysis, which looks at the frequency of letters or groups of letters in a text, but initially didn’t get far.

He picked up critical clues after learning Williams had been trained in shorthand as a court stenographer in London, and built his own proprietary shorthand off an existing system. Mason-Brown refined his analysis and came up with a rough key.

Williams’ system consisted of 28 symbols that stand for a combination of English letters or sounds. How they’re arranged is key to their meaning; arrange them one way and you get one word, arrange them another, you get something different. One major complication, according to Mason-Brown: Williams often improvised.
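Frequency analysis is the standard first move against any monoalphabetic substitution, and the counting step is only a few lines of code. A toy version against ordinary letters (Williams’s 28 shorthand symbols would first need to be tokenized, and, as Mason-Brown found, counting alone only yields a rough key to refine by hand):

```python
from collections import Counter

# English letters ordered by approximate frequency, most common first.
ENGLISH_BY_FREQUENCY = 'etaoinshrdlcumwfgypbvkjxqz'

def frequency_table(ciphertext):
    """Count each alphabetic symbol, most common first. For a simple
    substitution cipher, the top ciphertext symbols likely stand for
    e, t, a, and so on."""
    return Counter(c for c in ciphertext.lower() if c.isalpha()).most_common()

def naive_key_guess(ciphertext):
    """Map the n-th most common ciphertext symbol to the n-th most common
    English letter: a crude starting key, to be corrected against cribs
    and context, which is roughly how historical decipherments proceed."""
    ranked = [sym for sym, _ in frequency_table(ciphertext)]
    return dict(zip(ranked, ENGLISH_BY_FREQUENCY))
```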

Posted on December 5, 2012 at 6:01 AM • 15 Comments

Feudal Security

It’s a feudal world out there.

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them—or to a particular one we don’t like. Or we can spread our allegiance around. But either way, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.

Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.

Of course, I’m romanticizing here; European history was never this simple, and the description is based on stories of that time, but that’s the general model.

And it’s this model that’s starting to permeate computer security today.

I Pledge Allegiance to the United States of Convenience

Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.

This model is breaking, largely due to two developments:

  1. New Internet-enabled devices where the vendor maintains more control over the hardware and software than we do—like the iPhone and Kindle; and
  2. Services where the host maintains our data for us—like Flickr and Hotmail.

Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.

We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we’ve lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we would manage by ourselves; Apple does a great job keeping malware out of its iPhone app store.

In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won’t be exposed to hackers, criminals, and malware. We trust that governments won’t be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don’t know what sort of security methods they’re using, or how they’re configured. We mostly can’t install our own security products on iPhones or Android phones; we certainly can’t install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates—iPhone, for example—but we rarely know what they’re about or whether they’ll break anything else. (On the Kindle, we don’t even have that freedom.)

The Good, the Bad, and the Ugly

I’m not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a much better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.

Feudalism is good for the individual, for small startups, and for medium-sized businesses that can’t afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it’s more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They’ve been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don’t allow vassals to audit them, even if those vassals are themselves large and powerful.

Yet feudal security isn’t without its risks.

Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.

Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off—again, like serfs—to rival lords…or turn us in to the authorities.

Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn’t do to each other).

Today’s internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.

This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.

Like everything else in security, it’s a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.

Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent—or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems—it’s time we step in, in our role as governments (both national and international), to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.

A version of this essay was originally published on Wired.com.

Posted on December 3, 2012 at 7:24 AM • 81 Comments
