Entries Tagged "Facebook"


DEA Sets Up Fake Facebook Page in Woman's Name

This is a creepy story. A woman had her phone seized by the Drug Enforcement Administration and gave the agency permission to look at it. Without her knowledge or consent, agents took photos off the phone (the article says they were “racy”) and used them to set up a fake Facebook page in her name.

The woman sued the government over this. Extra creepy was the government’s defense in court: “Defendants admit that Plaintiff did not give express permission for the use of photographs contained on her phone on an undercover Facebook page, but state the Plaintiff implicitly consented by granting access to the information stored in her cell phone and by consenting to the use of that information to aid in an ongoing criminal investigations [sic].”

The article was edited to say: “Update: Facebook has removed the page and the Justice Department said it is reviewing the incident.” So maybe this is just an overzealous agent and not official DEA policy.

But as Marcy Wheeler said, this is a good reason to encrypt your cell phone.

Posted on October 15, 2014 at 7:06 AM

The Continuing Public/Private Surveillance Partnership

If you’ve been reading the news recently, you might think that corporate America is doing its best to thwart NSA surveillance.

Google just announced that it is encrypting Gmail when you access it from your computer or phone, and between data centers. Last week, Mark Zuckerberg personally called President Obama to complain about the NSA using Facebook as a means to hack computers, and Facebook’s Chief Security Officer explained to reporters that the attack technique has not worked since last summer. Yahoo, Google, Microsoft, and others are now regularly publishing “transparency reports,” listing approximately how many government data requests the companies have received and complied with.

On the government side, last week the NSA’s General Counsel Rajesh De seemed to have thrown those companies under the bus by stating that—despite their denials—they knew all about the NSA’s collection of data under both the PRISM program and some unnamed “upstream” collection on communications links.

Yes, it may seem like the public/private surveillance partnership has frayed—but, unfortunately, it is alive and well. The interests of massive Internet companies and government agencies still largely align: to keep us all under constant surveillance. When they bicker, it’s mostly role-playing designed to keep us blasé about what’s really going on.

The U.S. intelligence community is still playing word games with us. The NSA collects our data based on four different legal authorities: the Foreign Intelligence Surveillance Act (FISA) of 1978, Executive Order 12333 of 1981 (modified in 2004 and 2008), Section 215 of the Patriot Act of 2001, and Section 702 of the FISA Amendments Act (FAA) of 2008. Be careful when someone from the intelligence community uses the caveat “not under this program” or “not under this authority”; almost certainly it means that whatever they’re denying is done under some other program or authority. So when De said that companies knew about NSA collection under Section 702, it doesn’t mean they knew about the other collection programs.

The big Internet companies know of PRISM—although not under that code name—because that’s how the program works; the NSA serves them with FISA orders. Those same companies did not know about any of the other surveillance against their users conducted under the far more permissive EO 12333. Google and Yahoo did not know about MUSCULAR, the NSA’s secret program to eavesdrop on the trunk connections between their data centers. Facebook did not know about QUANTUMHAND, the NSA’s secret program to attack Facebook users. And none of the target companies knew that the NSA was harvesting their users’ address books and buddy lists.

These companies are certainly pissed that the publicity surrounding the NSA’s actions is undermining their users’ trust in their services, and they’re losing money because of it. Cisco, IBM, cloud service providers, and others have announced that they’re losing billions, mostly in foreign sales.

These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like. IBM’s letter to its clients last week is an excellent example. The letter lists five “simple facts” that it hopes will mollify its customers, but the items are so qualified with caveats that they do the exact opposite to anyone who understands the full extent of NSA surveillance. And IBM’s spending $1.2 billion on data centers outside the U.S. will only reassure customers who don’t realize that National Security Letters require a company to turn over data, regardless of where in the world it is stored.

Google’s recent actions, and similar actions by many other Internet companies, will definitely improve users’ security against surreptitious government collection programs—both the NSA’s and other governments’—but their assurances deliberately ignore the massive security vulnerability built into their services by design. Google, and by extension the U.S. government, still has access to your communications on Google’s servers.

Google could change that. It could encrypt your e-mail so only you could decrypt and read it. It could provide for secure voice and video so no one outside the conversations could eavesdrop.

It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others.
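For concreteness, here is a minimal sketch of the end-to-end model these companies could offer, using the open-source PyNaCl library. The key handling and message here are illustrative, not any provider’s actual design:

```python
from nacl.public import PrivateKey, SealedBox

# Each user generates a key pair; the provider never sees the private key.
recipient_key = PrivateKey.generate()

# The sender encrypts to the recipient's *public* key.
ciphertext = SealedBox(recipient_key.public_key).encrypt(b"meet me at noon")

# The provider can store and relay `ciphertext` but cannot read it;
# only the holder of the private key can decrypt.
plaintext = SealedBox(recipient_key).decrypt(ciphertext)
assert plaintext == b"meet me at noon"
```

Under a design like this, the company could not turn over message contents even if ordered to, because it never holds the keys. That is precisely the property these services do not provide.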

Why not? Partly because they want to keep the ability to eavesdrop on your conversations. Surveillance is still the business model of the Internet, and every one of those companies wants access to your communications and your metadata. Your private thoughts and conversations are the product they sell to their customers. We also have learned that they read your e-mail for their own internal investigations.

But even if this were not true, even if—for example—Google were willing to forgo data mining your e-mail and video conversations in exchange for the marketing advantage doing so would give it over Microsoft, it still won’t offer you real security. It can’t.

The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.

This isn’t paranoia. We know that the U.S. government ordered the secure e-mail provider Lavabit to turn over its master keys and compromise every one of its users. We know that the U.S. government convinced Microsoft—either through bribery, coercion, threat, or legal compulsion—to make changes in how Skype operates, to make eavesdropping easier.

We don’t know what sort of pressure the U.S. government has put on Google and the others. We don’t know what secret agreements those companies have reached with the NSA. We do know the NSA’s BULLRUN program to subvert Internet cryptography was successful against many common protocols. Did the NSA demand Google’s keys, as it did with Lavabit? Did its Tailored Access Operations group break into Google’s servers and steal the keys?

We just don’t know.

The best we have are caveat-laden pseudo-assurances. At SXSW earlier this month, Google Executive Chairman Eric Schmidt tried to reassure the audience by saying that he was “pretty sure that information within Google is now safe from any government’s prying eyes.” A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

Google, Facebook, Microsoft, and the others are already on the record as supporting legislative changes that would rein in this surveillance. It would be better if they openly acknowledged their users’ insecurity and increased their pressure on the government to change, rather than trying to fool their users and customers.

This essay previously appeared on TheAtlantic.com.

Posted on March 31, 2014 at 9:18 AM

Automatic Face-Recognition Software Getting Better

Facebook has developed a face-recognition system that works almost as well as the human brain:

Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.

Human brains are optimized for facial recognition, which makes this even more impressive.

This kind of technology will change video surveillance. Right now, cameras are mostly untargeted, and identifying the people they record is largely a forensic, after-the-fact activity. Face recognition will make cameras part of an automated, real-time process for identifying people.
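To make that concrete, here is a toy sketch of automated identification against a watchlist, using the open-source face_recognition library; the image paths and the single-photo watchlist are invented for illustration:

```python
import face_recognition  # open-source wrapper around dlib's face models

# Build a "watchlist" from one known photo (hypothetical path).
known_image = face_recognition.load_image_file("watchlist/alice.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Compare every face found in a captured camera frame to the watchlist.
frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print("Automated hit: this frame contains a watchlisted face")
```

Run something like this over every frame from every camera in a city, and identification stops being forensic work and becomes a continuous, automated feed.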

Posted on March 20, 2014 at 7:12 AM

Building an Online Lie Detector

There’s an interesting project to detect false rumors on the Internet.

The EU-funded project aims to classify online rumours into four types: speculation—such as whether interest rates might rise; controversy—as over the MMR vaccine; misinformation, where something untrue is spread unwittingly; and disinformation, where it’s done with malicious intent.

The system will also automatically categorise sources to assess their authority, such as news outlets, individual journalists, experts, potential eye witnesses, members of the public or automated ‘bots’. It will also look for a history and background, to help spot where Twitter accounts have been created purely to spread false information.

It will search for sources that corroborate or deny the information, and plot how the conversations on social networks evolve, using all of this information to assess whether it is true or false. The results will be displayed to the user in a visual dashboard, to enable them to easily see whether a rumour is taking hold.

I have no idea how well it will work, or even whether it will work, but I like research in this direction. Of the three primary Internet mechanisms for social control, surveillance and censorship have received a lot more attention than propaganda. Anything that can potentially detect propaganda is a good thing.
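As a thought experiment, the quoted description maps onto a simple data model. Here is a toy sketch, with all names and weights invented by me rather than taken from the project, of how source authority and account age might feed a credibility score:

```python
from dataclasses import dataclass
from enum import Enum

class RumourType(Enum):
    SPECULATION = "speculation"
    CONTROVERSY = "controversy"
    MISINFORMATION = "misinformation"   # untrue, spread unwittingly
    DISINFORMATION = "disinformation"   # untrue, spread maliciously

@dataclass
class Source:
    name: str
    category: str        # "news outlet", "journalist", "eyewitness", "bot", ...
    authority: float     # 0.0-1.0; higher means more established
    account_age_days: int

def credibility(corroborating: list[Source], denying: list[Source]) -> float:
    """Toy score: weigh corroboration against denial by source authority,
    discounting brand-new accounts that may exist only to spread rumours."""
    def weight(s: Source) -> float:
        age_factor = min(s.account_age_days / 365, 1.0)
        return s.authority * age_factor
    support = sum(weight(s) for s in corroborating)
    oppose = sum(weight(s) for s in denying)
    total = support + oppose
    return support / total if total else 0.5  # 0.5 = no evidence either way
```

The hard part, of course, is everything this sketch assumes away: classifying the rumour, categorizing the sources, and detecting coordinated bot accounts.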

Three news articles.

Posted on February 21, 2014 at 8:34 AM

Surveillance as a Business Model

Google recently announced that it would start including individual users’ names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached—without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.

These changes come on the heels of Google’s move to explore replacing tracking cookies with something that users have even less control over. Microsoft is doing something similar by developing its own tracking technology.

More generally, lots of companies are evading the “Do Not Track” rules, which were meant to give users a say in whether companies track them. It turns out the whole “Do Not Track” effort has been a sham.

It shouldn’t come as a surprise that big technology companies are tracking us on the Internet even more aggressively than before.

If these features don’t sound particularly beneficial to you, it’s because you’re not the customer of any of these companies. You’re the product, and you’re being improved for their actual customers: their advertisers.

This is nothing new. For years, these sites and others have systematically improved their “product” by reducing user privacy. This excellent infographic, for example, illustrates how Facebook has done so over the years.

The “Do Not Track” law serves as a sterling example of how bad things are. When it was proposed, it was supposed to give users the right to demand that Internet companies not track them. Internet companies fought hard against the law, and when it was passed, they fought to ensure that it didn’t have any benefit to users. Right now, complying is entirely voluntary, meaning that no Internet company has to follow the law. If a company does, because it wants the PR benefit of seeming to take user privacy seriously, it can still track its users.

Really: if you tell a “Do Not Track”-enabled company that you don’t want to be tracked, it will stop showing you personalized ads. But your activity will be tracked—and your personal information collected, sold and used—just like everyone else’s. It’s best to think of it as a “track me in secret” law.
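Mechanically, “Do Not Track” is nothing more than an advisory HTTP header; nothing in the protocol enforces it. A minimal sketch in Python (example.com is a placeholder):

```python
import requests

# DNT: 1 signals that the user opts out of tracking. Honoring it is
# entirely voluntary on the server's side.
response = requests.get("https://example.com", headers={"DNT": "1"})

# Nothing stops the server from setting tracking cookies, logging the
# request, or selling the data anyway.
print(response.headers.get("Set-Cookie"))
```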

Of course, people don’t think of it that way. Most people aren’t fully aware of how much of their data is collected by these sites. And, as the “Do Not Track” story illustrates, Internet companies are doing their best to keep it that way.

The result is a world where our most intimate personal details are collected and stored. I used to say that Google has a more intimate picture of what I’m thinking about than my wife does. But that doesn’t go far enough: Google has a more intimate picture than I do. The company knows exactly what I am thinking about, how much I am thinking about it, and when I stop thinking about it: all from my Google searches. And it remembers all of that forever.

As the Edward Snowden revelations continue to expose the full extent of the National Security Agency’s eavesdropping on the Internet, it has become increasingly obvious how much of that has been enabled by the corporate world’s existing eavesdropping on the Internet.

The public/private surveillance partnership is fraying, but it’s largely alive and well. The NSA didn’t build its eavesdropping system from scratch; it got itself a copy of what the corporate world was already collecting.

There are a lot of reasons why Internet surveillance is so prevalent and pervasive.

One, users like free things, and don’t realize how much value they’re giving away to get them. We know that “free” is a special price that confuses people’s thinking.

Google’s 2013 third quarter profits were nearly $3 billion; that profit is the difference between how much our privacy is worth and the cost of the services we receive in exchange for it.

Two, Internet companies deliberately make privacy not salient. When you log onto Facebook, you don’t think about how much personal information you’re revealing to the company; you’re chatting with your friends. When you wake up in the morning, you don’t think about how you’re going to allow a bunch of companies to track you throughout the day; you just put your cell phone in your pocket.

And three, the Internet’s winner-takes-all market means that privacy-preserving alternatives have trouble getting off the ground. How many of you know that there is a Google alternative called DuckDuckGo that doesn’t track you? Or that you can use cut-out sites to anonymize your Google queries? I have opted out of Facebook, and I know it affects my social life.

There are two types of changes that need to happen in order to fix this. First, there’s the market change. We need to become actual customers of these sites so we can use purchasing power to force them to take our privacy seriously. But that’s not enough. Because of the market failures surrounding privacy, a second change is needed. We need government regulations that protect our privacy by limiting what these sites can do with our data.

Surveillance is the business model of the Internet—Al Gore recently called it a “stalker economy.” All major websites run on advertising, and the more personal and targeted that advertising is, the more revenue the site gets for it. As long as we users remain the product, there is minimal incentive for these companies to provide any real privacy.

This essay previously appeared on CNN.com.

Posted on November 25, 2013 at 6:53 AM

Restoring Trust in Government and the Internet

In July 2012, responding to allegations that the video-chat service Skype—owned by Microsoft—was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company’s blog to deny it.

Turns out that wasn’t quite true.

Or at least he—or the company’s lawyers—carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn’t changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.

At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn’t collect data on hundreds of millions of Americans. He was lying, too. He later defended his lie by inventing a new definition of the word “collect,” an excuse that didn’t even pass the laugh test.

As Edward Snowden’s documents reveal more about the NSA’s activities, it’s becoming clear that we can’t trust anything anyone official says about these programs.

Google and Facebook insist that the NSA has no “direct access” to their servers. Of course not; the smart way for the NSA to get all the data is through sniffers.

Apple says it’s never heard of PRISM. Of course not; that’s the internal name of the NSA database. Companies are publishing reports purporting to show how few requests for customer-data access they’ve received, a meaningless number when a single Verizon request can cover all of their customers. The Guardian reported that Microsoft secretly worked with the NSA to subvert the security of Outlook, something it carefully denies. Even President Obama’s justifications and denials are phrased with the intent that the listener will take his words very literally and not wonder what they really mean.

NSA Director Gen. Keith Alexander has claimed that the NSA’s massive surveillance and data-mining programs have helped stop more than 50 terrorist plots, 10 inside the U.S. Do you believe him? I think it depends on your definition of “helped.” We’re not told whether these programs were instrumental in foiling the plots or whether they just happened to be of minor help because the data was there. It also depends on your definition of “terrorist plots.” An examination of plots that the FBI claims to have foiled since 9/11 reveals that would-be terrorists have commonly been delusional, and most have been egged on by FBI undercover agents or informants.

Left alone, few were likely to have accomplished much of anything.

Both government agencies and corporations have cloaked themselves in so much secrecy that it’s impossible to verify anything they say; revelation after revelation demonstrates that they’ve been lying to us regularly and tell the truth only when there’s no alternative.

There’s much more to come. Right now, the press has published only a tiny percentage of the documents Snowden took with him. And Snowden’s files are only a tiny percentage of the number of secrets our government is keeping, awaiting the next whistle-blower.

Ronald Reagan once said “trust but verify.” That works only if we can verify. In a world where everyone lies to us all the time, we have no choice but to trust blindly, and we have no reason to believe that anyone is worthy of blind trust. It’s no wonder that most people are ignoring the story; it’s just too much cognitive dissonance to try to cope with it.

This sort of thing can destroy our country. Trust is essential in our society. And if we can’t trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.

Rebuilding trust is not easy, as anyone who has betrayed or been betrayed by a friend or lover knows, but the path involves transparency, oversight and accountability. Transparency first involves coming clean. Not a little bit at a time, not only when you have to, but complete disclosure about everything. Then it involves continuing disclosure. No more secret rulings by secret courts about secret laws. No more secret programs whose costs and benefits remain hidden.

Oversight involves meaningful constraints on the NSA, the FBI and others. This will be a combination of things: a court system that acts as a third-party advocate for the rule of law rather than a rubber-stamp organization, a legislature that understands what these organizations are doing and regularly debates requests for increased power, and vibrant public-sector watchdog groups that analyze and debate the government’s actions.

Accountability means that those who break the law, lie to Congress or deceive the American people are held accountable. The NSA has gone rogue, and while it’s probably not possible to prosecute people for what they did under the enormous veil of secrecy it currently enjoys, we need to make it clear that this behavior will not be tolerated in the future. Accountability also means voting, which means voters need to know what our leaders are doing in our name.

This is the only way we can restore trust. A market economy doesn’t work unless consumers can make intelligent buying decisions based on accurate product information. That’s why we have agencies like the FDA, truth-in-packaging laws and prohibitions against false advertising.

In the same way, democracy can’t work unless voters know what the government is doing in their name. That’s why we have open-government laws. Secret courts making secret rulings on secret laws, and companies flagrantly lying to consumers about the insecurity of their products and services, undermine the very foundations of our society.

Since the Snowden documents became public, I have been receiving e-mails from people seeking advice on whom to trust. As a security and privacy expert, I’m expected to know which companies protect their users’ privacy and which encryption programs the NSA can’t break. The truth is, I have no idea. No one outside the classified government world does. I tell people that they have no choice but to decide whom they trust and to then trust them as a matter of faith. It’s a lousy answer, but until our government starts down the path of regaining our trust, it’s the only thing we can do.

This essay originally appeared on CNN.com.

EDITED TO ADD (8/7): Two more links describing how the US government lies about NSA surveillance.

Posted on August 7, 2013 at 6:29 AM

Finding Sociopaths on Facebook

On his blog, Scott Adams suggests that it might be possible to identify sociopaths based on their interactions on social media.

My hypothesis is that science will someday be able to identify sociopaths and terrorists by their patterns of Facebook and Internet use. I’ll bet normal people interact with Facebook in ways that sociopaths and terrorists couldn’t duplicate.

Anyone can post fake photos and acquire lots of friends who are actually acquaintances. But I’ll bet there are so many patterns and tendencies of “normal” use on Facebook that a terrorist wouldn’t be able to successfully fake it.

Okay, but so what? Imagine you had such an amazingly accurate test…then what? Do we investigate those who test positive, even though there’s no suspicion that they’ve actually done anything? Do we follow them around? Subject them to additional screening at airports? Throw them in jail because we know the streets will be safer because of it? Do we want to live in a Minority Report world?

The problem isn’t just that such a system is wrong, it’s that the mathematics of testing makes this sort of thing pretty ineffective in practice. It’s called the “base rate fallacy.” Suppose you have a test that’s 90% accurate in identifying both sociopaths and non-sociopaths. If you assume that 4% of people are sociopaths, then the chance of someone who tests positive actually being a sociopath is only about 27%. (For every thousand people tested, 90% of the 40 sociopaths will test positive—36 people—but so will 10% of the 960 non-sociopaths—96 people.) You have to postulate a test with an amazing 99% accuracy—only a 1% false positive rate—even to have an 80% chance of someone testing positive actually being a sociopath.
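The arithmetic here is just Bayes’ theorem; a quick check in Python:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that someone who tests positive is actually a sociopath."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# 90%-accurate test, 4% base rate: only ~27% of positives are real.
print(positive_predictive_value(0.04, 0.90, 0.90))  # 0.2727...

# Even at 99% accuracy, a positive result is only ~80% likely to be right.
print(positive_predictive_value(0.04, 0.99, 0.99))  # 0.8048...
```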

This fallacy isn’t new. It’s the same thinking that caused us to intern Japanese-Americans during World War II, stop people in their cars because they’re black, and frisk them at airports because they’re Muslim. It’s the same thinking behind massive NSA surveillance programs like PRISM. It’s one of the things that scares me about police DNA databases.

Many authors have written stories about thoughtcrime. Who has written about genecrime?

BTW, if you want to meet an actual sociopath, I recommend this book (review here) and this blog.

Posted on June 19, 2013 at 11:19 AM

Details of NSA Data Requests from US Corporations

Facebook (here), Apple (here), and Yahoo (here) have all released details of US government requests for data. They each say that they’ve turned over user data for about 10,000 people, although the time frames are different. The exact number isn’t important; what’s important is that it’s much lower than the millions implied by the PRISM document.

Now the big question: do we believe them? If we don’t, what would it take before we did believe them?

Posted on June 18, 2013 at 4:00 PM

More on Feudal Security

Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.

If you’ve started to think of yourself as a hapless peasant in a Game of Thrones power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.

Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.

Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.

Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, Chromebooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.

The new security model is that someone else takes care of it—without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.

There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.

There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized—Amazon, Yahoo, Verizon, and so on—but the model is the same.

To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on—and that’s just not possible with most of these feudal lords.

Feudal security also has its risks. Vendors can, and do, make security mistakes affecting hundreds of thousands of people. Vendors can lock people into relationships, making it hard for them to take their data and leave. Vendors can act arbitrarily, against our interests; Facebook regularly does this when it changes people’s defaults, implements new features, or modifies its privacy policy. Many vendors give our data to the government without notice, consent, or a warrant; almost all sell it for profit. This isn’t surprising, really; companies should be expected to act in their own self-interest and not in their users’ best interest.

The feudal relationship is inherently based on power. In medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.

It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information—whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics—we are providing the raw material for that struggle. In this way we are like serfs, toiling on the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.

So how do we survive? Increasingly, we have little alternative but to trust someone, so we need to decide who we trust—and who we don’t—and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have—as individuals, none; as large corporations, more—to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.

On the policy side, we have an action plan. In the short term, we need to keep circumvention—the ability to modify our hardware, software, and data files—legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government—that’s us—spending resources to enforce one particular business model over another and stifling competition.

In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.

We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.

This essay originally appeared on the Harvard Business Review website. It is an update of this earlier essay on the same topic. “Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.

EDITED TO ADD (6/13): There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.

Posted on June 13, 2013 at 11:34 AM

