Entries Tagged "feudal security"


Amazon's Door Lock Is Amazon's Bid to Control Your Home

Interesting essay about Amazon’s smart lock:

When you add Amazon Key to your door, something more sneaky also happens: Amazon takes over.

You can leave your keys at home and unlock your door with the Amazon Key app—but it’s really built for Amazon deliveries. To share online access with family and friends, I had to give them a special code to SMS (yes, text) to unlock the door. (Amazon offers other smartlocks that have physical keypads).

The Key-compatible locks are made by Yale and Kwikset, yet don’t work with those brands’ own apps. They also can’t connect with a home-security system or smart-home gadgets that work with Apple and Google software.

And, of course, the lock can’t be accessed by businesses other than Amazon. No Walmart, no UPS, no local dog-walking company.

Keeping tight control over Key might help Amazon guarantee security or a better experience. “Our focus with smart home is on making things simpler for customers—things like providing easy control of connected devices with your voice using Alexa, simplifying tasks like reordering household goods and receiving packages,” the Amazon spokeswoman said.

But Amazon is barely hiding its goal: It wants to be the operating system for your home. Amazon says Key will eventually work with dog walkers, maids and other service workers who bill through its marketplace. An Amazon home security service and grocery delivery from Whole Foods can’t be far off.

This is happening all over. Everyone wants to control your life: Google, Apple, Amazon…everyone. It’s what I’ve been calling the feudal Internet. I fear it’s going to get a lot worse.

Posted on December 22, 2017 at 6:25 AM

Corporations Misusing Our Data

In the Internet age, we have no choice but to entrust our data to private companies: e-mail providers, service providers, retailers, and so on.

We realize that this data is at risk from hackers. But there’s another risk as well: the employees of the companies who are holding our data for us.

In the early years of Facebook, employees had a master password that enabled them to view anything they wanted in any account. NSA employees occasionally snoop on their friends and partners. The agency even has a name for it: LOVEINT. And well before the Internet, people with access to police or medical records occasionally used that power to look up either famous people or people they knew.

The latest company accused of allowing this sort of thing is Uber, the Internet car-ride service. The company is under investigation for spying on riders without their permission. Using an internal tool called the “god view,” some Uber employees are able to see who is using the service and where they’re going—and used it at least once in 2011 as a party trick to show off the service. A senior executive also suggested the company should hire people to dig up dirt on its critics, making its database of people’s rides even more “useful.”

None of us wants to be stalked—whether it’s from looking at our location data, our medical data, our emails and texts, or anything else—by friends or strangers who have access due to their jobs. Unfortunately, there are few rules protecting us.

Government employees are prohibited from looking at our data, although none of the NSA LOVEINT creeps were ever prosecuted. The HIPAA law protects the privacy of our medical records, but we have nothing to protect most of our other information.

Your Facebook and Uber data are only protected by company culture. Nothing in the license agreements that you clicked “agree” to (but didn’t read) prevents those companies from violating your privacy.

This needs to change. Corporate databases containing our data should be secured from everyone who doesn’t need access for their work. Voyeurs who peek at our data without a legitimate reason should be punished.

There are audit technologies that can detect this sort of thing, and they should be required. As long as we have to give our data to companies and government agencies, we need assurances that our privacy will be protected.

This essay previously appeared on CNN.com.

Posted on December 5, 2014 at 6:45 AM

The Battle for Power on the Internet

We’re in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side. It gave them a place to coordinate and communicate efficiently, and made them seem unbeatable. But now, the more traditional institutional powers are winning, and winning big. How these two sides fare in the long term, and the fate of the rest of us who don’t fall into either group, is an open question—and one vitally important to the future of the Internet.

In the Internet’s early days, there was a lot of talk about its “natural laws”—how it would upend traditional power blocs, empower the masses, and spread freedom throughout the world. The international nature of the Internet circumvented national laws. Anonymity was easy. Censorship was impossible. Police were clueless about cybercrime. And bigger changes seemed inevitable. Digital cash would undermine national sovereignty. Citizen journalism would topple traditional media, corporate PR, and political parties. Easy digital copying would destroy the traditional movie and music industries. Web marketing would allow even the smallest companies to compete against corporate giants. It really would be a new world order.

This was a utopian vision, but some of it did come to pass. Internet marketing has transformed commerce. The entertainment industries have been transformed by things like MySpace and YouTube, and are now more open to outsiders. Mass media has changed dramatically, and some of the most influential people in the media have come from the blogging world. There are new ways to organize politically and run elections. Crowdfunding has made it possible to finance tens of thousands of projects, and crowdsourcing has made new types of projects possible. Facebook and Twitter really did help topple governments.

But that is just one side of the Internet’s disruptive character. The Internet has emboldened traditional power as well.

On the corporate side, power is consolidating, a result of two current trends in computing. First, the rise of cloud computing means that we no longer have control of our data. Our e-mail, photos, calendars, address books, messages, and documents are on servers belonging to Google, Apple, Microsoft, Facebook, and so on. And second, we are increasingly accessing our data using devices that we have much less control over: iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike traditional operating systems, those devices are controlled much more tightly by the vendors, who limit what software can run, what they can do, how they’re updated, and so on. Even Windows 8 and Apple’s Mountain Lion operating system are heading in the direction of more vendor control.

I have previously characterized this model of computing as “feudal.” Users pledge their allegiance to more powerful companies who, in turn, promise to protect them from both sysadmin duties and security threats. It’s a metaphor that’s rich in history and in fiction, and a model that’s increasingly permeating computing today.

Medieval feudalism was a hierarchical political system, with obligations in both directions. Lords offered protection, and vassals offered service. The lord-peasant relationship was similar, with a much greater power differential. It was a response to a dangerous world.

Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes. They’re deliberately—and incidentally—changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we’re seeing the same thing on the Internet.

It’s not all bad, of course. We, especially those of us who are not technical, like the convenience, redundancy, portability, automation, and shareability of vendor-managed devices. We like cloud backup. We like automatic updates. We like not having to deal with security ourselves. We like that Facebook just works—from any device, anywhere.

Government power is also increasing on the Internet. There is more government surveillance than ever before. There is more government censorship than ever before. There is more government propaganda, and an increasing number of governments are controlling what their users can and cannot do on the Internet. Totalitarian governments are embracing a growing “cyber sovereignty” movement to further consolidate their power. And the cyberwar arms race is on, pumping an enormous amount of money into cyber-weapons and consolidated cyber-defenses, further increasing government power.

In many cases, the interests of corporate and government powers are aligning. Both corporations and governments benefit from ubiquitous surveillance, and the NSA is using Google, Facebook, Verizon, and others to get access to data it couldn’t otherwise. The entertainment industry is looking to governments to enforce its antiquated business models. Commercial security equipment from companies like BlueCoat and Sophos is being used by oppressive governments to surveil and censor their citizens. The same facial recognition technology that Disney uses in its theme parks can also identify protesters in China and Occupy Wall Street activists in New York. Think of it as a public/private surveillance partnership.

What happened? How, in those early Internet years, did we get the future so wrong?

The truth is that technology magnifies power in general, but rates of adoption are different. The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: they can make use of new technologies very quickly. And when those groups discovered the Internet, suddenly they had power. But later, when the already-powerful big institutions finally figured out how to harness the Internet, they had more power to magnify. That’s the difference: the distributed were more nimble and were faster to make use of their new power, while the institutional were slower but were able to use their power more effectively.

So while the Syrian dissidents used Facebook to organize, the Syrian government used Facebook to identify dissidents to arrest.

All isn’t lost for distributed power, though. For institutional power, the Internet is a change in degree, but for distributed power, it’s a qualitative one. The Internet gives decentralized groups—for the first time—the ability to coordinate. This can have incredible ramifications, as we saw in the SOPA/PIPA debate, Gezi, Brazil, and the rising use of crowdfunding. It can invert power dynamics, even in the presence of surveillance, censorship, and use control. But aside from political coordination, the Internet allows for social coordination as well—to unite, for example, ethnic diasporas, gender minorities, sufferers of rare diseases, and people with obscure interests.

This isn’t static: Technological advances continue to provide advantage to the nimble. I discussed this trend in my book Liars and Outliers. If you think of security as an arms race between attackers and defenders, any technological advance gives one side or the other a temporary advantage. But most of the time, a new technology benefits the nimble first. They are not hindered by bureaucracy—and sometimes not by laws or ethics, either. They can evolve faster.

We saw it with the Internet. As soon as the Internet started being used for commerce, a new breed of cybercriminal emerged, immediately able to take advantage of the new technology. It took police a decade to catch up. And we saw it on social media, as political dissidents made use of its organizational powers before totalitarian regimes did.

This delay is what I call a “security gap.” It’s greater when there’s more technology, and in times of rapid technological change. Basically, if there are more innovations to exploit, there will be more damage resulting from society’s inability to keep up with exploiters of all of them. And since our world is one in which there’s more technology than ever before, and a faster rate of technological change than ever before, we should expect to see a greater security gap than ever before. In other words, there will be an increasing time period during which nimble distributed powers can make use of new technologies before slow institutional powers can make better use of those technologies.

This is the battle: quick vs. strong. To return to medieval metaphors, you can think of a nimble distributed power—whether marginal, dissident, or criminal—as Robin Hood; and ponderous institutional powers—both government and corporate—as the feudal lords.

So who wins? Which type of power dominates in the coming decades?

Right now, it looks like traditional power. Ubiquitous surveillance means that it’s easier for the government to identify dissidents than it is for the dissidents to remain anonymous. Data monitoring means it’s easier for the Great Firewall of China to block data than it is for people to circumvent it. The way we all use the Internet makes it much easier for the NSA to spy on everyone than it is for anyone to maintain privacy. And even though it is easy to circumvent digital copy protection, most users still can’t do it.

The problem is that leveraging Internet power requires technical expertise. Those with sufficient ability will be able to stay ahead of institutional powers. Whether it’s setting up your own e-mail server, effectively using encryption and anonymity tools, or breaking copy protection, there will always be technologies that can evade institutional powers. This is why cybercrime is still pervasive, even as police savvy increases; why technically capable whistleblowers can do so much damage; and why organizations like Anonymous are still a viable social and political force. Assuming technology continues to advance—and there’s no reason to believe it won’t—there will always be a security gap in which technically advanced Robin Hoods can operate.

Most people, though, are stuck in the middle. These are people who don’t have the technical ability to evade large governments and corporations, avoid the criminal and hacker groups who prey on us, or join any resistance or dissident movements. These are the people who accept default configuration options, arbitrary terms of service, NSA-installed back doors, and the occasional complete loss of their data. These are the people who get increasingly isolated as government and corporate power align. In the feudal world, these are the hapless peasants. And it’s even worse when the feudal lords—or any powers—fight each other. As anyone watching Game of Thrones knows, peasants get trampled when powers fight: when Facebook, Google, Apple, and Amazon fight it out in the market; when the US, EU, China, and Russia fight it out in geopolitics; or when it’s the US vs. “the terrorists” or China vs. its dissidents.

The abuse will only get worse as technology continues to advance. In the battle between institutional power and distributed power, more technology means more damage. We’ve already seen this: Cybercriminals can rob more people more quickly than criminals who have to physically visit everyone they rob. Digital pirates can make more copies of more things much more quickly than their analog forebears. And we’ll see it in the future: 3D printers mean that the computer restriction debate will soon involve guns, not movies. Big data will mean that more companies will be able to identify and track you more easily. It’s the same problem as the “weapons of mass destruction” fear: terrorists with nuclear or biological weapons can do a lot more damage than terrorists with conventional explosives. And by the same token, terrorists with large-scale cyberweapons can potentially do more damage than terrorists with those same bombs.

It’s a numbers game. Very broadly, because of the way humans behave as a species and as a society, every society is going to have a certain amount of crime. And there’s a particular crime rate society is willing to tolerate. With historically inefficient criminals, we were willing to live with some percentage of crime in our society. As technology makes each individual criminal more powerful, the percentage we can tolerate decreases. Again, remember the “weapons of mass destruction” debate: As the amount of damage each individual terrorist can do increases, we need to do increasingly more to prevent even a single terrorist from succeeding.

The more destabilizing the technologies, the greater the rhetoric of fear, and the stronger institutional powers will get. This means increasingly repressive security measures, even if the security gap means that such measures become increasingly ineffective. And it will squeeze the peasants in the middle even more.

Without the protection of his own feudal lord, the peasant was subject to abuse by both criminals and other feudal lords. But both corporations and the government—and often the two in cahoots—are using their power to their own advantage, trampling on our rights in the process. And without the technical savvy to become Robin Hoods ourselves, we have no recourse but to submit to whatever the ruling institutional power wants.

So what happens as technology increases? Is a police state the only effective way to control distributed power and keep our society safe? Or do the fringe elements inevitably destroy society as technology increases their power? Probably neither doomsday scenario will come to pass, but figuring out a stable middle ground is hard. These questions are complicated, and dependent on future technological advances that we cannot predict. But they are primarily political questions, and any solutions will be political.

In the short term, we need more transparency and oversight. The more we know of what institutional powers are doing, the more we can trust that they are not abusing their authority. We have long known this to be true in government, but we have increasingly ignored it in our fear of terrorism and other modern threats. This is also true for corporate power. Unfortunately, market dynamics will not necessarily force corporations to be transparent; we need laws to do that. The same is true for decentralized power; transparency is how we’ll differentiate political dissidents from criminal organizations.

Oversight is also critically important, and is another long-understood mechanism for checking power. This can be a combination of things: courts that act as third-party advocates for the rule of law rather than rubber-stamp organizations, legislatures that understand the technologies and how they affect power balances, and vibrant public-sector press and watchdog groups that analyze and debate the actions of those wielding power.

Transparency and oversight give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish. For if we’re going to entrust our security to institutional powers, we need to know they will act in our interests and not abuse that power. Otherwise, democracy fails.

In the longer term, we need to work to reduce power differences. The key to all of this is access to data. On the Internet, data is power. To the extent the powerless have access to it, they gain in power. To the extent that the already powerful have access to it, they further consolidate their power. As we look to reducing power imbalances, we have to look at data: data privacy for individuals, mandatory disclosure laws for corporations, and open government laws.

Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. Those in power have a lot of rights, but increasingly few responsibilities or limits. We need to rebalance this relationship. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people. In addition to reining in government power, we need similar restrictions on corporate power: a new Magna Carta focused on the institutions that abuse power in the 21st century.

Today’s Internet is a fortuitous accident: a combination of an initial lack of commercial interests, government benign neglect, military requirements for survivability and resilience, and computer engineers building open systems that worked simply and easily.

We’re at the beginning of some critical debates about the future of the Internet: the proper role of law enforcement, the character of ubiquitous surveillance, the collection and retention of our entire life’s history, how automatic algorithms should judge us, government control over the Internet, cyberwar rules of engagement, national sovereignty on the Internet, limitations on the power of corporations over our data, the ramifications of information consumerism, and so on.

Data is the pollution problem of the information age. All computer processes produce it. It stays around. How we deal with it—how we reuse and recycle it, who has access to it, how we dispose of it, and what laws regulate it—is central to how the information age functions. And I believe that just as we look back at the early decades of the industrial age and wonder how society could ignore pollution in its rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we dealt with the rebalancing of power resulting from all this new data.

This won’t be an easy period for us as we try to work these issues out. Historically, no shift in power has ever been easy. Corporations have turned our personal data into an enormous revenue generator, and they’re not going to back down. Neither will governments, who have harnessed that same data for their own purposes. But we have a duty to tackle this problem.

I can’t tell you what the result will be. These are all complicated issues, and require meaningful debate, international cooperation, and innovative solutions. We need to decide on the proper balance between institutional and decentralized power, and how to build tools that amplify what is good in each while suppressing the bad.

This essay previously appeared in the Atlantic.

EDITED TO ADD (11/5): This essay has been translated into Danish.

Posted on October 30, 2013 at 6:50 AM

Trading Privacy for Convenience

Ray Wang makes an important point about trust and our data:

This is the paradox. The companies contending to win our trust to manage our digital identities all seem to have complementary (or competing) business models that breach that trust by selling our data.

…and by turning it over to the government.

The current surveillance state is a result of a government/corporate partnership, and our willingness to give up privacy for convenience.

If the government demanded that we all carry tracking devices 24/7, we would rebel. Yet we all carry cell phones. If the government demanded that we deposit copies of all of our messages to each other with the police, we’d declare their actions unconstitutional. Yet we all use Gmail and Facebook messaging and SMS. If the government demanded that we give them access to all the photographs we take, and that we identify all of the people in them and tag them with locations, we’d refuse. Yet we do exactly that on Flickr and other sites.

Ray Ozzie was right when he said that we got what we asked for when we told the government we were scared and that it should do whatever it wanted to make us feel safer. But we also got what we asked for when we traded our privacy for convenience, trusting these corporations to look out for our best interests.

We’re living in a world of feudal security. And if you watch Game of Thrones, you know that feudalism benefits the powerful—at the expense of the peasants.

Last night, I was on All In with Chris Hayes (parts one and two). One of the things we talked about after the show was over is how technological solutions only work around the margins. That’s not a cause for despair. Think about technological solutions to murder. Yes, they exist—wearing a bullet-proof vest, for example—but they’re not really viable. The way we protect ourselves from murder is through laws. This is how we’re also going to protect our privacy.

EDITED TO ADD (6/18): The Onion nailed it back in 2011.

Posted on June 13, 2013 at 4:06 PM

More on Feudal Security

Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.

If you’ve started to think of yourself as a hapless peasant in a Game of Thrones power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.

Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.

Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.

Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, ChromeBooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.

The new security model is that someone else takes care of it—without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.

There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.

There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized—Amazon, Yahoo, Verizon, and so on—but the model is the same.

To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on—and that’s just not possible with most of these feudal lords.

Feudal security also has its risks. Vendors can, and do, make security mistakes affecting hundreds of thousands of people. Vendors can lock people into relationships, making it hard for them to take their data and leave. Vendors can act arbitrarily, against our interests; Facebook regularly does this when it changes people’s defaults, implements new features, or modifies its privacy policy. Many vendors give our data to the government without notice, consent, or a warrant; almost all sell it for profit. This isn’t surprising, really; companies should be expected to act in their own self-interest and not in their users’ best interest.

The feudal relationship is inherently based on power. In medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.

It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information—whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics—we are providing the raw material for that struggle. In this way we are like serfs, tilling the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.

So how do we survive? Increasingly, we have little alternative but to trust someone, so we need to decide who we trust—and who we don’t—and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have—as individuals, none; as large corporations, more—to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.

On the policy side, we have an action plan. In the short term, we need to keep circumvention—the ability to modify our hardware, software, and data files—legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government—that’s us—spending resources to enforce one particular business model over another and stifling competition.

In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.

We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.

This essay originally appeared on the Harvard Business Review website. It is an update of this earlier essay on the same topic. “Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.

EDITED TO ADD (6/13): There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.

Posted on June 13, 2013 at 11:34 AM

What I've Been Thinking About

I’m starting to think about my next book, which will be about power and the Internet—from the perspective of security. My objective will be to describe current trends, explain where those trends are leading us, and discuss alternatives for avoiding that outcome. Many of my recent essays have touched on various facets of this, although I’m still looking for synthesis. These facets include:

  1. The relationship between the Internet and power: how the Internet affects power, and how power affects the Internet. Increasingly, those in power are using information technology to increase their power.
  2. A feudal model of security that leaves users with little control over their data or computing platforms, forcing them to trust the companies that sell the hardware, software, and systems—and allowing those companies to abuse that trust.
  3. The rise of nationalism on the Internet and a cyberwar arms race, both of which play on our fears and which are resulting in increased military involvement in our information infrastructure.
  4. Ubiquitous surveillance for both government and corporate purposes—aided by cloud computing, social networking, and Internet-enabled everything—resulting in a world without any real privacy.
  5. The four tools of Internet oppression—surveillance, censorship, propaganda, and use control—have both government and corporate uses. And they are interrelated: building tools to fight one often has the side effect of facilitating another.
  6. Ill-conceived laws and regulations on behalf of either government or corporate power, either to prop up their business models (copyright protections), fight crime (increased police access to data), or control our actions in cyberspace.
  7. The need for leaks: both whistleblowers and FOIA suits. So much of what the government does to us is shrouded in secrecy, and leaks are the only way we know what’s going on. The same applies to the corporate algorithms and systems that control much of our lives.

On the one hand, we need new regimes of trust in the information age. (I wrote about this extensively in my most recent book, Liars and Outliers.) On the other hand, the risks associated with increasingly powerful technology might mean that the fear of catastrophic attack will make us unable to create those new regimes.

I believe society is headed down a dangerous path, and that we—as members of society—need to make some hard choices about what sort of world we want to live in. If we maintain our current trajectory, the future does not look good. It’s not clear if we have the social or political will to address the intertwined issues of power, security, and technology, or even have the conversations necessary to understand the decisions we need to make. Writing about topics like this is what I do best, and I hope that a book on this topic will have a positive effect on the discourse.

The working title of the book is Power.com—although that might be too similar to the book Power, Inc. for the final title.

These thoughts are still in draft, and not yet part of a coherent whole. For me, the writing process is how I understand a topic, and the shape of this book will almost certainly change substantially as I write. I’m very interested in what people think about this, especially in terms of solutions. Please pass this around to interested people, and leave comments to this blog post.

Posted on April 1, 2013 at 6:07 AM

Who Does Skype Let Spy?

Lately I’ve been thinking a lot about power and the Internet, and what I call the feudal model of IT security that is becoming more and more pervasive. Basically, between cloud services and locked-down end-user devices, we have less control over and visibility into our security—and no choice but to trust those in power to keep us safe.

The effects of this model were in the news last week, when privacy activists pleaded with Skype to tell them who is spying on Skype calls.

“Many of its users rely on Skype for secure communications—whether they are activists operating in countries governed by authoritarian regimes, journalists communicating with sensitive sources, or users who wish to talk privately in confidence with business associates, family, or friends,” the letter explains.

Among the group’s concerns is that although Skype was founded in Europe, its acquisition by a US-based company—Microsoft—may mean it is now subject to different eavesdropping and data-disclosure requirements than it was before.

The group claims that both Microsoft and Skype have refused to answer questions about what kinds of user data the service retains, whether it discloses such data to governments, and whether Skype conversations can be intercepted.

The letter calls upon Microsoft to publish a regular Transparency Report outlining what kind of data Skype collects, what third parties might be able to intercept or retain, and how Skype interprets its responsibilities under the laws that pertain to it. In addition it asks for quantitative data about when, why, and how Skype shares data with third parties, including governments.

That’s security in today’s world. We have no choice but to trust Microsoft. Microsoft has reasons to be trustworthy, but they also have reasons to betray our trust in favor of other interests. And all we can do is ask them nicely to tell us first.

Posted on January 30, 2013 at 6:51 AM

Terms of Service as a Security Threat

After the Instagram debacle, in which Instagram changed its terms of service to give itself greater rights over user photos and then reversed itself after a user backlash, it’s worth thinking about the security threat stemming from terms of service in general.

As cloud computing becomes the norm, as Internet security becomes more feudal, these terms of service agreements define what our service providers can do, both with the data we post and with the information they gather about how we use their service. The agreements are very one-sided—most of the time, we’re not even paying customers of these providers—and can change without warning. And, of course, none of us ever read them.

Here’s one example. Prezi is a really cool presentation system. While you can run presentations locally, it’s basically cloud-based. Earlier this year, I was at a CISO Summit in Prague, and one of the roundtable discussions centered around services like Prezi. CISOs were worried that sensitive company information was leaking out of the company and being stored insecurely in the cloud. My guess is that they would have been much more worried if they read Prezi’s terms of use:

With respect to Public User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content on and in connection with the manufacture, sale, promotion, marketing and distribution of products sold on, or in association with, the Service, or for purposes of providing you with the Service and promoting the same, in any medium and by any means currently existing or yet to be devised.

With respect to Private User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content solely for purposes of providing you with the Service.

Those paragraphs sure sound like Prezi can do anything it wants, including start a competing business, with any presentation I post to its site. (Note that Prezi’s human-readable—but not legally binding—terms of use document makes no mention of this.) Yes, I know Prezi doesn’t currently intend to do that, but things change, companies fail, assets get bought, and what matters in the end is what the agreement says.

I don’t mean to pick on Prezi; it’s just an example. How many other of these Trojan horses are hiding in commonly used cloud provider agreements: both from providers that companies decide to use as a matter of policy, and providers that company employees use in violation of policy, for reasons of convenience?

Posted on December 31, 2012 at 6:44 AM

Feudal Security

It’s a feudal world out there.

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them—or to a particular one we don’t like. Or we can spread our allegiance around. But either way, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.

Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.

Of course, I’m romanticizing here; European history was never this simple, and the description is based on stories of that time, but that’s the general model.

And it’s this model that’s starting to permeate computer security today.

I Pledge Allegiance to the United States of Convenience

Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.

This model is breaking, largely due to two developments:

  1. New Internet-enabled devices where the vendor maintains more control over the hardware and software than we do—like the iPhone and Kindle; and
  2. Services where the host maintains our data for us—like Flickr and Hotmail.

Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.

We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we’ve lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we could manage by ourselves; Apple does a great job keeping malware out of its iPhone app store.

In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won’t be exposed to hackers, criminals, and malware. We trust that governments won’t be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don’t know what sort of security methods they’re using, or how they’re configured. We mostly can’t install our own security products on iPhones or Android phones; we certainly can’t install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates—iPhone, for example—but we rarely know what they’re about or whether they’ll break anything else. (On the Kindle, we don’t even have that freedom.)

The Good, the Bad, and the Ugly

I’m not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.

Feudalism is good for the individual, for small startups, and for medium-sized businesses that can’t afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it’s more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They’ve been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don’t allow vassals to audit them, even if those vassals are themselves large and powerful.

Yet feudal security isn’t without its risks.

Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.

Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off—again, like serfs—to rival lords…or turn us in to the authorities.

Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn’t do to each other).

Today’s internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.

This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.

Like everything else in security, it’s a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.

Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent—or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems—it’s time we step in, in our role as governments (both national and international), to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.

A version of this essay was originally published on Wired.com.

Posted on December 3, 2012 at 7:24 AM
