Entries Tagged "control"


More on Feudal Security

Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.

If you’ve started to think of yourself as a hapless peasant in a Game of Thrones power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.

Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.

Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.

Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, ChromeBooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.

The new security model is that someone else takes care of it — without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.

There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them — if we let them control our email and calendar and address book and photos and everything — we get even more benefits. We become their vassals; or, on a bad day, their serfs.

There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized — Amazon, Yahoo, Verizon, and so on — but the model is the same.

To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on — and that’s just not possible with most of these feudal lords.

Feudal security also has its risks. Vendors can, and do, make security mistakes affecting hundreds of thousands of people. Vendors can lock people into relationships, making it hard for them to take their data and leave. Vendors can act arbitrarily, against our interests; Facebook regularly does this when it changes people’s defaults, implements new features, or modifies its privacy policy. Many vendors give our data to the government without notice, consent, or a warrant; almost all sell it for profit. This isn’t surprising, really; companies should be expected to act in their own self-interest and not in their users’ best interest.

The feudal relationship is inherently based on power. In Medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.

It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information — whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics — we are providing the raw material for that struggle. In this way we are like serfs, tilling the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.

So how do we survive? Increasingly, we have little alternative but to trust someone, so we need to decide who we trust — and who we don’t — and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have — as individuals, not much; as large corporations, more — to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.

On the policy side, we have an action plan. In the short term, we need to keep circumvention — the ability to modify our hardware, software, and data files — legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government — that’s us — spending resources to enforce one particular business model over another and stifling competition.

In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.

We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.

This essay originally appeared on the Harvard Business Review website. It is an update of this earlier essay on the same topic. “Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.

EDITED TO ADD (6/13): There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.

Posted on June 13, 2013 at 11:34 AM

"The Global Cyber Game"

This 127-page report was just published by the UK Defence Academy. I have not read it yet, but it looks really interesting.

Executive Summary: This report presents a systematic way of thinking about cyberpower and its use by a variety of global players. The urgency of addressing cyberpower in this way is a consequence of the very high value of the Internet and the hazards of its current militarization.

Cyberpower and cyber security are conceptualized as a ‘Global Game’ with a novel ‘Cyber Gameboard’ consisting of a nine-cell grid. The horizontal direction on the grid is divided into three columns representing aspects of information (i.e. cyber): connection, computation and cognition. The vertical direction on the grid is divided into three rows representing types of power: coercion, co-option, and cooperation. The nine cells of the grid represent all the possible combinations of power and information, that is, forms of cyberpower.

The Cyber Gameboard itself is also an abstract representation of the surface of cyberspace, or C-space as defined in this report. C-space is understood as a networked medium capable of conveying various combinations of power and information to produce effects in physical or ‘flow space,’ referred to as F-space in this report. Game play is understood as the projection via C-space of a cyberpower capability existing in any one cell of the gameboard to produce an effect in F-space vis-a-vis another player in any other cell of the gameboard. By default, the Cyber Game is played either actively or passively by all those using network connected computers. The players include states, businesses, NGOs, individuals, non-state political groups, and organized crime, among others. Each player is seen as having a certain level of cyberpower when its capability in each cell is summed across the whole board. In general states have the most cyberpower.
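The report’s scoring idea — a player’s cyberpower is its capability summed across the nine cells of the gameboard — can be sketched in a few lines. This is purely illustrative: the cell names come from the executive summary above, but the scores and the "state actor" example are invented.

```python
from itertools import product

# The nine-cell Cyber Gameboard: three aspects of information
# crossed with three types of power, as described in the report.
INFO_ASPECTS = ["connection", "computation", "cognition"]
POWER_TYPES = ["coercion", "co-option", "cooperation"]

def total_cyberpower(capabilities):
    """Sum a player's capability score across all nine cells.

    `capabilities` maps (power_type, info_aspect) pairs to scores;
    cells where the player has no capability count as zero.
    """
    return sum(capabilities.get(cell, 0)
               for cell in product(POWER_TYPES, INFO_ASPECTS))

# Hypothetical player: a state strong in coercive cyberpower.
state = {
    ("coercion", "connection"): 9,
    ("coercion", "computation"): 8,
    ("cooperation", "cognition"): 3,
}
print(total_cyberpower(state))  # 20
```

Comparing these totals across players is what lets the report say that, in general, states have the most cyberpower.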

The possible future path of the game is depicted by two scenarios, N-topia and N-crash. These are the stakes for which the Cyber Game is played. N-topia represents the upside potential of the game, in which the full value of a globally connected knowledge society is realized. N-crash represents the downside potential, in which militarization and fragmentation of the Internet cause its value to be substantially destroyed. Which scenario eventuates will be determined largely by the overall pattern of play of the Cyber Game.

States have a high level of responsibility for determining the outcome. The current pattern of play is beginning to resemble traditional state-on-state geopolitical conflict. This puts the civil Internet at risk, and civilian cyber players are already getting caught in the crossfire. As long as the civil Internet remains undefended and easily permeable to cyber attack it will be hard to achieve the N-topia scenario.

Defending the civil Internet in depth, and hardening it by re-architecting will allow its full social and economic value to be realized but will restrict the potential for espionage and surveillance by states. This trade-off is net positive and in accordance with the espoused values of Western-style democracies. It does however call for leadership based on enlightened self-interest by state players.

Posted on May 22, 2013 at 12:05 PM

Surveillance and the Internet of Things

The Internet has turned into a massive surveillance tool. We’re constantly monitored on the Internet by hundreds of companies — both familiar and unfamiliar. Everything we do there is recorded, collected, and collated — sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behavior, it’s still only behavior that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is Internet-enabled. Soon there will be Internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be Internet-connected tags on our clothing. In its extreme, everything can be connected to the Internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be Internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the Internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies — and anyone they decide to sell the data to — to monitor how people move about their house and how they spend their time.

Drones are another “thing” moving onto the Internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the Internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s Internet-enabled glasses — Google Glass — are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behavior and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals — not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyze this massive data stream will improve.
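The distinction between pre-programmed queries and serendipitous observation can be made concrete. In the sketch below (the records and column names are invented for illustration), a fixed query over a purchase log answers the "what and where" questions cheaply; the "why" never appears as a column at all.

```python
# Illustrative surveillance database: a pre-programmed query can
# answer "who bought what, and where" but has no column for "why".
purchases = [
    {"who": "alice", "item": "ladder", "city": "Boston"},
    {"who": "alice", "item": "rope",   "city": "Boston"},
    {"who": "bob",   "item": "rope",   "city": "Denver"},
]

def bought(item, db):
    """Pre-programmed query: everyone who bought a given item."""
    return [(r["who"], r["city"]) for r in db if r["item"] == item]

print(bought("rope", purchases))
# [('alice', 'Boston'), ('bob', 'Denver')]
```

No filter over these columns can tell you what either buyer did with the rope — which is exactly the analytical limitation the paragraph describes, and why better query technology narrows that gap over time.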

In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days — and nights — with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Will you know any of this? Will your friends? It depends. Lots of these devices have, and will have, privacy settings. But these settings are remarkable not in how much privacy they afford, but in how much they deny. Access will likely work much as it does today for your browsing habits, your files stored on Dropbox, your searches on Google, and your text messages from your phone. All of your data is saved by those companies — and many others — correlated, and then bought and sold without your knowledge or consent. You’d think that your privacy settings would keep random strangers from learning everything about you, but they only keep out the random strangers who don’t pay for the privilege — or who don’t work for the government and have the ability to demand the data. Power is what matters here: you’ll be able to keep the powerless from invading your privacy, but you’ll have no ability to prevent the powerful from doing it again and again.

This essay originally appeared on the Guardian.

EDITED TO ADD (6/14): Another article on the subject.

Posted on May 21, 2013 at 6:15 AM

Government Use of Hackers as an Object of Fear

Interesting article about the perception of hackers in popular culture, and how the government uses the general fear of them to push for more power:

But these more serious threats don’t seem to loom as large as hackers in the minds of those who make the laws and regulations that shape the Internet. It is the hacker — a sort of modern folk devil who personifies our anxieties about technology — who gets all the attention. The result is a set of increasingly paranoid and restrictive laws and regulations affecting our abilities to communicate freely and privately online, to use and control our own technology, and which puts users at risk for overzealous prosecutions and invasive electronic search and seizure practices. The Computer Fraud and Abuse Act, the cornerstone of domestic computer-crime legislation, is overly broad and poorly defined. Since its passage in 1986, it has created a pile of confused caselaw and overzealous prosecutions. The Departments of Defense and Homeland Security manipulate fears of techno-disasters to garner funding and support for laws and initiatives, such as the recently proposed Cyber Intelligence Sharing and Protection Act, that could have horrific implications for user rights. In order to protect our rights to free speech and privacy on the internet, we need to seriously reconsider those laws and the shadowy figure used to rationalize them.

[…]

In the effort to protect society and the state from the ravages of this imagined hacker, the US government has adopted overbroad, vaguely worded laws and regulations which severely undermine internet freedom and threaten the Internet’s role as a place of political and creative expression. In an effort to stay ahead of the wily hacker, laws like the Computer Fraud and Abuse Act (CFAA) focus on electronic conduct or actions, rather than the intent of or actual harm caused by those actions. This leads to a wide range of seemingly innocuous digital activities potentially being treated as criminal acts. Distrust for the hacker politics of Internet freedom, privacy, and access abets the development of ever-stricter copyright regimes, or laws like the proposed Cyber Intelligence Sharing and Protection Act, which if passed would have disastrous implications for personal privacy online.

Note that this was written last year, before any of the recent overzealous prosecutions.

Posted on April 8, 2013 at 6:34 AM

Dictators Shutting Down the Internet

Excellent article: “How to Shut Down Internets.”

First, he describes what just happened in Syria. Then:

Egypt turned off the internet by using the Border Gateway Protocol trick, and also by switching off DNS. This has a similar effect to throwing bleach over a map. The location of every street and house in the country is blotted out. All the Egyptian ISPs were, and probably still are, government licensees. It took nothing but a short series of phone calls to effect the shutdown.

There are two reasons why these shutdowns happen in this manner. The first is that these governments wish to black out activities like, say, indiscriminate slaughter. That much is obvious. The second is sometimes not so obvious. These governments intend to turn the internet back on. Deep down, they believe they will be in their seats the next month and have the power to turn it back on. They believe they will win. It is the arrogance of power: they take their future for granted, and need only hide from the world the corpses it will be built on.
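The Border Gateway Protocol trick described above is conceptually simple: the country's licensed ISPs withdraw their route announcements, so the rest of the world no longer knows how to reach their address blocks. Here is a toy model of that mechanism, with invented ISP names and documentation-range prefixes; real BGP involves AS paths and per-peer sessions, none of which is modeled here.

```python
# Toy model of a national shutdown via BGP route withdrawal.
# ISP names and prefixes below are invented for illustration.

def reachable_prefixes(announcements):
    """The address blocks the outside world can still route to."""
    return {p for routes in announcements.values() for p in routes}

announcements = {
    "ISP-A": {"203.0.113.0/24", "198.51.100.0/24"},
    "ISP-B": {"192.0.2.0/24"},
}
print(len(reachable_prefixes(announcements)))  # 3

# The government phones its licensees; each withdraws its routes.
for isp in ("ISP-A", "ISP-B"):
    announcements[isp] = set()

print(len(reachable_prefixes(announcements)))  # 0
```

Because every ISP is a government licensee, the "attack" requires no technical sophistication at all — just a short series of phone calls, as the article says.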

Cory Doctorow asks: “Why would a basket-case dictator even allow his citizenry to access the Internet in the first place?” and “Why not shut down the Internet the instant trouble breaks out?” The reason is that the Internet is a valuable tool for social control. Dictators can use the Internet for surveillance and propaganda as well as censorship, and they only resort to extreme censorship when the value of that outweighs the value of doing all three in some sort of totalitarian balance.

Related: Two articles on the countries most vulnerable to an Internet shutdown, based on their connectivity architecture.

Posted on December 11, 2012 at 6:08 AM

IT for Oppression

I’ve been thinking a lot about how information technology, and the Internet in particular, is becoming a tool for oppressive governments. As Evgeny Morozov describes in his great book The Net Delusion: The Dark Side of Internet Freedom, repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, and propaganda. And they’re getting really good at it.

For a lot of us who imagined that the Internet would spark an inevitable wave of Internet freedom, this has come as a bit of a surprise. But it turns out that information technology is not just a tool for freedom-fighting rebels under oppressive governments, it’s also a tool for those oppressive governments. Basically, IT magnifies power; the more power you have, the more it can be magnified in IT.

I think we got this wrong — anyone remember John Perry Barlow’s 1996 manifesto? — because, like most technologies, IT is first used by the more agile individuals and groups outside the formal power structures. In the same way criminals can make use of a technological innovation faster than the police can, dissidents in countries all over the world were able to make use of Internet technologies faster than governments could. Unfortunately, and inevitably, governments have caught up.

This is the “security gap” I talk about in the closing chapters of Liars and Outliers.

I thought about all these things as I read this article on how the Syrian government hacked into the computers of dissidents:

The cyberwar in Syria began with a feint. On Feb. 8, 2011, just as the Arab Spring was reaching a crescendo, the government in Damascus suddenly reversed a long-standing ban on websites such as Facebook, Twitter, YouTube, and the Arabic version of Wikipedia. It was an odd move for a regime known for heavy-handed censorship; before the uprising, police regularly arrested bloggers and raided Internet cafes. And it came at an odd time. Less than a month earlier demonstrators in Tunisia, organizing themselves using social networking services, forced their president to flee the country after 23 years in office. Protesters in Egypt used the same tools to stage protests that ultimately led to the end of Hosni Mubarak’s 30-year rule. The outgoing regimes in both countries deployed riot police and thugs and tried desperately to block the websites and accounts affiliated with the revolutionaries. For a time, Egypt turned off the Internet altogether.

Syria, however, seemed to be taking the opposite tack. Just as protesters were casting about for the means with which to organize and broadcast their messages, the government appeared to be handing them the keys.

[…]

The first documented attack in the Syrian cyberwar took place in early May 2011, some two months after the start of the uprising. It was a clumsy one. Users who tried to access Facebook in Syria were presented with a fake security certificate that triggered a warning on most browsers. People who ignored it and logged in would be giving up their user name and password, and with them, their private messages and contacts.

I dislike this being called a “cyberwar,” but that’s my only complaint with the article.

There are no easy solutions here, especially because technologies that defend against one of those three things — surveillance, censorship, and propaganda — often make one of the others easier. But this is an important problem to solve if we want the Internet to be a vehicle of freedom and not control.

EDITED TO ADD (12/13): This is a good 90-minute talk about how governments have tried to block Tor.

Posted on November 30, 2012 at 5:23 AM

Me at RSA 2012

This is not a video of my talk at the RSA Conference earlier this year. This is a 16-minute version of that talk — TED-like — that the conference filmed the day after for the purpose of putting it on the Internet.

Today’s Internet threats are not technical; they’re social and political. They aren’t criminals, hackers, or terrorists. They’re the government and corporate attempts to mold the Internet into what they want it to be, either to bolster their business models or facilitate social control. Right now, these two goals coincide, making it harder than ever to keep the Internet free and open.

Posted on April 13, 2012 at 2:11 PM

The Battle for Internet Governance

Good article on the current battle for Internet governance:

The War for the Internet was inevitable — a time bomb built into its creation. The war grows out of tensions that came to a head as the Internet grew to serve populations far beyond those for which it was designed. Originally built to supplement the analog interactions among American soldiers and scientists who knew one another off-line, the Internet was established on a bedrock of trust: trust that people were who they said they were, and trust that information would be handled according to existing social and legal norms. That foundation of trust crumbled as the Internet expanded. The system is now approaching a state of crisis on four main fronts.

The first is sovereignty: by definition, a boundary-less system flouts geography and challenges the power of nation-states. The second is piracy and intellectual property: information wants to be free, as the hoary saying goes, but rights-holders want to be paid and protected. The third is privacy: online anonymity allows for creativity and political dissent, but it also gives cover to disruptive and criminal behavior — and much of what Internet users believe they do anonymously online can be tracked and tied to people’s real-world identities. The fourth is security: free access to an open Internet makes users vulnerable to various kinds of hacking, including corporate and government espionage, personal surveillance, the hijacking of Web traffic, and remote manipulation of computer-controlled military and industrial processes.

Posted on April 4, 2012 at 12:34 PM

Tor Arms Race

Iran blocks Tor, and Tor releases a workaround on the same day.

How did the filter work technically? Tor tries to make its traffic look like a web browser talking to an https web server, but if you look carefully enough you can tell some differences. In this case, the characteristic of Tor’s SSL handshake they looked at was the expiry time for our SSL session certificates: we rotate the session certificates every two hours, whereas normal SSL certificates you get from a certificate authority typically last a year or more. The fix was to simply write a larger expiration time on the certificates, so our certs have more plausible expiry times.
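The fingerprint the Tor developers describe reduces to simple arithmetic on a certificate's validity window: a censor's deep-packet-inspection box flags a TLS handshake as suspicious when the certificate's lifetime is far shorter than anything a certificate authority would issue. A minimal sketch of that check, with an illustrative one-day threshold:

```python
from datetime import datetime, timedelta

# Typical lifetime of a CA-issued certificate at the time: a year or more.
CA_TYPICAL_LIFETIME = timedelta(days=365)

def looks_like_old_tor(not_before, not_after,
                       threshold=timedelta(days=1)):
    """Flag a certificate whose validity window is implausibly short.

    Pre-fix Tor rotated its session certificates every two hours,
    which no certificate authority would ever issue.
    """
    return (not_after - not_before) < threshold

issued = datetime(2011, 9, 13, 12, 0)
# Pre-fix Tor: a two-hour session certificate gets flagged.
print(looks_like_old_tor(issued, issued + timedelta(hours=2)))   # True
# Post-fix Tor writes a CA-plausible expiry time and slips through.
print(looks_like_old_tor(issued, issued + CA_TYPICAL_LIFETIME))  # False
```

The workaround is equally simple from the censor's perspective, which is why this is an arms race: each side's fix is a small tweak to the other's heuristic.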

Posted on September 26, 2011 at 6:41 AM
