Entries Tagged "computer security"


More on Feudal Security

Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS feeder. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.

If you’ve started to think of yourself as a hapless peasant in a Game of Thrones power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.

Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.

Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.

Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, ChromeBooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.

The new security model is that someone else takes care of it—without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.

There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.

There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized—Amazon, Yahoo, Verizon, and so on—but the model is the same.

To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on—and that’s just not possible with most of these feudal lords.

Feudal security also has its risks. Vendors can, and do, make security mistakes affecting hundreds of thousands of people. Vendors can lock people into relationships, making it hard for them to take their data and leave. Vendors can act arbitrarily, against our interests; Facebook regularly does this when it changes people’s defaults, implements new features, or modifies its privacy policy. Many vendors give our data to the government without notice, consent, or a warrant; almost all sell it for profit. This isn’t surprising, really; companies should be expected to act in their own self-interest and not in their users’ best interest.

The feudal relationship is inherently based on power. In Medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.

It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information—whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics—we are providing the raw material for that struggle. In this way we are like serfs, toiling the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.

So how do we survive? Increasingly, we have little alternative but to trust someone, so we need to decide who we trust—and who we don’t—and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have—as individuals, very little; as large corporations, more—to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.

On the policy side, we have an action plan. In the short term, we need to keep circumvention—the ability to modify our hardware, software, and data files—legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government—that’s us—spending resources to enforce one particular business model over another and stifling competition.

In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.

We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.

This essay originally appeared on the Harvard Business Review website. It is an update of this earlier essay on the same topic. “Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.

EDITED TO ADD (6/13): There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.

Posted on June 13, 2013 at 11:34 AM

Trust in IT

Ignore the sensationalist headline. This article is a good summary of the need for trust in IT, and provides some ideas for how to enable more of it.

Virtually everything we work with on a day-to-day basis is built by someone else. Avoiding insanity requires trusting those who designed, developed and manufactured the instruments of our daily existence.

All these other industries we rely on have evolved codes of conduct, regulations, and ultimately laws to ensure minimum quality, reliability and trust. In this light, I find the modern technosphere’s complete disdain for obtaining and retaining trust baffling, arrogant and at times enraging.

Posted on June 11, 2013 at 6:21 AM

DDOS as Civil Disobedience

For a while now, I have been thinking about what civil disobedience looks like in the Internet Age. Certainly DDOS attacks, and politically motivated hacking in general, are a part of that. This is one of the reasons I found Molly Sauter’s recent thesis, “Distributed Denial of Service Actions and the Challenge of Civil Disobedience on the Internet,” so interesting:

Abstract: This thesis examines the history, development, theory, and practice of distributed denial of service actions as a tactic of political activism. DDOS actions have been used in online political activism since the early 1990s, though the tactic has recently attracted significant public attention with the actions of Anonymous and Operation Payback in December 2010. Guiding this work is the overarching question of how civil disobedience and disruptive activism can be practiced in the current online space. The internet acts as a vital arena of communication, self expression, and interpersonal organizing. When there is a message to convey, words to get out, people to organize, many will turn to the internet as the zone of that activity. Online, people sign petitions, investigate stories and rumors, amplify links and videos, donate money, and show their support for causes in a variety of ways. But as familiar and widely accepted activist tools—petitions, fundraisers, mass letter-writing, call-in campaigns and others—find equivalent practices in the online space, is there also room for the tactics of disruption and civil disobedience that are equally familiar from the realm of street marches, occupations, and sit-ins? This thesis grounds activist DDOS historically, focusing on early deployments of the tactic as well as modern instances to trace its development over time, both in theory and in practice. Through that examination, as well as tool design and development, participant identity, and state and corporate responses, this thesis presents an account of the development and current state of activist DDOS actions. It ends by presenting an analytical framework for the analysis of activist DDOS actions.

One of the problems with the legal system is that it doesn’t make any differentiation between civil disobedience and “normal” criminal activity on the Internet, though it does in the real world.

Posted on May 22, 2013 at 6:24 AM

Pinging the Entire Internet

Turns out there’s a lot of vulnerable systems out there:

Many of the two terabytes (2,000 gigabytes) worth of replies Moore received from 310 million IPs indicated that they came from devices vulnerable to well-known flaws, or configured in a way that could let anyone take control of them.

On Tuesday, Moore published results on a particularly troubling segment of those vulnerable devices: ones that appear to be used for business and industrial systems. Over 114,000 of those control connections were logged as being on the Internet with known security flaws. Many could be accessed using default passwords and 13,000 offered direct access through a command prompt without a password at all.

[…]

The new work adds to other significant findings from Moore’s unusual hobby. Results he published in January showed that around 50 million printers, games consoles, routers, and networked storage drives are connected to the Internet and easily compromised due to known flaws in a protocol called Universal Plug and Play (UPnP). This protocol allows computers to automatically find printers, but is also built into some security devices, broadband routers, and data storage systems, and could be putting valuable data at risk.
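The UPnP discovery the quote describes runs over SSDP, a simple HTTP-like protocol multicast over UDP: any device that answers an M-SEARCH probe announces itself to whoever asked, which is exactly why Internet-exposed UPnP endpoints are so easy to enumerate. A minimal sketch in Python (the function names are my own, and actually receiving replies requires a local network with UPnP devices on it):

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard UPnP/SSDP multicast address and port

def build_msearch(search_target="upnp:rootdevice", mx=2):
    """Build an SSDP M-SEARCH discovery request (plain HTTP-style text over UDP)."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",             # max seconds a device may wait before replying
        f"ST: {search_target}",  # search target: all root devices here
        "", "",                  # request ends with a blank line (CRLF CRLF)
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout=3.0):
    """Multicast the probe on the local network and collect whatever answers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), SSDP_ADDR)
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr[0], data.decode("ascii", "replace")))
    except socket.timeout:
        pass  # no more replies within the window
    finally:
        sock.close()
    return responses
```

The probe is a single unauthenticated UDP packet; anything that speaks SSDP replies with its device description URL, which is what makes Internet-wide surveys of the kind Moore ran so cheap.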

Posted on April 30, 2013 at 6:11 AM

Government Use of Hackers as an Object of Fear

Interesting article about the perception of hackers in popular culture, and how the government uses the general fear of them to push for more power:

But these more serious threats don’t seem to loom as large as hackers in the minds of those who make the laws and regulations that shape the Internet. It is the hacker—a sort of modern folk devil who personifies our anxieties about technology—who gets all the attention. The result is a set of increasingly paranoid and restrictive laws and regulations affecting our abilities to communicate freely and privately online, to use and control our own technology, and which puts users at risk for overzealous prosecutions and invasive electronic search and seizure practices. The Computer Fraud and Abuse Act, the cornerstone of domestic computer-crime legislation, is overly broad and poorly defined. Since its passage in 1986, it has created a pile of confused caselaw and overzealous prosecutions. The Departments of Defense and Homeland Security manipulate fears of techno-disasters to garner funding and support for laws and initiatives, such as the recently proposed Cyber Intelligence Sharing and Protection Act, that could have horrific implications for user rights. In order to protect our rights to free speech and privacy on the internet, we need to seriously reconsider those laws and the shadowy figure used to rationalize them.

[…]

In the effort to protect society and the state from the ravages of this imagined hacker, the US government has adopted overbroad, vaguely worded laws and regulations which severely undermine internet freedom and threaten the Internet’s role as a place of political and creative expression. In an effort to stay ahead of the wily hacker, laws like the Computer Fraud and Abuse Act (CFAA) focus on electronic conduct or actions, rather than the intent of or actual harm caused by those actions. This leads to a wide range of seemingly innocuous digital activities potentially being treated as criminal acts. Distrust for the hacker politics of Internet freedom, privacy, and access abets the development of ever-stricter copyright regimes, or laws like the proposed Cyber Intelligence Sharing and Protection Act, which if passed would have disastrous implications for personal privacy online.

Note that this was written last year, before any of the recent overzealous prosecutions.

Posted on April 8, 2013 at 6:34 AM

Peter Neumann Profile

Really nice profile in the New York Times. It includes a discussion of the Clean Slate program:

Run by Dr. Howard Shrobe, an M.I.T. computer scientist who is now a Darpa program manager, the effort began with a premise: If the computer industry got a do-over, what should it do differently?

The program includes two separate but related efforts: Crash, for Clean-Slate Design of Resilient Adaptive Secure Hosts; and MRC, for Mission-Oriented Resilient Clouds. The idea is to reconsider computing entirely, from the silicon wafers on which circuits are etched to the application programs run by users, as well as services that are placing more private and personal data in remote data centers.

Clean Slate is financing research to explore how to design computer systems that are less vulnerable to computer intruders and recover more readily once security is breached.

Posted on November 1, 2012 at 6:34 AM

Stoking Cyber Fears

A lot of the debate around President Obama’s cybersecurity initiative centers on how much of a burden it would be on industry, and how that should be financed. As important as that debate is, it obscures some of the larger issues surrounding cyberwar, cyberterrorism, and cybersecurity in general.

It’s difficult to have any serious policy discussion amid the fear mongering. Secretary Panetta’s recent comments are just the latest; search the Internet for “cyber 9/11,” “cyber Pearl Harbor,” “cyber Katrina,” or—my favorite—“cyber Armageddon.”

There’s an enormous amount of money and power that results from pushing cyberwar and cyberterrorism: power within the military, the Department of Homeland Security, and the Justice Department; and lucrative government contracts supporting those organizations. As long as cyber remains a prefix that scares, it’ll continue to be used as a bugaboo.

But while scare stories are more movie-plot than actual threat, there are real risks. The government is continually poked and probed in cyberspace, from attackers ranging from kids playing politics to sophisticated national intelligence gathering operations. Hackers can do damage, although nothing like the cyberterrorism rhetoric would lead you to believe. Cybercrime continues to rise, and still poses real risks to those of us who work, shop, and play on the Internet. And cyberdefense needs to be part of our military strategy.

Industry has definitely not done enough to protect our nation’s critical infrastructure, and the federal government may need to get more involved. This should come as no surprise; the economic externalities in cybersecurity are so great that even the freest free market would fail.

For example, the owner of a chemical plant will protect that plant from cyber attack up to the value of that plant to the owner; the residual risk to the community around the plant will remain. Politics will color how government involvement looks: market incentives, regulation, or outright government takeover of some aspects of cybersecurity.
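The chemical-plant example can be made concrete with a toy model (all numbers are hypothetical, chosen only to illustrate the externality): suppose the chance of a successful attack falls as security spending rises, the owner weighs only the loss to the plant, while society also weighs the harm to the surrounding community. A short sketch:

```python
# Hypothetical figures, in $M: what a breach costs the owner vs. the spillover
# damage to the community that the owner never pays for.
PLANT_VALUE_TO_OWNER = 10.0
HARM_TO_COMMUNITY = 90.0

def breach_probability(spend):
    """Chance of a successful attack, decreasing with security spending."""
    return 1.0 / (1.0 + spend)

def best_spend(loss_if_breached):
    """Grid-search the spending level that minimizes expected total cost:
    spending plus breach probability times the loss being weighed."""
    candidates = [i / 100 for i in range(5001)]  # $0 to $50M in $0.01M steps
    return min(candidates, key=lambda s: s + breach_probability(s) * loss_if_breached)

# The owner optimizes against only the private loss; society, against the total.
owner_spend = best_spend(PLANT_VALUE_TO_OWNER)
social_spend = best_spend(PLANT_VALUE_TO_OWNER + HARM_TO_COMMUNITY)
```

Under these made-up numbers the owner rationally stops spending far short of what the community's exposure would justify; the gap between the two optima is the residual risk the essay describes, and the reason market incentives alone fall short.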

None of this requires heavy-handed regulation. Over the past few years we’ve heard calls for the military to better control Internet protocols; for the United States to be able to “kill” all or part of the Internet, or to cut itself off from the greater Internet; for increased government surveillance; and for limits on anonymity. All of those would be dangerous, and would make us less secure. The world’s first military cyberweapon, Stuxnet, was used by the United States and Israel against Iran.

In all of this government posturing about cybersecurity, the biggest risk is a cyber-war arms race; and that’s where remarks like Panetta’s lead us. Increased government spending on cyberweapons and cyberdefense, and an increased militarization of cyberspace, are both expensive and destabilizing. Fears lead to weapons buildups, and weapons beg to be used.

I would like to see less fear mongering, and more reasoned discussion about the actual threats and reasonable countermeasures. Pushing the fear button benefits no one.

This essay originally appeared in the New York Times “Room for Debate” blog. Here are the other essays on the topic.

Posted on October 19, 2012 at 7:45 AM

Computer Security when Traveling to China

Interesting:

When Kenneth G. Lieberthal, a China expert at the Brookings Institution, travels to that country, he follows a routine that seems straight from a spy film.

He leaves his cellphone and laptop at home and instead brings “loaner” devices, which he erases before he leaves the United States and wipes clean the minute he returns. In China, he disables Bluetooth and Wi-Fi, never lets his phone out of his sight and, in meetings, not only turns off his phone but also removes the battery, for fear his microphone could be turned on remotely. He connects to the Internet only through an encrypted, password-protected channel, and copies and pastes his password from a USB thumb drive. He never types in a password directly, because, he said, “the Chinese are very good at installing key-logging software on your laptop.”

Posted on February 24, 2012 at 7:06 AM

Research into an Information Security Risk Rating

The NSF is funding research on giving organizations information-security risk ratings, similar to credit ratings for individuals:

Existing risk management techniques are based on annual audits and only provide a snapshot of a partner’s security posture. However, new vulnerabilities are discovered every day and the industry needs a solution that enables a business to continuously monitor the changing risk posture of all its partners and proactively manage assumed risks. The Phase II research objective is to build a scalable fully-automated ratings system. The research will focus on identifying and incorporating new data sources, improving the statistical properties of the ratings model, and making the ratings predictive of future behavior.

Historically, credit scoring has been a “cost and time-saving technology” that has provided tremendous value to lenders and borrowers alike by reducing costs, predicting future performance, and improving credit accessibility and affordability. Unlike credit scoring, no industry standard scoring service exists to rate businesses with respect to their information security risk. With Saperix’s ratings service, businesses and government will have the potential to reap the same time and cost savings that lenders do from credit scoring services. If the research is successful, Saperix’s solution would provide market incentives for improving security outcomes, which would be a significant change in how security investments are viewed by businesses.

I have no idea if this is snake oil or if it actually works, but note that this is a Phase II award. There was already a Phase I award, and the NSF must have liked the results from that.

Posted on January 25, 2012 at 6:44 AM

