Entries Tagged "taxonomies"


A Taxonomy of Social Networking Data

At the Internet Governance Forum in Sharm El Sheikh this week, there was a conversation on social networking data. Someone made the point that there are several different types of data, and it would be useful to separate them. This is my taxonomy of social networking data.

  1. Service data. Service data is the data you need to give to a social networking site in order to use it. It might include your legal name, your age, and your credit card number.
  2. Disclosed data. This is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  3. Entrusted data. This is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data—someone else does.
  4. Incidental data. Incidental data is data the other people post about you. Again, it’s basically the same stuff as disclosed data, but the difference is that 1) you don’t have control over it, and 2) you didn’t create it in the first place.
  5. Behavioral data. This is data that the site collects about your habits by recording what you do and who you do it with.

Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted—I know one site that allows entrusted data to be edited or deleted within a 24-hour period—and some cannot. Some can be viewed and some cannot.
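
To make the taxonomy concrete, here is a minimal sketch of how a site might represent these data types and the rights attached to each. The class name and the rights table are hypothetical, and a different site could fill in the table very differently, which is exactly the point:

```python
from enum import Enum, auto

class DataType(Enum):
    SERVICE = auto()      # data required to open and run an account
    DISCLOSED = auto()    # what you post on your own pages
    ENTRUSTED = auto()    # what you post on other people's pages
    INCIDENTAL = auto()   # what other people post about you
    BEHAVIORAL = auto()   # what the site records about your activity

# Hypothetical rights table for one imaginary site; another site
# could grant a very different set of rights per type.
RIGHTS = {
    DataType.SERVICE:    {"view": True,  "edit": True,  "delete": False},
    DataType.DISCLOSED:  {"view": True,  "edit": True,  "delete": True},
    DataType.ENTRUSTED:  {"view": True,  "edit": False, "delete": False},
    DataType.INCIDENTAL: {"view": True,  "edit": False, "delete": False},
    DataType.BEHAVIORAL: {"view": False, "edit": False, "delete": False},
}

def can(action: str, data_type: DataType) -> bool:
    """Does this (hypothetical) site grant the action on the data type?"""
    return RIGHTS[data_type].get(action, False)

print(can("delete", DataType.DISCLOSED))   # True: you created it, you control it
print(can("edit", DataType.INCIDENTAL))    # False: someone else created it
```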

And people should have different rights with respect to each data type. It’s clear that people should be allowed to change and delete their disclosed data. It’s less clear what rights they have for their entrusted data. And far less clear for their incidental data. If you post pictures of a party with me in them, can I demand you remove those pictures—or at least blur out my face? And what about behavioral data? It’s often a critical part of a social networking site’s business model. We often don’t mind if they use it to target advertisements, but are probably less sanguine about them selling it to third parties.

As we continue our conversations about what sorts of fundamental rights people have with respect to their data, this taxonomy will be useful.

EDITED TO ADD (12/12): Another categorization centered on destination instead of trust level.

Posted on November 19, 2009 at 12:51 PM

Industry Differences in Types of Security Breaches

Interhack has been working on a taxonomy of security breaches, and has an interesting conclusion:

The Health Care and Social Assistance sector reported a larger than average proportion of lost and stolen computing hardware, but reported an unusually low proportion of compromised hosts. Educational Services reported a disproportionally large number of compromised hosts, while insider conduct and lost and stolen hardware were well below the proportion common to the set as a whole. Public Administration’s proportion of compromised host reports was below average, but their proportion of processing errors was well above the norm. The Finance and Insurance sector experienced the smallest overall proportion of processing errors, but the highest proportion of insider misconduct. Other sectors showed no statistically significant difference from the average, either due to a true lack of variance, or due to an insignificant number of samples for the statistical tests being used.
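
To illustrate the kind of comparison being made (the study's actual counts and methodology are in the full paper), here is a sketch of a two-proportion z-test with placeholder numbers:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test: do the proportions x1/n1 and x2/n2 differ?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Placeholder counts, NOT the study's data: say 12 of 40 breaches in one
# sector were compromised hosts, versus 30 of 300 in the remaining sectors.
z, p = two_proportion_ztest(12, 40, 30, 300)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference
```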

Full study is here.

Posted on June 10, 2009 at 6:18 AM

Privacy and the Fourth Amendment

In the United States, the concept of “expectation of privacy” matters because it’s the constitutional test, based on the Fourth Amendment, that governs when and how the government can invade your privacy.

Based on the 1967 Katz v. United States Supreme Court decision, this test actually has two parts. First, the government’s action can’t contravene an individual’s subjective expectation of privacy; and second, that expectation of privacy must be one that society in general recognizes as reasonable. That second part isn’t based on anything like polling data; it is more of a normative idea of what level of privacy people should be allowed to expect, given the competing importance of personal privacy on one hand and the government’s interest in public safety on the other.

The problem is, in today’s information society, that definitional test will rapidly leave us with no privacy at all.

In Katz, the Court ruled that the police could not eavesdrop on a phone call without a warrant: Katz expected his phone conversations to be private and this expectation resulted from a reasonable balance between personal privacy and societal security. Given the NSA’s large-scale warrantless eavesdropping, and the previous administration’s continual insistence that it was necessary to keep America safe from terrorism, is it still reasonable to expect that our phone conversations are private?

Between the NSA’s massive internet eavesdropping program and Gmail’s content-dependent advertising, does anyone actually expect their e-mail to be private? Between calls for ISPs to retain user data and companies serving content-dependent web ads, does anyone expect their web browsing to be private? Between the various strains of computer-infecting malware and world governments increasingly demanding to see laptop data at borders, hard drives are barely private. I certainly don’t believe that my SMSes, any of my telephone data, or anything I say on LiveJournal or Facebook—regardless of the privacy settings—is private.

Aerial surveillance, data mining, automatic face recognition, terahertz radar that can “see” through walls, wholesale surveillance, brain scans, RFID, “life recorders” that save everything: Even if society still has some small expectation of digital privacy, that will change as these and other technologies become ubiquitous. In short, the problem with a normative expectation of privacy is that it changes with perceived threats, technology and large-scale abuses.

Clearly, something has to change if we are to be left with any privacy at all. Three legal scholars have written law review articles that wrestle with the problems of applying the Fourth Amendment to cyberspace and to our computer-mediated world in general.

George Washington University’s Daniel Solove, who blogs at Concurring Opinions, has tried to capture the byzantine complexities of modern privacy. He points out, for example, that the following privacy violations—all real—are very different: A company markets a list of 5 million elderly incontinent women; reporters deceitfully gain entry to a person’s home and secretly photograph and record the person; the government uses a thermal sensor device to detect heat patterns in a person’s home; and a newspaper reports the name of a rape victim. Going beyond simple definitions such as the divulging of a secret, Solove has developed a taxonomy of privacy, identifying the kinds of violations and the harms that result from each.

His 16 categories are: surveillance, interrogation, aggregation, identification, insecurity, secondary use, exclusion, breach of confidentiality, disclosure, exposure, increased accessibility, blackmail, appropriation, distortion, intrusion and decisional interference. Solove’s goal is to provide a coherent and comprehensive understanding of what is traditionally an elusive and hard-to-explain concept: privacy violations. (This taxonomy is also discussed in Solove’s book, Understanding Privacy.)

Orin Kerr, also a law professor at George Washington University, and a blogger at Volokh Conspiracy, has attempted to lay out general principles for applying the Fourth Amendment to the internet. First, he points out that the traditional inside/outside distinction—the police can watch you in a public place without a warrant, but not in your home—doesn’t work very well with regard to cyberspace. Instead, he proposes a distinction between content and non-content information: the body of an e-mail versus the header information, for example. The police should be required to get a warrant for the former, but not for the latter. Second, he proposes that search warrants should be written for particular individuals and not for particular internet accounts.
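
To make Kerr's content/non-content distinction concrete, here is a sketch (mine, not from Kerr's article) that uses Python's standard e-mail parser to split an invented message into header information and body:

```python
from email import message_from_string

# An invented message, for illustration only.
raw = """From: alice@example.com
To: bob@example.com
Subject: lunch?
Date: Tue, 31 Mar 2009 06:30:00 -0000

Meet at noon by the fountain.
"""

msg = message_from_string(raw)

# Non-content information: addressing and routing data, which under
# Kerr's proposal the police could collect without a warrant.
for name, value in msg.items():
    print(f"header: {name}: {value}")

# Content: the body of the message, which would require a warrant.
# (Subject lines are a hard case: a header field that carries content.)
print("body:", msg.get_payload().strip())
```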

Meanwhile, Jed Rubenfeld of Yale Law School has tried to reinterpret the Fourth Amendment not in terms of privacy, but in terms of security. Pointing out that the whole “expectations” test is circular—what the government does shapes what we expect, which in turn determines what the government can do—he redefines everything in terms of security: the security that our private affairs are private.

This security is violated when, for example, the government makes widespread use of informants, or engages in widespread eavesdropping—even if no one’s privacy is actually violated. This neatly bypasses the whole individual privacy versus societal security question—a balancing that the individual usually loses—by framing both sides in terms of personal security.

I have issues with all of these articles. Solove’s taxonomy is excellent, but the sense of outrage that accompanies a privacy violation—”How could they know/do/say that!?”—is an important part of the resulting harm. The non-content information that Kerr believes should be collectible without a warrant can be very private and personal: URLs can be very personal, and it’s possible to figure out browsed content just from the size of encrypted SSL traffic. Also, the ease with which the government can collect all of it—the calling and called party of every phone call in the country—makes the balance very different. I believe these need to be protected with a warrant requirement. Rubenfeld’s reframing is interesting, but the devil is in the details. Reframing privacy in terms of security still results in a balancing of competing rights. I’d rather take the approach of stating the—obvious to me—individual and societal value of privacy, and giving privacy its rightful place as a fundamental human right. (There’s additional commentary on Rubenfeld’s thesis at Ars Technica.)
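
That SSL point deserves a sketch. Encryption hides what a page says, but not (roughly) how big it is, so an eavesdropper who has profiled candidate pages in advance can match observed transfer sizes against that catalog. The page names and sizes here are invented:

```python
# Hypothetical fingerprint table: approximate encrypted transfer size
# (in bytes) for pages the eavesdropper has profiled in advance.
FINGERPRINTS = {
    "/news/frontpage": 148200,
    "/health/condition-x": 93450,
    "/forum/thread-42": 61800,
}

def guess_page(observed_size: int, tolerance: float = 0.02):
    """Return pages whose known size is within `tolerance` of observation."""
    return [
        page for page, size in FINGERPRINTS.items()
        if abs(observed_size - size) <= tolerance * size
    ]

# The eavesdropper sees roughly 93 KB of ciphertext cross the wire...
print(guess_page(93000))  # ['/health/condition-x']
```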

The trick here is to realize that a normative definition of the expectation of privacy doesn’t need to depend on threats or technology, but rather on what we—as a society—decide it should be. Sure, today’s technology makes it easier than ever to violate privacy. But it doesn’t necessarily follow that we have to violate privacy. Today’s guns make it easier than ever to shoot virtually anyone for any reason. That doesn’t mean our laws have to change.

No one knows how this will shake out legally. These three articles are from law professors; they’re not judicial opinions. But clearly something has to change, and ideas like these may someday form the basis of new Supreme Court decisions that bring legal notions of privacy into the 21st century.

This essay originally appeared on Wired.com.

Posted on March 31, 2009 at 6:30 AM

Cyberwar

I haven’t posted anything about the cyberwar between Russia and Estonia because, well, because I didn’t think there was anything new to say. We know that this kind of thing is possible. We don’t have any definitive proof that Russia was behind it. But it would be foolish to think that the world’s various militaries don’t have capabilities like this.

And anyway, I wrote about cyberwar back in January 2005.

But it seems that the essay never made it into the blog. So here it is again.


Cyberwar

The first problem with any discussion about cyberwar is definitional. I’ve been reading about cyberwar for years now, and there seem to be as many definitions of the term as there are people who write about the topic. Some people try to limit cyberwar to military actions taken during wartime, while others are so inclusive that they include the script kiddies who deface websites for fun.

I think the restrictive definition is more useful, and would like to define four different terms as follows:

Cyberwar—Warfare in cyberspace. This includes attacks against a nation’s military—forcing critical communications channels to fail, for example—and attacks against the civilian population.

Cyberterrorism—The use of cyberspace to commit terrorist acts. An example might be hacking into a computer system to cause a nuclear power plant to melt down, a dam to open, or two airplanes to collide. In a previous Crypto-Gram essay, I discussed how realistic the cyberterrorism threat is.

Cybercrime—Crime in cyberspace. This includes much of what we’ve already experienced: theft of intellectual property, extortion based on the threat of DDoS attacks, fraud based on identity theft, and so on.

Cybervandalism—The script kiddies who deface websites for fun are technically criminals, but I think of them more as vandals or hooligans. They’re like the kids who spray paint buses: in it more for the thrill than anything else.

At first glance, there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime, even vandalism are old concepts. That’s correct: the only thing new is the domain; it’s the same old stuff occurring in a new arena. But because the arena of cyberspace is different from other arenas, there are differences worth considering.

One thing that hasn’t changed is that the terms overlap: although the goals are different, many of the tactics used by armies, terrorists, and criminals are the same. Just as all three groups use guns and bombs, all three groups can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime, or even—if it’s done by some fourteen-year-old who doesn’t really understand what he’s doing—cybervandalism. Which it is will depend on the motivations of the attacker and the circumstances surrounding the attack…just as in the real world.

For it to be cyberwar, it must first be war. And in the 21st century, war will inevitably include cyberwar. For just as war moved into the air with the development of kites and balloons and then aircraft, and war moved into space with the development of satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.

The Waging of Cyberwar

There should be no doubt that the smarter and better-funded militaries of the world are planning for cyberwar, both attack and defense. It would be foolish for a military to ignore the threat of a cyberattack and not invest in defensive capabilities, or to disregard the strategic or tactical possibility of launching an offensive cyberattack against an enemy during wartime. And while history has taught us that many militaries are indeed foolish and ignore the march of progress, cyberwar has been discussed too much in military circles to be ignored.

This implies that at least some of our world’s militaries have Internet attack tools that they’re saving in case of wartime. They could be denial-of-service tools. They could be exploits that would allow military intelligence to penetrate military systems. They could be viruses and worms similar to what we’re seeing now, but perhaps country- or network-specific. They could be Trojans that eavesdrop on networks, disrupt network operations, or allow an attacker to penetrate still other networks.

Script kiddies are attackers who run exploit code written by others, but don’t really understand the intricacies of what they’re doing. Conversely, professional attackers spend an enormous amount of time developing exploits: finding vulnerabilities, writing code to exploit them, figuring out how to cover their tracks. The real professionals don’t release their code to the script kiddies; the stuff is much more valuable if it remains secret until it is needed. I believe that militaries have collections of vulnerabilities in common operating systems, generic applications, or even custom military software that their potential enemies are using, and code to exploit those vulnerabilities. I believe that these militaries are keeping these vulnerabilities secret, and that they are saving them in case of wartime or other hostilities. It would be irresponsible for them not to.

The most obvious cyberattack is the disabling of large parts of the Internet, at least for a while. Certainly some militaries have the capability to do this, but in the absence of global war I doubt that they would do so; the Internet is far too useful an asset and far too large a part of the world economy. More interesting is whether they would try to disable national pieces of it. If Country A went to war with Country B, would Country A want to disable Country B’s portion of the Internet, or remove connections between Country B’s Internet and the rest of the world? Depending on the country, a low-tech solution might be the easiest: disable whatever undersea cables they’re using as access. Could Country A’s military turn its own Internet into a domestic-only network if they wanted?

For a more surgical approach, we can also imagine cyberattacks designed to destroy particular organizations’ networks: for example, the denial-of-service attack against the Al Jazeera website during the recent Iraqi war, allegedly by pro-American hackers but possibly by the government. We can imagine a cyberattack against the computer networks at a nation’s military headquarters, or the computer networks that handle logistical information.

One important thing to remember is that destruction is the last thing a military wants to do with a communications network. A military only wants to shut an enemy’s network down if they aren’t getting useful information from it. The best thing to do is to infiltrate the enemy’s computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, the next best is to perform traffic analysis: analyze who is talking to whom and the characteristics of that communication. Only if a military can’t do any of that do they consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh all of the advantages.
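
Traffic analysis in particular is worth a sketch: even with every message body unread, tallying who talks to whom, how often, and in what volume is revealing. The metadata records below are invented for illustration:

```python
from collections import Counter

# Invented intercept metadata: (sender, receiver, bytes), no content at all.
intercepts = [
    ("hq", "unit-a", 4096),
    ("hq", "unit-b", 2048),
    ("unit-a", "supply-depot", 65536),
    ("hq", "unit-a", 1024),
    ("unit-a", "supply-depot", 131072),
]

messages = Counter()
volume = Counter()
for src, dst, nbytes in intercepts:
    messages[(src, dst)] += 1
    volume[(src, dst)] += nbytes

# Who talks to whom, how often, and how much: the essence of traffic analysis.
for (src, dst), n in messages.most_common():
    print(f"{src} -> {dst}: {n} messages, {volume[(src, dst)]} bytes")
# A sudden spike in hq -> unit-a traffic might signal an imminent operation,
# even though no message was ever read.
```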

Properties of Cyberwar

Because attackers and defenders use the same network hardware and software, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows. When a military discovers a vulnerability in a common product, they can either alert the manufacturer and fix the vulnerability, or not tell anyone. It’s not an easy decision. Fixing the vulnerability gives both the good guys and the bad guys a more secure system. Keeping the vulnerability secret means that the good guys can exploit the vulnerability to attack the bad guys, but it also means that the good guys are vulnerable. As long as everyone uses the same microprocessors, operating systems, network protocols, applications software, etc., the equities issue will always be a consideration when planning cyberwar.

Cyberwar can take on aspects of espionage, and does not necessarily involve open warfare. (In military talk, cyberwar is not necessarily “hot.”) Since much of cyberwar will be about seizing control of a network and eavesdropping on it, there may not be any obvious damage from cyberwar operations. This means that the same tactics might be used in peacetime by national intelligence agencies. There’s considerable risk here. Just as U.S. U-2 flights over the Soviet Union could have been viewed as an act of war, the deliberate penetration of a country’s computer networks might be as well.

Cyberattacks target infrastructure. In this way they are no different than conventional military attacks against other networks: power, transportation, communications, etc. All of these networks are used by both civilians and the military during wartime, and attacks against them inconvenience both groups of people. For example, when the Allies bombed German railroad bridges during World War II, that affected both civilian and military transport. And when the United States bombed Iraqi communications links in both the First and Second Iraqi Wars, that affected both civilian and military communications. Cyberattacks, even attacks targeted as precisely as today’s smart bombs, are likely to have collateral effects.

Cyberattacks can be used to wage information war. Information war is another topic that’s received considerable media attention of late, although it is not new. Dropping leaflets on enemy soldiers to persuade them to surrender is information war. Broadcasting radio programs to enemy troops is information war. As people get more and more of their information over cyberspace, cyberspace will increasingly become a theater for information war. It’s not hard to imagine cyberattacks designed to co-opt the enemy’s communications channels and use them as a vehicle for information war.

Because cyberwar targets information infrastructure, the waging of it can be more damaging to countries that have significant computer-network infrastructure. The idea is that a technologically poor country might decide that a cyberattack that affects the entire world would disproportionately affect its enemies, because rich nations rely on the Internet much more than poor ones. In some ways this is the dark side of the digital divide, and one of the reasons countries like the United States are so worried about cyberdefense.

Cyberwar is asymmetric, and can be a guerrilla attack. Unlike conventional military offensives involving divisions of men and supplies, cyberattacks are carried out by a few trained operatives. In this way, cyberattacks can be part of a guerrilla warfare campaign.

Cyberattacks also make effective surprise attacks. For years we’ve heard dire warnings of an “electronic Pearl Harbor.” These are largely hyperbole today. I discuss this more in that previous Crypto-Gram essay on cyberterrorism, but right now the infrastructure just isn’t sufficiently vulnerable in that way.

Cyberattacks do not necessarily have an obvious origin. Unlike other forms of warfare, cyberattacks lend themselves to misdirection. It’s possible to see damage being done without knowing where it’s coming from. This is a significant difference; there’s something terrifying about not knowing your opponent—or knowing it, and then being wrong. Imagine if, after Pearl Harbor, we had not known who attacked us.

Cyberwar is a moving target. I said earlier that warnings of an electronic Pearl Harbor are largely hyperbole today. That’s true; but this, like all other aspects of cyberspace, is continually changing. Technological improvements affect everyone, including cyberattack mechanisms. And the Internet is becoming critical to more of our infrastructure, making cyberattacks more attractive. There will be a time in the future, perhaps not too far into the future, when a surprise cyberattack becomes a realistic threat.

And finally, cyberwar is a multifaceted concept. It’s part of a larger military campaign, and attacks are likely to have both real-world and cyber components. A military might target the enemy’s communications infrastructure through both physical attack—bombings of selected communications facilities and transmission cables—and virtual attack. An information warfare campaign might include dropping of leaflets, usurpation of a television channel, and mass sending of e-mail. And many cyberattacks still have easier non-cyber equivalents: A country wanting to isolate another country’s Internet might find a low-tech solution, involving the acquiescence of backbone companies like Cable & Wireless, easier than a targeted worm or virus. Cyberwar doesn’t replace war; it’s just another arena in which the larger war is fought.

People overplay the risks of cyberwar and cyberterrorism. It’s sexy, and it gets media attention. And at the same time, people underplay the risks of cybercrime. Today crime is big business on the Internet, and it’s getting bigger all the time. But luckily, the defenses are the same. The countermeasures aimed at preventing both cyberwar and cyberterrorist attacks will also defend against cybercrime and cybervandalism. So even if organizations secure their networks for the wrong reasons, they’ll do the right thing.

Here’s my previous essay on cyberterrorism.

Posted on June 4, 2007 at 6:13 AM

Updating the Traditional Security Model

On the Firewall Wizards mailing list last year, Dave Piscitello made a fascinating observation. Commenting on the traditional four-step security model:

Authentication (who are you)
Authorization (what are you allowed to do)
Availability (is the data accessible)
Authenticity (is the data intact)

Piscitello said:

This model is no longer sufficient because it does not include asserting the trustworthiness of the endpoint device from which a (remote) user will authenticate and subsequently access data. Network admission and endpoint control are needed to determine that the device is free of malware (esp. key loggers) before you even accept a keystroke from a user. So let’s prepend “admissibility” to your list, and come up with a 5-legged stool, or call it the Pentagon of Trust.

He’s 100% right.
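
One way to picture the expanded model is as an ordered series of checks, with admissibility evaluated before a user's credentials are even examined. This is a sketch with invented stub functions, not a real admission-control API:

```python
def admissibility(device) -> bool:
    """Is the endpoint trustworthy (patched, free of malware)? Invented stub."""
    return device.get("malware_free", False) and device.get("patched", False)

def authentication(user) -> bool:         # who are you
    return user.get("credentials_valid", False)

def authorization(user, action) -> bool:  # what are you allowed to do
    return action in user.get("permissions", [])

def availability(resource) -> bool:       # is the data accessible
    return resource.get("online", False)

def authenticity(resource) -> bool:       # is the data intact
    return resource.get("checksum_ok", False)

def admit(device, user, action, resource) -> bool:
    # Checks run in order and short-circuit: a compromised endpoint
    # is rejected before the user's credentials are even examined.
    return (admissibility(device)
            and authentication(user)
            and authorization(user, action)
            and availability(resource)
            and authenticity(resource))

device = {"malware_free": True, "patched": True}
user = {"credentials_valid": True, "permissions": ["read"]}
resource = {"online": True, "checksum_ok": True}
print(admit(device, user, action="read", resource=resource))  # True
```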

Posted on August 1, 2006 at 2:03 PM

Insider Threat Statistics

From Europe, although I doubt it’s any different in the U.S.:

  • One in five workers (21%) let family and friends use company laptops and PCs to access the Internet.
  • More than half (51%) connect their own devices or gadgets to their work PC.
  • A quarter of these do so every day.
  • Around 60% admit to storing personal content on their work PC.
  • One in ten confessed to downloading content at work they shouldn’t.
  • Nearly two-thirds (62%) admitted they have a very limited knowledge of IT security.
  • More than half (51%) had no idea how to update the anti-virus protection on their company PC.
  • Five percent say they have accessed areas of their IT system they shouldn’t have.

One caveat: the study is from McAfee, and as the article rightly notes:

Naturally McAfee has a vested interest in talking up this kind of threat….

And finally:

Based on its survey, McAfee has identified four types of employees who put their workplace at risk:

  • The Security Softie – This group comprises the vast majority of employees. They have a very limited knowledge of security and put their business at risk through using their work computer at home or letting family members surf the Internet on their work PC.
  • The Gadget Geek – Those that come to work armed with a variety of devices/gadgets, all of which get plugged into their PC.
  • The Squatter – Those who use the company IT resources in ways they shouldn’t (i.e. by storing content or playing games).
  • The Saboteur – A very small minority of employees. This group will maliciously hack into areas of the IT system to which they shouldn’t have access or infect the network purposely from within.

I like the list.

Posted on December 19, 2005 at 7:13 AM

A Taxonomy of Privacy

Interesting law review paper by Daniel Solove. Here’s the abstract:

Privacy is a concept in disarray. Nobody can articulate what it means. As one commentator has observed, privacy suffers from “an embarrassment of meanings.” Privacy is far too vague a concept to guide adjudication and lawmaking, as abstract incantations of the importance of “privacy” do not fare well when pitted against more concretely-stated countervailing interests.

In 1960, the famous torts scholar William Prosser attempted to make sense of the landscape of privacy law by identifying four different interests. But Prosser focused only on tort law, and the law of information privacy is significantly more vast and complex, extending to Fourth Amendment law, the constitutional right to information privacy, evidentiary privileges, dozens of federal privacy statutes, and hundreds of state statutes. Moreover, Prosser wrote over 40 years ago, and new technologies have given rise to a panoply of new privacy harms.

A new taxonomy to understand privacy violations is thus sorely needed. This article develops a taxonomy to identify privacy problems in a comprehensive and concrete manner. It endeavors to guide the law toward a more coherent understanding of privacy and to serve as a framework for the future development of the field of privacy law.

The paper is a follow-on to his previous paper, “Conceptualizing Privacy.”

Posted on April 19, 2005 at 1:32 PM
