Entries Tagged "laws"


Marc Rotenberg on Security vs. Privacy

Nice essay:

In the modern era, the right of privacy represents a vast array of rights that include clear legal standards, government accountability, judicial oversight, the design of techniques that are minimally intrusive and the respect for the dignity and autonomy of individuals.

The choice that we are being asked to make is not simply whether to reduce our expectation of privacy, but whether to reduce the rule of law, whether to diminish the role of the judiciary, whether to cast a shroud of secrecy over the decisions made by government.

In other words, we are being asked to become something other than the strong America that could promote innovation and safeguard privacy, that could protect the country and its Constitutional traditions. We are being asked to become a weak nation that accepts surveillance without accountability, that cannot defend both security and freedom.

That is a position we must reject. If we agree to reduce our expectation of privacy, we will erode our Constitutional democracy.

Posted on May 8, 2009 at 6:41 AM

An Expectation of Online Privacy

If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization—domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service only exacerbate this problem.
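Encrypting mail is not exotic, for what it's worth. Here is a minimal sketch, assuming GnuPG and the python-gnupg wrapper are installed and that the recipient's public key (the address below is hypothetical) is already in your keyring:

```python
# A minimal sketch of encrypting mail before it leaves your machine.
# Assumes GnuPG and the python-gnupg wrapper are installed, and that the
# recipient's public key (a hypothetical address here) is in your keyring.
import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

message = "Meet me at noon."
encrypted = gpg.encrypt(message, ["alice@example.com"])

if encrypted.ok:
    # Only Alice's private key can recover the plaintext; every ISP and
    # backbone provider along the way sees only this ciphertext.
    print(str(encrypted))
else:
    print("Encryption failed:", encrypted.status)
```

Everything between your machine and Alice's sees only ciphertext; the catch, as noted above, is that almost nobody bothers.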

Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you’re relying on that company to keep your data private. If you use Google Docs, you’re relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google’s security, but we don’t know what it is.

This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it’s on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don’t even know exist.

And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation—or maybe a conversation inside Facebook.

Remember when Facebook changed its terms of service to take further control over your data? They can do that whenever they want, you know.

We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, LexisNexis, Bank of America, nor T-Mobile bears the costs of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don’t need one to read it from the backup tapes at your ISP.

This isn’t a technological problem; it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office and not in the target’s home or office—the Supreme Court must recognize that reading personal e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.

Posted on May 5, 2009 at 6:06 AM

Definition of "Weapon of Mass Destruction"

At least, according to U.S. law:

18 U.S.C. 2332a

  • (2) the term "weapon of mass destruction" means—
    • (A) any destructive device as defined in section 921 of this title;
    • (B) any weapon that is designed or intended to cause death or serious bodily injury through the release, dissemination, or impact of toxic or poisonous chemicals, or their precursors;
    • (C) any weapon involving a biological agent, toxin, or vector (as those terms are defined in section 178 of this title); or
    • (D) any weapon that is designed to release radiation or radioactivity at a level dangerous to human life;

18 U.S.C. 921

  • (4) The term "destructive device" means—
    • (A) any explosive, incendiary, or poison gas—
      • (i) bomb,
      • (ii) grenade,
      • (iii) rocket having a propellant charge of more than four ounces,
      • (iv) missile having an explosive or incendiary charge of more than one-quarter ounce,
      • (v) mine, or
      • (vi) device similar to any of the devices described in the preceding clauses;
    • (B) any type of weapon (other than a shotgun or a shotgun shell which the Attorney General finds is generally recognized as particularly suitable for sporting purposes) by whatever name known which will, or which may be readily converted to, expel a projectile by the action of an explosive or other propellant, and which has any barrel with a bore of more than one-half inch in diameter; and
    • (C) any combination of parts either designed or intended for use in converting any device into any destructive device described in subparagraph (A) or (B) and from which a destructive device may be readily assembled.

The term "destructive device" shall not include any device which is neither designed nor redesigned for use as a weapon; any device, although originally designed for use as a weapon, which is redesigned for use as a signaling, pyrotechnic, line throwing, safety, or similar device; surplus ordnance sold, loaned, or given by the Secretary of the Army pursuant to the provisions of section 4684(2), 4685, or 4686 of title 10; or any other device which the Attorney General finds is not likely to be used as a weapon, is an antique, or is a rifle which the owner intends to use solely for sporting, recreational or cultural purposes.

This is a very broad definition, and one that involves the intention of the weapon’s creator as well as the details of the weapon itself.

In an e-mail, John Mueller commented:

As I understand it, not only is a grenade a weapon of mass destruction, but so is a maliciously-designed child’s rocket even if it doesn’t have a warhead. On the other hand, although a missile-propelled firecracker would be considered a weapon of mass destruction if its designers had wanted to think of it as a weapon, it would not be so considered if it had previously been designed for use as a weapon and then redesigned for pyrotechnic use or if it was surplus and had been sold, loaned, or given to you (under certain circumstances) by the Secretary of the Army.

It also means that we are coming up on the 25th anniversary of the Reagan administration’s long-misnamed WMD-for-Hostages deal with Iran.

Bad news for you, though. You’ll have to amend that line you like using in your presentations about how all WMD in all of history have killed fewer people than OIF (or whatever), since all artillery, and virtually every muzzle-loading military long arm for that matter, legally qualifies as a WMD. It does make the bombardment of Ft. Sumter all the more sinister. To say nothing of the revelation that The Star Spangled Banner is in fact an account of a WMD attack on American shores.

Amusing, to be sure, but there’s something important going on. The U.S. government has passed specific laws about “weapons of mass destruction,” because they’re particularly scary and damaging. But by generalizing the definition of WMDs, those who write the laws greatly broaden their applicability. And I have to wonder how many of those who vote in favor of the laws realize how general they really are, or—if they do know—vote for them anyway because they can’t be seen to be “soft” on WMDs.

It reminds me of those provisions of the USA PATRIOT Act—and other laws—that created police powers to be used for “terrorism and other crimes.”

EDITED TO ADD (4/14): Prosecutions based on this unreasonable definition.

Posted on April 6, 2009 at 7:10 AM

Privacy and the Fourth Amendment

In the United States, the concept of “expectation of privacy” matters because it’s the constitutional test, based on the Fourth Amendment, that governs when and how the government can invade your privacy.

Based on the 1967 Katz v. United States Supreme Court decision, this test actually has two parts. First, the government’s action can’t contravene an individual’s subjective expectation of privacy; and second, that expectation of privacy must be one that society in general recognizes as reasonable. That second part isn’t based on anything like polling data; it is more of a normative idea of what level of privacy people should be allowed to expect, given the competing importance of personal privacy on one hand and the government’s interest in public safety on the other.

The problem is that, in today’s information society, this test will rapidly leave us with no privacy at all.

In Katz, the Court ruled that the police could not eavesdrop on a phone call without a warrant: Katz expected his phone conversations to be private, and this expectation resulted from a reasonable balance between personal privacy and societal security. Given the NSA’s large-scale warrantless eavesdropping, and the previous administration’s continual insistence that it was necessary to keep America safe from terrorism, is it still reasonable to expect that our phone conversations are private?

Between the NSA’s massive internet eavesdropping program and Gmail’s content-dependent advertising, does anyone actually expect their e-mail to be private? Between calls for ISPs to retain user data and companies serving content-dependent web ads, does anyone expect their web browsing to be private? Between the various kinds of computer-infecting malware and world governments increasingly demanding to see laptop data at borders, hard drives are barely private. I certainly don’t believe that my SMSes, any of my telephone data, or anything I say on LiveJournal or Facebook—regardless of the privacy settings—is private.

Aerial surveillance, data mining, automatic face recognition, terahertz radar that can “see” through walls, wholesale surveillance, brain scans, RFID, “life recorders” that save everything: Even if society still has some small expectation of digital privacy, that will change as these and other technologies become ubiquitous. In short, the problem with a normative expectation of privacy is that it changes with perceived threats, technology and large-scale abuses.

Clearly, something has to change if we are to be left with any privacy at all. Three legal scholars have written law review articles that wrestle with the problems of applying the Fourth Amendment to cyberspace and to our computer-mediated world in general.

George Washington University’s Daniel Solove, who blogs at Concurring Opinions, has tried to capture the byzantine complexities of modern privacy. He points out, for example, that the following privacy violations—all real—are very different: A company markets a list of 5 million elderly incontinent women; reporters deceitfully gain entry to a person’s home and secretly photograph and record the person; the government uses a thermal sensor device to detect heat patterns in a person’s home; and a newspaper reports the name of a rape victim. Going beyond simple definitions such as the divulging of a secret, Solove has developed a taxonomy of privacy and of the harms that result from each category’s violation.

His 16 categories are: surveillance, interrogation, aggregation, identification, insecurity, secondary use, exclusion, breach of confidentiality, disclosure, exposure, increased accessibility, blackmail, appropriation, distortion, intrusion and decisional interference. Solove’s goal is to provide a coherent and comprehensive understanding of what is traditionally an elusive and hard-to-explain concept: privacy violations. (This taxonomy is also discussed in Solove’s book, Understanding Privacy.)

Orin Kerr, also a law professor at George Washington University, and a blogger at Volokh Conspiracy, has attempted to lay out general principles for applying the Fourth Amendment to the internet. First, he points out that the traditional inside/outside distinction—the police can watch you in a public place without a warrant, but not in your home—doesn’t work very well with regard to cyberspace. Instead, he proposes a distinction between content and non-content information: the body of an e-mail versus the header information, for example. The police should be required to get a warrant for the former, but not for the latter. Second, he proposes that search warrants should be written for particular individuals and not for particular internet accounts.
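Kerr’s distinction maps onto the actual structure of an e-mail message: headers and body are literally separable. A toy sketch using Python’s standard email module (the addresses are hypothetical):

```python
# A toy illustration of Kerr's content/non-content distinction, using
# Python's standard email module. Addresses are hypothetical.
from email import message_from_string

raw = (
    "From: alice@example.com\n"
    "To: bob@example.com\n"
    "Date: Tue, 31 Mar 2009 06:30:00 -0400\n"
    "Subject: lunch?\n"
    "\n"
    "Meet me at noon.\n"
)

msg = message_from_string(raw)

# Non-content (addressing/routing) information, per Kerr's proposal:
envelope = {name: msg[name] for name in ("From", "To", "Date")}

# Content, which would require a warrant. (Where fields like Subject
# fall is itself contested.)
body = msg.get_payload()

print(envelope)
print(body)
```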

Meanwhile, Jed Rubenfeld of Yale Law School has tried to reinterpret the Fourth Amendment not in terms of privacy, but in terms of security. Pointing out that the whole “expectations” test is circular—what the government does affects what the government can do—he redefines everything in terms of security: the security that our private affairs are private.

This security is violated when, for example, the government makes widespread use of informants, or engages in widespread eavesdropping—even if no one’s privacy is actually violated. This neatly bypasses the whole individual privacy versus societal security question—a balancing that the individual usually loses—by framing both sides in terms of personal security.

I have issues with all of these articles. Solove’s taxonomy is excellent, but the sense of outrage that accompanies a privacy violation—”How could they know/do/say that!?”—is itself an important part of the harm. The non-content information that Kerr believes should be collectible without a warrant can be very private and personal: URLs alone reveal a lot, and it’s possible to figure out what content someone browsed just from the size of encrypted SSL traffic. Also, the ease with which the government can collect all of it—the calling and called party of every phone call in the country—makes the balance very different. I believe these need to be protected with a warrant requirement. Rubenfeld’s reframing is interesting, but the devil is in the details. Reframing privacy in terms of security still results in a balancing of competing rights. I’d rather take the approach of stating the—obvious to me—individual and societal value of privacy, and giving privacy its rightful place as a fundamental human right. (There’s additional commentary on Rubenfeld’s thesis at Ars Technica.)
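That last point about SSL deserves unpacking: encryption hides what was transferred, but not how much, and transfer sizes are often distinctive enough to act as fingerprints. A toy Python sketch, with invented URLs and byte counts:

```python
# A toy sketch of the traffic-analysis point: SSL hides *what* was
# transferred but not *how much*, and sizes can identify pages.
# The URLs and byte counts below are invented for illustration.
known_pages = {
    "/index.html": 14200,
    "/medical/hiv-treatment.html": 48900,
    "/jobs/resignation-letter.html": 23100,
}

def guess_page(observed_bytes, tolerance=200):
    """Return known pages whose size matches the observed (encrypted)
    transfer size, within a small tolerance."""
    return [url for url, size in known_pages.items()
            if abs(size - observed_bytes) <= tolerance]

# An eavesdropper who merely counts bytes on the wire:
print(guess_page(48950))  # -> ['/medical/hiv-treatment.html']
```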

The trick here is to realize that a normative definition of the expectation of privacy doesn’t need to depend on threats or technology, but rather on what we—as society—decide it should be. Sure, today’s technology makes it easier than ever to violate privacy. But it doesn’t necessarily follow that we have to violate privacy. Today’s guns make it easier than ever to shoot virtually anyone for any reason. That doesn’t mean our laws have to change.

No one knows how this will shake out legally. These three articles are from law professors; they’re not judicial opinions. But clearly something has to change, and ideas like these may someday form the basis of new Supreme Court decisions that bring legal notions of privacy into the 21st century.

This essay originally appeared on Wired.com.

Posted on March 31, 2009 at 6:30 AM

Hiding Behind Terrorism Law

The Bayer company is refusing to talk about a fatal accident at a West Virginia plant, citing a 2002 terrorism law.

The Chemical Safety and Hazard Investigation Board (CSB) had intended to hear community concerns, gather more information on the accident, and inform residents of the status of its investigation. However, Bayer attorneys contacted CSB Chairman John Bresland and set up a Feb. 12 conference at the board’s Washington, D.C., headquarters. There, they warned CSB not to reveal details of the accident or the facility’s layout at the community meeting.

“This is where it gets a little strange,” Bresland tells C&EN. To justify their request, Bayer attorneys cited the Maritime Transportation Security Act of 2002, an antiterrorism law that requires companies with plants on waterways to develop security plans to minimize the threat of a terrorist attack. Part of the plans can be designated as “sensitive security information” that can be disseminated only on a “need-to-know basis.” Enforcement of the act is overseen by the Coast Guard and covers some 3,200 facilities, including 320 chemical and petrochemical facilities. Among those facilities is the Bayer plant.

Bayer argued that CSB’s planned public meeting could reveal sensitive plant-specific security information, Bresland says, and therefore would be a violation of the maritime transportation law. The board got cold feet and canceled the meeting.

Bresland contends that CSB wasn’t agreeing with Bayer, but says it was better to put off the meeting than to hold it and be unable to answer questions posed by the public.

The board then met with Coast Guard officials, Bresland says, and formally canceled the community meeting. The outcome of the Coast Guard meeting remains murky. It is unclear what role the Coast Guard might have in editing or restricting release of future CSB reports of accidents at covered facilities, the board says. “This could really cause difficulties for us,” Bresland says. “We could find ourselves hemming and hawing about what actually happened in an accident.”

This isn’t the first time that the specter of terrorism has been used to keep embarrassing information secret.

EDITED TO ADD (3/20): The meeting has been rescheduled. No word on how forthcoming Bayer will be.

Posted on March 18, 2009 at 12:45 PM

History and Ethics of Military Robots

This article gives an overview of U.S. military robots and discusses some of the issues surrounding their use in war:

As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties. Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: “Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required.”

Such an “ethical” killing machine, though, may not prove so simple in the reality of war. Even if a robot has software that follows all the various rules of engagement, and even if it were somehow absolutely free of software bugs and hardware failures (a big assumption), the very question of figuring out who an enemy is in the first place—that is, whether a target should even be considered for the list of screening questions—is extremely complicated in modern war. It essentially is a judgment call. It becomes further complicated as the enemy adapts, changes his conduct, and even hides among civilians. If an enemy is hiding behind a child, is it okay to shoot or not? Or what if an enemy is plotting an attack but has not yet carried it out? Politicians, pundits, and lawyers can fill pages arguing these points. It is unreasonable to expect robots to find them any easier.

The legal questions related to autonomous systems are also extremely sticky. In 2002, for example, an Air National Guard pilot in an F-16 saw flashing lights underneath him while flying over Afghanistan at twenty-three thousand feet and thought he was under fire from insurgents. Without getting required permission from his commanders, he dropped a 500-pound bomb on the lights. They instead turned out to be troops from Canada on a night training mission. Four were killed and eight wounded. In the hearings that followed, the pilot blamed the ubiquitous “fog of war” for his mistake. It didn’t matter and he was found guilty of dereliction of duty.

Change this scenario to an unmanned system and military lawyers aren’t sure what to do. Asks a Navy officer, “If these same Canadian forces had been attacked by an autonomous UCAV, determining who is accountable proves difficult. Would accountability lie with the civilian software programmers who wrote the faulty target identification software, the UCAV squadron’s Commanding Officer, or the Combatant Commander who authorized the operational use of the UCAV? Or are they collectively held responsible and accountable?”
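The checklist in the excerpt above is, in effect, a conjunction of boolean tests. Here is a toy sketch, with every name invented for illustration:

```python
# A toy rendering of the quoted checklist; every name is invented.
from dataclasses import dataclass

@dataclass
class Contact:
    confirmed_type: str          # "Identification confirmed"
    in_free_fire_zone: bool      # "Location confirmed"
    friendlies_within_200m: int  # "No friendlies detected"
    civilians_within_200m: int   # "No civilians detected"

def weapons_release_authorized(c: Contact) -> bool:
    # Each clause mirrors one question in the checklist.
    return (c.confirmed_type == "T-80"
            and c.in_free_fire_zone
            and c.friendlies_within_200m == 0
            and c.civilians_within_200m == 0)
```

Note that nothing in this function decides whether something should be treated as a Contact in the first place; that judgment call sits outside the code, which is exactly the article’s point.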

The article was adapted from P. W. Singer’s book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet.

Related is this paper on the ethics of autonomous military robots.

Posted on March 9, 2009 at 6:59 AM

Judge Orders Defendant to Decrypt Laptop

This is an interesting case:

At issue in this case is whether forcing Boucher to type in that PGP passphrase—which would be shielded from and remain unknown to the government—is “testimonial,” meaning that it triggers Fifth Amendment protections. The counterargument is that since defendants can be compelled to turn over a key to a safe filled with incriminating documents, or provide fingerprints, blood samples, or voice recordings, unlocking a partially encrypted hard drive is no different.

Posted on March 2, 2009 at 12:30 PM

Perverse Security Incentives

An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.

I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives. All security decisions are trade-offs, but the motivations behind them are not always obvious: they’re often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.

Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. “No touching” is a security measure as well, but it’s security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives—Whole Foods wrote a corporate policy that benefited itself.

At least, it works as long as the police and other factors keep society’s shoplifter population down to a reasonable level.

Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.

In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.

And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge of the decision used CYA security measures that defend against specific, known tactics rather than broad, general threats.

The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren’t sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?

I read almost five years ago that prisoners were being held by the United States far longer than they should have been, because “no one wanted to be responsible for releasing the next Osama bin Laden.” That incentive to do nothing hasn’t changed. It might have even gotten stronger, as these innocents languish in prison.

In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don’t have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.

For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It’s only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.

This essay originally appeared on Wired.com.

Posted on March 2, 2009 at 7:10 AM

Melissa Hathaway Interview

President Obama has tasked Melissa Hathaway with conducting a 60-day review of the nation’s cybersecurity policies.

Who is she?

Hathaway has been working as a cybercoordination executive for the Office of the Director of National Intelligence. She chaired a multiagency group called the National Cyber Study Group that was instrumental in developing the Comprehensive National Cybersecurity Initiative, which was approved by former President George W. Bush early last year. Since then, she has been in charge of coordinating and monitoring the CNCI’s implementation.

Honestly, though, the best thing to read to get an idea of how she thinks is this interview from IEEE Security & Privacy:

In the technology field, concern to be first to market often does trump the need for security to be built in up front. Most of the nation’s infrastructure is owned, operated, and developed by the commercial sector. We depend on this sector to address the nation’s broader needs, so we’ll need a new information-sharing environment. Private-sector risk models aren’t congruent with the needs for national security. We need to think about a way to do business that meets both sets of needs. The proposed revisions to Federal Information Security Management Act [FISMA] legislation will raise awareness of vulnerabilities within broader-based commercial systems.

Increasingly, we see industry jointly addressing these vulnerabilities, such as with the Industry Consortium for Advancement of Security on the Internet to share common vulnerabilities and response mechanisms. In addition, there’s the Software Assurance Forum for Excellence in Code, an alliance of vendors who seek to improve software security. Industry is beginning to understand that [it has a] shared risk and shared responsibilities and sees the advantage of coordinating and collaborating up front during the development stage, so that we can start to address vulnerabilities from day one. We also need to look for niche partnerships to enhance product development and build trust into components. We need to understand when and how we introduce risk into the system and ask ourselves whether that risk is something we can live with.

The government is using its purchasing power to influence the market toward better security. We’re already seeing results with the Federal Desktop Core Configuration [FDCC] initiative, a mandated security configuration for federal computers set by the OMB. The Department of Commerce is working with several IT vendors on standardizing security settings for a wide variety of IT products and environments. Because a broad population of the government is using Windows XP and Vista, the FDCC initiative worked with Microsoft and others to determine security needs up front.

Posted on February 24, 2009 at 12:36 PM

Is Megan's Law Worth It?

A study from New Jersey shows that Megan’s Law—laws designed to identify sex offenders to the communities they live in—is ineffective in reducing sex crimes or deterring recidivists.

The study, funded by the National Institute of Justice, examined the cases of 550 sex offenders who were broken into two groups—those released from prison before the passage of Megan’s Law and those released afterward.

The researchers found no statistically significant difference between the groups in whether the offenders committed new sex crimes.

Among those released before the passage of Megan’s Law, 10 percent were re-arrested on sex-crime charges. Among the other group, 7.6 percent were re-arrested for such crimes.

Similarly, the researchers found no significant difference in the number of victims of the two groups. Together, the offenders had 796 victims, ages 1 to 87. Most of the offenders had prior relationships with their new victims, and nearly half were family members. In just 16 percent of the cases, the offender was a stranger.

One complicating factor for the researchers is that sex crimes had started to decline even before the adoption of Megan’s Law, making it difficult to pinpoint cause and effect. In addition, sex offenses vary from county to county, rising and falling from year to year.

Even so, the researchers noted an “accelerated” decline in sex offenses in the years after the law’s passage.

“Although the initial decline cannot be attributed to Megan’s Law, the continued decline may, in fact, be related in some way to registration and notification activities,” the authors wrote. Elsewhere in the report, they noted that notification and increased surveillance of offenders “may have a general deterrent effect.”
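For readers wondering how a 10 percent versus 7.6 percent gap can fail to be statistically significant: with groups of this size, that difference is within sampling noise. A back-of-the-envelope two-proportion z-test, assuming (the excerpt doesn’t give exact group sizes) that the 550 offenders split evenly:

```python
# Back-of-the-envelope two-proportion z-test. The even 275/275 split is
# an assumption; the excerpt doesn't give the actual group sizes.
from math import sqrt
from statistics import NormalDist

n1 = n2 = 275           # assumed group sizes
p1, p2 = 0.10, 0.076    # re-arrest rates reported in the study

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.2f}")  # roughly z = 0.99, p = 0.32
```

A p-value around 0.3 means a gap this large would arise by chance roughly a third of the time, which is consistent with the researchers’ conclusion.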

Posted on February 23, 2009 at 12:28 PM

