Entries Tagged "courts"

Definition of "Weapon of Mass Destruction"

At least, according to U.S. law:

18 U.S.C. 2332a

  • (2) the term "weapon of mass destruction" means—
    • (A) any destructive device as defined in section 921 of this title;
    • (B) any weapon that is designed or intended to cause death or serious bodily injury through the release, dissemination, or impact of toxic or poisonous chemicals, or their precursors;
    • (C) any weapon involving a biological agent, toxin, or vector (as those terms are defined in section 178 of this title); or
    • (D) any weapon that is designed to release radiation or radioactivity at a level dangerous to human life;

18 U.S.C. 921

  • (4) The term "destructive device" means—
    • (A) any explosive, incendiary, or poison gas—
      • (i) bomb,
      • (ii) grenade,
      • (iii) rocket having a propellant charge of more than four ounces,
      • (iv) missile having an explosive or incendiary charge of more than one-quarter ounce,
      • (v) mine, or
      • (vi) device similar to any of the devices described in the preceding clauses;
    • (B) any type of weapon (other than a shotgun or a shotgun shell which the Attorney General finds is generally recognized as particularly suitable for sporting purposes) by whatever name known which will, or which may be readily converted to, expel a projectile by the action of an explosive or other propellant, and which has any barrel with a bore of more than one-half inch in diameter; and
    • (C) any combination of parts either designed or intended for use in converting any device into any destructive device described in subparagraph (A) or (B) and from which a destructive device may be readily assembled.

The term "destructive device" shall not include any device which is neither designed nor redesigned for use as a weapon; any device, although originally designed for use as a weapon, which is redesigned for use as a signaling, pyrotechnic, line throwing, safety, or similar device; surplus ordnance sold, loaned, or given by the Secretary of the Army pursuant to the provisions of section 4684 (2), 4685, or 4686 of title 10; or any other device which the Attorney General finds is not likely to be used as a weapon, is an antique, or is a rifle which the owner intends to use solely for sporting, recreational or cultural purposes.

This is a very broad definition, and one that involves the intention of the weapon’s creator as well as the details of the weapon itself.

In an e-mail, John Mueller commented:

As I understand it, not only is a grenade a weapon of mass destruction, but so is a maliciously designed child’s rocket even if it doesn’t have a warhead. On the other hand, although a missile-propelled firecracker would be considered a weapon of mass destruction if its designers had wanted to think of it as a weapon, it would not be so considered if it had previously been designed for use as a weapon and then redesigned for pyrotechnic use or if it was surplus and had been sold, loaned, or given to you (under certain circumstances) by the Secretary of the Army.

It also means that we are coming up on the 25th anniversary of the Reagan administration’s long-misnamed WMD-for-Hostages deal with Iran.

Bad news for you, though. You’ll have to amend that line you like using in your presentations about how all WMD in all of history have killed fewer people than OIF (or whatever), since all artillery, and virtually every muzzle-loading military long arm for that matter, legally qualifies as a WMD. It does make the bombardment of Ft. Sumter all the more sinister. To say nothing of the revelation that The Star Spangled Banner is in fact an account of a WMD attack on American shores.

Amusing, to be sure, but there’s something important going on. The U.S. government has passed specific laws about “weapons of mass destruction,” because they’re particularly scary and damaging. But by generalizing the definition of WMDs, those who write the laws greatly broaden their applicability. And I have to wonder how many of those who vote in favor of the laws realize how general they really are, or—if they do know—vote for them anyway because they can’t be seen to be “soft” on WMDs.

It reminds me of those provisions of the USA PATRIOT Act—and other laws—that created police powers to be used for “terrorism and other crimes.”

EDITED TO ADD (4/14): Prosecutions based on this unreasonable definition.

Posted on April 6, 2009 at 7:10 AM

Privacy and the Fourth Amendment

In the United States, the concept of “expectation of privacy” matters because it’s the constitutional test, based on the Fourth Amendment, that governs when and how the government can invade your privacy.

Based on the 1967 Katz v. United States Supreme Court decision, this test actually has two parts. First, the government’s action can’t contravene an individual’s subjective expectation of privacy; and second, that expectation of privacy must be one that society in general recognizes as reasonable. That second part isn’t based on anything like polling data; it is more of a normative idea of what level of privacy people should be allowed to expect, given the competing importance of personal privacy on one hand and the government’s interest in public safety on the other.

The problem is, in today’s information society, that test will rapidly leave us with no privacy at all.

In Katz, the Court ruled that the police could not eavesdrop on a phone call without a warrant: Katz expected his phone conversations to be private, and this expectation resulted from a reasonable balance between personal privacy and societal security. Given the NSA’s large-scale warrantless eavesdropping, and the previous administration’s continual insistence that it was necessary to keep America safe from terrorism, is it still reasonable to expect that our phone conversations are private?

Between the NSA’s massive internet eavesdropping program and Gmail’s content-dependent advertising, does anyone actually expect their e-mail to be private? Between calls for ISPs to retain user data and companies serving content-dependent web ads, does anyone expect their web browsing to be private? Between the various computer-infecting malware, and world governments increasingly demanding to see laptop data at borders, hard drives are barely private. I certainly don’t believe that my SMSes, any of my telephone data, or anything I say on LiveJournal or Facebook—regardless of the privacy settings—is private.

Aerial surveillance, data mining, automatic face recognition, terahertz radar that can “see” through walls, wholesale surveillance, brain scans, RFID, “life recorders” that save everything: Even if society still has some small expectation of digital privacy, that will change as these and other technologies become ubiquitous. In short, the problem with a normative expectation of privacy is that it changes with perceived threats, technology and large-scale abuses.

Clearly, something has to change if we are to be left with any privacy at all. Three legal scholars have written law review articles that wrestle with the problems of applying the Fourth Amendment to cyberspace and to our computer-mediated world in general.

George Washington University’s Daniel Solove, who blogs at Concurring Opinions, has tried to capture the byzantine complexities of modern privacy. He points out, for example, that the following privacy violations—all real—are very different: A company markets a list of 5 million elderly incontinent women; reporters deceitfully gain entry to a person’s home and secretly photograph and record the person; the government uses a thermal sensor device to detect heat patterns in a person’s home; and a newspaper reports the name of a rape victim. Going beyond simple definitions such as the divulging of a secret, Solove has developed a taxonomy of privacy and of the harms that result from its violation.

His 16 categories are: surveillance, interrogation, aggregation, identification, insecurity, secondary use, exclusion, breach of confidentiality, disclosure, exposure, increased accessibility, blackmail, appropriation, distortion, intrusion and decisional interference. Solove’s goal is to provide a coherent and comprehensive understanding of what is traditionally an elusive and hard-to-explain concept: privacy violations. (This taxonomy is also discussed in Solove’s book, Understanding Privacy.)

Orin Kerr, also a law professor at George Washington University, and a blogger at Volokh Conspiracy, has attempted to lay out general principles for applying the Fourth Amendment to the internet. First, he points out that the traditional inside/outside distinction—the police can watch you in a public place without a warrant, but not in your home—doesn’t work very well with regard to cyberspace. Instead, he proposes a distinction between content and non-content information: the body of an e-mail versus the header information, for example. The police should be required to get a warrant for the former, but not for the latter. Second, he proposes that search warrants should be written for particular individuals and not for particular internet accounts.
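To make Kerr’s content/non-content line concrete: in an e-mail, the routing metadata lives in the headers, and the message itself in the body. Here is a minimal sketch using Python’s standard library email parser; the message text is invented, and the legal labels in the comments describe Kerr-style proposals, not settled law:

```python
from email import message_from_string

# Invented example message. In a framework like Kerr's, routing
# headers (To, From, Date) are non-content, while the body is
# content; fields like Subject sit in a debated middle ground.
raw = """From: alice@example.com
To: bob@example.com
Subject: lunch?
Date: Tue, 31 Mar 2009 10:00:00 -0400

Meet at noon? I have something private to discuss.
"""

msg = message_from_string(raw)

routing_headers = {k: v for k, v in msg.items() if k in ("From", "To", "Date")}
body = msg.get_payload()

print(routing_headers)  # candidate for collection without a warrant
print(body)             # would require a warrant
```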

Meanwhile, Jed Rubenfeld of Yale Law School has tried to reinterpret the Fourth Amendment not in terms of privacy, but in terms of security. Pointing out that the whole “expectations” test is circular—what the government does affects what the government can do—he redefines everything in terms of security: the security that our private affairs are private.

This security is violated when, for example, the government makes widespread use of informants, or engages in widespread eavesdropping—even if no one’s privacy is actually violated. This neatly bypasses the whole individual privacy versus societal security question—a balancing that the individual usually loses—by framing both sides in terms of personal security.

I have issues with all of these articles. Solove’s taxonomy is excellent, but the sense of outrage that accompanies a privacy violation—“How could they know/do/say that!?”—is an important part of the resulting harm. The non-content information that Kerr believes should be collectible without a warrant can be very private and personal: URLs can be very personal, and it’s possible to figure out browsed content just from the size of encrypted SSL traffic. Also, the ease with which the government can collect all of it—the calling and called party of every phone call in the country—makes the balance very different. I believe these need to be protected with a warrant requirement. Rubenfeld’s reframing is interesting, but the devil is in the details. Reframing privacy in terms of security still results in a balancing of competing rights. I’d rather take the approach of stating the—obvious to me—individual and societal value of privacy, and giving privacy its rightful place as a fundamental human right. (There’s additional commentary on Rubenfeld’s thesis at ArsTechnica.)
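A quick aside on that SSL point, since it surprises people: encryption hides what a page says but does little to hide how big it is, so an eavesdropper who has already measured the sizes of candidate pages can often match an observed ciphertext length against that catalog. Here is a minimal sketch in Python; the page names and byte counts are invented for illustration, and real attacks use richer features than a single size:

```python
# Hypothetical sketch: inferring which page was fetched over SSL/TLS
# purely from ciphertext sizes. All names and numbers are invented.

# Approximate encrypted response sizes (in bytes) for known pages,
# e.g. gathered by fetching each candidate page and measuring it.
PAGE_SIZE_CATALOG = {
    "medical-condition-faq.html": 48210,
    "bankruptcy-advice.html": 51877,
    "cat-pictures.html": 48190,
}

def guess_page(observed_size, tolerance=64):
    """Return catalog pages whose encrypted size is within `tolerance`
    bytes of an observed ciphertext length. Encryption hides content,
    but padding blurs the length by only a small amount."""
    return [page for page, size in PAGE_SIZE_CATALOG.items()
            if abs(size - observed_size) <= tolerance]

# An eavesdropper who saw a 48,200-byte encrypted response learns a lot:
print(guess_page(48200))  # ['medical-condition-faq.html', 'cat-pictures.html']
```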

The trick here is to realize that a normative definition of the expectation of privacy doesn’t need to depend on threats or technology, but rather on what we—as society—decide it should be. Sure, today’s technology makes it easier than ever to violate privacy. But it doesn’t necessarily follow that we have to violate privacy. Today’s guns make it easier than ever to shoot virtually anyone for any reason. That doesn’t mean our laws have to change.

No one knows how this will shake out legally. These three articles are from law professors; they’re not judicial opinions. But clearly something has to change, and ideas like these may someday form the basis of new Supreme Court decisions that bring legal notions of privacy into the 21st century.

This essay originally appeared on Wired.com.

Posted on March 31, 2009 at 6:30 AM

Judge Orders Defendant to Decrypt Laptop

This is an interesting case:

At issue in this case is whether forcing Boucher to type in that PGP passphrase—which would be shielded from and remain unknown to the government—is “testimonial,” meaning that it triggers Fifth Amendment protections. The counterargument is that since defendants can be compelled to turn over a key to a safe filled with incriminating documents, or provide fingerprints, blood samples, or voice recordings, unlocking a partially-encrypted hard drive is no different.

Posted on March 2, 2009 at 12:30 PM

Perverse Security Incentives

An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.

I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision. All security decisions are trade-offs, but the motivations behind them are not always obvious: They’re often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.

Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. “No touching” is a security measure as well, but it’s security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives—Whole Foods wrote a corporate policy that benefited itself.

At least, it works as long as the police and other factors keep society’s shoplifter population down to a reasonable level.

Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.

In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.

And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge of the decision used CYA security measures to prevent specific, known tactics rather than broad, general ones.

The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren’t sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?

I read almost five years ago that prisoners were being held by the United States far longer than they should have been, because “no one wanted to be responsible for releasing the next Osama bin Laden.” That incentive to do nothing hasn’t changed. It might have even gotten stronger, as these innocents languish in prison.

In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don’t have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.

For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It’s only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.

This essay originally appeared on Wired.com.

Posted on March 2, 2009 at 7:10 AM

Confessions Corrupt Eyewitnesses

People confess to crimes they don’t commit. They do it a lot. What’s interesting about this research is that confessions—whether false or true—corrupt other eyewitnesses:

Abstract

A confession is potent evidence, persuasive to judges and juries. Is it possible that a confession can also affect other evidence? The present study tested the hypothesis that a confession will alter eyewitnesses’ identification decisions. Two days after witnessing a staged theft and making an identification decision from a lineup that did not include the thief, participants were told that certain lineup members had confessed or denied guilt during a subsequent interrogation. Among those participants who had made a selection but were told that another lineup member confessed, 61% changed their identifications. Among those participants who had not made an identification, 50% went on to select the confessor when his identity was known. These findings challenge the presumption in law that different forms of evidence are independent and suggest an important overlooked mechanism by which innocent confessors are wrongfully convicted: Potentially exculpatory evidence is corrupted by a confession itself.

More:

When asked to explain their change, subjects revealed they were actually convinced by the confessor, and not simply complying with it, saying, “His face now looks more familiar than the one I chose before.”

Posted on February 4, 2009 at 6:35 AM

The Exclusionary Rule and Security

Earlier this month, the Supreme Court ruled that evidence gathered as a result of errors in a police database is admissible in court. Their narrow decision is wrong, and will only ensure that police databases remain error-filled in the future.

The specifics of the case are simple. A computer database said there was a felony arrest warrant pending for Bennie Herring when there actually wasn’t. When the police came to arrest him, they searched his home and found illegal drugs and a gun. The Supreme Court was asked to rule whether the police had the right to arrest him for possessing those items, even though there was no legal basis for the search and arrest in the first place.

What’s at issue here is the exclusionary rule, which basically says that unconstitutionally or illegally collected evidence is inadmissible in court. It might seem like a technicality, but excluding what is called “the fruit of the poisonous tree” is a security system designed to protect us all from police abuse.

We have a number of rules limiting what the police can do: rules governing arrest, search, interrogation, detention, prosecution, and so on. And one of the ways we ensure that the police follow these rules is by forbidding the police to receive any benefit from breaking them. In fact, we design the system so that the police actually harm their own interests by breaking them, because all evidence that stems from breaking the rules is inadmissible.

And that’s what the exclusionary rule does. If the police search your home without a warrant and find drugs, they can’t use those drugs as evidence against you. Since the police have better things to do than waste their time, they have an incentive to get a warrant.

The Herring case is more complicated, because the police thought they did have a warrant. The error was not a police error, but a database error. And, in fact, Chief Justice Roberts wrote for the majority: “The exclusionary rule serves to deter deliberate, reckless, or grossly negligent conduct, or in some circumstances recurring or systemic negligence. The error in this case does not rise to that level.”

Unfortunately, Roberts is wrong. Government databases are filled with errors. People often can’t see data about themselves, and have no way to correct the errors if they do learn of any. And more and more databases are trying to exempt themselves from the Privacy Act of 1974, and specifically the provisions that require data accuracy. The legal argument for excluding this evidence was best made by an amicus curiae brief filed by the Electronic Privacy Information Center, but in short, the court should exclude the evidence because it’s the only way to ensure police database accuracy.

We are protected from becoming a police state by limits on police power and authority. This is not a trade-off we make lightly: we deliberately hamper law enforcement’s ability to do its job because we recognize that these limits make us safer. Without the exclusionary rule, your only remedy against an illegal search is to bring legal action against the police—and that can be very difficult. We, the people, would rather have you go free than motivate the police to ignore the rules that limit their power.

By not applying the exclusionary rule in the Herring case, the Supreme Court missed an important opportunity to motivate the police to purge errors from their databases. Constitutional lawyers have written many articles about this ruling, but the most interesting idea comes from George Washington University professor Daniel J. Solove, who proposes this compromise: “If a particular database has reasonable protections and deterrents against errors, then the Fourth Amendment exclusionary rule should not apply. If not, then the exclusionary rule should apply. Such a rule would create an incentive for law enforcement officials to maintain accurate databases, to avoid all errors, and would ensure that there would be a penalty or consequence for errors.”

Increasingly, we are being judged by the trail of data we leave behind us. Increasingly, data accuracy is vital to our personal safety and security. And if errors made by police databases aren’t held to the same legal standard as errors made by policemen, then more and more innocent Americans will find themselves the victims of incorrect data.

This essay originally appeared on the Wall Street Journal website.

EDITED TO ADD (2/1): More on the assault on the exclusionary rule.

EDITED TO ADD (2/9): Here’s another recent court case involving the exclusionary rule, and a thoughtful analysis by Orin Kerr.

Posted on January 28, 2009 at 7:12 AM

Audit

As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt’s immigration status. And in November, Verizon employees peeked at his cell phone records.

What these three incidents illustrate is not that computerized databases are vulnerable to hacking—we already knew that, and anyway the perpetrators all had legitimate access to the systems they used—but how important audit is as a security measure.

When we think about security, we commonly think about preventive measures: locks to keep burglars out of our homes, bank safes to keep thieves from our money, and airport screeners to keep guns and bombs off airplanes. We might also think of detection and response measures: alarms that go off when burglars pick our locks or dynamite open bank safes, sky marshals on airplanes who respond when a hijacker manages to sneak a gun through airport security. But audit, figuring out who did what after the fact, is often far more important than any of those other three.

Most security against crime comes from audit. Of course we use locks and alarms, but we don’t wear bulletproof vests. The police provide for our safety by investigating crimes after the fact and prosecuting the guilty: that’s audit.

Audit helps ensure that people don’t abuse positions of trust. The cash register, for example, is basically an audit system. Cashiers have to handle the store’s money. To ensure they don’t skim from the till, the cash register keeps an audit trail of every transaction. The store owner can look at the register totals at the end of the day and make sure the amount of money in the register is the amount that should be there.
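To make the cash-register example concrete, here is a toy sketch in Python of an append-only transaction log with end-of-day reconciliation. The names and amounts are invented; a real register also logs voids, no-sales, and drawer openings:

```python
from dataclasses import dataclass
from datetime import datetime

# Toy sketch of a cash-register audit trail: every transaction is
# appended to a log, and the drawer is reconciled against the log's
# total at the end of the day. Names and amounts are invented.

@dataclass(frozen=True)
class Transaction:
    timestamp: datetime
    cashier: str
    amount_cents: int  # positive for sales, negative for refunds

audit_log: list[Transaction] = []

def record_sale(cashier: str, amount_cents: int) -> None:
    # Append-only: entries are never edited or deleted, so skimming
    # shows up as a mismatch rather than disappearing.
    audit_log.append(Transaction(datetime.now(), cashier, amount_cents))

def reconcile(cash_in_drawer_cents: int, opening_float_cents: int) -> int:
    """Return the discrepancy between what the log says should be in
    the drawer and what is actually there (0 means the till balances)."""
    expected = opening_float_cents + sum(t.amount_cents for t in audit_log)
    return cash_in_drawer_cents - expected

record_sale("alice", 1250)
record_sale("alice", 799)
print(reconcile(cash_in_drawer_cents=12049, opening_float_cents=10000))  # 0
```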

The same idea secures us from police abuse, too. The police have enormous power, including the ability to intrude into very intimate aspects of our life in order to solve crimes and keep the peace. This is generally a good thing, but to ensure that the police don’t abuse this power, we put in place systems of audit like the warrant process.

The whole NSA warrantless eavesdropping scandal was about this. Some misleadingly painted it as allowing the government to eavesdrop on foreign terrorists, but the government always had that authority. What the government wanted was to not have to submit a warrant, even after the fact, to a secret FISA court. What they wanted was to not be subject to audit.

That would be an incredibly bad idea. Law enforcement systems that don’t have good audit features designed in, or are exempt from this sort of audit-based oversight, are much more prone to abuse by those in power—because they can abuse the system without the risk of getting caught. Audit is essential as the NSA increases its domestic spying. And large police databases, like the FBI Next Generation Identification System, need to have strong audit features built in.

For computerized database systems like that—systems entrusted with other people’s information—audit is a very important security mechanism. Hospitals need to keep databases of very personal health information, and doctors and nurses need to be able to access that information quickly and easily. A good audit record of who accessed what when is the best way to ensure that those trusted with our medical information don’t abuse that trust. It’s the same with IRS records, credit reports, police databases, telephone records—anything personal that someone might want to peek at during the course of his job.

Which brings us back to President Obama. In each of those three examples, someone in a position of trust inappropriately accessed personal information. The differences in how they played out are due to differences in audit. The State Department’s audit worked best; they had alarm systems in place that alerted superiors when Obama’s passport files were accessed, and recorded who accessed them. Verizon’s audit mechanisms worked less well; they discovered the inappropriate account access and narrowed the culprits down to a few people. Audit at Immigration and Customs Enforcement was far less effective; they still don’t know who accessed the information.

Large databases filled with personal information, whether managed by governments or corporations, are an essential aspect of the information age. And they each need to be accessed, for legitimate purposes, by thousands or tens of thousands of people. The only way to ensure those people don’t abuse the power they’re entrusted with is through audit. Without it, we will simply never know who’s peeking at what.

This essay first appeared on the Wall Street Journal website.

Posted on December 10, 2008 at 2:21 PM

Government Can Determine Location of Cell Phones without Telco Help

Interesting:

Triggerfish, also known as cell-site simulators or digital analyzers, are nothing new: the technology was used in the 1990s to hunt down renowned hacker Kevin Mitnick. By posing as a cell tower, triggerfish trick nearby cell phones into transmitting their serial numbers, phone numbers, and other data to law enforcement. Most previous descriptions of the technology, however, suggested that because of range limitations, triggerfish were only useful for zeroing in on a phone's precise location once cooperative cell providers had given a general location.

This summer, however, the American Civil Liberties Union and Electronic Frontier Foundation sued the Justice Department, seeking documents related to the FBI's cell-phone tracking practices. Since August, they've received a stream of documents—the most recent batch on November 6—that were posted on the Internet last week. In a post on the progressive blog Daily Kos, ACLU spokesperson Rachel Myers drew attention to language in several of those documents implying that triggerfish have broader application than previously believed.

Posted on November 26, 2008 at 6:06 AM

RIAA Lawsuits May Be Unconstitutional

Harvard law professor Charles Nesson is arguing, in court, that the Digital Theft Deterrence and Copyright Damages Improvement Act of 1999 is unconstitutional:

He makes the argument that the Digital Theft Deterrence and Copyright Damages Improvement Act of 1999 is very much unconstitutional, in that its hefty fines for copyright infringement (misleadingly called “theft” in the title of the bill) show that the bill is effectively a criminal statute, yet for a civil crime. That’s because it really focuses on punitive damages, rather than making private parties whole again. Even worse, it puts the act of enforcing the criminal statute in the hands of a private body (the RIAA) who uses it for profit motive in being able to get hefty fines.

Imagine a statute which, in the name of deterrence, provides for a $750 fine for each mile-per-hour that a driver exceeds the speed limit, with the fine escalating to $150,000 per mile over the limit if the driver knew he or she was speeding. Imagine that the fines are not publicized, and most drivers do not know they exist. Imagine that enforcement of the fines is put in the hands of a private, self-interested police force, that has no political accountability, that can pursue any defendant it chooses at its own whim, that can accept or reject payoffs in exchange for not prosecuting the tickets, and that pockets for itself all payoffs and fines. Imagine that a significant percentage of these fines were never contested, regardless of whether they had merit, because the individuals being fined have limited financial resources and little idea of whether they can prevail in front of an objective judicial body.

Another news story.

Posted on November 19, 2008 at 1:33 PM

U.S. Court Rules that Hashing = Searching

Really interesting post by Orin Kerr on whether, by taking hash values of someone’s hard drive, the police conducted a “search”:

District Court Holds that Running Hash Values on Computer Is A Search: The case is United States v. Crist, 2008 WL 4682806 (M.D.Pa. October 22 2008) (Kane, C.J.). It’s a child pornography case involving a warrantless search that raises a very interesting and important question of first impression: Is running a hash a Fourth Amendment search? (For background on what a “hash” is and why it matters, see here).

First, the facts. Crist is behind on his rent payments, and his landlord starts to evict him by hiring Sell to remove Crist’s belongings and throw them away. Sell comes across Crist’s computer, and he hands over the computer to his friend Hipple who he knows is looking for a computer. Hipple starts to look through the files, and he comes across child pornography: Hipple freaks out and calls the police. The police then conduct a warrantless forensic examination of the computer:

In the forensic examination, Agent Buckwash used the following procedure. First, Agent Buckwash created an “MD5 hash value” of Crist’s hard drive. An MD5 hash value is a unique alphanumeric representation of the data, a sort of “fingerprint” or “digital DNA.” When creating the hash value, Agent Buckwash used a “software write protect” in order to ensure that “nothing can be written to that hard drive.” Supp. Tr. 88. Next, he ran a virus scan, during which he identified three relatively innocuous viruses. After that, he created an “image,” or exact copy, of all the data on Crist’s hard drive.

Agent Buckwash then opened up the image (not the actual hard drive) in a software program called EnCase, which is the principal tool in the analysis. He explained that EnCase does not access the hard drive in the traditional manner, i.e., through the computer’s operating system. Rather, EnCase “reads the hard drive itself.” Supp. Tr. 102. In other words, it reads every file—bit by bit, cluster by cluster—and creates an index of the files contained on the hard drive. EnCase can, therefore, bypass user-defined passwords, “break down complex file structures for examination,” and recover “deleted” files as long as those files have not been written over. Supp. Tr. 102-03.

Once in EnCase, Agent Buckwash ran a “hash value and signature analysis on all of the files on the hard drive.” Supp. Tr. 89. In doing so, he was able to “fingerprint” each file in the computer. Once he generated hash values of the files, he compared those hash values to the hash values of files that are known or suspected to contain child pornography. Agent Buckwash discovered five videos containing known child pornography. Attachment 5. He discovered 171 videos containing suspected child pornography.

One of the interesting questions here is whether the search that resulted was within the scope of Hipple’s private search; different courts have approached this question differently. But for now the most interesting question is whether running the hash was a Fourth Amendment search. The Court concluded that it was, and that the evidence of child pornography discovered had to be suppressed:

The Government argues that no search occurred in running the EnCase program because the agents “didn’t look at any files, they simply accessed the computer.” 2d Supp. Tr. 16. The Court rejects this view and finds that the “running of hash values” is a search protected by the Fourth Amendment.

Computers are composed of many compartments, among them a “hard drive,” which in turn is composed of many “platters,” or disks. To derive the hash values of Crist’s computer, the Government physically removed the hard drive from the computer, created a duplicate image of the hard drive without physically invading it, and applied the EnCase program to each compartment, disk, file, folder, and bit. 2d Supp. Tr. 18-19. By subjecting the entire computer to a hash value analysis—every file, internet history, picture, and “buddy list” became available for Government review. Such examination constitutes a search.

I think this is generally a correct result: See my article Searches and Seizures in a Digital World, 119 Harv. L. Rev. 531 (2005), for the details. Still, given the lack of analysis here it’s somewhat hard to know what to make of the decision. Which stage was the search—the creating of the duplicate? The running of the hash? It’s not really clear. I don’t think it matters very much to this case, because the agent who got the positive hit on the hashes didn’t then get a warrant. Instead, he immediately switched over to the EnCase “gallery view” function to see the images, which seems to me to be undoubtedly a search. Still, it’s a really interesting question.
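For readers who want to see what “running hash values” amounts to in practice, here is a minimal sketch of the general technique. The directory name and hash list are invented, and real forensic tools such as EnCase work on a write-protected image rather than the live filesystem (modern practice also favors SHA-256 over MD5):

```python
import hashlib
from pathlib import Path

# Minimal sketch of hash-value analysis in the style the opinion
# describes: digest every file and compare against digests of known
# files, without ever displaying any file's contents. The hash set
# and directory are invented placeholders.

KNOWN_BAD_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder MD5 values
    "6057f13c496ecf7fd777ceb9e79ae285",
}

def md5_of_file(path: Path) -> str:
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream; don't load whole file
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: Path) -> list[Path]:
    """Return files under `root` whose MD5 matches a known hash."""
    return [p for p in root.rglob("*")
            if p.is_file() and md5_of_file(p) in KNOWN_BAD_HASHES]

if __name__ == "__main__":
    root = Path("./evidence_image")  # hypothetical mounted image
    if root.is_dir():
        for match in scan(root):
            print(match)
```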

Posted on November 5, 2008 at 8:28 AM
