Entries Tagged "courts"


Faulty Data and the Arar Case

Maher Arar is a Syrian-born Canadian citizen. On September 26, 2002, he tried to fly from Switzerland to Toronto. Changing planes in New York, he was detained by the U.S. authorities, and eventually shipped to Syria where he was tortured. He’s 100% innocent. (Background here.)

The Canadian government has completed its “Commission of Inquiry into the Actions of Canadian Officials in Relation to Maher Arar,” the results of which are public. From their press release:

On Maher Arar, the Commissioner comes to one important conclusion: “I am able to say categorically that there is no evidence to indicate that Mr. Arar has committed any offence or that his activities constitute a threat to the security of Canada.”

Certainly something that everyone who supports the U.S.’s right to detain and torture people without having to demonstrate their guilt should think about. But what’s more interesting to readers of this blog is the role that inaccurate data played in the deportation and ultimately torture of an innocent man.

Privacy International summarizes the report. These are among their bullet points:

  • The RCMP provided the U.S. with an entire database of information relating to a terrorism investigation (three CDs of information), in a way that did not comply with RCMP policies that require screening for relevance, reliability, and personal information. In fact, this action was without precedent.
  • The RCMP provided the U.S. with inaccurate information about Arar that portrayed him in an unfairly negative fashion and overstated his importance to an RCMP investigation. They included some “erroneous notes.”
  • While he was detained in the U.S., the RCMP provided information about him to the U.S. Federal Bureau of Investigation (FBI), “some of which portrayed him in an inaccurate and unfair way.” The RCMP gave the U.S. authorities inaccurate information that tended to link Arar to other terrorist suspects; incorrectly told them that Arar had previously refused to be interviewed; and added that soon after refusing the interview he suddenly left Canada for Tunisia. “The statement about the refusal to be interviewed had the potential to arouse suspicion, especially among law enforcement officers, that Mr. Arar had something to hide.” The RCMP’s information to the U.S. authorities also placed Arar in the vicinity of Washington DC on September 11, 2001, when he was actually in California.

Judicial oversight is a security mechanism. It prevents the police from incarcerating the wrong person. The point of habeas corpus is that the police need to present their evidence in front of a neutral third party, and not indefinitely detain or torture people just because they believe they’re guilty. We are all less secure if we water down these security measures.

Posted on September 29, 2006 at 7:06 AM

FairUse4WM News

A couple of weeks ago I wrote about the battle between Microsoft’s DRM system and FairUse4WM, which breaks it. The news for this week is that Microsoft has patched its security against FairUse4WM 1.2 and filed a lawsuit against the program’s anonymous authors, and those same anonymous authors have released FairUse4WM 1.3, which breaks the latest Microsoft patch.

We asked Viodentia about Redmond’s accusation that he and/or his associates broke into its systems in order to obtain the IP necessary to crack PlaysForSure; Vio replied that he’s “utterly shocked” by the charge. “I didn’t use any Microsoft source code. However, I believe that this lawsuit is a fishing expedition to get identity information, which can then be used to either bring more targeted lawsuits, or to cause other trouble.” We’re sure Microsoft would like its partners and the public to think that its DRM is generally infallible and could only be cracked by stealing its IP, so Viodentia’s conclusion about its legal tactics seems pretty fair, obvious, and logical to us.

What’s interesting about this continuing saga is how different it is from the normal find-vulnerability-then-patch sequence. The authors of FairUse4WM aren’t finding bugs and figuring out how to exploit them, forcing Microsoft to patch them. This is a sequence of crack, fix, re-crack, re-fix, etc.

The reason we’re seeing this—and this is going to be the norm for DRM systems—is that DRM is fundamentally an impossible problem. Making it work at all involves tricks, and breaking DRM is akin to “fixing” the software so the tricks don’t work. Anyone looking for a demonstration that technical DRM is doomed should watch this story unfold. (If Microsoft has any chance of winning at all, it’s via the legal route.)

Posted on September 28, 2006 at 12:55 PM

Scorecard from the War on Terror

This is absolutely essential reading for anyone interested in how the U.S. is prosecuting terrorism. Put aside the rhetoric and the posturing; this is what is actually happening.

Among the key findings about the year-by-year enforcement trends in the period were the following:

  • In the twelve months immediately after 9/11, the prosecution of individuals the government classified as international terrorists surged sharply higher than in the previous year. But timely data show that five years later, in the latest available period, the total number of these prosecutions has returned to roughly what it was just before the attacks. Given the widely accepted belief that the threat of terrorism in all parts of the world is much larger today than it was six or seven years ago, the extent of the recent decline in prosecutions is unexpected. See Figure 1 and supporting table.
  • Federal prosecutors by law and custom are authorized to decline cases that are brought to them for prosecution by the investigative agencies. And over the years the prosecutors have used this power to weed out matters that for one reason or another they felt should be dropped. For international terrorism the declination rate has been high, especially in recent years. In fact, timely data show that in the first eight months of FY 2006 the assistant U.S. Attorneys rejected slightly more than nine out of ten of the referrals. Given the assumption that the investigation of international terrorism must be the single most important target area for the FBI and other agencies, the turn-down rate is hard to understand. See Figure 2 and supporting table.
  • The typical sentences recently imposed on individuals considered to be international terrorists are not impressive. For all those convicted as a result of cases initiated in the two years after 9/11, for example, the median sentence—half got more and half got less—was 28 days. For those referrals that came in more recently—through May 31, 2006—the median sentence was 20 days. For cases started in the two-year period before the 9/11 attack, the typical sentence was much longer, 41 months. See Figure 3.

Transactional Records Access Clearinghouse (TRAC) puts this data together by looking at Justice Department records. The data research organization is connected to Syracuse University, and has been doing this sort of thing—tracking what federal agencies actually do rather than what they say they do—for over fifteen years.

I am particularly entertained by the Justice Department’s rebuttal, which basically just calls the study names without offering any substantive criticism:

The Justice Department took issue with the study’s methodology and its conclusions.

The study “ignores the reality of how the war on terrorism is prosecuted in federal courts across the country and the value of early disruption of potential terrorist acts by proactive prosecution,” said Bryan Sierra, a Justice Department spokesman.

“The report presents misleading analysis of Department of Justice statistics to suggest the threat of terrorism may be inaccurate or exaggerated. The Department of Justice disagrees with this suggestion.”

How do I explain it? Most “terrorism” arrests are not for actual terrorism; they’re for other things. The cases are either thrown out for lack of evidence, or the penalties are more in line with the actual crimes. I don’t care what anyone from the Justice Department says: someone who is jailed for four weeks did not commit a terrorist act.

Posted on September 5, 2006 at 6:04 AM

Random Bag Searches in Subways

Last year, New York City implemented a program of random bag searches in the subways. It was a silly idea, and I wrote about it then. Recently the U.S. Court of Appeals for the 2nd Circuit upheld the program. Daniel Solove wrote about the ruling:

The 2nd Circuit panel concluded that the program was “reasonable” under the 4th Amendment’s special needs doctrine. Under the special needs doctrine, if there are exceptional circumstances that make the warrant and probable cause requirements unnecessary, then the search should be analyzed in terms of whether it is “reasonable.” Reasonableness is determined by balancing privacy against the government’s need. The problem with the 2nd Circuit decision is that under its reasoning, nearly any search, no matter how intrusive into privacy, would be justified. This is because of the way it assesses the government’s side of the balance. When the government’s interest is preventing the detonation of a bomb on a crowded subway, with the potential of mass casualties, it is hard for anything to survive when balanced against it.

The key to the analysis should be the extent to which the search program will effectively improve subway safety. In other words, the goals of the program may be quite laudable, but nobody questions the importance of subway safety. Its weight is so hefty that little can outweigh it. The important issue is whether the search program is a sufficiently effective way of achieving those goals that it is worth the trade-off in civil liberties. On this question, unfortunately, the 2nd Circuit punts. It defers to the law enforcement officials:

That decision is best left to those with “a unique understanding of, and responsibility for, limited public resources, including a finite number of police officers.” Accordingly, we ought not conduct a “searching examination of effectiveness.” Instead, we need only determine whether the Program is “a reasonably effective means of addressing” the government interest in deterring and detecting a terrorist attack on the subway system…

Instead, plaintiffs claim that the Program can have no meaningful deterrent effect because the NYPD employs too few checkpoints. In support of that claim, plaintiffs rely upon various statistical manipulations of the sealed checkpoint data.

We will not peruse, parse, or extrapolate four months’ worth of data in an attempt to divine how many checkpoints the City ought to deploy in the exercise of its day-to-day police power. Counterterrorism experts and politically accountable officials have undertaken the delicate and esoteric task of deciding how best to marshal their available resources in light of the conditions prevailing on any given day. We will not and may not second-guess the minutiae of their considered decisions. (internal citations omitted)

Although courts should not take a “know-it-all” attitude, they must not defer on such a critical question. The problem with many security measures is that they are not a very wise expenditure of resources. It is costly to have a lot of police officers engage in these random searches when they could be doing other things or the money could be spent on other measures. A very small number of random searches in a subway system with over 4 million riders a day seems more symbolic than effective. If courts don’t question the efficacy of security measures in the name of terrorism, then law enforcement officials will win nearly all the time. The government just needs to come into court and say “terrorism” and little else will matter.

Posted on August 16, 2006 at 3:32 PM

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.
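Here’s a minimal sketch of that first, easy-to-spot case. It is not Google’s actual system, whose methods are proprietary; the threshold and window are purely illustrative. The idea: flag any IP address that clicks the same ad too many times in a short interval.

    from collections import defaultdict

    # Naive click-fraud filter: flag an IP that clicks the same ad more
    # than THRESHOLD times within WINDOW seconds. This only catches the
    # simple case; fraudsters who rotate IP addresses or use Trojaned
    # machines defeat it -- which is exactly the point.
    THRESHOLD = 5
    WINDOW = 60.0  # seconds

    clicks = defaultdict(list)  # (ip, ad_id) -> recent click timestamps

    def record_click(ip: str, ad_id: str, now: float) -> bool:
        """Record a click; return True if it looks fraudulent."""
        key = (ip, ad_id)
        # Keep only clicks inside the sliding window, then add this one.
        clicks[key] = [t for t in clicks[key] if now - t < WINDOW]
        clicks[key].append(now)
        return len(clicks[key]) > THRESHOLD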

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: It’s in Google’s interest both to solve and to downplay the importance of the problem.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and obviates the benefits of internet testing.
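A CAPTCHA is just a challenge-response protocol in which the challenge is supposed to be easy for humans and hard for software. Here’s a toy sketch of that shape; a real CAPTCHA renders the string as a distorted image precisely so that OCR software can’t read it, which this sketch skips.

    import random
    import string

    # Toy CAPTCHA-style challenge: the visitor must type the string back.
    # Real CAPTCHAs present the string as a distorted image; this sketch
    # only shows the challenge-response structure of the idea.
    def make_challenge(length: int = 6) -> str:
        return "".join(random.choices(string.ascii_uppercase, k=length))

    def check_response(challenge: str, response: str) -> bool:
        return response.strip().upper() == challenge

    challenge = make_challenge()
    print("Type these letters:", challenge)
    # e.g.: print(check_response(challenge, input()))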

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.
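A toy sketch of why that rule change works; the names and prices here are invented for illustration. Under cost-per-click, a bot’s clicks bill the advertiser directly; under cost-per-action, a click bills nothing until a verifiable conversion follows.

    from dataclasses import dataclass

    # Illustrative comparison of the two billing models.
    @dataclass
    class Campaign:
        price_per_click: float = 0.50
        price_per_action: float = 5.00
        cpc_owed: float = 0.0
        cpa_owed: float = 0.0

        def on_click(self):
            self.cpc_owed += self.price_per_click   # every click bills, real or fake

        def on_action(self):
            self.cpa_owed += self.price_per_action  # only completed actions bill

    c = Campaign()
    for _ in range(1000):  # a click-fraud bot fires 1,000 fake clicks
        c.on_click()
    print(c.cpc_owed)  # 500.0 -- the CPC advertiser pays for the fraud
    print(c.cpa_owed)  # 0.0   -- the CPA advertiser pays nothing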

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM

Unreliable Programming

One response to software liability:

Now suppose that there was a magical wand for taking snapshots of computer states just before crashes. Or that the legal system would permit claims on grounds of only the second part of the proof. Then there would be a strong positive incentive to write software that fails unreproducibly: “If our software’s errors cannot be demonstrated reliably in court, we will never lose money in product liability cases.”

Follow the link for examples.
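As a toy illustration of how little it takes to fail unreproducibly, here’s a Python fragment that depends on per-process hash randomization: the iteration order of a set of strings can differ from run to run, so the assertion fails on some runs and passes on others.

    # Toy unreproducible failure: since Python 3.3, string hashes are
    # randomized per process, so the iteration order of a set of strings
    # varies between runs. A bug that depends on "the first element"
    # crashes on some runs and passes on others -- exactly the kind of
    # failure that is hard to demonstrate on demand.
    servers = {"alpha", "bravo", "charlie"}

    primary = next(iter(servers))  # which one? depends on this run's hash seed
    assert primary == "alpha", f"wrong primary: {primary}"  # fails on some runs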

Posted on July 11, 2006 at 7:47 AM

Yet Another Redacting Failure

This sort of thing happens so often it’s no longer news:

Conte’s e-mails were intended to be blacked out in a 51-page electronic filing Wednesday in which the government argued against the Chronicle’s motion to quash the subpoena. Eight of those pages were not supposed to be public.

But the redacted parts in the computer file could be seen by copying them and pasting the material in a word processing program.
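The technical failure is always the same: the black rectangles are drawn over the text, but the text itself is still in the document. Any text extractor recovers it; copy-and-paste is just the lowest-tech extractor. A minimal sketch, assuming a hypothetical badly-redacted.pdf and using the pypdf library (one of several that would work):

    # The "redaction" is a black rectangle drawn over the text; the text
    # itself remains in the PDF's content stream, so extraction sees it.
    # "badly-redacted.pdf" is a hypothetical file name.
    from pypdf import PdfReader

    reader = PdfReader("badly-redacted.pdf")
    for page in reader.pages:
        print(page.extract_text())  # prints the "blacked-out" words too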

Another news article here.

Posted on June 26, 2006 at 12:29 PM

Lying to Government Agents

“How to Avoid Going to Jail under 18 U.S.C. Section 1001 for Lying to Government Agents”

Title 18, United States Code, Section 1001 makes it a crime to: 1) knowingly and willfully; 2) make any materially false, fictitious or fraudulent statement or representation; 3) in any matter within the jurisdiction of the executive, legislative or judicial branch of the United States. Your lie does not even have to be made directly to an employee of the national government as long as it is “within the jurisdiction” of the ever expanding federal bureaucracy. Though the falsehood must be “material” this requirement is met if the statement has the “natural tendency to influence or [is] capable of influencing, the decision of the decisionmaking body to which it is addressed.” United States v. Gaudin, 515 U.S. 506, 510 (1995). (In other words, it is not necessary to show that your particular lie ever really influenced anyone.) Although you must know that your statement is false at the time you make it in order to be guilty of this crime, you do not have to know that lying to the government is a crime or even that the matter you are lying about is “within the jurisdiction” of a government agency. United States v. Yermian, 468 U.S. 63, 69 (1984). For example, if you lie to your employer on your time and attendance records and, unbeknownst to you, he submits your records, along with those of other employees, to the federal government pursuant to some regulatory duty, you could be criminally liable.

Posted on June 5, 2006 at 1:24 PM

Man Sues Compaq for False Advertising

Convicted felon Michael Crooker is suing Compaq (now HP) for false advertising. He bought a computer promised to be secure, but the FBI got his data anyway:

He bought it in September 2002, expressly because it had a feature called DriveLock, which freezes up the hard drive if you don’t have the proper password.

The computer’s manual claims that “if one were to lose his Master Password and his User Password, then the hard drive is useless and the data cannot be resurrected even by Compaq’s headquarters staff,” Crooker wrote in the suit.

Crooker has a copy of an ATF search warrant for files on the computer, which includes a handwritten notation: “Computer lock not able to be broken/disabled. Computer forwarded to FBI lab.” Crooker says he refused to give investigators the password, and was told the computer would be broken into “through a backdoor provided by Compaq,” which is now part of HP.

It’s unclear what was done with the laptop, but Crooker says a subsequent search warrant for his e-mail account, issued in January 2005, showed investigators had somehow gained access to his 40 gigabyte hard drive. The FBI had broken through DriveLock and accessed his e-mails (both deleted and not) as well as lists of websites he’d visited and other information. The only files they couldn’t read were ones he’d encrypted using Wexcrypt, a software program freely available on the Internet.

I think this is great. It’s about time that computer companies were held liable for their advertising claims.

But his lawsuit against HP may be a long shot. Crooker appears to face strong counterarguments to his claim that HP is guilty of breach of contract, especially if the FBI made the company provide a backdoor.

“If they had a warrant, then I don’t see how his case has any merit at all,” said Steven Certilman, a Stamford attorney who heads the Technology Law section of the Connecticut Bar Association. “Whatever means they used, if it’s covered by the warrant, it’s legitimate.”

If HP claimed DriveLock was unbreakable when the company knew it was not, that might be a kind of false advertising.

But while documents on HP’s web site do claim that without the correct passwords, a DriveLock’ed hard drive is “permanently unusable,” such warnings may not constitute actual legal guarantees.

According to Certilman and other computer security experts, hardware and software makers are careful not to make themselves liable for the performance of their products.

“I haven’t heard of manufacturers, at least for the consumer market, making a promise of computer security. Usually you buy naked hardware and you’re on your own,” Certilman said. In general, computer warranties are “limited only to replacement and repair of the component, and not to incidental consequential damages such as the exposure of the underlying data to snooping third parties,” he said. “So I would be quite surprised if there were a gaping hole in their warranty that would allow that kind of claim.”

That point meets with agreement from the noted computer security skeptic Bruce Schneier, the chief technology officer at Counterpane Internet Security in Mountain View, Calif.

“I mean, the computer industry promises nothing,” he said last week. “Did you ever read a shrink-wrapped license agreement? You should read one. It basically says, if this product deliberately kills your children, and we knew it would, and we decided not to tell you because it might harm sales, we’re not liable. I mean, it says stuff like that. They’re absurd documents. You have no rights.”

My final quote in the article:

“Unfortunately, this probably isn’t a great case,” Schneier said. “Here’s a man who’s not going to get much sympathy. You want a defendant who bought the Compaq computer, and then, you know, his competitor, or a rogue employee, or someone who broke into his office, got the data. That’s a much more sympathetic defendant.”

Posted on May 3, 2006 at 9:26 AM

Da Vinci Code Ruling Code

There is a code embedded in the ruling in The Da Vinci Code plagiarism case.

You can find it by searching for the characters in italic and boldface scattered throughout the ruling. The first characters spell out “SMITHCODE”: that’s the name of the judge who wrote the ruling. The rest remains unsolved.

According to The Times, the remaining letters are: J, a, e, i, e, x, t, o, s, t, p, s, a, c, g, r, e, a, m, q, w, f, k, a, d, p, m, q, z.

According to The Register, the remaining letters are: j a e i e x t o s t g p s a c g r e a m q w f k a d p m q z v.

According to one of my readers, who says he “may have missed some letters,” it’s: SMITHYCODEJAEIEXTOSTGPSACGREAMQWFKADPMQZV.

I think a bunch of us need to check for ourselves, and then compare notes.

And then we have to start working on solving the thing.

From the BBC:

Although he would not be drawn on his code and its meaning, Mr Justice Smith said he would probably confirm it if someone cracked it, which was “not a difficult thing to do”.

As an aside, I am mentioned in The Da Vinci Code. No, really. Page 199 of the American hardcover edition. “Da Vinci had been a cryptography pioneer, Sophie knew, although he was seldom given credit. Sophie’s university instructors, while presenting computer encryption methods for securing data, praised modern cryptologists like Zimmermann and Schneier but failed to mention that it was Leonardo who had invented one of the first rudimentary forms of public key encryption centuries ago.”

That’s right. I am a realistic background detail.

EDITED TO ADD (4/28): The code is broken. Details are in The New York Times:

Among Justice Smith’s hints, he told decoders to look at page 255 in the British paperback edition of “The Da Vinci Code,” where the protagonists discuss the Fibonacci Sequence, a famous numerical series in which each number is the sum of the two preceding ones. Omitting the zero, as Dan Brown, “The Da Vinci Code” author, does, the series begins 1, 1, 2, 3, 5, 8, 13, 21.

Solving the judge’s code requires repeatedly applying the Fibonacci Sequence, through the number 21, to the apparently random coded letters that appear in boldfaced italics in the text of his ruling: JAEIEXTOSTGPSACGREAMQWFKADPMQZVZ.

For example, the fourth letter of the coded message is I. The fourth number of the Fibonacci Sequence, as used in “The Da Vinci Code,” is 3. Therefore, decoding the I requires an alphabet that starts at the third letter of the regular alphabet, C. I is the ninth letter regularly; the ninth letter of the alphabet starting with C is K; thus, the I in the coded message stands for the letter K.

The judge inserted two twists to confound codebreakers. One is a typographical error: a letter that should have been an H in both the coded message and its translation is instead a T. The other is drawn from “Holy Blood, Holy Grail,” the other book in the copyright case. It concerns the number 2 in the Fibonacci series, which becomes a requirement to count two letters back in the regular alphabet rather than a signal to use an alphabet that begins with B. For instance, the first E in the coded message, which corresponds to a 2 in the Fibonacci series, becomes a C in the answer.

The message reads: “Jackie Fisher who are you Dreadnought.”
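The rules are mechanical enough to check in a few lines of code. Here’s a minimal sketch; the ciphertext is as printed in the Times, and the judge’s deliberate T-for-H typo is left uncorrected in the output.

    # Decode the judge's code per the rules described above.
    CIPHER = "JAEIEXTOSTGPSACGREAMQWFKADPMQZVZ"
    FIB = [1, 1, 2, 3, 5, 8, 13, 21]  # Fibonacci, zero omitted, through 21

    def decode(cipher: str) -> str:
        out = []
        for i, c in enumerate(cipher):
            f = FIB[i % len(FIB)]
            p = ord(c) - ord("A")  # 0-indexed position of the letter
            if f == 2:
                # The judge's twist: 2 means "count two letters back,"
                # not "use an alphabet starting at B."
                out.append(chr((p - 2) % 26 + ord("A")))
            else:
                # Use an alphabet starting at the f-th letter: shift by f - 1.
                out.append(chr((p + f - 1) % 26 + ord("A")))
        return "".join(out)

    print(decode(CIPHER))  # JACKIEFISTERWHOAREYOUDREADNOUGHT
    # "FISTER" contains the deliberate T-for-H typo: read "FISHER".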

I’m disappointed, actually. That was a whopper of a hint, and I would have preferred the judge to keep quiet.

EDITED TO ADD (5/8): Commentary on my name being in The Da Vinci Code.

Posted on April 27, 2006 at 6:47 PM

