Entries Tagged "courts"


Hacking Indictment

It’s been a while since I’ve seen one of these sorts of news stories:

A Romanian man has been indicted on charges of hacking into more than 150 U.S. government computers, causing disruptions that cost NASA, the Energy Department and the Navy nearly $1.5 million.

The federal indictment charged Victor Faur, 26, of Arad, Romania, with nine counts of computer intrusion and one count of conspiracy. He faces up to 54 years in prison if convicted of all counts, said Thom Mrozek, spokesman for the U.S. Attorney’s office, on Thursday.

Faur was being prosecuted by authorities in Romania on separate computer hacking charges, Mrozek said, and will be brought to Los Angeles upon resolution of that case. It was not known whether Faur had retained a lawyer in the United States.

Posted on December 4, 2006 at 12:48 PM

The Zotob Worm and the DHS

On August 18 of last year, the Zotob worm badly infected computers at the Department of Homeland Security, particularly the 1,300 workstations running the US-VISIT application at border crossings. Wired News filed a Freedom of Information Act request for details, which was denied.

After we sued, CBP released three internal documents, totaling five pages, and a copy of Microsoft’s security bulletin on the plug-and-play vulnerability. Though heavily redacted, the documents were enough to establish that Zotob had infiltrated US-VISIT after CBP made the strategic decision to leave the workstations unpatched. Virtually every other detail was blacked out. In the ensuing court proceedings, CBP claimed the redactions were necessary to protect the security of its computers, and acknowledged it had an additional 12 documents, totaling hundreds of pages, which it withheld entirely on the same grounds.

U.S. District Judge Susan Illston reviewed all the documents in chambers, and ordered an additional four documents to be released last month. The court also directed DHS to reveal much of what it had previously hidden beneath thick black pen strokes in the original five pages.

“Although defendant repeatedly asserts that this information would render the CBP computer system vulnerable, defendant has not articulated how this general information would do so,” Illston wrote in her ruling (emphasis is Illston’s).

The released details reveal nothing about the technical workings of the computer systems; they only point to the DHS’s incompetence in handling the incident.

Details are in the Wired News article.

Posted on November 6, 2006 at 12:11 PM

Faulty Data and the Arar Case

Maher Arar is a Syrian-born Canadian citizen. On September 26, 2002, he tried to fly from Switzerland to Toronto. Changing planes in New York, he was detained by the U.S. authorities, and eventually shipped to Syria where he was tortured. He’s 100% innocent. (Background here.)

The Canadian government has completed its “Commission of Inquiry into the Actions of Canadian Officials in Relation to Maher Arar,” the results of which are public. From their press release:

On Maher Arar, the Commissioner comes to one important conclusion: “I am able to say categorically that there is no evidence to indicate that Mr. Arar has committed any offence or that his activities constitute a threat to the security of Canada.”

Certainly something that everyone who supports the U.S.’s right to detain and torture people without having to demonstrate their guilt should think about. But what’s more interesting to readers of this blog is the role that inaccurate data played in the deportation and ultimately torture of an innocent man.

Privacy International summarizes the report. These are among their bullet points:

  • The RCMP provided the U.S. with an entire database of information relating to a terrorism investigation (three CDs of information), in a way that did not comply with RCMP policies that require screening for relevance, reliability, and personal information. In fact, this action was without precedent.
  • The RCMP provided the U.S. with inaccurate information about Arar that portrayed him in an unfairly negative fashion and overstated his importance to an RCMP investigation. They included some “erroneous notes.”
  • While he was detained in the U.S., the RCMP provided information about him to the U.S. Federal Bureau of Investigation (FBI), “some of which portrayed him in an inaccurate and unfair way.” The RCMP gave the U.S. authorities inaccurate information that tended to link Arar to other terrorist suspects, told them that Arar had previously refused to be interviewed (which was also incorrect), and said that soon after refusing the interview he suddenly left Canada for Tunisia. “The statement about the refusal to be interviewed had the potential to arouse suspicion, especially among law enforcement officers, that Mr. Arar had something to hide.” The RCMP’s information also placed Arar in the vicinity of Washington, DC, on September 11, 2001, when he was in fact in California.

Judicial oversight is a security mechanism. It prevents the police from incarcerating the wrong person. The point of habeas corpus is that the police need to present their evidence in front of a neutral third party, and not indefinitely detain or torture people just because they believe they’re guilty. We are all less secure if we water down these security measures.

Posted on September 29, 2006 at 7:06 AM

FairUse4WM News

A couple of weeks ago I wrote about the battle between Microsoft’s DRM system and FairUse4WM, which breaks it. The news this week is that Microsoft has patched its security against FairUse4WM 1.2 and filed a lawsuit against the program’s anonymous authors, and those same authors have released FairUse4WM 1.3, which breaks the latest Microsoft patch.

We asked Viodentia about Redmond’s accusation that he and/or his associates broke into its systems in order to obtain the IP necessary to crack PlaysForSure; Vio replied that he’s “utterly shocked” by the charge. “I didn’t use any Microsoft source code. However, I believe that this lawsuit is a fishing expedition to get identity information, which can then be used to either bring more targeted lawsuits, or to cause other trouble.” We’re sure Microsoft would like its partners and the public to think that its DRM is generally infallible and could only be cracked by stealing its IP, so Viodentia’s conclusion about its legal tactics seems pretty fair, obvious, and logical to us.

What’s interesting about this continuing saga is how different it is from the normal find-vulnerability-then-patch sequence. The authors of FairUse4WM aren’t finding bugs and figuring out how to exploit them, forcing Microsoft to patch them. This is a sequence of crack, fix, re-crack, re-fix, etc.

The reason we’re seeing this—and this is going to be the norm for DRM systems—is that DRM is fundamentally an impossible problem. Making it work at all involves tricks, and breaking DRM is akin to “fixing” the software so the tricks don’t work. Anyone looking for a demonstration that technical DRM is doomed should watch this story unfold. (If Microsoft has any chance of winning at all, it’s via the legal route.)

Posted on September 28, 2006 at 12:55 PM

Scorecard from the War on Terror

This is absolutely essential reading for anyone interested in how the U.S. is prosecuting terrorism. Put aside the rhetoric and the posturing; this is what is actually happening.

Among the key findings about the year-by-year enforcement trends in the period were the following:

  • In the twelve months immediately after 9/11, the prosecution of individuals the government classified as international terrorists surged sharply higher than in the previous year. But timely data show that five years later, in the latest available period, the total number of these prosecutions has returned to roughly what it was just before the attacks. Given the widely accepted belief that the threat of terrorism in all parts of the world is much larger today than it was six or seven years ago, the extent of the recent decline in prosecutions is unexpected. See Figure 1 and supporting table.
  • Federal prosecutors by law and custom are authorized to decline cases that are brought to them for prosecution by the investigative agencies. And over the years the prosecutors have used this power to weed out matters that for one reason or another they felt should be dropped. For international terrorism the declination rate has been high, especially in recent years. In fact, timely data show that in the first eight months of FY 2006 the assistant U.S. Attorneys rejected slightly more than nine out of ten of the referrals. Given the assumption that the investigation of international terrorism must be the single most important target area for the FBI and other agencies, the turn-down rate is hard to understand. See Figure 2 and supporting table.
  • The typical sentences recently imposed on individuals considered to be international terrorists are not impressive. For all those convicted as a result of cases initiated in the two years after 9/11, for example, the median sentence—half got more and half got less—was 28 days. For those referrals that came in more recently—through May 31, 2006—the median sentence was 20 days. For cases started in the two-year period before the 9/11 attack, the typical sentence was much longer, 41 months. See Figure 3.

Transactional Records Access Clearinghouse (TRAC) puts this data together by looking at Justice Department records. The data research organization is connected to Syracuse University, and has been doing this sort of thing—tracking what federal agencies actually do rather than what they say they do—for over fifteen years.

I am particularly entertained by the Justice Department’s rebuttal, which basically just calls the study names without offering any substantive criticism:

The Justice Department took issue with the study’s methodology and its conclusions.

The study “ignores the reality of how the war on terrorism is prosecuted in federal courts across the country and the value of early disruption of potential terrorist acts by proactive prosecution,” said Bryan Sierra, a Justice Department spokesman.

“The report presents misleading analysis of Department of Justice statistics to suggest the threat of terrorism may be inaccurate or exaggerated. The Department of Justice disagrees with this suggestion.”

How do I explain it? Most “terrorism” arrests are not for actual terrorism; they’re for other things. The cases are either thrown out for lack of evidence, or the penalties are more in line with the actual crimes. I don’t care what anyone from the Justice Department says: someone who is jailed for four weeks did not commit a terrorist act.

Posted on September 5, 2006 at 6:04 AM

Random Bag Searches in Subways

Last year, New York City implemented a program of random bag searches in the subways. It was a silly idea, and I wrote about it then. Recently the U.S. Court of Appeals for the 2nd Circuit upheld the program. Daniel Solove wrote about the ruling:

The 2nd Circuit panel concluded that the program was “reasonable” under the 4th Amendment’s special needs doctrine. Under the special needs doctrine, if there are exceptional circumstances that make the warrant and probable cause requirements unnecessary, then the search should be analyzed in terms of whether it is “reasonable.” Reasonableness is determined by balancing privacy against the government’s need. The problem with the 2nd Circuit decision is that under its reasoning, nearly any search, no matter how intrusive into privacy, would be justified. This is because of the way it assesses the government’s side of the balance. When the government’s interest is preventing the detonation of a bomb on a crowded subway, with the potential of mass casualties, it is hard for anything to survive when balanced against it.

The key to the analysis should be the extent to which the search program will effectively improve subway safety. In other words, the goals of the program may be quite laudable; nobody questions the importance of subway safety. Its weight is so hefty that little can outweigh it. The important issue is whether the search program is a sufficiently effective way of achieving those goals that it is worth the trade-off in civil liberties. On this question, unfortunately, the 2nd Circuit punts. It defers to the law enforcement officials:

That decision is best left to those with “a unique understanding of, and responsibility for, limited public resources, including a finite number of police officers.” Accordingly, we ought not conduct a “searching examination of effectiveness.” Instead, we need only determine whether the Program is “a reasonably effective means of addressing” the government interest in deterring and detecting a terrorist attack on the subway system…

Instead, plaintiffs claim that the Program can have no meaningful deterrent effect because the NYPD employs too few checkpoints. In support of that claim, plaintiffs rely upon various statistical manipulations of the sealed checkpoint data.

We will not peruse, parse, or extrapolate four months’ worth of data in an attempt to divine how many checkpoints the City ought to deploy in the exercise of its day-to-day police power. Counterterrorism experts and politically accountable officials have undertaken the delicate and esoteric task of deciding how best to marshal their available resources in light of the conditions prevailing on any given day. We will not and may not second-guess the minutiae of their considered decisions. (internal citations omitted)

Although courts should not take a “know it all” attitude, they must not defer on such a critical question. The problem with many security measures is that they are not a very wise expenditure of resources. It is costly to have a lot of police officers engage in these random searches when they could be doing other things or money could be spent on other measures. A very small number of random searches in a subway system of over 4 million riders a day seems more symbolic than effective. If courts don’t question the efficacy of security measures in the name of terrorism, then it allows law enforcement officials to win nearly all the time. The government just needs to come into court and say “terrorism” and little else will matter.

Posted on August 16, 2006 at 3:32 PM

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.
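To make concrete why the naive version is easy to spot, here is a minimal sketch of the kind of per-source rate check an ad network might run. This is a hypothetical illustration, not Google’s actual detection logic; the threshold, window, and event format are all invented for the example:

    from collections import defaultdict

    # Hypothetical rate-based click filter -- an illustration of the idea,
    # not Google's real fraud-detection pipeline.
    MAX_CLICKS = 5          # clicks allowed per source IP in the window
    WINDOW_SECONDS = 3600   # sliding-window size, in seconds

    def filter_clicks(clicks):
        """clicks: iterable of (timestamp, source_ip, ad_id) tuples,
        assumed sorted by timestamp. Returns only the billable clicks."""
        history = defaultdict(list)   # source_ip -> recent click times
        billable = []
        for ts, ip, ad_id in clicks:
            # Forget clicks that have aged out of the window.
            history[ip] = [t for t in history[ip] if ts - t < WINDOW_SECONDS]
            history[ip].append(ts)
            if len(history[ip]) <= MAX_CLICKS:
                billable.append((ts, ip, ad_id))
            # Over the threshold: treat as likely fraud and don't bill.
        return billable

A fraudster who rotates through thousands of spoofed addresses or compromised machines never trips a per-IP threshold like this, which is why the arms race quickly moves on to subtler signals.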

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: it’s in Google’s interest both to solve the problem and to downplay its importance.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.
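The gap is easy to see in code. Here is a minimal sketch of session-based authentication, with all names hypothetical and the credential check reduced to a toy: the password is verified once, a token is issued, and every later request authenticates the token rather than the person at the keyboard:

    import secrets

    # Toy session-based login (hypothetical, heavily simplified).
    # It proves that whoever STARTED the session knew the password --
    # not that the same person is still sitting at the keyboard.
    USERS = {"alice": "correct horse battery staple"}   # toy credential store
    SESSIONS = {}                                       # token -> username

    def login(username, password):
        if USERS.get(username) == password:
            token = secrets.token_hex(16)
            SESSIONS[token] = username
            return token
        return None

    def handle_request(token):
        # If Alice walks away, or a Trojan replays her token, the server
        # cannot tell the difference: the token checks out either way.
        user = SESSIONS.get(token)
        return f"hello {user}" if user else "access denied"

Stronger factors—tokens, biometrics—change what the login step proves, not what every request after it proves.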

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and obviates the benefits of internet testing.
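For the CAPTCHA case, a toy version of the server-side bookkeeping makes the limits plain. This sketch is purely illustrative; real CAPTCHAs render the challenge as a distorted image rather than plain text:

    import random
    import secrets

    # Toy CAPTCHA round trip (illustration only).
    PENDING = {}   # challenge_id -> expected answer

    def issue_challenge():
        a, b = random.randint(1, 9), random.randint(1, 9)
        cid = secrets.token_hex(8)
        PENDING[cid] = str(a + b)
        return cid, f"What is {a} + {b}?"   # ideally rendered as an image

    def verify(cid, response):
        # Delete on use, so a solved challenge can't be replayed by a bot.
        return PENDING.pop(cid, None) == response.strip()

Note what a passing check actually establishes: that something produced the right answer, not that a human did. Outsourcing the challenge to a human solving farm, or to good-enough OCR, defeats it completely.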

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.
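In billing terms the change is small but decisive. A toy comparison, with invented event formats and prices: under cost-per-click the advertiser pays on the click event itself, which a fraudster controls; under cost-per-action the charge waits for a conversion, which a fraudster can’t produce without actually completing the action:

    # Toy comparison of the two billing models (hypothetical, simplified).

    def bill_cpc(events, price_per_click):
        # Cost-per-click: every click costs money, fraudulent or not.
        return sum(price_per_click for e in events if e["type"] == "click")

    def bill_cpa(events, price_per_action):
        # Cost-per-action: only completed actions (a purchase, a filled-out
        # survey) are billable, so fake clicks cost the advertiser nothing.
        return sum(price_per_action for e in events if e["type"] == "action")

    events = [{"type": "click"}] * 1000 + [{"type": "action"}] * 3
    print(bill_cpc(events, 0.50))    # 500.0 -- inflated by the fake clicks
    print(bill_cpa(events, 20.00))   # 60.0  -- unaffected by them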

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM

Unreliable Programming

One response to software liability:

Now suppose that there was a magical wand for taking snapshots of computer states just before crashes. Or that the legal system would permit claims on grounds of only the second part of the proof. Then there would be a strong positive incentive to write software that fails unreproducibly: “If our software’s errors cannot be demonstrated reliably in court, we will never lose money in product liability cases.”

Follow the link for examples.
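As a contrived sketch of what deliberately (or accidentally) unreproducible failure looks like—this example is mine, not from the linked piece—a bug gated on thread timing manifests differently on every run, and may vanish entirely under a debugger or instrumentation:

    import random
    import threading
    import time

    # Contrived illustration: a lost-update race that fails unreproducibly.
    counter = 0

    def increment(n):
        global counter
        for _ in range(n):
            tmp = counter         # read
            if random.random() < 0.01:
                time.sleep(0)     # occasional thread switch widens the race
            counter = tmp + 1     # write back: updates are lost whenever
                                  # another thread ran in between

    threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 40000; the actual value differs from run to run -- exactly the
    # "cannot be demonstrated reliably in court" property the quote describes.
    print(counter)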

Posted on July 11, 2006 at 7:47 AM

Yet Another Redacting Failure

This sort of thing happens so often it’s no longer news:

Conte’s e-mails were intended to be blacked out in a 51-page electronic filing Wednesday in which the government argued against the Chronicle’s motion to quash the subpoena. Eight of those pages were not supposed to be public.

But the redacted parts in the computer file could be seen by copying them and pasting the material in a word processing program.
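The underlying failure is a common one: “redacting” by drawing opaque rectangles over the text hides it from view but leaves it in the file’s text layer, where copy-and-paste, or any text extractor, can read it back out. A minimal sketch of the recovery step, using the pypdf library as a modern stand-in for the word-processor trick the article describes (the filename is hypothetical):

    # Text "hidden" under drawn-on black boxes is still in the PDF's
    # content stream, so ordinary text extraction recovers it.
    from pypdf import PdfReader   # pip install pypdf

    reader = PdfReader("redacted_filing.pdf")   # hypothetical filename
    for page_number, page in enumerate(reader.pages, start=1):
        # extract_text() reads the text layer and ignores overlaid graphics.
        print(f"--- page {page_number} ---")
        print(page.extract_text())

Proper redaction tools delete the text from the content stream before drawing the black box; anything that merely covers it fails this way.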

Another news article here.

Posted on June 26, 2006 at 12:29 PM

Lying to Government Agents

“How to Avoid Going to Jail under 18 U.S.C. Section 1001 for Lying to Government Agents”

Title 18, United States Code, Section 1001 makes it a crime to: 1) knowingly and willfully; 2) make any materially false, fictitious or fraudulent statement or representation; 3) in any matter within the jurisdiction of the executive, legislative or judicial branch of the United States. Your lie does not even have to be made directly to an employee of the national government as long as it is “within the jurisdiction” of the ever expanding federal bureaucracy. Though the falsehood must be “material” this requirement is met if the statement has the “natural tendency to influence or [is] capable of influencing, the decision of the decisionmaking body to which it is addressed.” United States v. Gaudin, 515 U.S. 506, 510 (1995). (In other words, it is not necessary to show that your particular lie ever really influenced anyone.) Although you must know that your statement is false at the time you make it in order to be guilty of this crime, you do not have to know that lying to the government is a crime or even that the matter you are lying about is “within the jurisdiction” of a government agency. United States v. Yermian, 468 U.S. 63, 69 (1984). For example, if you lie to your employer on your time and attendance records and, unbeknownst to you, he submits your records, along with those of other employees, to the federal government pursuant to some regulatory duty, you could be criminally liable.

Posted on June 5, 2006 at 1:24 PM

