Entries Tagged "courts"


Federal Trade Commissioner Julie Brill on Obscurity

I think this is good:

Obscurity means that personal information isn’t readily available to just anyone. It doesn’t mean that information is wiped out or even locked up; rather, it means that some combination of factors makes certain types of information relatively hard to find.

Obscurity has always been an important component of privacy. It is a helpful concept because it encapsulates how a broad range of social, economic, and technological changes affects norms and consumer expectations.

Posted on April 24, 2015 at 12:42 PM

Australia Outlaws Warrant Canaries

In the US, certain types of warrants can come with gag orders preventing the recipient from disclosing the existence of the warrant to anyone else. A warrant canary is basically a legal hack of that prohibition. Instead of saying "I just received a warrant with a gag order," the potential recipient keeps repeating "I have not received any warrants." If the recipient stops saying that, the rest of us are supposed to assume that he has been served one.
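The reader's side of this arrangement can be sketched in a few lines of code. This is a hypothetical sketch: the function name, return values, and 90-day freshness interval are my assumptions for illustration, not any real organization's canary policy.

```python
from datetime import date, timedelta

def check_canary(last_published: date, today: date,
                 max_age_days: int = 90) -> str:
    """Return 'alive' if an affirmative canary statement is recent enough.

    The key property: a stale or missing statement is treated exactly like
    a triggered canary. Silence is the signal; no one has to speak.
    """
    if today - last_published > timedelta(days=max_age_days):
        return "tripped"  # the organization stopped repeating its statement
    return "alive"

print(check_canary(date(2015, 3, 1), date(2015, 3, 15)))  # alive
print(check_canary(date(2014, 1, 1), date(2015, 3, 15)))  # tripped
```

The design point is that the checker never needs a "we got a warrant" message; it only needs the affirmative statement to stop arriving on schedule.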

Lots of organizations maintain them. Personally, I have never believed this trick would work. It relies on the fact that a prohibition against speaking doesn’t prevent someone from not speaking. But courts generally aren’t impressed by this sort of thing, and I can easily imagine a secret warrant that includes a prohibition against triggering the warrant canary. And for all I know, there are right now secret legal proceedings on this very issue.

Australia has sidestepped all of this by outlawing warrant canaries entirely:

Section 182A of the new law says that a person commits an offense if he or she discloses or uses information about “the existence or non-existence of such a [journalist information] warrant.” The penalty upon conviction is two years imprisonment.

Expect that sort of wording in future US surveillance bills, too.

Posted on March 31, 2015 at 7:14 AM

Reforming the FISA Court

The Brennan Center has a long report on what’s wrong with the FISA Court and how to fix it.

At the time of its creation, many lawmakers saw constitutional problems in a court that operated in total secrecy and outside the normal “adversarial” process…. But the majority of Congress was reassured by similarities between FISA Court proceedings and the hearings that take place when the government seeks a search warrant in a criminal investigation. Moreover, the rules governing who could be targeted for “foreign intelligence” purposes were narrow enough to mitigate concerns that the FISA Court process might be used to suppress political dissent in the U.S.—or to avoid the stricter standards that apply in domestic criminal cases.

In the years since then, however, changes in technology and the law have altered the constitutional calculus. Technological advances have revolutionized communications. People are communicating at a scale unimaginable just a few years ago. International phone calls, once difficult and expensive, are now as simple as flipping a light switch, and the Internet provides countless additional means of international communication. Globalization makes such exchanges as necessary as they are easy. As a result of these changes, the amount of information about Americans that the NSA intercepts, even when targeting foreigners overseas, has exploded.

Instead of increasing safeguards for Americans’ privacy as technology advances, the law has evolved in the opposite direction since 9/11…. While surveillance involving Americans previously required individualized court orders, it now happens through massive collection programs…involving no case-by-case judicial review. The pool of permissible targets is no longer limited to foreign powers—such as foreign governments or terrorist groups—and their agents. Furthermore, the government may invoke the FISA Court process even if its primary purpose is to gather evidence for a domestic criminal prosecution rather than to thwart foreign threats.

…[T]hese developments…have had a profound effect on the role exercised by the FISA Court. They have caused the court to veer off course, departing from its traditional role of ensuring that the government has sufficient cause to intercept communications or obtain records in particular cases and instead authorizing broad surveillance programs. It is questionable whether the court’s new role comports with Article III of the Constitution, which mandates that courts must adjudicate concrete disputes rather than issuing advisory opinions on abstract questions. The constitutional infirmity is compounded by the fact that the court generally hears only from the government, while the people whose communications are intercepted have no meaningful opportunity to challenge the surveillance, even after the fact.

Moreover, under current law, the FISA Court does not provide the check on executive action that the Fourth Amendment demands. Interception of communications generally requires the government to obtain a warrant based on probable cause of criminal activity. Although some courts have held that a traditional warrant is not needed to collect foreign intelligence, they have imposed strict limits on the scope of such surveillance and have emphasized the importance of close judicial scrutiny in policing these limits. The FISA Court’s minimal involvement in overseeing programmatic surveillance does not meet these constitutional standards.

[…]

Fundamental changes are needed to fix these flaws. Congress should end programmatic surveillance and require the government to obtain judicial approval whenever it seeks to obtain communications or information involving Americans. It should shore up the Article III soundness of the FISA Court by ensuring that the interests of those affected by surveillance are represented in court proceedings, increasing transparency, and facilitating the ability of affected individuals to challenge surveillance programs in regular federal courts. Finally, Congress should address additional Fourth Amendment concerns by narrowing the permissible scope of “foreign intelligence surveillance” and ensuring that it cannot be used as an end-run around the constitutional standards for criminal investigations.

Just Security post, from which I copied the above excerpt. Lawfare post.

Posted on March 24, 2015 at 9:04 AM

When Thinking Machines Break the Law

Last year, two Swiss artists programmed a Random Darknet Shopper, which every week would spend $100 in bitcoin to buy a random item from an anonymous Internet black market...all for an art project on display in Switzerland. It was a clever concept, except there was a problem. Most of the stuff the bot purchased was benign: fake Diesel jeans, a baseball cap with a hidden camera, a stash can, a pair of Nike trainers. But it also purchased ten ecstasy tablets and a fake Hungarian passport.
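The bot's behavior, as described, amounts to a simple weekly random draw against a budget. A minimal sketch of that idea (the catalog, prices, and function name are invented for illustration; the artists' actual code is not shown here):

```python
import random

# Invented example catalog of (item, price_in_usd) pairs; the real bot
# drew from live darknet-market listings, not a fixed list like this.
CATALOG = [
    ("fake Diesel jeans", 60),
    ("baseball cap with hidden camera", 80),
    ("stash can", 25),
    ("Nike trainers", 90),
]

def weekly_purchase(budget=100, rng=None):
    """Pick one random item whose price fits within this week's budget."""
    rng = rng or random.Random()
    affordable = [entry for entry in CATALOG if entry[1] <= budget]
    return rng.choice(affordable)

item, price = weekly_purchase(rng=random.Random(42))
print(item, price)
```

Note what's absent: nothing in the selection logic distinguishes jeans from ecstasy tablets. The legal question in the rest of this essay starts exactly there.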

What do we do when a machine breaks the law? Traditionally, we hold the person controlling the machine responsible. People commit the crimes; the guns, lockpicks, or computer viruses are merely their tools. But as machines become more autonomous, the link between machine and controller becomes more tenuous.

Who is responsible if an autonomous military drone accidentally kills a crowd of civilians? Is it the military officer who keyed in the mission, the programmers of the enemy detection software that misidentified the people, or the programmers of the software that made the actual kill decision? What if those programmers had no idea that their software was being used for military purposes? And what if the drone can improve its algorithms by modifying its own software based on what the entire fleet of drones learns on earlier missions?

Maybe our courts can decide where the culpability lies, but that's only because while current drones may be autonomous, they're not very smart. As drones get smarter, their links to the humans who originally built them become more tenuous.

What if there are no programmers, and the drones program themselves? What if they are both smart and autonomous, and make strategic as well as tactical decisions on targets? What if one of the drones decides, based on whatever means it has at its disposal, that it no longer maintains allegiance to the country that built it and goes rogue?

Our society has many approaches, using both informal social rules and more formal laws, for dealing with people who won't follow the rules of society. We have informal mechanisms for small infractions, and a complex legal system for larger ones. If you are obnoxious at a party I throw, I won't invite you back. Do it regularly, and you'll be shamed and ostracized from the group. If you steal some of my stuff, I might report you to the police. Steal from a bank, and you'll almost certainly go to jail for a long time. A lot of this might seem ad hoc, but we humans have spent millennia working it all out.

Security is both political and social, but it's also psychological. Door locks, for example, only work because our social and legal prohibitions on theft keep the overwhelming majority of us honest. That's how we live peacefully together at a scale unimaginable for any other species on the planet.

How does any of this work when the perpetrator is a machine with whatever passes for free will? Machines probably won’t have any concept of shame or praise. They won’t refrain from doing something because of what other machines might think. They won’t follow laws simply because it’s the right thing to do, nor will they have a natural deference to authority. When they’re caught stealing, how can they be punished? What does it mean to fine a machine? Does it make any sense at all to incarcerate it? And unless they are deliberately programmed with a self-preservation function, threatening them with execution will have no meaningful effect.

We are already talking about programming morality into thinking machines, and we can imagine programming other human tendencies into our machines, but we’re certainly going to get it wrong. No matter how much we try to avoid it, we’re going to have machines that break the law.

This, in turn, will break our legal system. Fundamentally, our legal system doesn’t prevent crime. Its effectiveness is based on arresting and convicting criminals after the fact, and their punishment providing a deterrent to others. This completely fails if there’s no punishment that makes sense.

We already experienced a small example of this after 9/11, when most of us first started thinking about suicide terrorists and how post facto security was irrelevant to them. That was just one change in motivation, and look at how those actions affected the way we think about security. Our laws will have the same problem with thinking machines, along with related problems we can't even imagine yet. The social and legal systems that have dealt so effectively with human rulebreakers of all sorts will fail in unexpected ways in the face of thinking machines.

A machine that thinks won’t always think in the ways we want it to. And we’re not ready for the ramifications of that.

This essay previously appeared on Edge.org as one of the answers to the 2015 Edge Question: “What do you think about machines that think?”

EDITED TO ADD: The Random Darknet Shopper is "under arrest."

Posted on January 23, 2015 at 4:55 AM

Merry Christmas from the NSA

On Christmas Eve, the NSA released a bunch of audit reports on illegal spying using EO 12333 from 2001 to 2013.

Bloomberg article.

The heavily-redacted reports include examples of data on Americans being e-mailed to unauthorized recipients, stored in unsecured computers and retained after it was supposed to be destroyed, according to the documents. They were posted on the NSA’s website at around 1:30 p.m. on Christmas Eve.

In a 2012 case, for example, an NSA analyst “searched her spouse’s personal telephone directory without his knowledge to obtain names and telephone numbers for targeting,” according to one report. The analyst “has been advised to cease her activities,” it said.

The documents were released in response to an ACLU lawsuit.

Another article.

EDITED TO ADD (12/27): Remember Edward Snowden’s comment that he could eavesdrop on anybody? “I, sitting at my desk, certainly had the authorities to wiretap anyone, from you, or your accountant, to a federal judge, to even the President if I had a personal email.” Lots of people have accused him of lying. Here’s former NSA General Counsel Stewart Baker: “All that makes Snowden’s claim about being able to wiretap anyone extremely unlikely—and certainly not demonstrated by the latest disclosures, despite Glenn Greenwald’s claims to the contrary.”

These documents demonstrate that Snowden is probably correct. In these documents, NSA agents target all sorts of random Americans.

Posted on December 26, 2014 at 6:29 AM

