Entries Tagged "police"


Child Arrested Because Adults Are Stupid

A Texas 9th-grader makes an electronic clock and brings it to school. Teachers immediately become stupid and call the police:

The bell rang at least twice, he said, while the officers searched his belongings and questioned his intentions. The principal threatened to expel him if he didn’t make a written statement, he said.

“They were like, ‘So you tried to make a bomb?’” Ahmed said.

“I told them no, I was trying to make a clock.”

“He said, ‘It looks like a movie bomb to me.’”

The student’s name is Ahmed Mohamed, which certainly didn’t help.

I am reminded of the 2007 story of an MIT student getting arrested for bringing a piece of wearable electronic art to the airport. And I wrote about the “war on the unexpected” back in 2007, too.

We simply have to stop terrorizing ourselves. We just look stupid when we do it.

EDITED TO ADD: New York Times article. Glenn Greenwald commentary.

EDITED TO ADD (9/21): There’s more to the story. He’s been invited to the White House, Google, MIT, and Facebook, and offered internships by Reddit and Twitter. On the other hand, Sarah Palin doesn’t believe it was just a clock. And he’s changing schools.

EDITED TO ADD (10/13): Two more essays.

Posted on September 16, 2015 at 10:09 AM

Another Salvo in the Second Crypto War (of Words)

Prosecutors from New York, London, Paris, and Madrid wrote an op-ed in yesterday’s New York Times in favor of backdoors in cell phone encryption. There are a number of flaws in their argument, ranging from how easy it is to get data off an encrypted phone to the dangers of designing a backdoor in the first place, but all of that has been said before. And since anecdote can be more persuasive than data, the op-ed started with one:

In June, a father of six was shot dead on a Monday afternoon in Evanston, Ill., a suburb 10 miles north of Chicago. The Evanston police believe that the victim, Ray C. Owens, had also been robbed. There were no witnesses to his killing, and no surveillance footage either.

With a killer on the loose and few leads at their disposal, investigators in Cook County, which includes Evanston, were encouraged when they found two smartphones alongside the body of the deceased: an iPhone 6 running on Apple’s iOS 8 operating system, and a Samsung Galaxy S6 Edge running on Google’s Android operating system. Both devices were passcode protected.

You can guess the rest. A judge issued a warrant, but neither Apple nor Google could unlock the phones. “The homicide remains unsolved. The killer remains at large.”

The Intercept researched the example, and it seems to be real. The phones belonged to the victim, and…

According to Commander Joseph Dugan of the Evanston Police Department, investigators were able to obtain records of the calls to and from the phones, but those records did not prove useful. By contrast, interviews with people who knew Owens suggested that he communicated mainly through text messages—the kind that travel as encrypted data—and had made plans to meet someone shortly before he was shot.

The information on his phone was not backed up automatically on Apple’s servers—apparently because he didn’t use wi-fi, which backups require.

[…]

But Dugan also wasn’t as quick to lay the blame solely on the encrypted phones. “I don’t know if getting in there, getting the information, would solve the case,” he said, “but it definitely would give us more investigative leads to follow up on.”

This is the first actual example I’ve seen illustrating the value of a backdoor. Unlike the increasingly common example of an ISIL handler abroad communicating securely with a radicalized person in the US, it’s an example where a backdoor might have helped. I say “might have,” because the Galaxy S6 is not encrypted by default, which means the victim deliberately turned the encryption on. If the native smartphone encryption had been backdoored, we don’t know if the victim would have turned it on nevertheless, or if he would have employed a different, non-backdoored, app.

The authors’ other examples are much sloppier:

Between October and June, 74 iPhones running the iOS 8 operating system could not be accessed by investigators for the Manhattan district attorney’s office—despite judicial warrants to search the devices. The investigations that were disrupted include the attempted murder of three individuals, the repeated sexual abuse of a child, a continuing sex trafficking ring and numerous assaults and robberies.

[…]

In France, smartphone data was vital to the swift investigation of the Charlie Hebdo terrorist attacks in January, and the deadly attack on a gas facility at Saint-Quentin-Fallavier, near Lyon, in June. And on a daily basis, our agencies rely on evidence lawfully retrieved from smartphones to fight sex crimes, child abuse, cybercrime, robberies or homicides.

We’ve heard that 74 number before. It covers roughly nine months in an office that handles about 100,000 cases a year, which works out to some 75,000 cases, so encryption blocked investigators in fewer than 0.1% of them. Details about those cases would be useful, so we could determine whether encryption was merely an impediment to investigation or actually let a criminal go free. The government needs to do a better job of presenting empirical data to support its case for backdoors. That it is unable to do so suggests very strongly that an empirical analysis wouldn’t favor the government’s case.

As to the Charlie Hebdo case, it’s not clear how much of that vital smartphone data was actual data, and how much of it was unable-to-be-encrypted metadata. I am reminded of the examples that then-FBI Director Louis Freeh would give during the First Crypto War in the 1990s. The big one used to illustrate the dangers of encryption was Mafia boss John Gotti. But the surveillance that convicted him was a room bug, not a wiretap. Given that the examples from FBI Director James Comey’s “going dark” speech last year were bogus, skepticism in the face of anecdote seems prudent.

So much of this “going dark” versus the “golden age of surveillance” debate depends on where you start from. Referring to that first Evanston example and the inability to get evidence from the victim’s phones, the op-ed authors write: “Until very recently, this situation would not have occurred.” That’s utter nonsense. From the beginning of time until very recently, this was the only situation that could have occurred. Objects in the vicinity of an event were largely mute about the past. Few things, save for eyewitnesses, could ever reach back in time and produce evidence. Even 15 years ago, the victim’s cell phone would have had no evidence on it that couldn’t have been obtained elsewhere, and that’s if the victim had been carrying a cell phone at all.

For most of human history, surveillance has been expensive. Over the last couple of decades, it has become incredibly cheap and almost ubiquitous. That a few bits and pieces are becoming expensive again isn’t a cause for alarm.

This essay originally appeared on Lawfare.

EDITED TO ADD (8/13): Excellent parody/commentary: “When Curtains Block Justice.”

Posted on August 12, 2015 at 2:18 PM

Incenting Drug Dealers to Snitch on Each Other

Local police are trying to convince drug dealers to turn each other in by pointing out that it reduces competition.

It’s a comical tactic with serious results: “We offer a free service to help you eliminate your drug competition!” Under a large marijuana leaf, the flier contained a blank form encouraging drug dealers to identify the competition and provide contact information. It also asked respondents to identify the hours the competition was most active.

Posted on August 11, 2015 at 6:41 AM

Shooting Down Drones

A Kentucky man shot down a drone that was hovering in his backyard:

“It was just right there,” he told Ars. “It was hovering, I would never have shot it if it was flying. When he came down with a video camera right over my back deck, that’s not going to work. I know they’re neat little vehicles, but one of those uses shouldn’t be flying into people’s yards and videotaping.”

Minutes later, a car full of four men that he didn’t recognize rolled up, “looking for a fight.”

“Are you the son of a bitch that shot my drone?” one said, according to Merideth.

His terse reply to the men, while wearing a 10mm Glock holstered on his hip: “If you cross that sidewalk onto my property, there’s going to be another shooting.”

He was arrested, but what’s the law?

In the view of drone lawyer Brendan Schulman and robotics law professor Ryan Calo, home owners can’t just start shooting when they see a drone over their house. The reason is because the law frowns on self-help when a person can just call the police instead. This means that Meredith may not have been defending his house, but instead engaging in criminal acts and property damage for which he could have to pay.

But a different and bolder argument, put forward by law professor Michael Froomkin, could provide Meredith some cover. In a paper, Froomkin argues that it’s reasonable to assume robotic intrusions are not harmless, and that people may have a right to “employ violent self-help.”

Froomkin’s paper is well worth reading:

Abstract: Robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.

The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases—drone overflights, autonomous cars—national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots, notably the risk that the robot (or drone) may be spying—perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.

We argue that the scope of permissible self-help in defending one’s privacy should be quite broad. There is exigency in that resort to legally administered remedies would be impracticable; and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great—or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.

The paper concludes with a brief examination of what if anything our survey of a person’s right to defend against robots might tell us about the current state of robot rights against people.

Note that there are drones that shoot back.

Here are two books that talk about these topics. And an article from 2012.

EDITED TO ADD (8/9): How to shoot down a drone.

Posted on August 4, 2015 at 8:24 AM

Bizarre High-Tech Kidnapping

This is a story of a very high-tech kidnapping:

FBI court filings unsealed last week showed how Denise Huskins’ kidnappers used anonymous remailers, image sharing sites, Tor, and other people’s Wi-Fi to communicate with the police and the media, scrupulously scrubbing metadata from photos before sending. They tried to use computer spyware and a DropCam to monitor the aftermath of the abduction and had a Parrot radio-controlled drone standing by to pick up the ransom by remote control.

The story also demonstrates just how effective the FBI is at tracing cell phone usage these days. They had a record of a blocked call from the kidnappers to the victim’s cell phone. First they served AT&T with a search warrant to get the actual calling number. After learning that it was an AT&T prepaid Tracfone, they went to Tracfone to find out where the burner was bought and what its serial numbers were, and to AT&T for records of where the calls were made from.

The FBI reached out to Tracfone, which was able to tell the agents that the phone was purchased from a Target store in Pleasant Hill on March 2 at 5:39 pm. Target provided the bureau with a surveillance-cam photo of the buyer: a white male with dark hair and medium build. AT&T turned over records showing the phone had been used within 650 feet of a cell site in South Lake Tahoe.

Here’s the criminal complaint. It borders on surreal. Were it an episode of CSI:Cyber, you would never believe it.

Posted on July 29, 2015 at 6:34 AM

Stink Bombs for Riot Control

They’re coming to the US:

It’s called Skunk, a type of “malodorant,” or in plainer language, a foul-smelling liquid. Technically nontoxic but incredibly disgusting, it has been described as a cross between “dead animal and human excrement.” Untreated, the smell lingers for weeks.

The Israeli Defense Forces developed Skunk in 2008 as a crowd-control weapon for use against Palestinians. Now Mistral, a company out of Bethesda, Md., says they are providing it to police departments in the United States.

[…]

The Israelis first used it in 2008 to disperse Palestinians protesting in the West Bank. A BBC video shows its first use in action, sprayed by a hose, a system that has come to be known as the “crap cannon.”

Mistral reps say Skunk, once deployed, can be “neutralized” with a special soap, and only with that soap. In another BBC video, an IDF spokesman describes how any attempt to wash it via regular means only exacerbates its effects. Six weeks after IDF forces used it against Palestinians at a security barrier, it still lingered in the air.

Posted on May 26, 2015 at 6:18 AM

The Further Democratization of Stingray

Stingray is the code name for an IMSI-catcher, which is basically a fake cell phone tower sold by Harris Corporation to various law enforcement agencies. (It’s actually just one of a series of devices with fish names—Amberjack is another—but Stingray is the name used in the media.) What it does is trick nearby cell phones into connecting to it. Once that happens, the IMSI-catcher can collect identification and location information from those phones and, in some cases, eavesdrop on phone conversations, text messages, and web browsing. (IMSI stands for International Mobile Subscriber Identity, which is the unique serial number your cell phone broadcasts so that the cellular system knows where you are.)
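
To make the lure concrete, here is a toy sketch in Python (my own illustration; the tower names and signal numbers are invented) of the selection behavior an IMSI-catcher exploits: a handset camps on whichever compatible tower it hears most strongly, so a fake tower parked nearby wins simply by being louder.

# Toy model of cell selection -- not real baseband code.
towers = [
    {"name": "carrier tower A", "signal_dbm": -85, "fake": False},
    {"name": "carrier tower B", "signal_dbm": -95, "fake": False},
    {"name": "IMSI-catcher",    "signal_dbm": -40, "fake": True},  # parked nearby
]

# Simplified selection rule: the phone camps on the strongest signal it hears.
chosen = max(towers, key=lambda t: t["signal_dbm"])
print(f"phone camps on {chosen['name']} (fake: {chosen['fake']})")

# Once the phone registers with the fake tower, the tower sees the IMSI the
# phone transmits, which is enough to identify the phone and place it nearby.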

The use of IMSI-catchers in the US used to be a massive police secret. The FBI is so scared of explaining this capability in public that the agency makes local police sign nondisclosure agreements before using the technique, and has instructed them to lie about their use of it in court. When it seemed possible that local police in Sarasota, Florida, might release documents about Stingray cell phone interception equipment to plaintiffs in civil rights litigation against them, federal marshals seized the documents. More recently, St. Louis police dropped a case rather than talk about the technology in court. And Baltimore police admitted using Stingray over 25,000 times.

The truth is that it’s no longer a massive police secret. We now know a lot about IMSI-catchers. And the US government does not have a monopoly over the use of IMSI-catchers. I wrote in Data and Goliath:

There are dozens of these devices scattered around Washington, DC, and the rest of the country run by who-knows-what government or organization. Criminal uses are next.

From the Washington Post:

How rife? Turner and his colleagues assert that their specially outfitted smartphone, called the GSMK CryptoPhone, had detected signs of as many as 18 IMSI catchers in less than two days of driving through the region. A map of these locations, released Wednesday afternoon, looks like a primer on the geography of Washington power, with the surveillance devices reportedly near the White House, the Capitol, foreign embassies and the cluster of federal contractors near Dulles International Airport.

At the RSA Conference last week, Pwnie Express demonstrated their IMSI-catcher detector.

Building your own IMSI-catcher isn’t hard or expensive. At Def Con in 2010, researcher Chris Paget (now Kristin Paget) demonstrated a homemade IMSI-catcher. The whole thing cost $1,500, which is cheap enough for both criminals and nosy hobbyists.

It’s even cheaper and easier now. Anyone with a HackRF software-defined radio card can turn their laptop into an amateur IMSI-catcher. And this is why companies are building detectors into their security monitoring equipment.
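
For a sense of what those detectors look for, here is a minimal sketch in Python (not Pwnie Express’s product; the fields and thresholds are assumptions of mine) that scores an observed base station against a few red flags commonly attributed to fake towers: absence from a known-tower database, an implausibly strong signal, no advertised neighbors, and a forced drop to 2G.

from dataclasses import dataclass

@dataclass
class BaseStation:
    cell_id: int          # broadcast cell identifier
    signal_dbm: int       # received signal strength
    neighbor_count: int   # neighboring cells the tower advertises
    forces_2g: bool       # whether the tower pushes handsets down to 2G

# Stand-in for a trusted database of legitimate cell IDs in the area.
KNOWN_CELLS = {101, 102, 103}

def red_flags(bs: BaseStation) -> int:
    """Count heuristics suggesting the tower may be an IMSI-catcher."""
    score = 0
    if bs.cell_id not in KNOWN_CELLS:
        score += 1   # never seen in the tower database
    if bs.signal_dbm > -50:
        score += 1   # implausibly strong signal for a macro tower
    if bs.neighbor_count == 0:
        score += 1   # real towers advertise neighboring cells
    if bs.forces_2g:
        score += 1   # forcing 2G strips away the stronger protocols
    return score

observed = BaseStation(cell_id=999, signal_dbm=-45, neighbor_count=0, forces_2g=True)
print("red flags:", red_flags(observed))   # prints 4

A real detector correlates many more signals over time, but the idea is the same: fake towers tend to look wrong in several small ways at once.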

Two points here. The first is that the FBI should stop treating Stingray like it’s a big secret, so we can start talking about policy.

The second is that we should stop pretending that this capability is exclusive to law enforcement, and recognize that we’re all at risk because of it. If we continue to allow our cellular networks to be vulnerable to IMSI-catchers, then we are all vulnerable to any foreign government, criminal, hacker, or hobbyist that builds one. If we instead engineer our cellular networks to be secure against this sort of attack, then we are safe against all those attackers.

Me:

We have one infrastructure. We can’t choose a world where the US gets to spy and the Chinese don’t. We get to choose a world where everyone can spy, or a world where no one can spy. We can be secure from everyone, or vulnerable to anyone.

As with QUANTUM, we have the choice of building our cellular infrastructure for security or for surveillance. Let’s choose security.

EDITED TO ADD (5/2): Here’s an IMSI catcher for sale on alibaba.com. At this point, every dictator in the world is using this technology against their own citizens. They’re used extensively in China to send SMS spam without paying the telcos any fees. On a Food Network show called Mystery Diners—episode 108, “Cabin Fever”—someone used an IMSI catcher to intercept a phone call between two restaurant employees.

The new model of the IMSI catcher from Harris Corporation is called Hailstorm. It has the ability to remotely inject malware into cell phones. Other Harris IMSI-catcher codenames are Kingfish, Gossamer, Triggerfish, Amberjack and Harpoon. The competitor is DRT, made by the Boeing subsidiary Digital Receiver Technology, Inc.

EDITED TO ADD (5/2): Here’s an IMSI catcher called Piranha, sold by the Israeli company Rayzone Corp. It claims to work on GSM 2G, 3G, and 4G networks (plus CDMA, of course). The basic Stingray only works on GSM 2G networks, and intercepts phones on the more modern networks by forcing them to downgrade to the 2G protocols. We believe that the more modern IMSI catchers also work against 3G and 4G networks.
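
That forced downgrade is itself something a handset-side tool can watch for. Here is a minimal sketch, assuming a hypothetical log of the radio technologies a phone has recently used, that flags a sudden drop from 3G/4G service to 2G:

# Minimal downgrade-detection sketch; the log format is hypothetical.
def downgrade_suspected(rat_log, window=3):
    """True if the latest entry is 2G (GSM) after a run of 3G/4G service."""
    if len(rat_log) < window + 1:
        return False
    recent, earlier = rat_log[-1], rat_log[-(window + 1):-1]
    return recent == "GSM" and all(r in ("LTE", "UMTS") for r in earlier)

print(downgrade_suspected(["LTE", "LTE", "UMTS", "LTE", "GSM"]))  # True
print(downgrade_suspected(["GSM", "GSM", "GSM", "GSM", "GSM"]))   # False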

EDITED TO ADD (5/13): The FBI recently released more than 5,000 pages of documents about Stingray, but nearly everything is redacted.

Posted on April 27, 2015 at 6:27 AM

