You can wrap your house in tinfoil, but when you start shining bright lights to defend yourself against alien attack, you’ve gone too far.
In general, society puts limits on what types of security you are allowed to use, especially when that use can affect others. You can’t place landmines on your lawn or shoot down drones hovering over your property.
WDRB News reported that the camera drone’s owners soon showed up at the home of the shooter, William H. Merideth: “Four guys came over to confront me about it, and I happened to be armed, so that changed their minds,” Merideth said. “They asked me, ‘Are you the S-O-B that shot my drone?’ and I said, ‘Yes I am,’” he said. “I had my 40 mm Glock on me and they started toward me and I told them, ‘If you cross my sidewalk, there’s gonna be another shooting.’” Police charged Merideth with criminal mischief and wanton endangerment.
Technology changes everything. Specifically, it upends long-standing societal balances around issues like security and privacy. When a capability becomes possible, or cheaper, or more common, the changes can be far-reaching. Rebalancing security and privacy after technology changes capabilities can be very difficult, and take years. And we’re not very good at it.
The security threats from drones are real, and the government is taking them seriously. In January, a man lost control of his drone, which crashed on the White House lawn. In May, another man was arrested for trying to fly his drone over the White House fence, and another last week for flying a drone into the stadium where the U.S. Open was taking place.
Law enforcement can deploy these technologies, but under current law it’s illegal to shoot down a drone, even if it’s hovering above your own property. In our society, you’re generally not allowed to take the law into your own hands. You’re expected to call the police and let them deal with it.
There’s an alternate theory, though, from law professor Michael Froomkin. He argues that self-defense should be permissible against drones simply because you don’t know their capabilities. We know, for example, that people have mounted guns on drones, which means they could pose a threat to life. Note that this legal theory has not been tested in court.
Increasingly, government is regulating drones and drone flights both at the state level and by the FAA. There are proposals to require that drones have an identifiable transponder, or no-fly zones programmed into the drone software.
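The no-fly-zone proposal is easy to sketch in code. The following is a hypothetical illustration only, not any real drone firmware: the zone coordinates, radius, and function names are all invented. The idea is that before arming, the software computes the great-circle distance from the drone's GPS fix to each geofenced zone and refuses to fly inside one.

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Illustrative zone: a 3 km circle around the White House.
NO_FLY = {"lat": 38.8977, "lon": -77.0365, "radius_m": 3000}

def inside_no_fly(lat, lon, zone=NO_FLY):
    """Return True if the position falls inside the geofenced circle."""
    return haversine_m(lat, lon, zone["lat"], zone["lon"]) <= zone["radius_m"]
```

Real geofencing databases (such as the FAA's airspace data) use polygons and altitude limits rather than simple circles, but the enforcement logic is the same: a point-in-zone test run before takeoff.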
Still, a large number of security issues remain unresolved. How do we feel about drones with long-range listening devices, for example? Or drones hovering outside our property and photographing us through our windows?
What’s going on is that drones have changed how we think about security and privacy within our homes, by removing the protections we used to get from fences and walls. Of course, being spied on and shot at from above is nothing new, but access to those technologies was expensive and largely the purview of governments and some corporations. Drones put these capabilities into the hands of hobbyists, and we don’t know what to do about it.
The issues around drones will get worse as we move from remotely piloted aircraft to true drones: aircraft that operate autonomously from a computer program. For the first time, autonomous robots—with ever-increasing intelligence and capabilities at an ever-decreasing cost—will have access to public spaces. This will create serious problems for society, because our legal system is largely based on deterring human miscreants rather than their proxies.
Our desire to shoot down a drone hovering nearby is understandable, given its potential threat. Society’s need for people not to take the law into their own hands—and especially not to fire guns into the air—is also understandable. These two positions are increasingly coming into conflict, and will require increasing government regulation to sort out. But more importantly, we need to rethink our assumptions of security and privacy in a world of autonomous drones, long-range cameras, face recognition, and the myriad other technologies that are increasingly in the hands of everyone.
This essay previously appeared on CNN.com.
It’s not just humans who dislike the small flying objects. YouTube has videos of drones being stared at quizzically by a moose, harassed by a raven, attacked by a hawk, butted by a ram, knocked out of the sky by a chimpanzee (who planned the whole thing) and a goose, and punched out of the sky by a kangaroo.
And bears hate them, even if they don’t actually attack.
A Kentucky man shot down a drone that was hovering in his backyard:
“It was just right there,” he told Ars. “It was hovering, I would never have shot it if it was flying. When he came down with a video camera right over my back deck, that’s not going to work. I know they’re neat little vehicles, but one of those uses shouldn’t be flying into people’s yards and videotaping.”
Minutes later, a car full of four men that he didn’t recognize rolled up, “looking for a fight.”
“Are you the son of a bitch that shot my drone?” one said, according to Merideth.
His terse reply to the men, while wearing a 10mm Glock holstered on his hip: “If you cross that sidewalk onto my property, there’s going to be another shooting.”
He was arrested, but what’s the law?
In the view of drone lawyer Brendan Schulman and robotics law professor Ryan Calo, homeowners can’t just start shooting when they see a drone over their house. The reason is that the law frowns on self-help when a person can simply call the police instead. This means that Merideth may not have been defending his house, but instead committing criminal acts and property damage for which he could have to pay.
But a different and bolder argument, put forward by law professor Michael Froomkin, could provide Merideth some cover. In a paper, Froomkin argues that it’s reasonable to assume robotic intrusions are not harmless, and that people may have a right to “employ violent self-help.”
Froomkin’s paper is well worth reading:
Abstract: Robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.
The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases (drone overflights, autonomous cars) national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots, notably the risk that the robot (or drone) may be spying, perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.
We argue that the scope of permissible self-help in defending one’s privacy should be quite broad. There is exigency in that resort to legally administered remedies would be impracticable; and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great—or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.
The paper concludes with a brief examination of what if anything our survey of a person’s right to defend against robots might tell us about the current state of robot rights against people.
Note that there are drones that shoot back.
EDITED TO ADD (8/9): How to shoot down a drone.
This is a story of a very high-tech kidnapping:
FBI court filings unsealed last week showed how Denise Huskins’ kidnappers used anonymous remailers, image sharing sites, Tor, and other people’s Wi-Fi to communicate with the police and the media, scrupulously scrubbing meta data from photos before sending. They tried to use computer spyware and a DropCam to monitor the aftermath of the abduction and had a Parrot radio-controlled drone standing by to pick up the ransom by remote control.
The story also demonstrates just how effective the FBI is at tracing cell phone usage these days. Agents started with a blocked call from the kidnappers to the victim’s cell phone. First, they served a search warrant on AT&T to obtain the actual calling number. After learning that it was an AT&T prepaid Tracfone, they went back to AT&T to find out where the burner was bought, what its serial numbers were, and where the calls were made from.
The FBI reached out to Tracfone, which was able to tell the agents that the phone was purchased from a Target store in Pleasant Hill on March 2 at 5:39 pm. Target provided the bureau with a surveillance-cam photo of the buyer: a white male with dark hair and medium build. AT&T turned over records showing the phone had been used within 650 feet of a cell site in South Lake Tahoe.
Here’s the criminal complaint. It borders on surreal. Were it an episode of CSI:Cyber, you would never believe it.
The marketing firm Adnear is using drones to track cell phone users:
The capture does not involve conversations or personally identifiable information, according to director of marketing and research Smriti Kataria. It uses signal strength, cell tower triangulation, and other indicators to determine where the device is, and that information is then used to map the user’s travel patterns.
“Let’s say someone is walking near a coffee shop,” Kataria said by way of example.
The coffee shop may want to offer in-app ads or discount coupons to people who often walk by but don’t enter, as well as to frequent patrons when they are elsewhere. Adnear’s client would be the coffee shop or other retailers who want to entice passersby.
The system identifies a given user through the device ID, and the location info is used to flesh out the user’s physical traffic pattern in his profile. Although anonymous, the user is “identified” as a code. The company says that no name, phone number, router ID, or other personally identifiable information is captured, and there is no photography or video.
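For readers curious what “signal strength, cell tower triangulation” looks like in practice, here is a minimal sketch. Nothing in it is Adnear’s actual method: the path-loss constants, tower positions, and signal readings are all invented for illustration. The standard textbook approach converts received signal strength (RSSI) into a rough distance estimate using a log-distance path-loss model, then intersects the resulting distance circles from three towers to estimate a position.

```python
import math

TX_POWER_DBM = -40.0   # assumed RSSI at the 1 m reference distance
PATH_LOSS_EXP = 2.7    # assumed urban path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Invert the log-distance path-loss model to get metres."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXP))

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Estimate (x, y) from three tower positions and distance estimates.

    Subtracting the circle equations pairwise yields two linear
    equations in x and y, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d
    return ((c * e - b * f) / det, (a * f - c * d) / det)
```

In practice RSSI is noisy, so real systems fit many measurements with least squares rather than intersecting three exact circles, but the privacy implication is the same: coarse, continuous location without any cooperation from the phone.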
Does anyone except this company believe that device ID is not personally identifiable information?