Shooting Down Drones

A Kentucky man shot down a drone that was hovering in his backyard:

"It was just right there," he told Ars. "It was hovering, I would never have shot it if it was flying. When he came down with a video camera right over my back deck, that's not going to work. I know they're neat little vehicles, but one of those uses shouldn't be flying into people's yards and videotaping."

Minutes later, a car full of four men that he didn't recognize rolled up, "looking for a fight."

"Are you the son of a bitch that shot my drone?" one said, according to Merideth.

His terse reply to the men, while wearing a 10mm Glock holstered on his hip: "If you cross that sidewalk onto my property, there's going to be another shooting."

He was arrested, but what's the law?

In the view of drone lawyer Brendan Schulman and robotics law professor Ryan Calo, homeowners can't just start shooting when they see a drone over their house, because the law frowns on self-help when a person can simply call the police instead. This means that Merideth may not have been defending his house, but instead committing criminal acts and property destruction for which he could have to pay.

But a different and bolder argument, put forward by law professor Michael Froomkin, could provide Merideth some cover. In a paper, Froomkin argues that it's reasonable to assume robotic intrusions are not harmless, and that people may have a right to "employ violent self-help."

Froomkin's paper is well worth reading:

Abstract: Robots can pose -- or can appear to pose -- a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.

The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases -- drone overflights, autonomous cars -- national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots, notably the risk that the robot (or drone) may be spying -- perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.

We argue that the scope of permissible self-help in defending one's privacy should be quite broad. There is exigency in that resort to legally administered remedies would be impracticable; and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great -- or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights, and other markings that would announce a robot's capabilities, and RFID chips and serial numbers that would uniquely identify the robot's owner.

The paper concludes with a brief examination of what if anything our survey of a person's right to defend against robots might tell us about the current state of robot rights against people.

Note that there are drones that shoot back.

Here are two books that talk about these topics. And an article from 2012.

Posted on August 4, 2015 at 8:24 AM | 60 Comments

Vulnerabilities in Brink's Smart Safe

Brink's sells an Internet-enabled smart safe called the CompuSafe Galileo. Despite being sold as a more secure safe, it's wildly insecure:

Vulnerabilities found in CompuSafe Galileo safes, smart safes made by the ever-reliable Brinks company that are used by retailers, restaurants, and convenience stores, would allow a rogue employee or anyone else with physical access to them to command their doors to open and relinquish their cash....

The hack has the makings of the perfect crime, because a thief could also erase any evidence that the theft occurred simply by altering data in a back-end database where the smartsafe logs how much money is inside and who accessed it.

Nothing about these vulnerabilities is a surprise to anyone who works in computer security:

But the safes have an external USB port on the side of the touchscreens that allows service technicians to troubleshoot and obtain a backup of the database. This, unfortunately, creates an easy entrypoint for thieves to take complete, administrative control of the devices.

"Once you're able to plug into that USB port, you're able to access lots of things that you shouldn't normally be able to access," Petro told WIRED. "There is a full operating system...that you're able to...fully take over...and make [the safe] do whatever you want it to do."

The researchers created a malicious script that, once inserted into a safe on a USB stick, lets a thief automatically open the safe doors by emulating certain mouse and keyboard actions and bypassing standard application controls. "You plug in this little gizmo, wait about 60 seconds, and the door just pops open," says Petro.

If it sounds like the people who designed this e-safe ignored all of the things we've learned about computer security in the last few decades, you're right. And that's the problem with Internet-of-Things security: it's often designed by people who don't know computer or Internet security.

They also haven't learned the lessons of full disclosure or rapid patching:

They notified Brinks about the vulnerabilities more than a year ago, but say the company appears to have done nothing to resolve the issues. Brinks could disable driver software associated with the USB port to prevent someone from controlling the safes in this way, or lock down the system and database so it's not running in administrative mode and the database can't be changed, but so far the company appears to have done none of these things.


Again, this all sounds familiar. The computer industry learned its lessons over a decade ago. Before then, it ignored security vulnerabilities, threatened researchers, and generally behaved very badly. I expect the same thing to happen with Internet-of-Things companies.
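The log-tampering half of this, at least, has a textbook mitigation: an append-only, hash-chained audit log, so that rewriting history is detectable. Here is a minimal sketch in Python -- an illustration of the general technique, not Brink's actual design:

```python
import hashlib
import json

def append_entry(log, entry):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def verify_chain(log):
    """Re-walk the chain; editing any earlier record breaks every hash after it."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, {"user": "clerk7", "action": "deposit", "amount": 200})
append_entry(log, {"user": "clerk7", "action": "open_door"})
assert verify_chain(log)

log[0]["entry"]["amount"] = 0   # the kind of edit the article describes
assert not verify_chain(log)    # ...and now it is detectable
```

(The head of the chain still has to be anchored somewhere the thief can't reach -- printed receipts, a remote server -- or he could simply recompute every hash after his edit.)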

Posted on August 3, 2015 at 1:27 PM | 28 Comments

Help with Mailing List Hosting

I could use some help with finding a host for my monthly newsletter, Crypto-Gram. My old setup just wasn't reliable enough. I had a move planned, but that fell through when the new host's bounce processing system turned out to be buggy and they admitted the problem might never be fixed.

Clearly I need something a lot more serious. My criteria include subscriber privacy, reasonable cost, and a proven track record of reliability with large mailing lists. (I would use MailChimp, but it has mandatory click tracking for new accounts.)

One complication is that SpamCop, a popular anti-spam service, tells me I have at least one of their "spamtrap" addresses on the list. Spamtraps are addresses that -- in theory -- have never been used, so they shouldn't be on any legitimate list. I don't know how they got on my list, since I make people confirm their subscriptions by replying to an e-mail or clicking on an e-mailed link. But I used to make rare exceptions for people who just asked to join, so maybe a bad address or two got on that way. Spamtraps don't work if you tell people what they are, so I can't just find and remove them. And this has caused no end of problems for subscribers who use SpamCop's blacklist.

At a minimum, I need to be sure that a new host won't kick me out for a couple of spamtraps. And if the solution to this problem involves making all 100,000 people on the list reconfirm their subscriptions, then that has to be as simple and user-friendly a process as possible.
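The confirmation mechanics themselves are the easy part. For illustration, here is a minimal sketch of a signed double-opt-in token in Python; the link layout and names are hypothetical:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret-key"   # hypothetical; never leaves the server

def confirm_token(email: str, issued: int) -> str:
    """Sign (address, timestamp) so the confirmation link can't be forged."""
    msg = f"{email}|{issued}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(email: str, issued: int, token: str, max_age: int = 3 * 86400) -> bool:
    if time.time() - issued > max_age:
        return False                  # confirmation link expired
    expected = confirm_token(email, issued)
    return hmac.compare_digest(expected, token)

# The confirmation e-mail carries a link like (layout hypothetical):
#   https://list.example.com/confirm?email=...&t=<issued>&token=<token>
# Only addresses that follow their link are ever added to the list.
```

An address that never clicks its link never gets subscribed -- which is why the manual exceptions I used to make are the most plausible way the spamtraps got in.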

If you can recommend a host that would work, I'm interested. Even better would be talking to an expert with lots of experience running large mailing lists who can guide me. If you know a person like that, or if you are one, please leave a comment or e-mail me at the address on my Contact page.

Posted on August 3, 2015 at 5:58 AM | 42 Comments

Friday Squid Blogging: Russian Sailors Video Colossal Squid

It tried to steal their catch.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on July 31, 2015 at 4:17 PM | 145 Comments

Schneier Speaking Schedule

I'm speaking at an Infoedge event at Bali Hai Golf Club in Las Vegas, at 5 pm on August 5, 2015.

I'm speaking at Def Con 23 on Friday, August 7, 2015.

I'm speaking -- remotely via Skype -- at LinuxCon in Seattle on August 18, 2015.

I'm speaking at CloudSec in Singapore on August 25, 2015.

I'm speaking at MindTheSec in São Paulo, Brazil, on August 27, 2015.

I'm speaking on the future of privacy at a public seminar sponsored by the Institute for Future Studies, in Stockholm, Sweden on September 21, 2015.

I'm speaking at Next Generation Threats 2015 in Stockholm, Sweden, on September 22, 2015.

I'm speaking at Next Generation Threats 2015 in Gothenburg, Sweden, on September 23, 2015.

I'm speaking at Free and Safe in Cyberspace in Brussels on September 24, 2015.

I'll be on a panel at Privacy. Security. Risk. 2015 in Las Vegas on September 30, 2015.

I'm speaking at the Privacy + Security Forum, October 21-23, 2015, at The Marvin Center in Washington, DC.

I'm speaking at the Boston Book Festival on October 24, 2015.

I'm speaking at the 4th Annual Cloud Security Congress EMEA in Berlin on November 17, 2015.

Posted on July 31, 2015 at 2:21 PM | 13 Comments

HAMMERTOSS: New Russian Malware

FireEye has a detailed report of a sophisticated piece of Russian malware: HAMMERTOSS. It uses some clever techniques to hide:

The Hammertoss backdoor malware looks for a different Twitter handle each day -- automatically prompted by a list generated by the tool -- to get its instructions. If the handle it's looking for is not registered that day, it merely returns the next day and checks for the Twitter handle designated for that day. If the account is active, Hammertoss searches for a tweet with a URL and hashtag, and then visits the URL.

That's where a legit-looking image is grabbed and then opened by Hammertoss: the image contains encrypted instructions, which Hammertoss decrypts. The commands, which include instructions for obtaining files from the victim's network, typically then lead the malware to send that stolen information to a cloud-based storage service.
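The daily-handle scheme is essentially a domain-generation algorithm pointed at Twitter: operators and implants share a deterministic generator, so the operators know which handle to register on any given day without ever contacting the malware. FireEye doesn't publish the actual algorithm; a minimal sketch of the general technique looks something like this (the salt is hypothetical):

```python
import datetime
import hashlib

def handle_for(day: datetime.date) -> str:
    """Derive the day's Twitter handle from the date.

    Operators run the same function, so they know which handle to
    register on any given day without contacting the implants.
    """
    seed = day.strftime("%Y-%m-%d").encode() + b"hypothetical-salt"
    digest = hashlib.sha256(seed).hexdigest()
    letters = [c for c in digest if c.isalpha()]   # a-f from the hex digest
    return "".join(letters[:12])                   # handles max out at 15 chars

print(handle_for(datetime.date(2015, 7, 31)))
```

Because a handle only needs to exist on the day it's used, defenders can't block the channel in advance without reverse-engineering the generator.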

Another article. Reddit thread.

Posted on July 31, 2015 at 11:12 AM | 15 Comments

Backdoors Won't Solve Comey's Going Dark Problem

At the Aspen Security Forum two weeks ago, James Comey (and others) explicitly talked about the "going dark" problem, describing the specific scenario they are concerned about. Maybe others have heard the scenario before, but it was a first for me. It centers around ISIL operatives abroad and ISIL-inspired terrorists here in the US. The FBI knows who the Americans are, can get a court order to carry out surveillance on their communications, but cannot eavesdrop on the conversations, because they are encrypted. They can get the metadata, so they know who is talking to whom, but they can't find out what's being said.

"ISIL's M.O. is to broadcast on Twitter, get people to follow them, then move them to Twitter Direct Messaging" to evaluate if they are a legitimate recruit, he said. "Then they'll move them to an encrypted mobile-messaging app so they go dark to us."

[...]

The FBI can get court-approved access to Twitter exchanges, but not to encrypted communication, Comey said. Even when the FBI demonstrates probable cause and gets a judicial order to intercept that communication, it cannot break the encryption for technological reasons, according to Comey.

If this is what Comey and the FBI are actually concerned about, they're getting bad advice -- because their proposed solution won't solve the problem. Comey wants communications companies to give them the capability to eavesdrop on conversations without the conversants' knowledge or consent; that's the "backdoor" we're all talking about. But the problem isn't that most encrypted communications platforms are securely encrypted, or even that some are -- the problem is that there exists at least one securely encrypted communications platform on the planet that ISIL can use.

Imagine that Comey got what he wanted. Imagine that iMessage and Facebook and Skype and everything else US-made had his backdoor. The ISIL operative would tell his potential recruit to use something else, something secure and non-US-made. Maybe an encryption program from Finland, or Switzerland, or Brazil. Maybe Mujahedeen Secrets. Maybe anything. (Sure, some of these will have flaws, and they'll be identifiable by their metadata, but the FBI already has the metadata, and the better software will rise to the top.) As long as there is something that the ISIL operative can move them to, some software that the American can download and install on their phone or computer, or hardware that they can buy from abroad, the FBI still won't be able to eavesdrop.

And by pushing ISIL operatives onto non-US platforms, the FBI loses access to the metadata it would otherwise have.

Convincing US companies to install backdoors isn't enough; in order to solve this going dark problem, the FBI has to ensure that an American can only use backdoored software. And the only way to do that is to prohibit the use of non-backdoored software, which is the sort of thing that the UK's David Cameron said he wanted for his country in January:

But the question is are we going to allow a means of communications which it simply isn't possible to read. My answer to that question is: no, we must not.

And that, of course, is impossible. Jonathan Zittrain explained why. And Cory Doctorow outlined what trying would entail:

For David Cameron's proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you've downloaded hasn't been tampered with.

[...]

This, then, is what David Cameron is proposing:

* All Britons' communications must be easy for criminals, voyeurs and foreign spies to intercept.

* Any firms within reach of the UK government must be banned from producing secure software.

* All major code repositories, such as Github and Sourceforge, must be blocked.

* Search engines must not answer queries about web-pages that carry secure software.

* Virtually all academic security work in the UK must cease -- security research must only take place in proprietary research environments where there is no onus to publish one's findings, such as industry R&D and the security services.

* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped.

* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software.

* Anyone visiting the country from abroad must have their smartphones held at the border until they leave.

* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons.

* Free/open source operating systems -- that power the energy, banking, ecommerce, and infrastructure sectors -- must be banned outright.

As extreme as that list reads, without all of it the ISIL operative would still be able to communicate securely with his potential American recruit. And none of it is going to happen.
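Doctorow's point about cryptographic signing is worth making concrete. Because the integrity check travels separately from the download, any mirror anywhere can serve the bytes. Here is a minimal sketch of the weakest form of the check -- the URL and digest are placeholders, and real projects go further by GPG-signing the digest file itself:

```python
import hashlib
import urllib.request

# Placeholder URL and digest: real projects publish the digest (and a
# signature over it) through a separate channel from the download.
URL = "https://mirror.example.org/securetool-1.0.tar.gz"
EXPECTED_SHA256 = "c0ffee..."  # 64 hex chars in practice

def fetch_and_verify(url: str, expected: str) -> bytes:
    """Download a package and refuse it unless its SHA-256 matches."""
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected:
        raise ValueError("digest mismatch -- possible tampering; refuse to install")
    return data
```

Only the digest has to arrive intact, and it fits in a tweet. That's why blocking Github and Sourceforge, as the list above contemplates, accomplishes nothing.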

Last week, former NSA director Mike McConnell, former DHS secretary Michael Chertoff, and former deputy defense secretary William Lynn published a Washington Post op-ed opposing backdoors in encryption software. They wrote:

Today, with almost everyone carrying a networked device on his or her person, ubiquitous encryption provides essential security. If law enforcement and intelligence organizations face a future without assured access to encrypted communications, they will develop technologies and techniques to meet their legitimate mission goals.

I believe this is true. Already one is being talked about in the academic literature: lawful hacking.

Perhaps the FBI's reluctance to accept this is based on their belief that all encryption software comes from the US, and therefore is under their influence. Back in the 1990s, during the first Crypto Wars, the US government had a similar belief. To convince them otherwise, George Washington University surveyed the cryptography market in 1999 and found that there were over 500 companies in 70 countries manufacturing or distributing non-US cryptography products. Maybe we need a similar study today.

This essay previously appeared on Lawfare.

Posted on July 31, 2015 at 6:08 AM | 87 Comments

Comparing the Security Practices of Experts and Non-Experts

New paper: "'...no one can hack my mind': Comparing Expert and Non-Expert Security Practices," by Iulia Ion, Rob Reeder, and Sunny Consolvo.

Abstract: The state of advice given to people today on how to stay safe online has plenty of room for improvement. Too many things are asked of them, which may be unrealistic, time consuming, or not really worth the effort. To improve the security advice, our community must find out what practices people use and what recommendations, if messaged well, are likely to bring the highest benefit while being realistic to ask of people. In this paper, we present the results of a study which aims to identify which practices people do that they consider most important at protecting their security on-line. We compare self-reported security practices of non-experts to those of security experts (i.e., participants who reported having five or more years of experience working in computer security). We report on the results of two online surveys -- one with 231 security experts and one with 294 MTurk participants -- on what the practices and attitudes of each group are. Our findings show a discrepancy between the security practices that experts and non-experts report taking. For instance, while experts most frequently report installing software updates, using two-factor authentication and using a password manager to stay safe online, non-experts report using antivirus software, visiting only known websites, and changing passwords frequently.

Posted on July 30, 2015 at 2:21 PM | 25 Comments

The NSA, Metadata, and the Failure of Stopping 9/11

It's common wisdom that the NSA was unable to intercept phone calls from Khalid al-Mihdhar in San Diego to Bin Ladin in Yemen because of legal restrictions. This has been used to justify the NSA's massive phone metadata collection programs. James Bamford argues that there were no legal restrictions, and that the NSA screwed up.

Posted on July 30, 2015 at 6:13 AM | 30 Comments
