Blog: August 2015 Archives

Using Samsung's Internet-Enabled Refrigerator for Man-in-the-Middle Attacks

This is interesting research:

Whilst the fridge implements SSL, it FAILS to validate SSL certificates, thereby enabling man-in-the-middle attacks against most connections. This includes those made to Google’s servers to download Gmail calendar information for the on-screen display.

So, MITM the victim’s fridge from next door, or on the road outside and you can potentially steal their Google credentials.

The notable exception to the rule above is when the terminal connects to the update server—we were able to isolate the URL https://www.samsungotn.net which is the same used by TVs, etc. We generated a set of certificates with the exact same contents as those on the real website (fake server cert + fake CA signing cert) in the hope that the validation was weak but it failed.

The terminal must have a copy of the CA and is making sure that the server’s cert is signed against that one. We can’t hack this without access to the file system where we could replace the CA it is validating against. Long story short we couldn’t intercept communications between the fridge terminal and the update server.
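The flaw is depressingly common: wrapping a connection in SSL does nothing if the client accepts whatever certificate it is handed. A minimal Python sketch of the difference between the fridge's behavior and correct validation (the fridge's firmware is obviously not Python; this just illustrates the two verification modes):

```python
import socket
import ssl

# What the fridge apparently does: skip chain and hostname validation,
# so a man in the middle can present any certificate and read the traffic.
unsafe = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
unsafe.check_hostname = False
unsafe.verify_mode = ssl.CERT_NONE

# What it should do: verify the chain against trusted CAs and check that
# the certificate actually names the host being contacted.
safe = ssl.create_default_context()

# What the update server does (pinning): validate only against a bundled
# vendor CA, e.g. ssl.create_default_context(cafile="vendor_ca.pem").

def fetch_cert(host, port=443):
    """Connect with full validation; a forged certificate raises ssl.SSLError."""
    with socket.create_connection((host, port), timeout=10) as sock:
        with safe.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

The pinned update channel resisted the researchers for exactly the reason the comment notes: without replacing the bundled CA on the device's file system, a fake certificate chain fails validation.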

When I think about the security implications of the Internet of things, this is one of my primary worries. As we connect things to each other, vulnerabilities on one of them affect the security of another. And because so many of the things we connect to the Internet will be poorly designed, and low cost, there will be lots of vulnerabilities in them. Expect a lot more of this kind of thing as we move forward.

EDITED TO ADD (9/11): Dave Barry reblogged me.

Posted on August 31, 2015 at 1:56 PM

Mickens on Security

James Mickens, for your amusement. A somewhat random sample:

My point is that security people need to get their priorities straight. The “threat model” section of a security paper resembles the script for a telenovela that was written by a paranoid schizophrenic: there are elaborate narratives and grand conspiracy theories, and there are heroes and villains with fantastic (yet oddly constrained) powers that necessitate a grinding battle of emotional and technical attrition. In the real world, threat models are much simpler (see Figure 1). Basically, you’re either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you’ll probably be fine if you pick a good password and don’t respond to emails from ChEaPestPAiNPi11s@virus-basket.biz.ru. If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them. In summary, https:// and two dollars will get you a bus ticket to nowhere. Also, SANTA CLAUS ISN’T REAL. When it rains, it pours.

Posted on August 28, 2015 at 3:58 PM

Iranian Phishing

CitizenLab is reporting on Iranian hacking attempts against activists, which include a real-time man-in-the-middle attack against Google’s two-factor authentication.

This report describes an elaborate phishing campaign against targets in Iran’s diaspora, and at least one Western activist. The ongoing attacks attempt to circumvent the extra protections conferred by two-factor authentication in Gmail, and rely heavily on phone-call based phishing and “real time” login attempts by the attackers. Most of the attacks begin with a phone call from a UK phone number, with attackers speaking in either English or Farsi.

The attacks point to extensive knowledge of the targets’ activities, and share infrastructure and tactics with campaigns previously linked to Iranian threat actors. We have documented a growing number of these attacks, and have received reports that we cannot confirm of targets and victims of highly similar attacks, including in Iran. The report includes extra detail to help potential targets recognize similar attacks. The report closes with some security suggestions, highlighting the importance of two-factor authentication.

The report quotes my previous writing on the vulnerabilities of two-factor authentication:

As researchers have observed for at least a decade, a range of attacks are available against 2FA. Bruce Schneier anticipated in 2005, for example, that attackers would develop real time attacks using both man-in-the-middle attacks, and attacks against devices. The “real time” phishing against 2FA that Schneier anticipated was reported at least 9 years ago.

Today, researchers regularly point out the rise of “real-time” 2FA phishing, much of it in the context of online fraud. A 2013 academic article provides a systematic overview of several of these vectors. These attacks can take the form of theft of 2FA credentials from devices (e.g. “Man in the Browser” attacks), or by using 2FA login pages. Some of the malware-based campaigns that target 2FA have been tracked for several years, are highly involved, and involve convincing targets to install separate Android apps to capture one-time passwords. Another category of these attacks works by exploiting phone number changes, SIM card registrations, and badly protected voicemail.
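The reason a one-time code doesn't defeat this: the code is valid for a short window, and a phishing page that forwards it within that window is indistinguishable from the user. A toy model of the relay (all names and values invented):

```python
import time

class ToyTOTPService:
    """Stand-in for a site protected by a password plus a time-based code."""
    def __init__(self, password, otp, window=30):
        self.password, self.otp = password, otp
        self.issued, self.window = time.time(), window

    def login(self, password, otp):
        # The code only works while it is fresh...
        fresh = (time.time() - self.issued) < self.window
        return password == self.password and otp == self.otp and fresh

site = ToyTOTPService(password="hunter2", otp="492871")

# The victim types both secrets into the attacker's look-alike page...
captured = {"password": "hunter2", "otp": "492871"}

# ...and a "real-time" phisher replays them inside the validity window.
# Against an attacker this fast, the second factor adds nothing.
assert site.login(**captured)
```

This is why the report's attackers phoned their targets: keeping the victim on the line guarantees the relay happens while the code is still live.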

Boing Boing article. Hacker News thread.

Posted on August 27, 2015 at 12:36 PM

Defending All the Targets Is Impossible

In the wake of the recent averted mass shooting on the French railroads, officials are realizing that there are just too many potential targets to defend.

The sheer number of militant suspects combined with a widening field of potential targets have presented European officials with what they concede is a nearly insurmountable surveillance task. The scale of the challenge, security experts fear, may leave the Continent entering a new climate of uncertainty, with added risk attached to seemingly mundane endeavors, like taking a train.

The article talks about the impossibility of instituting airport-like security at train stations, but of course even if it were feasible to do that, it would only serve to move the threat to some other crowded space.

Posted on August 27, 2015 at 6:57 AM

Regularities in Android Lock Patterns

Interesting:

Marte Løge, a 2015 graduate of the Norwegian University of Science and Technology, recently collected and analyzed almost 4,000 ALPs as part of her master’s thesis. She found that a large percentage of them—44 percent—started in the top left-most node of the screen. A full 77 percent of them started in one of the four corners. The average number of nodes was about five, meaning there were fewer than 9,000 possible pattern combinations. A significant percentage of patterns had just four nodes, shrinking the pool of available combinations to 1,624. More often than not, patterns moved from left to right and top to bottom, another factor that makes guessing easier.
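Those numbers are easy to sanity-check. The Android pattern rules (no node reused, and a straight-line move may only pass over a node that has already been visited) can be enumerated with a short depth-first search; a sketch:

```python
# Count valid Android lock patterns on the 3x3 grid. Nodes are numbered
# 0-8, row-major. MID[(a, b)] is the node that a straight move a->b
# passes over, when one exists.
MID = {}
for a in range(9):
    for b in range(9):
        (ra, ca), (rb, cb) = divmod(a, 3), divmod(b, 3)
        if a != b and (ra + rb) % 2 == 0 and (ca + cb) % 2 == 0:
            m = ((ra + rb) // 2) * 3 + (ca + cb) // 2
            if m not in (a, b):
                MID[(a, b)] = m

def count_patterns(length):
    def dfs(cur, visited, left):
        if left == 0:
            return 1
        total = 0
        for nxt in range(9):
            if nxt in visited:
                continue
            mid = MID.get((cur, nxt))
            if mid is not None and mid not in visited:
                continue  # illegal jump over an unvisited node
            total += dfs(nxt, visited | {nxt}, left - 1)
        return total
    return sum(dfs(s, {s}, length - 1) for s in range(9))
```

Four-node patterns come out to 1,624, matching the article's figure, and restricting starts to corners or motion to left-to-right shrinks the effective search space much further still.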

EDITED TO ADD (9/10): Similar research on this sort of thing.

Posted on August 26, 2015 at 6:24 AM

Movie Plot Threat: Terrorists Attacking US Prisons

Kansas Senator Pat Roberts wins an award for his movie-plot threat: terrorists attacking the maximum-security federal prison at Ft. Leavenworth:

In an Aug. 14 letter to Defense Secretary Ashton B. Carter, Roberts stressed that Kansas in general—and Leavenworth, in particular—are not ideal for a domestic detention facility.

“Fort Leavenworth is neither the ideal nor right location for moving Guantánamo detainees,” Roberts wrote to Defense Secretary Ashton B. Carter. “The installation lies right on the Missouri River, providing terrorists with the possibility of covert travel underwater and attempting access to the detention facility.”

Not just terrorists, but terrorists with a submarine! This is why Ft. Leavenworth, a prison from which no one has ever escaped, is unsuitable for housing Guantanamo detainees.

I’ve never understood the argument that terrorists are too dangerous to house in US prisons. They’re just terrorists, it’s not like they’re Magneto.

Posted on August 25, 2015 at 2:19 PM

Are Data Breaches Getting Larger?

This research says that data breaches are not getting larger over time.

"Hype and Heavy Tails: A Closer Look at Data Breaches," by Benjamin Edwards, Steven Hofmeyr, and Stephanie Forrest:

Abstract: Recent widely publicized data breaches have exposed the personal information of hundreds of millions of people. Some reports point to alarming increases in both the size and frequency of data breaches, spurring institutions around the world to address what appears to be a worsening situation. But, is the problem actually growing worse? In this paper, we study a popular public dataset and develop Bayesian Generalized Linear Models to investigate trends in data breaches. Analysis of the model shows that neither size nor frequency of data breaches has increased over the past decade. We find that the increases that have attracted attention can be explained by the heavy-tailed statistical distributions underlying the dataset. Specifically, we find that data breach size is log-normally distributed and that the daily frequency of breaches is described by a negative binomial distribution. These distributions may provide clues to the generative mechanisms that are responsible for the breaches. Additionally, our model predicts the likelihood of breaches of a particular size in the future. For example, we find that in the next year there is only a 31% chance of a breach of 10 million records or more in the US. Regardless of any trend, data breaches are costly, and we combine the model with two different cost models to project that in the next three years breaches could cost up to $55 billion.
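The distributional claim is the heart of the argument: with a log-normal, occasional enormous breaches are expected even when nothing is trending upward. A quick illustration (the mu and sigma here are invented for the sketch, not the paper's fitted values):

```python
import random
import statistics

random.seed(1)

# Synthetic "breach sizes" drawn from a log-normal distribution.
sizes = [random.lognormvariate(mu=7.0, sigma=2.5) for _ in range(50_000)]

typical = statistics.median(sizes)  # the ordinary breach
average = statistics.mean(sizes)    # dragged far upward by the tail
biggest = max(sizes)

# In a heavy-tailed sample the mean dwarfs the median, and the single
# largest event dwarfs both: record-setting headlines are perfectly
# consistent with a flat underlying process.
```

That gap between the median and the mean is why counting headlines is a bad way to estimate a trend.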

The paper was presented at WEIS 2015.

Posted on August 25, 2015 at 6:27 AM

The Advertising Value of Intrusive Tracking

Here’s an interesting research paper that tries to calculate the differential value of privacy-invasive advertising practices.

The researchers used data from a mobile ad network and were able to see how different personalized advertising practices affected customer purchasing behavior. The details are interesting, but basically, most personal information had little value. Overall, the ability to target advertising produces a 29% greater return on an advertising budget, mostly by knowing the right time to show someone a particular ad.

The paper was presented at WEIS 2015.

Posted on August 24, 2015 at 5:50 AM

Friday Squid Blogging: Calamari Ripieni Recipe

Nice and easy Calamari Ripieni recipe, along with general instructions on cooking squid:

Tenderizing squid is as simple as pounding it flat—if you’re going to turn it into a steak. Otherwise, depending on the size of the squid, you can simply trim off the tentacles and slice the squid body, or mantle, into rings that can be grilled, sautéed, breaded and fried, added to soup, added to salad or pasta, or marinated. You can also—as chef Accursio Lota of Solare does—stuff the squid with bread crumbs and aromatics and quickly bake it or grill it to serve with salad.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on August 21, 2015 at 4:07 PM

NSA Plans for a Post-Quantum World

Quantum computing is a novel way to build computers—one that takes advantage of the quantum properties of particles to perform operations on data in a very different way than traditional computers. In some cases, the algorithm speedups are extraordinary.

Specifically, a quantum computer using something called Shor’s algorithm can efficiently factor numbers, breaking RSA. A variant can break Diffie-Hellman and other discrete log-based cryptosystems, including those that use elliptic curves. This could potentially render all modern public-key algorithms insecure. Before you panic, note that the largest number to date that has been factored by a quantum computer is 143. So while a practical quantum computer is still science fiction, it’s not stupid science fiction.
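Shor's algorithm is quantum in only one step: finding the period r of a^x mod N. Everything around it is classical number theory. Here is a toy factorization with the period found by brute force, which is exactly the part that takes exponential time classically and polynomial time on a quantum computer (a sketch, not anyone's production code):

```python
from math import gcd

def classical_shor(n, a):
    """Factor n via the period of a^x mod n, found here by brute force.
    A quantum computer replaces only the period-finding loop."""
    g = gcd(a, n)
    if g > 1:
        return g, n // g       # lucky: a already shares a factor with n
    r, x = 1, a % n
    while x != 1:              # the exponential classical step
        x = (x * a) % n
        r += 1
    if r % 2:
        return None            # need an even period; try another a
    y = pow(a, r // 2, n)
    f = gcd(y - 1, n)
    if 1 < f < n:
        return f, n // f
    return None                # unlucky choice of a; try another
```

classical_shor(15, 7) recovers (3, 5). Against an RSA-sized modulus the while loop is hopeless, and closing that gap is the entire threat.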

(Note that this is completely different from quantum cryptography, which is a way of passing bits between two parties that relies on physical quantum properties for security. The only thing quantum computation and quantum cryptography have to do with each other is their first words. It is also completely different from the NSA’s QUANTUM program, which is its code name for a packet-injection system that works directly in the Internet backbone.)

Practical quantum computation doesn’t mean the end of cryptography. There are lesser-known public-key algorithms such as McEliece and lattice-based algorithms that, while less efficient than the ones we use, are currently secure against a quantum computer. And quantum computation only speeds up a brute-force keysearch by a factor of a square root, so any symmetric algorithm can be made secure against a quantum computer by doubling the key length.
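The square-root figure is Grover's algorithm: an unstructured search over 2^n keys takes on the order of 2^(n/2) quantum evaluations, so an n-bit symmetric key keeps only about n/2 bits of effective security. The arithmetic behind "double the key length," as a sketch:

```python
import math

def grover_security_bits(key_bits):
    """Effective post-quantum security of an n-bit symmetric key:
    Grover searches 2**n keys in about 2**(n/2) evaluations."""
    return key_bits // 2

# AES-128 falls to 64-bit effective security; doubling to AES-256
# restores a 128-bit margin against a quantum keysearch.
aes128 = grover_security_bits(128)
aes256 = grover_security_bits(256)

# sqrt(2**128) is exactly 2**64, i.e. half the bits.
assert math.isqrt(2 ** 128) == 2 ** 64
```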

We know from the Snowden documents that the NSA is conducting research on both quantum computation and quantum cryptography. It’s not a lot of money, and few believe that the NSA has made any real advances in theoretical or applied physics in this area. My guess has been that we’ll see a practical quantum computer within 30 to 40 years, but not much sooner than that.

This all means that now is the time to think about what living in a post-quantum world would be like. NIST is doing its part, having hosted a conference on the topic earlier this year. And the NSA announced that it is moving towards quantum-resistant algorithms.

Earlier this week, the NSA’s Information Assurance Directorate updated its list of Suite B cryptographic algorithms. It explicitly talked about the threat of quantum computers:

IAD will initiate a transition to quantum resistant algorithms in the not too distant future. Based on experience in deploying Suite B, we have determined to start planning and communicating early about the upcoming transition to quantum resistant algorithms. Our ultimate goal is to provide cost effective security against a potential quantum computer. We are working with partners across the USG, vendors, and standards bodies to ensure there is a clear plan for getting a new suite of algorithms that are developed in an open and transparent manner that will form the foundation of our next Suite of cryptographic algorithms.

Until this new suite is developed and products are available implementing the quantum resistant suite, we will rely on current algorithms. For those partners and vendors that have not yet made the transition to Suite B elliptic curve algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition.

Suite B is a family of cryptographic algorithms approved by the NSA. It’s all part of the NSA’s Cryptographic Modernization Program. Traditionally, NSA algorithms were classified and could only be used in specially built hardware modules. Suite B algorithms are public, and can be used in anything. This is not to say that Suite B algorithms are second class, or breakable by the NSA. They’re being used to protect US secrets: “Suite A will be used in applications where Suite B may not be appropriate. Both Suite A and Suite B can be used to protect foreign releasable information, US-Only information, and Sensitive Compartmented Information (SCI).”

The NSA is worried enough about advances in the technology to start transitioning away from algorithms that are vulnerable to a quantum computer. Does this mean that the agency is close to a working prototype in their own classified labs? Unlikely. Does this mean that they envision practical quantum computers sooner than my 30-to-40-year estimate? Certainly.

Unlike most personal and corporate applications, the NSA routinely deals with information it wants kept secret for decades. Even so, we should all follow the NSA’s lead and transition our own systems to quantum-resistant algorithms over the next decade or so—possibly even sooner.

The essay previously appeared on Lawfare.

EDITED TO ADD: The computation that factored 143 also accidentally “factored much larger numbers such as 3599, 11663, and 56153, without the awareness of the authors of that work,” which shows how weird this all is.

EDITED TO ADD: Seems that I need to be clearer: I do not stand by my 30-40-year prediction. The NSA is acting like practical quantum computers will exist long before then, and I am deferring to their expertise.

Posted on August 21, 2015 at 12:36 PM

SS7 Phone-Switch Flaw Enabled Surveillance

Interesting:

Remember that vulnerability in the SS7 inter-carrier network that lets hackers and spies track your cellphone virtually anywhere in the world? It’s worse than you might have thought. Researchers speaking to Australia’s 60 Minutes have demonstrated that it’s possible for anyone to intercept phone calls and text messages through that same network. So long as the attackers have access to an SS7 portal, they can forward your conversations to an online recording device and reroute the call to its intended destination. This helps anyone bent on surveillance, of course, but it also means that a well-equipped criminal could grab your verification messages (such as the kind used in two-factor authentication) and use them before you’ve even seen them.

I wrote about cell phone tracking based on SS7 in Data & Goliath (pp. 2-3):

The US company Verint sells cell phone tracking systems to both corporations and governments worldwide. The company’s website says that it’s “a global leader in Actionable Intelligence solutions for customer engagement optimization, security intelligence, and fraud, risk and compliance,” with clients in “more than 10,000 organizations in over 180 countries.” The UK company Cobham sells a system that allows someone to send a “blind” call to a phone—one that doesn’t ring, and isn’t detectable. The blind call forces the phone to transmit on a certain frequency, allowing the sender to track that phone to within one meter. The company boasts government customers in Algeria, Brunei, Ghana, Pakistan, Saudi Arabia, Singapore, and the United States. Defentek, a company mysteriously registered in Panama, sells a system that can “locate and track any phone number in the world…undetected and unknown by the network, carrier, or the target.” It’s not an idle boast; telecommunications researcher Tobias Engel demonstrated the same thing at a hacker conference in 2008. Criminals do the same today.

Posted on August 21, 2015 at 6:47 AM

No-Fly List Uses Predictive Assessments

The US government has admitted that it uses predictive assessments to put people on the no-fly list:

In a little-noticed filing before an Oregon federal judge, the US Justice Department and the FBI conceded that stopping US and other citizens from travelling on airplanes is a matter of “predictive assessments about potential threats,” the government asserted in May.

“By its very nature, identifying individuals who ‘may be a threat to civil aviation or national security’ is a predictive judgment intended to prevent future acts of terrorism in an uncertain context,” Justice Department officials Benjamin C Mizer and Anthony J Coppolino told the court on 28 May.

“Judgments concerning such potential threats to aviation and national security call upon the unique prerogatives of the Executive in assessing such threats.”

It is believed to be the government’s most direct acknowledgement to date that people are not allowed to fly because of what the government believes they might do and not what they have already done.

When you have a secret process that can judge and penalize people without due process or oversight, this is the kind of thing that happens.

Posted on August 20, 2015 at 6:19 AM

Nasty Cisco Attack

This is serious:

Cisco Systems officials are warning customers of a series of attacks that completely hijack critical networking gear by swapping out the valid ROMMON firmware image with one that’s been maliciously altered.

The attackers use valid administrator credentials, an indication the attacks are being carried out either by insiders or people who have otherwise managed to get hold of the highly sensitive passwords required to update and make changes to the Cisco hardware. Short for ROM Monitor, ROMMON is the means for booting Cisco’s IOS operating system. Administrators use it to perform a variety of configuration tasks, including recovering lost passwords, downloading software, or in some cases running the router itself.

There’s no indication of who is doing these attacks, but it’s exactly the sort of thing you’d expect out of a government attacker. Regardless of which government initially discovered this, assume that they’re all exploiting it by now—and will continue to do so until it’s fixed.

Posted on August 19, 2015 at 12:34 PM

AVA: A Social Engineering Vulnerability Scanner

This is interesting:

First, it integrates with corporate directories such as Active Directory and social media sites like LinkedIn to map the connections between employees, as well as important outside contacts. Bell calls this the “real org chart.” Hackers can use such information to choose people they ought to impersonate while trying to scam employees.

From there, AVA users can craft custom phishing campaigns, both in email and Twitter, to see how employees respond. Finally, and most importantly, it helps organizations track the results of these campaigns. You could use AVA to evaluate the effectiveness of two different security training programs, see which employees need more training, or find places where additional security is needed.

Of course, the problem is that both good guys and bad guys can use this tool. Which makes it like pretty much every other vulnerability scanner.

Posted on August 19, 2015 at 7:11 AM

Did Kaspersky Fake Malware?

Two former Kaspersky employees have accused the company of faking malware to harm rival antivirus products. They would falsely classify legitimate files as malicious, tricking other antivirus companies that blindly copied Kaspersky’s data into deleting them from their customers’ computers.

In one technique, Kaspersky’s engineers would take an important piece of software commonly found in PCs and inject bad code into it so that the file looked like it was infected, the ex-employees said. They would send the doctored file anonymously to VirusTotal.

Then, when competitors ran this doctored file through their virus detection engines, the file would be flagged as potentially malicious. If the doctored file looked close enough to the original, Kaspersky could fool rival companies into thinking the clean file was problematic as well.

[…]

The former Kaspersky employees said Microsoft was one of the rivals that were targeted because many smaller security companies followed the Redmond, Washington-based company’s lead in detecting malicious files. They declined to give a detailed account of any specific attack.

Microsoft’s antimalware research director, Dennis Batchelder, told Reuters in April that he recalled a time in March 2013 when many customers called to complain that a printer code had been deemed dangerous by its antivirus program and placed in “quarantine.”

Batchelder said it took him roughly six hours to figure out that the printer code looked a lot like another piece of code that Microsoft had previously ruled malicious. Someone had taken a legitimate file and jammed a wad of bad code into it, he said. Because the normal printer code looked so much like the altered code, the antivirus program quarantined that as well.

Over the next few months, Batchelder’s team found hundreds, and eventually thousands, of good files that had been altered to look bad.

Kaspersky denies it.

EDITED TO ADD (8/19): Here’s an October 2013 presentation by Microsoft on the attacks.

EDITED TO ADD (9/11): A dissenting opinion.

Posted on August 18, 2015 at 2:35 PM

More on Mail Cover

I’ve previously written about mail cover—the practice of recording the data on mail envelopes. Sai has been covering the issue in more detail, and recently received an unredacted copy of a 2014 audit report. The New York Times has an article on it:

In addition to raising privacy concerns, the audit questioned the Postal Service’s efficiency and accuracy in handling mail cover requests. Many requests were processed late, the audit said, which delayed surveillance, and computer errors caused the same tracking number to be assigned to different requests.

[…]

The inspector general also found that the Postal Inspection Service did not have “sufficient controls” in place to ensure that its employees followed the agency’s policies in handling the national security mail covers.

According to the audit, about 10 percent of requests did not include the dates for the period covered by surveillance. Without the dates in the files, auditors were unable to determine if the Postal Service had followed procedures for allowing law enforcement agencies to monitor mail for a specific period of time.

Additionally, 15 percent of the inspectors who handled the mail covers did not have the proper nondisclosure agreements on file for handling classified materials, records that must be maintained for 50 years. The agreements would prohibit the postal workers from discussing classified information.

And the inspector general found that in about 32 percent of cases, postal inspectors did not include, as required, the date on which they visited facilities where mail covers were being processed. In another 32 percent of cases, law enforcement agencies did not return documents to the Postal Inspection Service’s Office of Counsel, which handles the national security mail covers, within the prescribed 60 days after a case was closed.

Posted on August 18, 2015 at 6:48 AM

Oracle CSO Rant Against Security Experts

Oracle’s CSO Mary Ann Davidson wrote a blog post ranting against security experts finding vulnerabilities in her company’s products. The blog post has been taken down by the company, but was saved for posterity by others. There’s been lots of commentary.

It’s easy to just mock Davidson’s stance, but it’s dangerous to our community. Yes, if researchers don’t find vulnerabilities in Oracle products, then the company won’t look bad and won’t have to patch things. But the real attackers—whether they be governments, criminals, or cyberweapons arms manufacturers who sell to government and criminals—will continue to find vulnerabilities in her products. And while they won’t make a press splash and embarrass her, they will exploit them.

Posted on August 17, 2015 at 6:45 AM

NSA's Partnership with AT&T

There’s a new article, published jointly by The New York Times and ProPublica, about the NSA’s longstanding relationship with AT&T. It’s based on the Snowden documents, and there are a bunch of new pages published.

AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the N.S.A. access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T.

[…]
In 2011, AT&T began handing over 1.1 billion domestic cellphone calling records a day to the N.S.A. after “a push to get this flow operational prior to the 10th anniversary of 9/11,” according to an internal agency newsletter. This revelation is striking because after Mr. Snowden disclosed the program of collecting the records of Americans’ phone calls, intelligence officials told reporters that, for technical reasons, it consisted mostly of landline phone records.

Lots of details in the article and in the documents. Here’s commentary from the EFF.

EDITED TO ADD (8/16): ProPublica has published a companion piece showing how they linked both AT&T and Verizon with the NSA’s codenames. And Marcy Wheeler has some good commentary.

More commentary.

Posted on August 15, 2015 at 12:44 PM

Cosa Nostra Dead Drops

Good operational security is hard, and often uses manual technologies:

Investigators described how Messina Denaro, 53, disdains telecommunications and relies on handwritten notes, or “pizzini,” to relay orders. The notes were wadded tight, covered in tape and hidden under rocks or dug into soil until go-betweens retrieved them. The messages were ordered destroyed after being read.

That’s a classic dead drop.

Posted on August 13, 2015 at 6:33 AM

Another Salvo in the Second Crypto War (of Words)

Prosecutors from New York, London, Paris, and Madrid wrote an op-ed in yesterday’s New York Times in favor of backdoors in cell phone encryption. There are a number of flaws in their argument, ranging from how easy it is to get data off an encrypted phone to the dangers of designing a backdoor in the first place, but all of that has been said before. And since anecdote can be more persuasive than data, the op-ed started with one:

In June, a father of six was shot dead on a Monday afternoon in Evanston, Ill., a suburb 10 miles north of Chicago. The Evanston police believe that the victim, Ray C. Owens, had also been robbed. There were no witnesses to his killing, and no surveillance footage either.

With a killer on the loose and few leads at their disposal, investigators in Cook County, which includes Evanston, were encouraged when they found two smartphones alongside the body of the deceased: an iPhone 6 running on Apple’s iOS 8 operating system, and a Samsung Galaxy S6 Edge running on Google’s Android operating system. Both devices were passcode protected.

You can guess the rest. A judge issued a warrant, but neither Apple nor Google could unlock the phones. “The homicide remains unsolved. The killer remains at large.”

The Intercept researched the example, and it seems to be real. The phones belonged to the victim, and…

According to Commander Joseph Dugan of the Evanston Police Department, investigators were able to obtain records of the calls to and from the phones, but those records did not prove useful. By contrast, interviews with people who knew Owens suggested that he communicated mainly through text messages—the kind that travel as encrypted data—and had made plans to meet someone shortly before he was shot.

The information on his phone was not backed up automatically on Apple’s servers—apparently because he didn’t use wi-fi, which backups require.

[…]

But Dugan also wasn’t as quick to lay the blame solely on the encrypted phones. “I don’t know if getting in there, getting the information, would solve the case,” he said, “but it definitely would give us more investigative leads to follow up on.”

This is the first actual example I’ve seen illustrating the value of a backdoor. Unlike the increasingly common example of an ISIL handler abroad communicating securely with a radicalized person in the US, it’s an example where a backdoor might have helped. I say “might have,” because the Galaxy S6 is not encrypted by default, which means the victim deliberately turned the encryption on. If the native smartphone encryption had been backdoored, we don’t know if the victim would have turned it on nevertheless, or if he would have employed a different, non-backdoored, app.

The authors’ other examples are much sloppier:

Between October and June, 74 iPhones running the iOS 8 operating system could not be accessed by investigators for the Manhattan district attorney’s office—despite judicial warrants to search the devices. The investigations that were disrupted include the attempted murder of three individuals, the repeated sexual abuse of a child, a continuing sex trafficking ring and numerous assaults and robberies.

[…]

In France, smartphone data was vital to the swift investigation of the Charlie Hebdo terrorist attacks in January, and the deadly attack on a gas facility at Saint-Quentin-Fallavier, near Lyon, in June. And on a daily basis, our agencies rely on evidence lawfully retrieved from smartphones to fight sex crimes, child abuse, cybercrime, robberies or homicides.

We’ve heard that 74 number before. It’s over nine months, in an office that handles about 100,000 cases a year: less than 0.1% of the time. Details about those cases would be useful, so we can determine if encryption was just an impediment to investigation, or resulted in a criminal going free. The government needs to do a better job of presenting empirical data to support its case for backdoors. That they’re unable to do so suggests very strongly that an empirical analysis wouldn’t favor the government’s case.
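The arithmetic behind that "less than 0.1%" figure is simple enough to check. A sketch, using the op-ed's own numbers (neither independently verified here):

```python
# 74 inaccessible iPhones over nine months, in an office that
# handles roughly 100,000 cases per year.
cases_per_year = 100_000
months = 9
locked_phones = 74

cases_in_period = cases_per_year * months / 12   # 75,000 cases in nine months
fraction = locked_phones / cases_in_period

print(f"{fraction:.4%}")   # just under 0.1% of cases
```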

As to the Charlie Hebdo case, it’s not clear how much of that vital smartphone data was actual data, and how much of it was unable-to-be-encrypted metadata. I am reminded of the examples that then-FBI-Director Louis Freeh would give during the First Crypto Wars in the 1990s. The big one used to illustrate the dangers of encryption was Mafia boss John Gotti. But the surveillance that convicted him was a room bug, not a wiretap. Given that the examples from FBI Director James Comey’s “going dark” speech last year were bogus, skepticism in the face of anecdote seems prudent.

So much of this “going dark” versus the “golden age of surveillance” debate depends on where you start from. Referring to that first Evanston example and the inability to get evidence from the victim’s phones, the op-ed authors write: “Until very recently, this situation would not have occurred.” That’s utter nonsense. From the beginning of time until very recently, this was the only situation that could have occurred. Objects in the vicinity of an event were largely mute about the past. Few things, save for eyewitnesses, could ever reach back in time and produce evidence. Even 15 years ago, the victim’s cell phone would have had no evidence on it that couldn’t have been obtained elsewhere, and that’s if the victim had been carrying a cell phone at all.

For most of human history, surveillance has been expensive. Over the last couple of decades, it has become incredibly cheap and almost ubiquitous. That a few bits and pieces are becoming expensive again isn’t a cause for alarm.

This essay originally appeared on Lawfare.

EDITED TO ADD (8/13): Excellent parody/commentary: “When Curtains Block Justice.”

Posted on August 12, 2015 at 2:18 PM71 Comments

Intimidating Military Personnel by Targeting Their Families

This FBI alert is interesting:

(U//FOUO) In May 2015, the wife of a US military member was approached in front of her home by two Middle-Eastern males. The men stated that she was the wife of a US interrogator. When she denied their claims, the men laughed. The two men left the area in a dark-colored, four-door sedan with two other Middle-Eastern males in the vehicle. The woman had observed the vehicle in the neighborhood on previous occasions.

(U//FOUO) Similar incidents in Wyoming have been reported to the FBI throughout June 2015. On numerous occasions, family members of military personnel were confronted by Middle-Eastern males in front of their homes. The males have attempted to obtain personal information about the military member and family members through intimidation. The family members have reported feeling scared.

The report says nothing about whether these are isolated incidents, a trend, or part of a larger operation. But it has gotten me thinking about the new ways military personnel can be intimidated. More and more military personnel live here and work there, remotely as drone pilots, intelligence analysts, and so on, and their military and personal lives intertwine to a degree we have not seen before. There will be some interesting security repercussions from that.

Posted on August 12, 2015 at 5:49 AM46 Comments

Incenting Drug Dealers to Snitch on Each Other

Local police are trying to convince drug dealers to turn each other in by pointing out that it reduces competition.

It’s a comical tactic with serious results: “We offer a free service to help you eliminate your drug competition!” Under a large marijuana leaf, the flier contained a blank form encouraging drug dealers to identify the competition and provide contact information. It also asked respondents to identify the hours the competition was most active.

Posted on August 11, 2015 at 6:41 AM59 Comments

Detecting Betrayal in Diplomacy Games

Interesting research detecting betrayal in the game of Diplomacy by analyzing interplayer messages.

One harbinger was a shift in politeness. Players who were excessively polite in general were more likely to betray, and people who were suddenly more polite were more likely to become victims of betrayal, study coauthor and Cornell graduate student Vlad Niculae reported July 29 at the Annual Meeting of the Association for Computational Linguistics in Beijing. Consider this exchange from one round:

    Germany: Can I suggest you move your armies east and then I will support you? Then next year you move [there] and dismantle Turkey. I will deal with England and France, you take out Italy.

    Austria: Sounds like a perfect plan! Happy to follow through. And—thank you Bruder!

Austria’s next move was invading German territory. Bam! Betrayal.

An increase in planning-related language by the soon-to-be victim also indicated impending betrayal, a signal that emerges a few rounds before the treachery ensues. And correspondence of soon-to-be betrayers had an uptick in positive sentiment in the lead-up to their breach.

Working from these linguistic cues, a computer program could peg future betrayal 57 percent of the time. That might not sound like much, but it was better than the accuracy of the human players, who never saw it coming. And remember that by definition, a betrayer conceals the intention to betray; the breach is unexpected (that whole trust thing). Given that inherent deceit, 57 percent isn’t so bad.
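To illustrate the kind of linguistic cues involved (politeness, planning language, positive sentiment), here is a toy scorer. The word lists and scoring are invented for demonstration and bear no relation to the paper's actual features or classifier:

```python
# Toy cue counter over the three signals the study reports.
POLITE = {"please", "thank", "thanks", "sorry", "kindly"}
PLANNING = {"next", "then", "plan", "will", "after"}
POSITIVE = {"perfect", "happy", "great", "glad", "wonderful"}

def cue_counts(message: str) -> dict:
    # Crude tokenization: lowercase, drop ! and ?, strip trailing punctuation.
    words = message.lower().replace("!", " ").replace("?", " ").split()
    words = [w.strip(".,") for w in words]
    return {
        "politeness": sum(w in POLITE for w in words),
        "planning": sum(w in PLANNING for w in words),
        "positive": sum(w in POSITIVE for w in words),
    }

# Austria's reply from the exchange above registers on all three cues.
austria = "Sounds like a perfect plan! Happy to follow through. And thank you Bruder!"
print(cue_counts(austria))
```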

Back when I was in high school, I briefly published a postal Diplomacy zine.

Academic paper.

Posted on August 10, 2015 at 3:32 PM25 Comments

Friday Squid Blogging: Divers Find Squid Eggs

Divers discover a large mass of Ommastrephes bartramii eggs:

Earlier this month, a team of divers swimming off the coast of Turkey discovered something unexpected: a 4-meter wide gelatinous mass of what turned out to be one of the biggest masses of squid eggs ever discovered.

Another article, with a photo.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on August 7, 2015 at 4:58 PM221 Comments

Nicholas Weaver on iPhone Security

Excellent essay:

Yes, an iPhone configured with a proper password has enough protection that, turned off, I’d be willing to hand mine over to the DGSE, NSA, or Chinese. But many (perhaps most) users don’t configure their phones right. Beyond just waiting for the suspect to unlock his phone, most people either use a weak 4-digit passcode (that can be brute-forced) or use the fingerprint reader (which the officer has a day to force the subject to use).

Furthermore, most iPhones have a lurking security landmine enabled by default: iCloud backup. A simple warrant to Apple can obtain this backup, which includes all photographs (so there is the selfie) and all undeleted iMessages! About the only information of value not included in this backup are the known WiFi networks and the suspect’s email, but a suspect’s email is a different warrant away anyway.

Finally, there is iMessage, whose “end-to-end” nature, despite FBI complaints, contains some significant weaknesses and deserves scare-quotes. To start with, iMessage’s encryption does not obscure any metadata, and as the saying goes, “the Metadata is the Message”. So with a warrant to Apple, the FBI can obtain all the information about every message sent and received except the message contents, including time, IP addresses, recipients, and the presence and size of attachments. Apple can’t hide this metadata, because Apple needs to use this metadata to deliver messages.
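On the weak-passcode point above: the problem is keyspace size. A sketch, using a purely hypothetical guess rate of 12.5 guesses per second with no rate limiting (not a measured iPhone number; real devices add escalating delays):

```python
# Worst-case exhaustive-search time for a few passcode styles,
# at an assumed 12.5 guesses/second.
rate = 12.5  # guesses per second (hypothetical)

for label, space in [("4-digit PIN", 10**4),
                     ("6-digit PIN", 10**6),
                     ("8-char lowercase", 26**8)]:
    worst_hours = space / rate / 3600
    print(f"{label:18s} {space:>15,} codes, worst case {worst_hours:,.1f} h")
```

A 4-digit space falls in minutes at that rate; only longer or alphanumeric passcodes make brute force impractical.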

He explains how Apple could enable surveillance on iMessage and FaceTime:

So to tap Alice, it is straightforward to modify the keyserver to present an additional FBI key for Alice to everyone but Alice. Now the FBI (but not Apple) can decrypt all iMessages sent to Alice in the future. A similar modification, adding an FBI key to every request Alice makes for any keys other than her own, enables tapping all messages sent by Alice. There are similar architectural vulnerabilities which enable tapping of “end-to-end secure” FaceTime calls.
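The keyserver attack Weaver describes can be sketched in a toy model. Real iMessage uses per-device asymmetric keys and a much more involved protocol; here the cryptography is faked with string tags so only the keyserver logic is visible, and all names are hypothetical:

```python
import secrets

class KeyServer:
    def __init__(self):
        self.keys = {}    # user -> that user's public key ids
        self.extra = {}   # user -> covertly appended tap keys

    def register(self, user):
        pub = secrets.token_hex(4)
        self.keys.setdefault(user, []).append(pub)
        return pub

    def lookup(self, target, requester):
        # The attack: everyone *except* the target sees the extra key,
        # so the target's own client never notices the tap.
        ks = list(self.keys[target])
        if requester != target:
            ks += self.extra.get(target, [])
        return ks

def send(server, sender, recipient, plaintext):
    # The sender trusts the keyserver and encrypts once per reported key.
    return {pub: f"enc[{pub}]({plaintext})"
            for pub in server.lookup(recipient, sender)}

server = KeyServer()
alice_pub = server.register("alice")
fbi_pub = secrets.token_hex(4)
server.extra["alice"] = [fbi_pub]          # the tap on Alice

envelope = send(server, "bob", "alice", "hi")
print(fbi_pub in envelope)                 # True: an FBI-readable copy rides along
print(server.lookup("alice", "alice"))     # Alice auditing her own keys sees no tap
```

The fix Weaver implies is key transparency: clients comparing the key lists the server hands out, so an extra key becomes visible.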

There’s a persistent rumor going around that Apple is in the secret FISA Court, fighting a government order to make its platform more surveillance-friendly—and they’re losing. This might explain Apple CEO Tim Cook’s somewhat sudden vehemence about privacy. I have not found any confirmation of the rumor.

Posted on August 6, 2015 at 6:09 AM39 Comments

Face Recognition by Thermal Imaging

New research can identify a person by reading their thermal signature in complete darkness and then matching it with ordinary photographs.

Research paper:

Abstract: Cross modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problem. In this paper, we present an approach to bridge this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from visible to thermal spectrum while preserving the identity information. We show substantive performance improvement on a difficult thermal-visible face dataset. The presented approach improves the state-of-the-art by more than 10% in terms of Rank-1 identification and bridge the drop in performance due to the modality gap by more than 40%.

Posted on August 5, 2015 at 6:02 AM19 Comments

Shooting Down Drones

A Kentucky man shot down a drone that was hovering in his backyard:

“It was just right there,” he told Ars. “It was hovering, I would never have shot it if it was flying. When he came down with a video camera right over my back deck, that’s not going to work. I know they’re neat little vehicles, but one of those uses shouldn’t be flying into people’s yards and videotaping.”

Minutes later, a car full of four men that he didn’t recognize rolled up, “looking for a fight.”

“Are you the son of a bitch that shot my drone?” one said, according to Merideth.

His terse reply to the men, while wearing a 10mm Glock holstered on his hip: “If you cross that sidewalk onto my property, there’s going to be another shooting.”

He was arrested, but what’s the law?

In the view of drone lawyer Brendan Schulman and robotics law professor Ryan Calo, homeowners can’t just start shooting when they see a drone over their house, because the law frowns on self-help when a person can just call the police instead. This means that Meredith may not have been defending his house, but instead engaging in criminal acts and property damage for which he could have to pay.

But a different and bolder argument, put forward by law professor Michael Froomkin, could provide Meredith some cover. In a paper, Froomkin argues that it’s reasonable to assume robotic intrusions are not harmless, and that people may have a right to “employ violent self-help.”

Froomkin’s paper is well worth reading:

Abstract: Robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.

The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases—drone overflights, autonomous cars—national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots, notably the risk that the robot (or drone) may be spying—perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.

We argue that the scope of permissible self-help in defending one’s privacy should be quite broad. There is exigency in that resort to legally administered remedies would be impracticable; and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great—or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights, and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.

The paper concludes with a brief examination of what if anything our survey of a person’s right to defend against robots might tell us about the current state of robot rights against people.

Note that there are drones that shoot back.

Here are two books that talk about these topics. And an article from 2012.

EDITED TO ADD (8/9): How to shoot down a drone.

Posted on August 4, 2015 at 8:24 AM113 Comments

Vulnerabilities in Brink's Smart Safe

Brink’s sells an Internet-enabled smart safe called the CompuSafe Galileo. Despite being sold as a more secure safe, it’s wildly insecure:

Vulnerabilities found in CompuSafe Galileo safes, smart safes made by the ever-reliable Brinks company that are used by retailers, restaurants, and convenience stores, would allow a rogue employee or anyone else with physical access to them to command their doors to open and relinquish their cash….

The hack has the makings of the perfect crime, because a thief could also erase any evidence that the theft occurred simply by altering data in a back-end database where the smartsafe logs how much money is inside and who accessed it.

Nothing about these vulnerabilities is a surprise to anyone who works in computer security:

But the safes have an external USB port on the side of the touchscreens that allows service technicians to troubleshoot and obtain a backup of the database. This, unfortunately, creates an easy entrypoint for thieves to take complete, administrative control of the devices.

“Once you’re able to plug into that USB port, you’re able to access lots of things that you shouldn’t normally be able to access,” Petro told WIRED. “There is a full operating system…that you’re able to…fully take over…and make [the safe] do whatever you want it to do.”

The researchers created a malicious script that, once inserted into a safe on a USB stick, lets a thief automatically open the safe doors by emulating certain mouse and keyboard actions and bypassing standard application controls. “You plug in this little gizmo, wait about 60 seconds, and the door just pops open,” says Petro.

If it sounds like the people who designed this e-safe ignored all of the things we’ve learned about computer security in the last few decades, you’re right. And that’s the problem with Internet-of-Things security: it’s often designed by people who don’t know computer or Internet security.

They also haven’t learned the lessons of full disclosure or rapid patching:

They notified Brinks about the vulnerabilities more than a year ago, but say the company appears to have done nothing to resolve the issues. Brinks could disable driver software associated with the USB port to prevent someone from controlling the safes in this way, or lock down the system and database so it’s not running in administrative mode and the database can’t be changed, but so far the company appears to have done neither.

Again, this all sounds familiar. The computer industry learned its lessons over a decade ago. Before then they ignored security vulnerabilities, threatened researchers, and generally behaved very badly. I expect the same things to happen with Internet-of-Things companies.

Posted on August 3, 2015 at 1:27 PM36 Comments

Help with Mailing List Hosting

I could use some help with finding a host for my monthly newsletter, Crypto-Gram. My old setup just wasn’t reliable enough. I had a move planned, but that fell through when the new host’s bounce processing system turned out to be buggy and they admitted the problem might never be fixed.

Clearly I need something a lot more serious. My criteria include subscriber privacy, reasonable cost, and a proven track record of reliability with large mailing lists. (I would use MailChimp, but it has mandatory click tracking for new accounts.)

One complication is that SpamCop, a popular anti-spam service, tells me I have at least one of their “spamtrap” addresses on the list. Spamtraps are addresses that—in theory—have never been used, so they shouldn’t be on any legitimate list. I don’t know how they got on my list, since I make people confirm their subscriptions by replying to an e-mail or clicking on an e-mailed link. But I used to make rare exceptions for people who just asked to join, so maybe a bad address or two got on that way. Spamtraps don’t work if you tell people what they are, so I can’t just find and remove them. And this has caused no end of problems for subscribers who use SpamCop’s blacklist.
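The confirmed-opt-in mechanism described above rests on an unforgeable confirmation token, which can be sketched in a few lines. This is a hypothetical illustration, not the actual list setup; the secret, domain, and parameter names are invented:

```python
import hmac, hashlib

SECRET = b"server-side-secret"  # known only to the list server

def confirm_token(email: str) -> str:
    # HMAC over the (normalized) address: only the server can mint this.
    return hmac.new(SECRET, email.lower().encode(), hashlib.sha256).hexdigest()[:16]

def verify(email: str, token: str) -> bool:
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(confirm_token(email), token)

token = confirm_token("alice@example.com")
link = f"https://example.com/confirm?e=alice@example.com&t={token}"

print(verify("alice@example.com", token))     # True: clicked link subscribes
print(verify("spamtrap@example.com", token))  # False: token doesn't transfer
```

Because a spamtrap address never clicks anything, it can never produce a valid token, and so never gets onto the list.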

At a minimum, I need to be sure that a new host won’t kick me out for a couple of spamtraps. And if the solution to this problem involves making all 100,000 people on the list reconfirm their subscriptions, then that has to be as simple and user-friendly a process as possible.

If you can recommend a host that would work, I’m interested. Even better would be talking to an expert with lots of experience running large mailing lists who can guide me. If you know a person like that, or if you are one, please leave a comment or e-mail me at the address on my Contact page.

Posted on August 3, 2015 at 5:58 AM48 Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.