Blog: October 2017 Archives

Attack on Old ANSI Random Number Generator

Almost 20 years ago, I wrote a paper that pointed to a potential flaw in the ANSI X9.17 RNG standard. Now, new research has found that the flaw exists in some implementations of the RNG standard.

Here’s the research paper, the website—complete with cute logo—for the attack, and Matthew Green’s excellent blog post on the research.
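
For background, the X9.17 generator clocks a block cipher under a single fixed key, which is why everything hinges on that key staying secret. Here’s a minimal sketch of the construction in Python (my own illustration, not code from the paper; 3DES stands in for the standard’s block cipher, and the key and values are made up):

    # A minimal sketch of the ANSI X9.17/X9.31 generator shape. Illustration only.
    from Crypto.Cipher import DES3   # pip install pycryptodome

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def x917_next(key: bytes, seed: bytes, timestamp: bytes):
        """One iteration: returns (output_block, next_seed). All blocks are 8 bytes."""
        e = DES3.new(key, DES3.MODE_ECB).encrypt
        t = e(timestamp)               # I  = E_K(timestamp)
        out = e(xor(t, seed))          # R  = E_K(I xor V)
        next_seed = e(xor(t, out))     # V' = E_K(I xor R)
        return out, next_seed

    if __name__ == "__main__":
        k = b"0123456789abcdefFEDCBA98"    # toy 24-byte key, illustration only
        out, seed = x917_next(k, b"\x00" * 8, b"\x11" * 8)
        # If K is hard-coded in a product and the timestamp can be guessed to within
        # a small window, a single output block lets an attacker recover the seed and
        # predict every subsequent output -- the scenario the new research exploits.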

Posted on October 31, 2017 at 10:29 AM • 32 Comments

Google Login Security for High-Risk Users

Google has a new login service for high-risk users. It’s good, but unforgiving.

Logging in from a desktop will require a special USB key, while accessing your data from a mobile device will similarly require a Bluetooth dongle. All non-Google services and apps will be exiled from reaching into your Gmail or Google Drive. Google’s malware scanners will use a more intensive process to quarantine and analyze incoming documents. And if you forget your password, or lose your hardware login keys, you’ll have to jump through more hoops than ever to regain access, the better to foil any intruders who would abuse that process to circumvent all of Google’s other safeguards.

It’s called Advanced Protection.

Posted on October 30, 2017 at 12:23 PM • 43 Comments

The Science of Interrogation

Fascinating article about two psychologists who are studying interrogation techniques.

Now, two British researchers are quietly revolutionising the study and practice of interrogation. Earlier this year, in a meeting room at the University of Liverpool, I watched a video of the Diola interview alongside Laurence Alison, the university’s chair of forensic psychology, and Emily Alison, a professional counsellor. My permission to view the tape was negotiated with the counter-terrorist police, who are understandably wary of allowing outsiders access to such material. Details of the interview have been changed to protect the identity of the officers involved, though the quotes are verbatim.

The Alisons, husband and wife, have done something no scholars of interrogation have been able to do before. Working in close cooperation with the police, who allowed them access to more than 1,000 hours of tapes, they have observed and analysed hundreds of real-world interviews with terrorists suspected of serious crimes. No researcher in the world has ever laid hands on such a haul of data before. Based on this research, they have constructed the world’s first empirically grounded and comprehensive model of interrogation tactics.

The Alisons’ findings are changing the way law enforcement and security agencies approach the delicate and vital task of gathering human intelligence. “I get very little, if any, pushback from practitioners when I present the Alisons’ work,” said Kleinman, who now teaches interrogation tactics to military and police officers. “Even those who don’t have a clue about the scientific method, it just resonates with them.” The Alisons have done more than strengthen the hand of advocates of non-coercive interviewing: they have provided an unprecedentedly authoritative account of what works and what does not, rooted in a profound understanding of human relations. That they have been able to do so is testament to a joint preoccupation with police interviews that stretches back more than 20 years.

Posted on October 26, 2017 at 5:09 AM • 74 Comments

CSE Releases Malware Analysis Tool

The Communications Security Establishment of Canada—basically, Canada’s version of the NSA—has released a suite of malware analysis tools:

Assemblyline is described by CSE as akin to a conveyor belt: files go in, and a handful of small helper applications automatically comb through each one in search of malicious clues. On the way out, every file is given a score, which lets analysts sort old, familiar threats from the new and novel attacks that typically require a closer, more manual approach to analysis.
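
The conveyor-belt description maps naturally onto a plug-in scoring pipeline. Here’s a minimal sketch of that pattern in Python (my own illustration of the general idea; Assemblyline’s actual service API, analyzers, and scoring are different):

    # A toy conveyor belt: each analyzer inspects a file and returns a score, and
    # the total lets an analyst triage familiar threats from novel ones. Pattern only.
    from typing import Callable, List

    Analyzer = Callable[[bytes], int]

    def suspicious_strings(data: bytes) -> int:
        """Crude static check for strings common in malicious droppers."""
        markers = [b"powershell -enc", b"CreateRemoteThread", b"cmd.exe /c"]
        return 100 * sum(m in data for m in markers)

    def embedded_macro(data: bytes) -> int:
        """Flag Office documents that carry a VBA macro project."""
        return 250 if b"vbaProject.bin" in data else 0

    def score_file(data: bytes, analyzers: List[Analyzer]) -> int:
        return sum(a(data) for a in analyzers)

    if __name__ == "__main__":
        with open("suspect.docx", "rb") as f:    # hypothetical input file
            print(score_file(f.read(), [suspicious_strings, embedded_macro]))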

Posted on October 25, 2017 at 6:07 AM • 11 Comments

Reaper Botnet

It’s based on the Mirai code, but much more virulent:

While Mirai caused widespread outages, it impacted IP cameras and internet routers by simply exploiting their weak or default passwords. The latest botnet threat, known alternately as IoT Troop or Reaper, has evolved that strategy, using actual software-hacking techniques to break into devices instead. It’s the difference between checking for open doors and actively picking locks—and it’s already enveloped devices on a million networks and counting.

It’s already infected devices on a million networks.

EDITED TO ADD (11/14): Brian Krebs on Reaper.

Posted on October 24, 2017 at 6:01 AM • 41 Comments

Denuvo DRM Cracked within a Day of Release

Denuvo is probably the best digital rights management (DRM) system used to protect computer games. It’s now regularly cracked within a day of release.

If Denuvo can no longer provide even a single full day of protection from cracks, though, that protection is going to look a lot less valuable to publishers. But that doesn’t mean Denuvo will stay effectively useless forever. The company has updated its DRM protection methods with a number of “variants” since its rollout in 2014, and chatter in the cracking community indicates a revamped “version 5” will launch any day now. That might give publishers a little more breathing room where their games can exist uncracked and force the crackers back to the drawing board for another round of the never-ending DRM battle.

BoingBoing post. Slashdot thread.

Related: Vice has a good history of DRM.

Posted on October 20, 2017 at 9:17 AM • 19 Comments

IoT Cybersecurity: What's Plan B?

In August, four US Senators introduced a bill designed to improve Internet of Things (IoT) security. The IoT Cybersecurity Improvement Act of 2017 is a modest piece of legislation. It doesn’t regulate the IoT market. It doesn’t single out any industries for particular attention, or force any companies to do anything. It doesn’t even modify the liability laws for embedded software. Companies can continue to sell IoT devices with whatever lousy security they want.

What the bill does do is leverage the government’s buying power to nudge the market: any IoT product that the government buys must meet minimum security standards. It requires vendors to ensure that devices can not only be patched, but are patched in an authenticated and timely manner; don’t have unchangeable default passwords; and are free from known vulnerabilities. It’s about as low a security bar as you can set, and that it will considerably improve security speaks volumes about the current state of IoT security. (Full disclosure: I helped draft some of the bill’s security requirements.)

The bill would also modify the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act to allow security researchers to study the security of IoT devices purchased by the government. It’s a far narrower exemption than our industry needs. But it’s a good first step, which is probably the best thing you can say about this legislation.

However, it’s unlikely this first step will even be taken. I am writing this column in August, and have no doubt that the bill will have gone nowhere by the time you read it in October or later. If hearings are held, they won’t matter. The bill won’t have been voted on by any committee, and it won’t be on any legislative calendar. The odds of this bill becoming law are zero. And that’s not just because of current politics—I’d be equally pessimistic under the Obama administration.

But the situation is critical. The Internet is dangerous—and the IoT gives it not just eyes and ears, but also hands and feet. Security vulnerabilities, exploits, and attacks that once affected only bits and bytes now affect flesh and blood.

Markets, as we’ve repeatedly learned over the past century, are terrible mechanisms for improving the safety of products and services. It was true for automobile, food, restaurant, airplane, fire, and financial-instrument safety. The reasons are complicated, but basically, sellers don’t compete on safety features because buyers can’t efficiently differentiate products based on safety considerations. The race-to-the-bottom mechanism that markets use to minimize prices also minimizes quality. Without government intervention, the IoT remains dangerously insecure.

The US government has no appetite for intervention, so we won’t see serious safety and security regulations, a new federal agency, or better liability laws. We might have a better chance in the EU. Depending on how the General Data Protection Regulation on data privacy pans out, the EU might pass a similar security law in 5 years. No other country has a large enough market share to make a difference.

Sometimes we can opt out of the IoT, but that option is becoming increasingly rare. Last year, I tried and failed to purchase a new car without an Internet connection. In a few years, it’s going to be nearly impossible to not be multiply connected to the IoT. And our biggest IoT security risks will stem not from devices we have a market relationship with, but from everyone else’s cars, cameras, routers, drones, and so on.

We can try to shop our ideals and demand more security, but companies don’t compete on IoT safety—and we security experts aren’t a large enough market force to make a difference.

We need a Plan B, although I’m not sure what that is. Comment if you have any ideas.

This essay previously appeared in the September/October issue of IEEE Security & Privacy.

Posted on October 18, 2017 at 9:58 AM • 73 Comments

Security Flaw in Infineon Smart Cards and TPMs

A security flaw in Infineon smart cards and TPMs allows an attacker to recover private keys from the public keys. Basically, the key generation algorithm sometimes creates public keys that are vulnerable to Coppersmith’s attack:

While all keys generated with the library are much weaker than they should be, it’s not currently practical to factorize all of them. For example, 3072-bit and 4096-bit keys aren’t practically factorable. But oddly enough, the theoretically stronger, longer 4096-bit key is much weaker than the 3072-bit key and may fall within the reach of a practical (although costly) factorization if the researchers’ method improves.

To spare time and cost, attackers can first test a public key to see if it’s vulnerable to the attack. The test is inexpensive, requires less than 1 millisecond, and its creators believe it produces practically zero false positives and zero false negatives. The fingerprinting allows attackers to expend effort only on keys that are practically factorizable.
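
To give a sense of how such a cheap test can work, here’s a simplified sketch of the fingerprint idea: moduli from the flawed library leave residues, modulo many small primes, that fall inside the tiny subgroup generated by 65537, which essentially never happens for honestly random RSA keys. (This is my own illustration of the concept; the prime list and function names are not the researchers’ actual parameters or tool.)

    # Simplified ROCA-style fingerprint. Illustration only.
    SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107, 109]

    def subgroup(p: int, g: int = 65537) -> set:
        """Residues generated by g modulo p: {1, g, g^2, ...}."""
        elems, x = set(), 1
        while x not in elems:
            elems.add(x)
            x = (x * g) % p
        return elems

    def looks_vulnerable(n: int) -> bool:
        """Cheap test: True if n mod p lies in <65537> for every small prime p."""
        return all((n % p) in subgroup(p) for p in SMALL_PRIMES)

A random modulus passes each individual check only by chance, so the false-positive rate drops off quickly as primes are added; the real test uses a larger, carefully chosen prime set.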

This is the flaw in the Estonian national ID card we learned about last month.

The paper isn’t online yet. I’ll post it when it is.

Ouch. This is a bad vulnerability, and it’s in systems—like the Estonian national ID card—that are critical.

EDITED TO ADD (11/14): More information from the researchers.

Posted on October 17, 2017 at 9:24 AM • 22 Comments

New KRACK Attack Against Wi-Fi Encryption

Mathy Vanhoef has just published a devastating attack against WPA2, the 14-year-old encryption protocol used by pretty much all Wi-Fi systems. It’s an interesting attack, where the attacker forces the protocol to reuse a key. The authors call this attack KRACK, for Key Reinstallation Attacks.

This is yet another in a series of marketed attacks, with a cool name, a website, and a logo. The Q&A on the website answers a lot of questions about the attack and its implications. And there’s lots of good information in this Ars Technica article.

There is an academic paper, too:

“Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2,” by Mathy Vanhoef and Frank Piessens.

Abstract: We introduce the key reinstallation attack. This attack abuses design or implementation flaws in cryptographic protocols to reinstall an already-in-use key. This resets the key’s associated parameters such as transmit nonces and receive replay counters. Several types of cryptographic Wi-Fi handshakes are affected by the attack. All protected Wi-Fi networks use the 4-way handshake to generate a fresh session key. So far, this 14-year-old handshake has remained free from attacks, and is even proven secure. However, we show that the 4-way handshake is vulnerable to a key reinstallation attack. Here, the adversary tricks a victim into reinstalling an already-in-use key. This is achieved by manipulating and replaying handshake messages. When reinstalling the key, associated parameters such as the incremental transmit packet number (nonce) and receive packet number (replay counter) are reset to their initial value. Our key reinstallation attack also breaks the PeerKey, group key, and Fast BSS Transition (FT) handshake. The impact depends on the handshake being attacked, and the data-confidentiality protocol in use. Simplified, against AES-CCMP an adversary can replay and decrypt (but not forge) packets. This makes it possible to hijack TCP streams and inject malicious data into them. Against WPA-TKIP and GCMP the impact is catastrophic: packets can be replayed, decrypted, and forged. Because GCMP uses the same authentication key in both communication directions, it is especially affected.

Finally, we confirmed our findings in practice, and found that every Wi-Fi device is vulnerable to some variant of our attacks. Notably, our attack is exceptionally devastating against Android 6.0: it forces the client into using a predictable all-zero encryption key.
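
The reason a reset nonce is so damaging is keystream reuse: CCMP encrypts with AES in counter mode, so two packets protected under the same key and nonce share a keystream, and XORing the ciphertexts cancels it out. A minimal sketch of the effect (toy key and nonce, plain AES-CTR rather than the full CCMP construction):

    # Nonce reuse in counter mode leaks the XOR of the plaintexts. Illustration only.
    from Crypto.Cipher import AES   # pip install pycryptodome

    key, nonce = b"\x01" * 16, b"\x02" * 8
    p1 = b"GET / HTTP/1.1\r\nHost: a"
    p2 = b"secret session cookie..."

    c1 = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(p1)
    c2 = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(p2)   # same nonce reused

    leaked = bytes(a ^ b for a, b in zip(c1, c2))              # equals p1 XOR p2
    recovered = bytes(a ^ b for a, b in zip(leaked, p1))       # known plaintext peels off p2
    assert recovered == p2[:len(p1)]

With one known or guessable plaintext (and Wi-Fi traffic is full of them), the other packet falls out; that’s why the abstract describes decryption and TCP hijacking even where forgery isn’t possible.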

I’m just reading about this now, and will post more information as I learn it.

EDITED TO ADD: More news.

EDITED TO ADD: This meets my definition of brilliant. The attack is blindingly obvious once it’s pointed out, but for over a decade no one noticed it.

EDITED TO ADD: Matthew Green has a blog post on what went wrong. The vulnerability is in the interaction between two protocols. At a meta level, he blames the opaque IEEE standards process:

One of the problems with IEEE is that the standards are highly complex and get made via a closed-door process of private meetings. More importantly, even after the fact, they’re hard for ordinary security researchers to access. Go ahead and google for the IETF TLS or IPSec specifications—you’ll find detailed protocol documentation at the top of your Google results. Now go try to Google for the 802.11i standards. I wish you luck.

The IEEE has been making a few small steps to ease this problem, but they’re hyper-timid incrementalist bullshit. There’s an IEEE program called GET that allows researchers to access certain standards (including 802.11) for free, but only after they’ve been public for six months—coincidentally, about the same time it takes for vendors to bake them irrevocably into their hardware and software.

This whole process is dumb and—in this specific case—probably just cost industry tens of millions of dollars. It should stop.

Nicholas Weaver explains why most people shouldn’t worry about this:

So unless your Wi-Fi password looks something like a cat’s hairball (e.g. “:SNEIufeli7rc”—which is not guessable with a few million tries by a computer), a local attacker had the capability to determine the password, decrypt all the traffic, and join the network before KRACK.

KRACK is, however, relevant for enterprise Wi-Fi networks: networks where you needed to accept a cryptographic certificate to join initially and have to provide both a username and password. KRACK represents a new vulnerability for these networks. Depending on some esoteric details, the attacker can decrypt encrypted traffic and, in some cases, inject traffic onto the network.

But in none of these cases can the attacker join the network completely. And the most significant of these attacks affects Linux devices and Android phones; they don’t affect Macs, iPhones, or Windows systems. Even when feasible, these attacks require physical proximity: an attacker on the other side of the planet can’t exploit KRACK; only an attacker in the parking lot can.

EDITED TO ADD (11/13): The official link to the paper blocks anonymous users. Here’s an alternate.

Posted on October 16, 2017 at 8:39 AM • 129 Comments

My Blogging

Blog regulars will notice that I haven’t been posting as much lately as I have in the past. There are two reasons. One, it feels harder to find things to write about. So often it’s the same stories over and over. I don’t like repeating myself. Two, I am busy writing a book. The title is still Click Here to Kill Everybody: Peril and Promise in a Hyper-Connected World. The book is a year late, and has a very different table of contents than it had in 2016. I have been writing steadily since mid-August. The book is due to the publisher at the end of March 2018, and will be published at the beginning of September.

This is the current table of contents (subject to change, of course):

  • Introduction: Everything is Becoming a Computer
  • Part 1: The Trends
    • 1. Capitalism Continues to Drive the Internet
    • 2. Customer/User Control is Next
    • 3. Government Surveillance and Control is Also Increasing
    • 4. Cybercrime is More Profitable Than Ever
    • 5. Cyberwar is the New Normal
    • 6. Algorithms, Automation, and Autonomy Bring New Dangers
    • 7. What We Know About Computer Security
    • 8. Agile is Failing as a Security Paradigm
    • 9. Authentication and Identification are Getting Harder
    • 10. Risks are Becoming Catastrophic
  • Part 2: The Solutions
    • 11. We Need to Regulate the Internet of Things
    • 12. We Need to Defend Critical Infrastructure
    • 13. We Need to Prioritize Defense Over Offense
    • 14. We Need to Make Smarter Decisions About Connecting
    • 15. What’s Likely to Happen, and What We Can Do in Response
    • 16. Where Policy Can Go Wrong
  • Conclusion: Technology and Policy, Together

So that’s what’s been happening.

Posted on October 13, 2017 at 2:13 PM • 72 Comments

Technology to Out Sex Workers

Two related stories:

PornHub is using machine learning algorithms to identify actors in different videos, so as to better index them. People are worried that it can really identify them by linking their stage names to their real names.

Facebook somehow managed to link a sex worker’s real profile to clients who knew her only under her fake name.

Sometimes people have legitimate reasons for having two identities. That is becoming harder and harder.

Posted on October 13, 2017 at 6:57 AM • 45 Comments

Impersonating iOS Password Prompts

This is an interesting security vulnerability: because it is so easy to impersonate iOS password prompts, a malicious app can steal your password just by asking.

Why does this work?

iOS asks the user for their iTunes password for many reasons; the most common ones are recently installed iOS operating system updates, or iOS apps that are stuck during installation.

As a result, users are trained to just enter their Apple ID password whenever iOS prompts you to do so. However, those popups are not only shown on the lock screen, and the home screen, but also inside random apps, e.g. when they want to access iCloud, GameCenter or In-App-Purchases.

This could easily be abused by any app, just by showing a UIAlertController that looks exactly like the system dialog.

Even users who know a lot about technology have a hard time detecting that those alerts are phishing attacks.

The essay proposes some solutions, but I’m not sure they’ll work. We’re all trained to trust our computers and the applications running on them.

Posted on October 12, 2017 at 6:43 AM • 22 Comments

More on Kaspersky and the Stolen NSA Attack Tools

Both the New York Times and the Washington Post are reporting that Israel has penetrated Kaspersky’s network and detected the Russian operation.

From the New York Times:

Israeli intelligence officers informed the NSA that, in the course of their Kaspersky hack, they uncovered evidence that Russian government hackers were using Kaspersky’s access to aggressively scan for American government classified programs and pulling any findings back to Russian intelligence systems. [Israeli intelligence] provided their NSA counterparts with solid evidence of the Kremlin campaign in the form of screenshots and other documentation, according to the people briefed on the events.

Kaspersky first noticed the Israeli intelligence operation in 2015.

The Washington Post writes about the NSA tools being on the home computer in the first place:

The employee, whose name has not been made public and is under investigation by federal prosecutors, did not intend to pass the material to a foreign adversary. “There wasn’t any malice,” said one person familiar with the case, who, like others interviewed, spoke on the condition of anonymity to discuss an ongoing case. “It’s just that he was trying to complete the mission, and he needed the tools to do it.”

I don’t buy this. People with clearances are told over and over not to take classified material home with them. It’s not just mentioned occasionally; it’s a core part of the job.

More news articles.

Posted on October 11, 2017 at 2:54 PM • 105 Comments

Changes in Password Best Practices

NIST recently published its four-volume SP 800-63 Digital Identity Guidelines. Among other things, SP 800-63B makes three important suggestions when it comes to passwords:

  1. Stop it with the annoying password complexity rules. They make passwords harder to remember. They increase errors because artificially complex passwords are harder to type in. And they don’t help that much. It’s better to allow people to use pass phrases.
  2. Stop it with password expiration. That was an old idea for an old way we used computers. Today, don’t make people change their passwords unless there’s indication of compromise.
  3. Let people use password managers. This is how we deal with all the passwords we need.

These password rules were failed attempts to fix the user. Better we fix the security systems.
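
For what it’s worth, a password checker that follows these suggestions can be very small. A minimal sketch (my own illustration; the breached-password file is a hypothetical local list, not something NIST publishes):

    # SP 800-63B-style check: a length floor and a breach blocklist, nothing else.
    def acceptable_password(candidate: str, breach_file: str = "breached_passwords.txt") -> bool:
        if len(candidate) < 8:                      # minimum length; long passphrases welcome
            return False
        with open(breach_file, encoding="utf-8") as f:
            breached = {line.strip() for line in f}
        return candidate not in breached            # no composition rules, no forced expiry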

Posted on October 10, 2017 at 6:19 AM • 112 Comments

White House Chief of Staff John Kelly's Cell Phone was Tapped

Politico reports that White House Chief of Staff John Kelly’s cell phone was compromised back in December.

I know this is news because of who he is, but I hope every major government official of any country assumes that their commercial off-the-shelf cell phone is compromised. Even allies spy on allies; remember the reports that the NSA tapped the cell phone of German Chancellor Angela Merkel?

Posted on October 9, 2017 at 6:10 AM • 65 Comments

Yet Another Russian Hack of the NSA—This Time with Kaspersky's Help

The Wall Street Journal has a bombshell of a story. Yet another NSA contractor took classified documents home with him. Yet another Russian intelligence operation stole copies of those documents. The twist this time is that the Russians identified the documents because the contractor had Kaspersky Labs anti-virus installed on his home computer.

This is a huge deal, both for the NSA and Kaspersky. The Wall Street Journal article contains no evidence, only unnamed sources. But I am having trouble seeing how the already embattled Kaspersky Labs survives this.

WSJ follow up. Four more news articles.

EDITED TO ADD: This is either an example of the Russians subverting a perfectly reasonable security feature in Kaspersky’s products, or Kaspersky adding a plausible feature at the request of Russian intelligence. In the latter case, it’s a nicely deniable Russian information operation. In either case, it’s an impressive Russian information operation.

What’s getting a lot less press is yet another NSA contractor stealing top-secret cyberattack software. What is it with the NSA’s inability to keep anything secret anymore?

EDITED TO ADD (10/8): Another article.

Posted on October 6, 2017 at 8:06 AM • 136 Comments

HP Shared ArcSight Source Code with Russians

Reuters is reporting that HP Enterprise gave the Russians a copy of the ArcSight source code.

The article highlights that ArcSight is used by the Pentagon to protect classified networks, but the security risks are much broader. Any weaknesses the Russians discover could be used against any ArcSight customer.

What is HP Enterprise thinking? Near as I can tell, they only gave it away because the Russians asked nicely.

Supply chain security is very difficult. The article says that Russia demands source code because it’s worried about supply chain security: “One reason Russia requests the reviews before allowing sales to government agencies and state-run companies is to ensure that U.S. intelligence services have not placed spy tools in the software.” That’s a reasonable thing to worry about, considering what we know about NSA’s interdiction of commercial hardware and software products. But how can Group A convince Group B of the integrity and security of its hardware and software without putting itself at risk from Group B?

This is one of the areas where open-source software has a security edge. If everyone has access to the source code—and security doesn’t depend on its secrecy—then there’s no advantage in getting a copy. As long as companies rely on obscurity for their security, these sorts of attacks are possible and profitable.

I wonder what sorts of assurances HP Enterprise gave its customers that it would secure its source code, and whether any of those customers have negligence claims against HP Enterprise.

News articles.

EDITED TO ADD (10/5): Commentary.

Posted on October 4, 2017 at 8:08 AM • 40 Comments

E-Mail Tracking

Interesting survey paper on the privacy implications of e-mail tracking:

Abstract: We show that the simple act of viewing emails contains privacy pitfalls for the unwary. We assembled a corpus of commercial mailing-list emails, and find a network of hundreds of third parties that track email recipients via methods such as embedded pixels. About 30% of emails leak the recipient’s email address to one or more of these third parties when they are viewed. In the majority of cases, these leaks are intentional on the part of email senders, and further leaks occur if the recipient clicks links in emails. Mail servers and clients may employ a variety of defenses, but we analyze 16 servers and clients and find that they are far from comprehensive. We propose, prototype, and evaluate a new defense, namely stripping tracking tags from emails based on enhanced versions of existing web tracking protection lists.
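
The defense the authors propose is conceptually simple: before rendering a message, drop remote images whose hosts appear on a tracking-protection list. A minimal sketch of the idea (my own illustration, not the authors’ prototype; the host list is a placeholder):

    # Strip <img> tags pointing at known tracking hosts. Illustration only.
    import re
    from urllib.parse import urlparse

    TRACKING_HOSTS = {"tracker.example.com", "pixel.example.net"}   # placeholder list

    IMG_TAG = re.compile(r'<img\b[^>]*\bsrc=["\']([^"\']+)["\'][^>]*>', re.IGNORECASE)

    def strip_tracking_pixels(html: str) -> str:
        def replace(match: re.Match) -> str:
            host = urlparse(match.group(1)).hostname or ""
            return "" if host in TRACKING_HOSTS else match.group(0)
        return IMG_TAG.sub(replace, html)

A real implementation would also have to handle CSS background images and redirect chains, which is part of why the paper finds existing defenses “far from comprehensive.”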

Blog post on the research.

Posted on October 3, 2017 at 6:45 AM • 30 Comments
