Entries Tagged "essays"


Vulnerabilities in Weapons Systems

“If you think any of these systems are going to work as expected in wartime, you’re fooling yourself.”

That was Bruce’s response at a conference hosted by US Transportation Command in 2017, after learning that their computerized logistical systems were mostly unclassified and on the Internet. That may be necessary to keep in touch with civilian companies like FedEx in peacetime or when fighting terrorists or insurgents. But in a new era facing off with China or Russia, it is dangerously complacent.

Any twenty-first century war will include cyber operations. Weapons and support systems will be successfully attacked. Rifles and pistols won’t work properly. Drones will be hijacked midair. Boats won’t sail, or will be misdirected. Hospitals won’t function. Equipment and supplies will arrive late or not at all.

Our military systems are vulnerable. We need to face that reality by halting the purchase of insecure weapons and support systems and by incorporating the realities of offensive cyberattacks into our military planning.

Over the past decade, militaries have established cyber commands and developed cyberwar doctrine. However, much of the current discussion is about offense. Increasing our offensive capabilities without being able to secure them is like having all the best guns in the world, and then storing them in an unlocked, unguarded armory. They won’t just be stolen; they’ll be subverted.

During that same period, we’ve seen increasingly brazen cyberattacks by everyone from criminals to governments. Everything is now a computer, and those computers are vulnerable. Cars, medical devices, power plants, and fuel pipelines have all been targets. Military computers, whether they’re embedded inside weapons systems or on desktops managing the logistics of those weapons systems, are similarly vulnerable. We could see effects as crude as making a tank impossible to start up, or as sophisticated as retargeting a missile midair.

Military software is unlikely to be any more secure than commercial software. Although sensitive military systems rely on domestically manufactured chips as part of the Trusted Foundry program, many military systems contain the same foreign chips and code that commercial systems do: just as everyone around the world uses the same mobile phones, networking equipment, and computer operating systems. For example, there has been serious concern over Chinese-made 5G networking equipment that China might use to install “backdoors” allowing it to control the equipment. This is just one of many risks to our normal civilian computer supply chains. And since military software is vulnerable to the same cyberattacks as commercial software, military supply chains have many of the same risks.

This is not speculative. A 2018 GAO report expressed concern regarding the lack of secure and patchable US weapons systems. The report observed that “in operational testing, the [Department of Defense] routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic.” It’s a similar attitude to corporate executives who believe that they can’t be hacked—and equally naive.

An updated GAO report from earlier this year found some improvements, but the basic problem remained: “DOD is still learning how to contract for cybersecurity in weapon systems, and selected programs we reviewed have struggled to incorporate systems’ cybersecurity requirements into contracts.” While DOD now appears aware of the lack of cybersecurity requirements, it is still not sure how to fix the problem, and in three of the five cases GAO reviewed, it simply chose not to include the requirements at all.

Militaries around the world are now exploiting these vulnerabilities in weapons systems to carry out operations. When Israel bombed a Syrian nuclear reactor in 2007, the raid was preceded by what is believed to have been a cyberattack on Syrian air defenses that resulted in radar screens showing no threat as bombers zoomed overhead. In 2018, a 29-country NATO exercise, Trident Juncture, that included cyberweapons was disrupted by Russian GPS jamming. NATO does try to test cyberweapons outside such exercises, but has limited scope in doing so. In May, Jens Stoltenberg, the NATO secretary-general, said that “NATO computer systems are facing almost daily cyberattacks.”

The war of the future will not only be about explosions, but will also be about disabling the systems that make armies run. It’s not (solely) that bases will get blown up; it’s that some bases will lose power, data, and communications. It’s not that self-driving trucks will suddenly go mad and begin rolling over friendly soldiers; it’s that they’ll casually roll off roads or into water where they sit, rusting, and in need of repair. It’s not that targeting systems on guns will be retargeted to 1600 Pennsylvania Avenue; it’s that many of them could simply turn off and not turn back on again.

So, how do we prepare for this next war? First, militaries need to introduce a little anarchy into their planning. Let’s have wargames where essential systems malfunction or are subverted, not all of the time, but randomly. To help combat siloed military thinking, include some civilians as well. Allow their ideas into the room when predicting potential enemy action. And militaries need to have well-developed backup plans, for when systems are subverted. In Joe Haldeman’s 1975 science-fiction novel The Forever War, he postulated a “stasis field” that forced his space marines to rely on nothing more than Roman military technologies, like javelins. We should be thinking in the same direction.

NATO does not yet allow civilians who aren’t employed by NATO or its associated military contractors access to the training cyber ranges where vulnerabilities could be discovered and remediated before battlefield deployment. Last year, one of us (Tarah) was listening to a NATO briefing after the end of the 2020 Cyber Coalition exercises, and asked how she and other information security researchers could volunteer to test the cyber ranges used to train its cyber incident response force. She was told that including civilians would be a “welcome thought experiment in the tabletop exercises,” but including them in reality wasn’t considered. There is a rich opportunity for improvement here: outside researchers could provide transparency into where these systems fall short before they reach the battlefield.

Second, it’s time to take cybersecurity seriously in military procurement, from weapons systems to logistics and communications contracts. In the three-year span from the original 2018 GAO report to this year’s report, cybersecurity audit compliance went from 0% to 40% (the two of five programs mentioned earlier). We need to get much better. DOD requires that its contractors and suppliers follow the Cybersecurity Maturity Model Certification process; it should abide by the same standards. Making those standards both more rigorous and mandatory would be an obvious second step.

Gone are the days when we can pretend that our technologies will work in the face of a military cyberattack. Securing our systems will make everything we buy more expensive—maybe a lot more expensive. But the alternative is no longer viable.

The future of war is cyberwar. If your weapons and systems aren’t secure, don’t even bother bringing them onto the battlefield.

This essay was written with Tarah Wheeler, and previously appeared in Brookings TechStream.

Posted on June 8, 2021 at 5:32 AM

The Misaligned Incentives for Cloud Security

Russia’s Sunburst cyberespionage campaign, discovered late last year, impacted more than 100 large companies and US federal agencies, including the Treasury, Energy, Justice, and Homeland Security departments. A crucial part of the Russians’ success was their ability to move through these organizations by compromising cloud and local network identity systems to then access cloud accounts and pilfer emails and files.

Hackers said by the US government to have been working for the Kremlin targeted a widely used Microsoft cloud service that synchronizes user identities. The hackers stole security certificates to create their own identities, which allowed them to bypass safeguards such as multifactor authentication and gain access to Office 365 accounts, impacting thousands of users at the affected companies and government agencies.

It wasn’t the first time cloud services were the focus of a cyberattack, and it certainly won’t be the last. Cloud weaknesses were also critical in a 2019 breach at Capital One. There, an Amazon Web Services cloud vulnerability, compounded by Capital One’s own struggle to properly configure a complex cloud service, led to the disclosure of tens of millions of customer records, including credit card applications, Social Security numbers, and bank account information.
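
Much of cloud security comes down to configuration details like these, and they are easy to get wrong. As a hedged illustration only (the bucket name and policies below are invented, not Capital One’s actual setup), here is a sketch of the difference between an over-broad IAM-style access policy, where one leaked credential exposes everything, and a narrowly scoped one, along with a toy check that flags the wildcards:

```python
# Illustrative only: a toy linter for over-broad IAM-style policies.
# These policy documents are hypothetical, not Capital One's actual configuration.

overly_broad_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:*"],        # every S3 operation...
        "Resource": ["*"],         # ...on every bucket the credentials can reach
    }],
}

scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                         # read objects only
        "Resource": ["arn:aws:s3:::example-app-bucket/*"],  # from one bucket only
    }],
}

def flag_wildcards(policy: dict) -> list[str]:
    """Return warnings for Allow statements with wildcard actions or resources."""
    warnings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        if any(action.endswith("*") for action in stmt.get("Action", [])):
            warnings.append("wildcard Action: stolen credentials can call anything")
        if "*" in stmt.get("Resource", []):
            warnings.append("wildcard Resource: one leaked role exposes every bucket")
    return warnings

for name, policy in [("overly broad", overly_broad_policy), ("scoped", scoped_policy)]:
    print(f"{name}: {flag_wildcards(policy) or 'no obvious wildcard problems'}")
```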

This trend of attacks on cloud services by criminals, hackers, and nation states is growing as cloud computing takes over worldwide as the default model for information technologies. Leaked data is bad enough, but disruption to the cloud, even an outage at a single provider, could quickly cost the global economy billions of dollars a day.

Cloud computing is an important source of risk both because it has quickly supplanted traditional IT and because it concentrates ownership of design choices at a very small number of companies. First, cloud is increasingly the default mode of computing for organizations, meaning ever more users and critical data from national intelligence and defense agencies ride on these technologies. Second, cloud computing services, especially those supplied by the world’s four largest providers—Amazon, Microsoft, Alibaba, and Google—concentrate key security and technology design choices inside a small number of organizations. The consequences of bad decisions or poorly made trade-offs can quickly scale to hundreds of millions of users.

The cloud is everywhere. Some cloud companies provide software as a service, support your Netflix habit, or carry your Slack chats. Others provide computing infrastructure like business databases and storage space. The largest cloud companies provide both.

The cloud can be deployed in several different ways, each of which shifts the balance of responsibility for the security of this technology. But the cloud provider plays an important role in every case. Choices the provider makes in how these technologies are designed, built, and deployed influence the user’s security—yet the user has very little influence over them. And if Google or Amazon has a vulnerability in its servers—which you are unlikely to know about and have no control over—you suffer the consequences.

The problem is one of economics. On the surface, it might seem that competition between cloud companies gives them an incentive to invest in their users’ security. But several market failures get in the way of that ideal. First, security is largely an externality for these cloud companies, because the losses due to data breaches are largely borne by their users. As long as a cloud provider isn’t losing customers in droves—which generally doesn’t happen after a security incident—it is incentivized to underinvest in security. Additionally, data shows that investors don’t punish the cloud service companies either: Stock price dips after a public security breach are both small and temporary.

Second, public information about cloud security generally doesn’t share the design trade-offs involved in building these cloud services or provide much transparency about the resulting risks. While cloud companies have to publicly disclose copious amounts of security design and operational information, it can be impossible for consumers to understand which threats the cloud services are taking into account, and how. This lack of understanding makes it hard to assess a cloud service’s overall security. As a result, customers and users aren’t able to differentiate between secure and insecure services, so they don’t base their buying and use decisions on it.

Third, cybersecurity is complex—and even more complex when the cloud is involved. For a customer like a company or government agency, the security dependencies of various cloud and on-premises network systems and services can be subtle and hard to map out. This means that users can’t adequately assess the security of cloud services or how they will interact with their own networks. This is a classic “lemons market” in economics, and the result is that cloud providers offer highly variable levels of security, as documented by Dan Geer, the chief information security officer for In-Q-Tel, and Wade Baker, a professor at Virginia Tech’s College of Business, when they looked at the prevalence of severe security findings at the top 10 largest cloud providers. Yet most consumers are none the wiser.

The result is a market failure where cloud service providers don’t compete to provide the best security for their customers and users at the lowest cost. Instead, cloud companies take the chance that they won’t get hacked, and past experience tells them they can weather the storm if they do. This kind of decision-making and priority-setting takes place at the executive level, of course, and doesn’t reflect the dedication and technical skill of product engineers and security specialists. The effect of this underinvestment is pernicious, however: it piles on risk that’s largely hidden from users. Widespread adoption of cloud computing carries that risk to an organization’s network, to its customers and users, and, in turn, to the wider internet.

This aggregation of cybersecurity risk creates a national security challenge. Policymakers can help address the challenge by setting clear expectations for the security of cloud services—and for making decisions and design trade-offs about that security transparent. The Biden administration, including newly nominated National Cyber Director Chris Inglis, should lead an interagency effort to work with cloud providers to review their threat models and evaluate the security architecture of their various offerings. This effort to require greater transparency from cloud providers and exert more scrutiny of their security engineering efforts should be accompanied by a push to modernize cybersecurity regulations for the cloud era.

The Federal Risk and Authorization Management Program (FedRAMP), which is the principal US government program for assessing the risk of cloud services and authorizing them for use by government agencies, would be a prime vehicle for these efforts. A recent executive order outlines several steps to make FedRAMP faster and more responsive. But the program is still focused largely on the security of individual services rather than the cloud vendors’ deeper architectural choices and threat models. Congressional action should reinforce and extend the executive order by adding new obligations for vendors to provide transparency about design trade-offs, threat models, and resulting risks. These changes could help transform FedRAMP into a more effective tool of security governance even as it becomes faster and more efficient.

Cloud providers have become important national infrastructure. Not since the heights of the mainframe era between the 1960s and early 1980s has the world witnessed computing systems of such complexity used by so many but designed and created by so few. The security of this infrastructure demands greater transparency and public accountability—if only to match the consequences of its failure.

This essay was written with Trey Herr, and previously appeared in Foreign Policy.

Posted on May 28, 2021 at 6:20 AM

AIs and Fake Comments

This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of US democracy—the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.

This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.

That’s what should happen. But as the New York report shows, it often doesn’t. The big telecommunications companies paid millions of dollars to specialist “AstroTurf” companies to generate public comments. These companies then stole people’s names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress. All of them said that they supported the corporations’ position on something called “net neutrality,” the idea that telecommunications companies must treat all Internet content equally and not prioritize any company or service. Three AstroTurf companies—Fluent, Opt-Intelligence, and React2Media—agreed to pay nearly $4 million in fines.

The fakes were crude. Many of them were identical, while others were patchworks of simple textual variations: substituting “Federal Communications Commission” and “FCC” for each other, for example.
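
That crudeness is also why the scheme was detectable. As a rough sketch (the sample comments below are invented for illustration), simple pairwise similarity scoring is enough to flag template-generated variants that differ only by swaps like “FCC” for “Federal Communications Commission”:

```python
# A minimal sketch of near-duplicate detection for template-generated comments.
# The sample comments are invented for illustration.
from difflib import SequenceMatcher
from itertools import combinations

comments = [
    "I am writing to urge the FCC to repeal the burdensome regulations that are "
    "holding back investment in our Internet infrastructure.",
    "I am writing to urge the Federal Communications Commission to repeal the "
    "burdensome regulations that are holding back investment in our Internet "
    "infrastructure.",
    "As a small business owner in Ohio, I support a free and open Internet and "
    "oppose any attempt to roll back net neutrality protections.",
]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; templated variants score far higher than unrelated text."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for (i, a), (j, b) in combinations(enumerate(comments), 2):
    score = similarity(a, b)
    verdict = "likely templated" if score > 0.75 else "looks distinct"
    print(f"comments {i} vs {j}: similarity {score:.2f} ({verdict})")
```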

Next time, though, we won’t be so lucky. New technologies are about to make it far easier to generate enormous numbers of convincing personalized comments and letters, each with its own word choices, expressive style and pithy examples. The people who create fake grass-roots organizations have always been enthusiastic early adopters of technology, weaponizing letters, faxes, emails and Web comments to manufacture the appearance of public support or public outrage.

Take Generative Pre-trained Transformer 3, or GPT-3, an AI model created by OpenAI, a San Francisco-based start-up. With minimal prompting, GPT-3 can generate convincing-seeming newspaper articles, résumé cover letters, even Harry Potter fan fiction in the style of Ernest Hemingway. It is trivially easy to use these techniques to compose large numbers of public comments or letters to lawmakers.

OpenAI restricts access to GPT-3, but in a recent experiment, researchers used a different text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. They all sounded unique, like real people advocating a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. Others won’t be so ethical.

When the floodgates open, democratic speech is in danger of drowning beneath a tide of fake letters and comments, tweets and Facebook posts. The danger isn’t just that fake support can be generated for unpopular positions, as happened with net neutrality. It is that public commentary will be completely discredited. This would be bad news for specialist AstroTurf companies, which would have no business model if there weren’t a public they could pretend to be representing. But it would empower still further other kinds of lobbyists, who at least can prove that they are who they say they are.

We may have a brief window to shore up the flood walls. The most effective response would be to regulate what UCLA sociologist Edward Walker has described as the “grassroots for hire” industry. Organizations that deliberately fabricate citizen voices shouldn’t just be subject to civil fines, but to criminal penalties. Businesses that hire these organizations should be held liable for failures of oversight. It’s impossible to prove or disprove whether telecommunications companies knew their subcontractors would create bogus citizen voices, but a liability standard would at least give such companies an incentive to find out. This is likely to be politically difficult to put in place, though, since so many powerful actors benefit from the status quo.

This essay was written with Henry Farrell, and previously appeared in the Washington Post.

EDITED TO ADD: CSET published an excellent report on AI-generated partisan content. Short summary: it’s pretty good, and will continue to get better. Renée DiResta has also written about this.

This paper is about a lower-tech version of this threat. Also this.

EDITED TO ADD: Another essay on the same topic.

Posted on May 24, 2021 at 6:20 AM

When AIs Start Hacking

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than we do. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.
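
Deep Patient itself isn’t public, but the dynamic is easy to reproduce with any off-the-shelf black-box model. In the sketch below, which uses scikit-learn on synthetic data rather than real medical records, the model produces a confident prediction for an individual case while offering nothing a clinician could read as a rationale:

```python
# A minimal sketch of the explainability problem, using synthetic data
# (not real medical records) and an off-the-shelf black-box model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 1,000 synthetic "patients" with 20 numeric features and a binary outcome.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

patient = X[:1]                      # one individual case
prob = model.predict_proba(patient)[0, 1]
print(f"Predicted probability of outcome: {prob:.2f}")

# The model is hundreds of decision trees voting together. We can inspect
# aggregate feature importances, but there is no compact, human-readable
# explanation of why this particular patient received this prediction.
print("Number of trees consulted:", len(model.estimators_))
print("Global feature importances (first five):", model.feature_importances_[:5])
```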

While researchers are working on AI that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.

Separately, AIs can engage in something called reward hacking. Because AIs don’t solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated­—and some will subvert the intent of the system. That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted. This reward hacking involves achieving a goal but in a way the AI’s designers neither wanted nor intended.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended. Or another simulation, where an AI figured out that instead of running, it could make itself tall enough to cross a distant finish line by falling over it. Or the robot vacuum cleaner that, instead of learning not to bump into things, learned to drive backward, where it had no sensors telling it it was bumping into things. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks.
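
The vacuum example can be reproduced in miniature. In the toy simulation below (invented for illustration), the designer’s intended goal is “don’t collide,” but the specified reward is “minimize collisions reported by the front bumper sensor.” Scoring a few candidate policies shows that the reward-maximizing one is the hack: drive backward, where no sensor reports anything.

```python
# A toy illustration of reward hacking: the specified reward (what the sensor
# reports) diverges from the intended goal (don't actually collide).
import random

random.seed(0)

def run_episode(policy: str, steps: int = 100) -> tuple[int, int]:
    """Return (actual_collisions, sensor_reported_collisions) for one episode."""
    actual = reported = 0
    for _ in range(steps):
        collision_rate = 0.05 if policy == "drive_forward_carefully" else 0.20
        if random.random() < collision_rate:
            actual += 1
            if policy != "drive_backward":   # only the FRONT bumper has a sensor
                reported += 1
    return actual, reported

policies = ["drive_forward", "drive_forward_carefully", "drive_backward"]
results = {p: run_episode(p) for p in policies}

# The learner only sees the specified reward: minimize *reported* collisions.
best = min(policies, key=lambda p: results[p][1])
for p, (actual, reported) in results.items():
    print(f"{p:25s} actual={actual:3d} reported={reported:3d}")
print("Reward-maximizing policy:", best)   # drive_backward, despite colliding the most
```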

We learned about this hacking problem as children with the story of King Midas. When the god Dionysus grants him a wish, Midas asks that everything he touches turns to gold. He ends up starving and miserable when his food, drink, and daughter all turn to gold. It’s a specification problem: Midas programmed the wrong goal into the system.

Genies are very precise about the wording of wishes, and can be maliciously pedantic. We know this, but there’s still no way to outsmart the genie. Whatever you wish for, he will always be able to grant it in a way you wish he hadn’t. He will hack your wish. Goals and desires are always underspecified in human language and thought. We never describe all the options, or include all the applicable caveats, exceptions, and provisos. Any goal we specify will necessarily be incomplete.

While humans most often implicitly understand context and usually act in good faith, we can’t completely specify goals to an AI. And AIs won’t be able to completely understand context.

In 2015, Volkswagen was caught cheating on emissions control tests. This wasn’t AI—human engineers programmed a regular computer to cheat—but it illustrates the problem. They programmed their engine to detect emissions control testing, and to behave differently. Their cheat remained undetected for years.

If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI. It will think “out of the box” simply because it won’t have a conception of the box. It won’t understand that the Volkswagen solution harms others, undermines the intent of the emissions control tests, and is breaking the law. Unless the programmers specify the goal of not behaving differently when being tested, an AI might come up with the same hack. The programmers will be satisfied, the accountants ecstatic. And because of the explainability problem, no one will realize what the AI did. And yes, knowing the Volkswagen story, we can explicitly set the goal to avoid that particular hack. But the lesson of the genie is that there will always be unanticipated hacks.

How realistic is AI hacking in the real world? The feasibility of an AI inventing a new hack depends a lot on the specific system being modeled. For an AI to even start on optimizing a problem, let alone hacking a completely novel solution, all of the rules of the environment must be formalized in a way the computer can understand. Goals—known in AI as objective functions—need to be established. And the AI needs some sort of feedback on how well it’s doing so that it can improve.

Sometimes this is simple. In chess, the rules, objective, and feedback—did you win or lose?—are all precisely specified. And there’s no outside context to muddy the waters. This is why most of the current examples of goal and reward hacking come from simulated environments. These are artificial and constrained, with all of the rules specified to the AI. The inherent ambiguity in most other systems ends up being a near-term security defense against AI hacking.

Where this gets interesting is with systems that are well specified and almost entirely digital. Think about systems of governance like the tax code: a series of algorithms, with inputs and outputs. Think about financial systems, which are more or less algorithmically tractable.

We can imagine equipping an AI with all of the world’s laws and regulations, plus all the world’s financial information in real time, plus anything else we think might be relevant; and then giving it the goal of “maximum profit.” My guess is that this isn’t very far off, and that the result will be all sorts of novel hacks.
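
Here is a deliberately oversimplified sketch of what that formalization might look like. The jurisdictions, rates, and royalty rule below are invented; the point is only that once rules are encoded, even a brute-force optimizer will find the most profitable corner case, with no notion of legislative intent:

```python
# A deliberately oversimplified sketch: invented tax rules encoded as code and
# searched by brute force for the profit-maximizing structure.
from itertools import product

PROFIT = 1_000_000        # pre-tax profit, in dollars (arbitrary)
HOME = "Jurisdiction A"   # where the company actually operates (invented)

# Hypothetical corporate tax rates for invented jurisdictions.
TAX_RATES = {"Jurisdiction A": 0.30, "Jurisdiction B": 0.21, "Jurisdiction C": 0.09}

def after_tax_profit(subsidiary: str, royalty_fraction: float) -> float:
    """Invented rule: royalties paid to a subsidiary are deducted at home and
    taxed where the subsidiary is incorporated."""
    royalties = PROFIT * royalty_fraction
    kept_at_home = (PROFIT - royalties) * (1 - TAX_RATES[HOME])
    kept_abroad = royalties * (1 - TAX_RATES[subsidiary])
    return kept_at_home + kept_abroad

# "Maximum profit" as an objective function, searched exhaustively.
candidates = list(product(TAX_RATES, [0.0, 0.25, 0.5, 0.75, 0.9]))
subsidiary, fraction = max(candidates, key=lambda c: after_tax_profit(*c))

print(f"Optimizer's answer: route {fraction:.0%} of profit as royalties to a "
      f"subsidiary in {subsidiary}.")
print(f"After-tax profit: ${after_tax_profit(subsidiary, fraction):,.0f}")
```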

But advances in AI are discontinuous and counterintuitive. Things that seem easy turn out to be hard, and things that seem hard turn out to be easy. We don’t know until the breakthrough occurs.

When AIs start hacking, everything will change. They won’t be constrained in the same ways, or have the same limits, as people. They’ll change hacking’s speed, scale, and scope, at rates and magnitudes we’re not ready for. AI text generation bots, for example, will be replicated in the millions across social media. They will be able to engage on issues around the clock, sending billions of messages, and overwhelm any actual online discussions among humans. What we will see as boisterous political debate will be bots arguing with other bots. They’ll artificially influence what we think is normal, what we think others think.

The increasing scope of AI systems also makes hacks more dangerous. AIs are already making important decisions about our lives, decisions we used to believe were the exclusive purview of humans: Who gets parole, receives bank loans, gets into college, or gets a job. As AI systems get more capable, society will cede more—and more important—decisions to them. Hacks of these systems will become more damaging.

What if you fed an AI the entire US tax code? Or, in the case of a multinational corporation, the entire world’s tax codes? Will it figure out, without being told, that it’s smart to incorporate in Delaware and register your ship in Panama? How many loopholes will it find that we don’t already know about? Dozens? Thousands? We have no idea.

While we have societal systems that deal with hacks, those were developed when hackers were humans, and reflect human speed, scale, and scope. The IRS cannot deal with dozens—let alone thousands—of newly discovered tax loopholes. An AI that discovers unanticipated but legal hacks of financial systems could upend our markets faster than we could recover.

As I discuss in my report, while hacks can be used by attackers to exploit systems, they can also be used by defenders to patch and secure systems. So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed. Of course, the transition period is dangerous because of all the legacy rules that will be hacked. There, our solution has to be resilience.

We need to build resilient governing structures that can quickly and effectively respond to the hacks. It won’t do any good if it takes years to update the tax code, or if a legislative hack becomes so entrenched that it can’t be patched for political reasons. This is a hard problem of modern governance. It also isn’t a substantially different problem than building governing structures that can operate at the speed and complexity of the information age.

What I’ve been describing is the interplay between human and computer systems, and the risks inherent when computers start doing the parts humans used to do. This, too, is a more general problem than AI hackers. It’s also one that technologists and futurists are writing about. And while it’s easy to let technology lead us into the future, we’re much better off if we as a society decide what technology’s role in our future should be.

This is all something we need to figure out now, before these AIs come online and start hacking our world.

This essay previously appeared on Wired.com.

Posted on April 26, 2021 at 6:06 AM

Illegal Content and the Blockchain

Security researchers have recently discovered a botnet with a novel defense against takedowns. Normally, authorities can disable a botnet by taking over its command-and-control server. With nowhere to go for instructions, the botnet is rendered useless. But over the years, botnet designers have come up with ways to make this counterattack harder. Now the content-delivery network Akamai has reported on a new method: a botnet that uses the Bitcoin blockchain ledger. Since the blockchain is globally accessible and hard to take down, the botnet’s operators appear to be safe.

It’s best to avoid explaining the mathematics of Bitcoin’s blockchain, but to understand the colossal implications here, you need to understand one concept. Blockchains are a type of “distributed ledger”: a record of all transactions since the beginning, and everyone using the blockchain needs to have access to—and reference—a copy of it. What if someone puts illegal material in the blockchain? Either everyone has a copy of it, or the blockchain’s security fails.

To be fair, not absolutely everyone who uses a blockchain holds a copy of the entire ledger. Many who buy cryptocurrencies like Bitcoin and Ethereum don’t bother using the ledger to verify their purchase. Many don’t actually hold the currency outright, and instead trust an exchange to do the transactions and hold the coins. But people need to continually verify the blockchain’s history on the ledger for the system to be secure. If they stopped, then it would be trivial to forge coins. That’s how the system works.
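
A toy ledger makes both properties concrete: every copy commits to the entire history through chained hashes, and anything embedded in a block, legitimate transaction or not, becomes part of every honest copy. This is a teaching sketch, not Bitcoin’s actual data structures or proof-of-work:

```python
# A toy hash-chained ledger: not Bitcoin's real format, just the core idea that
# every copy commits to the entire history, including anything embedded in it.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], payload: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"height": len(chain), "prev_hash": prev, "payload": payload})

def verify(chain: list[dict]) -> bool:
    """Every verifier must walk the full history; one altered block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list[dict] = []
append_block(chain, "alice pays bob 1 coin")
append_block(chain, "arbitrary embedded data -- could be anything at all")
append_block(chain, "bob pays carol 1 coin")

print("valid:", verify(chain))             # True: every copy now contains block 1's payload

chain[1]["payload"] = "redacted"           # try to remove the embedded data...
print("valid after edit:", verify(chain))  # False: the chain no longer verifies
```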

Some years ago, people started noticing all sorts of things embedded in the Bitcoin blockchain. There are digital images, including one of Nelson Mandela. There’s the Bitcoin logo, and the original paper describing Bitcoin by its alleged founder, the pseudonymous Satoshi Nakamoto. There are advertisements, and several prayers. There’s even illegal pornography and leaked classified documents. All of these were put in by anonymous Bitcoin users. But none of this, so far, appears to seriously threaten those in power in governments and corporations. Once someone adds something to the Bitcoin ledger, it becomes sacrosanct. Removing something requires a fork of the blockchain, in which Bitcoin fragments into multiple parallel cryptocurrencies (and associated blockchains). Forks happen, rarely, but never yet because of legal coercion. And repeated forking would destroy Bitcoin’s stature as a stable(ish) currency.

The botnet’s designers are using this idea to create an unblockable means of coordination, but the implications are much greater. Imagine someone using this idea to evade government censorship. Most Bitcoin mining happens in China. What if someone added a bunch of Chinese-censored Falun Gong texts to the blockchain?

What if someone added a type of political speech that Singapore routinely censors? Or cartoons that Disney holds the copyright to?

In Bitcoin’s and most other public blockchains there are no central, trusted authorities. Anyone in the world can perform transactions or become a miner. Everyone is equal to the extent that they have the hardware and electricity to perform cryptographic computations.

This openness is also a vulnerability, one that opens the door to asymmetric threats and small-time malicious actors. Anyone can put information in the one and only Bitcoin blockchain. Again, that’s how the system works.

Over the last three decades, the world has witnessed the power of open networks: blockchains, social media, the very web itself. What makes them so powerful is that their value is related not just to the number of users, but the number of potential links between users. This is Metcalfe’s law—value in a network is quadratic, not linear, in the number of users—and every open network since has followed its prophecy.

As Bitcoin has grown, its monetary value has skyrocketed, even if its uses remain unclear. With no barrier to entry, the blockchain space has been a Wild West of innovation and lawlessness. But today, many prominent advocates suggest Bitcoin should become a global, universal currency. In this context, asymmetric threats like embedded illegal data become a major challenge.

The philosophy behind Bitcoin traces to the earliest days of the open internet. Articulated in John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, it was and is the ethos of tech startups: Code is more trustworthy than institutions. Information is meant to be free, and nobody has the right—and should not have the ability—to control it.

But information must reside somewhere. Code is written by and for people, stored on computers located within countries, and embedded within the institutions and societies we have created. To trust information is to trust its chain of custody and the social context it comes from. Neither code nor information is value-neutral, nor ever free of human context.

Today, Barlow’s vision is a mere shadow; every society controls the information its people can access. Some of this control is through overt censorship, as China controls information about Taiwan, Tiananmen Square, and the Uyghurs. Some of this is through civil laws designed by the powerful for their benefit, as with Disney and US copyright law, or UK libel law.

Bitcoin and blockchains like it are on a collision course with these laws. What happens when the interests of the powerful, with the law on their side, are pitted against an open blockchain? Let’s imagine how our various scenarios might play out.

China first: In response to Falun Gong texts in the blockchain, the People’s Republic decrees that any miners processing blocks with banned content will be taken offline—their IPs will be blacklisted. This causes a hard fork of the blockchain at the point just before the banned content. China might do this under the guise of a “patriotic” messaging campaign, publicly stating that it’s merely maintaining financial sovereignty from Western banks. Then it uses paid influencers and moderators on social media to pump the China Bitcoin fork, through both partisan comments and transactions. Two distinct forks would soon emerge, one behind China’s Great Firewall and one outside. Other countries with similar governmental and media ecosystems—Russia, Singapore, Myanmar—might consider following suit, creating multiple national Bitcoin forks. These would operate independently, under mandates to censor unacceptable transactions from then on.

Disney’s approach would play out differently. Imagine the company announces it will sue any ISP that hosts copyrighted content, starting with networks hosting the biggest miners. (Disney has sued to enforce its intellectual property rights in China before.) After some legal pressure, the networks cut the miners off. The miners reestablish themselves on another network, but Disney keeps the pressure on. Eventually miners get pushed further and further off of mainstream network providers, and resort to tunneling their traffic through an anonymity service like Tor. That causes a major slowdown in the already slow (because of the mathematics) Bitcoin network. Disney might issue takedown requests for Tor exit nodes, causing the network to slow to a crawl. It could persist like this for a long time without a fork. Or the slowdown could cause people to jump ship, either by forking Bitcoin or switching to another cryptocurrency without the copyrighted content.

And then there’s illegal pornographic content and leaked classified data. These have been on the Bitcoin blockchain for over five years, and nothing has been done about it. Just like the botnet example, it may be that these do not threaten existing power structures enough to warrant takedowns. This could easily change if Bitcoin becomes a popular way to share child sexual abuse material. Simply having these illegal images on your hard drive is a felony, which could have significant repercussions for anyone involved in Bitcoin.

Whichever scenario plays out, this may be the Achilles heel of Bitcoin as a global currency.

If an open network such as a blockchain were threatened by a powerful organization—China’s censors, Disney’s lawyers, or the FBI trying to take down a more dangerous botnet—it could fragment into multiple networks. That’s not just a nuisance, but an existential risk to Bitcoin.

Suppose Bitcoin were fragmented into 10 smaller blockchains, perhaps by geography: one in China, another in the US, and so on. These fragments might retain their original users, and by ordinary logic, nothing would have changed. But Metcalfe’s law implies that the overall value of these blockchain fragments combined would be a mere tenth of the original. That is because the value of an open network relates to how many others you can communicate with—and, in a blockchain, transact with. Since the security of bitcoin currency is achieved through expensive computations, fragmented blockchains are also easier to attack in a conventional manner—through a 51 percent attack—by an organized attacker. This is especially the case if the smaller blockchains all use the same hash function, as they would here.
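
The arithmetic behind that “mere tenth” figure, under the essay’s assumption that a network’s value grows with the square of its user count, is a one-line calculation:

```python
# Back-of-the-envelope arithmetic for the fragmentation scenario, assuming
# (per Metcalfe's law) that a network's value scales with the square of its users.
n = 1_000_000        # users of one unified blockchain (arbitrary number)
k = 10               # equal fragments after a split

value_unified = n ** 2
value_fragmented = k * (n // k) ** 2   # k separate networks of n/k users each

print(value_fragmented / value_unified)   # 0.1: combined value falls to a tenth

# Security splits similarly: if hash power divides the same way, a 51 percent
# attack on any one fragment needs roughly 1/k of the resources that attacking
# the original unified chain would require.
```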

Traditional currencies are generally not vulnerable to these sorts of asymmetric threats. There are no viable small-scale attacks against the US dollar, or almost any other fiat currency. The institutions and beliefs that give money its value are deep-seated, despite instances of currency hyperinflation.

The only notable attacks against fiat currencies are in the form of counterfeiting. Even in the past, when counterfeit bills were common, attacks could be thwarted. Counterfeiters require specialized equipment and are vulnerable to law enforcement discovery and arrest. Furthermore, most money today—even if it’s nominally in a fiat currency—doesn’t exist in paper form.

Bitcoin attracted a following for its openness and immunity from government control. Its goal is to create a world that replaces cultural power with cryptographic power: verification in code, not trust in people. But there is no such world. And today, that feature is a vulnerability. We really don’t know what will happen when the human systems of trust come into conflict with the trustless verification that makes blockchain currencies unique. Just last week we saw this exact attack on smaller blockchains—not Bitcoin yet. We are watching a public socio-technical experiment in the making, and we will witness its success or failure in the not-too-distant future.

This essay was written with Barath Raghavan, and previously appeared on Wired.com.

EDITED TO ADD (4/14): A research paper on erasing data from the Bitcoin blockchain.

Posted on March 17, 2021 at 6:10 AM

National Security Risks of Late-Stage Capitalism

Early in 2020, cyberspace attackers apparently working for the Russian government compromised a piece of widely used network management software made by a company called SolarWinds. The hack gave the attackers access to the computer networks of some 18,000 of SolarWinds’s customers, including US government agencies such as the Homeland Security Department and State Department, American nuclear research labs, government contractors, IT companies and nongovernmental agencies around the world.

It was a huge attack, with major implications for US national security. The Senate Intelligence Committee is scheduled to hold a hearing on the breach on Tuesday. Who is at fault?

The US government deserves considerable blame, of course, for its inadequate cyberdefense. But to see the problem only as a technical shortcoming is to miss the bigger picture. The modern market economy, which rewards corporations for short-term profits and aggressive cost-cutting, is also part of the problem: Its incentive structure all but ensures that successful tech companies will end up selling insecure products and services.

Like all for-profit corporations, SolarWinds aims to increase shareholder value by minimizing costs and maximizing profit. The company is owned in large part by Silver Lake and Thoma Bravo, private-equity firms known for extreme cost-cutting.

SolarWinds certainly seems to have underspent on security. The company outsourced much of its software engineering to cheaper programmers overseas, even though that typically increases the risk of security vulnerabilities. For a while, in 2019, the update server’s password for SolarWinds’s network management software was reported to be “solarwinds123.” Russian hackers were able to breach SolarWinds’s own email system and lurk there for months. Chinese hackers appear to have exploited a separate vulnerability in the company’s products to break into US government computers. A cybersecurity adviser for the company said that he quit after his recommendations to strengthen security were ignored.

There is no good reason to underspend on security other than to save money—especially when your clients include government agencies around the world and when the technology experts that you pay to advise you are telling you to do more.

As the economics writer Matt Stoller has suggested, cybersecurity is a natural area for a technology company to cut costs because its customers won’t notice unless they are hacked—and if they are, they will have already paid for the product. In other words, the risk of a cyberattack can be transferred to the customers. Doesn’t this strategy jeopardize the possibility of long-term, repeat customers? Sure, there’s a danger there—but investors are so focused on short-term gains that they’re too often willing to take that risk.

The market loves to reward corporations for risk-taking when those risks are largely borne by other parties, like taxpayers. This is known as “privatizing profits and socializing losses.” Standard examples include companies that are deemed “too big to fail,” which means that society as a whole pays for their bad luck or poor business decisions. When national security is compromised by high-flying technology companies that fob off cybersecurity risks onto their customers, something similar is at work.

Similar misaligned incentives affect your everyday cybersecurity, too. Your smartphone is vulnerable to something called SIM-swap fraud because phone companies want to make it easy for you to frequently get a new phone—and they know that the cost of fraud is largely borne by customers. Data brokers and credit bureaus that collect, use, and sell your personal data don’t spend a lot of money securing it because it’s your problem if someone hacks them and steals it. Social media companies too easily let hate speech and misinformation flourish on their platforms because it’s expensive and complicated to remove it, and they don’t suffer the immediate costs—indeed, they tend to profit from user engagement regardless of its nature.

There are two problems to solve. The first is information asymmetry: buyers can’t adequately judge the security of software products or company practices. The second is a perverse incentive structure: the market encourages companies to make decisions in their private interest, even if that imperils the broader interests of society. Together these two problems result in companies that save money by taking on greater risk and then pass off that risk to the rest of us, as individuals and as a nation.

The only way to force companies to provide safety and security features for customers and users is with government intervention. Companies need to pay the true costs of their insecurities, through a combination of laws, regulations, and legal liability. Governments routinely legislate safety—pollution standards, automobile seat belts, lead-free gasoline, food service regulations. We need to do the same with cybersecurity: the federal government should set minimum security standards for software and software development.

In today’s underregulated markets, it’s just too easy for software companies like SolarWinds to save money by skimping on security and to hope for the best. That’s a rational decision in today’s free-market world, and the only way to change that is to change the economic incentives.

This essay previously appeared in the New York Times.

Posted on March 1, 2021 at 6:12 AM

Presidential Cybersecurity and Pelotons

President Biden wants his Peloton in the White House. For those who have missed the hype, it’s an Internet-connected stationary bicycle. It has a screen, a camera, and a microphone. You can take live classes online, work out with your friends, or join the exercise social network. And all of that is a security risk, especially if you are the president of the United States.

Any computer brings with it the risk of hacking. This is true of our computers and phones, and it’s also true about all of the Internet-of-Things devices that are increasingly part of our lives. These large and small appliances, cars, medical devices, toys and—yes—exercise machines are all computers at their core, and they’re all just as vulnerable. Presidents face special risks when it comes to the IoT, but Biden has the NSA to help him handle them.

Not everyone is so lucky, and the rest of us need something more structural.

US presidents have long tussled with their security advisers over tech. The NSA often customizes devices, but that means eliminating features. In 2010, President Barack Obama complained that his presidential BlackBerry device was “no fun” because only ten people were allowed to contact him on it. In 2013, security prevented him from getting an iPhone. When he finally got an upgrade to his BlackBerry in 2016, he complained that his new “secure” phone couldn’t take pictures, send texts, or play music. His “hardened” iPad to read daily intelligence briefings was presumably similarly handicapped. We don’t know what the NSA did to these devices, but they certainly modified the software and physically removed the cameras and microphones—and possibly the wireless Internet connection.

President Donald Trump resisted efforts to secure his phones. We don’t know the details, only that they were regularly replaced, with the government effectively treating them as burner phones.

The risks are serious. We know that the Russians and the Chinese were eavesdropping on Trump’s phones. Hackers can remotely turn on microphones and cameras, listening in on conversations. They can grab copies of any documents on the device. They can also use those devices to further infiltrate government networks, maybe even jumping onto classified networks that the devices connect to. If the devices have physical capabilities, those can be hacked as well. In 2007, the wireless features of Vice President Richard B. Cheney’s pacemaker were disabled out of fears that it could be hacked to assassinate him. In 1999, the NSA banned Furbies from its offices, mistakenly believing that they could listen and learn.

Physically removing features and components works, but the results are increasingly unacceptable. The NSA could take Biden’s Peloton and rip out the camera, microphone, and Internet connection, and that would make it secure—but then it would just be a normal (albeit expensive) stationary bike. Maybe Biden wouldn’t accept that, and he’d demand that the NSA do even more work to customize and secure the Peloton part of the bicycle. Maybe Biden’s security agents could isolate his Peloton in a specially shielded room where it couldn’t infect other computers, and warn him not to discuss national security in its presence.

This might work, but it certainly doesn’t scale. As president, Biden can direct substantial resources to solving his cybersecurity problems. The real issue is what everyone else should do. The president of the United States is a singular espionage target, but so are members of his staff and other administration officials.

Members of Congress are targets, as are governors and mayors, police officers and judges, CEOs and directors of human rights organizations, nuclear power plant operators, and election officials. All of these people have smartphones, tablets, and laptops. Many have Internet-connected cars and appliances, vacuums, bikes, and doorbells. Every one of those devices is a potential security risk, and all of those people are potential national security targets. But none of those people will get their Internet-connected devices customized by the NSA.

That is the real cybersecurity issue. Internet connectivity brings with it features we like. In our cars, it means real-time navigation, entertainment options, automatic diagnostics, and more. In a Peloton, it means everything that makes it more than a stationary bike. In a pacemaker, it means continuous monitoring by your doctor—and possibly your life saved as a result. In an iPhone or iPad, it means…well, everything. We can search for older, non-networked versions of some of these devices, or the NSA can disable connectivity for the privileged few of us. But the result is the same: in Obama’s words, “no fun.”

And unconnected options are increasingly hard to find. In 2016, I tried to find a new car that didn’t come with Internet connectivity, but I had to give up: there were no options to omit that in the class of car I wanted. Similarly, it’s getting harder to find major appliances without a wireless connection. As the price of connectivity continues to drop, more and more things will only be available Internet-enabled.

Internet security is national security—not because the president is personally vulnerable but because we are all part of a single network. Depending on who we are and what we do, we will make different trade-offs between security and fun. But we all deserve better options.

Regulations that force manufacturers to provide better security for all of us are the only way to do that. We need minimum security standards for computers of all kinds. We need transparency laws that give all of us, from the president on down, sufficient information to make our own security trade-offs. And we need liability laws that hold companies liable when they misrepresent the security of their products and services.

I’m not worried about Biden. He and his staff will figure out how to balance his exercise needs with the national security needs of the country. Sometimes the solutions are weirdly customized, such as the anti-eavesdropping tent that Obama used while traveling. I am much more worried about the political activists, journalists, human rights workers, and oppressed minorities around the world who don’t have the money or expertise to secure their technology, or the information that would give them the ability to make informed decisions on which technologies to choose.

This essay previously appeared in the Washington Post.

Posted on February 5, 2021 at 5:58 AM

Russia’s SolarWinds Attack and Software Security

The information that is emerging about Russia’s extensive cyberintelligence operation against the United States and other countries should be increasingly alarming to the public. The magnitude of the hacking, now believed to have affected more than 250 federal agencies and businesses—primarily through a malicious update of the SolarWinds network management software—may have slipped under most people’s radar during the holiday season, but its implications are stunning.

According to a Washington Post report, this is a massive intelligence coup by Russia’s foreign intelligence service (SVR). And a massive security failure on the part of the United States is also to blame. Our insecure Internet infrastructure has become a critical national security risk—one that we need to take seriously and spend money to reduce.

President-elect Joe Biden’s initial response spoke of retaliation, but there really isn’t much the United States can do beyond what it already does. Cyberespionage is business as usual among countries and governments, and the United States is aggressively offensive in this regard. We benefit from the lack of norms in this area and are unlikely to push back too hard because we don’t want to limit our own offensive actions.

Biden took a more realistic tone last week when he spoke of the need to improve US defenses. The initial focus will likely be on how to clean the hackers out of our networks, why the National Security Agency and US Cyber Command failed to detect this intrusion and whether the 2-year-old Cybersecurity and Infrastructure Security Agency has the resources necessary to defend the United States against attacks of this caliber. These are important discussions to have, but we also need to address the economic incentives that led to SolarWinds being breached and how that insecure software ended up in so many critical US government networks.

Software has become incredibly complicated. Most of us don’t know all of the software running on our laptops or what it’s doing. We don’t know where it’s connecting to on the Internet—not even which countries it’s connecting to—and what data it’s sending. We typically don’t know what third-party libraries are in the software we install. We don’t know what software any of our cloud services are running. And we’re rarely alone in our ignorance. Finding all of this out is incredibly difficult.
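
To make that lack of visibility concrete, here is a minimal sketch in Python, assuming the third-party psutil package, that lists which processes on a machine currently hold established outbound network connections. Even this easy first step says nothing about which libraries those programs embed, what data they send, or what their cloud back ends are running.

```python
# A minimal sketch of the easiest first step: list which local processes
# currently hold established outbound network connections. Requires the
# third-party psutil package; on some systems it needs elevated privileges.
# Mapping each remote address to a country or an owner would take far more work.
import psutil

def outbound_connections():
    """Yield (process name, remote address) pairs for established connections."""
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.status == psutil.CONN_ESTABLISHED:
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                name = "unavailable"
            yield name, f"{conn.raddr.ip}:{conn.raddr.port}"

if __name__ == "__main__":
    for name, remote in sorted(set(outbound_connections())):
        print(f"{name:<30} -> {remote}")
```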

This is even more true for software that runs our large government networks, or even the Internet backbone. Government software comes from large companies, small suppliers, open source projects and everything in between. Obscure software packages can have hidden vulnerabilities that affect the security of these networks, and sometimes the entire Internet. Russia’s SVR leveraged one of those vulnerabilities when it gained access to SolarWinds’ update server, tricking thousands of customers into downloading a malicious software update that gave the Russians access to those networks.

The fundamental problem is one of economic incentives. The market rewards quick development of products. It rewards new features. It rewards spying on customers and users: collecting and selling individual data. The market does not reward security, safety or transparency. It doesn’t reward reliability past a bare minimum, and it doesn’t reward resilience at all.

This is what happened at SolarWinds. A New York Times report noted that the company ignored basic security practices. Because it was cheaper, it moved software development to Eastern Europe, where Russia has more influence and could potentially subvert programmers.

Short-term profit was seemingly prioritized over product security.

Companies have the right to make decisions like this. The real question is why the US government bought such shoddy software for its critical networks. This is a problem that Biden can fix, and he needs to do so immediately.

The United States needs to improve government software procurement. Software is now critical to national security. Any system for acquiring software needs to evaluate the security of the software and the security practices of the company, in detail, to ensure they are sufficient to meet the security needs of the network the software will be installed in. Procurement contracts need to include security controls for the software development process. They need security attestations from vendors, with substantial penalties for misrepresentation or failure to comply. And the government needs detailed best practices that apply both to its own agencies and to other companies.

Some of the groundwork for an approach like this has already been laid by the federal government, which has sponsored the development of a “Software Bill of Materials” that would set out a process for software makers to identify the components used to assemble their software.
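To give a sense of what such a bill of materials might contain, here is a hedged sketch in Python: it inventories the packages installed in its own environment and emits a CycloneDX-style JSON document listing each component’s name and version. Real SBOM tooling also records suppliers, cryptographic hashes, licenses, and transitive dependencies; treat this as an illustration of the idea rather than a compliant generator.

```python
# A sketch of the Software Bill of Materials idea: inventory the Python
# packages installed in the current environment and emit a CycloneDX-style
# JSON document listing each component's name and version. Real SBOM tools
# record much more (suppliers, hashes, licenses, transitive dependencies).
import json
from importlib import metadata

def build_sbom():
    """Return a CycloneDX-style dict describing installed Python packages."""
    components = [
        {
            "type": "library",
            "name": dist.metadata["Name"] or "unknown",
            "version": dist.version,
        }
        for dist in metadata.distributions()
    ]
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",  # illustrative; pin to whichever spec version you target
        "components": sorted(components, key=lambda c: c["name"].lower()),
    }

if __name__ == "__main__":
    print(json.dumps(build_sbom(), indent=2))
```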

This scrutiny can’t end with purchase. These security requirements need to be monitored throughout the software’s life cycle, along with what software is being used in government networks.

None of this is cheap, and we should be prepared to pay substantially more for secure software. But there’s a benefit to these practices. If the government evaluations are public, along with the list of companies that meet them, all network buyers can benefit from them. The US government acting purely in the realm of procurement can improve the security of nongovernmental networks worldwide.

This is important, but it isn’t enough. We need to set minimum safety and security standards for all software: from the code in that Internet of Things appliance you just bought to the code running our critical national infrastructure. It’s all one network, and a vulnerability in your refrigerator’s software can be used to attack the national power grid.

The IoT Cybersecurity Improvement Act, signed into law last month, is a start in this direction.

The Biden administration should prioritize minimum security standards for all software sold in the United States, not just to the government but to everyone. Long gone are the days when we can let the software industry decide how much emphasis to place on security. Software security is now a matter of personal safety: whether it’s ensuring your car isn’t hacked over the Internet or that the national power grid isn’t hacked by the Russians.

This regulation is the only way to force companies to provide safety and security features for customers—just as legislation was necessary to mandate food safety measures and require auto manufacturers to install life-saving features such as seat belts and air bags. Smart regulations that incentivize innovation create a market for security features. And they improve security for everyone.

It’s true that creating software in this sort of regulatory environment is more expensive. But if we truly value our personal and national security, we need to be prepared to pay for it.

The truth is that we’re already paying for it. Today, software companies increase their profits by secretly pushing risk onto their customers. We pay the cost of insecure personal computers, just as the government is now paying the cost to clean up after the SolarWinds hack. Fixing this requires both transparency and regulation. And while the industry will resist both, they are essential for national security in our increasingly computer-dependent world.

This essay previously appeared on CNN.com.

Posted on January 8, 2021 at 6:27 AM

Russia’s SolarWinds Attack

Recent news articles have all been talking about the massive Russian cyberattack against the United States, but that’s wrong on two counts. It wasn’t a cyberattack in international relations terms, it was espionage. And the victim wasn’t just the US, it was the entire world. But it was massive, and it is dangerous.

Espionage is internationally allowed in peacetime. The problem is that both espionage and cyberattacks require the same computer and network intrusions, and the difference is only a few keystrokes. And since this Russian operation isn’t at all targeted, the entire world is at risk—and not just from Russia. Many countries carry out these sorts of operations, none more extensively than the US. The solution is to prioritize security and defense over espionage and attack.

Here’s what we know: Orion is a network management product from a company named SolarWinds, with over 300,000 customers worldwide. Sometime before March, hackers working for the Russian SVR—the successor to the KGB’s foreign intelligence arm—hacked into SolarWinds and slipped a backdoor into an Orion software update. (We don’t know how, but last year the company’s update server was protected by the password “solarwinds123”—something that speaks to a lack of security culture.) Users who downloaded and installed that corrupted update between March and June unwittingly gave SVR hackers access to their networks.

This is called a supply-chain attack, because it targets a supplier to an organization rather than the organization itself—and can affect all of a supplier’s customers. It’s an increasingly common way to attack networks. Other examples of this sort of attack include fake apps in the Google Play store, and hacked replacement screens for your smartphone.
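
Part of what makes a supply-chain attack so effective is that the routine integrity checks all pass: the malicious code arrives through the vendor’s own channel, carrying the vendor’s own signature or published digest. As a sketch of what such a check looks like, and of its limits, here is how a client might verify a downloaded update against a vendor-published digest; the filename and expected value below are placeholders. If attackers control the vendor’s build or update infrastructure, the published digest describes the tampered file and the check passes anyway.

```python
# A sketch of the routine integrity check a client might run before installing
# a vendor update: compare the file's SHA-256 digest against the value the
# vendor publishes. The filename and expected digest below are placeholders.
# The limitation is the point: if attackers control the vendor's build or
# update infrastructure, the published digest describes the tampered file.
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its SHA-256 digest as lowercase hex."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update(path: str, expected_hex: str) -> bool:
    """Return True only if the file's digest matches the published value."""
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())

if __name__ == "__main__":
    EXPECTED_DIGEST = "0" * 64  # placeholder for the vendor-published SHA-256 value
    try:
        ok = verify_update("vendor-update.pkg", EXPECTED_DIGEST)  # placeholder filename
    except FileNotFoundError:
        ok = False
    print("digest matches the published value" if ok else "digest mismatch or missing file: do not install")
```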

SolarWinds has removed its customer list from its website, but the Internet Archive saved it: all five branches of the US military, the state department, the White House, the NSA, 425 of the Fortune 500 companies, all five of the top five accounting firms, and hundreds of universities and colleges. In an SEC filing, SolarWinds said that it believes “fewer than 18,000” of those customers installed this malicious update, another way of saying that more than 17,000 did.

That’s a lot of vulnerable networks, and it’s inconceivable that the SVR penetrated them all. Instead, it chose carefully from its cornucopia of targets. Microsoft’s analysis identified 40 customers who were infiltrated using this vulnerability. The great majority of those were in the US, but networks in Canada, Mexico, Belgium, Spain, the UK, Israel and the UAE were also targeted. This list includes governments, government contractors, IT companies, thinktanks, and NGOs—and it will certainly grow.

Once inside a network, SVR hackers followed a standard playbook: establish persistent access that will remain even if the initial vulnerability is fixed; move laterally around the network by compromising additional systems and accounts; and then exfiltrate data. Not being a SolarWinds customer is no guarantee of security; this SVR operation used other initial infection vectors and techniques as well. These are sophisticated and patient hackers, and we’re only just learning some of the techniques involved here.

Recovering from this attack isn’t easy. Because any SVR hackers would establish persistent access, the only way to ensure that your network isn’t compromised is to burn it to the ground and rebuild it, similar to reinstalling your computer’s operating system to recover from a bad hack. This is how a lot of sysadmins are going to spend their Christmas holiday, and even then they can’t be sure. There are many ways to establish persistent access that survive rebuilding individual computers and networks. We know, for example, of an NSA exploit that remains on a hard drive even after it is reformatted. Code for that exploit was part of the Equation Group tools that the Shadow Brokers—again believed to be Russia—stole from the NSA and published in 2016. The SVR probably has the same kinds of tools.

Even without that caveat, many network administrators won’t go through the long, painful, and potentially expensive rebuilding process. They’ll just hope for the best.

It’s hard to overstate how bad this is. We are still learning which US government organizations were breached: the state department, the treasury department, homeland security, the Los Alamos and Sandia National Laboratories (where nuclear weapons are developed), the National Nuclear Security Administration, the National Institutes of Health, and many more. At this point, there’s no indication that any classified networks were penetrated, although that could change easily. It will take years to learn which networks the SVR has penetrated, and where it still has access. Much of that will probably be classified, which means that we, the public, will never know.

And now that the Orion vulnerability is public, other governments and cybercriminals will use it to penetrate vulnerable networks. I can guarantee you that the NSA is using the SVR’s hack to infiltrate other networks; why would they not? (Do any Russian organizations use Orion? Probably.)

While this is a security failure of enormous proportions, it is not, as Senator Richard Durbin said, “virtually a declaration of war by Russia on the United States.” While President-elect Biden said he will make this a top priority, it’s unlikely that he will do much to retaliate.

The reason is that, by international norms, Russia did nothing wrong. This is the normal state of affairs. Countries spy on each other all the time. There are no rules or even norms, and it’s basically “buyer beware.” The US regularly fails to retaliate against espionage operations—such as China’s hack of the Office of Personnel Management (OPM) and previous Russian hacks—because we do it, too. Speaking of the OPM hack, the then director of national intelligence, James Clapper, said: “You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don’t think we’d hesitate for a minute.”

We don’t, and I’m sure NSA employees are grudgingly impressed with the SVR. The US has by far the most extensive and aggressive intelligence operation in the world. The NSA’s budget is the largest of any intelligence agency. It aggressively leverages the US’s position controlling most of the internet backbone and most of the major internet companies. Edward Snowden disclosed many targets of its efforts around 2014, which then included 193 countries, the World Bank, the IMF and the International Atomic Energy Agency. We are undoubtedly running an offensive operation on the scale of this SVR operation right now, and it’ll probably never be made public. In 2016, President Obama boasted that we have “more capacity than anybody both offensively and defensively.”

He may have been too optimistic about our defensive capability. The US prioritizes and spends many times more on offense than on defensive cybersecurity. In recent years, the NSA has adopted a strategy of “persistent engagement,” sometimes called “defending forward.” The idea is that instead of passively waiting for the enemy to attack our networks and infrastructure, we go on the offensive and disrupt attacks before they get to us. This strategy was credited with foiling a plot by the Russian Internet Research Agency to disrupt the 2018 elections.

But if persistent engagement is so effective, how could it have missed this massive SVR operation? It seems that pretty much the entire US government was unknowingly sending information back to Moscow. If we had been watching everything the Russians were doing, we would have seen some evidence of this. The Russians’ success under the watchful eye of the NSA and US Cyber Command shows that this is a failed approach.

And how did US defensive capability miss this? The only reason we know about this breach is because, earlier this month, the security company FireEye discovered that it had been hacked. During its own audit of its network, it uncovered the Orion vulnerability and alerted the US government. Why don’t organizations like the Departments of State, Treasury and Homeland Security regularly conduct that level of audit on their own systems? The government’s intrusion detection system, Einstein 3, failed here because it doesn’t detect new sophisticated attacks—a deficiency pointed out in 2018 but never fixed. We shouldn’t have to rely on a private cybersecurity company to alert us to a major nation-state attack.

If anything, the US’s prioritization of offense over defense makes us less safe. In the interests of surveillance, the NSA has pushed for an insecure cell phone encryption standard and a backdoor in random number generators (important for secure encryption). The DoJ has never relented in its insistence that the world’s popular encryption systems be made insecure through back doors—another hot point where attack and defense are in conflict. In other words, we allow for insecure standards and systems, because we can use them to spy on others.

We need to adopt a defense-dominant strategy. As computers and the internet become increasingly essential to society, cyberattacks are likely to be the precursor to actual war. We are simply too vulnerable when we prioritize offense, even if we have to give up the advantage of using those insecurities to spy on others.

Our vulnerability is magnified as eavesdropping may bleed into a direct attack. The SVR’s access allows them not only to eavesdrop, but also to modify data, degrade network performance, or erase entire networks. The eavesdropping might be normal spying, but the rest could certainly be considered acts of war. Russia is almost certainly laying the groundwork for future attack.

This preparation would not be unprecedented. There’s a lot of attack going on in the world. In 2010, the US and Israel attacked the Iranian nuclear program. In 2012, Iran attacked the Saudi national oil company. North Korea attacked Sony in 2014. Russia attacked the Ukrainian power grid in 2015 and 2016. Russia is hacking the US power grid, and the US is hacking Russia’s power grid—just in case the capability is needed someday. All of these attacks began as a spying operation. Security vulnerabilities have real-world consequences.

We’re not going to be able to secure our networks and systems in this no-rules, free-for-all every-network-for-itself world. The US needs to willingly give up part of its offensive advantage in cyberspace in exchange for a vastly more secure global cyberspace. We need to invest in securing the world’s supply chains from this type of attack, and to press for international norms and agreements prioritizing cybersecurity, like the 2018 Paris Call for Trust and Security in Cyberspace or the Global Commission on the Stability of Cyberspace. Hardening widely used software like Orion (or the core internet protocols) helps everyone. We need to dampen this offensive arms race rather than exacerbate it, and work towards cyber peace. Otherwise, hypocritically criticizing the Russians for doing the same thing we do every day won’t help create the safer world in which we all want to live.

This essay previously appeared in the Guardian.

Posted on December 28, 2020 at 6:21 AM

Should There Be Limits on Persuasive Technologies?

Persuasion is as old as our species. Both democracy and the market economy depend on it. Politicians persuade citizens to vote for them, or to support different policy positions. Businesses persuade consumers to buy their products or services. We all persuade our friends to accept our choice of restaurant, movie, and so on. It’s essential to society; we couldn’t get large groups of people to work together without it. But as with many things, technology is fundamentally changing the nature of persuasion. And society needs to adapt its rules of persuasion or suffer the consequences.

Democratic societies, in particular, are in dire need of a frank conversation about the role persuasion plays in them and how technologies are enabling powerful interests to target audiences. In a society where public opinion is a ruling force, there is always a risk of it being mobilized for ill purposes—such as provoking fear to encourage one group to hate another in a bid to win office, or targeting personal vulnerabilities to push products that might not benefit the consumer.

In this regard, the United States, already extremely polarized, sits on a precipice.

There have long been rules around persuasion. The US Federal Trade Commission enforces laws holding that claims about products “must be truthful, not misleading, and, when appropriate, backed by scientific evidence.” Political advertisers must identify themselves in television ads. If someone abuses a position of power to force another person into a contract, undue influence can be argued to nullify that agreement. Yet there is more to persuasion than the truth, transparency, or simply applying pressure.

Persuasion also involves psychology, and that has been far harder to regulate. Using psychology to persuade people is not new. Edward Bernays, a pioneer of public relations and nephew to Sigmund Freud, made a marketing practice of appealing to the ego. His approach was to tie consumption to a person’s sense of self. In his 1928 book Propaganda, Bernays advocated engineering events to persuade target audiences as desired. In one famous stunt, he hired women to smoke cigarettes while taking part in the 1929 New York City Easter Sunday parade, causing a scandal while linking smoking with the emancipation of women. The tobacco industry would continue to market lifestyle in selling cigarettes into the 1960s.

Emotional appeals have likewise long been a facet of political campaigns. In the 1860 US presidential election, Southern politicians and newspaper editors spread fears of what a “Black Republican” win would mean, painting horrific pictures of what the emancipation of slaves would do to the country. In the 2020 US presidential election, modern-day Republicans used Cuban Americans’ fears of socialism in ads on Spanish-language radio and messaging on social media. Because of the emotions involved, many voters believed the campaigns enough to let them influence their decisions.

The Internet has enabled new technologies of persuasion to go even further. Those seeking to influence others can collect and use data about targeted audiences to create personalized messaging. Tracking the websites a person visits, the searches they make online, and what they engage with on social media, persuasion technologies enable those who have access to such tools to better understand audiences and deliver more tailored messaging where audiences are likely to see it most. This information can be combined with data about other activities, such as offline shopping habits, the places a person visits, and the insurance they buy, to create a profile of them that can be used to develop persuasive messaging that is aimed at provoking a specific response.

Our senses of self, meanwhile, are increasingly shaped by our interaction with technology. The same digital environment where we read, search, and converse with our intimates enables marketers to take that data and turn it back on us. A modern-day Bernays no longer needs to ferret out the social causes that might inspire you or entice you—you’ve likely already shared that by your online behavior.

Some marketers posit that women feel less attractive on Mondays, particularly first thing in the morning—­and therefore that’s the best time to advertise cosmetics to them. The New York Times once experimented by predicting the moods of readers based on article content to better target ads, enabling marketers to find audiences when they were sad or fearful. Some music streaming platforms encourage users to disclose their current moods, which helps advertisers target subscribers based on their emotional states.

The phones in our pockets provide marketers with our location in real time, helping deliver geographically relevant ads, such as propaganda to those attending a political rally. This always-on digital experience enables marketers to know what we are doing—and when, where, and how we might be feeling at that moment.

All of this is not intended to be alarmist. It is important not to overstate the effectiveness of persuasive technologies. But while many of them are more smoke and mirrors than reality, it is likely that they will only improve over time. The technology already exists to help predict moods of some target audiences, pinpoint their location at any given time, and deliver fairly tailored and timely messaging. How far does that ability need to go before it erodes the autonomy of those targeted to make decisions of their own free will?

Right now, there are few legal or even moral limits on persuasion—and few answers regarding the effectiveness of such technologies. Before it is too late, the world needs to consider what is acceptable and what is over the line.

For example, it’s been long known that people are more receptive to advertisements made with people who look like them: in race, ethnicity, age, gender. Ads have long been modified to suit the general demographic of the television show or magazine they appear in. But we can take this further. The technology exists to take your likeness and morph it with a face that is demographically similar to you. The result is a face that looks like you, but that you don’t recognize. If that turns out to be more persuasive than coarse demographic targeting, is that okay?

Another example: Instead of just advertising to you when they detect that you are vulnerable, what if advertisers craft advertisements that deliberately manipulate your mood? In some ways, being able to place ads alongside content that is likely to provoke a certain emotional response enables advertisers to do this already. The only difference is that the media outlet claims it isn’t crafting the content to deliberately achieve this. But is it acceptable to actively prime a target audience and then to deliver persuasive messaging that fits the mood?

Further, emotion-based decision-making is not the rational type of slow thinking that ought to inform important civic choices such as voting. In fact, emotional thinking threatens to undermine the very legitimacy of the system, as voters are essentially provoked to move in whatever direction someone with power and money wants. Given the pervasiveness of digital technologies, and the often instant, reactive responses people have to them, how much emotion ought to be allowed in persuasive technologies? Is there a line that shouldn’t be crossed?

Finally, for most people today, exposure to information and technology is pervasive. The average US adult spends more than eleven hours a day interacting with media. Such levels of engagement lead to huge amounts of personal data generated and aggregated about you—your preferences, interests, and state of mind. The more those who control persuasive technologies know about us, what we are doing, how we are feeling, when we feel it, and where we are, the better they can tailor messaging that provokes us into action. The unsuspecting target is grossly disadvantaged. Is it acceptable for the same services to both mediate our digital experience and to target us? Is there ever such a thing as too much targeting?

The power dynamics of persuasive technologies are changing. Access to tools and technologies of persuasion is not egalitarian. Many require large amounts of both personal data and computation power, turning modern persuasion into an arms race where the better resourced will be better placed to influence audiences.

At the same time, the average person has very little information about how these persuasion technologies work, and is thus unlikely to understand how their beliefs and opinions might be manipulated by them. What’s more, there are few rules in place to protect people from abuse of persuasion technologies, much less even a clear articulation of what constitutes a level of manipulation so great it effectively takes agency away from those targeted. This creates a positive feedback loop that is dangerous for society.

In the 1970s, there was widespread fear about so-called subliminal messaging: images of sex and death supposedly hidden in the details of print advertisements, as in the curls of smoke in cigarette ads and the ice cubes of liquor ads. It was pretty much all a hoax, but that didn’t stop the Federal Trade Commission and the Federal Communications Commission from declaring it an illegal persuasive technology. That’s how worried people were about being manipulated without their knowledge and consent.

It is time to have a serious conversation about limiting the technologies of persuasion. This must begin by articulating what is permitted and what is not. If we don’t, the powerful persuaders will become even more powerful.

This essay was written with Alicia Wanless, and previously appeared in Foreign Policy.

EDITED TO ADD: Ukrainian translation.

Posted on December 14, 2020 at 2:03 PM
