Entries Tagged "essays"


On the Twitter Hack

Twitter was hacked this week. Not a few people’s Twitter accounts, but all of Twitter. Someone compromised the entire Twitter network, probably by stealing the log-in credentials of one of Twitter’s system administrators. Those are the people trusted to ensure that Twitter functions smoothly.

The hacker used that access to send tweets from a variety of popular and trusted accounts, including those of Joe Biden, Bill Gates, and Elon Musk, as part of a mundane scam — stealing bitcoin — but it’s easy to envision more nefarious scenarios. Imagine a government using this sort of attack against another government, coordinating a series of fake tweets from hundreds of politicians and other public figures the day before a major election, to affect the outcome. Or to escalate an international dispute. Done well, it would be devastating.

Whether the hackers had access to Twitter direct messages is not known. These DMs are not end-to-end encrypted, meaning that they are unencrypted inside Twitter’s network and could have been available to the hackers. Those messages — between world leaders, industry CEOs, reporters and their sources, health organizations — are much more valuable than bitcoin. (If I were a national-intelligence agency, I might even use a bitcoin scam to mask my real intelligence-gathering purpose.) Back in 2018, Twitter said it was exploring encrypting those messages, but it hasn’t yet.
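
As a concrete illustration of the distinction, here is a minimal sketch of end-to-end encryption using the PyNaCl library (the library choice is mine; Twitter has not said what design it was exploring). The message is encrypted on the sender's device and decrypted only on the recipient's, so the platform relays ciphertext it cannot read.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# The platform in this sketch only ever handles ciphertext it cannot read.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message that only Bob's private key can decrypt.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"Meet at 9 -- keep this quiet.")

# This is all the platform (or an attacker inside it) ever sees.
print("what the server stores:", ciphertext.hex()[:48], "...")

# Bob decrypts on his own device.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print("what Bob reads:", plaintext.decode())
```

In a design like this, someone who compromised the platform's administrative tools could still delete or withhold messages, but could not read the ones users had already exchanged.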

Internet communications platforms — such as Facebook, Twitter, and YouTube — are crucial in today’s society. They’re how we communicate with one another. They’re how our elected leaders communicate with us. They are essential infrastructure. Yet they are run by for-profit companies with little government oversight. This is simply no longer sustainable. Twitter and companies like it are essential to our national dialogue, to our economy, and to our democracy. We need to start treating them that way, and that means both requiring them to do a better job on security and breaking them up.

In the Twitter case this week, the hacker’s tactics weren’t particularly sophisticated. We will almost certainly learn about security lapses at Twitter that enabled the hack, possibly including a SIM-swapping attack that targeted an employee’s cellular service provider, or maybe even a bribed insider. The FBI is investigating.

This kind of attack is known as a “class break.” Class breaks are endemic to computerized systems, and they’re not something that we as users can defend against with better personal security. It didn’t matter whether individual accounts had a complicated and hard-to-remember password, or two-factor authentication. It didn’t matter whether the accounts were normally accessed via a Mac or a PC. There was literally nothing any user could do to protect against it.

Class breaks are security vulnerabilities that break not just one system, but an entire class of systems. They might exploit a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs on that system’s software. Or a vulnerability in internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet. Or a single vulnerability in the Twitter network that allows an attacker to take over every account.

For Twitter users, this attack was a double whammy. Many people rely on Twitter’s authentication systems to know that someone who purports to be a certain celebrity, politician, or journalist is really that person. When those accounts were hijacked, trust in that system took a beating. And then, after the attack was discovered and Twitter temporarily shut down all verified accounts, the public lost a vital source of information.

There are many security technologies companies like Twitter can implement to better protect themselves and their users; that’s not the issue. The problem is economic, and fixing it requires doing two things. One is regulating these companies, and requiring them to spend more money on security. The second is reducing their monopoly power.

The security regulations for banks are complex and detailed. If a low-level banking employee were caught messing around with people’s accounts, or if she mistakenly gave her log-in credentials to someone else, the bank would be severely fined. Depending on the details of the incident, senior banking executives could be held personally liable. The threat of these actions helps keep our money safe. Yes, it costs banks money; sometimes it severely cuts into their profits. But the banks have no choice.

The opposite is true for these tech giants. They get to decide what level of security you have on your accounts, and you have no say in the matter. If you are offered security and privacy options, it’s because they decided you can have them. There is no regulation. There is no accountability. There isn’t even any transparency. Do you know how secure your data is on Facebook, or in Apple’s iCloud, or anywhere? You don’t. No one except those companies does. Yet they’re crucial to the country’s national security. And they’re the rare consumer product or service allowed to operate without significant government oversight.

For example, President Donald Trump’s Twitter account wasn’t hacked as Joe Biden’s was, because that account has “special protections,” the details of which we don’t know. We also don’t know which other world leaders have those protections, or the decision process surrounding who gets them. Are they manual? Can they scale? Can all verified accounts have them? Your guess is as good as mine.

In addition to security measures, the other solution is to break up the tech monopolies. Companies like Facebook and Twitter have so much power because they are so large, and they face no real competition. This is a national-security risk as well as a personal-security risk. Were there 100 different Twitter-like companies, and enough compatibility so that all their feeds could merge into one interface, this attack wouldn’t have been such a big deal. More important, the risk of a similar but more politically targeted attack wouldn’t be so great. If there were competition, different platforms would offer different security options, as well as different posting rules, different authentication guidelines — different everything. Competition is how our economy works; it’s how we spur innovation. Monopolies have more power to do what they want in the quest for profits, even if it harms people along the way.

This wasn’t Twitter’s first security problem involving trusted insiders. In 2017, on his last day of work, an employee shut down President Donald Trump’s account. In 2019, two people were charged with spying for the Saudi government while they were Twitter employees.

Maybe this hack will serve as a wake-up call. But if past incidents involving Twitter and other companies are any indication, it won’t. Underspending on security, and letting society pay the eventual price, is far more profitable. I don’t blame the tech companies. Their corporate mandate is to make as much money as is legally possible. Fixing this requires changes in the law, not changes in the hearts of the company’s leaders.

This essay previously appeared on TheAtlantic.com.

Posted on July 20, 2020 at 8:49 AM

The Security Value of Inefficiency

For decades, we have prized efficiency in our economy. We strive for it. We reward it. In normal times, that’s a good thing. Running just at the margins is efficient. A single just-in-time global supply chain is efficient. Consolidation is efficient. And that’s all profitable. Inefficiency, on the other hand, is waste. Extra inventory is inefficient. Overcapacity is inefficient. Using many small suppliers is inefficient. Inefficiency is unprofitable.

But inefficiency is essential security, as the COVID-19 pandemic is teaching us. All of the overcapacity that has been squeezed out of our healthcare system: we now wish we had it. All of the redundancy in our food production that has been consolidated away: we want that, too. We need our old, local supply chains — not the single global ones that are so fragile in this crisis. And we want our local restaurants and businesses to survive, not just the national chains.

We have lost much inefficiency to the market in the past few decades. Investors have become very good at noticing any fat in every system and swooping down to monetize those redundant assets. The winner-take-all mentality that has permeated so many industries squeezes any inefficiencies out of the system.

This drive for efficiency leads to brittle systems that function properly when everything is normal but break under stress. And when they break, everyone suffers. The less fortunate suffer and die. The more fortunate are merely hurt, and perhaps lose their freedoms or their future. But even the extremely fortunate suffer — maybe not in the short term, but in the long term from the constriction of the rest of society.

Efficient systems have limited ability to deal with system-wide economic shocks. Those shocks are coming with increased frequency. They’re caused by global pandemics, yes, but also by climate change, by financial crises, by political crises. If we want to be secure against these crises and more, we need to add inefficiency back into our systems.

I don’t simply mean that we need to make our food production, or healthcare system, or supply chains sloppy and wasteful. We need a certain kind of inefficiency, and it depends on the system in question. Sometimes we need redundancy. Sometimes we need diversity. Sometimes we need overcapacity.

The market isn’t going to supply any of these things, least of all in a strategic capacity that will result in resilience. What’s necessary to make any of this work is regulation.

First, we need to enforce antitrust laws. Our meat supply chain is brittle because there are limited numbers of massive meatpacking plants — now disease factories — rather than lots of smaller slaughterhouses. Our retail supply chain is brittle because a few national companies and websites dominate. We need multiple companies offering alternatives to a single product or service. We need more competition, more niche players. We need more local companies, more domestic corporate players, and diversity in our international suppliers. Competition provides all of that, while monopolies suck that out of the system.

The second thing we need is specific regulations that require certain inefficiencies. This isn’t anything new. Every safety system we have is, to some extent, an inefficiency. This is true for fire escapes on buildings, lifeboats on cruise ships, and multiple ways to deploy the landing gear on aircraft. Not having any of those things would make the underlying systems more efficient, but also less safe. It’s also true for the internet itself, originally designed with extensive redundancy as a Cold War security measure.

With those two things in place, the market can work its magic to provide for these strategic inefficiencies as cheaply and as effectively as possible. As long as there are competitors who are vying with each other, and there aren’t competitors who can reduce the inefficiencies and undercut the competition, these inefficiencies just become part of the price of whatever we’re buying.

The government is the entity that steps in and enforces a level playing field instead of a race to the bottom. Smart regulation addresses the long-term need for security, and ensures it’s not continuously sacrificed to short-term considerations.

We have largely been content to ignore the long term and let Wall Street run our economy as efficiently as it can. That’s no longer sustainable. We need inefficiency — the right kind in the right way — to ensure our security. No, it’s not free. But it’s worth the cost.

This essay previously appeared in Quartz.

EDITED TO ADD (7/14): A related piece by Dan Geer.

Posted on July 2, 2020 at 9:26 AM

Security of Health Information

The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.

That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic — when countries struggle to manage not just the outbreak but its social, economic, and political fallout — is especially tempting for interference.

In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.

More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.

Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts — many of which had previously been tied to Russia — had been seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.

As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States’ poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.

Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC’s influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports as in the past, but have several layers of software to store and transmit the data.

Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab’s servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital’s network of digital devices. One such device connected to a single hospital’s network could, in theory, be used to hack into the CDC’s entire COVID-19 database.

In practice, hacking deep into a hospital’s systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital’s network via the public Wi-Fi system. Once inside, they could move through most of the hospital’s databases and diagnostic systems. Gaining control of the hospital’s unencrypted image database, the researchers inserted malware that altered healthy patients’ CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time­ — and only after being alerted that some of the CTs had been manipulated.

Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security’s BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.

Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine’s national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia’s oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.

Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country’s ability to respond.

Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information — if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.

In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country’s digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America’s health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States’, that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.

This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.

EDITED TO ADD: On last week’s squid post, there was a big conversation regarding COVID-19. Many of the comments straddled the line between what are and aren’t the core topics. Yesterday I deleted a bunch for being off-topic. Then I reconsidered and republished some of what I deleted.

Going forward, comments about COVID-19 will be restricted to the security and risk implications of the virus. This includes cybersecurity, security, risk management, surveillance, and containment measures. Comments that stray off those topics will be removed. By clarifying this, I hope to keep the conversation on-topic while also allowing discussion of the security implications of current events.

Thank you for your patience and forbearance on this.

Posted on March 5, 2020 at 6:10 AM

Policy vs. Technology

Sometime around 1993 or 1994, during the first Crypto Wars, I was part of a group of cryptography experts that went to Washington to advocate for strong encryption. Matt Blaze and Ron Rivest were with me; I don’t remember who else. We met with then-Massachusetts Representative Ed Markey. (He didn’t become a senator until 2013.) Back then, he and Vermont Senator Patrick Leahy were the most knowledgeable on this issue and our biggest supporters against government backdoors. They still are.

Markey was against forcing encrypted phone providers to implement the NSA’s Clipper Chip in their devices, but wanted us to reach a compromise with the FBI regardless. This completely startled us techies, who thought having the right answer was enough. It was at that moment that I learned an important difference between technologists and policy makers. Technologists want solutions; policy makers want consensus.

Since then, I have become more immersed in policy discussions. I have spent more time with legislators, advised advocacy organizations like EFF and EPIC, and worked with policy-minded think tanks in the United States and around the world. I teach cybersecurity policy and technology at the Harvard Kennedy School of Government. My most recent two books, Data and Goliath — about surveillance — and Click Here to Kill Everybody — about IoT security — are really about the policy implications of technology.

Over that time, I have observed many other differences between technologists and policy makers — differences that we in cybersecurity need to understand if we are to translate our technological solutions into viable policy outcomes.

Technologists don’t try to consider all of the use cases of a given technology. We tend to build something for the uses we envision, and hope that others can figure out new and innovative ways to extend what we created. We love it when there is a new use for a technology that we never considered and that changes the world. And while we might be good at security around the use cases we envision, we are regularly blindsided when it comes to new uses or edge cases. (The authentication risks posed by someone’s intimate partner are a good example.)

Policy doesn’t work that way; it’s specifically focused on use. It focuses on people and what they do. Policy makers can’t create policy around a piece of technology without understanding how it is used — how all of it is used.

Policy is often driven by exceptional events, like the FBI’s desire to break the encryption on the San Bernardino shooter’s iPhone. (The PATRIOT Act is the most egregious example I can think of.) Technologists tend to look at more general use cases, like the overall value of strong encryption to societal security. Policy tends to focus on the past, making existing systems work or correcting wrongs that have happened. It’s hard to imagine policy makers creating laws around VR systems, because they don’t yet exist in any meaningful way. Technology is inherently future focused. Technologists try to imagine better systems, or future flaws in present systems, and work to improve things.

As technologists, we iterate. It’s how we write software. It’s how we field products. We know we can’t get it right the first time, so we have developed all sorts of agile systems to deal with that fact. Policy making is often the opposite. U.S. federal laws take months or years to negotiate and pass, and after that the issue doesn’t get addressed again for a decade or more. It is much more critical to get it right the first time, because the effects of getting it wrong are long lasting. (See, for example, parts of the GDPR.) Sometimes regulatory agencies can be more agile. The courts can also iterate policy, but it’s slower.

Along similar lines, the two groups work in very different time frames. Engineers, conditioned by Moore’s law, have long thought of 18 months as the maximum time to roll out a new product, and now think in terms of continuous deployment of new features. As I said previously, policy makers tend to think in terms of multiple years to get a law or regulation in place, and then more years as the case law builds up around it so everyone knows what it really means. It’s like tortoises and hummingbirds.

Technology is inherently global. It is often developed with local sensibilities according to local laws, but it necessarily has global reach. Policy is always jurisdictional. This difference is causing all sorts of problems for the global cloud services we use every day. The providers are unable to operate their global systems in compliance with more than 200 different — and sometimes conflicting — national requirements. Policy makers are often unimpressed with claims of inability; laws are laws, they say, and if Facebook can translate its website into French for the French, it can also implement their national laws.

Technology and policy both use concepts of trust, but differently. Technologists tend to think of trust in terms of controls on behavior. We’re getting better — NIST’s recent work on trust is a good example — but we have a long way to go. For example, Google’s Trust and Safety Department does a lot of AI and ethics work largely focused on technological controls. Policy makers think of trust in more holistic societal terms: trust in institutions, trust as the ability not to worry about adverse outcomes, consumer confidence. This dichotomy explains how techies can claim bitcoin is trusted because of the strong cryptography, but policy makers can’t imagine calling a system trustworthy when you lose all your money if you forget your encryption key.

Policy is how society mediates how individuals interact with society. Technology has the potential to change how individuals interact with society. The conflict between these two causes considerable friction, as technologists want policy makers to get out of the way and not stifle innovation, and policy makers want technologists to stop moving fast and breaking so many things.

Finally, techies know that code is law­ — that the restrictions and limitations of a technology are more fundamental than any human-created legal anything. Policy makers know that law is law, and tech is just tech. We can see this in the tension between applying existing law to new technologies and creating new law specifically for those new technologies.

Yes, these are all generalizations and there are exceptions. It’s also not all either/or. Great technologists and policy makers can see the other perspectives. The best policy makers know that for all their work toward consensus, they won’t make progress by redefining pi as three. Thoughtful technologists look beyond the immediate user demands to the ways attackers might abuse their systems, and design against those adversaries as well. These aren’t two alien species engaging in first contact, but cohorts who can each learn and borrow tools from the other. Too often, though, neither party tries.

In October, I attended the first ACM Symposium on Computer Science and the Law. Google counsel Brian Carver talked about his experience with the few computer science grad students who would attend his Intellectual Property and Cyberlaw classes every year at UC Berkeley. One of the first things he would do was give the students two different cases to read. The cases had nearly identical facts, and the judges who’d ruled on them came to exactly opposite conclusions. The law students took this in stride; it’s the way the legal system works when it’s wrestling with a new concept or idea. But it shook the computer science students. They were appalled that there wasn’t a single correct answer.

But that’s not how law works, and that’s not how policy works. As the technologies we’re creating become more central to society, and as we in technology continue to move into the public sphere and become part of the increasingly important policy debates, it is essential that we learn these lessons. Gone are the days when we were creating purely technical systems and our work ended at the keyboard and screen. Now we’re building complex socio-technical systems that are literally creating a new world. And while it’s easy to dismiss policy makers as doing it wrong, it’s important to understand that they’re not. Policy making has been around a lot longer than the Internet or computers or any technology. And the essential challenges of this century will require both groups to work together.

This essay previously appeared in IEEE Security & Privacy.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on February 21, 2020 at 5:54 AM

Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program before a new statewide law declaring it illegal took effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.
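
To give a sense of how little infrastructure this kind of identification requires, here is a rough sketch using the scapy library: it logs the hardware (MAC) addresses of nearby devices as they probe for Wi-Fi networks. The interface name is an assumption, and modern phones randomize these addresses some of the time, which blunts but does not eliminate the technique.

```python
# Rough sketch: log nearby Wi-Fi devices by the MAC addresses in their probe
# requests. Requires scapy (pip install scapy), root privileges, and a wireless
# interface already in monitor mode; the name "wlan0mon" is an assumption.
from datetime import datetime
from scapy.all import sniff, Dot11, Dot11ProbeReq

seen = {}

def log_device(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2  # transmitter address of the probing device
        if mac and mac not in seen:
            seen[mac] = datetime.now()
            print(f"{seen[mac]:%H:%M:%S}  new device: {mac}")

sniff(iface="wlan0mon", prn=log_device, store=False)
```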

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are — using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.

There is a huge — and almost entirely unregulated — data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are, it’s that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies — and governments — to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
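
That last step can be shown with a toy example (all data invented): the moment a single purchase attaches a real name to a tracking identifier, every “anonymous” record carrying that identifier is retroactively attached to the person.

```python
# Toy illustration: a pseudonymous tracking ID becomes a real identity the
# moment any single record links the two. All data here is invented.
browsing_log = [
    {"cookie": "c-7f3a", "visited": "clinic-finder.example"},
    {"cookie": "c-7f3a", "visited": "divorce-lawyers.example"},
    {"cookie": "c-9b21", "visited": "news.example"},
]

# One credit-card purchase made under the same cookie attaches a name to it.
purchases = [{"cookie": "c-7f3a", "name": "Jane Doe", "item": "shoes"}]

identities = {p["cookie"]: p["name"] for p in purchases}

for event in browsing_log:
    who = identities.get(event["cookie"], "(still anonymous)")
    print(f"{who} visited {event['visited']}")
```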

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s usage of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. It’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which lead to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on January 27, 2020 at 12:21 PM

Artificial Personas and Public Discourse

Presidential campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites. It’s writing convincing op-eds on topics in the news (though there are limitations). And it’s being used to bulk up “pink-slime journalism” — websites meant to appear like legitimate local news outlets but that publish propaganda instead.

There’s a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them — maybe half — were fake, using stolen identities. These comments were also crude: 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn’t stand up to even cursory scrutiny.
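
The “cursory scrutiny” those comments failed can be as simple as measuring how alike they are. Here is a sketch with invented comment text; an analysis at the scale of 22 million comments would hash or cluster rather than compare every pair, but the signal is the same.

```python
# Sketch: flag public comments that are near-duplicates of one another, the
# signature of template-generated submissions. The comment text is invented.
from difflib import SequenceMatcher
from itertools import combinations

comments = [
    "I strongly urge the FCC to keep the existing net neutrality rules.",
    "I strongly ask the FCC to keep the current net neutrality rules.",
    "Net neutrality hurts small ISPs and should be repealed immediately.",
]

for (i, a), (j, b) in combinations(enumerate(comments), 2):
    similarity = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    if similarity > 0.8:
        print(f"comments {i} and {j} look templated (similarity {similarity:.2f})")
```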

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won’t be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi “We all have trust in Mohammed bin Salman” tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos — sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there’s commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it’ll be domestic political groups. Maybe it’ll be the candidates themselves. Most likely, it’ll be everybody. The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots — a proposed California law would require bots to identify themselves — but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don’t have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that’ll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it’s becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

This essay previously appeared in TheAtlantic.com.

EDITED TO ADD: Jamie Susskind wrote a similar essay.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

EDITED TO ADD (6/4): This essay has been translated into Portuguese.

Posted on January 13, 2020 at 8:21 AM

Technology and Policymakers

Technologists and policymakers largely inhabit two separate worlds. It’s an old problem, one that the British scientist CP Snow identified in a 1959 essay entitled The Two Cultures. He called them sciences and humanities, and pointed to the split as a major hindrance to solving the world’s problems. The essay was influential — but 60 years later, nothing has changed.

When Snow was writing, the two cultures theory was largely an interesting societal observation. Today, it’s a crisis. Technology is now deeply intertwined with policy. We’re building complex socio-technical systems at all levels of our society. Software constrains behavior with an efficiency that no law can match. It’s all changing fast; technology is literally creating the world we all live in, and policymakers can’t keep up. Getting it wrong has become increasingly catastrophic. Surviving the future depends on bringing technologists and policymakers together.

Consider artificial intelligence (AI). This technology has the potential to augment human decision-making, eventually replacing notoriously subjective human processes with something fairer, more consistent, faster and more scalable. But it also has the potential to entrench bias and codify inequity, and to act in ways that are unexplainable and undesirable. It can be hacked in new ways, giving attackers, from criminals to nation-states, new capabilities to disrupt and harm. How do we avoid the pitfalls of AI while benefiting from its promise? Or, more specifically, where and how should government step in and regulate what is largely a market-driven industry? The answer requires a deep understanding of both the policy tools available to modern society and the technologies of AI.

But AI is just one of many technological areas that need policy oversight. We also need to tackle the increasingly critical cybersecurity vulnerabilities in our infrastructure. We need to understand both the role of social media platforms in disseminating politically divisive content, and what technology can and cannot do to mitigate its harm. We need policy around the rapidly advancing technologies of bioengineering, such as genome editing and synthetic biology, lest advances cause problems for our species and planet. We’re barely keeping up with regulations on food and water safety — let alone energy policy and climate change. Robotics will soon be a common consumer technology, and we are not ready for it at all.

Addressing these issues will require policymakers and technologists to work together from the ground up. We need to create an environment where technologists get involved in public policy – where there is a viable career path for what has come to be called “public-interest technologists.”

The concept isn’t new, even if the phrase is. There are already professionals who straddle the worlds of technology and policy. They come from the social sciences and from computer science. They work in data science, or tech policy, or public-focused computer science. They worked in the Bush and Obama White Houses, or in academia and NGOs. The problem is that there are too few of them; they are all exceptions and they are all exceptional. We need to find them, support them, and scale up whatever the process is that creates them.

There are two aspects to creating a scalable career path for public-interest technologists, and you can think of them as the problems of supply and demand. In the long term, supply will almost certainly be the bigger problem. There simply aren’t enough technologists who want to get involved in public policy. This will only become more critical as technology further permeates our society. We can’t begin to calculate the number of them that our society will need in the coming years and decades.

Fixing this supply problem requires changes in educational curricula, from childhood through college and beyond. Science and technology programs need to include mandatory courses in ethics, social science, policy and human-centered design. We need joint degree programs to provide even more integrated curricula. We need ways to involve people from a variety of backgrounds and capabilities. We need to foster opportunities for technologists to do public-interest work on the side, as part of their more traditional jobs, or for a few years in the middle of their conventional careers through designed sabbaticals or fellowships. Public service needs to be part of an academic career. We need to create, nurture and compensate people who aren’t entirely technologists or policymakers, but instead an amalgamation of the two. Public-interest technology needs to be a respected career choice, even if it will never pay what a technologist can make at a tech firm.

But while the supply side is the harder problem, the demand side is the more immediate problem. Right now, there aren’t enough places to go for scientists or technologists who want to do public policy work, and the ones that exist tend to be underfunded and in environments where technologists are unappreciated. There aren’t enough positions on legislative staffs, in government agencies, at NGOs or in the press. There aren’t enough teaching positions and fellowships at colleges and universities. There aren’t enough policy-focused technological projects. In short, not enough policymakers realize that they need scientists and technologists — preferably those with some policy training — as part of their teams.

To make effective tech policy, policymakers need to better understand technology. For some reason, ignorance about technology isn’t seen as a deficiency among our elected officials, and this is a problem. It is no longer okay to not understand how the internet, machine learning — or any other core technologies — work.

This doesn’t mean policymakers need to become tech experts. We have long expected our elected officials to regulate highly specialized areas of which they have little understanding. It’s been manageable because those elected officials have people on their staff who do understand those areas, or because they trust other elected officials who do. Policymakers need to realize that they need technologists on their policy teams, and to accept well-established scientific findings as fact. It is also no longer okay to discount technological expertise merely because it contradicts your political biases.

The evolution of public health policy serves as an instructive model. Health policy is a field that includes both policy experts who know a lot about the science and keep abreast of health research, and biologists and medical researchers who work closely with policymakers. Health policy is often a specialization at policy schools. We live in a world where the importance of vaccines is widely accepted and well-understood by policymakers, and is written into policy. Our policies on global pandemics are informed by medical experts. This serves society well, but it wasn’t always this way. Health policy was not always part of public policy. People lived through a lot of terrible health crises before policymakers figured out how to actually talk and listen to medical experts. Today we are facing a similar situation with technology.

Another parallel is public-interest law. Lawyers work in all parts of government and in many non-governmental organizations, crafting policy or just lawyering in the public interest. Every attorney at a major law firm is expected to devote some time to public-interest cases; it’s considered part of a well-rounded career. No law firm looks askance at an attorney who takes two years out of his career to work in a public-interest capacity. A tech career needs to look more like that.

In his book Future Politics, Jamie Susskind writes: “Politics in the twentieth century was dominated by a central question: how much of our collective life should be determined by the state, and what should be left to the market and civil society? For the generation now approaching political maturity, the debate will be different: to what extent should our lives be directed and controlled by powerful digital systems — and on what terms?”

I teach cybersecurity policy at the Harvard Kennedy School of Government. Because that question is fundamentally one of economics — and because my institution is a product of both the 20th century and that question — its faculty is largely staffed by economists. But because today’s question is a different one, the institution is now hiring policy-focused technologists like me.

If we’re honest with ourselves, it was never okay for technology to be separate from policy. But today, amid what we’re starting to call the Fourth Industrial Revolution, the separation is much more dangerous. We need policymakers to recognize this danger, and to welcome a new generation of technologists from every persuasion to help solve the socio-technical policy problems of the 21st century. We need to create ways to speak tech to power — and power needs to open the door and let technologists in.

This essay previously appeared on the World Economic Forum blog.

Posted on November 14, 2019 at 7:04 AM

Supply-Chain Security and Trust

The United States government’s continuing disagreement with the Chinese company Huawei underscores a much larger problem with computer technologies in general: We have no choice but to trust them completely, and it’s impossible to verify that they’re trustworthy. Solving this problem — which is increasingly a national security issue — will require us to both make major policy changes and invent new technologies.

The Huawei problem is simple to explain. The company is based in China and subject to the rules and dictates of the Chinese government. The government could require Huawei to install back doors into the 5G routers it sells abroad, allowing the government to eavesdrop on communications or — even worse — take control of the routers during wartime. Since the United States will rely on those routers for all of its communications, we become vulnerable by building our 5G backbone on Huawei equipment.

It’s obvious that we can’t trust computer equipment from a country we don’t trust, but the problem is much more pervasive than that. The computers and smartphones you use are not built in the United States. Their chips aren’t made in the United States. The engineers who design and program them come from over a hundred countries. Thousands of people have the opportunity, acting alone, to slip a back door into the final product.

There’s more. Open-source software packages are increasingly targeted by groups installing back doors. Fake apps in the Google Play store illustrate vulnerabilities in our software distribution systems. The NotPetya worm was distributed by a fraudulent update to a popular Ukrainian accounting package, illustrating vulnerabilities in our update systems. Hardware chips can be back-doored at the point of fabrication, even if the design is secure. The National Security Agency exploited the shipping process to subvert Cisco routers intended for the Syrian telephone company. The overall problem is that of supply-chain security, because every part of the supply chain can be attacked.

And while nation-state threats like China and Huawei (or Russia and the antivirus company Kaspersky a couple of years earlier) make the news, many of the vulnerabilities I described above are being exploited by cybercriminals.

Policy solutions involve forcing companies to open their technical details to inspection, including the source code of their products and the designs of their hardware. Huawei and Kaspersky have offered this sort of openness as a way to demonstrate that they are trustworthy. This is not a worthless gesture, and it helps, but it’s not nearly enough. Too many back doors can evade this kind of inspection.

Technical solutions fall into two basic categories, both currently beyond our reach. One is to improve the technical inspection processes for products whose designers provide source code and hardware design specifications, and for products that arrive without any transparency information at all. In both cases, we want to verify that the end product is secure and free of back doors. Sometimes we can do this for some classes of back doors: We can inspect source code (this is how a Linux back door was discovered and removed in 2003) or the hardware design, which becomes a cleverness battle between attacker and defender.
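
To give a sense of how subtle such a back door can be, here is a small, hypothetical C fragment modeled loosely on that 2003 Linux incident, in which a single character turned what looked like a permission check into a way to grant root; the names and magic values here are invented for illustration.

    #include <stdio.h>

    /* Hypothetical fragment modeled loosely on the 2003 Linux kernel
     * back-door attempt: a single "=" where "==" belongs. The whole
     * condition evaluates to false, so nothing visible happens, but
     * the caller's uid has quietly been set to 0 (root). */

    struct task { int uid; };

    #define MAGIC_OPTIONS 0x3   /* attacker-chosen flag combination */

    static int handle_request(struct task *t, int options)
    {
        /* Reads like "reject the request if a root caller passes the
         * magic options"; actually makes any caller root whenever the
         * magic options are passed. */
        if ((options == MAGIC_OPTIONS) && (t->uid = 0))
            return -1;
        return 0;
    }

    int main(void)
    {
        struct task t = { 1000 };          /* ordinary, unprivileged user */
        handle_request(&t, MAGIC_OPTIONS); /* returns 0, looks harmless   */
        printf("uid is now %d\n", t.uid);  /* prints 0                    */
        return 0;
    }

A reviewer scanning a large diff can easily read that line as a comparison; the 2003 change was caught largely because it appeared in a source-code mirror without a matching change record, not because someone noticed the missing character.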

This is an area that needs more research. Today, the advantage goes to the attacker. It’s hard to ensure that the hardware and software you examine are the same as what you get, and it’s too easy to create back doors that slip past inspection. And while we can find and correct some of these supply-chain attacks, we won’t find them all. It’s a needle-in-a-haystack problem, except we don’t know what a needle looks like. We need technologies, possibly based on artificial intelligence, that can inspect systems more thoroughly and faster than humans can. We need them quickly.

The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, “You have to presume a dirty network.” Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?

It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it’s how we can have highly resilient distributed systems like Google’s network even though none of the individual components are particularly good. It’s also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.
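
To make the reliability version of that idea concrete, here is a toy sketch in C, with the component implementations as hypothetical stand-ins: run the same computation through independently sourced parts and accept an answer only when a majority agree, so that no single faulty or subverted component can silently decide the outcome.

    #include <stdio.h>

    /* Toy sketch of redundancy with voting: three independently sourced
     * implementations of the same function, cross-checked so that one
     * misbehaving component cannot silently change the result. */

    static int impl_a(int x) { return x * x; }
    static int impl_b(int x) { return x * x; }
    static int impl_c(int x) { return x * x + 1; }  /* faulty or subverted */

    static int majority(int a, int b, int c, int *out)
    {
        if (a == b || a == c) { *out = a; return 0; }
        if (b == c)           { *out = b; return 0; }
        return -1;  /* no agreement: fail loudly rather than trust anyone */
    }

    int main(void)
    {
        int result;
        if (majority(impl_a(7), impl_b(7), impl_c(7), &result) == 0)
            printf("agreed result: %d\n", result);
        else
            printf("components disagree; refusing to proceed\n");
        return 0;
    }

Voting like this buys reliability against independent faults; getting the same assurance against an adversary who can compromise several components at once, or the voter itself, is a much harder problem.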

Security is a lot harder than reliability. We don’t even really know how to build secure systems out of secure parts, let alone out of parts and processes that we can’t trust and that are almost certainly being subverted by governments and criminals around the world. And current security technologies are nowhere near good enough to defend against these increasingly sophisticated attacks. So while this is an important part of the solution, and something we need to focus research on, it’s not going to solve our near-term problems.

At the same time, all of these problems are getting worse as computers and networks become more critical to personal and national security. The value of 5G isn’t for you to watch videos faster; it’s for things talking to things without bothering you. These things (cars, appliances, power plants, smart cities) increasingly affect the world in a direct physical manner. They’re increasingly autonomous, using A.I. and other technologies to make decisions without human intervention. The risk from Chinese back doors into our networks and computers isn’t that their government will listen in on our conversations; it’s that they’ll turn the power off or make all the cars crash into one another.

All of this doesn’t leave us with many options for today’s supply-chain problems. We still have to presume a dirty network, as well as back-doored computers and phones, and we can clean up only a fraction of the vulnerabilities. Citing the lack of non-Chinese alternatives for some of the communications hardware, some are already calling for us to abandon attempts to secure 5G against Chinese back doors and to focus instead on developing secure American or European alternatives for 6G networks. It’s not nearly enough to solve the problem, but it’s a start.

Perhaps these half-solutions are the best we can do. Live with the problem today, and accelerate research to solve the problem for the future. These are research projects on a par with the Internet itself. They need government funding, like the Internet itself. And, also like the Internet, they’re critical to national security.

Critically, these systems must be as secure as we can make them. As former FCC Commissioner Tom Wheeler has explained, there’s a lot more to securing 5G than keeping Chinese equipment out of the network. This means we have to give up the fantasy that law enforcement can have back doors to aid criminal investigations without also weakening these systems. The world uses one network, and there can only be one answer: Either everyone gets to spy, or no one gets to spy. And as these systems become more critical to national security, a network secure from all eavesdroppers becomes more important.

This essay previously appeared in the New York Times.

Posted on September 30, 2019 at 6:36 AM

On Chinese "Spy Trains"

The trade war with China has reached a new industry: subway cars. Congress is considering legislation that would prevent the world’s largest train maker, the Chinese-owned CRRC Corporation, from competing on new contracts in the United States.

Part of the reasoning behind this legislation is economic, and stems from worries about Chinese industries undercutting the competition and dominating key global industries. But another part involves fears about national security. News articles talk about “spy trains,” and the possibility that the train cars might surreptitiously monitor their passengers’ faces, movements, conversations or phone calls.

This is a complicated topic. There is definitely a national security risk in buying computer infrastructure from a country you don’t trust. That’s why there is so much worry about Chinese-made equipment for the new 5G wireless networks.

It’s also why the United States has blocked the cybersecurity company Kaspersky from selling its Russian-made antivirus products to US government agencies. Meanwhile, the chairman of China’s technology giant Huawei has pointed to NSA spying disclosed by Edward Snowden as a reason to mistrust US technology companies.

The reason these threats are so real is that it’s not difficult to hide surveillance or control infrastructure inside computer components, and until that hidden capability is switched on, it’s very difficult to find.

Like every other piece of modern machinery, train cars are filled with computers, and while it’s certainly possible to produce a subway car with enough surveillance apparatus to turn it into a “spy train,” in practice it doesn’t make much sense. The risk of discovery is too great, and the payoff would be too low. Like the United States, China is more likely to try to get data from the US communications infrastructure, or from the large Internet companies that already collect data on our every move as part of their business model.

While it’s unlikely that China would bother spying on commuters using subway cars, it would be much less surprising if a tech company offered free Internet on subways in exchange for surveillance and data collection. Or if the NSA used those corporate systems for its own surveillance purposes (just as the agency has spied on in-flight cell phone calls, according to an investigation by the Intercept and Le Monde, citing documents provided by Edward Snowden). That’s an easier, and more fruitful, attack path.

We have credible reports that the Chinese hacked Gmail around 2010, and there are ongoing concerns about both censorship and surveillance by the Chinese social-networking company TikTok. (TikTok’s parent company has told the Washington Post that the app doesn’t send American users’ info back to Beijing, and that the Chinese government does not influence the app’s use in the United States.)

Even so, these examples illustrate an important point: there’s no escaping the surveillance potential built into modern technology. You have little choice but to rely on the companies that build your computers and write your software, whether in your smartphones, your 5G wireless infrastructure, or your subway cars. And those systems are so complicated that they can be secretly programmed to operate against your interests.

Last year, Le Monde reported that the Chinese government bugged the computer network of the headquarters of the African Union in Addis Ababa. China had built and outfitted the organization’s new headquarters as a foreign aid gift, and reportedly configured the network to secretly send copies of confidential data to Shanghai every night between 2012 and 2017. China denied having done so, of course.

If there’s any lesson from all of this, it’s that everybody spies using the Internet. The United States does it. Our allies do it. Our enemies do it. Many countries do it to each other, with their success largely dependent on how sophisticated their tech industries are.

China dominates the subway car manufacturing industry because of its low prices, the same reason it dominates the 5G hardware industry. Whether these low prices are because the companies are more efficient than their competitors or because they’re being unfairly subsidized by the Chinese government is a matter to be determined at trade negotiations.

Finally, Americans must understand that higher prices are an inevitable result of banning cheaper tech products from China.

We might willingly pay the higher prices because we want domestic control of our telecommunications infrastructure. We might willingly pay more because of some protectionist belief that global trade is somehow bad. But if we make these decisions to protect ourselves, we need to make them deliberately and rationally, recognizing both the risks and the costs. And while I’m worried about a 5G infrastructure built using Chinese hardware, I’m not worried about our subway cars.

This essay originally appeared on CNN.com.

EDITED TO ADD: I had a lot of trouble with CNN’s legal department over this essay. They were very reluctant to call out the US and its allies for similar behavior, and they spent a lot of time adding caveats to statements that I didn’t think needed them. They wouldn’t let me link to this Intercept article about US, French, and German infiltration of supply chains, or even to the NSA document from the Snowden archives that backed up those statements.

Posted on September 26, 2019 at 6:21 AM

