Entries Tagged "economics of security"

An Examination of the Bug Bounty Marketplace

Here’s a fascinating report: “Bounty Everything: Hackers and the Making of the Global Bug Marketplace.” From a summary:

…researchers Ryan Ellis and Yuan Stevens provide a window into the working lives of hackers who participate in “bug bounty” programs—programs that hire hackers to discover and report bugs or other vulnerabilities in their systems. This report illuminates the risks and insecurities for hackers as gig workers, and how bounty programs rely on vulnerable workers to fix their vulnerable systems.

Ellis and Stevens’s research offers a historical overview of bounty programs and an analysis of contemporary bug bounty platforms—the new intermediaries that now structure the vast majority of bounty work. The report draws directly from interviews with hackers, who recount that bounty programs seem willing to integrate a diverse workforce in their practices, but only on terms that deny them the job security and access enjoyed by core security workforces. These inequities go far beyond the difference experienced by temporary and permanent employees at companies such as Google and Apple, contend the authors. The global bug bounty workforce is doing piecework—they are paid for each bug, and the conditions under which a bug is paid vary greatly from one company to the next.

Posted on January 17, 2022 at 6:16 AM

Illegal Content and the Blockchain

Security researchers have recently discovered a botnet with a novel defense against takedowns. Normally, authorities can disable a botnet by taking over its command-and-control server. With nowhere to go for instructions, the botnet is rendered useless. But over the years, botnet designers have come up with ways to make this counterattack harder. Now the content-delivery network Akamai has reported on a new method: a botnet that uses the Bitcoin blockchain ledger. Since the blockchain is globally accessible and hard to take down, the botnet’s operators appear to be safe.

We can skip the mathematics of Bitcoin’s blockchain, but to appreciate the colossal implications here, you need to understand one concept. Blockchains are a type of “distributed ledger”: a record of all transactions since the beginning, and everyone using the blockchain needs to have access to—and reference—a copy of it. What if someone puts illegal material in the blockchain? Either everyone has a copy of it, or the blockchain’s security fails.

To be fair, not absolutely everyone who uses a blockchain holds a copy of the entire ledger. Many who buy cryptocurrencies like Bitcoin and Ethereum don’t bother using the ledger to verify their purchase. Many don’t actually hold the currency outright, and instead trust an exchange to do the transactions and hold the coins. But people need to continually verify the blockchain’s history on the ledger for the system to be secure. If they stopped, then it would be trivial to forge coins. That’s how the system works.
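
To make the verification idea concrete, here is a minimal toy sketch in Python. It is not Bitcoin’s actual block format or proof-of-work: each block simply commits to the hash of its predecessor, so anyone holding a copy of the ledger can recompute the hashes and detect tampering with any earlier entry.

```python
# Toy hash chain, for illustration only -- not Bitcoin's real block format
# or proof-of-work. Each block commits to the hash of the previous block,
# so altering any historical entry breaks every later link.
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64              # placeholder "genesis" predecessor
    for data in entries:
        h = block_hash(prev, data)
        chain.append({"prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False                    # history has been altered
        prev = block["hash"]
    return True

ledger = build_chain(["Alice pays Bob 1 BTC", "Bob pays Carol 2 BTC"])
print(verify_chain(ledger))                      # True
ledger[0]["data"] = "Alice pays Mallory 1 BTC"   # forge an old entry
print(verify_chain(ledger))                      # False
```

Real verifiers also check proof-of-work and transaction signatures, but the principle is the same: the history is only trustworthy if people keep checking it.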

Some years ago, people started noticing all sorts of things embedded in the Bitcoin blockchain. There are digital images, including one of Nelson Mandela. There’s the Bitcoin logo, and the original paper describing Bitcoin by its alleged founder, the pseudonymous Satoshi Nakamoto. There are advertisements, and several prayers. There’s even illegal pornography and leaked classified documents. All of these were put in by anonymous Bitcoin users. But none of this, so far, appears to seriously threaten those in power in governments and corporations. Once someone adds something to the Bitcoin ledger, it becomes sacrosanct. Removing something requires a fork of the blockchain, in which Bitcoin fragments into multiple parallel cryptocurrencies (and associated blockchains). Forks happen, rarely, but never yet because of legal coercion. And repeated forking would destroy Bitcoin’s stature as a stable(ish) currency.
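
One common way arbitrary data ends up in the ledger is Bitcoin’s OP_RETURN opcode, which lets a transaction include a provably unspendable output carrying a small data payload. The sketch below is illustrative only: it constructs just the output script bytes, not a complete signed transaction, and real relay policy caps the payload size (commonly around 80 bytes).

```python
# Sketch: construct a Bitcoin OP_RETURN output script embedding arbitrary bytes.
# Illustration only -- this builds the script, not a full signed transaction.

OP_RETURN = 0x6a  # opcode marking an unspendable, data-carrying output

def op_return_script(data: bytes) -> bytes:
    # This sketch handles only a direct push of 1-75 bytes (single length byte);
    # relay policy typically limits OP_RETURN payloads to about 80 bytes.
    if not 0 < len(data) <= 75:
        raise ValueError("payload must be 1-75 bytes for a direct push")
    return bytes([OP_RETURN, len(data)]) + data

payload = "text that becomes a permanent part of the ledger".encode()
print(op_return_script(payload).hex())
```

Data has also been smuggled in through fake output addresses and other tricks; once mined into a block, it is replicated to everyone who keeps a copy of the chain.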

The botnet’s designers are using this idea to create an unblockable means of coordination, but the implications are much greater. Imagine someone using this idea to evade government censorship. Most Bitcoin mining happens in China. What if someone added a bunch of Chinese-censored Falun Gong texts to the blockchain? What if someone added a type of political speech that Singapore routinely censors? Or cartoons that Disney holds the copyright to?

In Bitcoin and most other public blockchains, there are no central, trusted authorities. Anyone in the world can perform transactions or become a miner. Everyone is equal to the extent that they have the hardware and electricity to perform cryptographic computations.

This openness is also a vulnerability, one that opens the door to asymmetric threats and small-time malicious actors. Anyone can put information in the one and only Bitcoin blockchain. Again, that’s how the system works.

Over the last three decades, the world has witnessed the power of open networks: blockchains, social media, the very web itself. What makes them so powerful is that their value is related not just to the number of users, but the number of potential links between users. This is Metcalfe’s law—value in a network is quadratic, not linear, in the number of users—and every open network since has followed its prophecy.

As Bitcoin has grown, its monetary value has skyrocketed, even if its uses remain unclear. With no barrier to entry, the blockchain space has been a Wild West of innovation and lawlessness. But today, many prominent advocates suggest Bitcoin should become a global, universal currency. In this context, asymmetric threats like embedded illegal data become a major challenge.

The philosophy behind Bitcoin traces to the earliest days of the open internet. Articulated in John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, it was and is the ethos of tech startups: Code is more trustworthy than institutions. Information is meant to be free, and nobody has the right—nor should anyone have the ability—to control it.

But information must reside somewhere. Code is written by and for people, stored on computers located within countries, and embedded within the institutions and societies we have created. To trust information is to trust its chain of custody and the social context it comes from. Neither code nor information is value-neutral, nor ever free of human context.

Today, Barlow’s vision is a mere shadow; every society controls the information its people can access. Some of this control is through overt censorship, as China controls information about Taiwan, Tiananmen Square, and the Uyghurs. Some of this is through civil laws designed by the powerful for their benefit, as with Disney and US copyright law, or UK libel law.

Bitcoin and blockchains like it are on a collision course with these laws. What happens when the interests of the powerful, with the law on their side, are pitted against an open blockchain? Let’s imagine how our various scenarios might play out.

China first: In response to Falun Gong texts in the blockchain, the People’s Republic decrees that any miners processing blocks with banned content will be taken offline—their IPs will be blacklisted. This causes a hard fork of the blockchain at the point just before the banned content. China might do this under the guise of a “patriotic” messaging campaign, publicly stating that it’s merely maintaining financial sovereignty from Western banks. Then it uses paid influencers and moderators on social media to pump the China Bitcoin fork, through both partisan comments and transactions. Two distinct forks would soon emerge, one behind China’s Great Firewall and one outside. Other countries with similar governmental and media ecosystems—Russia, Singapore, Myanmar—might consider following suit, creating multiple national Bitcoin forks. These would operate independently, under mandates to censor unacceptable transactions from then on.

Disney’s approach would play out differently. Imagine the company announces it will sue any ISP that hosts copyrighted content, starting with networks hosting the biggest miners. (Disney has sued to enforce its intellectual property rights in China before.) After some legal pressure, the networks cut the miners off. The miners reestablish themselves on another network, but Disney keeps the pressure on. Eventually miners get pushed further and further off of mainstream network providers, and resort to tunneling their traffic through an anonymity service like Tor. That causes a major slowdown in the already slow (because of the mathematics) Bitcoin network. Disney might issue takedown requests for Tor exit nodes, causing the network to slow to a crawl. It could persist like this for a long time without a fork. Or the slowdown could cause people to jump ship, either by forking Bitcoin or switching to another cryptocurrency without the copyrighted content.

And then there’s illegal pornographic content and leaked classified data. These have been on the Bitcoin blockchain for over five years, and nothing has been done about them. Just as in the botnet example, it may be that they do not threaten existing power structures enough to warrant takedowns. This could easily change if Bitcoin becomes a popular way to share child sexual abuse material. Simply having such illegal images on your hard drive is a felony, which could have significant repercussions for anyone involved in Bitcoin.

Whichever scenario plays out, this may be the Achilles heel of Bitcoin as a global currency.

If an open network such as a blockchain were threatened by a powerful organization—China’s censors, Disney’s lawyers, or the FBI trying to take down a more dangerous botnet—it could fragment into multiple networks. That’s not just a nuisance, but an existential risk to Bitcoin.

Suppose Bitcoin were fragmented into 10 smaller blockchains, perhaps by geography: one in China, another in the US, and so on. These fragments might retain their original users, and by ordinary logic, nothing would have changed. But Metcalfe’s law implies that the overall value of these blockchain fragments combined would be a mere tenth of the original. That is because the value of an open network relates to how many others you can communicate with—and, in a blockchain, transact with. Since the security of the Bitcoin currency is achieved through expensive computations, fragmented blockchains are also easier to attack in a conventional manner—through a 51 percent attack—by an organized attacker. This is especially the case if the smaller blockchains all use the same hash function, as they would here.
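
The “mere tenth” figure is just the Metcalfe’s-law arithmetic. A back-of-the-envelope sketch in Python (the user count is hypothetical and the constant of proportionality is set to 1):

```python
# Back-of-the-envelope Metcalfe's-law arithmetic for the fragmentation scenario.
# Assumes network value is proportional to the square of the user count;
# the user count is hypothetical and the proportionality constant is 1.

def metcalfe_value(users: int) -> int:
    return users ** 2

total_users = 1_000_000
fragments = 10                      # e.g., one regional blockchain each

whole = metcalfe_value(total_users)
split = fragments * metcalfe_value(total_users // fragments)

print(split / whole)                # 0.1 -- the fragments retain a tenth of the value
```

In general, splitting a network into k equal fragments divides its Metcalfe value by k, since k(n/k)² = n²/k.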

Traditional currencies are generally not vulnerable to these sorts of asymmetric threats. There are no viable small-scale attacks against the US dollar, or almost any other fiat currency. The institutions and beliefs that give money its value are deep-seated, despite instances of currency hyperinflation.

The only notable attacks against fiat currencies are in the form of counterfeiting. Even in the past, when counterfeit bills were common, attacks could be thwarted. Counterfeiters require specialized equipment and are vulnerable to law enforcement discovery and arrest. Furthermore, most money today—even if it’s nominally in a fiat currency—doesn’t exist in paper form.

Bitcoin attracted a following for its openness and immunity from government control. Its goal is to create a world that replaces cultural power with cryptographic power: verification in code, not trust in people. But there is no such world. And today, that feature is a vulnerability. We really don’t know what will happen when the human systems of trust come into conflict with the trustless verification that makes blockchain currencies unique. Just last week we saw this exact attack on smaller blockchains—not Bitcoin yet. We are watching a public socio-technical experiment in the making, and we will witness its success or failure in the not-too-distant future.

This essay was written with Barath Raghavan, and previously appeared on Wired.com.

EDITED TO ADD (4/14): A research paper on erasing data from the Bitcoin blockchain.

Posted on March 17, 2021 at 6:10 AM

Should There Be Limits on Persuasive Technologies?

Persuasion is as old as our species. Both democracy and the market economy depend on it. Politicians persuade citizens to vote for them, or to support different policy positions. Businesses persuade consumers to buy their products or services. We all persuade our friends to accept our choice of restaurant, movie, and so on. It’s essential to society; we couldn’t get large groups of people to work together without it. But as with many things, technology is fundamentally changing the nature of persuasion. And society needs to adapt its rules of persuasion or suffer the consequences.

Democratic societies, in particular, are in dire need of a frank conversation about the role persuasion plays in them and how technologies are enabling powerful interests to target audiences. In a society where public opinion is a ruling force, there is always a risk of it being mobilized for ill purposes—such as provoking fear to encourage one group to hate another in a bid to win office, or targeting personal vulnerabilities to push products that might not benefit the consumer.

In this regard, the United States, already extremely polarized, sits on a precipice.

There have long been rules around persuasion. The US Federal Trade Commission enforces laws requiring that claims about products “must be truthful, not misleading, and, when appropriate, backed by scientific evidence.” Political advertisers must identify themselves in television ads. If someone abuses a position of power to force another person into a contract, undue influence can be argued to nullify that agreement. Yet there is more to persuasion than the truth, transparency, or simply applying pressure.

Persuasion also involves psychology, and that has been far harder to regulate. Using psychology to persuade people is not new. Edward Bernays, a pioneer of public relations and nephew to Sigmund Freud, made a marketing practice of appealing to the ego. His approach was to tie consumption to a person’s sense of self. In his 1928 book Propaganda, Bernays advocated engineering events to persuade target audiences as desired. In one famous stunt, he hired women to smoke cigarettes while taking part in the 1929 New York City Easter Sunday parade, causing a scandal while linking smoking with the emancipation of women. The tobacco industry would continue to market lifestyle in selling cigarettes into the 1960s.

Emotional appeals have likewise long been a facet of political campaigns. In the 1860 US presidential election, Southern politicians and newspaper editors spread fears of what a “Black Republican” win would mean, painting horrific pictures of what the emancipation of slaves would do to the country. In the 2020 US presidential election, modern-day Republicans used Cuban Americans’ fears of socialism in ads on Spanish-language radio and messaging on social media. Because of the emotions involved, many voters believed the campaigns enough to let them influence their decisions.

The Internet has enabled new technologies of persuasion to go even further. Those seeking to influence others can collect and use data about targeted audiences to create personalized messaging. By tracking the websites a person visits, the searches they make online, and what they engage with on social media, persuasion technologies enable those who have access to such tools to better understand audiences and deliver more tailored messaging where audiences are most likely to see it. This information can be combined with data about other activities, such as offline shopping habits, the places a person visits, and the insurance they buy, to create a profile of them that can be used to develop persuasive messaging that is aimed at provoking a specific response.

Our senses of self, meanwhile, are increasingly shaped by our interaction with technology. The same digital environment where we read, search, and converse with our intimates enables marketers to take that data and turn it back on us. A modern-day Bernays no longer needs to ferret out the social causes that might inspire you or entice you—you’ve likely already shared that by your online behavior.

Some marketers posit that women feel less attractive on Mondays, particularly first thing in the morning—and therefore that’s the best time to advertise cosmetics to them. The New York Times once experimented by predicting the moods of readers based on article content to better target ads, enabling marketers to find audiences when they were sad or fearful. Some music streaming platforms encourage users to disclose their current moods, which helps advertisers target subscribers based on their emotional states.

The phones in our pockets provide marketers with our location in real time, helping deliver geographically relevant ads, such as propaganda to those attending a political rally. This always-on digital experience enables marketers to know what we are doing—and when, where, and how we might be feeling at that moment.

None of this is intended to be alarmist. It is important not to overstate the effectiveness of persuasive technologies. But while many of them are more smoke and mirrors than reality, it is likely that they will only improve over time. The technology already exists to help predict the moods of some target audiences, pinpoint their location at any given time, and deliver fairly tailored and timely messaging. How far does that ability need to go before it erodes the autonomy of those targeted to make decisions of their own free will?

Right now, there are few legal or even moral limits on persuasion—and few answers regarding the effectiveness of such technologies. Before it is too late, the world needs to consider what is acceptable and what is over the line.

For example, it has long been known that people are more receptive to advertisements featuring people who look like them: in race, ethnicity, age, gender. Ads have long been modified to suit the general demographic of the television show or magazine they appear in. But we can take this further. The technology exists to take your likeness and morph it with a face that is demographically similar to you. The result is a face that looks like you, but that you don’t recognize. If that turns out to be more persuasive than coarse demographic targeting, is that okay?

Another example: Instead of just advertising to you when they detect that you are vulnerable, what if advertisers craft advertisements that deliberately manipulate your mood? In some ways, being able to place ads alongside content that is likely to provoke a certain emotional response enables advertisers to do this already. The only difference is that the media outlet claims it isn’t crafting the content to deliberately achieve this. But is it acceptable to actively prime a target audience and then to deliver persuasive messaging that fits the mood?

Further, emotion-based decision-making is not the rational type of slow thinking that ought to inform important civic choices such as voting. In fact, emotional thinking threatens to undermine the very legitimacy of the system, as voters are essentially provoked to move in whatever direction someone with power and money wants. Given the pervasiveness of digital technologies, and the often instant, reactive responses people have to them, how much emotion ought to be allowed in persuasive technologies? Is there a line that shouldn’t be crossed?

Finally, for most people today, exposure to information and technology is pervasive. The average US adult spends more than eleven hours a day interacting with media. Such levels of engagement lead to huge amounts of personal data generated and aggregated about you—your preferences, interests, and state of mind. The more those who control persuasive technologies know about us, what we are doing, how we are feeling, when we feel it, and where we are, the better they can tailor messaging that provokes us into action. The unsuspecting target is grossly disadvantaged. Is it acceptable for the same services to both mediate our digital experience and to target us? Is there ever such a thing as too much targeting?

The power dynamics of persuasive technologies are changing. Access to tools and technologies of persuasion is not egalitarian. Many require large amounts of both personal data and computing power, turning modern persuasion into an arms race where the better resourced will be better placed to influence audiences.

At the same time, the average person has very little information about how these persuasion technologies work, and is thus unlikely to understand how their beliefs and opinions might be manipulated by them. What’s more, there are few rules in place to protect people from abuse of persuasion technologies, much less even a clear articulation of what constitutes a level of manipulation so great it effectively takes agency away from those targeted. This creates a positive feedback loop that is dangerous for society.

In the 1970s, there was widespread fear about so-called subliminal messaging: the claim that images of sex and death were hidden in the details of print advertisements, as in the curls of smoke in cigarette ads and the ice cubes of liquor ads. It was pretty much all a hoax, but that didn’t stop the Federal Trade Commission and the Federal Communications Commission from declaring it an illegal persuasive technology. That’s how worried people were about being manipulated without their knowledge and consent.

It is time to have a serious conversation about limiting the technologies of persuasion. This must begin by articulating what is permitted and what is not. If we don’t, the powerful persuaders will become even more powerful.

This essay was written with Alicia Wanless, and previously appeared in Foreign Policy.

EDITED TO ADD: Ukrainian translation.

Posted on December 14, 2020 at 2:03 PM

The Security Value of Inefficiency

For decades, we have prized efficiency in our economy. We strive for it. We reward it. In normal times, that’s a good thing. Running just at the margins is efficient. A single just-in-time global supply chain is efficient. Consolidation is efficient. And that’s all profitable. Inefficiency, on the other hand, is waste. Extra inventory is inefficient. Overcapacity is inefficient. Using many small suppliers is inefficient. Inefficiency is unprofitable.

But inefficiency is essential security, as the COVID-19 pandemic is teaching us. All of the overcapacity that has been squeezed out of our healthcare system; we now wish we had it. All of the redundancy in our food production that has been consolidated away; we want that, too. We need our old, local supply chains—not the single global ones that are so fragile in this crisis. And we want our local restaurants and businesses to survive, not just the national chains.

We have lost much inefficiency to the market in the past few decades. Investors have become very good at noticing any fat in every system and swooping down to monetize those redundant assets. The winner-take-all mentality that has permeated so many industries squeezes any inefficiencies out of the system.

This drive for efficiency leads to brittle systems that function properly when everything is normal but break under stress. And when they break, everyone suffers. The less fortunate suffer and die. The more fortunate are merely hurt, and perhaps lose their freedoms or their future. But even the extremely fortunate suffer—maybe not in the short term, but in the long term from the constriction of the rest of society.

Efficient systems have limited ability to deal with system-wide economic shocks. Those shocks are coming with increased frequency. They’re caused by global pandemics, yes, but also by climate change, by financial crises, by political crises. If we want to be secure against these crises and more, we need to add inefficiency back into our systems.

I don’t simply mean that we need to make our food production, or healthcare system, or supply chains sloppy and wasteful. We need a certain kind of inefficiency, and it depends on the system in question. Sometimes we need redundancy. Sometimes we need diversity. Sometimes we need overcapacity.

The market isn’t going to supply any of these things, least of all in a strategic capacity that will result in resilience. What’s necessary to make any of this work is regulation.

First, we need to enforce antitrust laws. Our meat supply chain is brittle because there are limited numbers of massive meatpacking plants—now disease factories—rather than lots of smaller slaughterhouses. Our retail supply chain is brittle because a few national companies and websites dominate. We need multiple companies offering alternatives to a single product or service. We need more competition, more niche players. We need more local companies, more domestic corporate players, and diversity in our international suppliers. Competition provides all of that, while monopolies suck that out of the system.

The second thing we need is specific regulations that require certain inefficiencies. This isn’t anything new. Every safety system we have is, to some extent, an inefficiency. This is true for fire escapes on buildings, lifeboats on cruise ships, and multiple ways to deploy the landing gear on aircraft. Not having any of those things would make the underlying systems more efficient, but also less safe. It’s also true for the internet itself, originally designed with extensive redundancy as a Cold War security measure.

With those two things in place, the market can work its magic to provide for these strategic inefficiencies as cheaply and as effectively as possible. As long as there are competitors who are vying with each other, and there aren’t competitors who can reduce the inefficiencies and undercut the competition, these inefficiencies just become part of the price of whatever we’re buying.

The government is the entity that steps in and enforces a level playing field instead of a race to the bottom. Smart regulation addresses the long-term need for security, and ensures it’s not continuously sacrificed to short-term considerations.

We have largely been content to ignore the long term and let Wall Street run our economy as efficiently as it can. That’s no longer sustainable. We need inefficiency—the right kind in the right way—to ensure our security. No, it’s not free. But it’s worth the cost.

This essay previously appeared in Quartz.

EDITED TO ADD (7/14): A related piece by Dan Geer.

Posted on July 2, 2020 at 9:26 AM

Security and Human Behavior (SHB) 2020

Today is the second day of the thirteenth Workshop on Security and Human Behavior. It’s being hosted by the University of Cambridge, which in today’s world means we’re all meeting on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. We’ve done pretty well translating this format to video chat, including using the random breakout feature to put people into small groups.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, and twelfth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Posted on June 19, 2020 at 2:09 PM

On Cyber Warranties

Interesting article discussing cyber-warranties, and whether they are an effective way to transfer risk (and thereby overcome Akerlof’s “market for lemons”) or a marketing trick.

The conclusion:

Warranties must transfer non-negligible amounts of liability to vendors in order to meaningfully overcome the market for lemons. Our preliminary analysis suggests the majority of cyber warranties cover the cost of repairing the device alone. Only cyber-incident warranties cover first-party costs from cyber-attacks—why all such warranties were offered by firms selling intangible products is an open question. Consumers should question whether warranties can function as a costly signal when narrow coverage means vendors accept little risk.

Worse still, buyers cannot compare across cyber-incident warranty contracts due to the diversity of obligations and exclusions. Ambiguous definitions of the buyer’s obligations and excluded events create uncertainty over what is covered. Moving toward standardized terms and conditions may help consumers, as has been pursued in cyber insurance, but this is in tension with innovation and product diversity.

[…]

Theoretical work suggests both the breadth of the warranty and the price of a product determine whether the warranty functions as a quality signal. Our analysis has not touched upon the price of these products. It could be that firms with ineffective products pass the cost of the warranty on to buyers via higher prices. Future studies could analyze warranties and price together to probe this issue.

In conclusion, cyber warranties—particularly cyber-product warranties—do not transfer enough risk to be a market fix as imagined in Woods. But this does not mean they are pure marketing tricks either. The most valuable feature of warranties is in preventing vendors from exaggerating what their products can do. Consumers who read the fine print can place greater trust in marketing claims so long as the functionality is covered by a cyber-incident warranty.

Posted on March 26, 2020 at 6:27 AM

Technology and Policymakers

Technologists and policymakers largely inhabit two separate worlds. It’s an old problem, one that the British scientist C.P. Snow identified in a 1959 essay entitled The Two Cultures. He called them the sciences and the humanities, and pointed to the split as a major hindrance to solving the world’s problems. The essay was influential—but 60 years later, nothing has changed.

When Snow was writing, the two cultures theory was largely an interesting societal observation. Today, it’s a crisis. Technology is now deeply intertwined with policy. We’re building complex socio-technical systems at all levels of our society. Software constrains behavior with an efficiency that no law can match. It’s all changing fast; technology is literally creating the world we all live in, and policymakers can’t keep up. Getting it wrong has become increasingly catastrophic. Surviving the future depends on bringing technologists and policymakers together.

Consider artificial intelligence (AI). This technology has the potential to augment human decision-making, eventually replacing notoriously subjective human processes with something fairer, more consistent, faster, and more scalable. But it also has the potential to entrench bias and codify inequity, and to act in ways that are unexplainable and undesirable. It can be hacked in new ways, giving attackers, from criminals to nation-states, new capabilities to disrupt and harm. How do we avoid the pitfalls of AI while benefiting from its promise? Or, more specifically, where and how should government step in and regulate what is largely a market-driven industry? The answer requires a deep understanding of both the policy tools available to modern society and the technologies of AI.

But AI is just one of many technological areas that need policy oversight. We also need to tackle the increasingly critical cybersecurity vulnerabilities in our infrastructure. We need to understand both the role of social media platforms in disseminating politically divisive content, and what technology can and cannot do to mitigate its harm. We need policy around the rapidly advancing technologies of bioengineering, such as genome editing and synthetic biology, lest advances cause problems for our species and planet. We’re barely keeping up with regulations on food and water safety—let alone energy policy and climate change. Robotics will soon be a common consumer technology, and we are not ready for it at all.

Addressing these issues will require policymakers and technologists to work together from the ground up. We need to create an environment where technologists get involved in public policy – where there is a viable career path for what has come to be called “public-interest technologists.”

The concept isn’t new, even if the phrase is. There are already professionals who straddle the worlds of technology and policy. They come from the social sciences and from computer science. They work in data science, or tech policy, or public-focused computer science. They worked in the Bush and Obama White Houses, or in academia and NGOs. The problem is that there are too few of them; they are all exceptions and they are all exceptional. We need to find them, support them, and scale up whatever the process is that creates them.

There are two aspects to creating a scalable career path for public-interest technologists, and you can think of them as the problems of supply and demand. In the long term, supply will almost certainly be the bigger problem. There simply aren’t enough technologists who want to get involved in public policy. This will only become more critical as technology further permeates our society. We can’t begin to calculate the number of them that our society will need in the coming years and decades.

Fixing this supply problem requires changes in educational curricula, from childhood through college and beyond. Science and technology programs need to include mandatory courses in ethics, social science, policy and human-centered design. We need joint degree programs to provide even more integrated curricula. We need ways to involve people from a variety of backgrounds and capabilities. We need to foster opportunities for people to do public-interest tech work on the side, as part of their more traditional jobs, or for a few years in the middle of a more conventional career through sabbaticals or fellowships designed for that purpose. Public service needs to be part of an academic career. We need to create, nurture and compensate people who aren’t entirely technologists or policymakers, but instead an amalgamation of the two. Public-interest technology needs to be a respected career choice, even if it will never pay what a technologist can make at a tech firm.

But while the supply side is the harder problem, the demand side is the more immediate problem. Right now, there aren’t enough places to go for scientists or technologists who want to do public policy work, and the ones that exist tend to be underfunded and in environments where technologists are unappreciated. There aren’t enough positions on legislative staffs, in government agencies, at NGOs or in the press. There aren’t enough teaching positions and fellowships at colleges and universities. There aren’t enough policy-focused technological projects. In short, not enough policymakers realize that they need scientists and technologists—preferably those with some policy training—as part of their teams.

To make effective tech policy, policymakers need to better understand technology. For some reason, ignorance about technology isn’t seen as a deficiency among our elected officials, and this is a problem. It is no longer okay to not understand how the internet, machine learning—or any other core technologies—work.

This doesn’t mean policymakers need to become tech experts. We have long expected our elected officials to regulate highly specialized areas of which they have little understanding. It’s been manageable because those elected officials have people on their staff who do understand those areas, or because they trust other elected officials who do. Policymakers need to realize that they need technologists on their policy teams, and to accept well-established scientific findings as fact. It is also no longer okay to discount technological expertise merely because it contradicts your political biases.

The evolution of public health policy serves as an instructive model. Health policy is a field that includes both policy experts who know a lot about the science and keep abreast of health research, and biologists and medical researchers who work closely with policymakers. Health policy is often a specialization at policy schools. We live in a world where the importance of vaccines is widely accepted and well-understood by policymakers, and is written into policy. Our policies on global pandemics are informed by medical experts. This serves society well, but it wasn’t always this way. Health policy was not always part of public policy. People lived through a lot of terrible health crises before policymakers figured out how to actually talk and listen to medical experts. Today we are facing a similar situation with technology.

Another parallel is public-interest law. Lawyers work in all parts of government and in many non-governmental organizations, crafting policy or just lawyering in the public interest. Every attorney at a major law firm is expected to devote some time to public-interest cases; it’s considered part of a well-rounded career. No law firm looks askance at an attorney who takes two years out of his career to work in a public-interest capacity. A tech career needs to look more like that.

In his book Future Politics, Jamie Susskind writes: “Politics in the twentieth century was dominated by a central question: how much of our collective life should be determined by the state, and what should be left to the market and civil society? For the generation now approaching political maturity, the debate will be different: to what extent should our lives be directed and controlled by powerful digital systems—and on what terms?”

I teach cybersecurity policy at the Harvard Kennedy School of Government. Because that question is fundamentally one of economics—and because my institution is a product of both the 20th century and that question—its faculty consists largely of economists. But because today’s question is a different one, the institution is now hiring policy-focused technologists like me.

If we’re honest with ourselves, it was never okay for technology to be separate from policy. But today, amid what we’re starting to call the Fourth Industrial Revolution, the separation is much more dangerous. We need policymakers to recognize this danger, and to welcome a new generation of technologists from every persuasion to help solve the socio-technical policy problems of the 21st century. We need to create ways to speak tech to power—and power needs to open the door and let technologists in.

This essay previously appeared on the World Economic Forum blog.

Posted on November 14, 2019 at 7:04 AM
