Blog: March 2019 Archives

NSA-Inspired Vulnerability Found in Huawei Laptops

This is an interesting story of a serious vulnerability in a Huawei driver that Microsoft found. The vulnerability is similar in style to the NSA’s DOUBLEPULSAR that was leaked by the Shadow Brokers—believed to be the Russian government—and it’s obvious that this attack copied that technique.

What is less clear is whether the vulnerability—which has been fixed—was put into the Huawei driver accidentally or on purpose.

Posted on March 29, 2019 at 6:11 AM • 14 Comments

Malware Installed in Asus Computers through Hacked Update Process

Kaspersky Lab is reporting on a new supply chain attack they call “ShadowHammer.”

In January 2019, we discovered a sophisticated supply chain attack involving the ASUS Live Update Utility. The attack took place between June and November 2018 and according to our telemetry, it affected a large number of users.

[…]

The goal of the attack was to surgically target an unknown pool of users, which were identified by their network adapters’ MAC addresses. To achieve this, the attackers had hardcoded a list of MAC addresses in the trojanized samples and this list was used to identify the actual intended targets of this massive operation. We were able to extract more than 600 unique MAC addresses from over 200 samples used in this attack. Of course, there might be other samples out there with different MAC addresses in their list.

We believe this to be a very sophisticated supply chain attack, which matches or even surpasses the Shadowpad and the CCleaner incidents in complexity and techniques. The reason that it stayed undetected for so long is partly due to the fact that the trojanized updaters were signed with legitimate certificates (eg: “ASUSTeK Computer Inc.”). The malicious updaters were hosted on the official liveupdate01s.asus[.]com and liveupdate01.asus[.]com ASUS update servers.

The sophistication of the attack leads to the speculation that a nation-state—and one of the cyber powers—is responsible.

As I have previously written, supply chain security is “an incredibly complex problem.” These attacks co-opt the very mechanisms we need to trust for our security. And the international nature of our industry results in an array of vulnerabilities that are very hard to secure.

Kim Zetter has a really good article on this. Check if your computer is infected here, or use this diagnostic tool from Asus.

Another news article.
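A check of this sort works by hashing the machine’s MAC addresses and comparing the result against the list recovered from the trojanized updaters. Here is a minimal sketch in Python; the choice of MD5 and the placeholder digest are illustrative assumptions, not the actual algorithm or values from the samples:

    import hashlib
    import uuid

    # Placeholder digest only -- the real trojanized updaters carried their own
    # hardcoded list of hashed target MAC addresses.
    TARGET_HASHES = {
        "00000000000000000000000000000000",
    }

    def local_mac() -> str:
        """Return this machine's primary MAC address as aa:bb:cc:dd:ee:ff."""
        node = uuid.getnode()
        return ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -8, -8))

    def is_targeted(mac: str) -> bool:
        """Hash the MAC address and compare it against the hardcoded list."""
        return hashlib.md5(mac.encode("ascii")).hexdigest() in TARGET_HASHES

    if __name__ == "__main__":
        mac = local_mac()
        print(mac, "is on the (illustrative) target list" if is_targeted(mac) else "is not on the list")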

Posted on March 28, 2019 at 6:42 AM • 35 Comments

Programmers Who Don't Understand Security Are Poor at Security

A university study confirmed the obvious: if you pay a random bunch of freelance programmers a small amount of money to write security software, they’re not going to do a very good job at it.

In an experiment that involved 43 programmers hired via the Freelancer.com platform, University of Bonn academics have discovered that developers tend to take the easy way out and write code that stores user passwords in an unsafe manner.

For their study, the German academics asked a group of 260 Java programmers to write a user registration system for a fake social network.

Of the 260 developers, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL to create the user registration component.

Of the 43, academics paid half of the group with €100, and the other half with €200, to determine if higher pay made a difference in the implementation of password security features.

Further, they divided the developer group a second time, prompting half of the developers to store passwords in a secure manner, and leaving the other half to store passwords in their preferred method—hence forming four quarters of developers paid €100 and prompted to use a secure password storage method (P100), developers paid €200 and prompted to use a secure password storage method (P200), devs paid €100 but not prompted for password security (N100), and those paid €200 but not prompted for password security (N200).
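For context, storing passwords “in a secure manner” in a registration system like this generally means a salted, deliberately slow hash rather than plaintext or a single fast hash. A minimal sketch using only the Python standard library; the iteration count and other parameters are illustrative, not anything specified by the study:

    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000  # illustrative work factor, not a value from the study

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random per-user salt."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        """Recompute the digest and compare in constant time."""
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(digest, expected)

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("wrong guess", salt, stored)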

I don’t know why anyone would expect this group of people to implement a good secure password system. Look at how they were hired. Look at the scope of the project. Look at what they were paid. I’m sure they grabbed the first thing they found on GitHub that did the job.

I’m not very impressed with the study or its conclusions.

Posted on March 27, 2019 at 6:37 AM • 55 Comments

Mail Fishing

Not email, paper mail:

Thieves, often at night, use string to lower glue-covered rodent traps or bottles coated with an adhesive down the chute of a sidewalk mailbox. This bait attaches to the envelopes inside, and the fish in this case—mail containing gift cards, money orders or checks, which can be altered with chemicals and cashed—are reeled out slowly.

In response, the US Post Office is introducing a more secure mailbox:

The mail slots are only large enough for letters, meaning sending even small packages will require a trip to the post office. The opening is also equipped with a mechanism that grabs at a letter once inserted, making it difficult to retract.

The crime has become more common in the past few years.

Posted on March 25, 2019 at 9:39 AM • 24 Comments

Friday Squid Blogging: New Research on Squid Camouflage

From the New York Times:

Now, a paper published last week in Nature Communications suggests that their chromatophores, previously thought to be mainly pockets of pigment embedded in their skin, are also equipped with tiny reflectors made of proteins. These reflectors aid the squid to produce such a wide array of colors, including iridescent greens and blues, within a second of passing in front of a new background. The research reveals that by using tricks found in other parts of the animal kingdom—like shimmering butterflies and peacocks—squid are able to combine multiple approaches to produce their vivid camouflage.

Researchers studied Doryteuthis pealeii, or the longfin squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on March 22, 2019 at 4:45 PM • 73 Comments

First Look Media Shutting Down Access to Snowden NSA Archives

The Daily Beast is reporting that First Look Media—home of The Intercept and Glenn Greenwald—is shutting down access to the Snowden archives.

The Intercept had been the home for Greenwald’s subset of Snowden’s NSA documents since 2014, after he parted ways with the Guardian the year before. I don’t know the details of how the archive was stored, but it was offline and well secured—and it was available to journalists for research purposes. Many stories were published based on those archives over the years, albeit fewer in recent years.

The article doesn’t say what “shutting down access” means, but my guess is that it means that First Look Media will no longer make the archive available to outside journalists, and probably not to staff journalists, either. Reading between the lines, I think they will delete what they have.

This doesn’t mean that we’re done with the documents. Glenn Greenwald tweeted:

Both Laura & I have full copies of the archives, as do others. The Intercept has given full access to multiple media orgs, reporters & researchers. I’ve been looking for the right partner—an academic institution or research facility—that has the funds to robustly publish.

I’m sure there are still stories in those NSA documents, but with many of them a decade or more old, they are increasingly history and decreasingly current events. Every capability discussed in the documents needs to be read with a “and then they had ten years to improve this” mentality.

Eventually it’ll all become public, but not before it is 100% history and 0% current events.

Posted on March 21, 2019 at 5:52 AM • 18 Comments

Zipcar Disruption

This isn’t a security story, but it easily could have been. Last Saturday, Zipcar had a system outage: “an outage experienced by a third party telecommunications vendor disrupted connections between the company’s vehicles and its reservation software.”

That didn’t just mean people couldn’t get the cars they reserved. Sometimes it meant they couldn’t get the cars they were already driving to work:

Andrew Jones of Roxbury was stuck on hold with customer service for at least a half-hour while he and his wife waited inside a Zipcar that would not turn back on after they stopped to fill it up with gas.

“We were just waiting and waiting for the call back,” he said.

Customers in other states, including New York, California, and Oregon, reported a similar problem. One user who tweeted about issues with a Zipcar vehicle listed his location as Toronto.

Some, like Jones, stayed with the inoperative cars. Others, including Tina Penman in Portland, Ore., and Heather Reid in Cambridge, abandoned their Zipcar. Penman took an Uber home, while Reid walked from the grocery store back to her apartment.

This is a reliability issue that turns into a safety issue. Systems that touch the direct physical world like this need better fail-safe defaults.
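One concrete form a better fail-safe default could take: when the reservation backend is unreachable, the vehicle honors the most recent cached authorization instead of refusing to operate. The sketch below is hypothetical; the class, method names, and cache policy are invented for illustration and are not Zipcar’s actual design:

    import time

    class VehicleAccess:
        """Hypothetical sketch: cache the last successful authorization so a car
        still works for its current driver when the backend is unreachable."""

        def __init__(self, backend, cache_ttl_seconds=24 * 3600):
            self.backend = backend              # exposes check_authorization(vehicle_id, member_id)
            self.cache_ttl = cache_ttl_seconds
            self.last_ok = {}                   # (vehicle_id, member_id) -> timestamp

        def may_operate(self, vehicle_id, member_id):
            key = (vehicle_id, member_id)
            try:
                authorized = self.backend.check_authorization(vehicle_id, member_id)
            except (ConnectionError, TimeoutError):
                # Fail safe: fall back to the most recent successful authorization
                # rather than stranding a driver mid-trip.
                last = self.last_ok.get(key)
                return last is not None and (time.time() - last) < self.cache_ttl
            if authorized:
                self.last_ok[key] = time.time()
            return authorized

The underlying design choice is the same one behind offline-capable transit cards and hotel key systems: local state keeps the physical function working through a backend failure, and the backend reconciles afterward.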

Posted on March 20, 2019 at 12:38 PM • 14 Comments

An Argument that Cybersecurity Is Basically Okay

Andrew Odlyzko’s new essay is worth reading—“Cybersecurity is not very important”:

Abstract: There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. This continuing general progress of society suggests that cyber security is not very important. Adaptations to cyberspace of techniques that worked to protect the traditional physical world have been the main means of mitigating the problems that occurred. This “chewing gum and baling wire” approach is likely to continue to be the basic method of handling problems that arise, and to provide adequate levels of security.

I am reminded of these two essays. And, as I said in the blog post about those two essays:

This is true, and is something I worry will change in a world of physically capable computers. Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.

Posted on March 20, 2019 at 6:03 AM • 50 Comments

CAs Reissue Over One Million Weak Certificates

Turns out that the software a bunch of CAs used to generate public-key certificates was flawed: it created random serial numbers with only 63 bits of entropy instead of the required 64. That may not seem like a big deal to the layman, but losing that one bit halves the number of possible serial numbers. This really isn’t a security problem; the random serial numbers are there to protect against attacks that rely on weak hash functions, and we don’t allow those weak hash functions in certificates anymore. Still, it’s a good thing that the CAs are reissuing the certificates. The point of a standard is that it’s to be followed.
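The arithmetic: a compliant serial number has 64 random bits, or 2^64 possible values; forcing any single bit to a fixed value (for example, to keep the integer positive) leaves 2^63, exactly half. A short Python illustration; the bit-forcing shown is a generic example of this class of flaw, not a reproduction of any particular CA’s code:

    import secrets

    def serial_64_bits() -> int:
        """A compliant serial number: 64 random bits, 2**64 possible values."""
        return secrets.randbits(64)

    def serial_63_bits() -> int:
        """Forcing the top bit to zero (e.g., to keep the integer positive)
        leaves 63 random bits, exactly half as many possible values."""
        return secrets.randbits(64) & ~(1 << 63)

    print(f"64 random bits: {2**64:,} possible serial numbers")
    print(f"63 random bits: {2**63:,} possible serial numbers")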

Posted on March 18, 2019 at 6:23 AM • 26 Comments

I Was Cited in a Court Decision

An article I co-wrote—my first law journal article—was cited by the Massachusetts Supreme Judicial Court—the state supreme court—in a case on compelled decryption.

Here’s the first, in footnote 1:

We understand the word “password” to be synonymous with other terms that cell phone users may be familiar with, such as Personal Identification Number or “passcode.” Each term refers to the personalized combination of letters or digits that, when manually entered by the user, “unlocks” a cell phone. For simplicity, we use “password” throughout. See generally, Kerr & Schneier, Encryption Workarounds, 106 Geo. L.J. 989, 990, 994, 998 (2018).

And here’s the second, in footnote 5:

We recognize that ordinary cell phone users are likely unfamiliar with the complexities of encryption technology. For instance, although entering a password “unlocks” a cell phone, the password itself is not the “encryption key” that decrypts the cell phone’s contents. See Kerr & Schneier, supra at 995. Rather, “entering the [password] decrypts the [encryption] key, enabling the key to be processed and unlocking the phone. This two-stage process is invisible to the casual user.” Id. Because the technical details of encryption technology do not play a role in our analysis, they are not worth belaboring. Accordingly, we treat the entry of a password as effectively decrypting the contents of a cell phone. For a more detailed discussion of encryption technology, see generally Kerr & Schneier, supra.
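The two-stage process the court describes, in which the password does not decrypt the data directly but instead unlocks a stored encryption key, can be sketched as follows. This is a simplified illustration using PBKDF2 and the Fernet construction from the third-party Python cryptography package, not how any phone vendor actually implements it:

    import base64
    import hashlib
    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    def kek_from_password(password: str, salt: bytes) -> bytes:
        """Derive a key-encryption key (KEK) from the user's password."""
        raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded 32-byte key

    # Device setup: a random data-encryption key (DEK) encrypts the contents;
    # only a wrapped (encrypted) copy of the DEK is stored on the device.
    salt = b"illustrative-salt"               # a real device stores a random salt
    dek = Fernet.generate_key()
    wrapped_dek = Fernet(kek_from_password("hunter2", salt)).encrypt(dek)
    ciphertext = Fernet(dek).encrypt(b"the phone's contents")

    # Unlock: entering the password decrypts the key, and the key decrypts the data.
    recovered_dek = Fernet(kek_from_password("hunter2", salt)).decrypt(wrapped_dek)
    assert Fernet(recovered_dek).decrypt(ciphertext) == b"the phone's contents"

The practical consequence is the one the court relies on: entering the password triggers the key-unwrapping step invisibly, so from the user’s perspective the password simply “unlocks” the phone.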

Posted on March 15, 2019 at 2:38 PM • 7 Comments

Critical Flaw in Swiss Internet Voting System

Researchers have found a critical flaw in the Swiss Internet voting system. I was going to write an essay about how this demonstrates that Internet voting is a stupid idea and should never be attempted—and that this particular system should never be deployed, even if the flaw that was found is fixed—but Cory Doctorow beat me to it:

The belief that companies can be trusted with this power defies all logic, but it persists. Someone found Swiss Post’s embrace of the idea too odious to bear, and they leaked the source code that Swiss Post had shared under its nondisclosure terms, and then an international team of some of the world’s top security experts (including some of our favorites, like Matthew Green) set about analyzing that code, and (as every security expert who doesn’t work for an e-voting company has predicted since the beginning of time), they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.

And, as everyone who’s ever advocated for the right of security researchers to speak in public without permission from the companies whose products they were assessing has predicted since the beginning of time, Swiss Post and Scytl downplayed the importance of this objectively very, very, very important bug. Swiss Post’s position is that since the bug only allows elections to be stolen by Swiss Post employees, it’s not a big deal, because Swiss Post employees wouldn’t steal an election.

But when Swiss Post agreed to run the election, they promised an e-voting system based on “zero knowledge” proofs that would allow voters to trust the outcome of the election without having to trust Swiss Post. Swiss Post is now moving the goalposts, saying that it wouldn’t be such a big deal if you had to trust Swiss Post implicitly to trust the outcome of the election.

You might be thinking, “Well, what is the big deal? If you don’t trust the people administering an election, you can’t trust the election’s outcome, right?” Not really: we design election systems so that multiple, uncoordinated people all act as checks and balances on each other. To suborn a well-run election takes massive coordination at many polling- and counting-places, as well as independent scrutineers from different political parties, as well as outside observers, etc.

Read the whole thing. It’s excellent.

More info.

Posted on March 15, 2019 at 9:44 AM • 32 Comments

DARPA Is Developing an Open-Source Voting System

This sounds like a good development:

…a new $10 million contract the Defense Department’s Defense Advanced Research Projects Agency (DARPA) has launched to design and build a secure voting system that it hopes will be impervious to hacking.

The first-of-its-kind system will be designed by an Oregon-based firm called Galois, a longtime government contractor with experience in designing secure and verifiable systems. The system will use fully open source voting software, instead of the closed, proprietary software currently used in the vast majority of voting machines, which no one outside of voting machine testing labs can examine. More importantly, it will be built on secure open source hardware, made from special secure designs and techniques developed over the last year as part of a special program at DARPA. The voting system will also be designed to create fully verifiable and transparent results so that voters don’t have to blindly trust that the machines and election officials delivered correct results.

But DARPA and Galois won’t be asking people to blindly trust that their voting systems are secure—as voting machine vendors currently do. Instead they’ll be publishing source code for the software online and bring prototypes of the systems to the Def Con Voting Village this summer and next, so that hackers and researchers will be able to freely examine the systems themselves and conduct penetration tests to gauge their security. They’ll also be working with a number of university teams over the next year to have them examine the systems in formal test environments.

Posted on March 14, 2019 at 1:20 PM • 41 Comments

Judging Facebook's Privacy Shift

Facebook is making a new and stronger commitment to privacy. Last month, the company hired three of its most vociferous critics and installed them in senior technical positions. And on Wednesday, Mark Zuckerberg wrote that the company will pivot to focus on private conversations over the public sharing that has long defined the platform, even while conceding that “frankly we don’t currently have a strong reputation for building privacy protective services.”

There is ample reason to question Zuckerberg’s pronouncement: The company has made—and broken—many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook’s surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.

In security and privacy, the devil is always in the details—and Zuckerberg’s post provides none. But we’ll take him at his word and try to fill in some of the details here. What follows is a list of changes we should expect if Facebook is serious about changing its business model and improving user privacy.

How Facebook treats people on its platform

Increased transparency over advertiser and app accesses to user data. Today, Facebook users can download and view much of the data the company has about them. This is important, but it doesn’t go far enough. The company could be more transparent about what data it shares with advertisers and others and how it allows advertisers to select users they show ads to. Facebook could use its substantial skills in usability testing to help people understand the mechanisms advertisers use to show them ads or the reasoning behind what it chooses to show in user timelines. It could deliver on promises in this area.

Better—and more usable—privacy options. Facebook users have limited control over how their data is shared with other Facebook users and almost no control over how it is shared with Facebook’s advertisers, which are the company’s real customers. Moreover, the controls are buried deep behind complex and confusing menu options. To be fair, some of this is because privacy is complex, and it’s hard to understand the results of different options. But much of this is deliberate; Facebook doesn’t want its users to make their data private from other users.

The company could give people better control over how—and whether—their data is used, shared, and sold. For example, it could allow users to turn off individually targeted news and advertising. By this, we don’t mean simply making those advertisements invisible; we mean turning off the data flows into those tailoring systems. Finally, since most users stick to the default options when it comes to configuring their apps, a changing Facebook could tilt those defaults toward more privacy, requiring less tailoring most of the time.

More user protection from stalking. “Facebook stalking” is often thought of as “stalking light,” or “harmless.” But stalkers are rarely harmless. Facebook should acknowledge this class of misuse and work with experts to build tools that protect all of its users, especially its most vulnerable ones. Such tools should guide normal people away from creepiness and give victims power and flexibility to enlist aid from sources ranging from advocates to police.

Fully ending real-name enforcement. Facebook’s real-names policy, requiring people to use their actual legal names on the platform, hurts people such as activists, victims of intimate partner violence, police officers whose work makes them targets, and anyone with a public persona who wishes to have control over how they identify to the public. There are many ways Facebook can improve on this, from ending enforcement to allowing verified pseudonyms for everyone—not just celebrities like Lady Gaga. Doing so would mark a clear shift.

How Facebook runs its platform

Increased transparency of Facebook’s business practices. One of the hard things about evaluating Facebook is the effort needed to get good information about its business practices. When violations are exposed by the media, as they regularly are, we are all surprised at the different ways Facebook violates user privacy. Most recently, the company used phone numbers provided for two-factor authentication for advertising and networking purposes. Facebook needs to be both explicit and detailed about how and when it shares user data. In fact, a move from discussing “sharing” to discussing “transfers,” “access to raw information,” and “access to derived information” would be a visible improvement.

Increased transparency regarding censorship rules. Facebook makes choices about what content is acceptable on its site. Those choices are controversial, implemented by thousands of low-paid workers quickly applying unclear rules. These are tremendously hard problems without clear solutions. Even obvious rules like banning hateful words run into challenges when people try to legitimately discuss certain important topics. Whatever Facebook does in this regard, the company needs to be more transparent about its processes. It should allow regulators and the public to audit the company’s practices. Moreover, Facebook should share any innovative engineering solutions with the world, much as it currently shares its data center engineering.

Better security for collected user data. There have been numerous examples of attackers targeting cloud service platforms to gain access to user data. Facebook has a large and skilled product security team that says some of the right things. That team needs to be involved in the design trade-offs for features and not just review the near-final designs for flaws. Shutting down a feature based on internal security analysis would be a clear message.

Better data security so Facebook sees less. Facebook eavesdrops on almost every aspect of its users’ lives. On the other hand, WhatsApp—purchased by Facebook in 2014—provides users with end-to-end encrypted messaging. While Facebook knows who is messaging whom and how often, Facebook has no way of learning the contents of those messages. Recently, Facebook announced plans to combine WhatsApp, Facebook Messenger, and Instagram, extending WhatsApp’s security to the consolidated system. Changing course here would be a dramatic and negative signal.

Collecting less data from outside of Facebook. Facebook doesn’t just collect data about you when you’re on the platform. Because its “like” button is on so many other pages, the company can collect data about you when you’re not on Facebook. It even collects what it calls “shadow profiles”—data about you even if you’re not a Facebook user. This data is combined with other surveillance data the company buys, including health and financial data. Collecting and saving less of this data would be a strong indicator of a new direction for the company.

Better use of Facebook data to prevent violence. There is a trade-off between Facebook seeing less and Facebook doing more to prevent hateful and inflammatory speech. Dozens of people have been killed by mob violence because of fake news spread on WhatsApp. If Facebook were doing a convincing job of controlling fake news without end-to-end encryption, then we would expect to hear how it could use patterns in metadata to handle encrypted fake news.

How Facebook manages for privacy

Create a team measured on privacy and trust. Where companies spend their money tells you what matters to them. Facebook has a large and important growth team, but what team, if any, is responsible for privacy, not as a matter of compliance or pushing the rules, but for engineering? Transparency in how it is staffed relative to other teams would be telling.

Hire a senior executive responsible for trust. Facebook’s current team has been focused on growth and revenue. Its chief security officer, Alex Stamos, was not replaced when he left in 2018, which may indicate that having an advocate for security on the leadership team led to debate and disagreement. Retaining a voice for security and privacy issues at the executive level, before those issues affected users, was a good thing. Now that responsibility is diffuse. It’s unclear how Facebook measures and assesses its own progress and who might be held accountable for failings. Facebook can begin the process of fixing this by designating a senior executive who is responsible for trust.

Engage with regulators. Much of Facebook’s posturing seems to be an attempt to forestall regulation. Facebook sends lobbyists to Washington and other capitals, and until recently the company sent support staff to politicians’ offices. It has secret lobbying campaigns against privacy laws. And Facebook has repeatedly violated a 2011 Federal Trade Commission consent order regarding user privacy. Regulating big technical projects is not easy. Most of the people who understand how these systems work understand them because they build them. Societies will regulate Facebook, and the quality of that regulation requires real education of legislators and their staffs. While businesses often want to avoid regulation, any focus on privacy will require strong government oversight. If Facebook is serious about privacy being a real interest, it will accept both government regulation and community input.

User privacy is traditionally against Facebook’s core business interests. Advertising is its business model, and targeted ads sell better and more profitably—and that requires users to engage with the platform as much as possible. Increased pressure on Facebook to manage propaganda and hate speech could easily lead to more surveillance. But there is pressure in the other direction as well, as users equate privacy with increased control over how they present themselves on the platform.

We don’t expect Facebook to abandon its advertising business model, relent in its push for monopolistic dominance, or fundamentally alter its social networking platforms. But the company can give users important privacy protections and controls without abandoning surveillance capitalism. While some of these changes will reduce profits in the short term, we hope Facebook’s leadership realizes that they are in the best long-term interest of the company.

Facebook talks about community and bringing people together. These are admirable goals, and there’s plenty of value (and profit) in having a sustainable platform for connecting people. But as long as the most important measure of success is short-term profit, doing things that help strengthen communities will fall by the wayside. Surveillance, which allows individually targeted advertising, will be prioritized over user privacy. Outrage, which drives engagement, will be prioritized over feelings of belonging. And corporate secrecy, which allows Facebook to evade both regulators and its users, will be prioritized over societal oversight. If Facebook now truly believes that these latter options are critical to its long-term success as a company, we welcome the changes that are forthcoming.

This essay was co-authored with Adam Shostack, and originally appeared on Medium OneZero. We wrote a similar essay in 2002 about judging Microsoft’s then newfound commitment to security.

Posted on March 13, 2019 at 6:51 AM • 29 Comments

On Surveillance in the Workplace

Data & Society just published a report entitled “Workplace Monitoring & Surveillance”:

This explainer highlights four broad trends in employee monitoring and surveillance technologies:

  • Prediction and flagging tools that aim to predict characteristics or behaviors of employees or that are designed to identify or deter perceived rule-breaking or fraud. Touted as useful management tools, they can augment biased and discriminatory practices in workplace evaluations and segment workforces into risk categories based on patterns of behavior.
  • Biometric and health data of workers collected through tools like wearables, fitness tracking apps, and biometric timekeeping systems as a part of employer-provided health care programs, workplace wellness, and digital work-shift tracking tools. Tracking non-work-related activities and information, such as health data, may challenge the boundaries of worker privacy, open avenues for discrimination, and raise questions about consent and workers’ ability to opt out of tracking.
  • Remote monitoring and time-tracking used to manage workers and measure performance remotely. Companies may use these tools to decentralize and lower costs by hiring independent contractors, while still being able to exert control over them like traditional employees with the aid of remote monitoring tools. More advanced time-tracking can generate itemized records of on-the-job activities, which can be used to facilitate wage theft or allow employers to trim what counts as paid work time.
  • Gamification and algorithmic management of work activities through continuous data collection. Technology can take on management functions, such as sending workers automated “nudges” or adjusting performance benchmarks based on a worker’s real-time progress, while gamification renders work activities into competitive, game-like dynamics driven by performance metrics. However, these practices can create punitive work environments that place pressures on workers to meet demanding and shifting efficiency benchmarks.

In a blog post about this report, Cory Doctorow mentioned “the adoption curve for oppressive technology, which goes, ‘refugee, immigrant, prisoner, mental patient, children, welfare recipient, blue collar worker, white collar worker.'” I don’t agree with the ordering, but the sentiment is correct. These technologies are generally used first against people with diminished rights: prisoners, children, the mentally ill, and soldiers.

Posted on March 12, 2019 at 6:38 AM • 15 Comments

Russia Is Testing Online Voting

This is a bad idea:

A second innovation will allow “electronic absentee voting” within voters’ home precincts. In other words, Russia is set to introduce its first online voting system. The system will be tested in a Moscow neighborhood that will elect a single member to the capital’s city council in September. The details of how the experiment will work are not yet known; the State Duma’s proposal on Internet voting does not include logistical specifics. The Central Election Commission’s reference materials on the matter simply reference “absentee voting, blockchain technology.” When Dmitry Vyatkin, one of the bill’s co-sponsors, attempted to describe how exactly blockchains would be involved in the system, his explanation was entirely disconnected from the actual functions of that technology. A discussion of this new type of voting is planned for an upcoming public forum in Moscow.

Surely the Russians know that online voting is insecure. Could they not care, or do they think the surveillance is worth the risk?

Posted on March 11, 2019 at 6:54 AM • 39 Comments

Videos and Links from the Public-Interest Technology Track at the RSA Conference

Yesterday at the RSA Conference, I gave a keynote talk about the role of public-interest technologists in cybersecurity. (Video here.)

I also hosted a one-day mini-track on the topic. We had six panels, and they were all great. If you missed it live, we have videos:

  • How Public Interest Technologists are Changing the World: Matt Mitchell, Tactical Tech; Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; and J. Bob Alotta, Astraea Foundation (Moderator). (Video here.)
  • Public Interest Tech in Silicon Valley: Mitchell Baker, Chairwoman, Mozilla Corporation; Cindy Cohn, EFF; and Lucy Vasserman, Software Engineer, Google. (Video here.)
  • Working in Civil Society: Sarah Aoun, Digital Security Technologist; Peter Eckersley, Partnership on AI; Harlo Holmes, Director of Newsroom Digital Security, Freedom of the Press Foundation; and John Scott-Railton, Senior Researcher, Citizen Lab. (Video here.)
  • Government Needs You: Travis Moore, TechCongress; Hashim Mteuzi, Senior Manager, Network Talent Initiative, Code for America; Gigi Sohn, Distinguished Fellow, Georgetown Law Institute for Technology, Law and Policy; and Ashkan Soltani, Independent Consultant. (Video here.)
  • Changing Academia: Latanya Sweeney, Harvard; Deirdre Mulligan, UC Berkeley; and Danny Weitzner, MIT CSAIL. (Video here.)
  • The Future of Public Interest Tech: Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; Ben Wizner, ACLU; and Jenny Toomey, Director, Internet Freedom, Ford Foundation (Moderator). (Video here.)

I also conducted eight short video interviews with different people involved in public-interest technology: independent security technologist Sarah Aoun, TechCongress’s Travis Moore, Ford Foundation’s Jenny Toomey, Citizen Lab’s John Scott-Railton, Deirdre Mulligan from UC Berkeley, ACLU’s Jon Callas, Matt Mitchell of Tactical Tech, and Kelley Misata from Sightline Security.

Here is my blog post about the event. Here’s Ford Foundation’s blog post on why they helped me organize the event.

We got some good press coverage about the event. (Hey MeriTalk: you spelled my name wrong.)

Related: Here’s my longer essay on the need for public-interest technologists in Internet security, and my public-interest technology resources page.

And just so we have all the URLs in one place, here is a page from the RSA Conference website with links to all of the videos.

If you liked this mini-track, please rate it highly on your RSA Conference evaluation form. I’d like to do it again next year.

Posted on March 8, 2019 at 2:24 PM • 7 Comments

Cybersecurity Insurance Not Paying for NotPetya Losses

This will complicate things:

To complicate matters, having cyber insurance might not cover everyone’s losses. Zurich American Insurance Company refused to pay out a $100 million claim from Mondelez, saying that since the U.S. and other governments labeled the NotPetya attack as an action by the Russian military their claim was excluded under the “hostile or warlike action in time of peace or war” exemption.

I get that $100 million is real money, but the insurance industry needs to figure out how to properly insure commercial networks against this sort of thing.

Posted on March 8, 2019 at 5:57 AM • 21 Comments

Detecting Shoplifting Behavior

This system claims to detect suspicious behavior that indicates shoplifting:

Vaak, a Japanese startup, has developed artificial intelligence software that hunts for potential shoplifters, using footage from security cameras for fidgeting, restlessness and other potentially suspicious body language.

The article has no detail or analysis, so we don’t know how well it works. But this kind of thing is surely the future of video surveillance.

Posted on March 7, 2019 at 1:48 PM • 21 Comments

Cybersecurity for the Public Interest

The Crypto Wars have been raging off and on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption, to access devices and communications of terrorists and criminals. On the other side is almost every cryptographer and computer security expert, repeatedly explaining that there’s no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It’s an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism—as practiced by the Internet companies that are already spying on everyone—matters. So do society’s underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit to having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals—that occasionally become law—that are technological disasters.

This isn’t sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand—and are involved in—policy. We need public-interest technologists.

Let’s pause at that term. The Ford Foundation defines public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics recently wrote that public-interest technologists are people who “study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good.” Tim Berners-Lee has called them “philosophical engineers.” I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it’s not the best term—and I know not everyone likes it—but it’s a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved in not only the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world is critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn’t want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn’t new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it’s really not. There aren’t enough people doing it, there aren’t enough people who know it needs to be done, and there aren’t enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There’s a report titled A Pivotal Moment that includes this quote: “While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress.”

That quote speaks to the three places for intervention. One: the supply side. There just isn’t enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with “clinics” at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they’ve learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judiciary branches. President Obama’s US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges—places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

This essay previously appeared in the January/February issue of IEEE Security & Privacy.

Together with the Ford Foundation, I am hosting a one-day mini-track on public-interest technologists at the RSA Conference this week on Thursday. We’ve had some press coverage.

EDITED TO ADD (3/7): More news articles.

Posted on March 5, 2019 at 6:31 AM • 35 Comments

The Latest in Creepy Spyware

The Nest home alarm system shipped with a secret microphone, which—according to the company—was only an accidental secret:

On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”

Where are the consumer protection agencies? They should be all over this.

And while they’re figuring out which laws Google broke, they should also look at American Airlines. Turns out that some of their seats have built-in cameras:

American Airlines spokesperson Ross Feinstein confirmed to BuzzFeed News that cameras are present on some of the airlines’ in-flight entertainment systems, but said “they have never been activated, and American is not considering using them.” Feinstein added, “Cameras are a standard feature on many in-flight entertainment systems used by multiple airlines. Manufacturers of those systems have included cameras for possible future uses, such as hand gestures to control in-flight entertainment.”

That makes it all okay, doesn’t it?

Actually, I kind of understand the airline seat camera thing. My guess is that whoever designed the in-flight entertainment system just specced a standard tablet computer, and they all came with unnecessary features like cameras. This is how we end up with refrigerators with Internet connectivity and Roombas with microphones. It’s cheaper to leave the functionality in than it is to remove it.

Still, we need better disclosure laws.

Posted on March 4, 2019 at 6:04 AM • 33 Comments
