Blog: May 2019 Archives

Fraudulent Academic Papers

The term “fake news” has lost much of its meaning, but it describes a real and dangerous Internet trend. Because it’s hard for many people to differentiate a real news site from a fraudulent one, they can be hoodwinked by fictitious news stories pretending to be real. The result is that otherwise reasonable people believe lies.

The trends fostering fake news are more general, though, and we need to start thinking about how it could affect different areas of our lives. In particular, I worry about how it will affect academia. In addition to fake news, I worry about fake research.

An example of this seems to have happened recently in the cryptography field. SIMON is a block cipher designed by the National Security Agency (NSA) and made public in 2013. It’s a general design optimized for hardware implementation, with a variety of block sizes and key lengths. Academic cryptanalysts have been trying to break the cipher since then, with some pretty good results, although the NSA’s specified parameters are still immune to attack. Last week, a paper appeared on the International Association for Cryptologic Research (IACR) ePrint archive purporting to demonstrate a much more effective break of SIMON, one that would affect actual implementations. The paper was sufficiently weird, the authors sufficiently unknown, and the details of the attack sufficiently absent that the editors took it down a few days later. No harm done in the end.
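To give a sense of the structure cryptanalysts are attacking, here is a minimal sketch of a single SIMON-32 Feistel round in Python, following the round function in the NSA’s public design paper. The key schedule and the full 32-round encryption loop are omitted.

    def rotl16(x: int, r: int) -> int:
        # rotate a 16-bit word left by r bits
        return ((x << r) | (x >> (16 - r))) & 0xFFFF

    def simon32_round(x: int, y: int, round_key: int):
        # One Feistel round of SIMON-32/64 on 16-bit words:
        # f(x) = ((x <<< 1) & (x <<< 8)) ^ (x <<< 2)
        f = (rotl16(x, 1) & rotl16(x, 8)) ^ rotl16(x, 2)
        return y ^ f ^ round_key, x

Encrypting a block means iterating this round 32 times with scheduled keys; claimed attacks like the one in the withdrawn paper target reduced-round or full-round versions of exactly this structure.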

In recent years, there has been a push to speed up the process of disseminating research results. Instead of the laborious process of academic publication, researchers have turned to faster online publishing processes, preprint servers, and simply posting research results. The IACR ePrint archive is one of those alternatives. This has all sorts of benefits, but one of the casualties is the process of peer review. As flawed as that process is, it does help ensure the accuracy of results. (Of course, bad papers can still make it through the process. We’re still dealing with the aftermath of a flawed, and now retracted, Lancet paper linking vaccines with autism.)

Like the news business, academic publishing is subject to abuse. We can only speculate about the motivations of the three people who are listed as authors on the SIMON paper, but you can easily imagine better-executed and more nefarious scenarios. In a world of competitive research, one group might publish a fake result to throw other researchers off the trail. It might be a company trying to gain an advantage over a potential competitor, or even a country trying to gain an advantage over another country.

Reverting to a slower and more accurate system isn’t the answer; the world is just moving too fast for that. We need to recognize that fictitious research results can now easily be injected into our academic publication system, and tune our skepticism meters accordingly.

This essay previously appeared on Lawfare.com.

Posted on May 30, 2019 at 9:51 AM • 41 Comments

First American Financial Corp. Data Records Leak

Krebs on Security is reporting a massive data leak by the real estate title insurance company First American Financial Corp.

“The title insurance agency collects all kinds of documents from both the buyer and seller, including Social Security numbers, drivers licenses, account statements, and even internal corporate documents if you’re a small business. You give them all kinds of private information and you expect that to stay private.”

Shoval [the real estate developer who discovered the problem] shared a document link he’d been given by First American from a recent transaction, which referenced a record number that was nine digits long and dated April 2019. Modifying the document number in his link by numbers in either direction yielded other peoples’ records before or after the same date and time, indicating the document numbers may have been issued sequentially.

The earliest document number available on the site—000000075—referenced a real estate transaction from 2003. From there, the dates on the documents get closer to real time with each forward increment in the record number.

This is not an uncommon vulnerability: documents without security, just “protected” by a unique serial number that ends up being easily guessable.
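To make the flaw concrete, here is a minimal sketch of both the enumeration and the fix. The URL pattern and domain are hypothetical, not First American’s actual links.

    import secrets
    import requests  # third-party: pip install requests

    # Hypothetical URL pattern; the real links differed.
    BASE = "https://title-company.example.com/documents/{:09d}"

    def enumerate_neighbors(known_id: int, span: int = 5):
        # Walk sequential record numbers around one you were
        # legitimately given, exactly as described above.
        for doc_id in range(known_id - span, known_id + span + 1):
            url = BASE.format(doc_id)
            if requests.get(url).status_code == 200:
                print("readable:", url)

    def new_document_token() -> str:
        # The fix: reference documents by an unguessable capability
        # token instead of a counter; 32 random bytes can't be enumerated.
        return secrets.token_urlsafe(32)

Random tokens alone aren’t a substitute for real access control, but they at least remove the guessability that made this leak possible.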

Krebs has no evidence that anyone harvested all this data, but that’s not the point. The company said this in a statement: “At First American, security, privacy and confidentiality are of the highest priority and we are committed to protecting our customers’ information.” That’s obviously not true; security and privacy are probably pretty low priorities for the company. This is basic stuff, and companies like First American should be held liable for their poor security practices.

Posted on May 28, 2019 at 9:59 AM • 17 Comments

NSA Hawaii

Recently I’ve heard Edward Snowden talk about working at the NSA in Hawaii as being “under a pineapple field.” CBS News just ran a segment on that NSA listening post on Oahu.

Not a whole lot of actual information. “We’re in an office building, in a pineapple field, on Oahu….” And part of it is underground—we see a tunnel. We didn’t get to see any pineapples, though.

Posted on May 24, 2019 at 2:14 PM • 16 Comments

Germany Talking about Banning End-to-End Encryption

Der Spiegel is reporting that the German Ministry for Internal Affairs is planning to require all Internet message services to provide plaintext messages on demand, basically outlawing strong end-to-end encryption. Anyone not complying will be blocked, although the article doesn’t say how. (Cory Doctorow has previously explained why this would be impossible.)

The article is in German, and I would appreciate additional information from those who can speak the language.

EDITED TO ADD (6/2): Slashdot thread. This seems to be nothing more than political grandstanding: see this post from the Carnegie Endowment for International Peace.

Posted on May 24, 2019 at 8:39 AM • 46 Comments

Thangrycat: A Serious Cisco Vulnerability

Summary:

Thangrycat is caused by a series of hardware design flaws within Cisco’s Trust Anchor module. First commercially introduced in 2013, Cisco Trust Anchor module (TAm) is a proprietary hardware security module used in a wide range of Cisco products, including enterprise routers, switches and firewalls. TAm is the root of trust that underpins all other Cisco security and trustworthy computing mechanisms in these devices. Thangrycat allows an attacker to make persistent modification to the Trust Anchor module via FPGA bitstream modification, thereby defeating the secure boot process and invalidating Cisco’s chain of trust at its root. While the flaws are based in hardware, Thangrycat can be exploited remotely without any need for physical access. Since the flaws reside within the hardware design, it is unlikely that any software security patch will fully resolve the fundamental security vulnerability.

From a news article:

Thrangrycat is awful for two reasons. First, if a hacker exploits this weakness, they can do whatever they want to your routers. Second, the attack can happen remotely—it’s a software vulnerability. But the fix can only be applied at the hardware level. Like, physical router by physical router. In person. Yeesh.

That said, Thrangrycat only works once you have administrative access to the device. You need a two-step attack in order to get Thrangrycat working. Attack #1 gets you remote administrative access, Attack #2 is Thrangrycat. Attack #2 can’t happen without Attack #1. Cisco can protect you from Attack #1 by sending out a software update. If your I.T. people have your systems well secured and are applying updates and patches consistently and you’re not a regular target of nation-state actors, you’re relatively safe from Attack #1, and therefore, pretty safe from Thrangrycat.

Unfortunately, Attack #1 is a garden variety vulnerability. Many systems don’t even have administrative access configured correctly. There’s opportunity for Thrangrycat to be exploited.

And from Boing Boing:

Thangrycat relies on attackers being able to run processes as the system’s administrator, and Red Balloon, the security firm that disclosed the vulnerability, also revealed a defect that allows attackers to run code as admin.

It’s tempting to dismiss the attack on the trusted computing module as a ho-hum flourish: after all, once an attacker has root on your system, all bets are off. But the promise of trusted computing is that computers will be able to detect and undo this kind of compromise, by using a separate, isolated computer to investigate and report on the state of the main system (Huang and Snowden call this an introspection engine). Once this system is compromised, it can be forced to give false reports on the state of the system: for example, it might report that its OS has been successfully updated to patch a vulnerability when really the update has just been thrown away.

As Charlie Warzel and Sarah Jeong discuss in the New York Times, this is an attack that can be executed remotely, but can only be detected by someone physically in the presence of the affected system (and only then after a very careful inspection, and there may still be no way to do anything about it apart from replacing the system or at least the compromised component).
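To see why a compromised root of trust is so damaging, here is a toy secure-boot chain in Python. It is a sketch of the general idea, not of Cisco’s TAm: every downstream check depends on the root verifier telling the truth, so subverting it silently defeats the whole chain.

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    BOOTLOADER = b"bootloader code v1"

    # Hash the (toy) trust anchor is supposed to enforce at boot.
    TRUSTED_BOOTLOADER_HASH = sha256(BOOTLOADER)

    def root_of_trust_verify(image: bytes, compromised: bool = False) -> bool:
        if compromised:
            return True  # a tampered anchor vouches for anything
        return sha256(image) == TRUSTED_BOOTLOADER_HASH

    implant = b"bootloader code v1 + implant"
    print(root_of_trust_verify(implant))                    # False: boot halts
    print(root_of_trust_verify(implant, compromised=True))  # True: boot continues

The same logic explains the introspection-engine point above: a checker that has itself been modified can report whatever its attacker wants.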

Posted on May 23, 2019 at 11:52 AM • 27 Comments

Visiting the NSA

Yesterday, I visited the NSA. It was Cyber Command’s birthday, but that’s not why I was there. I visited as part of the Berklett Cybersecurity Project, run out of the Berkman Klein Center and funded by the Hewlett Foundation. (BERKman hewLETT—get it? We have a web page, but it’s badly out of date.)

It was a full day of meetings, all unclassified but under the Chatham House Rule. Gen. Nakasone welcomed us and took questions at the start. Various senior officials spoke with us on a variety of topics, but mostly focused on three areas:

  • Russian influence operations, both what the NSA and US Cyber Command did during the 2018 election and what they can do in the future;
  • China and the threats to critical infrastructure from untrusted computer hardware, both the 5G network and more broadly;
  • Machine learning, both how to ensure an ML system is compliant with all laws, and how ML can help with other compliance tasks.

It was all interesting. Those first two topics are ones that I am thinking and writing about, and it was good to hear their perspective. I find that I am much more closely aligned with the NSA about cybersecurity than I am about privacy, which made the meeting much less fraught than it would have been if we were discussing Section 702 of the FISA Amendments Act, Section 215 of the USA Freedom Act (up for renewal next year), or any 4th Amendment violations. I don’t think we’re past those issues by any means, but they make up less of what I am working on.

Posted on May 22, 2019 at 2:11 PM • 58 Comments

Fingerprinting iPhones

This clever attack allows a website to uniquely identify your phone when you visit it, based on data from the accelerometer, gyroscope, and magnetometer sensors.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Overall, our attack has the following advantages:

  • The attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you.
  • The attack takes less than one second to generate a fingerprint.
  • The attack can generate a globally unique fingerprint for iOS devices.
  • The calibration fingerprint never changes, even after a factory reset.
  • The attack provides an effective means to track you as you browse across the web and move between apps on your phone.

Following our disclosure, Apple has patched this vulnerability in iOS 12.2.
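Here is a minimal sketch of the underlying idea in Python. It is not the researchers’ actual method; it just illustrates how per-device calibration, recovered from quantized sensor readings, can be hashed into a stable identifier. The sample values are made up.

    import hashlib

    def estimate_step(samples):
        # The smallest nonzero gap between readings approximates one
        # calibrated quantization step (per-device gain times the
        # sensor's nominal step size).
        gaps = sorted({abs(a - b) for a in samples for b in samples if a != b})
        return gaps[0]

    def fingerprint(accel, gyro, mag):
        steps = [round(estimate_step(s), 9) for s in (accel, gyro, mag)]
        return hashlib.sha256(repr(steps).encode()).hexdigest()[:16]

    # Made-up readings from a motion-sensor API; the factory-set gain
    # makes each device's step size, and hence the hash, unique.
    print(fingerprint([0.0120, 0.0144, 0.0168],
                      [0.0031, 0.0062, 0.0093],
                      [0.0505, 0.0510, 0.0515]))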

Research paper.

Posted on May 22, 2019 at 6:24 AM • 23 Comments

How Technology and Politics Are Changing Spycraft

Interesting article about how traditional nation-based spycraft is changing. Basically, the Internet makes it increasingly difficult to generate a good cover story; cell phone and other electronic surveillance techniques make tracking people easier; and machine learning will make all of this automatic. Meanwhile, Western countries have new laws and norms that put them at a disadvantage relative to other countries. And finally, much of this has gone corporate.

Posted on May 21, 2019 at 6:19 AM • 22 Comments

The Concept of "Return on Data"

This law review article by Noam Kolt, titled “Return on Data,” proposes an interesting new way of thinking of privacy law.

Abstract: Consumers routinely supply personal data to technology companies in exchange for services. Yet, the relationship between the utility (U) consumers gain and the data (D) they supply—“return on data” (ROD)—remains largely unexplored. Expressed as a ratio, ROD = U / D. While lawmakers strongly advocate protecting consumer privacy, they tend to overlook ROD. Are the benefits of the services enjoyed by consumers, such as social networking and predictive search, commensurate with the value of the data extracted from them? How can consumers compare competing data-for-services deals? Currently, the legal frameworks regulating these transactions, including privacy law, aim primarily to protect personal data. They treat data protection as a standalone issue, distinct from the benefits which consumers receive. This article suggests that privacy concerns should not be viewed in isolation, but as part of ROD. Just as companies can quantify return on investment (ROI) to optimize investment decisions, consumers should be able to assess ROD in order to better spend and invest personal data. Making data-for-services transactions more transparent will enable consumers to evaluate the merits of these deals, negotiate their terms and make more informed decisions. Pivoting from the privacy paradigm to ROD will both incentivize data-driven service providers to offer consumers higher ROD, as well as create opportunities for new market entrants.

Posted on May 20, 2019 at 1:30 PM • 38 Comments

Why Are Cryptographers Being Denied Entry into the US?

In March, Adi Shamir—that’s the “S” in RSA—was denied a US visa to attend the RSA Conference. He’s Israeli.

This month, British citizen Ross Anderson couldn’t attend an awards ceremony in DC because of visa issues. (You can listen to his recorded acceptance speech.) I’ve heard of two other prominent cryptographers who are in the same boat. Is there some cryptographer blacklist? Is something else going on? A lot of us would like to know.

Posted on May 17, 2019 at 6:18 AM • 86 Comments

More Attacks against Computer Automatic Update Systems

Last month, Kaspersky discovered that Asus’s live update system was infected with malware, an operation it called Operation Shadowhammer. Now we learn that six other companies were targeted in the same operation.

As we mentioned before, ASUS was not the only company used by the attackers. Studying this case, our experts found other samples that used similar algorithms. As in the ASUS case, the samples were using digitally signed binaries from three other Asian vendors:

  • Electronics Extreme, authors of the zombie survival game called Infestation: Survivor Stories,
  • Innovative Extremist, a company that provides Web and IT infrastructure services but also used to work in game development,
  • Zepetto, the South Korean company that developed the video game Point Blank.

According to our researchers, the attackers either had access to the source code of the victims’ projects or they injected malware at the time of project compilation, meaning they were in the networks of those companies. And this reminds us of an attack that we reported on a year ago: the CCleaner incident.

Also, our experts identified three additional victims: another video gaming company, a conglomerate holding company and a pharmaceutical company, all in South Korea. For now we cannot share additional details about those victims, because we are in the process of notifying them about the attack.

Me on supply chain security.

EDITED TO ADD (6/12): Kaspersky’s expanded report.

Posted on May 16, 2019 at 1:34 PM • 5 Comments

Another Intel Chip Flaw

Remember the Spectre and Meltdown attacks from last year? They were a new class of attacks against complex CPUs, finding side channels in performance optimization techniques that allow hackers to steal information. Since their discovery, researchers have found additional similar vulnerabilities.
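Spectre-class attacks are microarchitectural and can’t be reproduced in a few lines of Python, but the underlying principle, that execution time can leak secret data, can be. Here is a toy example of a much simpler timing channel: an early-exit comparison whose running time reveals how much of a guess is correct. (In practice, timing noise means an attacker needs many trials per guess.)

    import time

    SECRET = b"s3cr3t-token"

    def insecure_compare(secret: bytes, guess: bytes) -> bool:
        # Returns at the first mismatching byte, so running time
        # grows with the length of the guess's correct prefix.
        if len(secret) != len(guess):
            return False
        for s, g in zip(secret, guess):
            if s != g:
                return False
        return True

    def time_guess(guess: bytes, trials: int = 100_000) -> float:
        start = time.perf_counter()
        for _ in range(trials):
            insecure_compare(SECRET, guess)
        return time.perf_counter() - start

    # A guess sharing a longer correct prefix takes measurably longer,
    # letting an attacker recover the secret one byte at a time.
    print(time_guess(b"x" * 12), time_guess(b"s3cr3t" + b"x" * 6))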

A whole bunch more have just been discovered.

I don’t think we’re finished yet. A year and a half ago I wrote: “But more are coming, and they’ll be worse. 2018 will be the year of microprocessor vulnerabilities, and it’s going to be a wild ride.” I think more are still coming.

EDITED TO ADD (6/13): A mathematical analysis of the problem that claims we’ll never completely fix this class of problems.

Posted on May 16, 2019 at 9:28 AM • 14 Comments

WhatsApp Vulnerability Fixed

WhatsApp fixed a devastating vulnerability that allowed someone to remotely hack a phone by initiating a WhatsApp voice call. The recipient didn’t even have to answer the call.

The Israeli cyber-arms manufacturer NSO Group is believed to be behind the exploit, but of course there is no definitive proof.

If you use WhatsApp, update your app immediately.

Posted on May 15, 2019 at 2:22 PM • 38 Comments

Cryptanalysis of SIMON-32/64

A weird paper was posted on the Cryptology ePrint Archive (working link is via the Wayback Machine), claiming an attack against the NSA-designed cipher SIMON. You can read some commentary about it here. Basically, the authors claimed an attack so devastating that they would only publish a zero-knowledge proof of their attack. Which they didn’t. Nor did they publish anything else of interest, near as I can tell.

The paper has since been deleted from the ePrint Archive, which feels like the correct decision on someone’s part.

Posted on May 14, 2019 at 6:11 AM • 18 Comments

Another NSA Leaker Identified and Charged

In 2015, the Intercept started publishing “The Drone Papers,” based on classified documents leaked by an unknown whistleblower. Today, someone who worked at the NSA, and then at the National Geospatial-Intelligence Agency, was charged with the crime. It is unclear how he was initially identified. It might have been this: “At the agency, prosecutors said, Mr. Hale printed 36 documents from his Top Secret computer.”

The article talks about evidence collected after he was identified and searched:

According to the indictment, in August 2014, Mr. Hale’s cellphone contact list included information for the reporter, and he possessed two thumb drives. One thumb drive contained a page marked “secret” from a classified document that Mr. Hale had printed in February 2014. Prosecutors said Mr. Hale had tried to delete the document from the thumb drive.

The other thumb drive contained Tor software and the Tails operating system, which were recommended by the reporter’s online news outlet in an article published on its website regarding how to anonymously leak documents.

Posted on May 9, 2019 at 3:17 PM • 31 Comments

Amazon Is Losing the War on Fraudulent Sellers

Excellent article on fraudulent seller tactics on Amazon.

The most prominent black hat companies for US Amazon sellers offer ways to manipulate Amazon’s ranking system to promote products, protect accounts from disciplinary actions, and crush competitors. Sometimes, these black hat companies bribe corporate Amazon employees to leak information from the company’s wiki pages and business reports, which they then resell to marketplace sellers for steep prices. One black hat company charges as much as $10,000 a month to help Amazon sellers appear at the top of product search results. Other tactics to promote sellers’ products include removing negative reviews from product pages and exploiting technical loopholes on Amazon’s site to lift products’ overall sales rankings.

[…]

AmzPandora’s services ranged from small tasks to more ambitious strategies to rank a product higher using Amazon’s algorithm. While it was online, it offered to ping internal contacts at Amazon for $500 to get information about why a seller’s account had been suspended, as well as advice on how to appeal the suspension. For $300, the company promised to remove an unspecified number of negative reviews on a listing within three to seven days, which would help increase the overall star rating for a product. For $1.50, the company offered a service to fool the algorithm into believing a product had been added to a shopper’s cart or wish list by writing a super URL. And for $1,200, an Amazon seller could purchase a “frequently bought together” spot on another marketplace product’s page that would appear for two weeks, which AmzPandora promised would lead to a 10% increase in sales.

This is a good article on the same problem from last year. (My blog post.)

Amazon has a real problem here, primarily because trust in the system is paramount to Amazon’s success. As much as they need to crack down on fraudulent sellers, they really want articles like these to not be written.

Slashdot thread. Boing Boing post.

Posted on May 9, 2019 at 5:58 AM • 22 Comments

Leaked NSA Hacking Tools

In 2016, a hacker group calling itself the Shadow Brokers released a trove of 2013 NSA hacking tools and related documents. Most people believe it is a front for the Russian government. Since then, the vulnerabilities and tools have been used by both governments and criminals, and have seriously called into question the NSA’s ability to secure its own cyberweapons.

Now we have learned that the Chinese used the tools fourteen months before the Shadow Brokers released them.

Does this mean that both the Chinese and the Russians stole the same set of NSA tools? Did the Russians steal them from the Chinese, who stole them from us? Did it work the other way? I don’t think anyone has any idea. But this certainly illustrates how dangerous it is for the NSA—or US Cyber Command—to hoard zero-day vulnerabilities.

EDITED TO ADD (5/16): Symantec report.

Posted on May 8, 2019 at 11:30 AM • 20 Comments

Malicious MS Office Macro Creator

Evil Clippy is a tool for creating malicious Microsoft Office macros:

At BlackHat Asia we released Evil Clippy, a tool which assists red teamers and security testers in creating malicious MS Office documents. Amongst others, Evil Clippy can hide VBA macros, stomp VBA code (via p-code) and confuse popular macro analysis tools. It runs on Linux, OSX and Windows.

The VBA stomping is the most powerful feature, because it gets around antivirus programs:

VBA stomping abuses a feature which is not officially documented: the undocumented PerformanceCache part of each module stream contains compiled pseudo-code (p-code) for the VBA engine. If the MS Office version specified in the _VBA_PROJECT stream matches the MS Office version of the host program (Word or Excel) then the VBA source code in the module stream is ignored and the p-code is executed instead.

In summary: if we know the version of MS Office of a target system (e.g. Office 2016, 32 bit), we can replace our malicious VBA source code with fake code, while the malicious code will still get executed via p-code. In the meantime, any tool analyzing the VBA source code (such as antivirus) is completely fooled.
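As a small illustration, here is a Python sketch that reads the version field the quote refers to, using the olefile library. It assumes a legacy OLE document (or a vbaProject.bin extracted from a modern .docm); per the MS-OVBA specification, the version sits at bytes 2–3 of the _VBA_PROJECT stream. Real stomping detection, as in tools like pcodedmp, compares the p-code disassembly against the decompressed VBA source.

    import olefile  # third-party: pip install olefile

    def vba_project_version(path: str):
        # Return the Office/VBA version recorded in the _VBA_PROJECT
        # stream (bytes 2-3, little-endian). Stomped p-code only runs
        # when this matches the victim's Office build.
        ole = olefile.OleFileIO(path)
        try:
            for stream in ole.listdir():
                if stream[-1] == "_VBA_PROJECT":
                    data = ole.openstream(stream).read()
                    return int.from_bytes(data[2:4], "little")
        finally:
            ole.close()
        return None

    print(hex(vba_project_version("suspicious.doc") or 0))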

Posted on May 8, 2019 at 6:03 AM • 14 Comments

First Physical Retaliation for a Cyberattack

Israel has acknowledged that its recent airstrikes against Hamas were a real-time response to an ongoing cyberattack. From Twitter:

CLEARED FOR RELEASE: We thwarted an attempted Hamas cyber offensive against Israeli targets. Following our successful cyber defensive operation, we targeted a building where the Hamas cyber operatives work.

HamasCyberHQ.exe has been removed. pic.twitter.com/AhgKjiOqS7

—Israel Defense Forces (@IDF) May 5, 2019

I expect this sort of thing to happen more often—not between major powers, but by larger countries against smaller ones. Cyberattacks are too much of a nation-state equalizer otherwise.

Another article.

EDITED TO ADD (5/7): Commentary.

Posted on May 6, 2019 at 4:09 PM • 48 Comments

Protecting Yourself from Identity Theft

I don’t have a lot of good news for you. The truth is there’s nothing we can do to protect our data from being stolen by cybercriminals and others.

Ten years ago, I could have given you all sorts of advice about using encryption, not sending information over email, securing your web connections, and a host of other things—but most of that doesn’t matter anymore. Today, your sensitive data is controlled by others, and there’s nothing you can personally do to affect its security.

I could give you advice like don’t stay at a hotel (the Marriott breach), don’t get a government clearance (the Office of Personnel Management hack), don’t store your photos online (Apple breach and others), don’t use email (many, many different breaches), and don’t have anything other than an anonymous cash-only relationship with anyone, ever (the Equifax breach). But that’s all ridiculous advice for anyone trying to live a normal life in the 21st century.

The reality is that your sensitive data has likely already been stolen, multiple times. Cybercriminals have your credit card information. They have your social security number and your mother’s maiden name. They have your address and phone number. They obtained the data by hacking any one of the hundreds of companies you entrust with the data—and you have no visibility into those companies’ security practices, and no recourse when they lose your data.

Given this, your best option is to turn your efforts toward trying to make sure that your data isn’t used against you. Enable two-factor authentication for all important accounts whenever possible. Don’t reuse passwords for anything important—and get a password manager to remember them all.

Do your best to disable the “secret questions” and other backup authentication mechanisms companies use when you forget your password—those are invariably insecure. Watch your credit reports and your bank accounts for suspicious activity. Set up credit freezes with the major credit bureaus. Be wary of email and phone calls you get from people purporting to be from companies you do business with.

Of course, it’s unlikely you will do a lot of this. Pretty much no one does. That’s because it’s annoying and inconvenient. This is the reality, though. The companies you do business with have no real incentive to secure your data. The best way for you to protect yourself is to change that incentive, which means agitating for government oversight of this space. This includes proscriptive regulations, more flexible security standards, liabilities, certification, licensing, and meaningful labeling. Once that happens, the market will step in and provide companies with the technologies they can use to secure your data.

This essay previously appeared in the Rochester Review, as part of an alumni forum that asked: “How do you best protect yourself from identity theft?”

Posted on May 6, 2019 at 7:08 AM • 45 Comments

Cybersecurity for the Public Interest

The Crypto Wars have been raging off and on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption, to access devices and communications of terrorists and criminals. On the other side is almost every cryptographer and computer security expert, repeatedly explaining that there’s no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It’s an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism—as practiced by the Internet companies that are already spying on everyone—matters. So do society’s underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit of having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals—some of which become law—that are technological disasters.

This isn’t sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand—and are involved in—policy. We need public-interest technologists.

Let’s pause at that term. The Ford Foundation defines public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics recently wrote that public-interest technologists are people who “study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good.” Tim Berners-Lee has called them “philosophical engineers.” I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it’s not the best term—and I know not everyone likes it—but it’s a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved in not only the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world is critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn’t want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn’t new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it’s really not. There aren’t enough people doing it, there aren’t enough people who know it needs to be done, and there aren’t enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There’s a report titled A Pivotal Moment that includes this quote: “While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress.”

That quote speaks to the three places for intervention. One: the supply side. There just isn’t enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with “clinics” at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they’ve learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judiciary branches. President Obama’s US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges­—places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

This essay previously appeared in the January/February 2019 issue of IEEE Security & Privacy. I maintain a public-interest tech resources page here.

Posted on May 3, 2019 at 4:33 AM • 35 Comments

Why Isn't GDPR Being Enforced?

Politico has a long article making the case that the lead GDPR regulator, Ireland, has too cozy a relationship with Silicon Valley tech companies to effectively regulate their privacy practices.

Despite its vows to beef up its threadbare regulatory apparatus, Ireland has a long history of catering to the very companies it is supposed to oversee, having wooed top Silicon Valley firms to the Emerald Isle with promises of low taxes, open access to top officials, and help securing funds to build glittering new headquarters.

Now, data-privacy experts and regulators in other countries alike are questioning Ireland’s commitment to policing imminent privacy concerns like Facebook’s reintroduction of facial recognition software and data sharing with its recently purchased subsidiary WhatsApp, and Google’s sharing of information across its burgeoning number of platforms.

EDITED TO ADD (5/13): Daragh O Brien, a regular critic of the DPC and who was quoted in the story, believes that he was misquoted, and that the article wasn’t entirely fair.

Posted on May 2, 2019 at 5:17 AM • 20 Comments

On Security Tokens

Mark Risher of Google extols the virtues of security keys:

I’ll say it again for the people in the back: with Security Keys, instead of the *user* needing to verify the site, the *site* has to prove itself to the key. Good security these days is about human factors; we have to take the onus off of the user as much as we can.

Furthermore, this “proof” from the site to the key is only permitted over close physical proximity (like USB, NFC, or Bluetooth). Unless the phisher is in the same room as the victim, they can’t gain access to the second factor.

This is why I keep using words like “transformative,” “revolutionary,” and “lit” (not so much anymore): SKs basically shrink your threat model from “anyone anywhere in the world who knows your password” to “people in the room with you right now.” Huge!
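Here is a toy model in Python of the origin binding Risher describes. It is not the real FIDO2/WebAuthn protocol, which uses public-key signatures rather than HMACs, but it shows why a response harvested on a lookalike domain is useless on the real one.

    import hashlib, hmac, secrets

    DEVICE_SECRET = secrets.token_bytes(32)  # never leaves the security key

    def key_respond(observed_origin: str, challenge: bytes) -> bytes:
        # The key mixes the origin it actually sees into every response.
        per_origin = hmac.new(DEVICE_SECRET, observed_origin.encode(),
                              hashlib.sha256).digest()
        return hmac.new(per_origin, challenge, hashlib.sha256).digest()

    def server_verify(challenge: bytes, response: bytes) -> bool:
        expected = key_respond("https://example.com", challenge)
        return hmac.compare_digest(response, expected)

    chal = secrets.token_bytes(16)
    print(server_verify(chal, key_respond("https://example.com", chal)))  # True
    print(server_verify(chal, key_respond("https://examp1e.com", chal)))  # False

The phishing site can relay the server’s challenge to the victim, but the key signs the origin it actually observes, so the relayed response never verifies.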

Cory Doctorow makes a critical point, that the system is only as good as its backup system:

I agree, but there’s an important caveat. Security keys usually have fallback mechanisms—some way to attach a new key to your account for when you lose or destroy your old key. These mechanisms may also rely on security keys, but chances are that they don’t (and somewhere down the line, there’s probably a fallback mechanism that uses SMS, or Google Authenticator, or an email confirmation loop, or a password, or an administrator who can be sweet talked by a social engineer).

So while traditional 2FA is really “something you know and something else you know, albeit only very recently,” security keys are “something you know and something you have, which someone else can have, if they know something you know.”

And just because there are vulnerabilities in cell phone-based two-factor authentication systems doesn’t mean that they are useless. They’re still much better than traditional password-only authentication systems.

Posted on May 1, 2019 at 6:14 AM • 42 Comments
