Crypto-Gram

June 15, 2019

by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. International Spy Museum Reopens
  2. WhatsApp Vulnerability Fixed
  3. Another Intel Chip Flaw
  4. More Attacks against Computer Automatic Update Systems
  5. Why Are Cryptographers Being Denied Entry into the US?
  6. The Concept of “Return on Data”
  7. How Technology and Politics Are Changing Spycraft
  8. Fingerprinting iPhones
  9. Visiting the NSA
  10. Thrangrycat: A Serious Cisco Vulnerability
  11. German SG-41 Encryption Machine Up for Auction
  12. Germany Talking about Banning End-to-End Encryption
  13. NSA Hawaii
  14. First American Financial Corp. Data Records Leak
  15. Alex Stamos on Content Moderation and Security
  16. Fraudulent Academic Papers
  17. The Human Cost of Cyberattacks
  18. The Importance of Protecting Cybersecurity Whistleblowers
  19. The Cost of Cybercrime
  20. Lessons Learned Trying to Secure Congressional Campaigns
  21. Chinese Military Wants to Develop Custom OS
  22. Security and Human Behavior (SHB) 2019
  23. iOS Shortcut for Recording the Police
  24. Employment Scam
  25. Workshop on the Economics of Information Security
  26. Rock-Paper-Scissors Robot
  27. Report on the Stalkerware Industry
  28. Video Surveillance by Computer
  29. Computers and Video Surveillance
  30. Upcoming Speaking Engagements

International Spy Museum Reopens

[2019.05.15] The International Spy Museum has reopened in Washington, DC.


WhatsApp Vulnerability Fixed

[2019.05.15] WhatsApp fixed a devastating vulnerability that allowed someone to remotely hack a phone by initiating a WhatsApp voice call. The recipient didn’t even have to answer the call.

The Israeli cyber-arms manufacturer NSO Group is believed to be behind the exploit, but of course there is no definitive proof.

If you use WhatsApp, update your app immediately.


Another Intel Chip Flaw

[2019.05.16] Remember the Spectre and Meltdown attacks from last year? They were a new class of attacks against complex CPUs, finding side channels in performance-optimization techniques such as speculative execution that allow hackers to steal information. Since their discovery, researchers have found additional similar vulnerabilities.

A whole bunch more have just been discovered.

I don’t think we’re finished yet. A year and a half ago I wrote: “But more are coming, and they’ll be worse. 2018 will be the year of microprocessor vulnerabilities, and it’s going to be a wild ride.” I think more are still coming.

EDITED TO ADD (6/13): A mathematical analysis of the problem that claims we’ll never completely fix this class of problems.


More Attacks against Computer Automatic Update Systems

[2019.05.16] Last month, Kaspersky discovered that Asus’s live update system was infected with malware, an operation it called Operation ShadowHammer. Now we learn that six other companies were targeted in the same operation.

As we mentioned before, ASUS was not the only company used by the attackers. Studying this case, our experts found other samples that used similar algorithms. As in the ASUS case, the samples were using digitally signed binaries from three other Asian vendors:

  • Electronics Extreme, authors of the zombie survival game called Infestation: Survivor Stories,
  • Innovative Extremist, a company that provides Web and IT infrastructure services but also used to work in game development,
  • Zepetto, the South Korean company that developed the video game Point Blank.

According to our researchers, the attackers either had access to the source code of the victims’ projects or they injected malware at the time of project compilation, meaning they were in the networks of those companies. And this reminds us of an attack that we reported on a year ago: the CCleaner incident.

Also, our experts identified three additional victims: another video gaming company, a conglomerate holding company and a pharmaceutical company, all in South Korea. For now we cannot share additional details about those victims, because we are in the process of notifying them about the attack.

Me on supply chain security.

EDITED TO ADD (6/12): Kaspersky’s expanded report.


Why Are Cryptographers Being Denied Entry into the US?

[2019.05.17] In March, Adi Shamir—that’s the “S” in RSA—was denied a US visa to attend the RSA Conference. He’s Israeli.

This month, British citizen Ross Anderson couldn’t attend an awards ceremony in DC because of visa issues. (You can listen to his recorded acceptance speech.) I’ve heard of two other prominent cryptographers who are in the same boat. Is there some cryptographer blacklist? Is something else going on? A lot of us would like to know.


The Concept of “Return on Data”

[2019.05.20] This law review article by Noam Kolt, titled “Return on Data,” proposes an interesting new way of thinking of privacy law.

Abstract: Consumers routinely supply personal data to technology companies in exchange for services. Yet, the relationship between the utility (U) consumers gain and the data (D) they supply—“return on data” (ROD)—remains largely unexplored. Expressed as a ratio, ROD = U / D. While lawmakers strongly advocate protecting consumer privacy, they tend to overlook ROD. Are the benefits of the services enjoyed by consumers, such as social networking and predictive search, commensurate with the value of the data extracted from them? How can consumers compare competing data-for-services deals? Currently, the legal frameworks regulating these transactions, including privacy law, aim primarily to protect personal data. They treat data protection as a standalone issue, distinct from the benefits which consumers receive. This article suggests that privacy concerns should not be viewed in isolation, but as part of ROD. Just as companies can quantify return on investment (ROI) to optimize investment decisions, consumers should be able to assess ROD in order to better spend and invest personal data. Making data-for-services transactions more transparent will enable consumers to evaluate the merits of these deals, negotiate their terms and make more informed decisions. Pivoting from the privacy paradigm to ROD will both incentivize data-driven service providers to offer consumers higher ROD, as well as create opportunities for new market entrants.
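
To make the ratio concrete (toy numbers of my own, not Kolt’s): if a consumer values a free service at $120 a year while handing over personal data worth $30 a year, ROD = 120 / 30 = 4. A competing service that delivered the same utility while extracting half the data would double that return, giving consumers a clean basis for comparing the two deals.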


How Technology and Politics Are Changing Spycraft

[2019.05.21] Interesting article about how traditional nation-based spycraft is changing. Basically, the Internet makes it increasingly difficult to generate a good cover story; cell phone and other electronic surveillance techniques make tracking people easier; and machine learning will make all of this automatic. Meanwhile, Western countries have new laws and norms that put them at a disadvantage relative to other countries. And finally, much of this has gone corporate.


Fingerprinting iPhones

[2019.05.22] This clever attack allows a website (or app) to uniquely identify a phone, based on data from its accelerometer, gyroscope, and magnetometer sensors.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Overall, our attack has the following advantages:

  • The attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you.
  • The attack takes less than one second to generate a fingerprint.
  • The attack can generate a globally unique fingerprint for iOS devices.
  • The calibration fingerprint never changes, even after a factory reset.
  • The attack provides an effective means to track you as you browse across the web and move between apps on your phone.

Following our disclosure, Apple has patched this vulnerability in iOS 12.2.

Research paper.
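
As a rough illustration of the core idea, here is a toy sketch in Python (my own model, not the authors’ code). It assumes that a sensor’s analog-to-digital converter emits integer counts that the OS multiplies by a device-specific calibration factor, so the readings a web page collects all fall on a grid whose spacing leaks that per-device factor:

    import numpy as np

    def estimate_scale_factor(samples, tol=1e-9):
        # Distinct readings lie on a grid of integer multiples of the
        # device-specific calibration factor; the smallest gap between
        # sorted unique values approximates that grid spacing.
        values = np.unique(samples)          # unique() also sorts
        gaps = np.diff(values)
        gaps = gaps[gaps > tol]              # drop floating-point duplicates
        return gaps.min()

    # Simulate a device with a made-up calibration constant and raw counts.
    rng = np.random.default_rng(0)
    true_scale = 0.0023942                   # hypothetical per-device value
    counts = rng.integers(-2000, 2000, size=500)
    print(estimate_scale_factor(counts * true_scale))   # recovers ~0.0023942

The paper reports that less than a second of sensor data is enough to derive a stable fingerprint.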


Visiting the NSA

[2019.05.22] Yesterday, I visited the NSA. It was Cyber Command’s birthday, but that’s not why I was there. I visited as part of the Berklett Cybersecurity Project, run out of the Berkman Klein Center and funded by the Hewlett Foundation. (BERKman hewLETT—get it? We have a web page, but it’s badly out of date.)

It was a full day of meetings, all unclassified but under the Chatham House Rule. Gen. Nakasone welcomed us and took questions at the start. Various senior officials spoke with us on a variety of topics, but mostly focused on three areas:

  • Russian influence operations, both what the NSA and US Cyber Command did during the 2018 election and what they can do in the future;
  • China and the threats to critical infrastructure from untrusted computer hardware, both the 5G network and more broadly;
  • Machine learning, both how to ensure an ML system is compliant with all laws, and how ML can help with other compliance tasks.

It was all interesting. Those first two topics are ones that I am thinking and writing about, and it was good to hear their perspective. I find that I am much more closely aligned with the NSA about cybersecurity than I am about privacy, which made the meeting much less fraught than it would have been if we had been discussing Section 702 of the FISA Amendments Act, Section 215 of the USA Freedom Act (up for renewal next year), or any Fourth Amendment violations. I don’t think we’re past those issues by any means, but they make up less of what I am working on.


Thrangrycat: A Serious Cisco Vulnerability

[2019.05.23] Summary:

Thrangrycat is caused by a series of hardware design flaws within Cisco’s Trust Anchor module. First commercially introduced in 2013, Cisco Trust Anchor module (TAm) is a proprietary hardware security module used in a wide range of Cisco products, including enterprise routers, switches and firewalls. TAm is the root of trust that underpins all other Cisco security and trustworthy computing mechanisms in these devices. Thrangrycat allows an attacker to make persistent modifications to the Trust Anchor module via FPGA bitstream modification, thereby defeating the secure boot process and invalidating Cisco’s chain of trust at its root. While the flaws are based in hardware, Thrangrycat can be exploited remotely without any need for physical access. Since the flaws reside within the hardware design, it is unlikely that any software security patch will fully resolve the fundamental security vulnerability.

From a news article:

Thrangrycat is awful for two reasons. First, if a hacker exploits this weakness, they can do whatever they want to your routers. Second, the attack can happen remotely because it’s a software vulnerability. But the fix can only be applied at the hardware level. Like, physical router by physical router. In person. Yeesh.

That said, Thrangrycat only works once you have administrative access to the device. You need a two-step attack in order to get Thrangrycat working. Attack #1 gets you remote administrative access; Attack #2 is Thrangrycat. Attack #2 can’t happen without Attack #1. Cisco can protect you from Attack #1 by sending out a software update. If your I.T. people have your systems well secured and are applying updates and patches consistently, and you’re not a regular target of nation-state actors, you’re relatively safe from Attack #1, and therefore pretty safe from Thrangrycat.

Unfortunately, Attack #1 is a garden-variety vulnerability. Many systems don’t even have administrative access configured correctly. There’s ample opportunity for Thrangrycat to be exploited.

And from Boing Boing:

Thrangrycat relies on attackers being able to run processes as the system’s administrator, and Red Balloon, the security firm that disclosed the vulnerability, also revealed a defect that allows attackers to run code as admin.

It’s tempting to dismiss the attack on the trusted computing module as a ho-hum flourish: after all, once an attacker has root on your system, all bets are off. But the promise of trusted computing is that computers will be able to detect and undo this kind of compromise, by using a separate, isolated computer to investigate and report on the state of the main system (Huang and Snowden call this an introspection engine). Once this system is compromised, it can be forced to give false reports on the state of the system: for example, it might report that its OS has been successfully updated to patch a vulnerability when really the update has just been thrown away.

As Charlie Warzel and Sarah Jeong discuss in the New York Times, this is an attack that can be executed remotely, but can only be detected by someone physically in the presence of the affected system (and only then after a very careful inspection, and there may still be no way to do anything about it apart from replacing the system or at least the compromised component).


German SG-41 Encryption Machine Up for Auction

[2019.05.23] A German auction house is selling an SG-41. It looks beautiful. Starting price is 75,000 euros. My guess is that it will sell for around 100K euros.

EDITED TO ADD (6/13): It sold for 98K euros.


Germany Talking about Banning End-to-End Encryption

[2019.05.24] Der Spiegel is reporting that the German Ministry for Internal Affairs is planning to require all Internet message services to provide plaintext messages on demand, basically outlawing strong end-to-end encryption. Anyone not complying will be blocked, although the article doesn’t say how. (Cory Doctorow has previously explained why this would be impossible.)

The article is in German, and I would appreciate additional information from those who can speak the language.

EDITED TO ADD (6/2): Slashdot thread. This seems to be nothing more than political grandstanding: see this post from the Carnegie Endowment for International Peace.


NSA Hawaii

[2019.05.24] I’ve recently heard Edward Snowden describe his time working at the NSA in Hawaii as being “under a pineapple field.” CBS News ran a segment on that NSA listening post on Oahu.

Not a whole lot of actual information. “We’re in an office building, in a pineapple field, on Oahu....” And part of it is underground—we see a tunnel. We didn’t get to see any pineapples, though.


First American Financial Corp. Data Records Leak

[2019.05.28] Krebs on Security is reporting a massive data leak by the real estate title insurance company First American Financial Corp.

“The title insurance agency collects all kinds of documents from both the buyer and seller, including Social Security numbers, drivers licenses, account statements, and even internal corporate documents if you’re a small business. You give them all kinds of private information and you expect that to stay private.”

Shoval shared a document link he’d been given by First American from a recent transaction, which referenced a record number that was nine digits long and dated April 2019. Modifying the document number in his link by numbers in either direction yielded other people’s records before or after the same date and time, indicating the document numbers may have been issued sequentially.

The earliest document number available on the site—000000075—referenced a real estate transaction from 2003. From there, the dates on the documents get closer to real time with each forward increment in the record number.

This is not an uncommon vulnerability: documents without security, just “protected” by a unique serial number that ends up being easily guessable.
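
For illustration, here is the difference between that pattern and an unguessable one, as a minimal Python sketch (the function names are hypothetical, not First American’s actual system). Note that random tokens only stop enumeration; a real fix also requires checking that the requester is authorized to view each document:

    import secrets

    # Vulnerable pattern: nine-digit sequential record numbers. Anyone
    # holding one valid link can walk the archive by adding or subtracting 1.
    def next_sequential_id(last_id):
        return f"{last_id + 1:09d}"

    # Safer pattern: a long random token per document. Guessing a valid
    # neighbor means searching a space of 2^256 possibilities.
    def new_document_token():
        return secrets.token_urlsafe(32)     # 32 bytes = 256 random bits

    print(next_sequential_id(75))            # '000000076': trivially guessable
    print(new_document_token())              # unguessable, e.g. 'pXk3...'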

Krebs has no evidence that anyone harvested all this data, but that’s not the point. The company said this in a statement: “At First American, security, privacy and confidentiality are of the highest priority and we are committed to protecting our customers’ information.” That’s obviously not true; security and privacy are probably pretty low priorities for the company. This is basic stuff, and companies like First American should be held liable for their poor security practices.


Alex Stamos on Content Moderation and Security

[2019.05.29] Really interesting talk by former Facebook CISO Alex Stamos about the problems inherent in content moderation by social media platforms. Well worth watching.


Fraudulent Academic Papers

[2019.05.30] The term “fake news” has lost much of its meaning, but it describes a real and dangerous Internet trend. Because it’s hard for many people to differentiate a real news site from a fraudulent one, they can be hoodwinked by fictitious news stories pretending to be real. The result is that otherwise reasonable people believe lies.

The trends fostering fake news are more general, though, and we need to start thinking about how they could affect different areas of our lives. In particular, I worry about how they will affect academia. In addition to fake news, I worry about fake research.

An example of this seems to have happened recently in the cryptography field. SIMON is a block cipher designed by the National Security Agency (NSA) and made public in 2013. It’s a general design optimized for hardware implementation, with a variety of block sizes and key lengths. Academic cryptanalysts have been trying to break the cipher since then, with some pretty good results, although the NSA’s specified parameters are still immune to attack. Last week, a paper appeared on the International Association for Cryptologic Research (IACR) ePrint archive purporting to demonstrate a much more effective break of SIMON, one that would affect actual implementations. The paper was sufficiently weird, the authors sufficiently unknown and the details of the attack sufficiently absent, that the editors took it down a few days later. No harm done in the end.

In recent years, there has been a push to speed up the process of disseminating research results. Instead of the laborious process of academic publication, researchers have turned to faster online publishing processes, preprint servers, and simply posting research results. The IACR ePrint archive is one of those alternatives. This has all sorts of benefits, but one of the casualties is the process of peer review. As flawed as that process is, it does help ensure the accuracy of results. (Of course, bad papers can still make it through the process. We’re still dealing with the aftermath of a flawed, and now retracted, Lancet paper linking vaccines with autism.)

Like the news business, academic publishing is subject to abuse. We can only speculate about the motivations of the three people who are listed as authors on the SIMON paper, but you can easily imagine better-executed and more nefarious scenarios. In a world of competitive research, one group might publish a fake result to throw other researchers off the trail. It might be a company trying to gain an advantage over a potential competitor, or even a country trying to gain an advantage over another country.

Reverting to a slower and more accurate system isn’t the answer; the world is just moving too fast for that. We need to recognize that fictitious research results can now easily be injected into our academic publication system, and tune our skepticism meters accordingly.

This essay previously appeared on Lawfare.com.


The Human Cost of Cyberattacks

[2019.05.31] The International Committee of the Red Cross has just published a report: “The Potential Human Cost of Cyber-Operations.” It’s the result of an “ICRC Expert Meeting” from last year, but was published this week.

Here’s a shorter blog post if you don’t want to read the whole thing. And commentary by one of the authors.


The Importance of Protecting Cybersecurity Whistleblowers

[2019.06.03] Interesting essay arguing that we need better legislation to protect cybersecurity whistleblowers.

Congress should act to protect cybersecurity whistleblowers because information security has never been so important, or so challenging. In the wake of a barrage of shocking revelations about data breaches and companies’ mishandling of customer data, a bipartisan consensus has emerged in support of legislation to give consumers more control over their personal information, require companies to disclose how they collect and use consumer data, and impose penalties for data breaches and misuse of consumer data. The Federal Trade Commission (“FTC”) has been held out as the best agency to implement this new regulation. But for any such legislation to be effective, it must protect the courageous whistleblowers who risk their careers to expose data breaches and unauthorized use of consumers’ private data.

Whistleblowers strengthen regulatory regimes, and cybersecurity regulation would be no exception. Republican and Democratic leaders from the executive and legislative branches have extolled the virtues of whistleblowers. High-profile cases abound. Recently, Christopher Wylie exposed Cambridge Analytica’s misuse of Facebook user data to manipulate voters, including its apparent theft of data from 50 million Facebook users as part of a psychological profiling campaign. Though additional research is needed, the existing empirical data reinforces the consensus that whistleblowers help prevent, detect, and remedy misconduct. Therefore it is reasonable to conclude that protecting and incentivizing whistleblowers could help the government address the many complex challenges facing our nation’s information systems.


The Cost of Cybercrime

[2019.06.04] Really interesting paper calculating the worldwide cost of cybercrime:

Abstract: In 2012 we presented the first systematic study of the costs of cybercrime. In this paper, we report what has changed in the seven years since. The period has seen major platform evolution, with the mobile phone replacing the PC and laptop as the consumer terminal of choice, with Android replacing Windows, and with many services moving to the cloud. The use of social networks has become extremely widespread. The executive summary is that about half of all property crime, by volume and by value, is now online. We hypothesised in 2012 that this might be so; it is now established by multiple victimisation studies. Many cybercrime patterns appear to be fairly stable, but there are some interesting changes. Payment fraud, for example, has more than doubled in value but has fallen slightly as a proportion of payment value; the payment system has simply become bigger, and slightly more efficient. Several new cybercrimes are significant enough to mention, including business email compromise and crimes involving cryptocurrencies. The move to the cloud means that system misconfiguration may now be responsible for as many breaches as phishing. Some companies have suffered large losses as a side-effect of denial-of-service worms released by state actors, such as NotPetya; we have to take a view on whether they count as cybercrime. The infrastructure supporting cybercrime, such as botnets, continues to evolve, and specific crimes such as premium-rate phone scams have evolved some interesting variants. The overall picture is the same as in 2012: traditional offences that are now technically ‘computer crimes’ such as tax and welfare fraud cost the typical citizen in the low hundreds of Euros/dollars a year; payment frauds and similar offences, where the modus operandi has been completely changed by computers, cost in the tens; while the new computer crimes cost in the tens of cents. Defending against the platforms used to support the latter two types of crime cost citizens in the tens of dollars. Our conclusions remain broadly the same as in 2012: it would be economically rational to spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) and more on response. We are particularly bad at prosecuting criminals who operate infrastructure that other wrongdoers exploit. Given the growing realisation among policymakers that crime hasn’t been falling over the past decade, merely moving online, we might reasonably hope for better funded and coordinated law-enforcement action.

Richard Clayton gave a presentation on this yesterday at WEIS. His final slide contained a summary.

  • Payment fraud is up, but credit card sales are up even more—so we’re winning.
  • Cryptocurrencies are enabling new scams, but the big money is still being lost in more traditional investment fraud.
  • Telecom fraud is down, basically because Skype is free.
  • Fake anti-virus scams have almost disappeared, but tech support scams are growing very rapidly.
  • The big money is still in tax fraud, welfare fraud, VAT fraud, and so on.
  • We spend more money on cyber defense than we do on the actual losses.
  • Criminals largely act with impunity. They don’t believe they will get caught, and mostly that’s correct.

Bottom line: the technology has changed a lot since 2012, but the economic considerations remain unchanged.


Lessons Learned Trying to Secure Congressional Campaigns

[2019.06.05] Really interesting first-hand experience from Maciej Cegłowski.


Chinese Military Wants to Develop Custom OS

[2019.06.06] Citing security concerns, the Chinese military wants to replace Windows with its own custom operating system:

Thanks to the Snowden, Shadow Brokers, and Vault7 leaks, Beijing officials are well aware of the US’ hefty arsenal of hacking tools, available for anything from smart TVs to Linux servers, and from routers to common desktop operating systems, such as Windows and Mac.

Since these leaks have revealed that the US can hack into almost anything, the Chinese government’s plan is to adopt a “security by obscurity” approach and run a custom operating system that will make it harder for foreign threat actors—mainly the US—to spy on Chinese military operations.

It’s unclear exactly how custom this new OS will be. It could be a Linux variant, like North Korea’s Red Star OS. Or it could be something completely new. Normally, I would be highly skeptical of a country being able to write and field its own custom operating system, but China is one of the few that is large enough to actually be able to do it. So I’m just moderately skeptical.

EDITED TO ADD (6/12): Russia also wants to develop its own flavor of Linux.


Security and Human Behavior (SHB) 2019

[2019.06.06] Today is the second day of the twelfth Workshop on Security and Human Behavior, which I am hosting at Harvard University.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions—all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks—remotely, because he was denied a visa earlier this year.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, and eleventh SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.


iOS Shortcut for Recording the Police

[2019.06.07] “Hey Siri; I’m getting pulled over” can be a shortcut:

Once the shortcut is installed and configured, you just have to say, for example, “Hey Siri, I’m getting pulled over.” Then the program pauses music you may be playing, turns down the brightness on the iPhone, and turns on “do not disturb” mode.

It also sends a quick text to a predetermined contact to tell them you’ve been pulled over, and it starts recording using the iPhone’s front-facing camera. Once you’ve stopped recording, it can text or email the video to a different predetermined contact and save it to Dropbox.


Employment Scam

[2019.06.10] Interesting story of an old-school remote-deposit capture fraud scam, wrapped up in a fake employment scam.

Slashdot thread.


Workshop on the Economics of Information Security

[2019.06.11] Last week, I hosted the eighteenth Workshop on the Economics of Information Security at Harvard. Ross Anderson liveblogged the talks.


Rock-Paper-Scissors Robot

[2019.06.12] How in the world did I not know about this for three years?

Researchers at the University of Tokyo have developed a robot that always wins at rock-paper-scissors. It watches the human player’s hand, figures out which finger position the human is about to deploy, and reacts quickly enough to always win.

EDITED TO ADD (6/13): Seems like this is even older—from 2013.


Report on the Stalkerware Industry

[2019.06.13] Citizen Lab just published an excellent report on the stalkerware industry.

Boing Boing post.


Video Surveillance by Computer

[2019.06.14] The ACLU’s Jay Stanley has just published a fantastic report: “The Dawn of Robot Surveillance” (blog post here). Basically, it lays out a future of ubiquitous video cameras watched by increasingly sophisticated video analytics software, and discusses the potential harms to society.

I’m not going to excerpt a piece, because you really need to read the whole thing.


Computers and Video Surveillance

[2019.06.14] It used to be that surveillance cameras were passive. Maybe they just recorded, and no one looked at the video unless they needed to. Maybe a bored guard watched a dozen different screens, scanning for something interesting. In either case, the video was only stored for a few days because storage was expensive.

Increasingly, none of that is true. Recent developments in video analytics—fueled by artificial intelligence techniques like machine learning—enable computers to watch and understand surveillance videos with human-like discernment. Identification technologies make it easier to automatically figure out who is in the videos. And finally, the cameras themselves have become cheaper, more ubiquitous, and much better; cameras mounted on drones can effectively watch an entire city. Computers can watch all the video without human issues like distraction, fatigue, training, or needing to be paid. The result is a level of surveillance that was impossible just a few years ago.

An ACLU report published Thursday called “The Dawn of Robot Surveillance” says AI-aided video surveillance “won’t just record us, but will also make judgments about us based on their understanding of our actions, emotions, skin color, clothing, voice, and more. These automated ‘video analytics’ technologies threaten to fundamentally change the nature of surveillance.”

Let’s take the technologies one at a time. First: video analytics. Computers are getting better at recognizing what’s going on in a video. Detecting when a person or vehicle enters a forbidden area is easy. Modern systems can raise an alarm when someone is walking in the wrong direction—going in through an exit-only corridor, for example. They can count people or cars. They can detect when luggage is left unattended, or when previously unattended luggage is picked up and removed. They can detect when someone is loitering in an area, is lying down, or is running. Increasingly, they can detect particular actions by people. Amazon’s cashier-less stores rely on video analytics to figure out when someone picks an item off a shelf and doesn’t put it back.
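
To make that concrete, here is a minimal sketch of the simplest analytic described above, flagging motion inside a forbidden area. It assumes OpenCV, a hypothetical video file, and arbitrary thresholds; commercial systems are far more sophisticated:

    import cv2
    import numpy as np

    # Hypothetical forbidden area, as a polygon in pixel coordinates.
    FORBIDDEN_ZONE = np.array(
        [[100, 100], [400, 100], [400, 300], [100, 300]], dtype=np.int32)

    cap = cv2.VideoCapture("camera_feed.mp4")    # hypothetical input file
    bg = cv2.createBackgroundSubtractorMOG2()    # learns the static scene

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                   # white pixels = movement
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(
            mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 500:         # ignore small noise
                continue
            x, y, w, h = cv2.boundingRect(c)
            center = (float(x + w / 2), float(y + h / 2))
            # >= 0 means the object's center is inside (or on) the polygon
            if cv2.pointPolygonTest(FORBIDDEN_ZONE, center, False) >= 0:
                print("ALERT: motion inside forbidden zone")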

More than identifying actions, video analytics allow computers to understand what’s going on in a video: They can flag people based on their clothing or behavior, identify people’s emotions through body language and behavior, and find people who are acting “unusual” based on everyone else around them. Those same Amazon in-store cameras can analyze customer sentiment. Other systems can describe what’s happening in a video scene.

Computers can also identify people. AIs are getting better at recognizing the people in those videos. Facial recognition technology is improving all the time, made easier by the enormous stockpile of tagged photographs we give to Facebook and other social media sites, and the photos governments collect in the process of issuing ID cards and driver’s licenses. The technology already exists to automatically identify everyone a camera “sees” in real time. Even without video identification, we can be identified by the unique information continuously broadcast by the smartphones we carry with us everywhere, or by our laptops or Bluetooth-connected devices. Police have been tracking phones for years, and this practice can now be combined with video analytics.

Once a monitoring system identifies people, their data can be combined with other data, either collected or purchased: from cell phone records, GPS surveillance history, purchasing data, and so on. Social media companies like Facebook have spent years learning about our personalities and beliefs by what we post, comment on, and “like.” This is “data inference,” and when combined with video it offers a powerful window into people’s behaviors and motivations.

Camera resolution is also improving. Gigapixel cameras are so good that they can capture individual faces and identify license plates in photos taken miles away. “Wide-area surveillance” cameras can be mounted on airplanes and drones, and can operate continuously. On the ground, cameras can be hidden in street lights and other everyday objects. In space, satellite cameras have also dramatically improved.

Data storage has become incredibly cheap, and cloud storage makes it all so easy. Video data can easily be saved for years, allowing computers to conduct all of this surveillance backwards in time.

In democratic countries, such surveillance is marketed as crime prevention—or counterterrorism. In countries like China, it is blatantly used to suppress political activity and for social control. In all instances, it’s being implemented without a lot of public debate by law-enforcement agencies and by corporations in public spaces they control.

This is bad, because ubiquitous surveillance will drastically change our relationship to society. We’ve never lived in this sort of world, even those of us who have lived through previous totalitarian regimes. The effects will be felt in many different areas. False positives—when the surveillance system gets it wrong—will lead to harassment and worse. Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change. A recent ACLU report discusses these harms in more depth. While it’s possible that some of this surveillance is worth the trade-offs, we as a society need to deliberately and intelligently make decisions about it.

Some jurisdictions are starting to take notice. Last month, San Francisco became the first city to ban the use of facial recognition technology by police and other government agencies. Similar bans are being considered in Somerville, MA, and Oakland, CA. But these are exceptions, limited to the more liberal areas of the country.

We often believe that technological change is inevitable, and that there’s nothing we can do to stop it—or even to steer it. That’s simply not true. We’re led to believe this because we don’t often see it, understand it, or have a say in how or when it is deployed. The problem is that the underlying technologies, including cameras, sensors, machine learning, and artificial intelligence, are complex and specialized.

Laws like the one just passed in San Francisco won’t stop the development of these technologies, but they’re not intended to. They’re intended as pauses, so our policymaking can catch up with technology. As a general rule, the US government tends to ignore technologies as they’re being developed and deployed, so as not to stifle innovation. But as the rate of technological change increases, so do the unanticipated effects on our lives. Just as we’ve been surprised by the threats to democracy caused by surveillance capitalism, AI-enabled video surveillance will have similarly surprising effects. Maybe a pause in our headlong deployment of these technologies will allow us the time to discuss what kind of society we want to live in, and then enact rules to bring that kind of society about.

This essay previously appeared on Vice Motherboard.


Upcoming Speaking Engagements

[2019.06.14] This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM or IBM Security.

Copyright © 2019 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.