Blog: June 2019 Archives

Friday Squid Blogging: Fantastic Video of a Juvenile Giant Squid

It’s amazing:

Then, about 20 hours into the recording from the Medusa’s fifth deployment, Dr. Robinson saw the sharp points of tentacles sneaking into the camera’s view. “My heart felt like exploding,” he said on Thursday, over a shaky phone connection from the ship’s bridge.

At first, the animal stayed on the edge of the screen, suggesting that a squid was stalking the LED bait, pacing alongside it.

And then, through the drifting marine snow, the entire creature emerged from the center of the dark screen: a long, undulating animal that suddenly opened into a mass of twisting arms and tentacles. Two reached out and made a grab for the lure.

For a long moment, the squid seemed to explore the strange non-jellyfish in puzzlement. And then it was gone, shooting back into the dark.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on June 28, 2019 at 4:11 PM • 75 Comments

I'm Leaving IBM

Today is my last day at IBM.

If you’ve been following along, IBM bought my startup Resilient Systems in Spring 2016. Since then, I have been with IBM, holding the nicely ambiguous title of “Special Advisor.” As of the end of the month, I will be back on my own.

I will continue to write and speak, and do the occasional consulting job. I will continue to teach at the Harvard Kennedy School. I will continue to serve on boards for organizations I believe in: EFF, Access Now, Tor, EPIC, Verified Voting. And I will increasingly be an advocate for public-interest technology.

Posted on June 28, 2019 at 2:04 PM • 39 Comments

Spanish Soccer League App Spies on Fans

The Spanish Soccer League’s smartphone app spies on fans in order to find bars that are illegally streaming its games. The app listens with the microphone for the broadcasts, and then uses geolocation to figure out where the phone is.

The Spanish data protection agency has ordered the league to stop doing this. Not because it’s creepy spying, but because the terms of service—which no one reads anyway—weren’t clear.

Posted on June 27, 2019 at 6:41 AM • 11 Comments

MongoDB Offers Field Level Encryption

MongoDB now has the ability to encrypt data by field:

MongoDB calls the new feature Field Level Encryption. It works kind of like end-to-end encrypted messaging, which scrambles data as it moves across the internet, revealing it only to the sender and the recipient. In such a “client-side” encryption scheme, databases utilizing Field Level Encryption will not only require a system login, but will additionally require specific keys to process and decrypt specific chunks of data locally on a user’s device as needed. That means MongoDB itself and cloud providers won’t be able to access customer data, and a database’s administrators or remote managers don’t need to have access to everything either.

For regular users, not much will be visibly different. If their credentials are stolen and they aren’t using multifactor authentication, an attacker will still be able to access everything the victim could. But the new feature is meant to eliminate single points of failure. With Field Level Encryption in place, a hacker who steals an administrative username and password, or finds a software vulnerability that gives them system access, still won’t be able to use these holes to access readable data.
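The core idea is that encryption and decryption happen in the client, so the database server only ever stores and returns ciphertext. Here is a minimal Python sketch of that client-side pattern; it is an illustration of the concept, not MongoDB's actual Field Level Encryption API, and the document and field names are made up.

```python
# Illustrative sketch of client-side field encryption (not MongoDB's real API).
# The key stays with the application; the server only ever sees ciphertext.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in practice, fetched from a KMS, never stored with the data
cipher = Fernet(key)

def encrypt_fields(doc, fields):
    """Return a copy of doc with the named fields encrypted client-side."""
    out = dict(doc)
    for f in fields:
        out[f] = cipher.encrypt(str(doc[f]).encode())
    return out

def decrypt_field(doc, field):
    return cipher.decrypt(doc[field]).decode()

record = {"name": "Alice", "ssn": "123-45-6789"}   # hypothetical document
stored = encrypt_fields(record, ["ssn"])           # what the database would receive
print(stored["ssn"])                               # ciphertext only
print(decrypt_field(stored, "ssn"))                # readable only with the client-held key
```

An attacker who steals the database or its administrative credentials gets only the encrypted values, which is the property the excerpt above describes.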

Posted on June 26, 2019 at 1:03 PM • 22 Comments

iPhone Apps Surreptitiously Communicated with Unknown Servers

Long news article (alternate source) on iPhone privacy, specifically the enormous amount of data your apps are collecting without your knowledge. A lot of this happens in the middle of the night, when you’re probably not otherwise using your phone:

IPhone apps I discovered tracking me by passing information to third parties—just while I was asleep—include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic.

Posted on June 25, 2019 at 6:35 AM • 39 Comments

Backdoor Built into Android Firmware

In 2017, some Android phones came with a backdoor pre-installed:

Criminals in 2017 managed to get an advanced backdoor preinstalled on Android devices before they left the factories of manufacturers, Google researchers confirmed on Thursday.

Triada first came to light in 2016 in articles published by Kaspersky here and here, the first of which said the malware was “one of the most advanced mobile Trojans” the security firm’s analysts had ever encountered. Once installed, Triada’s chief purpose was to install apps that could be used to send spam and display ads. It employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and the means to modify the Android OS’ all-powerful Zygote process. That meant the malware could directly tamper with every installed app. Triada also connected to no fewer than 17 command and control servers.

In July 2017, security firm Dr. Web reported that its researchers had found Triada built into the firmware of several Android devices, including the Leagoo M5 Plus, Leagoo M8, Nomu S10, and Nomu S20. The attackers used the backdoor to surreptitiously download and install modules. Because the backdoor was embedded into one of the OS libraries and located in the system section, it couldn’t be deleted using standard methods, the report said.

On Thursday, Google confirmed the Dr. Web report, although it stopped short of naming the manufacturers. Thursday’s report also said the supply chain attack was pulled off by one or more partners the manufacturers used in preparing the final firmware image used in the affected devices.

This is a supply chain attack. It seems to be the work of criminals, but it could just as easily have been a nation-state.

Posted on June 21, 2019 at 11:42 AM • 23 Comments

Fake News and Pandemics

When the next pandemic strikes, we’ll be fighting it on two fronts. The first is the one you immediately think about: understanding the disease, researching a cure and inoculating the population. The second is new, and one you might not have thought much about: fighting the deluge of rumors, misinformation and flat-out lies that will appear on the internet.

The second battle will be like the Russian disinformation campaigns during the 2016 presidential election, only with the addition of a deadly health crisis and possibly without a malicious government actor. But while the two problems—misinformation affecting democracy and misinformation affecting public health—will have similar solutions, the latter is much less political. If we work to solve the pandemic disinformation problem, any solutions are likely to also be applicable to the democracy one.

Pandemics are part of our future. They might be like the 1968 Hong Kong flu, which killed a million people, or the 1918 Spanish flu, which killed over 40 million. Yes, modern medicine makes pandemics less likely and less deadly. But global travel and trade, increased population density, decreased wildlife habitats, and increased animal farming to satisfy a growing and more affluent population have made them more likely. Experts agree that it’s not a matter of if—it’s only a matter of when.

When the next pandemic strikes, accurate information will be just as important as effective treatments. We saw this in 2014, when the Nigerian government managed to contain a subcontinent-wide Ebola epidemic to just 20 infections and eight fatalities. Part of that success was because of the ways officials communicated health information to all Nigerians, using government-sponsored videos, social media campaigns and international experts. Without that, the death toll in Lagos, a city of 21 million people, would have probably been greater than the 11,000 the rest of the continent experienced.

There’s every reason to expect misinformation to be rampant during a pandemic. In the early hours and days, information will be scant and rumors will abound. Most of us are not health professionals or scientists. We won’t be able to tell fact from fiction. Even worse, we’ll be scared. Our brains work differently when we are scared, and they latch on to whatever makes us feel safer—even if it’s not true.

Rumors and misinformation could easily overwhelm legitimate news channels, as people share tweets, images and videos. Much of it will be well-intentioned but wrong—like the misinformation spread by the anti-vaccination community today—but some of it may be malicious. In the 1980s, the KGB ran a sophisticated disinformation campaign—Operation Infektion—to spread the rumor that HIV/AIDS was a result of an American biological weapon gone awry. It’s reasonable to assume some group or country would deliberately spread lies in an attempt to increase death and chaos.

It’s not just misinformation about which treatments work (and are safe), and which treatments don’t work (and are unsafe). Misinformation can affect society’s ability to deal with a pandemic at many different levels. Right now, Ebola relief efforts in the Democratic Republic of Congo are being stymied by mistrust of health workers and government officials.

It doesn’t take much to imagine how this can lead to disaster. Jay Walker, curator of the TEDMED conferences, laid out some of the possibilities in a 2016 essay: people overwhelming and even looting pharmacies trying to get some drug that is irrelevant or nonexistent, people needlessly fleeing cities and leaving them paralyzed, health workers not showing up for work, truck drivers and other essential people being afraid to enter infected areas, official sites like CDC.gov being hacked and discredited. This kind of thing can magnify the health effects of a pandemic many times over, and in extreme cases could lead to a total societal collapse.

This is going to be something that government health organizations, medical professionals, social media companies and the traditional media are going to have to work out together. There isn’t any single solution; it will require many different interventions that will all need to work together. The interventions will look a lot like what we’re already talking about with regard to government-run and other information influence campaigns that target our democratic processes: methods of visibly identifying false stories, the identification and deletion of fake posts and accounts, ways to promote official and accurate news, and so on. At the scale these are needed, they will have to be done automatically and in real time.

Since the 2016 presidential election, we have been talking about propaganda campaigns, and about how social media amplifies fake news and allows damaging messages to spread easily. It’s a hard discussion to have in today’s hyperpolarized political climate. After any election, the winning side has every incentive to downplay the role of fake news.

But pandemics are different; there’s no political constituency in favor of people dying because of misinformation. Google doesn’t want the results of people’s well-intentioned searches to lead to fatalities. Facebook and Twitter don’t want people on their platforms sharing misinformation that will result in either individual or mass deaths. Focusing on pandemics gives us an apolitical way to collectively approach the general problem of misinformation and fake news. And any solutions for pandemics are likely to also be applicable to the more general—and more political—problems.

Pandemics are inevitable. Bioterror is already possible, and will only get easier as the requisite technologies become cheaper and more common. We’re experiencing the largest measles outbreak in 25 years thanks to the anti-vaccination movement, which has hijacked social media to amplify its messages; we seem unable to beat back the disinformation and pseudoscience surrounding the vaccine. Those same forces will dramatically increase death and social upheaval in the event of a pandemic.

Let the Russian propaganda attacks on the 2016 election serve as a wake-up call for this and other threats. We need to solve the problem of misinformation during pandemics together—governments and industries in collaboration with medical officials, all across the world—before there’s a crisis. And the solutions will also help us shore up our democracy in the process.

This essay previously appeared in the New York Times.

Posted on June 21, 2019 at 5:10 AM • 22 Comments

How Apple's "Find My" Feature Works

Matthew Green intelligently speculates about how Apple’s new “Find My” feature works.

If you haven’t already been inspired by the description above, let me phrase the question you ought to be asking: how is this system going to avoid being a massive privacy nightmare?

Let me count the concerns:

  • If your device is constantly emitting a BLE signal that uniquely identifies it, the whole world is going to have (yet another) way to track you. Marketers already use WiFi and Bluetooth MAC addresses to do this: Find My could create yet another tracking channel.
  • It also exposes the phones who are doing the tracking. These people are now going to be sending their current location to Apple (which they may or may not already be doing). Now they’ll also be potentially sharing this information with strangers who “lose” their devices. That could go badly.
  • Scammers might also run active attacks in which they fake the location of your device. While this seems unlikely, people will always surprise you.

The good news is that Apple claims that their system actually does provide strong privacy, and that it accomplishes this using clever cryptography. But as is typical, they’ve declined to give out the details of how they’re going to do it. Andy Greenberg talked me through an incomplete technical description that Apple provided to Wired, so that provides many hints. Unfortunately, what Apple provided still leaves huge gaps. It’s into those gaps that I’m going to fill in my best guess for what Apple is actually doing.
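To make Green’s question concrete, here is a rough Python sketch of the general shape such a system could take: the lost device broadcasts a public key, a passing finder encrypts its location to that key, and only the owner holding the matching private key can read the report. This is a guess at the overall pattern, not Apple’s actual protocol, and every name in it is hypothetical.

```python
# Sketch of an "encrypt location to the lost device's broadcast key" scheme.
# Not Apple's protocol; just an illustration of how a finder could report a
# location without the server (or the finder) being able to read it later.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Owner's device: generates a key pair and broadcasts the public key over BLE.
owner_priv = X25519PrivateKey.generate()
broadcast_pub = owner_priv.public_key()

def finder_report(broadcast_pub, location: bytes):
    """A passing phone encrypts its location to the broadcast public key."""
    eph_priv = X25519PrivateKey.generate()
    shared = eph_priv.exchange(broadcast_pub)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"find-sketch").derive(shared)
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, location, None)
    return eph_priv.public_key(), nonce, ct   # uploaded to the server, unreadable by it

def owner_decrypt(owner_priv, eph_pub, nonce, ct) -> bytes:
    shared = owner_priv.exchange(eph_pub)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"find-sketch").derive(shared)
    return AESGCM(key).decrypt(nonce, ct, None)

report = finder_report(broadcast_pub, b"42.36,-71.06")
print(owner_decrypt(owner_priv, *report))
```

Rotating the broadcast key frequently is what would address the tracking concern in the first bullet above; this sketch omits that, along with everything else Apple would have to get right.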

Posted on June 20, 2019 at 12:27 PM • 19 Comments

Hacking Hardware Security Modules

Security researchers Gabriel Campana and Jean-Baptiste Bédrune are giving a hardware security module (HSM) talk at BlackHat in August:

This highly technical presentation targets an HSM manufactured by a vendor whose solutions are usually found in major banks and large cloud service providers. It will demonstrate several attack paths, some of them allowing unauthenticated attackers to take full control of the HSM. The presented attacks allow retrieving all HSM secrets remotely, including cryptographic keys and administrator credentials. Finally, we exploit a cryptographic bug in the firmware signature verification to upload a modified firmware to the HSM. This firmware includes a persistent backdoor that survives a firmware update.

They have an academic paper in French, and a presentation of the work. Here’s a summary in English.

There were plenty of technical challenges to solve along the way, in what was clearly a thorough and professional piece of vulnerability research:

  1. They started by using legitimate SDK access to their test HSM to upload a firmware module that would give them a shell inside the HSM. Note that this SDK access was used to discover the attacks, but is not necessary to exploit them.
  2. They then used the shell to run a fuzzer on the internal implementation of PKCS#11 commands to find reliable, exploitable buffer overflows.
  3. They checked they could exploit these buffer overflows from outside the HSM, i.e., by just calling the PKCS#11 driver from the host machine.
  4. They then wrote a payload that would override access control and, via another issue in the HSM, allow them to upload arbitrary (unsigned) firmware. It’s important to note that this backdoor is persistent—a subsequent update will not fix it.
  5. They then wrote a module that would dump all the HSM secrets, and uploaded it to the HSM.
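The fuzzing step (item 2 in the list) is conceptually simple: feed malformed inputs to each command handler and watch for crashes. Here is a heavily simplified Python sketch of that loop; the command codes are hypothetical and the target function is a stub standing in for the real PKCS#11 interface, which the researchers reached through their shell inside the HSM.

```python
# Minimal mutation-fuzzing sketch. The target function is a stub standing in
# for a real PKCS#11 command handler; the point is the loop structure:
# mutate, send, watch for crashes, log the crashing input for triage.
import random

def call_pkcs11_command(opcode: int, payload: bytes) -> None:
    """Stub for the command interface the researchers fuzzed inside the HSM."""
    # A toy "bug": overlong payloads for one opcode raise, like a memory fault.
    if opcode == 0x42 and len(payload) > 1024:
        raise MemoryError("simulated buffer overflow")

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        choice = random.random()
        if choice < 0.4 and data:        # flip a byte
            data[random.randrange(len(data))] ^= random.randrange(256)
        elif choice < 0.8:               # grow the buffer
            data += bytes(random.randrange(256) for _ in range(random.randint(1, 512)))
        elif data:                       # truncate
            del data[random.randrange(len(data)):]
    return bytes(data)

crashes = []
seed = b"\x00" * 64
for i in range(20000):
    opcode = random.choice([0x10, 0x23, 0x42, 0x77])   # hypothetical command codes
    payload = mutate(seed)
    try:
        call_pkcs11_command(opcode, payload)
    except Exception as exc:             # a crash candidate worth triaging
        crashes.append((opcode, len(payload), repr(exc)))

print(f"{len(crashes)} crashing inputs found")
```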

Posted on June 20, 2019 at 6:56 AM • 17 Comments

Risks of Password Managers

Stuart Schechter writes about the security risks of using a password manager. It’s a good piece, and nicely discusses the trade-offs around password managers: which one to choose, which passwords to store in it, and so on.

My own Password Safe is mentioned. My particular choices about security and risk are to store passwords only on my computer—not on my phone—and not to put anything in the cloud. In my way of thinking, that reduces the risks of a password manager considerably. Yes, there are losses in convenience.

Posted on June 19, 2019 at 1:26 PM • 45 Comments

Maciej Cegłowski on Privacy in the Information Age

Maciej Cegłowski has a really good essay explaining how to think about privacy today:

For the purposes of this essay, I’ll call it “ambient privacy”—the understanding that there is value in having our everyday interactions with one another remain outside the reach of monitoring, and that the small details of our daily lives should pass by unremembered. What we do at home, work, church, school, or in our leisure time does not belong in a permanent record. Not every conversation needs to be a deposition.

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale.

Ambient privacy is not a property of people, or of their data, but of the world around us. Just like you can’t drop out of the oil economy by refusing to drive a car, you can’t opt out of the surveillance economy by forswearing technology (and for many people, that choice is not an option). While there may be worthy reasons to take your life off the grid, the infrastructure will go up around you whether you use it or not.

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent.

Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

He’s not the first person to talk about privacy as a societal property, or to use pollution metaphors. But his framing is really cogent. And “ambient privacy” is new—and a good phrasing.

Posted on June 19, 2019 at 5:21 AM • 14 Comments

Data, Surveillance, and the AI Arms Race

According to foreign policy experts and the defense establishment, the United States is caught in an artificial intelligence arms race with China—one with serious implications for national security. The conventional version of this story suggests that the United States is at a disadvantage because of self-imposed restraints on the collection of data and the privacy of its citizens, while China, an unrestrained surveillance state, is at an advantage. In this vision, the data that China collects will be fed into its systems, leading to more powerful AI with capabilities we can only imagine today. Since Western countries can’t or won’t reap such a comprehensive harvest of data from their citizens, China will win the AI arms race and dominate the next century.

This idea makes for a compelling narrative, especially for those trying to justify surveillance—whether government- or corporate-run. But it ignores some fundamental realities about how AI works and how AI research is conducted.

Thanks to advances in machine learning, AI has flipped from theoretical to practical in recent years, and successes dominate public understanding of how it works. Machine learning systems can now diagnose pneumonia from X-rays, play the games of go and poker, and read human lips, all better than humans. They’re increasingly watching surveillance video. They are at the core of self-driving car technology and are playing roles in both intelligence-gathering and military operations. These systems monitor our networks to detect intrusions and look for spam and malware in our email.

And it’s true that there are differences in the way each country collects data. The United States pioneered “surveillance capitalism,” to use the Harvard University professor Shoshana Zuboff’s term, where data about the population is collected by hundreds of large and small companies for corporate advantage—and mutually shared or sold for profit. The state picks up on that data, in cases such as the Centers for Disease Control and Prevention’s use of Google search data to map epidemics and evidence shared by alleged criminals on Facebook, but it isn’t the primary user.

China, on the other hand, is far more centralized. Internet companies collect the same sort of data, but it is shared with the government, combined with government-collected data, and used for social control. Every Chinese citizen has a national ID number that is demanded by most services and allows data to easily be tied together. In the western region of Xinjiang, ubiquitous surveillance is used to oppress the Uighur ethnic minority—although at this point there is still a lot of human labor making it all work. Everyone expects that this is a test bed for the entire country.

Data is increasingly becoming a part of control for the Chinese government. While many of these plans are aspirational at the moment—there isn’t, as some have claimed, a single “social credit score,” but instead future plans to link up a wide variety of systems—data collection is universally pushed as essential to the future of Chinese AI. One executive at search firm Baidu predicted that the country’s connected population will provide them with the raw data necessary to become the world’s preeminent tech power. China’s official goal is to become the world AI leader by 2030, aided in part by all of this massive data collection and correlation.

This all sounds impressive, but turning massive databases into AI capabilities doesn’t match technological reality. Current machine learning techniques aren’t all that sophisticated. All modern AI systems follow the same basic methods. Using lots of computing power, different machine learning models are tried, altered, and tried again. These systems use a large amount of data (the training set) and an evaluation function to distinguish between those models and variations that work well and those that work less well. After trying a lot of models and variations, the system picks the one that works best. This iterative improvement continues even after the system has been fielded and is in use.
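As a deliberately tiny illustration of that loop, the sketch below fits a few model variations on the same training set and keeps whichever one the evaluation function scores highest on held-out data. It uses scikit-learn and synthetic data purely to keep the example short; it is not meant to represent any production system.

```python
# Tiny illustration of the "try variations, keep what the evaluation function
# likes best" loop described above. Models and data are stand-ins.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.3, random_state=0)

# The "variations": different models and hyperparameters.
candidates = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=50, random_state=0),
    RandomForestClassifier(n_estimators=200, max_depth=8, random_state=0),
]

# The evaluation function: accuracy on data the model hasn't seen.
best_model, best_score = None, -1.0
for model in candidates:
    model.fit(X_train, y_train)
    score = model.score(X_eval, y_eval)
    if score > best_score:
        best_model, best_score = model, score

print(type(best_model).__name__, round(best_score, 3))
```

The essay’s point is that the evaluation function and the quality of the data matter more than raw volume; making the synthetic dataset here ten times larger would not fix a bad evaluation function.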

So, for example, a deep learning system trying to do facial recognition will have multiple layers (hence the notion of “deep”) trying to do different parts of the facial recognition task. One layer will try to find features in the raw data of a picture that will help find a face, such as changes in color that will indicate an edge. The next layer might try to combine these lower layers into features like shapes, looking for round shapes inside of ovals that indicate eyes on a face. The different layers will try different features and will be compared by the evaluation function until the one that is able to give the best results is found, in a process that is only slightly more refined than trial and error.
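For readers who want to see what those layers look like in code, here is a minimal structural sketch in PyTorch (one of the open frameworks mentioned later in the essay). The layer sizes are arbitrary and the network is untrained, so it only shows the shape of the pipeline, not a working face recognizer.

```python
# Structural sketch of a small "deep" network: early convolutional layers pick
# up low-level features like edges, later layers combine them into shapes,
# and the final layers map those features to an identity score. Untrained.
import torch
import torch.nn as nn

face_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features (edges, color changes)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combinations of edges (curves, ovals)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128),                 # higher-level face features
    nn.ReLU(),
    nn.Linear(128, 10),                           # scores over 10 hypothetical identities
)

dummy_faces = torch.randn(4, 3, 64, 64)           # a batch of 4 fake 64x64 RGB images
print(face_net(dummy_faces).shape)                # torch.Size([4, 10])
```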

Large data sets are essential to making this work, but that doesn’t mean that more data is automatically better or that the system with the most data is automatically the best system. Train a facial recognition algorithm on a set that contains only faces of white men, and the algorithm will have trouble with any other kind of face. Use an evaluation function that is based on historical decisions, and any past bias is learned by the algorithm. For example, mortgage loan algorithms trained on historic decisions of human loan officers have been found to implement redlining. Similarly, hiring algorithms trained on historical data manifest the same sexism that human hiring staff have often shown. Scientists are constantly learning about how to train machine learning systems, and while throwing a large amount of data and computing power at the problem can work, more subtle techniques are often more successful. All data isn’t created equal, and for effective machine learning, data has to be both relevant and diverse in the right ways.

Future research advances in machine learning are focused on two areas. The first is in enhancing how these systems distinguish between variations of an algorithm. As different versions of an algorithm are run over the training data, there needs to be some way of deciding which version is “better.” These evaluation functions need to balance the recognition of an improvement with not over-fitting to the particular training data. Getting functions that can automatically and accurately distinguish between two algorithms based on minor differences in the outputs is an art form that no amount of data can improve.

The second is in the machine learning algorithms themselves. While much of machine learning depends on trying different variations of an algorithm on large amounts of data to see which is most successful, the initial formulation of the algorithm is still vitally important. The way the algorithms interact, the types of variations attempted, and the mechanisms used to test and redirect the algorithms are all areas of active research. (An overview of some of this work can be found here; even trying to limit the research to 20 papers oversimplifies the work being done in the field.) None of these problems can be solved by throwing more data at the problem.

The British AI company DeepMind’s success in teaching a computer to play the Chinese board game go is illustrative. Its AlphaGo computer program became a grandmaster in two steps. First, it was fed some enormous number of human-played games. Then the program played itself an enormous number of times, improving its own play along the way. In 2016, AlphaGo beat the grandmaster Lee Sedol four games to one.

While the training data in this case, the human-played games, was valuable, even more important was the machine learning algorithm used and the function that evaluated the relative merits of different game positions. Just one year later, DeepMind was back with a follow-on system: AlphaZero. This go-playing computer dispensed entirely with the human-played games and just learned by playing against itself over and over again. It plays like an alien. (It also became a grandmaster in chess and shogi.)

These are abstract games, so it makes sense that a more abstract training process works well. But even something as visceral as facial recognition needs more than just a huge database of identified faces in order to work successfully. It needs the ability to separate a face from the background in a two-dimensional photo or video and to recognize the same face in spite of changes in angle, lighting, or shadows. Just adding more data may help, but not nearly as much as added research into what to do with the data once we have it.

Meanwhile, foreign-policy and defense experts are talking about AI as if it were the next nuclear arms race, with the country that figures it out best or first becoming the dominant superpower for the next century. But that didn’t happen with nuclear weapons, despite research only being conducted by governments and in secret. It certainly won’t happen with AI, no matter how much data different nations or companies scoop up.

It is true that China is investing a lot of money into artificial intelligence research: The Chinese government believes this will allow it to leapfrog other countries (and companies in those countries) and become a major force in this new and transformative area of computing—and it may be right. On the other hand, much of this seems to be a wasteful boondoggle. Slapping “AI” on pretty much anything is how to get funding. The Chinese Ministry of Education, for instance, promises to produce “50 world-class AI textbooks,” with no explanation of what that means.

In the democratic world, the government is neither the leading researcher nor the leading consumer of AI technologies. AI research is much more decentralized and academic, and it is conducted primarily in the public eye. Research teams keep their training data and models proprietary but freely publish their machine learning algorithms. If you wanted to work on machine learning right now, you could download Microsoft’s Cognitive Toolkit, Google’s TensorFlow, or Facebook’s PyTorch. These aren’t toy systems; these are the state-of-the-art machine learning platforms.

AI is not analogous to the big science projects of the previous century that brought us the atom bomb and the moon landing. AI is a science that can be conducted by many different groups with a variety of different resources, making it closer to computer design than the space race or nuclear competition. It doesn’t take a massive government-funded lab for AI research, nor the secrecy of the Manhattan Project. The research conducted in the open science literature will trump research done in secret because of the benefits of collaboration and the free exchange of ideas.

While the United States should certainly increase funding for AI research, it should continue to treat it as an open scientific endeavor. Surveillance is not justified by the needs of machine learning, and real progress in AI doesn’t need it.

This essay was written with Jim Waldo, and previously appeared in Foreign Policy.

Posted on June 17, 2019 at 5:52 AM • 37 Comments

Computers and Video Surveillance

It used to be that surveillance cameras were passive. Maybe they just recorded, and no one looked at the video unless they needed to. Maybe a bored guard watched a dozen different screens, scanning for something interesting. In either case, the video was only stored for a few days because storage was expensive.

Increasingly, none of that is true. Recent developments in video analytics—fueled by artificial intelligence techniques like machine learning—enable computers to watch and understand surveillance videos with human-like discernment. Identification technologies make it easier to automatically figure out who is in the videos. And finally, the cameras themselves have become cheaper, more ubiquitous, and much better; cameras mounted on drones can effectively watch an entire city. Computers can watch all the video without human issues like distraction, fatigue, training, or needing to be paid. The result is a level of surveillance that was impossible just a few years ago.

An ACLU report published Thursday called “The Dawn of Robot Surveillance” says AI-aided video surveillance “won’t just record us, but will also make judgments about us based on their understanding of our actions, emotions, skin color, clothing, voice, and more. These automated ‘video analytics’ technologies threaten to fundamentally change the nature of surveillance.”

Let’s take the technologies one at a time. First: video analytics. Computers are getting better at recognizing what’s going on in a video. Detecting when a person or vehicle enters a forbidden area is easy. Modern systems can raise an alarm when someone is walking in the wrong direction—going in through an exit-only corridor, for example. They can count people or cars. They can detect when luggage is left unattended, or when previously unattended luggage is picked up and removed. They can detect when someone is loitering in an area, is lying down, or is running. Increasingly, they can detect particular actions by people. Amazon’s cashier-less stores rely on video analytics to figure out when someone picks an item off a shelf and doesn’t put it back.
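The forbidden-area example really is easy: subtract the background, find the moving blobs, and check whether any blob overlaps a predefined zone. The sketch below shows that structure with OpenCV; the video file and zone coordinates are placeholders, and real analytics products are far more sophisticated.

```python
# Minimal "object enters restricted zone" detector: background subtraction,
# contour detection, and a rectangle-overlap test. The file name and zone
# coordinates are hypothetical placeholders. Assumes the OpenCV 4 API.
import cv2

RESTRICTED = (200, 100, 150, 150)          # x, y, width, height of the no-go zone

def overlaps(box, zone):
    x, y, w, h = box
    zx, zy, zw, zh = zone
    return x < zx + zw and zx < x + w and y < zy + zh and zy < y + h

cap = cv2.VideoCapture("camera_feed.mp4")  # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2()

frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:       # ignore small noise blobs
            continue
        if overlaps(cv2.boundingRect(c), RESTRICTED):
            print(f"frame {frame_no}: object inside restricted zone")
cap.release()
```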

More than identifying actions, video analytics allow computers to understand what’s going on in a video: They can flag people based on their clothing or behavior, identify people’s emotions through body language and behavior, and find people who are acting “unusual” based on everyone else around them. Those same Amazon in-store cameras can analyze customer sentiment. Other systems can describe what’s happening in a video scene.

Computers can also identify the people in those videos, and AIs are getting better at it. Facial recognition technology is improving all the time, made easier by the enormous stockpile of tagged photographs we give to Facebook and other social media sites, and by the photos governments collect in the process of issuing ID cards and driver’s licenses. The technology already exists to automatically identify everyone a camera “sees” in real time. Even without video identification, we can be identified by the unique information continuously broadcast by the smartphones we carry with us everywhere, or by our laptops or Bluetooth-connected devices. Police have been tracking phones for years, and this practice can now be combined with video analytics.

Once a monitoring system identifies people, their data can be combined with other data, either collected or purchased: from cell phone records, GPS surveillance history, purchasing data, and so on. Social media companies like Facebook have spent years learning about our personalities and beliefs by what we post, comment on, and “like.” This is “data inference,” and when combined with video it offers a powerful window into people’s behaviors and motivations.

Camera resolution is also improving. Gigapixel cameras are so good that they can capture individual faces and identify license plates in photos taken miles away. “Wide-area surveillance” cameras can be mounted on airplanes and drones, and can operate continuously. On the ground, cameras can be hidden in street lights and other regular objects. In space, satellite cameras have also dramatically improved.

Data storage has become incredibly cheap, and cloud storage makes it all so easy. Video data can easily be saved for years, allowing computers to conduct all of this surveillance backwards in time.

In democratic countries, such surveillance is marketed as crime prevention—or counterterrorism. In countries like China, it is blatantly used to suppress political activity and for social control. In all instances, it’s being implemented without a lot of public debate by law-enforcement agencies and by corporations in public spaces they control.

This is bad, because ubiquitous surveillance will drastically change our relationship to society. We’ve never lived in this sort of world, even those of us who have lived through previous totalitarian regimes. The effects will be felt in many different areas. False positives—when the surveillance system gets it wrong—will lead to harassment and worse. Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change. A recent ACLU report discusses these harms in more depth. While it’s possible that some of this surveillance is worth the trade-offs, we as a society need to deliberately and intelligently make decisions about it.

Some jurisdictions are starting to notice. Last month, San Francisco became the first city to ban the use of facial recognition technology by police and other government agencies. Similar bans are being considered in Somerville, MA, and Oakland, CA. These are exceptions, and limited to the more liberal areas of the country.

We often believe that technological change is inevitable, and that there’s nothing we can do to stop it—or even to steer it. That’s simply not true. We’re led to believe this because we don’t often see it, understand it, or have a say in how or when it is deployed. The problem is that technologies of cameras, resolution, machine learning, and artificial intelligence are complex and specialized.

Laws like the one just passed in San Francisco won’t stop the development of these technologies, but they’re not intended to. They’re intended as pauses, so our policy making can catch up with technology. As a general rule, the US government tends to ignore technologies as they’re being developed and deployed, so as not to stifle innovation. But as the rate of technological change increases, so do the unanticipated effects on our lives. Just as we’ve been surprised by the threats to democracy caused by surveillance capitalism, AI-enabled video surveillance will have similar surprising effects. Maybe a pause in our headlong deployment of these technologies will allow us the time to discuss what kind of society we want to live in, and then enact rules to bring that kind of society about.

This essay previously appeared on Vice Motherboard.

Posted on June 14, 2019 at 12:04 PM • 18 Comments

Video Surveillance by Computer

The ACLU’s Jay Stanley has just published a fantastic report: “The Dawn of Robot Surveillance” (blog post here). Basically, it lays out a future of ubiquitous video cameras watched by increasingly sophisticated video analytics software, and discusses the potential harms to society.

I’m not going to excerpt a piece, because you really need to read the whole thing.

Posted on June 14, 2019 at 6:28 AM • 9 Comments

iOS Shortcut for Recording the Police

“Hey Siri, I’m getting pulled over” can be a shortcut:

Once the shortcut is installed and configured, you just have to say, for example, “Hey Siri, I’m getting pulled over.” Then the program pauses music you may be playing, turns down the brightness on the iPhone, and turns on “do not disturb” mode.

It also sends a quick text to a predetermined contact to tell them you’ve been pulled over, and it starts recording using the iPhone’s front-facing camera. Once you’ve stopped recording, it can text or email the video to a different predetermined contact and save it to Dropbox.

Posted on June 7, 2019 at 6:24 AM • 32 Comments

Security and Human Behavior (SHB) 2019

Today is the second day of the twelfth Workshop on Security and Human Behavior, which I am hosting at Harvard University.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions—all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks—remotely, because he was denied a visa earlier this year.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, and eleventh SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Posted on June 6, 2019 at 2:16 PM • 14 Comments

Chinese Military Wants to Develop Custom OS

Citing security concerns, the Chinese military wants to replace Windows with its own custom operating system:

Thanks to the Snowden, Shadow Brokers, and Vault7 leaks, Beijing officials are well aware of the US’ hefty arsenal of hacking tools, available for anything from smart TVs to Linux servers, and from routers to common desktop operating systems, such as Windows and Mac.

Since these leaks have revealed that the US can hack into almost anything, the Chinese government’s plan is to adopt a “security by obscurity” approach and run a custom operating system that will make it harder for foreign threat actors—mainly the US—to spy on Chinese military operations.

It’s unclear exactly how custom this new OS will be. It could be a Linux variant, like North Korea’s Red Star OS. Or it could be something completely new. Normally, I would be highly skeptical of a country being able to write and field its own custom operating system, but China is one of the few that is large enough to actually be able to do it. So I’m just moderately skeptical.

EDITED TO ADD (6/12): Russia also wants to develop its own flavor of Linux.

Posted on June 6, 2019 at 7:04 AM • 66 Comments

The Cost of Cybercrime

Really interesting paper calculating the worldwide cost of cybercrime:

Abstract: In 2012 we presented the first systematic study of the costs of cybercrime. In this paper, we report what has changed in the seven years since. The period has seen major platform evolution, with the mobile phone replacing the PC and laptop as the consumer terminal of choice, with Android replacing Windows, and with many services moving to the cloud. The use of social networks has become extremely widespread. The executive summary is that about half of all property crime, by volume and by value, is now online. We hypothesised in 2012 that this might be so; it is now established by multiple victimisation studies. Many cybercrime patterns appear to be fairly stable, but there are some interesting changes. Payment fraud, for example, has more than doubled in value but has fallen slightly as a proportion of payment value; the payment system has simply become bigger, and slightly more efficient. Several new cybercrimes are significant enough to mention, including business email compromise and crimes involving cryptocurrencies. The move to the cloud means that system misconfiguration may now be responsible for as many breaches as phishing. Some companies have suffered large losses as a side-effect of denial-of-service worms released by state actors, such as NotPetya; we have to take a view on whether they count as cybercrime. The infrastructure supporting cybercrime, such as botnets, continues to evolve, and specific crimes such as premium-rate phone scams have evolved some interesting variants. The over-all picture is the same as in 2012: traditional offences that are now technically ‘computercrimes’ such as tax and welfare fraud cost the typical citizen in the low hundreds of Euros/dollars a year; payment frauds and similar offences, where the modus operandi has been completely changed by computers, cost in the tens; while the new computer crimes cost in the tens of cents. Defending against the platforms used to support the latter two types of crime cost citizens in the tens of dollars. Our conclusions remain broadly the same as in 2012: it would be economically rational to spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) and more on response. We are particularly bad at prosecuting criminals who operate infrastructure that other wrongdoers exploit. Given the growing realisation among policymakers that crime hasn’t been falling over the past decade, merely moving online, we might reasonably hope for better funded and coordinated law-enforcement action.

Richard Clayton gave a presentation on this yesterday at WEIS. His final slide contained a summary.

  • Payment fraud is up, but credit card sales are up even more—so we’re winning.
  • Cryptocurrencies are enabling new scams, but the big money is still being lost in more traditional investment fraud.
  • Telecom fraud is down, basically because Skype is free.
  • Anti-virus fraud has almost disappeared, but tech support scams are growing very rapidly.
  • The big money is still in tax fraud, welfare fraud, VAT fraud, and so on.
  • We spend more money on cyber defense than we do on the actual losses.
  • Criminals largely act with impunity. They don’t believe they will get caught, and mostly that’s correct.

Bottom line: the technology has changed a lot since 2012, but the economic considerations remain unchanged.

Posted on June 4, 2019 at 6:06 AM • 18 Comments

The Importance of Protecting Cybersecurity Whistleblowers

Interesting essay arguing that we need better legislation to protect cybersecurity whistleblowers.

Congress should act to protect cybersecurity whistleblowers because information security has never been so important, or so challenging. In the wake of a barrage of shocking revelations about data breaches and companies’ mishandling of customer data, a bipartisan consensus has emerged in support of legislation to give consumers more control over their personal information, require companies to disclose how they collect and use consumer data, and impose penalties for data breaches and misuse of consumer data. The Federal Trade Commission (“FTC”) has been held out as the best agency to implement this new regulation. But for any such legislation to be effective, it must protect the courageous whistleblowers who risk their careers to expose data breaches and unauthorized use of consumers’ private data.

Whistleblowers strengthen regulatory regimes, and cybersecurity regulation would be no exception. Republican and Democratic leaders from the executive and legislative branches have extolled the virtues of whistleblowers. High-profile cases abound. Recently, Christopher Wylie exposed Cambridge Analytica’s misuse of Facebook user data to manipulate voters, including its apparent theft of data from 50 million Facebook users as part of a psychological profiling campaign. Though additional research is needed, the existing empirical data reinforces the consensus that whistleblowers help prevent, detect, and remedy misconduct. Therefore it is reasonable to conclude that protecting and incentivizing whistleblowers could help the government address the many complex challenges facing our nation’s information systems.

Posted on June 3, 2019 at 6:30 AM • 17 Comments
