March 15, 2019
by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Cataloging IoT Vulnerabilities
- I Am Not Associated with Swift Recovery Ltd.
- Estonia’s Volunteer Cyber Militia
- Details on Recent DNS Hijacking
- Reverse Location Search Warrants
- Gen. Nakasone on US Cyber Command
- On the Security of Password Managers
- Attacking Soldiers on Social Media
- “Insider Threat” Detection Software
- Can Everybody Read the US Terrorist Watch List?
- Data Leakage from Encrypted Databases
- The Latest in Creepy Spyware
- Cybersecurity for the Public Interest
- Digital Signatures in PDFs Are Broken
- Letterlocking
- Detecting Shoplifting Behavior
- Cybersecurity Insurance Not Paying for NotPetya Losses
- Videos and Links from the Public-Interest Technology Track at the RSA Conference
- Russia Is Testing Online Voting
- On Surveillance in the Workplace
- Judging Facebook’s Privacy Shift
- DARPA Is Developing an Open-Source Voting System
- Upcoming Speaking Engagements
Cataloging IoT Vulnerabilities
[2019.02.18] Recent articles about IoT vulnerabilities describe hacking of construction cranes, supermarket freezers, and electric scooters.
I Am Not Associated with Swift Recovery Ltd.
[2019.02.18] It seems that someone from a company called Swift Recovery Ltd. is impersonating me—at least on Telegram. The person is using a photo of me, and is using details of my life available on Wikipedia to convince people that they are me.
They are not.
If anyone has any more information—stories, screen shots of chats, etc.—please forward them to me.
Estonia’s Volunteer Cyber Militia
[2019.02.19] Interesting—although short and not very detailed—article about Estonia’s volunteer cyber-defense militia.
Padar’s militia of amateur IT workers, economists, lawyers, and other white-hat types are grouped in the city of Tartu, about 65 miles from the Russian border, and in the capital, Tallinn, about twice as far from it. The volunteers, who’ve inspired a handful of similar operations around the world, are readying themselves to defend against the kind of sustained digital attack that could cause mass service outages at hospitals, banks, military bases, and other critical operations, including voting systems. Officially, the team is part of Estonia’s 26,000-strong national guard, the Defense League.
[…]
Formally established in 2011, Padar’s unit mostly runs on about €150,000 ($172,000) in annual state funding, plus salaries for him and four colleagues. (If that sounds paltry, remember that the country’s median annual income is about €12,000.) Some volunteers oversee a website that calls out Russian propaganda posing as news directed at Estonians in Estonian, Russian, English, and German. Other members recently conducted forensic analysis on an attack against a military system, while yet others searched for signs of a broader campaign after discovering vulnerabilities in the country’s electronic ID cards, which citizens use to check bank and medical records and to vote. (The team says it didn’t find anything, and the security flaws were quickly patched.)
Mostly, the volunteers run weekend drills with troops, doctors, customs and tax agents, air traffic controllers, and water and power officials. “Somehow, this model is based on enthusiasm,” says Andrus Ansip, who was prime minister during the 2007 attack and now oversees digital affairs for the European Commission. To gauge officials’ responses to realistic attacks, the unit might send out emails with sketchy links or drop infected USB sticks to see if someone takes the bait.
EDITED TO ADD (3/11): Here’s a brief interview with the current commander—and one of the founding members of the unit. Here’s a longer presentation.
Details on Recent DNS Hijacking
[2019.02.20] At the end of January, the US Department of Homeland Security issued a warning regarding serious DNS hijacking attempts against US government domains.
Brian Krebs wrote an excellent article detailing the attacks and their implications. Strongly recommended.
Reverse Location Search Warrants
[2019.02.21] The police are increasingly getting search warrants for information about all cell phones in a certain location at a certain time:
Police departments across the country have been knocking at Google’s door for at least the last two years with warrants to tap into the company’s extensive stores of cellphone location data. Known as “reverse location search warrants,” these legal mandates allow law enforcement to sweep up the coordinates and movements of every cellphone in a broad area. The police can then check to see if any of the phones came close to the crime scene. In doing so, however, the police can end up not only fishing for a suspect, but also gathering the location data of potentially hundreds (or thousands) of innocent people. There have only been anecdotal reports of reverse location searches, so it’s unclear how widespread the practice is, but privacy advocates worry that Google’s data will eventually allow more and more departments to conduct indiscriminate searches.
Of course, it’s not just Google who can provide this information.
I am also reminded of a Canadian surveillance program disclosed by Snowden.
I spend a lot of time talking about this sort of thing in Data and Goliath. Once you have everyone under surveillance all the time, many things are possible.
EDITED TO ADD (3/13): Here’s the portal law enforcement uses to make its requests.
Gen. Nakasone on US Cyber Command
[2019.02.22] Really interesting article by and interview with Paul M. Nakasone (Commander of US Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service) in the current issue of Joint Forces Quarterly. He talks about the evolving role of US Cyber Command, and its new posture of “persistent engagement” using a “cyber-persistent force.”
From the article:
We must “defend forward” in cyberspace, as we do in the physical domains. Our naval forces do not defend by staying in port, and our airpower does not remain at airfields. They patrol the seas and skies to ensure they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace. Persistent engagement of our adversaries in cyberspace cannot be successful if our actions are limited to DOD networks. To defend critical military and national interests, our forces must operate against our enemies on their virtual territory as well. Shifting from a response outlook to a persistence force that defends forward moves our cyber capabilities out of their virtual garrisons, adopting a posture that matches the cyberspace operational environment.
From the interview:
As we think about cyberspace, we should agree on a few foundational concepts. First, our nation is in constant contact with its adversaries; we’re not waiting for adversaries to come to us. Our adversaries understand this, and they are always working to improve that contact. Second, our security is challenged in cyberspace. We have to actively defend; we have to conduct reconnaissance; we have to understand where our adversary is and his capabilities; and we have to understand their intent. Third, superiority in cyberspace is temporary; we may achieve it for a period of time, but it’s ephemeral. That’s why we must operate continuously to seize and maintain the initiative in the face of persistent threats. Why do the threats persist in cyberspace? They persist because the barriers to entry are low and the capabilities are rapidly available and can be easily repurposed. Fourth, in this domain, the advantage favors those who have initiative. If we want to have an advantage in cyberspace, we have to actively work to either improve our defenses, create new accesses, or upgrade our capabilities. This is a domain that requires constant action because we’re going to get reactions from our adversary.
[…]
Persistent engagement is the concept that states we are in constant contact with our adversaries in cyberspace, and success is determined by how we enable and act. In persistent engagement, we enable other interagency partners. Whether it’s the FBI or DHS, we enable them with information or intelligence to share with elements of the CIKR [critical infrastructure and key resources] or with select private-sector companies. The recent midterm elections is an example of how we enabled our partners. As part of the Russia Small Group, USCYBERCOM and the National Security Agency [NSA] enabled the FBI and DHS to prevent interference and influence operations aimed at our political processes. Enabling our partners is two-thirds of persistent engagement. The other third rests with our ability to act—that is, how we act against our adversaries in cyberspace. Acting includes defending forward. How do we warn, how do we influence our adversaries, how do we position ourselves in case we have to achieve outcomes in the future? Acting is the concept of operating outside our borders, being outside our networks, to ensure that we understand what our adversaries are doing. If we find ourselves defending inside our own networks, we have lost the initiative and the advantage.
[…]
The concept of persistent engagement has to be teamed with “persistent presence” and “persistent innovation.” Persistent presence is what the Intelligence Community is able to provide us to better understand and track our adversaries in cyberspace. The other piece is persistent innovation. In the last couple of years, we have learned that capabilities rapidly change; accesses are tenuous; and tools, techniques, and tradecraft must evolve to keep pace with our adversaries. We rely on operational structures that are enabled with the rapid development of capabilities. Let me offer an example regarding the need for rapid change in technologies. Compare the air and cyberspace domains. Weapons like JDAMs [Joint Direct Attack Munitions] are an important armament for air operations. How long are those JDAMs good for? Perhaps 5, 10, or 15 years, sometimes longer given the adversary. When we buy a capability or tool for cyberspace…we rarely get a prolonged use we can measure in years. Our capabilities rarely last 6 months, let alone 6 years. This is a big difference in two important domains of future conflict. Thus, we will need formations that have ready access to developers.
Solely from a military perspective, these are obviously the right things to be doing. From a societal perspective—from the perspective of a potential arms race—I’m much less sure. I’m also worried about the singular focus on nation-state actors in an environment where capabilities diffuse so quickly. But Cyber Command’s job is not cybersecurity and resilience.
The whole thing is worth reading, regardless of whether you agree or disagree.
EDITED TO ADD (2/26): As an example, US Cyber Command disrupted a Russian troll farm during the 2018 midterm elections.
On the Security of Password Managers
[2019.02.25] There’s new research on the security of password managers, specifically 1Password, Dashlane, KeePass, and LastPass. This work specifically looks at password leakage on the host computer. That is, does the password manager accidentally leave plaintext copies of the password lying around in memory?
All password managers we examined sufficiently secured user secrets while in a “not running” state. That is, if a password database were to be extracted from disk and if a strong master password was used, then brute forcing of a password manager would be computationally prohibitive.
Each password manager also attempted to scrub secrets from memory. But residual buffers remained that contained secrets, most likely due to memory leaks, lost memory references, or complex GUI frameworks which do not expose internal memory management mechanisms to sanitize secrets.
This was most evident in 1Password7 where secrets, including the master password and its associated secret key, were present in both a locked and unlocked state. This is in contrast to 1Password4, where at most, a single entry is exposed in a “running unlocked” state and the master password exists in memory in an obfuscated form, but is easily recoverable. If 1Password4 scrubbed the master password memory region upon successful unlocking, it would comply with all proposed security guarantees we outlined earlier.
This paper is not meant to criticize specific password manager implementations; however, it is to establish a reasonable minimum baseline which all password managers should comply with. It is evident that attempts are made to scrub sensitive memory in all password managers. However, each password manager fails in implementing proper secrets sanitization for various reasons.
For example:
LastPass obfuscates the master password while users are typing in the entry, and when the password manager enters an unlocked state, database entries are only decrypted into memory when there is user interaction. However, ISE reported that these entries persist in memory after the software enters a locked state. It was also possible for the researchers to extract the master password and interacted-with password entries due to a memory leak.
KeePass scrubs the master password from memory and is not recoverable. However, errors in workflows permitted the researchers to extract credential entries which have been interacted with. In the case of Windows APIs, sometimes, various memory buffers which contain decrypted entries may not be scrubbed correctly.
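To make the sanitization problem concrete, here is a minimal sketch of my own; it is not code from the paper or from any of the password managers tested. A secret kept in a mutable buffer can at least be overwritten in place once it is no longer needed, but any copies made along the way are outside the program’s control:

    # Minimal illustration (mine, not the researchers') of secret scrubbing in Python.
    # Immutable objects (str, bytes) cannot be erased in place, so the master
    # password is kept in a mutable bytearray and zeroed after use.
    import hashlib

    def derive_key(master: bytearray) -> bytes:
        """Stand-in for the password manager's key-derivation step."""
        # bytes(master) creates an immutable copy; such copies are one way
        # plaintext ends up in memory that nothing ever scrubs.
        return hashlib.pbkdf2_hmac("sha256", bytes(master), b"example-salt", 100_000)

    master = bytearray(b"correct horse battery staple")
    try:
        key = derive_key(master)
    finally:
        for i in range(len(master)):  # overwrite the buffer in place, byte by byte
            master[i] = 0

Even this much is easy to get wrong: GUI frameworks, string conversions, and memory allocators all make copies behind the program’s back, which is exactly the kind of residual data the researchers found.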
Whether this is a big deal or not depends on whether you consider your computer to be trusted.
Several people have emailed me to ask why my own Password Safe was not included in the evaluation, and whether it has the same vulnerabilities. My guess about the former is that Password Safe isn’t as popular as the others. (This is for two reasons: 1) I don’t publicize it very much, and 2) it doesn’t have an easy way to synchronize passwords across devices or otherwise store password data in the cloud.) As to the latter: we tried to code Password Safe not to leave plaintext passwords lying around in memory.
So, Independent Security Evaluators: take a look at Password Safe.
Also, remember the vulnerabilities found in many cloud-based password managers back in 2014?
News article. Slashdot thread.
Attacking Soldiers on Social Media
[2019.02.26] A research group at NATO’s Strategic Communications Center of Excellence catfished soldiers involved in a European military exercise—we don’t know what country they were from—to demonstrate the power of the attack technique.
Over four weeks, the researchers developed fake pages and closed groups on Facebook that looked like they were associated with the military exercise, as well as profiles impersonating service members both real and imagined.
To recruit soldiers to the pages, they used targeted Facebook advertising. Those pages then promoted the closed groups the researchers had created. Inside the groups, the researchers used their phony accounts to ask the real service members questions about their battalions and their work. They also used these accounts to “friend” service members. According to the report, Facebook’s Suggested Friends feature proved helpful in surfacing additional targets.
The researchers also tracked down service members’ Instagram and Twitter accounts and searched for other information available online, some of which a bad actor might be able to exploit. “We managed to find quite a lot of data on individual people, which would include sensitive information,” Biteniece says. “Like a serviceman having a wife and also being on dating apps.”
By the end of the exercise, the researchers identified 150 soldiers, found the locations of several battalions, tracked troop movements, and compelled service members to engage in “undesirable behavior,” including leaving their positions against orders.
“Every person has a button. For somebody there’s a financial issue, for somebody it’s a very appealing date, for somebody it’s a family thing,” Sarts says. “It’s varied, but everybody has a button. The point is, what’s openly available online is sufficient to know what that is.”
This is the future of warfare. It’s one of the reasons China stole all of that data from the Office of Personnel Management. If indeed a country’s intelligence service was behind the Equifax attack, this is why they did it.
Go back and read this scenario from the Center for Strategic and International Studies. Why wouldn’t a country intent on starting a war do it that way?
“Insider Threat” Detection Software
[2019.02.27] Notice this bit from an article on the arrest of Christopher Hasson:
It was only after Hasson’s arrest last Friday at his workplace that the chilling plans prosecutors assert he was crafting became apparent, detected by an internal Coast Guard program that watches for any “insider threat.”
The program identified suspicious computer activity tied to Hasson, prompting the agency’s investigative service to launch an investigation last fall, said Lt. Cmdr. Scott McBride, a service spokesman.
Any detection system of this kind is going to have to balance false positives with false negatives. Could it be something as simple as visiting right-wing extremist websites or watching their videos? It just has to be something more sophisticated than researching pressure cookers. I’m glad that Hasson was arrested before he killed anyone rather than after, but I worry that these systems are basically creating thoughtcrime.
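To see how lopsided that balance is, run some illustrative numbers; these are assumptions of mine, not the Coast Guard’s. Genuine insider threats are rare, so even an accurate detector buries its real hits under false alarms:

    # Illustrative base-rate arithmetic; all figures are assumptions, not the
    # Coast Guard's. Rare targets plus an imperfect detector means most alerts
    # point at innocent people.
    employees = 100_000
    true_threats = 1              # assume one genuine insider threat
    hit_rate = 0.99               # assume the system flags 99% of real threats
    false_positive_rate = 0.01    # assume it also flags 1% of innocent people

    flagged_real = true_threats * hit_rate                                # about 1
    flagged_innocent = (employees - true_threats) * false_positive_rate   # about 1,000

    precision = flagged_real / (flagged_real + flagged_innocent)
    print(f"Share of alerts that are real threats: {precision:.2%}")      # about 0.1%

Tune the system to cut the false positives and it starts missing real threats; tune it the other way and investigators drown in alerts about people whose only offense is their browsing history.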
Can Everybody Read the US Terrorist Watch List?
[2019.02.28] After years of claiming that the Terrorist Screening Database is kept secret within the government, we have now learned that the DHS shares it “with more than 1,400 private entities, including hospitals and universities….”
Critics say that the watchlist is wildly overbroad and mismanaged, and that large numbers of people wrongly included on the list suffer routine difficulties and indignities because of their inclusion.
The government’s admission comes in a class-action lawsuit filed in federal court in Alexandria by Muslims who say they regularly experience difficulties in travel, financial transactions and interactions with law enforcement because they have been wrongly added to the list.
Of course that is the effect.
We need more transparency into this process. People need a way to challenge their inclusion on the list, and a redress process if they are being falsely accused.
Data Leakage from Encrypted Databases
[2019.03.01] Matthew Green has a super-interesting blog post about information leakage from encrypted databases. It describes the recent work by Paul Grubbs, Marie-Sarah Lacharité, Brice Minaud, and Kenneth G. Paterson.
Even the summary is too much to summarize, so read it.
The Latest in Creepy Spyware
[2019.03.04] The Nest home alarm system shipped with a secret microphone, which—according to the company—was only an accidental secret:
On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”
“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”
Where are the consumer protection agencies? They should be all over this.
And while they’re figuring out which laws Google broke, they should also look at American Airlines. Turns out that some of their seats have built-in cameras:
American Airlines spokesperson Ross Feinstein confirmed to BuzzFeed News that cameras are present on some of the airlines’ in-flight entertainment systems, but said “they have never been activated, and American is not considering using them.” Feinstein added, “Cameras are a standard feature on many in-flight entertainment systems used by multiple airlines. Manufacturers of those systems have included cameras for possible future uses, such as hand gestures to control in-flight entertainment.”
That makes it all okay, doesn’t it?
Actually, I kind of understand the airline seat camera thing. My guess is that whoever designed the in-flight entertainment system just specced a standard tablet computer, and they all came with unnecessary features like cameras. This is how we end up with refrigerators with Internet connectivity and Roombas with microphones. It’s cheaper to leave the functionality in than it is to remove it.
Still, we need better disclosure laws.
Cybersecurity for the Public Interest
[2019.03.05] The Crypto Wars have been raging off and on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption, to access devices and communications of terrorists and criminals. On the other side is almost every cryptographer and computer security expert, repeatedly explaining that there’s no way to provide this capability without also weakening the security of every user of those devices and communications systems.
It’s an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism—as practiced by the Internet companies that are already spying on everyone—matters. So do society’s underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit of having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?
The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals—that occasionally become law—that are technological disasters.
This isn’t sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand—and are involved in—policy. We need public-interest technologists.
Let’s pause at that term. The Ford Foundation defines public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics recently wrote that public-interest technologists are people who “study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good.” Tim Berners-Lee has called them “philosophical engineers.” I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it’s not the best term—and I know not everyone likes it—but it’s a decent umbrella term that can encompass all these roles.
We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved not only in the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world are critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn’t want to formulate public policy about artificial intelligence and robotics without a security technologist involved.
Public-interest technology isn’t new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.
This might all seem like a lot, but it’s really not. There aren’t enough people doing it, there aren’t enough people who know it needs to be done, and there aren’t enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.
There are many barriers. There’s a report titled A Pivotal Moment that includes this quote: “While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress.”
That quote speaks to the three places for intervention. One: the supply side. There just isn’t enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with “clinics” at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they’ve learned and done.
Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judicial branches. President Obama’s US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.
Three: the marketplace. We need job boards, conferences, and skills exchanges—places where people on the supply side can learn about the demand.
Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.
This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.
This is bigger than computer security. Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.
More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.
This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.
The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.
This essay previously appeared in the January/February issue of IEEE Security & Privacy.
Together with the Ford Foundation, I am hosting a one-day mini-track on public-interest technologists at the RSA Conference this week on Thursday. We’ve had some press coverage.
EDITED TO ADD (3/7): More news articles.
Digital Signatures in PDFs Are Broken
[2019.03.06] Researchers have demonstrated spoofing of digital signatures in PDF files.
This would matter more if PDF digital signatures were widely used. Still, the researchers have worked with the various companies that make PDF readers to close the vulnerabilities. You should update your software.
News article.
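For a sense of why this is possible at all: a PDF signature covers only the byte ranges declared at signing time, and one of the reported attack classes appends an “incremental update” outside those ranges, so what the reader displays changes while the original signature still verifies. Here is a toy model of that coverage gap; it uses a plain RSA signature over a byte slice and invented file contents, not the real PDF structures or the researchers’ code:

    # Toy model (not real PDF parsing) of a signature that covers only a
    # declared byte range. Requires the "cryptography" package.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    original = b"%PDF-1.7 ... Pay Alice $100 ..."   # stand-in for the signed revision
    signature = key.sign(original, padding.PKCS1v15(), hashes.SHA256())
    signed_range = len(original)                    # what the signature covers

    # An attacker appends an incremental update that changes what is displayed.
    tampered = original + b"\n% incremental update: Pay Mallory $1,000,000"

    # A naive viewer that verifies only the declared range still succeeds;
    # verify() raises InvalidSignature on failure and returns None on success.
    key.public_key().verify(signature, tampered[:signed_range],
                            padding.PKCS1v15(), hashes.SHA256())
    print("Signature still verifies; the appended content is simply not covered.")

Real viewers have to validate far more than this, but the underlying lesson is the same: the cryptography can be perfectly sound while the application around it is fooled.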
Letterlocking
[2019.03.07] Really good article on the now-lost art of letterlocking.
Detecting Shoplifting Behavior
[2019.03.07] This system claims to detect suspicious behavior that indicates shoplifting:
Vaak, a Japanese startup, has developed artificial intelligence software that hunts for potential shoplifters, using footage from security cameras for fidgeting, restlessness and other potentially suspicious body language.
The article has no detail or analysis, so we don’t know how well it works. But this kind of thing is surely the future of video surveillance.
Cybersecurity Insurance Not Paying for NotPetya Losses
[2019.03.08] This will complicate things:
To complicate matters, having cyber insurance might not cover everyone’s losses. Zurich American Insurance Company refused to pay out a $100 million claim from Mondelez, saying that since the U.S. and other governments labeled the NotPetya attack as an action by the Russian military their claim was excluded under the “hostile or warlike action in time of peace or war” exemption.
I get that $100 million is real money, but the insurance industry needs to figure out how to properly insure commercial networks against this sort of thing.
Videos and Links from the Public-Interest Technology Track at the RSA Conference
[2019.03.08] Yesterday at the RSA Conference, I gave a keynote talk about the role of public-interest technologists in cybersecurity. (Video here.)
I also hosted a one-day mini-track on the topic. We had six panels, and they were all great. If you missed it live, we have videos:
- How Public Interest Technologists are Changing the World: Matt Mitchell, Tactical Tech; Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; and J. Bob Alotta, Astraea Foundation (Moderator). (Video here.)
- Public Interest Tech in Silicon Valley: Mitchell Baker, Chairwoman, Mozilla Corporation; Cindy Cohn, EFF; and Lucy Vasserman, Software Engineer, Google. (Video here.)
- Working in Civil Society: Sarah Aoun, Digital Security Technologist; Peter Eckersley, Partnership on AI; Harlo Holmes, Director of Newsroom Digital Security, Freedom of the Press Foundation; and John Scott-Railton, Senior Researcher, Citizen Lab. (Video here.)
- Government Needs You: Travis Moore, TechCongress; Hashim Mteuzi, Senior Manager, Network Talent Initiative, Code for America; Gigi Sohn, Distinguished Fellow, Georgetown Law Institute for Technology, Law and Policy; and Ashkan Soltani, Independent Consultant. (Video here.)
- Changing Academia: Latanya Sweeney, Harvard; Deirdre Mulligan, UC Berkeley; and Danny Weitzner, MIT CSAIL. (Video here.)
- The Future of Public Interest Tech: Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; Ben Wizner, ACLU; and Jenny Toomey, Director, Internet Freedom, Ford Foundation (Moderator). (Video here.)
I also conducted eight short video interviews with different people involved in public-interest technology: independent security technologist Sarah Aoun, TechCongress’s Travis Moore, Ford Foundation’s Jenny Toomey, Citizen Lab’s John Scott-Railton, Deirdre Mulligan from UC Berkeley, ACLU’s Jon Callas, Matt Mitchell of Tactical Tech, and Kelley Misata from Sightline Security.
Here is my blog post about the event. Here’s Ford Foundation’s blog post on why they helped me organize the event.
We got some good press coverage about the event. (Hey MeriTalk: you spelled my name wrong.)
Related: Here’s my longer essay on the need for public-interest technologists in Internet security, and my public-interest technology resources page.
And just so we have all the URLs in one place, here is a page from the RSA Conference website with links to all of the videos.
If you liked this mini-track, please rate it highly on your RSA Conference evaluation form. I’d like to do it again next year.
Russia Is Testing Online Voting
[2019.03.11] This is a bad idea:
A second innovation will allow “electronic absentee voting” within voters’ home precincts. In other words, Russia is set to introduce its first online voting system. The system will be tested in a Moscow neighborhood that will elect a single member to the capital’s city council in September. The details of how the experiment will work are not yet known; the State Duma’s proposal on Internet voting does not include logistical specifics. The Central Election Commission’s reference materials on the matter simply reference “absentee voting, blockchain technology.” When Dmitry Vyatkin, one of the bill’s co-sponsors, attempted to describe how exactly blockchains would be involved in the system, his explanation was entirely disconnected from the actual functions of that technology. A discussion of this new type of voting is planned for an upcoming public forum in Moscow.
Surely the Russians know that online voting is insecure. Could they not care, or do they think the surveillance is worth the risk?
On Surveillance in the Workplace
[2019.03.12] Data & Society just published a report entitled “Workplace Monitoring & Surveillance”:
This explainer highlights four broad trends in employee monitoring and surveillance technologies:
- Prediction and flagging tools that aim to predict characteristics or behaviors of employees or that are designed to identify or deter perceived rule-breaking or fraud. Touted as useful management tools, they can augment biased and discriminatory practices in workplace evaluations and segment workforces into risk categories based on patterns of behavior.
- Biometric and health data of workers collected through tools like wearables, fitness tracking apps, and biometric timekeeping systems as a part of employer-provided health care programs, workplace wellness programs, and digital tools for tracking work shifts. Tracking non-work-related activities and information, such as health data, may challenge the boundaries of worker privacy, open avenues for discrimination, and raise questions about consent and workers’ ability to opt out of tracking.
- Remote monitoring and time-tracking used to manage workers and measure performance remotely. Companies may use these tools to decentralize and lower costs by hiring independent contractors, while still being able to exert control over them like traditional employees with the aid of remote monitoring tools. More advanced time-tracking can generate itemized records of on-the-job activities, which can be used to facilitate wage theft or allow employers to trim what counts as paid work time.
- Gamification and algorithmic management of work activities through continuous data collection. Technology can take on management functions, such as sending workers automated “nudges” or adjusting performance benchmarks based on a worker’s real-time progress, while gamification renders work activities into competitive, game-like dynamics driven by performance metrics. However, these practices can create punitive work environments that place pressures on workers to meet demanding and shifting efficiency benchmarks.
In a blog post about this report, Cory Doctorow mentioned “the adoption curve for oppressive technology, which goes, ‘refugee, immigrant, prisoner, mental patient, children, welfare recipient, blue collar worker, white collar worker.'” I don’t agree with the ordering, but the sentiment is correct. These technologies are generally used first against people with diminished rights: prisoners, children, the mentally ill, and soldiers.
Judging Facebook’s Privacy Shift
[2019.03.13] Facebook is making a new and stronger commitment to privacy. Last month, the company hired three of its most vociferous critics and installed them in senior technical positions. And on Wednesday, Mark Zuckerberg wrote that the company will pivot to focus on private conversations over the public sharing that has long defined the platform, even while conceding that “frankly we don’t currently have a strong reputation for building privacy protective services.”
There is ample reason to question Zuckerberg’s pronouncement: The company has made—and broken—many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook’s surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.
In security and privacy, the devil is always in the details—and Zuckerberg’s post provides none. But we’ll take him at his word and try to fill in some of the details here. What follows is a list of changes we should expect if Facebook is serious about changing its business model and improving user privacy.
How Facebook treats people on its platform
Increased transparency over advertiser and app accesses to user data. Today, Facebook users can download and view much of the data the company has about them. This is important, but it doesn’t go far enough. The company could be more transparent about what data it shares with advertisers and others and how it allows advertisers to select users they show ads to. Facebook could use its substantial skills in usability testing to help people understand the mechanisms advertisers use to show them ads or the reasoning behind what it chooses to show in user timelines. It could deliver on promises in this area.
Better—and more usable—privacy options. Facebook users have limited control over how their data is shared with other Facebook users and almost no control over how it is shared with Facebook’s advertisers, which are the company’s real customers. Moreover, the controls are buried deep behind complex and confusing menu options. To be fair, some of this is because privacy is complex, and it’s hard to understand the results of different options. But much of this is deliberate; Facebook doesn’t want its users to make their data private from other users.
The company could give people better control over how—and whether—their data is used, shared, and sold. For example, it could allow users to turn off individually targeted news and advertising. By this, we don’t mean simply making those advertisements invisible; we mean turning off the data flows into those tailoring systems. Finally, since most users stick to the default options when it comes to configuring their apps, a changing Facebook could tilt those defaults toward more privacy, requiring less tailoring most of the time.
More user protection from stalking. “Facebook stalking” is often thought of as “stalking light,” or “harmless.” But stalkers are rarely harmless. Facebook should acknowledge this class of misuse and work with experts to build tools that protect all of its users, especially its most vulnerable ones. Such tools should guide normal people away from creepiness and give victims power and flexibility to enlist aid from sources ranging from advocates to police.
Fully ending real-name enforcement. Facebook’s real-names policy, requiring people to use their actual legal names on the platform, hurts people such as activists, victims of intimate partner violence, police officers whose work makes them targets, and anyone with a public persona who wishes to have control over how they identify to the public. There are many ways Facebook can improve on this, from ending enforcement to allowing verified pseudonyms for everyone—not just celebrities like Lady Gaga. Doing so would mark a clear shift.
How Facebook runs its platform
Increased transparency of Facebook’s business practices. One of the hard things about evaluating Facebook is the effort needed to get good information about its business practices. When violations are exposed by the media, as they regularly are, we are all surprised at the different ways Facebook violates user privacy. Most recently, the company used phone numbers provided for two-factor authentication for advertising and networking purposes. Facebook needs to be both explicit and detailed about how and when it shares user data. In fact, a move from discussing “sharing” to discussing “transfers,” “access to raw information,” and “access to derived information” would be a visible improvement.
Increased transparency regarding censorship rules. Facebook makes choices about what content is acceptable on its site. Those choices are controversial, implemented by thousands of low-paid workers rapidly applying unclear rules. These are tremendously hard problems without clear solutions. Even obvious rules like banning hateful words run into challenges when people try to legitimately discuss certain important topics. Whatever Facebook does in this regard, the company needs to be more transparent about its processes. It should allow regulators and the public to audit the company’s practices. Moreover, Facebook should share any innovative engineering solutions with the world, much as it currently shares its data center engineering.
Better security for collected user data. There have been numerous examples of attackers targeting cloud service platforms to gain access to user data. Facebook has a large and skilled product security team that says some of the right things. That team needs to be involved in the design trade-offs for features and not just review the near-final designs for flaws. Shutting down a feature based on internal security analysis would be a clear message.
Better data security so Facebook sees less. Facebook eavesdrops on almost every aspect of its users’ lives. On the other hand, WhatsApp—purchased by Facebook in 2014—provides users with end-to-end encrypted messaging. While Facebook knows who is messaging whom and how often, Facebook has no way of learning the contents of those messages. Recently, Facebook announced plans to combine WhatsApp, Facebook Messenger, and Instagram, extending WhatsApp’s security to the consolidated system. Changing course here would be a dramatic and negative signal.
Collecting less data from outside of Facebook. Facebook doesn’t just collect data about you when you’re on the platform. Because its “like” button is on so many other pages, the company can collect data about you when you’re not on Facebook. It even collects what it calls “shadow profiles”—data about you even if you’re not a Facebook user. This data is combined with other surveillance data the company buys, including health and financial data. Collecting and saving less of this data would be a strong indicator of a new direction for the company.
Better use of Facebook data to prevent violence. There is a trade-off between Facebook seeing less and Facebook doing more to prevent hateful and inflammatory speech. Dozens of people have been killed by mob violence because of fake news spread on WhatsApp. If Facebook were doing a convincing job of controlling fake news without end-to-end encryption, then we would expect to hear how it could use patterns in metadata to handle encrypted fake news.
How Facebook manages for privacy
Create a team measured on privacy and trust. Where companies spend their money tells you what matters to them. Facebook has a large and important growth team, but what team, if any, is responsible for privacy, not as a matter of compliance or pushing the rules, but for engineering? Transparency in how it is staffed relative to other teams would be telling.
Hire a senior executive responsible for trust. Facebook’s current team has been focused on growth and revenue. Its one chief security officer, Alex Stamos, was not replaced when he left in 2018, which may indicate that having an advocate for security on the leadership team led to debate and disagreement. Retaining a voice for security and privacy issues at the executive level, before those issues affected users, was a good thing. Now that responsibility is diffuse. It’s unclear how Facebook measures and assesses its own progress and who might be held accountable for failings. Facebook can begin the process of fixing this by designating a senior executive who is responsible for trust.
Engage with regulators. Much of Facebook’s posturing seems to be an attempt to forestall regulation. Facebook sends lobbyists to Washington and other capitals, and until recently the company sent support staff to politicians’ offices. It has secret lobbying campaigns against privacy laws. And Facebook has repeatedly violated a 2011 Federal Trade Commission consent order regarding user privacy. Regulating big technical projects is not easy. Most of the people who understand how these systems work understand them because they build them. Societies will regulate Facebook, and the quality of that regulation requires real education of legislators and their staffs. While businesses often want to avoid regulation, any focus on privacy will require strong government oversight. If Facebook is serious about privacy being a real interest, it will accept both government regulation and community input.
User privacy is traditionally against Facebook’s core business interests. Advertising is its business model, and targeted ads sell better and more profitably—and that requires users to engage with the platform as much as possible. Increased pressure on Facebook to manage propaganda and hate speech could easily lead to more surveillance. But there is pressure in the other direction as well, as users equate privacy with increased control over how they present themselves on the platform.
We don’t expect Facebook to abandon its advertising business model, relent in its push for monopolistic dominance, or fundamentally alter its social networking platforms. But the company can give users important privacy protections and controls without abandoning surveillance capitalism. While some of these changes will reduce profits in the short term, we hope Facebook’s leadership realizes that they are in the best long-term interest of the company.
Facebook talks about community and bringing people together. These are admirable goals, and there’s plenty of value (and profit) in having a sustainable platform for connecting people. But as long as the most important measure of success is short-term profit, doing things that help strengthen communities will fall by the wayside. Surveillance, which allows individually targeted advertising, will be prioritized over user privacy. Outrage, which drives engagement, will be prioritized over feelings of belonging. And corporate secrecy, which allows Facebook to evade both regulators and its users, will be prioritized over societal oversight. If Facebook now truly believes that these latter options are critical to its long-term success as a company, we welcome the changes that are forthcoming.
This essay was co-authored with Adam Shostack, and originally appeared on Medium OneZero. We wrote a similar essay in 2002 about judging Microsoft’s then newfound commitment to security.
DARPA Is Developing an Open-Source Voting System
[2019.03.14] This sounds like a good development:
…a new $10 million contract the Defense Department’s Defense Advanced Research Projects Agency (DARPA) has launched to design and build a secure voting system that it hopes will be impervious to hacking.
The first-of-its-kind system will be designed by an Oregon-based firm called Galois, a longtime government contractor with experience in designing secure and verifiable systems. The system will use fully open source voting software, instead of the closed, proprietary software currently used in the vast majority of voting machines, which no one outside of voting machine testing labs can examine. More importantly, it will be built on secure open source hardware, made from special secure designs and techniques developed over the last year as part of a special program at DARPA. The voting system will also be designed to create fully verifiable and transparent results so that voters don’t have to blindly trust that the machines and election officials delivered correct results.
But DARPA and Galois won’t be asking people to blindly trust that their voting systems are secure—as voting machine vendors currently do. Instead they’ll be publishing source code for the software online and bringing prototypes of the systems to the Def Con Voting Village this summer and next, so that hackers and researchers will be able to freely examine the systems themselves and conduct penetration tests to gauge their security. They’ll also be working with a number of university teams over the next year to have them examine the systems in formal test environments.
Upcoming Speaking Engagements
I’m teaching a live online class called “Spotlight on Cloud: The Future of Internet Security with Bruce Schneier” on O’Reilly’s learning platform, Thursday, April 4, at 10:00 AM PT/1:00 PM ET.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books—including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.
Copyright © 2019 by Bruce Schneier.