
Friday Squid Blogging: Squid Can Edit Their RNA

This is just crazy:

Scientists don’t yet know for sure why octopuses, and other shell-less cephalopods including squid and cuttlefish, are such prolific editors. Researchers are debating whether this form of genetic editing gave cephalopods an evolutionary leg (or tentacle) up or whether the editing is just a sometimes useful accident. Scientists are also probing what consequences the RNA alterations may have under various conditions.

I sometimes think that cephalopods are aliens that crash-landed on this planet eons ago.

Another article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on June 16, 2023 at 5:13 PM

Security and Human Behavior (SHB) 2023

I’m just back from the sixteenth Workshop on Security and Human Behavior, hosted by Alessandro Acquisti at Carnegie Mellon University in Pittsburgh.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The fifty or so attendees include psychologists, economists, computer security researchers, criminologists, sociologists, political scientists, designers, lawyers, philosophers, anthropologists, geographers, neuroscientists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. Short talks limit presenters’ ability to get into the boring details of their work, and the interdisciplinary audience discourages jargon.

For the past decade and a half, this workshop has been the most intellectually stimulating two days of my professional year. It influences my thinking in different and sometimes surprising ways, and has resulted in some unexpected collaborations.

And that’s what’s valuable. One of the most important outcomes of the event is new collaborations. Over the years, we have seen new interdisciplinary research between people who met at the workshop, and ideas and methodologies move from one field into another based on connections made at the workshop. This is why some of us have been coming back every year for over a decade.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is live blogging the talks. We are back 100% in person after two years of fully remote and one year of hybrid.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, fourteenth, and fifteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the sessions. Ross also maintains a good webpage of psychology and security resources.

It’s actually hard to believe that the workshop has been going on for this long, and that it’s still vibrant. We rotate between organizers, so next year is my turn in Cambridge (the Massachusetts one).

Posted on June 16, 2023 at 3:07 PM

On the Need for an AI Public Option

Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of US tech companies?

Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 US presidential election and paid a public relations firm to blame Google and George Soros instead.

These and countless other ethical lapses should prompt us to consider whether we want to give technology companies further abilities to learn our personal details and influence our day-to-day decisions. Tech companies can already access our daily whereabouts and search queries. Digital devices monitor more and more aspects of our lives: We have cameras in our homes and heartbeat sensors on our wrists sending what they detect to Silicon Valley.

Now, tech giants are developing ever more powerful AI systems that don’t merely monitor you; they actually interact with you—and with others on your behalf. If searching on Google in the 2010s was like being watched on a security camera, then using AI in the late 2020s will be like having a butler. You will willingly include it in every conversation you have, everything you write, every item you shop for, every want, every fear, everything. It will never forget. And, despite your reliance on it, it will be surreptitiously working to further the interests of one of these for-profit corporations.

There’s a reason Google, Microsoft, Facebook, and other large tech companies are leading the AI revolution: Building a competitive large language model (LLM) like the one powering ChatGPT is incredibly expensive. It requires upward of $100 million in computational costs for a single model training run, in addition to access to large amounts of data. It also requires technical expertise, which, while increasingly open and available, remains heavily concentrated in a small handful of companies. Efforts to disrupt the AI oligopoly by funding start-ups are self-defeating as Big Tech profits from the cloud computing services and AI models powering those start-ups—and often ends up acquiring the start-ups themselves.

Yet corporations aren’t the only entities large enough to absorb the cost of large-scale model training. Governments can do it, too. It’s time to start taking AI development out of the exclusive hands of private companies and bringing it into the public sector. The United States needs a government-funded-and-directed AI program to develop widely reusable models in the public interest, guided by technical expertise housed in federal agencies.

So far, the AI regulation debate in Washington has focused on the governance of private-sector activity—which the US Congress is in no hurry to advance. Congress should not only hurry up and push AI regulation forward but also go one step further and develop its own programs for AI. Legislators should reframe the AI debate from one about public regulation to one about public development.

The AI development program could be responsive to public input and subject to political oversight. It could be directed to respond to critical issues such as privacy protection, underpaid tech workers, AI’s horrendous carbon emissions, and the exploitation of unlicensed data. Compared to keeping AI in the hands of morally dubious tech companies, the public alternative is better both ethically and economically. And the switch should take place soon: By the time AI becomes critical infrastructure, essential to large swaths of economic activity and daily life, it will be too late to get started.

Other countries are already there. China has heavily prioritized public investment in AI research and development by betting on a handpicked set of giant companies that are ostensibly private but widely understood to be an extension of the state. The government has tasked Alibaba, Huawei, and others with creating products that support the larger ecosystem of state surveillance and authoritarianism.

The European Union is also aggressively pushing AI development. The European Commission already invests 1 billion euros per year in AI, with a plan to increase that figure to 20 billion euros annually by 2030. The money goes to a continent-wide network of public research labs, universities, and private companies jointly working on various parts of AI. The Europeans’ focus is on knowledge transfer, developing the technology sector, use of AI in public administration, mitigating safety risks, and preserving fundamental rights. The EU also continues to be at the cutting edge of aggressively regulating both data and AI.

Neither the Chinese nor the European model is necessarily right for the United States. State control of private enterprise remains anathema in American political culture and would struggle to gain mainstream traction. The tech companies—and their supporters in both US political parties—are opposed to robust public governance of AI. But Washington can take inspiration from China and Europe’s long-range planning and leadership on regulation and public investment. With boosters pointing to hundreds of trillions of dollars of global economic value associated with AI, the stakes of international competition are compelling. As in energy and medical research, which have their own federal agencies in the Department of Energy and the National Institutes of Health, respectively, there is a place for AI research and development inside government.

Besides the moral argument against letting private companies develop AI, there’s a strong economic argument in favor of a public option as well. A publicly funded LLM could serve as an open platform for innovation, helping any small business, nonprofit, or individual entrepreneur to build AI-assisted applications.

There’s also a practical argument. Building AI is within public reach because governments don’t need to own and operate the entire AI supply chain. Chip and computer production, cloud data centers, and various value-added applications—such as those that integrate AI with consumer electronics devices or entertainment software—do not need to be publicly controlled or funded.

One reason to be skeptical of public funding for AI is that it might result in lower-quality and slower innovation, given greater ethical scrutiny, political constraints, and fewer incentives due to a lack of market competition. But even if that is the case, it would be worth it for broader access to the most important technology of the 21st century. And it is by no means certain that public AI has to be at a disadvantage. The open-source community is proof that it’s not always private companies that are the most innovative.

Those who worry about the quality trade-off might suggest a public buyer model, whereby Washington licenses or buys private language models from Big Tech instead of developing them itself. But that doesn’t go far enough to ensure that the tools are aligned with public priorities and responsive to public needs. It would not give the public detailed insight into or control of the inner workings and training procedures for these models, and it would still require strict and complex regulation.

There is political will to take action to develop AI via public, rather than private, funds—but this does not yet equate to the will to create a fully public AI development agency. A task force created by Congress recommended in January a $2.6 billion federal investment in computing and data resources to prime the AI research ecosystem in the United States. But this investment would largely serve to advance the interests of Big Tech, leaving the opportunity for public ownership and oversight unaddressed.

Nonprofit and academic organizations have already created open-access LLMs. While these should be celebrated, they are not a substitute for a public option. Nonprofit projects are still beholden to private interests, even if they are benevolent ones. These private interests can change without public input, as when OpenAI effectively abandoned its nonprofit origins, and we can’t be sure that their founding intentions or operations will survive market pressures, fickle donors, and changes in leadership.

The US government is by no means a perfect beacon of transparency, a secure and responsible store of our data, or a genuine reflection of the public’s interests. But the risks of placing AI development entirely in the hands of demonstrably untrustworthy Silicon Valley companies are too high. AI will impact the public like few other technologies, so it should also be developed by the public.

This essay was written with Nathan Sanders, and appeared in Foreign Policy.

Posted on June 14, 2023 at 7:02 AM

Identifying the Idaho Killer

The New York Times has a long article on the investigative techniques used to identify the person who stabbed and killed four University of Idaho students.

Pay attention to the techniques:

The case has shown the degree to which law enforcement investigators have come to rely on the digital footprints that ordinary Americans leave in nearly every facet of their lives. Online shopping, car sales, carrying a cellphone, drives along city streets and amateur genealogy all played roles in an investigation that was solved, in the end, as much through technology as traditional sleuthing.

[…]

At that point, investigators decided to try genetic genealogy, a method that until now has been used primarily to solve cold cases, not active murder investigations. Among the growing number of genealogy websites that help people trace their ancestors and relatives via their own DNA, some allow users to select an option that permits law enforcement to compare crime scene DNA samples against the websites’ data.

A distant cousin who has opted into the system can help investigators build a family tree from crime scene DNA to triangulate and identify a potential perpetrator of a crime.

[…]

On Dec. 23, investigators sought and received Mr. Kohberger’s cellphone records. The results added more to their suspicions: His phone was moving around in the early morning hours of Nov. 13, but was disconnected from cell networks—perhaps turned off—in the two hours around when the killings occurred.
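
That kind of inference is straightforward to mechanize: sort the phone’s network events by time and look for silences that overlap the window of interest. Here is a toy sketch in Python, with invented timestamps standing in for real carrier records:

    # Toy sketch: find gaps in a phone's cell-network activity that overlap
    # a window of interest. The timestamps below are invented; real carrier
    # records would be parsed into the same form.
    from datetime import datetime, timedelta

    pings = [  # hypothetical tower pings, sorted by time
        datetime(2022, 11, 13, 2, 42),
        datetime(2022, 11, 13, 2, 47),
        datetime(2022, 11, 13, 4, 48),  # roughly two hours of silence before this one
    ]
    window = (datetime(2022, 11, 13, 3, 0), datetime(2022, 11, 13, 4, 30))
    threshold = timedelta(minutes=30)

    for a, b in zip(pings, pings[1:]):
        # A silence [a, b] overlaps the window if it starts before the
        # window ends and ends after the window starts.
        if b - a >= threshold and a < window[1] and b > window[0]:
            print(f"gap from {a} to {b} overlaps the window of interest")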

Posted on June 13, 2023 at 7:03 AM

AI-Generated Steganography

New research suggests that AIs can produce perfectly secure steganographic images:

Abstract: Steganography is the practice of encoding secret information into innocuous content in such a manner that an adversarial third party would not realize that there is hidden meaning. While this problem has classically been studied in security literature, recent advances in generative models have led to a shared interest among security and machine learning researchers in developing scalable steganography techniques. In this work, we show that a steganography procedure is perfectly secure under Cachin (1998)’s information-theoretic model of steganography if and only if it is induced by a coupling. Furthermore, we show that, among perfectly secure procedures, a procedure is maximally efficient if and only if it is induced by a minimum entropy coupling. These insights yield what are, to the best of our knowledge, the first steganography algorithms to achieve perfect security guarantees with non-trivial efficiency; additionally, these algorithms are highly scalable. To provide empirical validation, we compare a minimum entropy coupling-based approach to three modern baselines—arithmetic coding, Meteor, and adaptive dynamic grouping—using GPT-2, WaveRNN, and Image Transformer as communication channels. We find that the minimum entropy coupling-based approach achieves superior encoding efficiency, despite its stronger security constraints. In aggregate, these results suggest that it may be natural to view information-theoretic steganography through the lens of minimum entropy coupling.
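
To unpack the security claim: Cachin’s model measures an adversary’s ability to distinguish stegotext from innocent covertext using relative entropy. Here is a sketch of the definitions behind the abstract’s claims, in my notation rather than the paper’s:

    A stegosystem with covertext distribution $P_C$ and stegotext
    distribution $P_S$ is $\epsilon$-secure when
    \[
      D(P_C \,\|\, P_S) \;=\; \sum_x P_C(x) \log \frac{P_C(x)}{P_S(x)} \;\le\; \epsilon,
    \]
    and perfectly secure when $\epsilon = 0$, that is, when $P_S = P_C$
    exactly. The paper's characterization: perfect security holds if and
    only if the encoding is induced by a coupling of the message
    distribution $P_M$ and $P_C$ (a joint distribution $\gamma$ whose
    marginals are $P_M$ and $P_C$), and among perfectly secure procedures,
    efficiency is maximized by the minimum entropy coupling
    \[
      \gamma^\star = \operatorname*{arg\,min}_{\gamma \in \Gamma(P_M, P_C)} H(\gamma).
    \]

Intuitively, matching the covertext distribution exactly is what makes detection impossible, and minimizing the joint entropy squeezes the most message bits out of each stegotext symbol.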

News article.

EDITED TO ADD (6/13): Comments.

Posted on June 12, 2023 at 7:18 AM

Friday Squid Blogging: Light-Emitting Squid

It’s a Taningia danae:

Their arms are lined with two rows of sharp retractable hooks. And, like most deep-sea squid, they are adorned with light organs called photophores. They have some on the underside of their mantle. There are more facing upward, near one of their eyes. But it’s the photophores at the tip of two stubby arms that are truly unique. The size and shape of lemons—each nestled within a retractable lid like an eyeball in a socket—they are by far the largest photophores known to science.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on June 9, 2023 at 5:05 PM

Operation Triangulation: Zero-Click iPhone Malware

Kaspersky is reporting a zero-click iOS exploit in the wild:

Mobile device backups contain a partial copy of the filesystem, including some of the user data and service databases. The timestamps of the files, folders, and database records allow us to roughly reconstruct the events happening to the device. The mvt-ios utility produces a sorted timeline of events into a file called “timeline.csv,” similar to a super-timeline used by conventional digital forensic tools.

Using this timeline, we were able to identify specific artifacts that indicate the compromise. This allowed us to move the research forward and to reconstruct the general infection sequence:

  • The target iOS device receives a message via the iMessage service, with an attachment containing an exploit.
  • Without any user interaction, the message triggers a vulnerability that leads to code execution.
  • The code within the exploit downloads several subsequent stages from the C&C server, which include additional exploits for privilege escalation.
  • After successful exploitation, a final payload is downloaded from the C&C server; it is a fully featured APT platform.
  • The initial message and the exploit in the attachment are deleted.

The malicious toolset does not support persistence, most likely due to the limitations of the OS. The timelines of multiple devices indicate that they may be reinfected after rebooting. The oldest traces of infection that we discovered happened in 2019. As of the time of writing in June 2023, the attack is ongoing, and the most recent version of the devices successfully targeted is iOS 15.7.
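
For anyone who wants to attempt the same triage on their own device backups, mvt-ios will produce that timeline.csv file; the sketch below shows the kind of filtering one would then do over it. The column names (“UTC Timestamp,” “Event,” “Description”) are my assumption about the tool’s output format, and the window dates are placeholders:

    # Minimal sketch: print timeline.csv events that fall inside a suspected
    # infection window. Column names are assumed, not verified against a
    # particular mvt-ios version; adjust to match your output.
    import csv
    from datetime import datetime

    WINDOW = (datetime(2023, 1, 10), datetime(2023, 1, 11))  # placeholder dates

    def parse(ts):
        try:
            # Tolerate a trailing "Z" and drop any timezone before comparing.
            return datetime.fromisoformat(ts.replace("Z", "+00:00")).replace(tzinfo=None)
        except ValueError:
            return None

    with open("timeline.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            ts = parse(row.get("UTC Timestamp", ""))
            if ts and WINDOW[0] <= ts <= WINDOW[1]:
                print(ts.isoformat(), row.get("Event", ""), row.get("Description", ""))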

No attribution as of yet.

Posted on June 9, 2023 at 7:12 AM

Paragon Solutions Spyware: Graphite

Paragon Solutions is yet another Israeli spyware company. Their product is called “Graphite,” and is a lot like NSO Group’s Pegasus. And Paragon is working with what seems to be US approval:

American approval, even if indirect, has been at the heart of Paragon’s strategy. The company sought a list of allied nations that the US wouldn’t object to seeing deploy Graphite. People with knowledge of the matter suggested 35 countries are on that list, though the exact nations involved could not be determined. Most were in the EU and some in Asia, the people said.

Remember when NSO Group was banned in the US a year and a half ago? The Drug Enforcement Administration uses Graphite.

We’re never going to reduce the power of these cyberweapons arms merchants by going after them one by one. We need to deal with the whole industry. And we’re not going to do it as long as the democracies of the world use their products as well.

Posted on June 8, 2023 at 7:30 AM

How Attorneys Are Harming Cybersecurity Incident Response

New paper: “Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys”:

Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or to produce formal reports; and restrict access to any documents produced.

So, we’re not able to learn from these breaches because the attorneys are limiting what information becomes public. This is where we think about shielding companies from liability in exchange for making breach data public. It’s the sort of thing we do for airplane disasters.

EDITED TO ADD (6/13): A podcast interview with two of the authors.

Posted on June 7, 2023 at 7:06 AM

Snowden Ten Years Later

In 2013 and 2014, I wrote extensively about new revelations regarding NSA surveillance based on the documents provided by Edward Snowden. But I had a more personal involvement as well.

I wrote the essay below in September 2013. The New Yorker agreed to publish it, but the Guardian asked me not to. It was scared of UK law enforcement, and worried that this essay would reflect badly on it. And given that the UK police would raid its offices in July 2014, it had legitimate cause to be worried.

Now, ten years later, I offer this as a time capsule of what those early months of Snowden were like.


It’s a surreal experience, paging through hundreds of top-secret NSA documents. You’re peering into a forbidden world: strange, confusing, and fascinating all at the same time.

I had flown down to Rio de Janeiro in late August at the request of Glenn Greenwald. He had been working on the Edward Snowden archive for a couple of months, and had a pile of more technical documents that he wanted help interpreting. According to Greenwald, Snowden also thought that bringing me down was a good idea.

It made sense. I didn’t know either of them, but I have been writing about cryptography, security, and privacy for decades. I could decipher some of the technical language that Greenwald had difficulty with, and understand the context and importance of various documents. And I have long been publicly critical of the NSA’s eavesdropping capabilities. My knowledge and expertise could help figure out which stories needed to be reported.

I thought about it a lot before agreeing. This was before David Miranda, Greenwald’s partner, was detained at Heathrow airport by the UK authorities; but even without that, I knew there was a risk. I fly a lot—a quarter of a million miles per year—and being put on a TSA list, or being detained at the US border and having my electronics confiscated, would be a major problem. So would the FBI breaking into my home and seizing my personal electronics. But in the end, that made me more determined to do it.

I did spend some time on the phone with the attorneys recommended to me by the ACLU and the EFF. And I talked about it with my partner, especially when Miranda was detained three days before my departure. Both Greenwald and his employer, the Guardian, are careful about whom they show the documents to. They publish only those portions essential to getting the story out. It was important to them that I be a co-author, not a source. I didn’t follow the legal reasoning, but the point is that the Guardian doesn’t want to leak the documents to random people. It will, however, write stories in the public interest, and I would be allowed to review the documents as part of that process. So after a Skype conversation with someone at the Guardian, I signed a letter of engagement.

And then I flew to Brazil.

I saw only a tiny slice of the documents, and most of what I saw was surprisingly banal. The concerns of the top-secret world are largely tactical: system upgrades, operational problems owing to weather, delays because of work backlogs, and so on. I paged through weekly reports, presentation slides from status meetings, and general briefings to educate visitors. Management is management, even inside the NSA. Reading the documents, I felt as though I were sitting through some of those endless meetings.

The meeting presenters try to spice things up. Presentations regularly include intelligence success stories. There were details—what had been found, and how, and where it helped—and sometimes there were attaboys from “customers” who used the intelligence. I’m sure these are intended to remind NSA employees that they’re doing good. It definitely had an effect on me. Those were all things I want the NSA to be doing.

There were so many code names. Everything has one: every program, every piece of equipment, every piece of software. Sometimes code names had their own code names. The biggest secrets seem to be the underlying real-world information: which particular company MONEYROCKET is; what software vulnerability EGOTISTICALGIRAFFE—really, I am not making that one up—is; how TURBINE works. Those secrets collectively have a code name—ECI, for exceptionally compartmented information—and almost never appear in the documents. Chatting with Snowden on an encrypted IM connection, I joked that the NSA cafeteria menu probably has code names for menu items. His response: “Trust me when I say you have no idea.”

Those code names all come with logos, most of them amateurish and a lot of them dumb. Note to the NSA: take some of that more than ten-billion-dollar annual budget and hire yourself a design firm. Really; it’ll pay off in morale.

Once in a while, though, I would see something that made me stop, stand up, and pace around in circles. It wasn’t that what I read was particularly exciting, or important. It was just that it was startling. It changed—ever so slightly—how I thought about the world.

Greenwald said that that reaction was normal when people started reading through the documents.

Intelligence professionals talk about how disorienting it is living on the inside. You read so much classified information about the world’s geopolitical events that you start seeing the world differently. You become convinced that only the insiders know what’s really going on, because the news media is so often wrong. Your family is ignorant. Your friends are ignorant. The world is ignorant. The only thing keeping you from ignorance is that constant stream of classified knowledge. It’s hard not to feel superior, not to say things like “If you only knew what we know” all the time. I can understand how General Keith Alexander, the director of the NSA, comes across as so supercilious; I only saw a minute fraction of that secret world, and I started feeling it.

It turned out to be a terrible week to visit Greenwald, as he was still dealing with the fallout from Miranda’s detention. Two other journalists, one from the Nation and the other from the Hindu, were also in town working with him. A lot of my week involved Greenwald rushing into my hotel room, giving me a thumb drive of new stuff to look through, and rushing out again.

A technician from the Guardian got a search capability working while I was there, and I spent some time with it. Question: when you’re given the capability to search through a database of NSA secrets, what’s the first thing you look for? Answer: your name.

It wasn’t there. Neither were any of the algorithm names I knew, not even algorithms I knew that the US government used.

I tried to talk to Greenwald about his own operational security. It had been incredibly stupid for Miranda to be traveling with NSA documents on the thumb drive. Transferring files electronically is what encryption is for. I told Greenwald that he and Laura Poitras should be sending large encrypted files of dummy documents back and forth every day.

Once, at Greenwald’s home, I walked into the backyard and looked for TEMPEST receivers hiding in the trees. I didn’t find any, but that doesn’t mean they weren’t there. Greenwald has a lot of dogs, but I don’t think that would hinder professionals. I’m sure that a bunch of major governments have a complete copy of everything Greenwald has. Maybe the black bag teams bumped into each other in those early weeks.

I started doubting my own security procedures. Reading about the NSA’s hacking abilities will do that to you. Can it break the encryption on my hard drive? Probably not. Has the company that makes my encryption software deliberately weakened the implementation for it? Probably. Are NSA agents listening in on my calls back to the US? Very probably. Could agents take control of my computer over the Internet if they wanted to? Definitely. In the end, I decided to do my best and stop worrying about it. It was the agency’s documents, after all. And what I was working on would become public in a few weeks.

I wasn’t sleeping well, either. A lot of it was the sheer magnitude of what I saw. It’s not that any of it was a real surprise. Those of us in the information security community had long assumed that the NSA was doing things like this. But we never really sat down and figured out the details, and to have the details confirmed made a big difference. Maybe I can make it clearer with an analogy. Everyone knows that death is inevitable; there’s absolutely no surprise about that. Yet it arrives as a surprise, because we spend most of our lives refusing to think about it. The NSA documents were a bit like that. Knowing that it is surely true that the NSA is eavesdropping on the world, and doing it in such a methodical and robust manner, is very different from coming face-to-face with the reality that it is and the details of how it is doing it.

I also found it incredibly difficult to keep the secrets. The Guardian’s process is slow and methodical. I move much faster. I drafted stories based on what I found. Then I wrote essays about those stories, and essays about the essays. Writing was therapy; I would wake up in the wee hours of the morning, and write an essay. But that put me at least three levels beyond what was published.

Now that my involvement is out, and my first essays are out, I feel a lot better. I’m sure it will get worse again when I find another monumental revelation; there are still more documents to go through.

I’ve heard it said that Snowden wants to damage America. I can say with certainty that he does not. So far, everyone involved in this incident has been incredibly careful about what is released to the public. There are many documents that could be immensely harmful to the US, and no one has any intention of releasing them. The documents the reporters release are carefully redacted. Greenwald and I repeatedly debated with Guardian editors the newsworthiness of story ideas, stressing that we would not expose government secrets simply because they’re interesting.

The NSA got incredibly lucky; this could have ended with a massive public dump like Chelsea Manning’s State Department cables. I suppose it still could. Despite that, I can imagine how this feels to the NSA. It’s used to keeping this stuff behind multiple levels of security: gates with alarms, armed guards, safe doors, and military-grade cryptography. It’s not supposed to be on a bunch of thumb drives in Brazil, Germany, the UK, the US, and who knows where else, protected largely by some random people’s opinions about what should or should not remain secret. This is easily the greatest intelligence failure in the history of ever. It’s amazing that one person could have had so much access with so little accountability, and could sneak all of this data out without raising any alarms. The odds are close to zero that Snowden is the first person to do this; he’s just the first person to make public that he did. It’s a testament to General Alexander’s power that he hasn’t been forced to resign.

It’s not that we weren’t being careful about security, it’s that our standards of care are so different. From the NSA’s point of view, we’re all major security risks, myself included. I was taking notes about classified material, crumpling them up, and throwing them into the wastebasket. I was printing documents marked “TOP SECRET/COMINT/NOFORN” in a hotel lobby. And once, I took the wrong thumb drive with me to dinner, accidentally leaving the unencrypted one filled with top-secret documents in my hotel room. It was an honest mistake; they were both blue.

If I were an NSA employee, the policy would be to fire me for that alone.

Many have written about how being under constant surveillance changes a person. When you know you’re being watched, you censor yourself. You become less open, less spontaneous. You look at what you write on your computer and dwell on what you’ve said on the telephone, wonder how it would sound taken out of context, from the perspective of a hypothetical observer. You’re more likely to conform. You suppress your individuality. Even though I have worked in privacy for decades, and already knew a lot about the NSA and what it does, the change was palpable. That feeling hasn’t faded. I am now more careful about what I say and write. I am less trusting of communications technology. I am less trusting of the computer industry.

After much discussion, Greenwald and I agreed to write three stories together to start. All of those are still in progress. In addition, I wrote two commentaries on the Snowden documents that were recently made public. There’s a lot more to come; even Greenwald hasn’t looked through everything.

Since my trip to Brazil [one month before], I’ve flown back to the US once and domestically seven times—all without incident. I’m not on any list yet. At least, none that I know about.


As it happened, I didn’t write much more with Greenwald or the Guardian. Those two had a falling out, and by the time everything settled and both began writing about the documents independently—Greenwald at the newly formed website the Intercept—I got cut out of the process somehow. I remember hearing that Greenwald was annoyed with me, but I never learned the reason. We haven’t spoken since.

Still, I was happy with the one story I was part of: how the NSA hacks Tor. I consider it a personal success that I pushed the Guardian to publish NSA documents detailing QUANTUM. I don’t think that would have gotten out any other way. And I still use those pages today when I teach cybersecurity to policymakers at the Harvard Kennedy School.

Other people wrote about the Snowden files, and wrote a lot. It was a slow trickle at first, and then a more consistent flow. Between Greenwald, Bart Gellman, and the Guardian reporters, there ended up being a steady stream of news. (Bart brought in Ashkan Soltani to help him with the technical aspects, which was a great move on his part, even if it cost Ashkan a government job later.) More stories were covered by other publications.

It started getting weird. Both Greenwald and Gellman held documents back so they could publish them in their books. Jake Appelbaum, who had not yet been accused of sexual assault by multiple women, was working with Laura Poitras. He partnered with Spiegel to release an implant catalog from the NSA’s Tailored Access Operations group. To this day, I am convinced that that document was not in the Snowden archives: that Jake got it somehow, and it was released with the implication that it was from Edward Snowden. I thought it was important enough that I started writing about each item in that document in my blog: “NSA Exploit of the Week.” That got my website blocked by the DoD: I keep a framed print of the censor’s message on my wall.

Perhaps the most surreal document disclosures were when artists started writing fiction based on the documents. This was in 2016, when Poitras built a secure room in New York to house the documents. By then, the documents were years out of date. And now they’re over a decade out of date. (They were leaked in 2013, but most of them were from 2012 or before.)

I ended up being something of a public ambassador for the documents. When I got back from Rio, I gave talks at a private conference in Woods Hole, the Berkman Center at Harvard, something called the Congress on Privacy and Surveillance in Geneva, events at both CATO and New America in DC, an event at the University of Pennsylvania, an event at EPIC and a “Stop Watching Us” rally in DC, the RISCS conference in London, the ISF in Paris, and…then…at the IETF meeting in Vancouver in November 2013. (I remember little of this; I am reconstructing it all from my calendar.)

What struck me at the IETF was the indignation in the room, and the calls to action. And there was action, across many fronts. We technologists did a lot to help secure the Internet, for example.

The government didn’t do its part, though. Despite the public outcry, investigations by Congress, pronouncements by President Obama, and federal court rulings, I don’t think much has changed. The NSA canceled a program here and a program there, and it is now more public about defense. But I don’t think it is any less aggressive about either bulk or targeted surveillance. Certainly its government authorities haven’t been restricted in any way. And surveillance capitalism is still the business model of the Internet.

And Edward Snowden? We were in contact for a while on Signal. I visited him once in Moscow, in 2016. And I had him do a guest lecture to my class at Harvard for a few years, remotely by Jitsi. Afterwards, I would hold a session where I promised to answer every question he would evade or not answer, explain every response he did give, and be candid in a way that someone with an outstanding arrest warrant simply cannot. Sometimes I thought I could channel Snowden better than he could.

But now it’s been a decade. Everything he knows is old and out of date. Everything we know is old and out of date. The NSA suffered an even worse leak of its secrets by the Russians, under the guise of the Shadow Brokers, in 2016 and 2017. The NSA has rebuilt. It again has capabilities we can only surmise.

This essay previously appeared in an IETF publication, as part of an Edward Snowden ten-year retrospective.

EDITED TO ADD (6/7): Conversation between Snowden, Greenwald, and Poitras.

Posted on June 6, 2023 at 7:17 AM
