June 15, 2020
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- On Marcus Hutchins
- Ramsay Malware
- AI and Cybersecurity
- Criminals and the Normalization of Masks
- Bart Gellman on Snowden
- Ann Mitchell, Bletchley Park Cryptanalyst, Dies
- Bluetooth Vulnerability: BIAS
- Websites Conducting Port Scans
- Thermal Imaging as Security Theater
- Facebook Announces Messenger Security Features that Don’t Compromise Privacy
- Bogus Security Technology: An Anti-5G USB Stick
- Password Changing After a Breach
- “Sign in with Apple” Vulnerability
- Wallpaper that Crashes Android Phones
- Zoom’s Commitment to User Security Depends on Whether You Pay It or Not
- New Research: “Privacy Threats in Intimate Relationships”
- Phishing Attacks against Trump and Biden Campaigns
- Gene Spafford on Internet Voting
- Security Analysis of the Democracy Live Online Voting System
- Availability Attacks against Neural Networks
- Another Intel Speculative Execution Vulnerability
- Facebook Helped Develop a Tails Exploit
On Marcus Hutchins
[2020.05.15] Long and nuanced story about Marcus Hutchins, the British hacker who wrote most of the Kronos malware and also stopped WannaCry in real time. Well worth reading.
Ramsay Malware
[2020.05.18] A new malware, called Ramsay, can jump air gaps:
ESET said they’ve been able to track down three different versions of the Ramsay malware, one compiled in September 2019 (Ramsay v1), and two others in early and late March 2020 (Ramsay v2.a and v2.b).
Each version was different and infected victims through different methods, but at its core, the malware’s primary role was to scan an infected computer, and gather Word, PDF, and ZIP documents in a hidden storage folder, ready to be exfiltrated at a later date.
Other versions also included a spreader module that appended copies of the Ramsay malware to all PE (portable executable) files found on removable drives and network shares. This is believed to be the mechanism the malware was employing to jump the air gap and reach isolated networks, as users would most likely move the infected executables between the company’s different network layers and eventually end up on an isolated system.
ESET says that during its research, it was not able to positively identify Ramsay’s exfiltration module, or determine how the Ramsay operators retrieved data from air-gapped systems.
Honestly, I can’t think of any threat actor that wants this kind of feature other than governments:
The researcher has not made a formal attribution as to who might be behind Ramsay. However, Sanmillan said that the malware contained a large number of shared artifacts with Retro, a malware strain previously developed by DarkHotel, a hacker group that many believe to operate in the interests of the South Korean government.
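The spreader behavior ESET describes suggests a simple defensive check on removable drives and shares: baseline the hashes of executable files and flag any that later change. This is a hypothetical detection heuristic, not ESET’s actual logic, and all names in it are illustrative:

```python
import hashlib
import os

def hash_pe_files(root):
    """Return {path: sha256} for files that start with the 'MZ' DOS magic."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue
            if data[:2] == b"MZ":  # PE files begin with the DOS 'MZ' header
                digests[path] = hashlib.sha256(data).hexdigest()
    return digests

def changed_pe_files(baseline, current):
    """Paths whose hash differs from the baseline -- candidates for inspection."""
    return sorted(p for p, h in current.items() if baseline.get(p) not in (None, h))
```

Run `hash_pe_files` against a clean drive, store the result, and rerun it periodically; a file-infecting spreader like Ramsay’s would show up as a changed hash.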
AI and Cybersecurity
[2020.05.19] Ben Buchanan has written “A National Security Research Agenda for Cybersecurity and Artificial Intelligence.” It’s really good—well worth reading.
Criminals and the Normalization of Masks
[2020.05.20] I was wondering about this:
Masks that have made criminals stand apart long before bandanna-wearing robbers knocked over stagecoaches in the Old West and ski-masked bandits held up banks now allow them to blend in like concerned accountants, nurses and store clerks trying to avoid a deadly virus.
“Criminals, they’re smart and this is a perfect opportunity for them to conceal themselves and blend right in,” said Richard Bell, police chief in the tiny Pennsylvania community of Frackville. He said he knows of seven recent armed robberies in the region where every suspect wore a mask.
Just how many criminals are taking advantage of the pandemic to commit crimes is impossible to estimate, but law enforcement officials have no doubt the numbers are climbing. Reports are starting to pop up across the United States and in other parts of the world of crimes pulled off in no small part because so many of us are now wearing masks.
In March, two men walked into Aqueduct Racetrack in New York wearing the same kind of surgical masks as many racing fans there and, at gunpoint, robbed three workers of a quarter-million dollars they were moving from gaming machines to a safe. Other robberies involving suspects wearing surgical masks have occurred in North Carolina, Washington, D.C., and elsewhere in recent weeks.
The article is all anecdote and no real data. But this is probably a trend.
Bart Gellman on Snowden
[2020.05.20] Bart Gellman’s long-awaited (at least by me) book on Edward Snowden, Dark Mirror: Edward Snowden and the American Surveillance State, will finally be published in a couple of weeks. There is an adapted excerpt in the Atlantic.
It’s an interesting read, mostly about the government surveillance of him and other journalists. He speaks about an NSA program called FIRSTFRUITS that specifically spies on US journalists. (This isn’t news; we learned about this in 2006. But there are lots of new details.)
One paragraph in the excerpt struck me:
Years later Richard Ledgett, who oversaw the NSA’s media-leaks task force and went on to become the agency’s deputy director, told me matter-of-factly to assume that my defenses had been breached. “My take is, whatever you guys had was pretty immediately in the hands of any foreign intelligence service that wanted it,” he said, “whether it was Russians, Chinese, French, the Israelis, the Brits. Between you, Poitras, and Greenwald, pretty sure you guys can’t stand up to a full-fledged nation-state attempt to exploit your IT. To include not just remote stuff, but hands-on, sneak-into-your-house-at-night kind of stuff. That’s my guess.”
I remember thinking the same thing. It was the summer of 2013, and I was visiting Glenn Greenwald in Rio de Janeiro. This was just after Greenwald’s partner was detained in the UK trying to ferry some documents from Laura Poitras in Berlin back to Greenwald. It was an opsec disaster; they would have been much more secure if they’d emailed the encrypted files. In fact, I told them to do that, every single day. I wanted them to send encrypted random junk back and forth constantly, to hide when they were actually sharing real data.
As soon as I saw their house I realized exactly what Ledgett said. I remember standing outside the house, looking into the dense forest for TEMPEST receivers. I didn’t see any, which only told me they were well hidden. I guessed that black-bag teams from various countries had already been all over the house when they were out for dinner, and wondered what would have happened if teams from different countries bumped into each other. I assumed that all the countries Ledgett listed above—plus the US and a few more—had a full take of what Snowden gave the journalists. These journalists against those governments just wasn’t a fair fight.
I’m looking forward to reading Gellman’s book. I’m kind of surprised no one sent me an advance copy.
Ann Mitchell, Bletchley Park Cryptanalyst, Dies
Bluetooth Vulnerability: BIAS
[2020.05.26] This is new research on a Bluetooth vulnerability (called BIAS) that allows someone to impersonate a trusted device:
Abstract: Bluetooth (BR/EDR) is a pervasive technology for wireless communication used by billions of devices. The Bluetooth standard includes a legacy authentication procedure and a secure authentication procedure, allowing devices to authenticate to each other using a long term key. Those procedures are used during pairing and secure connection establishment to prevent impersonation attacks. In this paper, we show that the Bluetooth specification contains vulnerabilities enabling to perform impersonation attacks during secure connection establishment. Such vulnerabilities include the lack of mandatory mutual authentication, overly permissive role switching, and an authentication procedure downgrade. We describe each vulnerability in detail, and we exploit them to design, implement, and evaluate master and slave impersonation attacks on both the legacy authentication procedure and the secure authentication procedure. We refer to our attacks as Bluetooth Impersonation AttackS (BIAS).
Our attacks are standard compliant, and are therefore effective against any standard compliant Bluetooth device regardless the Bluetooth version, the security mode (e.g., Secure Connections), the device manufacturer, and the implementation details. Our attacks are stealthy because the Bluetooth standard does not require to notify end users about the outcome of an authentication procedure, or the lack of mutual authentication. To confirm that the BIAS attacks are practical, we successfully conduct them against 31 Bluetooth devices (28 unique Bluetooth chips) from major hardware and software vendors, implementing all the major Bluetooth versions, including Apple, Qualcomm, Intel, Cypress, Broadcom, Samsung, and CSR.
Websites Conducting Port Scans
[2020.05.27] Security researcher Charlie Belmer is reporting that commercial websites such as eBay are conducting port scans of their visitors.
Looking at the list of ports they are scanning, they are looking for VNC services being run on the host, which is the same thing that was reported for bank sites. I marked out the ports and what they are known for (with a few blanks for ones I am unfamiliar with):
- 5900: VNC
- 5901: VNC port 2
- 5902: VNC port 3
- 5903: VNC port 4
- 3389: Windows remote desktop / RDP
- 5931: Ammy Admin remote desktop
- 5950: WinVNC
- 6039: X window system
- 6040: X window system
- 63333: TrippLite power alert UPS
- 7070: RealAudio
No one seems to know why:
I could not believe my eyes, but it was quickly reproduced by me (see below for my observation).
I surfed around to several sites, and found one more that does this (the citibank site, see below for my observation)
I further see, at least across ebay.com and citibank.com the same ports, in the same sequence getting scanned. That implies there may be a library in use across both sites that is doing this. (I have not debugged into the matter so far.)
- Is this port scanning “a thing” built into some standard fingerprinting or security library? (if so, which?)
- Is there a plugin for firefox that can block such behavior? (or can such blocking be added to an existing plugin)?
I’m curious, too.
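What the in-page JavaScript reportedly does -- timing WebSocket connection attempts to 127.0.0.1 -- is functionally the same check as a local TCP connect scan. Here is the equivalent logic sketched in Python; the port list is taken from the post above, and the code is illustrative, not the sites’ actual script:

```python
import socket

# Ports from the scan list reported above.
PORTS = [5900, 5901, 5902, 5903, 3389, 5931, 5950, 6039, 6040, 63333, 7070]

def scan_localhost(ports, timeout=0.25):
    """Return the subset of ports accepting TCP connections on 127.0.0.1.

    A closed port refuses the connection almost instantly; an open port
    completes the handshake. Browser scripts infer the same open/closed
    distinction from how quickly a WebSocket attempt errors out.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex(("127.0.0.1", port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```

The notable part is that a webpage can do this at all: same-origin policy restricts reading responses, not initiating connections, so connection timing leaks which local services are running.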
Thermal Imaging as Security Theater
[2020.05.28] Seems like thermal imaging is the security theater technology of today.
These features are so tempting that thermal cameras are being installed at an increasing pace. They’re used in airports and other public transportation centers to screen travelers, increasingly used by companies to screen employees and by businesses to screen customers, and even used in health care facilities to screen patients. Despite their prevalence, thermal cameras have many fatal limitations when used to screen for the coronavirus.
- They are not intended for medical purposes.
- Their accuracy can be reduced by their distance from the people being inspected.
- They are “an imprecise method for scanning crowds” now put into a context where precision is critical.
- They will create false positives, leaving people stigmatized, harassed, unfairly quarantined, and denied rightful opportunities to work, travel, shop, or seek medical help.
- They will create false negatives, which, perhaps most significantly for public health purposes, “could miss many of the up to one-quarter or more people infected with the virus who do not exhibit symptoms,” as the New York Times recently put it. Thus they will abjectly fail at the core task of slowing or preventing the further spread of the virus.
Facebook Announces Messenger Security Features that Don’t Compromise Privacy
[2020.05.29] Note that this is “announced,” so we don’t know when it’s actually going to be implemented.
Facebook today announced new features for Messenger that will alert you when messages appear to come from financial scammers or potential child abusers, displaying warnings in the Messenger app that provide tips and suggest you block the offenders. The feature, which Facebook started rolling out on Android in March and is now bringing to iOS, uses machine learning analysis of communications across Facebook Messenger’s billion-plus users to identify shady behaviors. But crucially, Facebook says that the detection will occur only based on metadata—not analysis of the content of messages—so that it doesn’t undermine the end-to-end encryption that Messenger offers in its Secret Conversations feature. Facebook has said it will eventually roll out that end-to-end encryption to all Messenger chats by default.
That default Messenger encryption will take years to implement.
Facebook hasn’t revealed many details about how its machine-learning abuse detection tricks will work. But a Facebook spokesperson tells WIRED the detection mechanisms are based on metadata alone: who is talking to whom, when they send messages, with what frequency, and other attributes of the relevant accounts—essentially everything other than the content of communications, which Facebook’s servers can’t access when those messages are encrypted. “We can get pretty good signals that we can develop through machine learning models, which will obviously improve over time,” a Facebook spokesperson told WIRED in a phone call. They declined to share more details in part because the company says it doesn’t want to inadvertently help bad actors circumvent its safeguards.
The company’s blog post offers the example of an adult sending messages or friend requests to a large number of minors as one case where its behavioral detection mechanisms can spot a likely abuser. In other cases, Facebook says, it will weigh a lack of connections between two people’s social graphs—a sign that they don’t know each other—or consider previous instances where users reported or blocked someone as a clue that they’re up to something shady.
One screenshot from Facebook, for instance, shows an alert that asks if a message recipient knows a potential scammer. If they say no, the alert suggests blocking the sender, and offers tips about never sending money to a stranger. In another example, the app detects that someone is using a name and profile photo to impersonate the recipient’s friend. An alert then shows the impersonator’s and real friend’s profiles side-by-side, suggesting that the user block the fraudster.
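Facebook hasn’t published its models, but the adult-contacting-many-minors signal described above can be illustrated with a toy metadata-only heuristic. The record fields and threshold here are hypothetical, not Facebook’s:

```python
from collections import defaultdict

# Hypothetical event records: (sender, recipient, recipient_is_minor, are_connected).
# The threshold is illustrative; Facebook has not published its actual models.
MINOR_CONTACT_THRESHOLD = 3

def flag_suspicious_senders(events):
    """Flag senders who contact many distinct minors they share no connection with.

    Uses only metadata (who contacts whom, and the social-graph link between
    them), never message content -- mirroring the metadata-only detection
    described above, which is compatible with end-to-end encryption.
    """
    minors_contacted = defaultdict(set)
    for sender, recipient, is_minor, connected in events:
        if is_minor and not connected:
            minors_contacted[sender].add(recipient)
    return sorted(s for s, minors in minors_contacted.items()
                  if len(minors) >= MINOR_CONTACT_THRESHOLD)
```

The point of the sketch is the design constraint: every input is available to the server even when message contents are encrypted end-to-end.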
Details from Facebook
Bogus Security Technology: An Anti-5G USB Stick
[2020.05.29] The 5GBioShield sells for £339.60, and the description sounds like snake oil:
…its website, which describes it as a USB key that “provides protection for your home and family, thanks to the wearable holographic nano-layer catalyser, which can be worn or placed near to a smartphone or any other electrical, radiation or EMF [electromagnetic field] emitting device”.
“Through a process of quantum oscillation, the 5GBioShield USB key balances and re-harmonises the disturbing frequencies arising from the electric fog induced by devices, such as laptops, cordless phones, wi-fi, tablets, et cetera,” it adds.
Turns out that it’s just a regular USB stick.
Password Changing After a Breach
[2020.06.01] This study shows that most people don’t change their passwords after a breach, and those who do often change it to a weaker password.
Abstract: To protect against misuse of passwords compromised in a breach, consumers should promptly change affected passwords and any similar passwords on other accounts. Ideally, affected companies should strongly encourage this behavior and have mechanisms in place to mitigate harm. In order to make recommendations to companies about how to help their users perform these and other security-enhancing actions after breaches, we must first have some understanding of the current effectiveness of companies’ post-breach practices. To study the effectiveness of password-related breach notifications and practices enforced after a breach, we examine—based on real-world password data from 249 participants—whether and how constructively participants changed their passwords after a breach announcement.
Of the 249 participants, 63 had accounts on breached domains; only 33% of the 63 changed their passwords, and only 13% (of 63) did so within three months of the announcement. New passwords were on average 1.3× stronger than old passwords (when comparing log10-transformed strength), though most were weaker or of equal strength. Concerningly, new passwords were overall more similar to participants’ other passwords, and participants rarely changed passwords on other sites even when these were the same or similar to their password on the breached domain. Our results highlight the need for more rigorous password-changing requirements following a breach and more effective breach notifications that deliver comprehensive advice.
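The study’s two measurements -- log10-transformed strength and the similarity between old and new passwords -- can be approximated with a toy estimator. Real analyses use modeled guess numbers rather than this crude charset-based proxy, which is only illustrative:

```python
import math
import string
from difflib import SequenceMatcher

def log10_strength(password):
    """Crude strength proxy: log10 of the naive search space (charset ** length).

    The study above uses modeled guess numbers; this charset estimate is only
    a stand-in for illustration.
    """
    charset = 0
    if any(c in string.ascii_lowercase for c in password): charset += 26
    if any(c in string.ascii_uppercase for c in password): charset += 26
    if any(c in string.digits for c in password): charset += 10
    if any(c in string.punctuation for c in password): charset += 32
    return len(password) * math.log10(charset) if charset else 0.0

def compare_change(old, new):
    """Return (strength ratio, similarity in [0, 1]) for a password change.

    A ratio above 1 means the new password is stronger under the proxy;
    similarity near 1 means the 'new' password barely changed.
    """
    ratio = log10_strength(new) / log10_strength(old)
    similarity = SequenceMatcher(None, old, new).ratio()
    return ratio, similarity
```

A change like "password1" to "password2" scores a ratio of exactly 1 with very high similarity: the kind of cosmetic change the study found to be common.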
EDITED TO ADD (6/2): Another news article. Slashdot thread.
“Sign in with Apple” Vulnerability
[2020.06.02] Researcher Bhavuk Jain discovered a vulnerability in the “Sign in with Apple” feature, and received a $100,000 bug bounty from Apple. Basically, forged tokens could gain access to pretty much any account.
It is fixed.
EDITED TO ADD (6/2): Another story.
Wallpaper that Crashes Android Phones
[2020.06.03] This is interesting:
The image, a seemingly innocuous sunset (or dawn) sky above placid waters, may be viewed without harm. But if loaded as wallpaper, the phone will crash.
The fault does not appear to have been maliciously created. Rather, according to developers following Ice Universe’s Twitter thread, the problem lies in the way color space is handled by the Android OS.
The image was created using the RGB color space to display image hues, while Android 10 uses the sRGB color space protocol, according to 9to5Google contributor Dylan Roussel. When the Android phone cannot properly convert the Adobe RGB image, it crashes.
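Public analyses of the crash described an unclamped value after the conversion: Android builds a 256-bucket luminance histogram, and for one pixel the converted channels produced a luminance that rounded up to 256, indexing past the end of the array. A minimal Python reconstruction of that failure mode (illustrative only, not Android’s actual code):

```python
def luminance_bucket(r, g, b, clamp=True):
    """Map float RGB channels (post color-space conversion) to a 0-255 bucket.

    Uses Rec. 709 luma weights. With clamp=False, slightly out-of-range
    converted channels can round to 256 and overflow a 256-entry histogram,
    which is the reported failure mode. Illustration only.
    """
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    bucket = int(y + 0.5)  # round to the nearest integer bucket
    return min(bucket, 255) if clamp else bucket

def histogram_count(pixels, clamp=True):
    """Build a 256-bucket luminance histogram; overflows raise IndexError."""
    hist = [0] * 256
    for r, g, b in pixels:
        hist[luminance_bucket(r, g, b, clamp)] += 1
    return hist
```

The fix Google shipped amounts to the `clamp=True` branch: saturate the converted value before using it as an index.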
Zoom’s Commitment to User Security Depends on Whether You Pay It or Not
[2020.06.04] Zoom was doing so well…. And now we have this:
Corporate clients will get access to Zoom’s end-to-end encryption service now being developed, but Yuan said free users won’t enjoy that level of privacy, which makes it impossible for third parties to decipher communications.
“Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose,” Yuan said on the call.
This is just dumb. Imagine the scene in the terrorist/drug kingpin/money launderer hideout: “I’m sorry, boss. We could have had strong encryption to secure our bad intentions from the FBI, but we can’t afford the $20.” This decision will only affect protesters and dissidents and human rights workers and journalists.
Here’s advisor Alex Stamos doing damage control:
Nico, it’s incorrect to say that free calls won’t be encrypted and this turns out to be a really difficult balancing act between different kinds of harms. More details here:
Some facts on Zoom’s current plans for E2E encryption, which are complicated by the product requirements for an enterprise conferencing product and some legitimate safety issues. The E2E design is available here: https://github.com/zoom/zoom-e2e-whitepaper/blob/master/zoom_e2e.pdf
I read that document, and it doesn’t explain why end-to-end encryption is only available to paying customers. And note that Stamos said “encrypted” and not “end-to-end encrypted.” He knows the difference.
Anyway, people were rightly incensed by his remarks. And yesterday, Yuan tried to clarify:
Yuan sought to assuage users’ concerns Wednesday in his weekly webinar, saying the company was striving to “do the right thing” for vulnerable groups, including children and hate-crime victims, whose abuse is sometimes broadcast through Zoom’s platform.
“We plan to provide end-to-end encryption to users for whom we can verify identity, thereby limiting harm to vulnerable groups,” he said. “I wanted to clarify that Zoom does not monitor meeting content. We do not have backdoors where participants, including Zoom employees or law enforcement, can enter meetings without being visible to others. None of this will change.”
Notice that he specifically did not say that he was offering end-to-end encryption to users of the free platform. Only to “users we can verify identity,” which I’m guessing means users who give him a credit card number.
The Twitter feed was similarly sloppily evasive:
We are seeing some misunderstandings on Twitter today around our encryption. We want to provide these facts.
Zoom does not provide information to law enforcement except in circumstances such as child sexual abuse.
Zoom does not proactively monitor meeting content.
Zoom does not have backdoors where Zoom or others can enter meetings without being visible to participants.
AES 256 GCM encryption is turned on for all Zoom users—free and paid.
Those facts have nothing to do with any “misunderstanding.” That was about end-to-end encryption, which the statement very specifically left out of that last sentence. The corporate communications have been clear and consistent.
Come on, Zoom. You were doing so well. Of course you should offer premium features to paying customers, but please don’t include security and privacy in those premium features. They should be available to everyone.
And, hey, this is kind of a dumb time to side with the police over protesters.
I have emailed the CEO, and will report back if I hear back. But for now, assume that the free version of Zoom will not support end-to-end encryption.
EDITED TO ADD (6/4): Another article.
EDITED TO ADD (6/4): I understand that this is complicated, both technically and politically. (Note, though, that Jitsi is doing it.) And, yes, lots of people confused end-to-end encryption with link encryption. (My readers tend to be more sophisticated than that.) My worry is that the “we’ll offer end-to-end encryption only to paying customers we can verify, even though there’s plenty of evidence that ‘bad purpose’ people will just get paid accounts” story plays into the dangerous narrative that encryption itself is dangerous when widely available. And I disagree with the notion that the possibility of child exploitation is a valid reason to deny security to large groups of people.
Matthew Green on this issue. An excerpt:
Once the precedent is set that E2E encryption is too “dangerous” to hand to the masses, the genie is out of the bottle. And once corporate America accepts that private communications are too politically risky to deploy, it’s going to be hard to put it back.
Want to help us work on end-to-end encrypted group video calling functionality that will be free for everyone? Zoom on over to our careers page….
New Research: “Privacy Threats in Intimate Relationships”
[2020.06.05] I just published a new paper with Karen Levy of Cornell: “Privacy Threats in Intimate Relationships.”
Abstract: This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats. Those closest to us know the answers to our secret questions, have access to our devices, and can exercise coercive power over us. We survey a range of intimate relationships and describe their common features. Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.
This is an important issue that has gotten much too little attention in the cybersecurity community.
Phishing Attacks against Trump and Biden Campaigns
[2020.06.08] Google’s threat analysts have identified state-level attacks from China.
I hope both campaigns are working under the assumption that everything they say and do will be dumped on the Internet before the election. That feels like the most likely outcome.
Gene Spafford on Internet Voting
[2020.06.08] Good interview.
Security Analysis of the Democracy Live Online Voting System
[2020.06.09] New research: “Security Analysis of the Democracy Live Online Voting System”:
Abstract: Democracy Live’s OmniBallot platform is a web-based system for blank ballot delivery, ballot marking, and (optionally) online voting. Three states—Delaware, West Virginia, and New Jersey—recently announced that they will allow certain voters to cast votes online using OmniBallot, but, despite the well established risks of Internet voting, the system has never been the subject of a public, independent security review.
Availability Attacks against Neural Networks
[2020.06.10] New research on using specially crafted inputs to slow down machine-learning neural network systems:
Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief—from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.
So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down over 100 times. There are already examples in the real world where people pause or stumble when asked hard questions but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks, and make them operate on their worst case performance.
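The NLP slowdown can be sketched with a toy cost model: a tokenizer that splits out-of-vocabulary words into per-character tokens, feeding a stage whose cost grows quadratically in token count, as self-attention does. Natural text tokenizes compactly; gibberish of similar length explodes the token count and hence the simulated cost. Everything here is illustrative, not the paper’s actual method:

```python
# Toy sponge-example sketch: a cost model only, no real neural network.
VOCAB = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog",
         "a", "an", "and", "is", "it", "to", "of"}

def tokenize(text):
    """Known words become one token; unknown words fall back to one token per
    character, the way subword tokenizers degrade on out-of-vocabulary input."""
    tokens = []
    for word in text.lower().split():
        if word in VOCAB:
            tokens.append(word)
        else:
            tokens.extend(word)  # one token per character
    return tokens

def attention_cost(text):
    """Simulated compute cost: quadratic in sequence length, like self-attention."""
    n = len(tokenize(text))
    return n * n
```

Scrambling each word of a nine-word sentence into gibberish turns 9 tokens into 35, and the quadratic stage’s simulated cost grows roughly fifteen-fold, which is the flavor of the 100× slowdowns the authors report against real NLP systems.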
Another Intel Speculative Execution Vulnerability
[2020.06.11] Remember Spectre and Meltdown? Back in early 2018, I wrote:
Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they—and the research into the Intel ME vulnerability—have shown researchers where to look, more is coming—and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.
That has turned out to be true. Here’s a new vulnerability:
On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel’s Software Guard eXtension, by far the most sensitive region of the company’s processors.
The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same. An attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that’s a tall bar, it’s precisely the scenario that SGX is supposed to defend against.
Another news article.
Facebook Helped Develop a Tails Exploit
[2020.06.12] This is a weird story:
Hernandez was able to evade capture for so long because he used Tails, a version of Linux designed for users at high risk of surveillance and which routes all inbound and outbound connections through the open-source Tor network to anonymize it. According to Vice, the FBI had tried to hack into Hernandez’s computer but failed, as the approach they used “was not tailored for Tails.” Hernandez then proceeded to mock the FBI in subsequent messages, two Facebook employees told Vice.
Facebook had tasked a dedicated employee to unmasking Hernandez, developed an automated system to flag recently created accounts that messaged minors, and made catching Hernandez a priority for its security teams, according to Vice. They also paid a third party contractor “six figures” to help develop a zero-day exploit in Tails: a bug in its video player that enabled them to retrieve the real I.P. address of a person viewing a clip. Three sources told Vice that an intermediary passed the tool onto the FBI, who then obtained a search warrant to have one of the victims send a modified video file to Hernandez (a tactic the agency has used before).
Facebook also never notified the Tails team of the flaw—breaking with a long industry tradition of disclosure in which the relevant developers are notified of vulnerabilities in advance of them becoming public so they have a chance at implementing a fix. Sources told Vice that since an upcoming Tails update was slated to strip the vulnerable code, Facebook didn’t bother to do so, though the social media company had no reason to believe Tails developers had ever discovered the bug.
“The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls,” a Facebook spokesperson told Vice. “This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice.”
I agree with that last paragraph. I’m fine with the FBI using vulnerabilities: lawful hacking, it’s called. I’m less okay with Facebook paying for a Tails exploit, giving it to the FBI, and then keeping its existence secret.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.
Copyright © 2020 by Bruce Schneier.