## Blog: July 2017 Archives

### Robot Safecracking

Robots can crack safes faster than humans—and differently:

So Seidle started looking for shortcuts. First he found that, like many safes, his SentrySafe had some tolerance for error. If the combination includes a 12, for instance, 11 or 13 would work, too. That simple convenience measure meant his bot could try every third number instead of every single number, immediately paring down the total test time to just over four days. Seidle also realized that the bot didn’t actually need to return the dial to its original position before trying every combination. By making attempts in a certain careful order, it could keep two of the three rotors in place, while trying new numbers on just the last, vastly cutting the time to try new combinations to a maximum of four seconds per try. That reduced the maximum brute-forcing time to about one day and 16 hours, or under a day on average.

But Seidle found one more clever trick, this time taking advantage of a design quirk in the safe intended to prevent traditional safecracking. Because the safe has a rod that slips into slots in the three rotors when they’re aligned to the combination’s numbers, a human safecracker can apply light pressure to the safe’s handle, turn its dial, and listen or feel for the moment when that rod slips into those slots. To block that technique, the third rotor of Seidle’s SentrySafe is indented with twelve notches that catch the rod if someone turns the dial while pulling the handle.

Seidle took apart the safe he and his wife had owned for years, and measured those twelve notches. To his surprise, he discovered the one that contained the slot for the correct combination was about a hundredth of an inch narrower than the other eleven. That’s not a difference any human can feel or listen for, but his robot can easily detect it with a few automated measurements that take seconds. That discovery defeated an entire rotor’s worth of combinations, dividing the possible solutions by a factor of 33, and reducing the total cracking time to the robot’s current hour-and-13-minute maximum.
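The arithmetic behind those reductions is easy to check. A quick sketch, assuming the usual round numbers (a 100-position dial, three rotors, and the four seconds per attempt quoted above):

```python
# Back-of-the-envelope check of the search-space reductions described above.
# Assumptions: a 100-position dial, 3 rotors, 4 seconds per attempt once the
# bot only has to move the last rotor between tries.

DIAL_POSITIONS = 100
ROTORS = 3
SECONDS_PER_TRY = 4

# A +/-1 tolerance means trying every third number suffices: ~33 values per rotor.
values_per_rotor = DIAL_POSITIONS // 3  # 33

naive_space = DIAL_POSITIONS ** ROTORS       # 1,000,000 combinations
reduced_space = values_per_rotor ** ROTORS   # 35,937 combinations

hours = reduced_space * SECONDS_PER_TRY / 3600
print(f"worst case after tolerance + ordering tricks: {hours:.1f} hours")
# ~39.9 hours, i.e. about one day and 16 hours

# Measuring the notches reveals the third rotor's value outright,
# dividing the remaining space by another factor of 33.
final_space = values_per_rotor ** (ROTORS - 1)  # 1,089 combinations
minutes = final_space * SECONDS_PER_TRY / 60
print(f"worst case after the notch trick: {minutes:.0f} minutes")
# ~73 minutes, matching the hour-and-13-minute maximum
```

The numbers line up with the article almost exactly, which is a good sign the reporting got the math right.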

We’re going to have to start thinking about robot adversaries as we design our security systems.

### Measuring Vulnerability Rediscovery

New paper: “Taking Stock: Estimating Vulnerability Rediscovery,” by Trey Herr, Bruce Schneier, and Christopher Morris:

Abstract: How often do multiple, independent, parties discover the same vulnerability? There are ample models of vulnerability discovery, but little academic work on this issue of rediscovery. The immature state of this research and subsequent debate is a problem for the policy community, where the government’s decision to disclose a given vulnerability hinges in part on that vulnerability’s likelihood of being discovered and used maliciously by another party. Research into the behavior of malicious software markets and the efficacy of bug bounty programs would similarly benefit from an accurate baseline estimate for how often vulnerabilities are discovered by multiple independent parties.

This paper presents a new dataset of more than 4,300 vulnerabilities, and estimates vulnerability rediscovery across different vendors and software types. It concludes that rediscovery happens more than twice as often as the 1-9% range previously reported. For our dataset, 15% to 20% of vulnerabilities are discovered independently at least twice within a year. For just Android, 13.9% of vulnerabilities are rediscovered within 60 days, rising to 20% within 90 days, and above 21% within 120 days. For the Chrome browser we found 12.57% rediscovery within 60 days; and the aggregate rate for our entire dataset generally rises over the eight-year span, topping out at 19.6% in 2016. We believe that the actual rate is even higher for certain types of software.

When combined with an estimate of the total count of vulnerabilities in use by the NSA, these rates suggest that rediscovery of vulnerabilities kept secret by the U.S. government may be the source of up to one-third of all zero-day vulnerabilities detected in use each year. These results indicate that the information security community needs to map the impact of rediscovery on the efficacy of bug bounty programs and policymakers should more rigorously evaluate the costs of non-disclosure of software vulnerabilities.

We wrote a blog post on the paper, and another when we issued a revised version.

Comments on the original paper by Dave Aitel. News articles.

### Friday Squid Blogging: Giant Squids Have Small Brains

New research:

In this study, the optic lobe of a giant squid (Architeuthis dux, male, mantle length 89 cm), which was caught by local fishermen off the northeastern coast of Taiwan, was scanned using high-resolution magnetic resonance imaging in order to examine its internal structure. It was evident that the volume ratio of the optic lobe to the eye in the giant squid is much smaller than that in the oval squid (Sepioteuthis lessoniana) and the cuttlefish (Sepia pharaonis). Furthermore, the cell density in the cortex of the optic lobe is significantly higher in the giant squid than in oval squids and cuttlefish, with the relative thickness of the cortex being much larger in Architeuthis optic lobe than in cuttlefish. This indicates that the relative size of the medulla of the optic lobe in the giant squid is disproportionally smaller compared with these two cephalopod species.

From the New York Times:

A recent, lucky opportunity to study part of a giant squid brain up close in Taiwan suggests that, compared with cephalopods that live in shallow waters, giant squids have a small optic lobe relative to their eye size.

Furthermore, the region in their optic lobes that integrates visual information with motor tasks is reduced, implying that giant squids don’t rely on visually guided behavior like camouflage and body patterning to communicate with one another, as other cephalopods do.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

### Me on Restaurant Surveillance Technology

I attended the National Restaurant Association exposition in Chicago earlier this year, and looked at all the ways modern restaurant IT is spying on people.

But there’s also a fundamentally creepy aspect to much of this. One of the prime ways to increase value for your brand is to use the Internet to practice surveillance of both your customers and employees. The customer side feels less invasive: Loyalty apps are pretty nice, if in fact you generally go to the same place, as is the ability to place orders electronically or make reservations with a click. The question, Schneier asks, is “who owns the data?” There’s value to collecting data on spending habits, as we’ve seen across e-commerce. Are restaurants fully aware of what they are giving away? Schneier, a critic of data mining, points out that it becomes especially invasive through “secondary uses,” when the “data is correlated with other data and sold to third parties.” For example, perhaps you’ve entered your name, gender, and age into a taco loyalty app (12th taco free!). Later, the vendors of that app sell your data to other merchants who know where and when you eat, whether you are a vegetarian, and lots of other data that you have accidentally shed. Is that what customers really want?

### Zero-Day Vulnerabilities against Windows in the NSA Tools Released by the Shadow Brokers

In April, the Shadow Brokers—presumably Russia—released a batch of Windows exploits from what is presumably the NSA. Included in that release were eight different Windows vulnerabilities. Given a presumed theft date of the data as sometime between 2012 and 2013—based on timestamps of the documents and the limited Windows 8 support of the tools:

• Three were already patched by Microsoft. That is, they were not zero days, and could only be used against unpatched targets. They are EMERALDTHREAD, EDUCATEDSCHOLAR, and ECLIPSEDWING.
• One was discovered to have been used in the wild and patched in 2014: ESKIMOROLL.
• Four were only patched when the NSA informed Microsoft about them in early 2017: ETERNALBLUE, ETERNALSYNERGY, ETERNALROMANCE, and ETERNALCHAMPION.

So of the five serious zero-day vulnerabilities against Windows in the NSA’s pocket, four were never independently discovered. This isn’t new news, but I haven’t seen this summary before.

### Firing a Locked Smart Gun

The Armatix IP1 “smart gun” can only be fired by someone who is wearing a special watch. Unfortunately, this security measure is easily hackable.

### Roombas will Spy on You

The company that sells the Roomba autonomous vacuum wants to sell the data about your home that it collects.

Some questions:

What happens if a Roomba user consents to the data collection and later sells his or her home—especially furnished—and now the buyers of the data have a map of a home that belongs to someone who didn’t consent, Mr. Gidari asked. How long is the data kept? If the house burns down, can the insurance company obtain the data and use it to identify possible causes? Can the police use it after a robbery?

EDITED TO ADD (6/29): Roomba is backtracking—for now.

### Alternatives to Government-Mandated Encryption Backdoors

Policy essay: “Encryption Substitutes,” by Andrew Keane Woods:

Blog post.

### US Army Researching Bot Swarms

The US Army Research Agency is funding research into autonomous bot swarms. From the announcement:

The objective of this CRA is to perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. To achieve this, ARL is requesting proposals that address three key Research Areas (RAs):

RA1: Distributed Intelligence: Establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team.

RA2: Heterogeneous Group Control: Develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy.

RA3: Adaptive and Resilient Behaviors: Develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world.

And while we’re on the subject, this is an excellent report on AI and national security.

### Friday Squid Blogging: Giant Squid Caught Off the Coast of Ireland

It’s the second in two months. Video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

### Hacking a Segway

The Segway has a mobile app. It is hackable:

While analyzing the communication between the app and the Segway scooter itself, Kilbride noticed that a user PIN meant to protect the Bluetooth communication from unauthorized access wasn’t being used for authentication at every level of the system. As a result, Kilbride could send arbitrary commands to the scooter without needing the user-chosen PIN.

He also discovered that the hoverboard’s software update platform didn’t have a mechanism in place to confirm that firmware updates sent to the device were really from Segway (often called an “integrity check”). This meant that in addition to sending the scooter commands, an attacker could easily trick the device into installing a malicious firmware update that could override its fundamental programming. In this way an attacker would be able to nullify built-in safety mechanisms that prevented the app from remote-controlling or shutting off the vehicle while someone was on it.

“The app allows you to do things like change LED colors, it allows you to remote-control the hoverboard and also apply firmware updates, which is the interesting part,” Kilbride says. “Under the right circumstances, if somebody applies a malicious firmware update, any attacker who knows the right assembly language could then leverage this to basically do as they wish with the hoverboard.”
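The missing "integrity check" is worth spelling out. A minimal sketch of what one looks like, using an HMAC with a device-held key purely for illustration (this is not Segway's design, and a real vendor would use an asymmetric signature so the device never holds a signing secret):

```python
# Sketch of a firmware integrity check: before installing an update, the
# device verifies the image was produced by the vendor. HMAC is used here
# only for illustration; the key and firmware contents are hypothetical.
import hmac
import hashlib

VENDOR_KEY = b"hypothetical-shared-secret"  # assumption, not a real key

def sign_firmware(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Vendor side: append a 32-byte authentication tag to the image."""
    return image + hmac.new(key, image, hashlib.sha256).digest()

def verify_and_extract(blob: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Device side: reject any image whose tag does not verify."""
    image, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("firmware rejected: integrity check failed")
    return image

genuine = sign_firmware(b"firmware v1.2")
assert verify_and_extract(genuine) == b"firmware v1.2"

# Flip one bit and the device refuses the update.
tampered = genuine[:-1] + bytes([genuine[-1] ^ 1])
try:
    verify_and_extract(tampered)
except ValueError as e:
    print(e)  # firmware rejected: integrity check failed
```

Without any check of this kind, the update channel accepts whatever it is fed, which is exactly the flaw Kilbride exploited.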

### Ethereum Hacks

The press is reporting a $32M theft of the cryptocurrency Ethereum. Like all such thefts, it’s the result not of a cryptographic failure in the currency, but of a vulnerability in the software surrounding the currency—in this case, digital wallets.

This is the second Ethereum hack this week. The first tricked people into sending their Ethereum to another address.

This is my concern about digital cash. The cryptography can be bulletproof, but the computer security will always be an issue.

### Password Masking

Slashdot asks if password masking—replacing password characters with asterisks as you type them—is on the way out. I don’t know if that’s true, but I would be happy to see it go. Shoulder surfing, the threat it defends against, is largely nonexistent. And it is becoming harder to type in passwords on small screens and annoying interfaces. The IoT will only exacerbate this problem, and when passwords are harder to type in, users choose weaker ones.

### Australia Considering New Law Weakening Encryption

News from Australia:

Under the law, internet companies would have the same obligations telephone companies do to help law enforcement agencies, Prime Minister Malcolm Turnbull said. Law enforcement agencies would need warrants to access the communications.

“We’ve got a real problem in that the law enforcement agencies are increasingly unable to find out what terrorists and drug traffickers and pedophile rings are up to because of the very high levels of encryption,” Turnbull told reporters.

“Where we can compel it, we will, but we will need the cooperation from the tech companies,” he added.

Never mind that the law 1) would not achieve the desired results because all the smart “terrorists and drug traffickers and pedophile rings” will simply use a third-party encryption app, and 2) would make everyone else in Australia less secure. But that’s all ground I’ve covered before.

I found this bit amusing:

Asked whether the laws of mathematics behind encryption would trump any new legislation, Mr Turnbull said: “The laws of Australia prevail in Australia, I can assure you of that.

“The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia.”

Next Turnbull is going to try to legislate that pi = 3.2.

Another article. BoingBoing post.

### Friday Squid Blogging: Eyeball Collector Wants a Giant-Squid Eyeball

They’re rare:

The one Dubielzig really wants is an eye from a giant squid, which has the biggest eye of any living animal—it’s the size of a dinner plate.

“But there are no intact specimens of giant squid eyes, only rotten specimens that have been beached,” he says.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

### Book Review: Twitter and Tear Gas, by Zeynep Tufekci

There are two opposing models of how the Internet has changed protest movements. The first is that the Internet has made protesters mightier than ever. This comes from the successful revolutions in Tunisia (2010-11), Egypt (2011), and Ukraine (2013). The second is that it has made them more ineffectual. Derided as “slacktivism” or “clicktivism,” the ease of action without commitment can result in movements like Occupy petering out in the US without any obvious effects. Of course, the reality is more nuanced, and Zeynep Tufekci teases that out in her new book Twitter and Tear Gas.

Tufekci is a rare interdisciplinary figure. As a sociologist, programmer, and ethnographer, she studies how technology shapes society and drives social change. She has a dual appointment in both the School of Information Science and the Department of Sociology at the University of North Carolina at Chapel Hill, and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Her regular New York Times column on the social impacts of technology is a must-read.

Modern Internet-fueled protest movements are the subjects of Twitter and Tear Gas. As an observer, writer, and participant, Tufekci examines how modern protest movements have been changed by the Internet—and what that means for protests going forward. Her book combines her own ethnographic research and her usual deft analysis, with the research of others and some big data analysis from social media outlets. The result is a book that is both insightful and entertaining, and whose lessons are much broader than the book’s central topic.

“The Power and Fragility of Networked Protest” is the book’s subtitle. The power of the Internet as a tool for protest is obvious: it gives people newfound abilities to quickly organize and scale. But, according to Tufekci, it’s a mistake to judge modern protests using the same criteria we used to judge pre-Internet protests. The 1963 March on Washington might have culminated in hundreds of thousands of people listening to Martin Luther King Jr. deliver his “I Have a Dream” speech, but it was the culmination of a multi-year protest effort and the result of six months of careful planning made possible by that sustained effort. The 2011 protests in Cairo came together in mere days because they could be loosely coordinated on Facebook and Twitter.

That’s the power. Tufekci describes the fragility by analogy. Nepalese Sherpas assist Mt. Everest climbers by carrying supplies, laying out ropes and ladders, and so on. This means that people with limited training and experience can make the ascent, which is no less dangerous—sometimes to disastrous results. Says Tufekci: “The Internet similarly allows networked movements to grow dramatically and rapidly, but without prior building of formal or informal organizational and other collective capacities that could prepare them for the inevitable challenges they will face and give them the ability to respond to what comes next.” That makes them less able to respond to government counters, change their tactics—a phenomenon Tufekci calls “tactical freeze”—make movement-wide decisions, and survive over the long haul.

Tufekci isn’t arguing that modern protests are necessarily less effective, but that they’re different. Effective movements need to understand these differences, and leverage these new advantages while minimizing the disadvantages.

To that end, she develops a taxonomy for talking about social movements. Protests are an example of a “signal” that corresponds to one of several underlying “capacities.” There’s narrative capacity: the ability to change the conversation, as Black Lives Matter did with police violence and Occupy did with wealth inequality. There’s disruptive capacity: the ability to stop business as usual. An early Internet example is the 1999 WTO protests in Seattle. And finally, there’s electoral or institutional capacity: the ability to vote, lobby, fund raise, and so on. Because of various “affordances” of modern Internet technologies, particularly social media, the same signal—a protest of a given size—reflects different underlying capacities.

This taxonomy also informs government reactions to protest movements. Smart responses target attention as a resource. The Chinese government responded to the 2014 protesters in Hong Kong by not engaging with them at all, denying them camera-phone videos that would go viral and attract the world’s attention. Instead, they pulled their police back and waited for the movement to die from lack of attention.

If this all sounds dry and academic, it’s not. Twitter and Tear Gas is infused with a richness of detail stemming from her personal participation in the 2013 Gezi Park protests in Turkey, as well as personal on-the-ground interviews with protesters throughout the Middle East—particularly Egypt and her native Turkey—Zapatistas in Mexico, WTO protesters in Seattle, Occupy participants worldwide, and others. Tufekci writes with a warmth and respect for the humans that are part of these powerful social movements, gently intertwining her own story with the stories of others, big data, and theory. She is adept at writing for a general audience, and, despite being published by the intimidating Yale University Press, her book is more mass-market than academic. What rigor is there is presented in a way that carries readers along rather than distracting.

The synthesist in me wishes Tufekci would take some additional steps, taking the trends she describes outside of the narrow world of political protest and applying them more broadly to social change. Her taxonomy is an important contribution to the more-general discussion of how the Internet affects society. Furthermore, her insights on the networked public sphere have applications for understanding technology-driven social change in general. These are hard conversations for society to have. We largely prefer to allow technology to blindly steer society or—in some ways worse—leave it to unfettered for-profit corporations. When you’re reading Twitter and Tear Gas, keep current and near-term future technological issues such as ubiquitous surveillance, algorithmic discrimination, and automation and employment in mind. You’ll come away with new insights.

Tufekci twice quotes historian Melvin Kranzberg from 1985: “Technology is neither good nor bad; nor is it neutral.” This foreshadows her central message. For better or worse, the technologies that power the networked public sphere have changed the nature of political protest as well as government reactions to and suppressions of such protest.

I have long characterized our technological future as a battle between the quick and the strong. The quick—dissidents, hackers, criminals, marginalized groups—are the first to make use of a new technology to magnify their power. The strong are slower, but have more raw power to magnify. So while protesters are the first to use Facebook to organize, the governments eventually figure out how to use Facebook to track protesters. It’s still an open question who will gain the upper hand in the long term, but Tufekci’s book helps us understand the dynamics at work.

This essay originally appeared on Vice Motherboard.

The book on Amazon.com.

### Forged Documents and Microsoft Fonts

A set of documents in Pakistan were detected as forgeries because their fonts were not in circulation at the time the documents were dated.

### Tomato-Plant Security

I have a soft spot for interesting biological security measures, especially by plants. I’ve used them as examples in several of my books. Here’s a new one: when tomato plants are attacked by caterpillars, they release a chemical that turns the caterpillars on each other:

It’s common for caterpillars to eat each other when they’re stressed out by the lack of food. (We’ve all been there.) But why would they start eating each other when the plant food is right in front of them? Answer: because of devious behavior control by plants.

When plants are attacked (read: eaten) they make themselves more toxic by activating a chemical called methyl jasmonate. Scientists sprayed tomato plants with methyl jasmonate to kick off these responses, then unleashed caterpillars on them.

Compared to an untreated plant, a high-dose plant had five times as much plant left behind because the caterpillars were turning on each other instead. The caterpillars on a treated tomato plant ate twice as many other caterpillars as the ones on a control plant did.

### More on the NSA's Use of Traffic Shaping

“Traffic shaping”—the practice of tricking data to flow through a particular route on the Internet so it can be more easily surveilled—is an NSA technique that has gotten much less attention than it deserves. It’s a powerful technique that allows an eavesdropper to get access to communications channels it would otherwise not be able to monitor.

There’s a new paper on this technique:

This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad. In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term “traffic shaping” to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance. Since it is hard to intercept Yemen’s international communications from inside Yemen itself, the agency might try to “shape” the traffic so that it passes through communications cables located on friendlier territory. Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.

The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.

Could the NSA use traffic shaping to redirect domestic Internet traffic—­emails and chat messages sent between Americans, say­—to foreign soil, where its surveillance can be conducted beyond the purview of Congress and the courts? It is impossible to categorically answer this question, due to the classified nature of many national-security surveillance programs, regulations and even of the legal decisions made by the surveillance courts. Nevertheless, this report explores a legal, technical, and operational landscape that suggests that traffic shaping could be exploited to sidestep legal restrictions imposed by Congress and the surveillance courts.

News article. NSA document detailing the technique with Yemen.

This work builds on previous research that I blogged about here.

The fundamental vulnerability is that routing information isn’t authenticated.

### Hacking Spotify

Some of the ways artists are hacking the music-streaming service Spotify.

### The Future of Forgeries

This article argues that AI technologies will make image, audio, and video forgeries much easier in the future.

Combined, the trajectory of cheap, high-quality media forgeries is worrying. At the current pace of progress, it may be as little as two or three years before realistic audio forgeries are good enough to fool the untrained ear, and only five or 10 years before forgeries can fool at least some types of forensic analysis. When tools for producing fake video perform at higher quality than today’s CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem. The growth in this technology will transform the meaning of evidence and truth in domains across journalism, government communications, testimony in criminal justice, and, of course, national security.

I am less worried about fooling the “untrained ear” and more worried about fooling forensic analysis. But there’s an arms race here. Recording technologies will get more sophisticated, too, making their outputs harder to forge. Still, I agree that the advantage will go to the forgers and not the forgery detectors.

### Friday Squid Blogging: Why It's Hard to Track the Squid Population

Counting squid is not easy.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

### An Assassin's Teapot

This teapot has two chambers. Liquid is released from one or the other depending on whether an air hole is covered. I want one.

### DNI Wants Research into Secure Multiparty Computation

The Intelligence Advanced Research Projects Activity (IARPA) is soliciting proposals for research projects in secure multiparty computation:

Specifically of interest is computing on data belonging to different—potentially mutually distrusting—parties, which are unwilling or unable (e.g., due to laws and regulations) to share this data with each other or with the underlying compute platform. Such computations may include oblivious verification mechanisms to prove the correctness and security of computation without revealing underlying data, sensitive computations, or both.
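To make the concept concrete: secure multiparty computation lets parties jointly compute a function of their private inputs without revealing those inputs to each other. A toy illustration using additive secret sharing (real MPC protocols, and anything IARPA would fund, are far more sophisticated—this shows only the core idea):

```python
# Three mutually distrusting parties compute the sum of their private
# inputs via additive secret sharing, without revealing any single input.
# All values here are illustrative.
import random

MOD = 2**61 - 1  # arithmetic is done modulo a public prime

def share(secret: int, n_parties: int) -> list:
    """Split a secret into n random shares that sum to it mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

inputs = [42, 17, 99]  # each party's private value
n = len(inputs)

# Each party shares its input; party j ends up holding one share of each.
all_shares = [share(x, n) for x in inputs]
held = [[all_shares[i][j] for i in range(n)] for j in range(n)]

# Each party locally sums the shares it holds and publishes only that sum.
partial = [sum(h) % MOD for h in held]

# The published partials reveal the total, but no individual input:
# each share on its own is a uniformly random number.
print(sum(partial) % MOD)  # 158
```

The "oblivious verification mechanisms" in the solicitation go further, proving the computation was done correctly without revealing the underlying data.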

My guess is that this is to perform analysis using data obtained from different surveillance authorities.

### Now It's Easier than Ever to Steal Someone's Keys

The website key.me will make a duplicate key from a digital photo.

If a friend or coworker leaves their keys unattended for a few seconds, you know what to do.

EDITED TO ADD (7/20): Another article.

### Dubai Deploying Autonomous Robotic Police Cars

It’s hard to tell how much of this story is real and how much is aspirational, but it really is only a matter of time:

About the size of a child’s electric toy car, the driverless vehicles will patrol different areas of the city to boost security and hunt for unusual activity, all the while scanning crowds for potential persons of interest to police and known criminals.

### Commentary on US Election Security

Good commentaries from Ed Felten and Matt Blaze.

Both make a point that I have also been saying: hacks can undermine the legitimacy of an election, even if there is no actual voter or vote manipulation.

Felten:

The second lesson is that we should be paying more attention to attacks that aim to undermine the legitimacy of an election rather than changing the election’s result. Election-stealing attacks have gotten most of the attention up to now—and we are still vulnerable to them in some places—but it appears that external threat actors may be more interested in attacking legitimacy.

Attacks on legitimacy could take several forms. An attacker could disrupt the operation of the election, for example, by corrupting voter registration databases so there is uncertainty about whether the correct people were allowed to vote. They could interfere with post-election tallying processes, so that incorrect results were reported—an attack that might have the intended effect even if the results were eventually corrected. Or the attacker might fabricate evidence of an attack, and release the false evidence after the election.

Legitimacy attacks could be easier to carry out than election-stealing attacks, as well. For one thing, a legitimacy attacker will typically want the attack to be discovered, although they might want to avoid having the culprit identified. By contrast, an election-stealing attack must avoid detection in order to succeed. (If detected, it might function as a legitimacy attack.)

Blaze:

A hostile state actor who can compromise a handful of county networks might not even need to alter any actual votes to create considerable uncertainty about an election’s legitimacy. It may be sufficient to simply plant some suspicious software on back end networks, create some suspicious audit files, or add some obviously bogus names to the voter rolls. If the preferred candidate wins, they can quietly do nothing (or, ideally, restore the compromised networks to their original states). If the “wrong” candidate wins, however, they could covertly reveal evidence that county election systems had been compromised, creating public doubt about whether the election had been “rigged”. This could easily impair the ability of the true winner to effectively govern, at least for a while.

In other words, a hostile state actor interested in disruption may actually have an easier task than someone who wants to undetectably steal even a small local office. And a simple phishing and trojan horse email campaign like the one in the NSA report is potentially all that would be needed to carry this out.

Me:

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and, by extension, our democracy.

And, finally, a report from the Brennan Center for Justice on how to secure elections.

### GoldenEye Malware

I don’t have anything to say—mostly because I’m otherwise busy—about the malware known as GoldenEye, NotPetya, or ExPetr. But I wanted a post to park links.

### A Man-in-the-Middle Attack against a Password Reset System

This is nice work: “The Password Reset MitM Attack,” by Nethanel Gelernter, Senia Kalma, Bar Magnezi, and Hen Porcilan:

Abstract: We present the password reset MitM (PRMitM) attack and show how it can be used to take over user accounts. The PRMitM attack exploits the similarity of the registration and password reset processes to launch a man in the middle (MitM) attack at the application level. The attacker initiates a password reset process with a website and forwards every challenge to the victim who either wishes to register in the attacking site or to access a particular resource on it.

The attack has several variants, including exploitation of a password reset process that relies on the victim’s mobile phone, using either SMS or phone call. We evaluated the PRMitM attacks on Google and Facebook users in several experiments, and found that their password reset process is vulnerable to the PRMitM attack. Other websites and some popular mobile applications are vulnerable as well.

Although solutions seem trivial in some cases, our experiments show that the straightforward solutions are not as effective as expected. We designed and evaluated two secure password reset processes and evaluated them on users of Google and Facebook. Our results indicate a significant improvement in the security. Since millions of accounts are currently vulnerable to the PRMitM attack, we also present a list of recommendations for implementing and auditing the password reset process.
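The mechanics of the attack are simple enough to simulate. A stripped-down sketch of the flow the abstract describes (all names and classes here are illustrative, not the paper's implementation):

```python
# Simplified PRMitM flow: the attacker initiates a password reset at the
# target site and relays the reset challenge to the victim, who believes
# it belongs to the attacker's own registration form.
import random

class TargetSite:
    """The site whose account the attacker wants to take over."""
    def __init__(self):
        self.code = None

    def start_reset(self, username: str) -> str:
        # In reality this code goes to the account owner by SMS or email;
        # the security-question variant works the same way.
        self.code = f"{random.randrange(10**6):06d}"
        return self.code

    def finish_reset(self, code: str) -> bool:
        return code == self.code

class Victim:
    """Registering on the attacker's site, answering its 'verification'."""
    def answer_challenge(self, challenge: str) -> str:
        # Types the code into what looks like a registration step.
        return challenge

target, victim = TargetSite(), Victim()

# The attacker forwards the target's challenge as its own registration step.
challenge = target.start_reset("victim@example.com")
relayed_answer = victim.answer_challenge(challenge)
print(target.finish_reset(relayed_answer))  # True: account taken over
```

The defense the paper points toward is making the reset message clearly identify which site and which action it is for, so the victim notices the mismatch.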