Blog: April 2017 Archives

Friday Squid Blogging: Live Squid Washes up on North Carolina Beach

A “mysterious squid”—big and red—washed up on a beach in Carteret County, North Carolina. Someone found it, still alive, and set it back in the water after taking some photos of it. Squid scientists later decided it was a diamondback squid.

So, you think that O’Shea might know the identity of the squid Carey Walker found on the Portsmouth Island Beach, just by looking at an emailed photo or two? Indeed, he did. After a couple of days of back-and-forth emails—it can be difficult to connect consistently with a world-famous man who lives now in Australia—he reported that, while unusual to be seen on beaches in our parts, this was not a particularly unusual squid: It was a diamondback squid, known in scientific nomenclature as Thysanoteuthis rhombus.

T. rhombus, also known as the diamond squid or diamondback squid, is a large species that grows to about 100 centimeters in length, which translates to about 39 inches, and ranges in weight from 20 to 30 kilograms, which translates to 44 to 66 pounds. Which means that, if nothing else, Carey Walker is pretty good at estimating the weight and length of big red squids he picks up on remote beaches.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on April 28, 2017 at 4:37 PM • 139 Comments

Jumping Airgaps with a Laser and a Scanner

Researchers have configured two computers to talk to each other using a laser and a scanner.

Scanners work by detecting reflected light on their glass pane. The light creates a charge that the scanner translates into binary, which gets converted into an image. But scanners are sensitive to any changes of light in a room­—even when paper is on the glass pane or when the light source is infrared—which changes the charges that get converted to binary. This means signals can be sent through the scanner by flashing light at its glass pane using either a visible light source or an infrared laser that is invisible to human eyes.
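
The covert channel itself is simple on-off keying. Here’s a toy model of the idea (hypothetical brightness values, threshold, and framing, not the researchers’ actual modulation scheme):

```python
# Toy model of the scanner covert channel: the attacker flashes a light
# source (on/off keying), and malware on the scanner-attached machine
# recovers bits from per-scan brightness readings.
# The threshold and brightness values are illustrative, not from the paper.

BRIGHTNESS_THRESHOLD = 128  # hypothetical cutoff between "light on" and "light off"

def encode(message: str) -> list[int]:
    """Turn a message into a flat list of bits, MSB first per byte."""
    return [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]

def to_light_levels(bits: list[int]) -> list[int]:
    """Simulate the laser: a bright sample for 1, a dim sample for 0."""
    return [200 if bit else 40 for bit in bits]

def decode(samples: list[int]) -> str:
    """Malware side: threshold the samples back into bits, then bytes."""
    bits = [1 if s > BRIGHTNESS_THRESHOLD else 0 for s in samples]
    data = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.decode()

assert decode(to_light_levels(encode("pwn"))) == "pwn"
```

The real attack has to contend with ambient light and the scanner’s slow sampling rate, which is why the researchers’ bit rates are so low.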

There are a couple of caveats to the attack—the malware to decode the signals has to already be installed on a system on the network, and the lid on the scanner has to be at least partially open to receive the light. It’s not unusual for workers to leave scanner lids open after using them, however, and an attacker could also pay a cleaning crew or other worker to leave the lid open at night.

The setup is that there’s malware on the computer connected to the scanner, and that computer isn’t on the Internet. This technique allows an attacker to communicate with that computer. For extra coolness, the laser can be mounted on a drone.

Here’s the paper. And two videos.

Posted on April 28, 2017 at 12:48 PM • 53 Comments

Stealing Browsing History Using Your Phone's Ambient Light Sensor

There has been a flurry of research into using the various sensors on your phone to steal data in surprising ways. Here’s another: using the phone’s ambient light sensor to detect what’s on the screen. It’s a proof of concept, but the paper’s general conclusions are correct:

There is a lesson here that designing specifications and systems from a privacy engineering perspective is a complex process: decisions about exposing sensitive APIs to the web without any protections should not be taken lightly. One danger is that specification authors and browser vendors will base decisions on overly general principles and research results which don’t apply to a particular new feature (similarly to how protections on gyroscope readings might not be sufficient for light sensor data).

Posted on April 28, 2017 at 6:17 AM • 13 Comments

Reading Analytics and Privacy

Interesting paper: “The rise of reading analytics and the emerging calculus of reading privacy in the digital world,” by Clifford Lynch:

Abstract: This paper studies emerging technologies for tracking reading behaviors (“reading analytics”) and their implications for reader privacy, attempting to place them in a historical context. It discusses what data is being collected, to whom it is available, and how it might be used by various interested parties (including authors). I explore means of tracking what’s being read, who is doing the reading, and how readers discover what they read. The paper includes two case studies: mass-market e-books (both directly acquired by readers and mediated by libraries) and scholarly journals (usually mediated by academic libraries); in the latter case I also provide examples of the implications of various authentication, authorization and access management practices on reader privacy. While legal issues are touched upon, the focus is generally pragmatic, emphasizing technology and marketplace practices. The article illustrates the way reader privacy concerns are shifting from government to commercial surveillance, and the interactions between government and the private sector in this area. The paper emphasizes U.S.-based developments.

Posted on April 27, 2017 at 6:20 AM • 6 Comments

Analyzing Cyber Insurance Policies

There’s a really interesting new paper analyzing over 100 different cyber insurance policies. From the abstract:

In this research paper, we seek to answer fundamental questions concerning the current state of the cyber insurance market. Specifically, by collecting over 100 full insurance policies, we examine the composition and variation across three primary components: The coverage and exclusions of first and third party losses which define what is and is not covered; The security application questionnaires which are used to help assess an applicant’s security posture; and the rate schedules which define the algorithms used to compute premiums.

Overall, our research shows a much greater consistency among loss coverage and exclusions of insurance policies than is often assumed. For example, after examining only 5 policies, all coverage topics were identified, while it took only 13 policies to capture all exclusion topics. However, while each policy may include commonly covered losses or exclusions, there was often additional language further describing exceptions, conditions, or limits to the coverage. The application questionnaires provide insights into the security technologies and management practices that are (and are not) examined by carriers. For example, our analysis identified four main topic areas: Organizational, Technical, Policies and Procedures, and Legal and Compliance. Despite these sometimes lengthy questionnaires, however, there still appeared to be relevant gaps. For instance, information about the security posture of third-party service and supply chain providers is notoriously difficult to assess properly (despite numerous breaches occurring from such compromises).

In regard to the rate schedules, we found a surprising variation in the sophistication of the equations and metrics used to price premiums. Many policies examined used very simple, flat-rate pricing (based simply on expected loss), while others incorporated more parameters such as the firm’s asset value (or firm revenue), standard insurance metrics (e.g. limits, retention, coinsurance), and industry type. More sophisticated policies also included information about specific information security controls and practices, as collected from the security questionnaires. By examining these components of insurance contracts, we hope to provide the first-ever insights into how insurance carriers understand and price cyber risks.
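
To make the difference concrete, here’s a toy comparison of the two kinds of rate schedule. Every rate, factor, and formula below is invented for illustration; none of it comes from the surveyed policies:

```python
# Illustrative cyber-insurance premium calculations. Every rate and factor
# below is made up; the surveyed policies' actual schedules differ.

def flat_rate_premium(expected_loss: float, rate: float = 0.02) -> float:
    """Simplest schedule: a flat percentage of expected loss."""
    return expected_loss * rate

# Hypothetical multipliers standing in for the carriers' industry-type tables.
INDUSTRY_FACTOR = {"healthcare": 1.5, "retail": 1.2, "manufacturing": 1.0}

def parameterized_premium(base_rate: float, revenue: float, limit: float,
                          retention: float, industry: str) -> float:
    """A richer schedule: firm size, policy limit, retention, and industry."""
    size_factor = (revenue / 1_000_000) ** 0.5          # sublinear in revenue
    retention_credit = max(0.5, 1 - retention / limit)  # higher retention, lower premium
    return base_rate * limit * size_factor * retention_credit * INDUSTRY_FACTOR[industry]

# A healthcare firm pays more than an identical manufacturer under this schedule.
assert (parameterized_premium(0.01, 4e6, 1e6, 1e5, "healthcare")
        > parameterized_premium(0.01, 4e6, 1e6, 1e5, "manufacturing"))
```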

Posted on April 26, 2017 at 6:14 AM • 18 Comments

Advances in Ad Blocking

Ad blockers represent the largest consumer boycott in human history. They’re also an arms race between the blockers and the blocker blockers. This article discusses a new ad-blocking technology that represents another advance in this arms race. I don’t think it will “put an end to the ad-blocking arms race,” as the title proclaims, but it will definitely give the blockers the upper hand.

The software, devised by Arvind Narayanan, Dillon Reisman, Jonathan Mayer, and Grant Storey, is novel in two major ways: First, it looks at the struggle between advertising and ad blockers as fundamentally a security problem that can be fought in much the same way antivirus programs attempt to block malware, using techniques borrowed from rootkits and built-in web browser customizability to stealthily block ads without being detected. Second, the team notes that there are regulations and laws on the books that give a fundamental advantage to consumers that cannot be easily changed, opening the door to a long-term ad-blocking solution.

Now if we could only block the data collection as well.

Posted on April 25, 2017 at 12:07 PM • 66 Comments

Smart TV Hack via the Broadcast Signal

This is impressive:

The proof-of-concept exploit uses a low-cost transmitter to embed malicious commands into a rogue TV signal. That signal is then broadcast to nearby devices. It worked against two fully updated TV models made by Samsung. By exploiting two known security flaws in the Web browsers running in the background, the attack was able to gain highly privileged root access to the TVs. By revising the attack to target similar browser bugs found in other sets, the technique would likely work on a much wider range of TVs.

Posted on April 20, 2017 at 7:41 AM • 57 Comments

Surveillance and Our Insecure Infrastructure

Since Edward Snowden revealed to the world the extent of the NSA’s global surveillance network, there has been a vigorous debate in the technological community about what its limits should be.

Less discussed is how many of these same surveillance techniques are used by other—smaller and poorer—more totalitarian countries to spy on political opponents, dissidents, and human rights defenders; Citizen Lab at the University of Toronto has documented some of the many abuses, by countries like Ethiopia, the UAE, Iran, Syria, Kazakhstan, Sudan, Ecuador, Malaysia, and China.

That these countries can use network surveillance technologies to violate human rights is a shame on the world, and there’s a lot of blame to go around.

We can point to the governments that are using surveillance against their own citizens.

We can certainly blame the cyberweapons arms manufacturers that are selling those systems, and the countries—mostly European—that allow those arms manufacturers to sell those systems.

There’s a lot more the global Internet community could do to limit the availability of sophisticated Internet and telephony surveillance equipment to totalitarian governments. But I want to focus on another contributing cause to this problem: the fundamental insecurity of our digital systems that makes this a problem in the first place.

IMSI catchers are fake mobile phone towers. They allow someone to impersonate a cell network and collect information about phones in the vicinity of the device, and they’re used to create lists of people who were at a particular event or near a particular location.

Fundamentally, the technology works because the phone in your pocket automatically trusts any cell tower to which it connects. There’s no security in the connection protocols between the phones and the towers.
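
The asymmetry is easy to model: the tower challenges the phone, but the phone never challenges the tower. In this toy sketch (the key, MAC, and class names are invented stand-ins for the real GSM SIM algorithms), an impostor tower is accepted without knowing any secret at all:

```python
# Toy model of one-way cellular authentication: the phone proves itself to
# the tower, but never verifies the tower. The key and MAC are stand-ins
# for the real SIM algorithms (COMP128/Milenage).
import hmac, hashlib, os

SIM_KEY = b"subscriber-secret-key"  # shared between the SIM and the real carrier

class Phone:
    def attach(self, tower) -> bool:
        # The phone answers any tower's challenge; it never challenges back.
        challenge = tower.send_challenge()
        response = hmac.new(SIM_KEY, challenge, hashlib.sha256).digest()
        return tower.verify(response)

class ImsiCatcher:
    """A fake tower: it issues challenges and 'accepts' every response,
    capturing the phone's identity in the process."""
    def __init__(self):
        self.phones_seen = 0

    def send_challenge(self) -> bytes:
        return os.urandom(16)

    def verify(self, response: bytes) -> bool:
        self.phones_seen += 1  # identity logged
        return True            # no key needed: the tower is never authenticated

catcher = ImsiCatcher()
assert Phone().attach(catcher)  # the phone happily attaches to the impostor
```

Later standards like UMTS and LTE do add network authentication, but IMSI catchers can often force phones to fall back to the older, unauthenticated protocol.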

IP intercept systems are used to eavesdrop on what people do on the Internet. Unlike the surveillance that happens at the sites you visit, by companies like Facebook and Google, this surveillance happens at the point where your computer connects to the Internet. Here, someone can eavesdrop on everything you do.

This system also exploits existing vulnerabilities in the underlying Internet communications protocols. Most of the traffic between your computer and the Internet is unencrypted, and what is encrypted is often vulnerable to man-in-the-middle attacks because of insecurities in both the Internet protocols and the encryption protocols that protect it.

There are many other examples. What they all have in common is that they are vulnerabilities in our underlying digital communications systems that allow someone—whether it’s a country’s secret police, a rival national intelligence organization, or a criminal group—to break or bypass what security there is and spy on the users of these systems.

These insecurities exist for two reasons. First, they were designed in an era where computer hardware was expensive and inaccessibility was a reasonable proxy for security. When the mobile phone network was designed, faking a cell tower was an incredibly difficult technical exercise, and it was reasonable to assume that only legitimate cell providers would go to the effort of creating such towers.

At the same time, computers were less powerful and software was much slower, so adding security into the system seemed like a waste of resources. Fast forward to today: computers are cheap and software is fast, and what was impossible only a few decades ago is now easy.

The second reason is that governments use these surveillance capabilities for their own purposes. The FBI has used IMSI-catchers for years to investigate crimes. The NSA uses IP interception systems to collect foreign intelligence. Both of these agencies, as well as their counterparts in other countries, have put pressure on the standards bodies that create these systems to not implement strong security.

Of course, technology isn’t static. With time, things become cheaper and easier. What was once a secret NSA interception program or a secret FBI investigative tool becomes usable by less-capable governments and cybercriminals.

Man-in-the-middle attacks against Internet connections are a common criminal tool to steal credentials from users and hack their accounts.

IMSI-catchers are used by criminals, too. Right now, you can go onto Alibaba.com and buy your own IMSI catcher for under $2,000.

Despite their uses by democratic governments for legitimate purposes, our security would be much better served by fixing these vulnerabilities in our infrastructures.

These systems are not only used by dissidents in totalitarian countries, they’re also used by legislators, corporate executives, critical infrastructure providers, and many others in the US and elsewhere.

That we allow people to remain insecure and vulnerable is both wrongheaded and dangerous.

Earlier this month, two American legislators—Senator Ron Wyden and Rep. Ted Lieu—sent a letter to the chairman of the Federal Communications Commission, demanding that he do something about the country’s insecure telecommunications infrastructure.

They pointed out that not only are insecurities rampant in the underlying protocols and systems of the telecommunications infrastructure, but also that the FCC knows about these vulnerabilities and isn’t doing anything to force the telcos to fix them.

Wyden and Lieu make the point that fixing these vulnerabilities is a matter of US national security, but it’s also a matter of international human rights. All modern communications technologies are global, and anything the US does to improve its own security will also improve security worldwide.

Yes, it means that the FBI and the NSA will have a harder job spying, but it also means that the world will be a safer and more secure place.

This essay previously appeared on AlJazeera.com.

Posted on April 17, 2017 at 6:21 AM • 91 Comments

Attack vs. Defense in Nation-State Cyber Operations

I regularly say that, on the Internet, attack is easier than defense. There are a bunch of reasons for this, but primarily it’s 1) the complexity of modern networked computer systems and 2) the attacker’s ability to choose the time and method of the attack versus the defender’s necessity to secure against every type of attack. This is true, but how this translates to military cyber-operations is less straightforward. Contrary to popular belief, government cyberattacks are not bolts out of the blue, and the attack/defense balance is more…well…balanced.

Rebecca Slayton has a good article in International Security that tries to make sense of this: “What is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment.” In it, she points out that launching a cyberattack is more than finding and exploiting a vulnerability, and it is those other things that help balance the offensive advantage.

Posted on April 13, 2017 at 5:45 AM • 25 Comments

Research on Tech-Support Scams

Interesting paper: “Dial One for Scam: A Large-Scale Analysis of Technical Support Scams“:

Abstract: In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and are in need of their technical support. In this process, the victims are asked to provide scammers with remote access to their machines, who will then “diagnose the problem”, before offering their support services which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web.

In this paper, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved, experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcing agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.
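
The automated-discovery part of the system reduces, at its core, to crawling malvertising chains and scraping the resulting scare pages for contact details. Here’s a toy version of that extraction step (the regex and sample page are illustrative; the real pipeline is far more involved):

```python
# Toy extraction of toll-free "support" numbers from a scam page, sketching
# one step of the authors' automated discovery pipeline. The regex and the
# sample page are illustrative only.
import re

# US toll-free prefixes commonly abused by support scams.
TOLL_FREE = re.compile(
    r"\b1?[-. ]?\(?(8(?:00|33|44|55|66|77|88))\)?[-. ]?(\d{3})[-. ]?(\d{4})\b")

def extract_numbers(page_text: str) -> set[str]:
    """Find and normalize every toll-free number on the page."""
    return {"1-{}-{}-{}".format(*m.groups()) for m in TOLL_FREE.finditer(page_text)}

scare_page = """
WINDOWS SECURITY ALERT!! Your computer is infected.
Call certified technicians NOW: 1-844-555-0123
Toll free support: (855) 555-0199
"""
assert extract_numbers(scare_page) == {"1-844-555-0123", "1-855-555-0199"}
```

Run weekly against freshly discovered domains, even a crude extractor like this accumulates the kind of phone-number corpus the paper describes.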

BoingBoing post.

Posted on April 12, 2017 at 6:34 AM • 22 Comments

Fourth WikiLeaks CIA Attack Tool Dump

WikiLeaks is obviously playing their Top Secret CIA data cache for as much press as they can, leaking the documents a little at a time. On Friday they published their fourth set of documents from what they call “Vault 7”:

27 documents from the CIA’s Grasshopper framework, a platform used to build customized malware payloads for Microsoft Windows operating systems.

We have absolutely no idea who leaked this one. When they first started appearing, I suspected that it was not an insider because there wasn’t anything illegal in the documents. There still isn’t, but let me explain further. The CIA documents are all hacking tools. There’s nothing about programs or targets. Think about the Snowden leaks: it was the information about programs that targeted Americans, programs that swept up much of the world’s information, programs that demonstrated particularly powerful NSA capabilities. There’s nothing like that in the CIA leaks. They’re just hacking tools. All they demonstrate is that the CIA hoards vulnerabilities contrary to the government’s stated position, but we already knew that.

This was my guess from March:

If I had to guess right now, I’d say the documents came from an outsider and not an insider. My reasoning: One, there is absolutely nothing illegal in the contents of any of this stuff. It’s exactly what you’d expect the CIA to be doing in cyberspace. That makes the whistleblower motive less likely. And two, the documents are a few years old, making this more like the Shadow Brokers than Edward Snowden. An internal leaker would leak quickly. A foreign intelligence agency—like the Russians—would use the documents while they were fresh and valuable, and only expose them when the embarrassment value was greater.

But, as I said last month, no one has any idea: we’re all guessing. (Well, to be fair, I hope the CIA knows exactly who did this. Or, at least, exactly where the documents were stolen from.) And I hope the inability of either the NSA or CIA to keep its own attack tools secret will cause them to rethink their decision to hoard vulnerabilities in common Internet systems instead of fixing them.

News articles.

EDITED TO ADD (4/12): An analysis.

Posted on April 10, 2017 at 2:16 PM • 24 Comments

Shadow Brokers Releases the Rest of Their NSA Hacking Tools

Last August, an unknown group called the Shadow Brokers released a bunch of NSA tools to the public. The common guesses were that the tools were discovered on an external staging server, and that the hack and release was the work of the Russians (back then, that wasn’t controversial). This was me:

Okay, so let’s think about the game theory here. Some group stole all of this data in 2013 and kept it secret for three years. Now they want the world to know it was stolen. Which governments might behave this way? The obvious list is short: China and Russia. Were I betting, I would bet Russia, and that it’s a signal to the Obama Administration: “Before you even think of sanctioning us for the DNC hack, know where we’ve been and what we can do to you.”

They published a second, encrypted, file. My speculation:

They claim to be auctioning off the rest of the data to the highest bidder. I think that’s PR nonsense. More likely, that second file is random nonsense, and this is all we’re going to get. It’s a lot, though.

I was wrong. On November 1, the Shadow Brokers released some more documents, and two days ago they released the key to that original encrypted archive:

EQGRP-Auction-Files is CrDj”(;Va.*NdlnzB9M?@K2)#>deB7mN

I don’t think their statement is worth reading for content. I still believe Russia is more likely to be the perpetrator than China.

There’s not much yet on the contents of this dump of Top Secret NSA hacking tools, but it can’t be a fun weekend at Ft. Meade. I’m sure that by now they have enough information to know exactly where and when the data got stolen, and maybe even detailed information on who did it. My guess is that we’ll never see that information, though.

EDITED TO ADD (4/11): Seems like there’s not a lot here.

Posted on April 10, 2017 at 5:51 AM • 63 Comments

Friday Squid Blogging: Squid Can Edit Their Own RNA

This is just plain weird:

Rosenthal, a neurobiologist at the Marine Biological Laboratory, was a grad student studying a specific protein in squid when he got an inkling that some cephalopods might be different. Every time he analyzed that protein’s RNA sequence, it came out slightly different. He realized the RNA was occasionally substituting I’s for A’s, and wondered if squid might apply RNA editing to other proteins. He joined Tel Aviv University bioinformaticists Noa Liscovitch-Brauer and Eli Eisenberg to find out.

In results published today, they report that the family of intelligent mollusks, which includes squid, octopuses, and cuttlefish, features thousands of RNA editing sites in its genes. Where the genetic material of humans, insects, and other multi-celled organisms reads like a book, the squid genome reads more like a Mad Lib.

So why do these creatures engage in RNA editing when most others largely abandoned it? The answer seems to lie in some crazy double-stranded cloverleaves that form alongside editing sites in the RNA. That information is like a tag for RNA editing. When the scientists studied octopuses, squid, and cuttlefish, they found that these species had retained those vast swaths of genetic information at the expense of making the small changes that facilitate evolution. “Editing is important enough that they’re forgoing standard evolution,” Rosenthal says.

He hypothesizes that the development of a complex brain was worth that price. The researchers found many of the edited proteins in brain tissue, creating the elaborate dendrites and axons of the neurons and tuning the shape of the electrical signals that neurons pass. Perhaps RNA editing, adopted as a means of creating a more sophisticated brain, allowed these species to use tools, camouflage themselves, and communicate.

Yet more evidence that these bizarre creatures are actually aliens.

Three more articles. Academic paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on April 7, 2017 at 4:16 PM • 92 Comments

Incident Response as "Hand-to-Hand Combat"

NSA Deputy Director Richard Ledgett described a 2014 Russian cyberattack against the US State Department as “hand-to-hand” combat:

“It was hand-to-hand combat,” said NSA Deputy Director Richard Ledgett, who described the incident at a recent cyber forum, but did not name the nation behind it. The culprit was identified by other current and former officials. Ledgett said the attackers’ thrust-and-parry moves inside the network while defenders were trying to kick them out amounted to “a new level of interaction between a cyber attacker and a defender.”

[…]

Fortunately, Ledgett said, the NSA, whose hackers penetrate foreign adversaries’ systems to glean intelligence, was able to spy on the attackers’ tools and tactics. “So we were able to see them teeing up new things to do,” Ledgett said. “That’s a really useful capability to have.”

I think this is the first public admission that we spy on foreign governments’ cyberwarriors for defensive purposes. He’s right: being able to spy on the attackers’ networks and see what they’re doing before they do it is a very useful capability. It’s something that was first exposed by the Snowden documents: that the NSA spies on enemy networks for defensive purposes.

Also interesting is that another country first found out about the intrusion, and that it has its own offensive capabilities inside Russia’s cyberattack units:

The NSA was alerted to the compromises by a Western intelligence agency. The ally had managed to hack not only the Russians’ computers, but also the surveillance cameras inside their workspace, according to the former officials. They monitored the hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said.

There’s a myth that it’s hard for the US to attribute these sorts of cyberattacks. It used to be, but for the US—and other countries with this kind of intelligence-gathering capability—attribution is not hard. It’s not fast, which is its own problem, and of course it’s not perfect; but it’s not hard.

Posted on April 7, 2017 at 8:06 AM • 44 Comments

Many Android Phones Vulnerable to Attacks Over Malicious Wi-Fi Networks

There’s a blog post from Google’s Project Zero detailing an attack against Android phones over Wi-Fi. From Ars Technica:

The vulnerability resides in a widely used Wi-Fi chipset manufactured by Broadcom and used in both iOS and Android devices. Apple patched the vulnerability with Monday’s release of iOS 10.3.1. “An attacker within range may be able to execute arbitrary code on the Wi-Fi chip,” Apple’s accompanying advisory warned. In a highly detailed blog post published Tuesday, the Google Project Zero researcher who discovered the flaw said it allowed the execution of malicious code on a fully updated 6P “by Wi-Fi proximity alone, requiring no user interaction.”

Google is in the process of releasing an update in its April security bulletin. The fix is available only to a select number of device models, and even then it can take two weeks or more to be available as an over-the-air update to those who are eligible. Company representatives didn’t respond to an e-mail seeking comment for this post.

The proof-of-concept exploit developed by Project Zero researcher Gal Beniamini uses Wi-Fi frames that contain irregular values. The values, in turn, cause the firmware running on Broadcom’s wireless system-on-chip to overflow its stack. By using the frames to target timers responsible for carrying out regularly occurring events such as performing scans for adjacent networks, Beniamini managed to overwrite specific regions of device memory with arbitrary shellcode. Beniamini’s code does nothing more than write a benign value to a specific memory address. Attackers could obviously exploit the same series of flaws to surreptitiously execute malicious code on vulnerable devices within range of a rogue access point.

Slashdot thread.

Posted on April 6, 2017 at 7:52 AM • 39 Comments

APT10 and Cloud Hopper

There’s a new report of a nation-state attack, presumed to be from China, on a series of managed service providers. From the executive summary:

Since late 2016, PwC UK and BAE Systems have been assisting victims of a new cyber espionage campaign conducted by a China-based threat actor. We assess this threat actor to almost certainly be the same as the threat actor widely known within the security community as ‘APT10’. The campaign, which we refer to as Operation Cloud Hopper, has targeted managed IT service providers (MSPs), allowing APT10 unprecedented potential access to the intellectual property and sensitive data of those MSPs and their clients globally. A number of Japanese organisations have also been directly targeted in a separate, simultaneous campaign by the same actor.

We have identified a number of key findings that are detailed below.

APT10 has recently unleashed a sustained campaign against MSPs. The compromise of MSP networks has provided broad and unprecedented access to MSP customer networks.

  • Multiple MSPs were almost certainly being targeted from 2016 onwards, and it is likely that APT10 had already begun to do so from as early as 2014.
  • MSP infrastructure has been used as part of a complex web of exfiltration routes spanning multiple victim networks.

[…]

APT10 focuses on espionage activity, targeting intellectual property and other sensitive data.

  • APT10 is known to have exfiltrated a high volume of data from multiple victims, exploiting compromised MSP networks, and those of their customers, to stealthily move this data around the world.
  • The targeted nature of the exfiltration we have observed, along with the volume of the data, is reminiscent of the previous era of APT campaigns pre-2013.

PwC UK and BAE Systems assess APT10 as highly likely to be a China-based threat actor.

  • It is a widely held view within the cyber security community that APT10 is a China-based threat actor.
  • Our analysis of the compile times of malware binaries, the registration times of domains attributed to APT10, and the majority of its intrusion activity indicates a pattern of work in line with China Standard Time (UTC+8).
  • The threat actor’s targeting of diplomatic and political organisations in response to geopolitical tensions, as well as the targeting of specific commercial enterprises, is closely aligned with strategic Chinese interests.
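
That compile-time analysis is straightforward to reproduce in outline: shift each binary’s UTC compile timestamp into China Standard Time and look for a working-hours pattern. Here’s a sketch with invented timestamps:

```python
# Sketch of the compile-time analysis: shift UTC compile timestamps into
# China Standard Time (UTC+8) and histogram the hour of day. The timestamps
# are invented, not from actual APT10 binaries.
from collections import Counter
from datetime import datetime, timedelta, timezone

CST = timezone(timedelta(hours=8))  # China Standard Time

# Hypothetical PE compile timestamps (UTC) pulled from malware samples.
compile_times_utc = [
    datetime(2016, 11, 3, 1, 12, tzinfo=timezone.utc),
    datetime(2016, 11, 3, 2, 45, tzinfo=timezone.utc),
    datetime(2016, 12, 9, 6, 30, tzinfo=timezone.utc),
    datetime(2017, 1, 17, 3, 5, tzinfo=timezone.utc),
]

def hours_in_cst(timestamps) -> Counter:
    """Count compile activity by local (UTC+8) hour of day."""
    return Counter(ts.astimezone(CST).hour for ts in timestamps)

# All four samples land in a 09:00-17:59 CST window: a workday pattern.
assert all(9 <= hour < 18 for hour in hours_in_cst(compile_times_utc))
```

Compile timestamps can be forged, of course, which is why the report combines them with domain-registration times and intrusion activity before drawing conclusions.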

I know nothing more than what’s in this report, but it looks like a big one.

Press release.

Posted on April 5, 2017 at 10:42 AM • 50 Comments

Clever Physical ATM Attack

This is an interesting combination of computer and physical attack:

Researchers from the Russian security firm Kaspersky on Monday detailed a new ATM-emptying attack, one that mixes digital savvy with a very precise form of physical penetration. Kaspersky’s team has even reverse engineered and demonstrated the attack, using only a portable power drill and a $15 homemade gadget that injects malicious commands to trigger the machine’s cash dispenser. And though they won’t name the ATM manufacturer or the banks affected, they warn that thieves have already used the drill attack across Russia and Europe, and that the technique could still leave ATMs around the world vulnerable to having their cash safes disemboweled in a matter of minutes.

“We wanted to know: To what extent can you control the internals of the ATM with one drilled hole and one connected wire? It turns out we can do anything with it,” says Kaspersky researcher Igor Soumenkov, who presented the research at the company’s annual Kaspersky Analyst Summit. “The dispenser will obey and dispense money, and it can all be done with a very simple microcomputer.”

Posted on April 5, 2017 at 6:29 AM43 Comments

Encryption Policy and Freedom of the Press

Interesting law journal article: “Encryption and the Press Clause,” by D. Victoria Baranetsky.

Abstract: Almost twenty years ago, a hostile debate over whether government could regulate encryption—later named the Crypto Wars—seized the country. At the center of this debate stirred one simple question: is encryption protected speech? This issue touched all branches of government, percolating from Congress, to the President, and eventually to the federal courts. In a waterfall of cases, several United States Courts of Appeals appeared to reach a consensus that encryption was protected speech under the First Amendment, and with that the Crypto Wars appeared to be over, until now.

Nearly twenty years later, the Crypto Wars have returned. Following recent mass shootings, law enforcement has once again questioned the legal protection for encryption and tried to implement “backdoor” techniques to access messages sent over encrypted channels. In Apple v. FBI, the agency tried to compel Apple to grant access to the iPhone of a San Bernardino shooter. The case was never decided, but the legal arguments briefed before the court were essentially the same as they were two decades prior. Apple and amici supporting the company argued that encryption was protected speech.

While these arguments remain convincing, circumstances have changed in ways that should be reflected in the legal doctrines that lawyers use. Unlike twenty years ago, today surveillance is ubiquitous, and the need for encryption is no longer felt by a select few. Encryption has become necessary for even the most basic exchange of information given that most Americans share “nearly every aspect of their lives ­—from the mundane to the intimate” over the Internet, as stated in a recent Supreme Court opinion.

Given these developments, lawyers might consider a new justification under the Press Clause. In addition to the many doctrinal concerns that exist with protection under the Speech Clause, the Press Clause is normatively and descriptively more accurate at protecting encryption as a tool for secure communication without fear of government surveillance. This Article outlines that framework by examining the historical and theoretical transformation of the Press Clause since its inception.

Edited to Add (4/12): Follow-up article.

Posted on April 4, 2017 at 2:14 PM21 Comments

Acoustic Attack Against Accelerometers

Interesting acoustic attack against the MEMS accelerometers in devices like FitBits.

Millions of accelerometers reside inside smartphones, automobiles, medical devices, anti-theft devices, drones, IoT devices, and many other industrial and consumer applications. Our work investigates how analog acoustic injection attacks can damage the digital integrity of the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs. Our contributions include (1) modeling the physics of malicious acoustic interference on MEMS accelerometers, (2) discovering the circuit-level security flaws that cause the vulnerabilities by measuring acoustic injection attacks on MEMS accelerometers as well as systems that employ these sensors, and (3) two software-only defenses that mitigate many of the risks to the integrity of MEMS accelerometer outputs.

This is not that big a deal with things like FitBits, but as IoT devices get more autonomous—and start making decisions and then putting them into effect automatically—these vulnerabilities will become critical.
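One generic mitigation in the spirit of the paper's software-only defenses is to attenuate high-frequency components—such as acoustically induced resonance—before sensor readings reach control logic. A sketch using a simple exponential moving average (this is an illustration of the principle, not the paper's specific defenses):

```python
def low_pass(samples, alpha=0.1):
    """Exponential moving average over raw accelerometer samples.

    High-frequency oscillation (e.g. resonance injected by an acoustic
    attack) is strongly attenuated, while the slowly varying true
    acceleration passes through.
    """
    filtered = []
    state = samples[0]
    for s in samples:
        state = alpha * s + (1 - alpha) * state
        filtered.append(state)
    return filtered

# A steady 1g reading with an injected high-frequency +/-0.5g oscillation:
raw = [1.0 + (0.5 if i % 2 else -0.5) for i in range(100)]
smooth = low_pass(raw)
```

After filtering, the output settles near the true 1g value with only a small residual ripple; the trade-off is added latency in responding to genuine rapid movement.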

Academic paper.

Posted on April 4, 2017 at 6:23 AM13 Comments
