Detecting When a Smartphone Has Been Compromised

Andrew "bunnie" Huang and Edward Snowden have designed a smartphone case that detects unauthorized transmissions by the phone. Paper. Three news articles.

Looks like a clever design. Of course, it has to be outside the device; otherwise, it could be compromised along with the device. Note that this is still in the research design stage; there are no public prototypes.

Posted on July 27, 2016 at 1:09 PM • 12 Comments

The NSA and "Intelligence Legalism"

Interesting law journal paper: "Intelligence Legalism and the National Security Agency's Civil Liberties Gap," by Margo Schlanger:

Abstract: This paper examines the National Security Agency, its compliance with legal constraints and its respect for civil liberties. But even if perfect compliance could be achieved, it is too paltry a goal. A good oversight system needs its institutions not just to support and enforce compliance but also to design good rules. Yet as will become evident, the offices that make up the NSA's compliance system are nearly entirely compliance offices, not policy offices; they work to improve compliance with existing rules, but not to consider the pros and cons of more individually-protective rules and try to increase privacy or civil liberties where the cost of doing so is acceptable. The NSA and the administration in which it sits have thought of civil liberties and privacy only in compliance terms. That is, they have asked only "Can we (legally) do X?" and not "Should we do X?" This preference for the can question over the should question is part and parcel, I argue, of a phenomenon I label "intelligence legalism," whose three crucial and simultaneous features are imposition of substantive rules given the status of law rather than policy; some limited court enforcement of those rules; and empowerment of lawyers. Intelligence legalism has been a useful corrective to the lawlessness that characterized surveillance prior to intelligence reform, in the late 1970s. But I argue that it gives systematically insufficient weight to individual liberty, and that its relentless focus on rights, and compliance, and law has obscured the absence of what should be an additional focus on interests, or balancing, or policy. More is needed; additional attention should be directed both within the NSA and by its overseers to surveillance policy, weighing the security gains from surveillance against the privacy and civil liberties risks and costs. That attention will not be a panacea, but it can play a useful role in filling the civil liberties gap intelligence legalism creates.

This is similar to what I wrote in Data and Goliath:

There are two levels of oversight. The first is strategic: are the rules we're imposing the correct ones? For example, the NSA can implement its own procedures to ensure that it's following the rules, but it should not get to decide what rules it should follow....

The other kind of oversight is tactical: are the rules being followed? Mechanisms for this kind of oversight include procedures, audits, approvals, troubleshooting protocols, and so on. The NSA, for example, trains its analysts in the regulations governing their work, audits systems to ensure that those regulations are actually followed, and has instituted reporting and disciplinary procedures for occasions when they're not.

It's not enough that the NSA makes sure there is a colorable legal interpretation that authorizes what they do. We need to make sure that their understanding of the law is shared with the outside world, and that what they're doing is a good idea.

EDITED TO ADD: The paper is from 2014. Also worth reading are these two related essays.

Posted on July 27, 2016 at 6:47 AM • 5 Comments

Russian Hack of the DNC

Amazingly enough, the preponderance of the evidence points to Russia as the source of the DNC leak. I was going to summarize the evidence, but Thomas Rid did a great job here. Much of that is based on the June forensic analysis by CrowdStrike, which I wrote about here. More analysis here.

Jack Goldsmith discusses the political implications.

The FBI is investigating. It's not unreasonable to expect that the NSA has some additional intelligence on this attack, similar to what it had on the North Korean attack on Sony.

EDITED TO ADD (7/27): More on the FBI's investigation. Another summary of the evidence pointing to Russia.

Posted on July 26, 2016 at 1:40 PM • 108 Comments

Tracking the Owner of Kickass Torrents

Here's the story of how it was done. First, a fake ad on torrent listings linked the site to a Latvian bank account, an e-mail address, and a Facebook page.

Using basic website-tracking services, Der-Yeghiayan was able to uncover (via a reverse DNS search) the hosts of seven apparent KAT website domains. This dug up two Chicago IP addresses, which were used as KAT name servers for more than four years. Agents were then able to legally obtain a copy of the server's access logs (explaining why it was federal authorities in Chicago that eventually charged Vaulin with his alleged crimes).

Using similar tools, Homeland Security investigators also performed something called a WHOIS lookup on a domain that redirected people to the main KAT site. A WHOIS search can provide the name, address, email and phone number of a website registrant. In the case of the redirect domain, that was Artem Vaulin from Kharkiv, Ukraine.
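To illustrate what a WHOIS lookup surfaces, here is a minimal sketch of pulling registrant fields out of a raw WHOIS response. The sample record and every name and address in it are invented for illustration; they are not the actual data from the case, and real WHOIS output varies widely by registrar.

```python
def parse_whois(raw: str) -> dict:
    """Parse the 'Key: Value' lines of a WHOIS response into a dict.

    Later duplicate keys overwrite earlier ones, which is fine for
    this illustration.
    """
    fields = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key and value:
                fields[key] = value
    return fields

# Hypothetical WHOIS response for illustration only.
SAMPLE = """\
Domain Name: example-redirect.com
Registrant Name: J. Doe
Registrant City: Kharkiv
Registrant Country: UA
Registrant Email: jdoe@example.com
"""

record = parse_whois(SAMPLE)
```

The registrant email recovered this way is exactly the kind of identifier that the next step of the investigation pivots on.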

Der-Yeghiayan was able to link the email address found in the WHOIS lookup to an Apple email address that Vaulin purportedly used to operate KAT. It's this Apple account that appears to tie all of the pieces of Vaulin's alleged involvement together.

On July 31st 2015, records provided by Apple show that the account was used to purchase something on iTunes. The logs show that the same IP address was used on the same day to access the KAT Facebook page. After KAT began accepting Bitcoin donations in 2012, $72,767 was moved into a Coinbase account in Vaulin's name. That Bitcoin wallet was registered with the same email address.
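The investigative pattern in the paragraph above is record linkage: the same identifier (an email address, an IP address) showing up across independent services corroborates that one identity sits behind all of them. A hedged sketch of that join, using entirely invented identifiers:

```python
from collections import defaultdict

def link_records(*datasets):
    """Group records from multiple sources by a shared identifier.

    Each dataset is a list of (source, identifier) pairs; the result
    maps each identifier to the set of sources it appears in, keeping
    only identifiers seen at more than one service.
    """
    seen = defaultdict(set)
    for records in datasets:
        for source, identifier in records:
            seen[identifier].add(source)
    return {k: v for k, v in seen.items() if len(v) > 1}

# Hypothetical log extracts -- none of these are real case data.
itunes   = [("itunes", "203.0.113.7"), ("itunes", "user@example.com")]
facebook = [("facebook", "203.0.113.7")]
coinbase = [("coinbase", "user@example.com")]

overlap = link_records(itunes, facebook, coinbase)
```

Here the same IP address ties the iTunes purchase to the Facebook access, and the same email address ties the Apple account to the Coinbase registration, mirroring the structure of the evidence described above.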

Another article.

Posted on July 26, 2016 at 6:42 AM • 21 Comments

The Economist on Hacking the Financial System

The Economist has an article on the potential hacking of the global financial system, either for profit or to cause mayhem. It's reasonably balanced.

So how might such an attack unfold? Step one, several months before mayhem is unleashed, is to get into the system. Financial institutions have endless virtual doors that could be used to trespass, but one of the easiest to force is still the front door. By getting someone who works at an FMI or a partner company to click on a corrupt link through a "phishing" attack (an attempt to get hold of sensitive information by masquerading as someone trustworthy), or stealing their credentials when they use public Wi-Fi, hackers can impersonate them and install malware to watch over employees' shoulders and see how the institution's system functions. This happened in the Carbanak case: hackers installed a "RAT" (remote-access tool) to make videos of employees' computers.

Step two is to study the system and set up booby traps. Once in, the gang quietly observes the quirks and defences of the system in order to plan the perfect attack from within; hackers have been known to sit like this for years. Provided they are not detected, they pick their places to plant spyware or malware that can be activated at the click of a button.

Step three is the launch. One day, preferably when there is already distracting market turmoil, they unleash a series of attacks on, say, multiple clearing houses.

The attackers might start with small changes, tweaking numbers in transactions as they are processed (Bank A gets credited $1,000, for example, but on the other side of the transaction Bank B is debited $0, or $900 or $100,000). As lots of erroneous payments travel the globe, and as it becomes clear that these are not just "glitches", eventually the entire system would be deemed unreliable. Unsure how much money they have, banks could not settle their books when markets close. Settlement is a legally defined, binding moment. Regulators and central banks would become agitated if they could not see how solvent the nation's banks were at the end of the financial day.
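The tampering described above violates the basic double-entry invariant: across both legs of a transfer, the amount credited to one bank must equal the amount debited from the other. A minimal sketch of the reconciliation check that would flag such transactions, with hypothetical records:

```python
def find_mismatches(transfers):
    """Return transfers whose credit and debit legs disagree.

    Each transfer is (tx_id, credited_cents, debited_cents). Amounts
    are integers in cents to avoid floating-point rounding surprises.
    """
    return [t for t in transfers if t[1] != t[2]]

# Hypothetical transfer log mirroring the article's example.
transfers = [
    ("tx1", 100_000, 100_000),     # consistent: $1,000 on both legs
    ("tx2", 100_000, 0),           # tampered: credit with no debit
    ("tx3", 100_000, 10_000_000),  # tampered: debit inflated to $100,000
]

bad = find_mismatches(transfers)
```

The check itself is trivial; the article's point is that if attackers corrupt enough records, even a clean reconciliation pass can no longer tell which side of a mismatched transfer is the true one.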

In many aspects of our society, as attackers become more powerful the potential for catastrophe increases. We need to ensure that the likelihood of catastrophe remains low.

Posted on July 25, 2016 at 6:10 AM • 32 Comments

Friday Squid Blogging: Sperm Whale Eats Squid

A post-mortem of a stranded sperm whale shows that he had recently eaten squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on July 22, 2016 at 4:14 PM • 150 Comments

Cyberweapons vs. Nuclear Weapons

Good essay pointing out the absurdity of comparing cyberweapons with nuclear weapons.

On the surface, the analogy is compelling. Like nuclear weapons, the most powerful cyberweapons -- malware capable of permanently damaging critical infrastructure and other key assets of society -- are potentially catastrophically destructive, have short delivery times across vast distances, and are nearly impossible to defend against. Moreover, only the most technically competent of states appear capable of wielding cyberweapons to strategic effect right now, creating the temporary illusion of an exclusive cyber club. To some leaders who matured during the nuclear age, these tempting similarities and the pressing nature of the strategic cyberthreat provide firm justification to use nuclear deterrence strategies in cyberspace. Indeed, Cold War-style cyberdeterrence is one of the foundational cornerstones of the 2015 U.S. Department of Defense Cyber Strategy.

However, dive a little deeper and the analogy becomes decidedly less convincing. At the present time, strategic cyberweapons simply do not share the three main deterrent characteristics of nuclear weapons: the sheer destructiveness of a single weapon, the assuredness of that destruction, and a broad debate over the use of such weapons.

Posted on July 22, 2016 at 11:08 AM • 28 Comments

DARPA Document: "On Countering Strategic Deception"

Old, but interesting. The document was published by DARPA in 1973 and approved for release in 2007. It examines the role of deception in strategic warning systems, and possible actions to protect against strategic foreign deception.

Posted on July 21, 2016 at 9:54 AM • 7 Comments

Detecting Spoofed Messages Using Clock Skew

Two researchers are working on a system to detect spoofed messages sent to automobiles by fingerprinting the clock skew of the various computer components within the car, and then detecting when those skews are off. It's a clever system, with applications outside of automobiles (and isn't new).

To perform that fingerprinting, they use a weird characteristic of all computers: tiny timing errors known as "clock skew." Taking advantage of the fact that those errors are different in every computer -- including every computer inside a car -- the researchers were able to assign a fingerprint to each ECU based on its specific clock skew. The CIDS device then uses those fingerprints to differentiate between the ECUs, and to spot when one ECU impersonates another, like when a hacker corrupts the vehicle's radio system to spoof messages that are meant to come from a brake pedal or steering system.

Paper: "Fingerprinting Electronic Control Units for Vehicle Intrusion Detection," by Kyong-Tak Cho and Kang G. Shin.

Abstract: As more software modules and external interfaces are getting added on vehicles, new attacks and vulnerabilities are emerging. Researchers have demonstrated how to compromise in-vehicle Electronic Control Units (ECUs) and control the vehicle maneuver. To counter these vulnerabilities, various types of defense mechanisms have been proposed, but they have not been able to meet the need of strong protection for safety-critical ECUs against in-vehicle network attacks. To mitigate this deficiency, we propose an anomaly-based intrusion detection system (IDS), called Clock-based IDS (CIDS). It measures and then exploits the intervals of periodic in-vehicle messages for fingerprinting ECUs. The thus-derived fingerprints are then used for constructing a baseline of ECUs' clock behaviors with the Recursive Least Squares (RLS) algorithm. Based on this baseline, CIDS uses Cumulative Sum (CUSUM) to detect any abnormal shifts in the identification errors -- a clear sign of intrusion. This allows quick identification of in-vehicle network intrusions with a low false-positive rate of 0.055%. Unlike state-of-the-art IDSs, if an attack is detected, CIDS's fingerprinting of ECUs also facilitates a root-cause analysis: identifying which ECU mounted the attack. Our experiments on a CAN bus prototype and on real vehicles have shown CIDS to be able to detect a wide range of in-vehicle network attacks.
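The core idea is simple enough to sketch: an ECU sends a given message nominally every T milliseconds, but its local crystal runs slightly fast or slow, so the accumulated offset from the ideal schedule drifts linearly at a rate (the skew) that is a stable hardware fingerprint. A minimal batch least-squares version with synthetic timings -- the paper's actual system uses Recursive Least Squares online plus a CUSUM test on the identification errors, and the skew values below are invented:

```python
def estimate_skew(arrival_times, period):
    """Least-squares slope of accumulated offset vs. elapsed time."""
    t0 = arrival_times[0]
    xs = [t - t0 for t in arrival_times]
    # Offset of each arrival from its ideal periodic schedule.
    ys = [x - i * period for i, x in enumerate(xs)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

PERIOD = 0.010  # ECU nominally sends every 10 ms

# Genuine ECU: clock runs 0.1% fast, so skew is about +0.001.
genuine = [i * PERIOD * 1.001 for i in range(200)]
# Attacker spoofing the same message ID from hardware whose
# clock runs 0.5% slow -- a noticeably different skew.
spoofed = [i * PERIOD * 0.995 for i in range(200)]

baseline = estimate_skew(genuine, PERIOD)
observed = estimate_skew(spoofed, PERIOD)
suspicious = abs(observed - baseline) > 0.002  # crude fixed threshold
```

The spoofed stream carries the right message ID but the wrong clock fingerprint, which is exactly the discrepancy CIDS is built to detect.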

Posted on July 20, 2016 at 7:26 AM • 33 Comments
