Blog: August 2018 Archives

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m giving a book talk on Click Here to Kill Everybody at the Ford Foundation in New York City, on September 5, 2018.
  • The Aspen Institute’s Cybersecurity & Technology Program is holding a book launch for Click Here to Kill Everybody on September 10, 2018 in Washington, DC.
  • I’m speaking about my book Click Here to Kill Everybody: Security and Survival in a Hyper-connected World at Brattle Theatre in Cambridge, Massachusetts on September 11, 2018.
  • I’m giving a keynote on supply chain security at Tehama’s “De-Risking Your Global Workforce” event in New York City on September 12, 2018.
  • I’ll be appearing at an Atlantic event on Protecting Privacy in Washington, DC on September 13, 2018.
  • I’ll be speaking at the 2018 TTI/Vanguard Conference in Washington, DC on September 13, 2018.
  • I’m giving a book talk at Fordham Law School in New York City on September 17, 2018.
  • I’m giving an InfoGuard Talk in Zug, Switzerland on September 19, 2018.
  • I’m speaking at the IBM Security Summit in Stockholm on September 20, 2018.
  • I’m giving a book talk at Harvard Law School’s Wasserstein Hall on September 25, 2018.
  • I’m giving a talk on “Securing a World of Physically Capable Computers” at the University of Rochester in Rochester, New York on October 5, 2018.
  • I’m keynoting at SpiceWorld in Austin, Texas on October 9, 2018.
  • I’m speaking at Cyber Security Nordic in Helsinki on October 10, 2018.
  • I’m speaking at the Cyber Security Summit in Minneapolis, Minnesota on October 24, 2018.
  • I’m speaking at ISF’s 29th Annual World Congress in Las Vegas, Nevada on October 30, 2018.
  • I’m speaking at Kiwicon in Wellington, New Zealand on November 16, 2018.
  • I’m speaking at The Digital Society Conference 2018: Empowering Ecosystems on December 11, 2018.
  • I’m speaking at the Hyperledger Forum in Basel, Switzerland on December 13, 2018.

The list is maintained on this page.

Posted on August 31, 2018 at 1:37 PM • 32 Comments

Cheating in Bird Racing

I’ve previously written about people cheating in marathon racing by driving—or otherwise getting near the end of the race by faster means than running. In China, two people were convicted of cheating in a pigeon race:

The essence of the plan involved training the pigeons to believe they had two homes. The birds had been secretly raised not just in Shanghai but also in Shangqiu.

When the race was held in the spring of last year, the Shanghai Pigeon Association took all the entrants from Shanghai to Shangqiu and released them. Most of the pigeons started flying back to Shanghai.

But the four specially raised pigeons flew instead to their second home in Shangqiu. According to the court, the two men caught the birds there and then carried them on a bullet train back to Shanghai, concealed in milk cartons. (China prohibits live animals on bullet trains.)

When the men arrived in Shanghai, they released the pigeons, which quickly fluttered to their Shanghai loft, seemingly winning the race.

Posted on August 30, 2018 at 6:34 AM • 22 Comments

CIA Network Exposed through Insecure Communications System

Interesting story of a CIA intelligence network in China that was exposed partly because of a computer security failure:

Although they used some of the same coding, the interim system and the main covert communication platform used in China at this time were supposed to be clearly separated. In theory, if the interim system were discovered or turned over to Chinese intelligence, people using the main system would still be protected—and there would be no way to trace the communication back to the CIA. But the CIA’s interim system contained a technical error: It connected back architecturally to the CIA’s main covert communications platform. When the compromise was suspected, the FBI and NSA both ran “penetration tests” to determine the security of the interim system. They found that cyber experts with access to the interim system could also access the broader covert communications system the agency was using to interact with its vetted sources, according to the former officials.

In the words of one of the former officials, the CIA had “fucked up the firewall” between the two systems.

U.S. intelligence officers were also able to identify digital links between the covert communications system and the U.S. government itself, according to one former official—links the Chinese agencies almost certainly found as well. These digital links would have made it relatively easy for China to deduce that the covert communications system was being used by the CIA. In fact, some of these links pointed back to parts of the CIA’s own website, according to the former official.

People died because of that mistake.

The moral—which is to go back to pre-computer systems in these high-risk sophisticated-adversary circumstances—is the right one, I think.

Posted on August 29, 2018 at 8:10 AM • 28 Comments

Future Cyberwar

A report for the Center for Strategic and International Studies looks at surprise and war. One of the report’s cyberwar scenarios is particularly compelling. It doesn’t just map cyber onto today’s tactics, but completely reimagines future tactics that include a cyber component (quote starts on page 110).

The U.S. secretary of defense had wondered this past week when the other shoe would drop. Finally, it had, though the U.S. military would be unable to respond effectively for a while.

The scope and detail of the attack, not to mention its sheer audacity, had earned the grudging respect of the secretary. Years of worry about a possible Chinese “Assassin’s Mace”—a silver bullet super-weapon capable of disabling key parts of the American military—turned out to be focused on the wrong thing.

The cyber attacks varied. Sailors stationed at the 7th Fleet’s homeport in Japan awoke one day to find their financial accounts, and those of their dependents, empty. Checking, savings, retirement funds: simply gone. The Marines based on Okinawa were under virtual siege by the populace, whose simmering resentment at their presence had boiled over after a YouTube video posted under the account of a Marine stationed there had gone viral. The video featured a dozen Marines drunkenly gang-raping two teenaged Okinawan girls. The video was vivid, the girls’ cries heart-wrenching, the cheers of Marines sickening. And all of it fake. The National Security Agency’s initial analysis of the video had uncovered digital fingerprints showing that it was a computer-assisted lie, and could prove that the Marine’s account under which it had been posted was hacked. But the damage had been done.

There was the commanding officer of Edwards Air Force Base whose Internet browser history had been posted on the squadron’s Facebook page. His command turned on him as a pervert; his weak protestations that he had not visited most of the posted links could not counter his admission that he had, in fact, trafficked some of them. Lies mixed with the truth. Soldiers at Fort Sill were at each other’s throats thanks to a series of text messages that allegedly unearthed an adultery ring on base.

The variations elsewhere were endless. Marines suddenly owed hundreds of thousands of dollars on credit lines they had never opened; sailors received death threats on their Twitter feeds; spouses and female service members had private pictures of themselves plastered across the Internet; older service members received notifications about cancerous conditions discovered in their latest physical.

Leadership was not exempt. Under the hashtag #PACOMMUSTGO a dozen women allegedly described harassment by the commander of Pacific Command. Editorial writers demanded that, under the administration’s “zero tolerance” policy, he step aside while Congress held hearings.

There was not an American service member or dependent whose life had not been digitally turned upside down. In response, the secretary had declared “an operational pause,” directing units to stand down until things were sorted out.

Then, China had made its move, flooding the South China Sea with its conventional forces, enforcing a sea and air identification zone there, and blockading Taiwan. But the secretary could only respond weakly with a few air patrols and diversions of ships already at sea. Word was coming in through back channels that the Taiwanese government, suddenly stripped of its most ardent defender, was already considering capitulation.

I found this excerpt here. The author is Mark Cancian.

Posted on August 27, 2018 at 6:16 AM • 53 Comments

John Mueller and Mark Stewart on the Risks of Terrorism

Another excellent paper by the Mueller/Stewart team: “Terrorism and Bathtubs: Comparing and Assessing the Risks”:

Abstract: The likelihood that anyone outside a war zone will be killed by an Islamist extremist terrorist is extremely small. In the United States, for example, some six people have perished each year since 9/11 at the hands of such terrorists—vastly smaller than the number of people who die in bathtub drownings. Some argue, however, that the incidence of terrorist destruction is low because counterterrorism measures are so effective. They also contend that terrorism may well become more frequent and destructive in the future as terrorists plot and plan and learn from experience, and that terrorism, unlike bathtubs, provides no benefit and exacts costs far beyond those in the event itself by damagingly sowing fear and anxiety and by requiring policy makers to adopt countermeasures that are costly and excessive. This paper finds these arguments to be wanting. In the process, it concludes that terrorism is rare outside war zones because, to a substantial degree, terrorists don’t exist there. In general, as with rare diseases that kill few, it makes more policy sense to expend limited funds on hazards that inflict far more damage. It also discusses the issue of risk communication for this hazard.

Posted on August 23, 2018 at 5:54 AM • 53 Comments

"Two Stage" BMW Theft Attempt

Modern cars have alarm systems that automatically connect to a remote call center. This makes cars harder to steal, since tripping the alarm causes a quick response. This article describes a theft attempt that tried to neutralize that security system. In the first attack, the thieves just disabled the alarm system and then left. If the owner had not immediately repaired the car, the thieves would have returned the next night and—no longer working under time pressure—stolen the car.

Posted on August 21, 2018 at 5:58 AM • 33 Comments

New Ways to Track Internet Browsing

Interesting research on web tracking: “Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies”:

Abstract: Nowadays, cookies are the most prominent mechanism to identify and authenticate users on the Internet. Although protected by the Same Origin Policy, popular browsers include cookies in all requests, even when these are cross-site. Unfortunately, these third-party cookies enable both cross-site attacks and third-party tracking. As a response to these nefarious consequences, various countermeasures have been developed in the form of browser extensions or even protection mechanisms that are built directly into the browser.

In this paper, we evaluate the effectiveness of these defense mechanisms by leveraging a framework that automatically evaluates the enforcement of the policies imposed to third-party requests. By applying our framework, which generates a comprehensive set of test cases covering various web mechanisms, we identify several flaws in the policy implementations of the 7 browsers and 46 browser extensions that were evaluated. We find that even built-in protection mechanisms can be circumvented by multiple novel techniques we discover. Based on these results, we argue that our proposed framework is a much-needed tool to detect bypasses and evaluate solutions to the exposed leaks. Finally, we analyze the origin of the identified bypass techniques, and find that these are due to a variety of implementation, configuration and design flaws.

The researchers discovered many new tracking techniques that work despite all existing anonymous browsing tools. These have not yet been seen in the wild, but that will change soon.
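For context on what these defenses try to enforce: whether a browser attaches a cookie to a cross-site request is governed in part by the cookie’s SameSite attribute. Below is a minimal sketch, not from the paper, of the server side of that policy; the cookie names and values are made up, and browsers only honor SameSite=None when the Secure flag is set and the connection is HTTPS.

    # Minimal sketch: two cookies with different cross-site policies.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TrackerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # SameSite=None (plus Secure) asks the browser to include this
            # cookie on cross-site requests -- the third-party tracking case.
            self.send_header("Set-Cookie", "track_id=abc123; SameSite=None; Secure")
            # SameSite=Lax limits the cookie to same-site requests and
            # top-level navigations, blocking most third-party tracking.
            self.send_header("Set-Cookie", "session=xyz789; SameSite=Lax; HttpOnly")
            self.end_headers()
            self.wfile.write(b"cookies set")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), TrackerHandler).serve_forever()

The paper’s finding is that browser and extension enforcement of exactly these kinds of policies is inconsistent enough to be bypassed.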

Three news articles. BoingBoing post.

Posted on August 17, 2018 at 5:26 AM • 16 Comments

Speculation Attack Against Intel's SGX

Another speculative-execution attack against Intel’s SGX:

At a high level, SGX is a new feature in modern Intel CPUs which allows computers to protect users’ data even if the entire system falls under the attacker’s control. While it was previously believed that SGX is resilient to speculative execution attacks (such as Meltdown and Spectre), Foreshadow demonstrates how speculative execution can be exploited for reading the contents of SGX-protected memory as well as extracting the machine’s private attestation key. Making things worse, due to SGX’s privacy features, an attestation report cannot be linked to the identity of its signer. Thus, it only takes a single compromised SGX machine to erode trust in the entire SGX ecosystem.

News article.

The details of the Foreshadow attack are a little more complicated than those of Meltdown. In Meltdown, the attempt to perform an illegal read of kernel memory triggers the page fault mechanism (by which the processor and operating system cooperate to determine which bit of physical memory a memory access corresponds to, or they crash the program if there’s no such mapping). Attempts to read SGX data from outside an enclave receive special handling by the processor: reads always return a specific value (-1), and writes are ignored completely. The special handling is called “abort page semantics” and should be enough to prevent speculative reads from being able to learn anything.

However, the Foreshadow researchers found a way to bypass the abort page semantics. The data structures used to control the mapping of virtual-memory addresses to physical addresses include a flag to say whether a piece of memory is present (loaded into RAM somewhere) or not. If memory is marked as not being present at all, the processor stops performing any further permissions checks and immediately triggers the page fault mechanism: this means that the abort page mechanics aren’t used. It turns out that applications can mark memory, including enclave memory, as not being present by removing all permissions (read, write, execute) from that memory.
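To make that last step concrete, here’s a minimal sketch, in Python for brevity, of the permission-revocation trick just described. The mprotect call and its effect on the present bit are real; everything else in the attack (the transient read and the cache side channel that recovers the leaked data) happens in native code and is omitted. Illustration only, not an exploit.

    import ctypes
    import mmap

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    PROT_NONE = 0  # revoke read, write, and execute

    # Map one anonymous page; mmap returns page-aligned memory.
    buf = mmap.mmap(-1, mmap.PAGESIZE)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

    # On Linux/x86, mprotect(..., PROT_NONE) clears the page-table entry's
    # "present" bit. Any architectural access now faults immediately --
    # and, per the Foreshadow researchers, a speculative access in this
    # state skipped the abort page check and could forward stale data
    # from the L1 cache before the fault took effect.
    if libc.mprotect(ctypes.c_void_p(addr), mmap.PAGESIZE, PROT_NONE) != 0:
        raise OSError(ctypes.get_errno(), "mprotect failed")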

EDITED TO ADD: Intel has responded:

L1 Terminal Fault is addressed by microcode updates released earlier this year, coupled with corresponding updates to operating system and hypervisor software that are available starting today. We’ve provided more information on our web site and continue to encourage everyone to keep their systems up-to-date, as it’s one of the best ways to stay protected.

I think this is the “more information” they’re referring to, although this is a comprehensive link to everything the company is saying about the vulnerability.

Posted on August 16, 2018 at 11:43 AM • 17 Comments

Hacking Police Bodycams

Surprising no one, the security of police bodycams is terrible.

Mitchell even realized that because he can remotely access device storage on models like the Fire Cam OnCall, an attacker could potentially plant malware on some of the cameras. Then, when the camera connects to a PC for syncing, it could deliver all sorts of malicious code: a Windows exploit that could ultimately allow an attacker to gain remote access to the police network, ransomware to spread across the network and lock everything down, a worm that infiltrates the department’s evidence servers and deletes everything, or even cryptojacking software to mine cryptocurrency using police computing resources. Even a body camera with no Wi-Fi connection, like the CeeSc, can be compromised if a hacker gets physical access. “You know not to trust thumb drives, but these things have the same ability,” Mitchell says.

BoingBoing post.

Posted on August 15, 2018 at 6:04 AM • 25 Comments

Google Tracks Its Users Even If They Opt Out of Tracking

Google is tracking you, even if you turn off tracking:

Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”

That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking.

For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude, accurate to the square foot, and save it to your Google account.

On the one hand, this isn’t surprising to technologists. Lots of applications use location data. On the other hand, it’s very surprising—and counterintuitive—to everyone else. And that’s why this is a problem.

I don’t think we should pick on Google too much, though. Google is a symptom of the bigger problem: surveillance capitalism in general. As long as surveillance is the business model of the Internet, things like this are inevitable.

BoingBoing story.

Good commentary.

Posted on August 14, 2018 at 6:22 AM • 55 Comments

Identifying Programmers by Their Coding Style

Fascinating research on de-anonymizing code—from either source code or compiled code:

Rachel Greenstadt, an associate professor of computer science at Drexel University, and Aylin Caliskan, Greenstadt’s former PhD student and now an assistant professor at George Washington University, have found that code, like other forms of stylistic expression, is not anonymous. At the DefCon hacking conference Friday, the pair will present a number of studies they’ve conducted using machine learning techniques to de-anonymize the authors of code samples. Their work could be useful in a plagiarism dispute, for instance, but it also has privacy implications, especially for the thousands of developers who contribute open source code to the world.
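As a rough illustration of the idea, and emphatically not the researchers’ actual pipeline (their published work uses richer features, including syntactic properties derived from the abstract syntax tree), here is a minimal sketch of attributing code to authors from low-level stylistic habits:

    # Toy authorship attribution from character n-grams, which capture
    # habits like spacing, brace placement, and identifier style.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline

    samples = [
        "for(int i=0;i<n;i++){sum+=a[i];}",                   # author A: terse
        "while(j<m){total+=b[j];j++;}",                       # author A
        "for (int i = 0; i < n; ++i) { sum += values[i]; }",  # author B: spaced out
        "for (int k = 0; k < len; ++k) { acc += data[k]; }",  # author B
    ]
    authors = ["A", "A", "B", "B"]

    clf = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(2, 4)),
        RandomForestClassifier(n_estimators=100, random_state=0),
    )
    clf.fit(samples, authors)
    print(clf.predict(["for(int x=0;x<k;x++){p+=q[x];}"]))  # most likely "A"

A real corpus needs many samples per author, but the pipeline shape is the same.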

Posted on August 13, 2018 at 4:02 PM • 25 Comments

Don't Fear the TSA Cutting Airport Security. Be Glad That They're Talking about It.

Last week, CNN reported that the Transportation Security Administration is considering eliminating security at U.S. airports that fly only smaller planes—60 seats or fewer. Passengers connecting to larger planes would clear security at their destinations.

To be clear, the TSA has put forth no concrete proposal. The internal agency working group’s report obtained by CNN contains no recommendations. It’s nothing more than 20 people examining the potential security risks of the policy change. It’s not even new: The TSA considered this back in 2011, and the agency reviews its security policies every year. But commentary around the news has been strongly negative. Regardless of the idea’s merit, it will almost certainly not happen. That’s the result of politics, not security: Sen. Charles E. Schumer (D-N.Y.), one of numerous outraged lawmakers, has already penned a letter to the agency saying that “TSA documents proposing to scrap critical passenger security screenings, without so much as a metal detector in place in some airports, would effectively clear the runway for potential terrorist attacks.” He continued, “It simply boggles the mind to even think that the TSA has plans like this on paper in the first place.”

We don’t know enough to conclude whether this is a good idea, but it shouldn’t be dismissed out of hand. We need to evaluate airport security based on concrete costs and benefits, and not continue to implement security theater based on fear. And we should applaud the agency’s willingness to explore changes in the screening process.

There is already a tiered system for airport security, varying for both airports and passengers. Many people are enrolled in TSA PreCheck, allowing them to go through checkpoints faster and with less screening. Smaller airports don’t have modern screening equipment like full-body scanners or CT baggage screeners, making it impossible for them to detect some plastic explosives. Any would-be terrorist is already able to pick and choose his flight conditions to suit his plot.

Over the years, I have written many essays critical of the TSA and airport security, in general. Most of it is security theater—measures that make us feel safer without improving security. For example, the liquids ban makes no sense as implemented, because there’s no penalty for repeatedly trying to evade the scanners. The full-body scanners are terrible at detecting the explosive material PETN if it is well concealed—which is their whole point.

There are two basic kinds of terrorists. The amateurs will be deterred or detected by even basic security measures. The professionals will figure out how to evade even the most stringent measures. I’ve repeatedly said that the two things that have made flying safer since 9/11 are reinforcing the cockpit doors and persuading passengers that they need to fight back. Everything beyond that isn’t worth it.

It’s always possible to increase security by adding more onerous—and expensive—procedures. If that were the only concern, we would all be strip-searched and prohibited from traveling with luggage. Realistically, we need to analyze whether the increased security of any measure is worth the cost, in money, time and convenience. We spend $8 billion a year on the TSA, and we’d like to get the most security possible for that money.

This is exactly what that TSA working group was doing. CNN reported that the group specifically evaluated the costs and benefits of eliminating security at minor airports, saving $115 million a year with a “small (nonzero) undesirable increase in risk related to additional adversary opportunity.” That money could be used to bolster security at larger airports or to reduce threats totally removed from airports.

We need more of this kind of thinking, not less. In 2017, political scientists Mark Stewart and John Mueller published a detailed evaluation of airport security measures based on the cost to implement and the benefit in terms of lives saved. They concluded that most of what our government does either isn’t effective at preventing terrorism or is simply too expensive to justify the security it does provide. Others might disagree with their conclusions, but their analysis provides enough detailed information to have a meaningful argument.

The more we politicize security, the worse off we are. People are generally terrible judges of risk. We fear threats in the news out of proportion with the actual dangers. We overestimate rare and spectacular risks, and underestimate commonplace ones. We fear specific “movie-plot threats” that we can bring to mind. That’s why we fear flying over driving, even though the latter kills about 35,000 people each year—about a 9/11’s worth of deaths each month. And it’s why the idea of the TSA eliminating security at minor airports fills us with fear. We can imagine the plot unfolding, only without Bruce Willis saving the day.

Very little today is immune to politics, including the TSA. It drove most of the agency’s decisions in the early years after the 9/11 terrorist attacks. That the TSA is willing to consider politically unpopular ideas is a credit to the organization. Let’s let them perform their analyses in peace.

This essay originally appeared in the Washington Post.

Posted on August 10, 2018 at 6:10 AM • 23 Comments

Detecting Phishing Sites with Machine Learning

Really interesting article:

A trained eye (or even a not-so-trained one) can discern when something phishy is going on with a domain or subdomain name. There are search tools, such as Censys.io, that allow humans to specifically search through the massive pile of certificate log entries for sites that spoof certain brands or functions common to identity-processing sites. But it’s not something humans can do in real time very well—which is where machine learning steps in.

StreamingPhish and the other tools apply a set of rules against the names within certificate log entries. In StreamingPhish’s case, these rules are the result of guided learning—a corpus of known good and bad domain names is processed and turned into a “classifier,” which (based on my anecdotal experience) can then fairly reliably identify potentially evil websites.
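Here’s a minimal sketch of that guided-learning approach. The corpus, features, and model are illustrative assumptions, not StreamingPhish’s actual code:

    # Toy phishing-name classifier over certificate-log domain names.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny labeled corpus: 1 = phishing-style name, 0 = benign.
    domains = [
        "paypal-secure-login.example.com",
        "appleid-verify.accounts-review.net",
        "bankofamerica.com.session-check.info",
        "github.com", "wikipedia.org", "pypi.org",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    clf = make_pipeline(
        # Character n-grams pick up substrings like "login" and "verify".
        TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
        LogisticRegression(),
    )
    clf.fit(domains, labels)

    # Probability that a new certificate-log entry is phishy.
    print(clf.predict_proba(["secure-paypal.verify-account.example"])[0][1])

Trained on tens of thousands of labeled names instead of six, the same pipeline can score new certificate-log entries as they stream in.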

Posted on August 9, 2018 at 6:17 AM • 11 Comments

SpiderOak's Warrant Canary Died

BoingBoing has the story.

I have never quite trusted the idea of a warrant canary. But here it seems to have worked. (Presumably, if SpiderOak wanted to replace the warrant canary with a transparency report, they would have written something explaining their decision. To have it simply disappear is what we would expect if SpiderOak were being forced to comply with a US government request for personal data.)

EDITED TO ADD (8/9): SpiderOak has posted an explanation claiming that the warrant canary did not die—it just changed.

That’s obviously false, because it did die. And a change is functionally equivalent to a death—that’s how warrant canaries work. So either they have received a National Security Letter and now have to pretend they did not, or they completely misunderstood what a warrant canary is and how it works. No one knows.

I have never fully trusted warrant canaries—this EFF post explains why—and this is an illustration.
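For readers unfamiliar with the mechanism: a warrant canary is a regularly republished statement that the provider has not received a secret order, so the signal is the statement’s absence or staleness, and watching one can be automated. A minimal monitoring sketch, with a hypothetical URL and date format; a real checker should also verify the canary’s cryptographic signature:

    import re
    import sys
    import urllib.request
    from datetime import datetime, timedelta

    CANARY_URL = "https://example.com/canary.txt"  # hypothetical
    MAX_AGE = timedelta(days=120)                  # assumed update cadence

    try:
        text = urllib.request.urlopen(CANARY_URL, timeout=10).read().decode()
    except Exception as exc:
        sys.exit(f"ALERT: canary unreachable ({exc}) -- treat as tripped")

    # Assumed format: the statement carries a "Date: YYYY-MM-DD" line.
    m = re.search(r"Date:\s*(\d{4}-\d{2}-\d{2})", text)
    if not m or datetime.now() - datetime.strptime(m.group(1), "%Y-%m-%d") > MAX_AGE:
        sys.exit("ALERT: canary missing or stale -- treat as tripped")
    print("Canary present and current.")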

Posted on August 8, 2018 at 9:37 AM • 97 Comments

Measuring the Rationality of Security Decisions

Interesting research: “Dancing Pigs or Externalities? Measuring the Rationality of Security Decisions”:

Abstract: Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant’s wage. We find that more than 50% of our participants made rational (e.g., utility-optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users’ decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users’ awareness of risks and context (R² = 0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a “one-size-fits-all” emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
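The paper’s “rational” benchmark is ordinary expected-utility arithmetic: adopt the security behavior when the expected loss it avoids exceeds its time cost. A toy version, with assumed numbers rather than the paper’s measured ones:

    # Toy rationality check for adopting two-factor authentication.
    risk_without_2fa = 0.10   # assumed probability of account compromise
    balance = 100.00          # assumed account value at stake
    expected_loss_avoided = risk_without_2fa * balance               # $10.00

    seconds_per_login, logins, wage_per_hour = 20, 30, 15.00         # assumed
    time_cost = (seconds_per_login * logins / 3600) * wage_per_hour  # $2.50

    print("rational to adopt 2FA:", expected_loss_avoided > time_cost)  # True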

Posted on August 7, 2018 at 6:40 AM • 12 Comments

Three of My Books Are Available in DRM-Free E-Book Format

Humble Bundle sells groups of e-books at ridiculously low prices, DRM free. This month, the bundles are all Wiley titles, including three of my books: Applied Cryptography, Secrets and Lies, and Cryptography Engineering. $15 gets you everything, and they’re all DRM-free.

Even better, a portion of the proceeds goes to the EFF. As a board member, I’ve seen the other side of this. It’s significant money.

Posted on August 3, 2018 at 2:10 PM • 25 Comments

How the US Military Can Better Keep Hackers

Interesting commentary:

The military is an impossible place for hackers thanks to antiquated career management, forced time away from technical positions, lack of mission, non-technical mid- and senior-level leadership, and staggering pay gaps, among other issues.

It is possible the military needs a cyber corps in the future, but by accelerating promotions, offering graduate school to newly commissioned officers, easing limited lateral entry for exceptional private-sector talent, and shortening the private/public pay gap, the military can better accommodate its most technical members now.

The model the author uses is military doctors.

Posted on August 3, 2018 at 6:21 AM • 30 Comments

GCHQ on Quantum Key Distribution

The UK’s GCHQ delivers a brutally blunt assessment of quantum key distribution:

QKD protocols address only the problem of agreeing keys for encrypting data. Ubiquitous on-demand modern services (such as verifying identities and data integrity, establishing network sessions, providing access control, and automatic software updates) rely more on authentication and integrity mechanisms—such as digital signatures—than on encryption.

QKD technology cannot replace the flexible authentication mechanisms provided by contemporary public key signatures. QKD also seems unsuitable for some of the grand future challenges such as securing the Internet of Things (IoT), big data, social media, or cloud applications.

I agree with them. It’s a clever idea, but basically useless in practice. I don’t even think it’s anything more than a niche solution in a world where quantum computers have broken our traditional public-key algorithms.

Read the whole thing. It’s short.

Posted on August 1, 2018 at 2:07 PM • 21 Comments
