Blog: December 2016 Archives

How Signal Is Evading Censorship

Signal, the encrypted messaging app I prefer, is being blocked in both Egypt and the UAE. Recently, the Signal team developed a workaround: domain fronting.

Signal’s new anti-censorship feature uses a trick called “domain fronting,” Marlinspike explains. A country like Egypt, with only a few small internet service providers tightly controlled by the government, can block any direct request to a service on its blacklist. But clever services can circumvent that censorship by hiding their traffic inside of encrypted connections to a major internet service, like the content delivery networks (CDNs) that host content closer to users to speed up their online experience—or in Signal’s case, Google’s App Engine platform, designed to host apps on Google’s servers.

“Now when people in Egypt or the United Arab Emirates send a Signal message, it’ll look identical to something like a Google search,” Marlinspike says. “The idea is that using Signal will look like using Google; if you want to block Signal you’ll have to block Google.”

The trick works because Google’s App Engine allows developers to redirect traffic from Google.com to their own domain. Google’s use of TLS encryption means that contents of the traffic, including that redirect request, are hidden, and the internet service provider can see only that someone has connected to Google.com. That essentially turns Google into a proxy for Signal, bouncing its traffic and fooling the censors.
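The mechanism can be sketched in a few lines. This is a toy model of the censor's vantage point, not Signal's actual code, and the hostnames are invented for illustration: the censor blocks on the only name it can see (the outer TLS domain), while the CDN or App Engine routes on the encrypted Host header.

```python
# A toy model of domain fronting. Hostnames are illustrative, not
# Signal's real infrastructure. The censor blocks on the only name it
# can see, the outer TLS domain (SNI); the CDN, after decrypting,
# routes on the inner HTTP Host header.

BLACKLIST = {"signal.org"}

def censor_allows(outer_domain):
    """The censor's decision, based solely on the visible TLS domain."""
    return outer_domain not in BLACKLIST

def fronted_request(front_domain, real_host):
    """Return (what the censor observes, where the CDN delivers)."""
    if not censor_allows(front_domain):
        return front_domain, None   # connection blocked outright
    # Inside the TLS tunnel, the Host header names the real destination.
    return front_domain, real_host

# A direct connection is blocked; a fronted one gets through.
assert fronted_request("signal.org", "signal.org") == ("signal.org", None)
seen, delivered = fronted_request("www.google.com", "signal-proxy.appspot.com")
assert seen == "www.google.com"
assert delivered == "signal-proxy.appspot.com"
```

The point of the model is the asymmetry: blocking decisions and routing decisions are made from two different pieces of information, and only one of them is visible to the censor.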

This isn’t a new trick (Tor uses it too, for example), but it does work.

Posted on December 28, 2016 at 6:20 AM • 55 Comments

Security Risks of TSA PreCheck

Former TSA Administrator Kip Hawley wrote an op-ed pointing out the security vulnerabilities in the TSA’s PreCheck program:

The first vulnerability in the system is its enrollment process, which seeks to verify an applicant’s identity. We know verification is a challenge: A 2011 Government Accountability Office report on TSA’s system for checking airport workers’ identities concluded that it was “not designed to provide reasonable assurance that only qualified applicants” got approved. It’s not a stretch to believe a reasonably competent terrorist could construct an identity that would pass PreCheck’s front end.

The other step in PreCheck’s “intelligence-driven, risk-based security strategy” is absurd on its face: The absence of negative information about a person doesn’t mean he or she is trustworthy. News reports are filled with stories of people who seemed to be perfectly normal right up to the moment they committed a heinous act. There is no screening algorithm and no database check that can accurately predict human behavior—especially on the scale of millions. It is axiomatic that terrorist organizations recruit operatives who have clean backgrounds and interview well.

None of this is news.

Back in 2004, I wrote:

Imagine you’re a terrorist plotter with half a dozen potential terrorists at your disposal. They all apply for a card, and three get one. Guess which are going on the mission? And they’ll buy round-trip tickets with credit cards and have a “normal” amount of luggage with them.

What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.

The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That’s simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.

I wrote much the same thing in 2007:

Background checks are based on the dangerous myth that we can somehow pick terrorists out of a crowd if we could identify everyone. Unfortunately, there isn’t any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could have Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn’t even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.

And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.

The truth is that whenever you create two paths through security—a high-security path and a low-security path—you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.

In a companion blog post, Hawley has more details about why the program doesn’t work:

In the sense that PreCheck bars people who were identified by intelligence or law enforcement agencies as possible terrorists, then it was intelligence-driven. But using that standard for PreCheck is ridiculous since those people already get extra screening or are on the No-Fly list. The movie Patriots Day, out now, reminds us of the tragic and preventable Boston Marathon bombing. The FBI sent agents to talk to the Tsarnaev brothers and investigate them as possible terror suspects. And cleared them. Even they did not meet the “intelligence-driven” definition used in PreCheck.

The other problem with “intelligence-driven” in the PreCheck context is that intelligence actually tells us the opposite: specifically, that terrorists pick clean operatives. If TSA used current intelligence to evaluate risk, it would not be out enrolling everybody it can into pre-9/11 security for everybody not flagged by the security services.

Hawley and I may agree on the problem, but we have completely opposite solutions. The op-ed was too short to include details, but they’re in a companion blog post. Basically, he wants to screen PreCheck passengers more:

In the interests of space, I left out details of what I would suggest as short- and medium-term solutions. Here are a few ideas:

  • Immediately scrub the PreCheck enrollees for false identities. That can probably be accomplished best and most quickly by getting permission from members and then using commercial data. If the results show that PreCheck has already been penetrated, the program should be suspended.
  • Deploy K-9 teams at PreCheck lanes.
  • Use behaviorally trained officers to interact with and check the credentials of PreCheck passengers.
  • Use Explosives Trace Detection cotton swabs on PreCheck passengers at a much higher rate. Same with removing shoes.
  • Turn on the body scanners and keep them fully utilized.
  • Allow liquids to stay in the carry-on since TSA scanners can detect threat liquids.
  • Work with the airlines to keep the PreCheck experience positive.
  • Work with airports to place PreCheck lanes away from regular checkpoints so as not to diminish lane capacity for non-PreCheck passengers. Rental Car check-in areas could be one alternative. Also, downtown check-in and screening (with secure transport to the airport) is a possibility.

These solutions completely ignore the data from the real-world experiment PreCheck has been. Hawley writes that PreCheck tells us that “terrorists pick clean operatives.” That’s exactly wrong. PreCheck tells us that, basically, there are no terrorists. If 1) it’s an easier way through airport security that terrorists will invariably use, and 2) there have been no instances of terrorists using it in the 10+ years it and its predecessors have been in operation, then the inescapable conclusion is that the threat is minimal. Instead of screening PreCheck passengers more, we should screen everybody else less. This is me in 2012: “I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security.”

I agree with Hawley that we need to overhaul airport security. Me in 2010: “Airport security is the last line of defense, and it’s not a very good one.” We need to recognize that the actual risk is much lower than we fear, and ratchet airport security down accordingly. And then we need to continue to invest in investigation and intelligence: security measures that work regardless of the tactic or target.

Posted on December 27, 2016 at 6:11 AM • 84 Comments

Encryption Working Group Annual Report from the US House of Representatives

The Encryption Working Group of the House Judiciary Committee and the House Energy and Commerce Committee has released its annual report.

Observation #1: Any measure that weakens encryption works against the national interest.

Observation #2: Encryption technology is a global technology that is widely and increasingly available around the world.

Observation #3: The variety of stakeholders, technologies, and other factors create different and divergent challenges with respect to encryption and the “going dark” phenomenon, and therefore there is no one-size-fits-all solution to the encryption challenge.

Observation #4: Congress should foster cooperation between the law enforcement community and technology companies.

Posted on December 21, 2016 at 9:25 AM • 52 Comments

Google Releases Crypto Test Suite

Google has released Project Wycheproof—a test suite designed to test cryptographic libraries against a series of known attacks. From a blog post:

In cryptography, subtle mistakes can have catastrophic consequences, and mistakes in open source cryptographic software libraries repeat too often and remain undiscovered for too long. Good implementation guidelines, however, are hard to come by: understanding how to implement cryptography securely requires digesting decades’ worth of academic literature. We recognize that software engineers fix and prevent bugs with unit testing, and we found that many cryptographic issues can be resolved by the same means.

The tool has already found over 40 security bugs in cryptographic libraries, which are (all? mostly?) currently being fixed.
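The approach is essentially known-answer unit testing. Here is a miniature Wycheproof-style harness, a sketch rather than Wycheproof's actual vector format, using Python's standard library as the implementation under test. Vector 1 is HMAC-SHA256 test case 1 from RFC 4231; vector 2 is the same vector with its final byte altered, so the implementation must reject it.

```python
import hmac
import hashlib

# Wycheproof-style known-answer testing in miniature: each vector pairs
# an input with its expected output and a validity flag, and the
# implementation under test must accept the good vectors and reject the
# bad ones. Vector 1 is HMAC-SHA256 test case 1 from RFC 4231; vector 2
# is the same vector with its final byte altered, so it must fail.
VECTORS = [
    {"key": b"\x0b" * 20, "msg": b"Hi There",
     "tag": bytes.fromhex("b0344c61d8db38535ca8afceaf0bf12b"
                          "881dc200c9833da726e9376c2e32cff7"),
     "valid": True},
    {"key": b"\x0b" * 20, "msg": b"Hi There",
     "tag": bytes.fromhex("b0344c61d8db38535ca8afceaf0bf12b"
                          "881dc200c9833da726e9376c2e32cff6"),
     "valid": False},
]

def run_vectors(vectors):
    """Return the indices of vectors the implementation got wrong."""
    failures = []
    for i, v in enumerate(vectors):
        computed = hmac.new(v["key"], v["msg"], hashlib.sha256).digest()
        accepted = hmac.compare_digest(computed, v["tag"])
        if accepted != v["valid"]:
            failures.append(i)
    return failures

assert run_vectors(VECTORS) == []
```

The invalid vectors are what make this useful: a library that accepts a tampered tag, or mishandles an edge case like a malformed point or a padding quirk, fails a vector even though all its "happy path" tests pass.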

News article. Slashdot thread.

Posted on December 20, 2016 at 6:12 AM • 11 Comments

Giving Up on PGP

Filippo Valsorda wrote an excellent essay on why he’s giving up on PGP. I have long believed PGP to be more trouble than it is worth. It’s hard to use correctly, and easy to get wrong. More generally, e-mail is inherently difficult to secure because of all the different things we ask of it and use it for.

Valsorda has a different complaint, that its long-term secrets are an unnecessary source of risk:

But the real issues, I realized, are more subtle. I never felt confident in the security of my long-term keys. The more time passed, the more I would feel uneasy about any specific key. Yubikeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in.

A long-term key is as secure as the minimum common denominator of your security practices over its lifetime. It’s the weak link.

Worse, long-term key patterns, like collecting signatures and printing fingerprints on business cards, discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization. Such practices actually encourage expanding the attack surface by making backups of the key.

Both he and I favor encrypted messaging, either Signal or OTR.

EDITED TO ADD (1/13): More PGP criticism.

Posted on December 16, 2016 at 5:36 AM • 110 Comments

My Priorities for the Next Four Years

Like many, I was surprised and shocked by the election of Donald Trump as president. I believe his ideas, temperament, and inexperience represent a grave threat to our country and world. Suddenly, all the things I had planned to work on seemed trivial in comparison. Although Internet security and privacy are not the most important policy areas at risk, I believe he—and, more importantly, his cabinet, administration, and Congress—will have devastating effects in that area, both in the US and around the world.

The election was so close that I’ve come to see the result as a bad roll of the dice. A few minor tweaks here and there—a more enthusiastic Sanders endorsement, one fewer of Comey’s announcements, slightly less Russian involvement—and the country would be preparing for a Clinton presidency and discussing a very different social narrative. That alternative narrative would stress business as usual, and continue to obscure the deep social problems in our society. Those problems won’t go away on their own, and in this alternative future they would continue to fester under the surface, getting steadily worse. This election exposed those problems for everyone to see.

I spent the last month both coming to terms with this reality, and thinking about the future. Here is my new agenda for the next four years:

One, fight the fights. There will be more government surveillance and more corporate surveillance. I expect legislative and judicial battles along several lines: a renewed call from the FBI for backdoors into encryption, more leeway for government hacking without a warrant, no controls on corporate surveillance, and more secret government demands for that corporate data. I expect other countries to follow our lead. (The UK is already more extreme than us.) And if there’s a major terrorist attack under Trump’s watch, it’ll be open season on our liberties. We may lose a lot of these battles, but we need to lose as few as possible and as little of our existing liberties as possible.

Two, prepare for those fights. Much of the next four years will be reactive, but we can prepare somewhat. The more we can convince corporate America to delete their saved archives of surveillance data and to store only what they need for as long as they need it, the safer we’ll all be. We need to convince Internet giants like Google and Facebook to change their business models away from surveillance capitalism. It’s a hard sell, but maybe we can nibble around the edges. Similarly, we need to keep pushing the truism that privacy and security are not antagonistic, but rather are essential for each other.

Three, lay the groundwork for a better future. No matter how bad the next four years get, I don’t believe that a Trump administration will permanently end privacy, freedom, and liberty in the US. I don’t believe that it portends a radical change in our democracy. (Or if it does, we have bigger problems than a free and secure Internet.) It’s true that some of Trump’s institutional changes might take decades to undo. Even so, I am confident—optimistic even—that the US will eventually come around; and when that time comes, we need good ideas in place for people to come around to. This means proposals for non-surveillance-based Internet business models, research into effective law enforcement that preserves privacy, intelligent limits on how corporations can collect and exploit our data, and so on.

And four, continue to solve the actual problems. The serious security issues around cybercrime, cyber-espionage, cyberwar, the Internet of Things, algorithmic decision making, foreign interference in our elections, and so on aren’t going to disappear for four years while we’re busy fighting the excesses of Trump. We need to continue to work towards a more secure digital future. And to the extent that cybersecurity for our military networks and critical infrastructure allies with cybersecurity for everyone, we’ll probably have an ally in Trump.

Those are my four areas. Under a Clinton administration, my list would have looked much the same. Trump’s election just means the threats will be much greater, and the battles a lot harder to win. It’s more than I can possibly do on my own, and I am therefore substantially increasing my annual philanthropy to support organizations like EPIC, EFF, ACLU, and Access Now in continuing their work in these areas.

My agenda is necessarily focused entirely on my particular areas of concern. The risks of a Trump presidency are far more pernicious, but this is where I have expertise and influence.

Right now, we have a defeated majority. Many are scared, and many are motivated—and few of those are applying their motivation constructively. We need to harness that fear and energy to start fixing our society now, instead of waiting four or even eight years, at which point the problems would be worse and the solutions more extreme. I am choosing to proceed as if this were cowpox, not smallpox: fighting the more benign disease today will be much easier than subjecting ourselves to its more virulent form in the future. It’s going to be hard keeping the intensity up for the next four years, but we need to get to work. Let’s use Trump’s victory as the wake-up call and opportunity that it is.

Posted on December 15, 2016 at 3:50 AM

Let's Encrypt Is Making Web Encryption Easier

That’s the conclusion of a research paper:

Once [costs and complexity] are eliminated, it enables big hosting providers to issue and deploy certificates for their customers in bulk, thus quickly and automatically enabling encryption across a large number of domains. For example, we have shown that currently, 47% of LE certified domains are hosted at three large hosting companies (Automattic/wordpress.com, Shopify, and OVH).

Paper: “No domain left behind: is Let’s Encrypt democratizing encryption?”

Abstract: The 2013 National Security Agency revelations of pervasive monitoring have led to an “encryption rush” across the computer and Internet industry. To push back against massive surveillance and protect users’ privacy, vendors, hosting and cloud providers have widely deployed encryption on their hardware, communication links, and applications. As a consequence, most web traffic nowadays is encrypted. However, there is still a significant part of Internet traffic that is not encrypted. It has been argued that both costs and complexity associated with obtaining and deploying X.509 certificates are major barriers for widespread encryption, since these certificates are required to establish encrypted connections. To address these issues, the Electronic Frontier Foundation, Mozilla Foundation, and the University of Michigan have set up Let’s Encrypt (LE), a certificate authority that provides both free X.509 certificates and software that automates the deployment of these certificates. In this paper, we investigate whether LE has been successful in democratizing encryption: we analyze certificate issuance in the first year of LE and show from various perspectives that LE adoption has an upward trend and it is in fact being successful in covering the lower-cost end of the hosting market.

Reddit thread.

Posted on December 14, 2016 at 6:46 AM • 28 Comments

Hiding Information in Silver and Carbon Ink

Interesting:

“We used silver and carbon ink to print an image consisting of small rods that are about a millimeter long and a couple of hundred microns wide,” said Ajay Nahata from the University of Utah, leader of the research team. “We found that changing the fraction of silver and carbon in each rod changes the conductivity in each rod just slightly, but visually, you can’t see this modification. Passing terahertz radiation at the correct frequency and polarization through the array allows extraction of information encoded into the conductivity.”

Research paper.

Posted on December 13, 2016 at 6:21 AM • 15 Comments

WWW Malware Hides in Images

There’s a new malware toolkit that uses steganography to hide in images:

For the past two months, a new exploit kit has been serving malicious code hidden in the pixels of banner ads via a malvertising campaign that has been active on several high profile websites.

Discovered by security researchers from ESET, this new exploit kit is named Stegano, from the word steganography, which is a technique of hiding content inside other files.

In this particular scenario, malvertising campaign operators hid malicious code inside PNG images used for banner ads.

The crooks took a PNG image and altered the transparency value of several pixels. They then packed the modified image as an ad, for which they bought ad displays on several high-profile websites.

Since a large number of advertising networks allow advertisers to deliver JavaScript code with their ads, the crooks also included JS code that would parse the image, extract the pixel transparency values, and using a mathematical formula, convert those values into a character.
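The encode/decode cycle the article describes is straightforward to sketch. This is a toy reconstruction, not the actual Stegano format: pixels are modeled as (R, G, B, A) tuples, and each message byte is split into two 4-bit nibbles stored in the low bits of consecutive alpha values, a change of at most 15/255 in transparency that is invisible to the eye.

```python
# A toy reconstruction of alpha-channel steganography. Pixels are
# (R, G, B, A) tuples; each message byte is split into two 4-bit
# nibbles stored in the low bits of consecutive alpha values, an
# invisible change of at most 15/255 in transparency. The framing and
# payload here are invented; the real Stegano format differs.

def embed(pixels, message):
    """Hide `message` in the low nibbles of the alpha channel."""
    out = list(pixels)
    for i, byte in enumerate(message):
        for j, nibble in ((2 * i, byte >> 4), (2 * i + 1, byte & 0x0F)):
            r, g, b, a = out[j]
            out[j] = (r, g, b, (a & 0xF0) | nibble)
    return out

def extract(pixels, length):
    """Recover `length` bytes from the alpha low nibbles."""
    data = bytearray()
    for i in range(length):
        hi = pixels[2 * i][3] & 0x0F
        lo = pixels[2 * i + 1][3] & 0x0F
        data.append((hi << 4) | lo)
    return bytes(data)

banner = [(200, 30, 30, 255)] * 64          # a fully opaque "banner ad"
stego = embed(banner, b"evil.js")           # hypothetical payload string
assert extract(stego, 7) == b"evil.js"
# The visible change is tiny: every alpha value stays within 240..255.
assert all(a >= 240 for (_, _, _, a) in stego)
```

Because the ad network delivers both the image and the advertiser's JavaScript, the decoder script ships alongside the carrier image, which is what made this practical as a malvertising channel.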

Slashdot thread.

Posted on December 7, 2016 at 8:06 AM • 107 Comments

International Phone Fraud Tactics

This article outlines two different types of international phone fraud. The first can happen when you call an expensive country like Cuba:

My phone call never actually made it to Cuba. The fraudsters make money because the last carrier simply pretends that it connected to Cuba when it actually connected me to the audiobook recording. So it charges Cuban rates to the previous carrier, which charges the preceding carrier, which charges the preceding carrier, and the costs flow upstream to my telecom carrier. The fraudsters siphoning money from the telecommunications system could be anywhere in the world.

The second happens when phones are forced to dial international premium-rate numbers:

The crime ring wasn’t interested in reselling the actual [stolen] phone hardware so much as exploiting the SIM cards. By using all the phones to call international premium numbers, similar to 900 numbers in the U.S. that charge extra, they were making hundreds of thousands of dollars. Elsewhere—Pakistan and the Philippines being two common locations—organized crime rings have hacked into phone systems to get those phones to constantly dial either international premium numbers or high-rate countries like Cuba, Latvia, or Somalia.

Why is this kind of thing so hard to stop?

Stamping out international revenue share fraud is a collective action problem. “The only way to prevent IRFS fraud is to stop the money. If everyone agrees, if no one pays for IRFS, that disrupts it,” says Yates. That would mean, for example, the second-to-last carrier would refuse to pay the last carrier that routed my call to the audiobooks and the third-to-last would refuse to pay the second-to-last, and so on, all the way back up the chain to my phone company. But when has it been easy to get so many companies to do the same thing? It costs money to investigate fraud cases too, and some companies won’t think it’s worth the trade off. “Some operators take a very positive approach toward fraud management. Others see it as cost of business and don’t put a lot of resources or systems in to manage it,” says Yates.

Posted on December 6, 2016 at 6:15 AM • 12 Comments

Guessing Credit Card Security Details

Researchers have found that they can guess various credit-card-number security details by spreading their guesses around multiple websites so as not to trigger any alarms.

From a news article:

Mohammed Ali, a PhD student at the university’s School of Computing Science, said: “This sort of attack exploits two weaknesses that on their own are not too severe but when used together, present a serious risk to the whole payment system.

“Firstly, the current online payment system does not detect multiple invalid payment requests from different websites.

“This allows unlimited guesses on each card data field, using up to the allowed number of attempts—typically 10 or 20 guesses—on each website.

“Secondly, different websites ask for different variations in the card data fields to validate an online purchase. This means it’s quite easy to build up the information and piece it together like a jigsaw.

“The unlimited guesses, when combined with the variations in the payment data fields, make it frighteningly easy for attackers to generate all the card details one field at a time.

“Each generated card field can be used in succession to generate the next field and so on. If the hits are spread across enough websites then a positive response to each question can be received within two seconds—just like any online payment.

“So even starting with no details at all other than the first six digits—which tell you the bank and card type and so are the same for every card from a single provider—a hacker can obtain the three essential pieces of information to make an online purchase within as little as six seconds.”

That’s card number, expiration date, and CVV code.
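The distributed part of the attack is easy to model: a search space of 1,000 possible CVVs, sites that each allow about ten guesses, and a "payment accepted" response as the oracle. The limits and the secret value below are illustrative, drawn from the quote rather than from the paper's measurements.

```python
# A toy model of the distributed guessing attack on one card field. Each
# site allows only a few attempts per card, but spreading the search
# across many sites covers the whole space. The limits and the secret
# value are illustrative, taken from the article's description.
ATTEMPTS_PER_SITE = 10
SECRET_CVV = "817"   # stand-in for the "payment accepted" oracle

def guess_field(candidates, attempts_per_site):
    """Spread guesses over sites; return (found value, sites used)."""
    sites_used = 0
    for start in range(0, len(candidates), attempts_per_site):
        sites_used += 1   # a fresh site, with a fresh attempt counter
        for guess in candidates[start:start + attempts_per_site]:
            if guess == SECRET_CVV:
                return guess, sites_used
    return None, sites_used

candidates = [f"{i:03d}" for i in range(1000)]   # every 3-digit CVV
found, sites = guess_field(candidates, ATTEMPTS_PER_SITE)
assert found == "817"
assert sites <= 100   # 1,000 guesses at 10 per site never needs more
```

Each recovered field then becomes fixed input for guessing the next one, which is why the researchers describe the whole process as assembling a jigsaw one piece at a time.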

From the paper:

Abstract: This article provides an extensive study of the current practice of online payment using credit and debit cards, and the intrinsic security challenges caused by the differences in how payment sites operate. We investigated the Alexa top-400 online merchants’ payment sites, and realised that the current landscape facilitates a distributed guessing attack. This attack subverts the payment functionality from its intended purpose of validating card details, into helping the attackers to generate all security data fields required to make online transactions. We will show that this attack would not be practical if all payment sites performed the same security checks. As part of our responsible disclosure measure, we notified a selection of payment sites about our findings, and we report on their responses. We will discuss potential solutions to the problem and the practical difficulty to implement these, given the varying technical and business concerns of the involved parties.

BoingBoing post:

The researchers believe this method has already been used in the wild, as part of a spectacular hack against Tesco bank last month.

MasterCard is immune to this hack because they detect the guesses, even though they’re distributed across multiple websites. Visa is not.

Posted on December 5, 2016 at 6:25 AM • 9 Comments

Auditing Elections for Signs of Hacking

Excellent essay pointing out that election security is a national security issue, and that we need to perform random ballot audits on every future election:

The good news is that we know how to solve this problem. We need to audit computers by manually examining randomly selected paper ballots and comparing the results to machine results. Audits require a voter-verified paper ballot, which the voter inspects to confirm that his or her selections have been correctly and indelibly recorded. Since 2003, an active community of academics, lawyers, election officials and activists has urged states to adopt paper ballots and robust audit procedures. This campaign has had significant, but slow, success. As of now, about three quarters of U.S. voters vote on paper ballots. Twenty-six states do some type of manual audit, but none of their procedures are adequate. Auditing methods have recently been devised that are much more efficient than those used in any state. It is important that audits be performed on every contest in every election, so that citizens do not have to request manual recounts to feel confident about election results. With high-quality audits, it is very unlikely that election fraud will go undetected whether perpetrated by another country or a political party.
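The core of such an audit is easy to simulate: draw a random sample of paper ballots and compare each against the corresponding machine record. The sketch below uses a fixed sample size and an invented 5% tamper rate purely for illustration; real risk-limiting audits derive the sample size from the reported margin.

```python
import random

# A toy post-election audit: compare a random sample of paper ballots
# against the machine records. Real risk-limiting audits size the
# sample from the reported margin; the fixed sample size and 5% tamper
# rate here are only for illustration.
random.seed(1)   # deterministic for the example

paper_ballots = ["A"] * 600 + ["B"] * 400        # the ground truth
machine_records = list(paper_ballots)
for i in random.sample(range(1000), 50):         # tamper with 5% of records
    machine_records[i] = "B" if machine_records[i] == "A" else "A"

def audit(paper, machine, sample_size):
    """Count sampled ballots whose machine record disagrees with paper."""
    sample = random.sample(range(len(paper)), sample_size)
    return sum(1 for i in sample if paper[i] != machine[i])

# Sampling 200 of 1,000 ballots catches a 5% error rate almost surely.
assert audit(paper_ballots, machine_records, 200) > 0
```

The statistical leverage is the point: a relatively small manual sample detects systematic tampering with high probability, without hand-counting every ballot.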

Another essay along similar lines.

Related: there is some information about Russian political hacking this election cycle that is classified. My guess is that it has nothing to do with hacking the voting machines—the NSA was on high alert for anything, and I have it on good authority that they found nothing—but something related to either the political-organization hacking, the propaganda machines, or something else before Election Day.

Posted on December 2, 2016 at 6:39 AM • 206 Comments

Analyzing WeChat

Citizen Lab has analyzed how censorship works in the Chinese chat app WeChat:

Key Findings:

  • Keyword filtering on WeChat is only enabled for users with accounts registered to mainland China phone numbers, and persists even if these users later link the account to an International number.
  • Keyword censorship is no longer transparent. In the past, users received notification when their message was blocked; now censorship of chat messages happens without any user notice.
  • More keywords are blocked on group chat, where messages can reach a larger audience, than one-to-one chat.
  • Keyword censorship is dynamic. Some keywords that triggered censorship in our original tests were later found to be permissible in later tests. Some newfound censored keywords appear to have been added in response to current news events.
  • WeChat’s internal browser blocks China-based accounts from accessing a range of websites including gambling, Falun Gong, and media that report critically on China. Websites that are blocked for China accounts were fully accessible for International accounts, but there is intermittent blocking of gambling and pornography websites on International accounts.

Lots more details in the paper.

Posted on December 1, 2016 at 9:29 AM • 19 Comments
