Blog: September 2017 Archives

Friday Squid Blogging: Squid Empire Is a New Book

Regularly I receive mail from people wanting to advertise on, write for, or sponsor posts on my blog. My rule is that I say no to everyone. There is no amount of money or free stuff that will get me to write about your security product or service.

With regard to squid, however, I have no such compunctions. Send me any sort of squid anything, and I am happy to write about it. Earlier this week, for example, I received two—not one—copies of the new book Squid Empire: The Rise and Fall of Cephalopods. I haven’t read it yet, but it looks good. It’s the story of prehistoric squid.

Here’s a review by someone who has read it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on September 29, 2017 at 4:27 PM • 107 Comments

Deloitte Hacked

The large accountancy firm Deloitte was hacked, losing client e-mails and files. The hackers had access inside the company’s networks for months. Deloitte is doing its best to downplay the severity of this hack, but Brian Krebs reports that the hack “involves the compromise of all administrator accounts at the company as well as Deloitte’s entire internal email system.”

So far, the hackers haven’t published all the data they stole.

Posted on September 29, 2017 at 6:13 AM • 43 Comments

Department of Homeland Security to Collect Social Media of Immigrants and Citizens

New rules give the DHS permission to collect “social media handles, aliases, associated identifiable information, and search results” as part of people’s immigration files. The Federal Register has the details, which seem to also cover US citizens who communicate with immigrants.

This is part of the general trend toward greater scrutiny of people entering the US, but it’s hard to get too worked up about the DHS accessing publicly available information. More disturbing is the trend of occasionally asking for social media passwords at the border.

Posted on September 28, 2017 at 7:43 AM • 43 Comments

The Data Tinder Collects, Saves, and Uses

Under European law, service providers like Tinder are required to show users what information they have on them when requested. This author requested, and this is what she received:

Some 800 pages came back containing information such as my Facebook “likes,” my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened…the list goes on.

“I am horrified but absolutely not surprised by this amount of data,” said Olivier Keyes, a data scientist at the University of Washington. “Every app you use regularly on your phone owns the same [kinds of information]. Facebook has thousands of pages about you!”

As I flicked through page after page of my data I felt guilty. I was amazed by how much information I was voluntarily disclosing: from locations, interests and jobs, to pictures, music tastes and what I liked to eat. But I quickly realised I wasn’t the only one. A July 2017 study revealed Tinder users are excessively willing to disclose information without realising it.

“You are lured into giving away all this information,” says Luke Stark, a digital technology sociologist at Dartmouth University. “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”

Reading through the 1,700 Tinder messages I’ve sent since 2013, I took a trip into my hopes, fears, sexual preferences and deepest secrets. Tinder knows me so well. It knows the real, inglorious version of me who copy-pasted the same joke to match 567, 568, and 569; who exchanged compulsively with 16 different people simultaneously one New Year’s Day, and then ghosted 16 of them.

“What you are describing is called secondary implicit disclosed information,” explains Alessandro Acquisti, professor of information technology at Carnegie Mellon University. “Tinder knows much more about you when studying your behaviour on the app. It knows how often you connect and at which times; the percentage of white men, black men, Asian men you have matched; which kinds of people are interested in you; which words you use the most; how much time people spend on your picture before swiping you, and so on. Personal data is the fuel of the economy. Consumers’ data is being traded and transacted for the purpose of advertising.”

Tinder’s privacy policy clearly states your data may be used to deliver “targeted advertising.”

It’s not just Tinder. Surveillance is the business model of the Internet. Everyone does this.

Posted on September 26, 2017 at 7:57 AM • 39 Comments

GPS Spoofing Attacks

Wired has a story about a possible GPS spoofing attack by Russia:

After trawling through AIS data from recent years, evidence of spoofing becomes clear. Goward says GPS data has placed ships at three different airports and there have been other interesting anomalies. “We would find very large oil tankers who could travel at a maximum speed of 15 knots,” says Goward, who was formerly director for Marine Transportation Systems at the US Coast Guard. “Their AIS, which is powered by GPS, would be saying they had sped up to 60 to 65 knots for an hour and then suddenly stopped. They had done that several times.”

All of the evidence from the Black Sea points towards a co-ordinated attempt to disrupt GPS. A recently published report from NRK found that 24 vessels appeared at Gelendzhik airport around the same time as the Atria. When contacted, a US Coast Guard representative refused to comment on the incident, saying any GPS disruption that warranted further investigation would be passed on to the Department of Defence.

“It looks like a sophisticated attack, by somebody who knew what they were doing and were just testing the system,” Bonenberg says. Humphreys told NRK it “strongly” looks like a spoofing incident. FireEye’s Brubaker agreed, saying the activity looked intentional. Goward is also confident that GPS signals were purposely disrupted. “What this case shows us is there are entities out there that are willing and eager to disrupt satellite navigation systems for whatever reason and they can do it over a fairly large area and in a sophisticated way,” he says. “They’re not just broadcasting a stronger signal and denying service. This is worse: they’re providing hazardously misleading information.”
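
The specific anomaly Goward describes, a tanker whose AIS track implies 60-plus knots, is straightforward to flag programmatically. Below is a minimal sketch (mine, not from the article) that computes the implied speed between consecutive AIS position reports and flags physically implausible values; the input format and the 40-knot threshold are assumptions for illustration.

    # Sketch: flag implausible implied speeds in AIS position reports.
    # The (time, lat, lon) input format and the 40-knot threshold are
    # illustrative assumptions, not taken from the article.
    from datetime import datetime
    from math import asin, cos, radians, sin, sqrt

    def haversine_nm(lat1, lon1, lat2, lon2):
        """Great-circle distance in nautical miles."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * asin(sqrt(a)) * 3440.065  # mean Earth radius in nautical miles

    def implausible_speeds(reports, max_knots=40.0):
        """reports: list of (datetime, lat, lon), sorted by time.
        Yields (t1, t2, knots) for pairs whose implied speed exceeds max_knots."""
        for (t1, lat1, lon1), (t2, lat2, lon2) in zip(reports, reports[1:]):
            hours = (t2 - t1).total_seconds() / 3600.0
            if hours <= 0:
                continue
            knots = haversine_nm(lat1, lon1, lat2, lon2) / hours
            if knots > max_knots:
                yield t1, t2, knots

    # Example: a track that jumps roughly 65 nautical miles in one hour gets flagged.
    track = [
        (datetime(2017, 6, 22, 12, 0), 44.60, 37.97),
        (datetime(2017, 6, 22, 13, 0), 44.60, 39.50),
    ]
    for t1, t2, knots in implausible_speeds(track):
        print(f"Implied speed {knots:.0f} knots between {t1} and {t2}")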

Posted on September 25, 2017 at 8:23 AM • 72 Comments

Boston Red Sox Caught Using Technology to Steal Signs

The Boston Red Sox admitted to eavesdropping on the communications channel between catcher and pitcher.

Stealing signs is believed to be particularly effective when there is a runner on second base who can both watch what hand signals the catcher is using to communicate with the pitcher and can easily relay to the batter any clues about what type of pitch may be coming. Such tactics are allowed as long as teams do not use any methods beyond their eyes. Binoculars and electronic devices are both prohibited.

In recent years, as cameras have proliferated in major league ballparks, teams have begun using the abundance of video to help them discern opponents’ signs, including the catcher’s signals to the pitcher. Some clubs have had clubhouse attendants quickly relay information to the dugout from the personnel monitoring video feeds.

But such information has to be rushed to the dugout on foot so it can be relayed to players on the field—a runner on second, the batter at the plate—while the information is still relevant. The Red Sox admitted to league investigators that they were able to significantly shorten this communications chain by using electronics. In what mimicked the rhythm of a double play, the information would rapidly go from video personnel to a trainer to the players.

This is ridiculous. The rules about what sorts of sign stealing are allowed and what sorts are not are arbitrary and unenforceable. My guess is that the only reason there aren’t more complaints is because everyone does it.

The Red Sox responded in kind on Tuesday, filing a complaint against the Yankees claiming that the team uses a camera from its YES television network exclusively to steal signs during games, an assertion the Yankees denied.

Boston’s mistake here was using a very conspicuous Apple Watch as a communications device. They need to learn to be more subtle, like everyone else.

Posted on September 22, 2017 at 6:21 AM • 33 Comments

ISO Rejects NSA Encryption Algorithms

The ISO has decided not to approve two NSA-designed block encryption algorithms: Speck and Simon. It’s because the NSA is not trusted to put security ahead of surveillance:

A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to “insert vulnerabilities into commercial encryption systems.”

More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a “back door” into coded transmissions, according to the interviews and emails and other documents seen by Reuters.

“I don’t trust the designers,” Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden’s papers. “There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards.”

I don’t trust the NSA, either.
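
For context, Speck itself is public; the NSA published the design in 2013, and each round is nothing more than modular addition, rotation, and XOR. Here is a short, unverified Python sketch of the Speck64/128 variant (32-bit words, 27 rounds, rotation amounts 8 and 3, as in the published design), meant only to illustrate how simple the algorithm is, not to serve as a vetted implementation.

    # Unverified sketch of Speck64/128 (32-bit words, four key words, 27 rounds),
    # following the round structure in the published 2013 design.
    # For illustration only; not checked against official test vectors.
    MASK = 0xFFFFFFFF          # 32-bit words
    ALPHA, BETA, ROUNDS = 8, 3, 27

    def ror(x, r):
        return ((x >> r) | (x << (32 - r))) & MASK

    def rol(x, r):
        return ((x << r) | (x >> (32 - r))) & MASK

    def speck_round(x, y, k):
        x = ((ror(x, ALPHA) + y) & MASK) ^ k
        y = rol(y, BETA) ^ x
        return x, y

    def expand_key(l2, l1, l0, k0):
        """Key schedule: reuses the round function with the round index as the 'key'."""
        l, k = [l0, l1, l2], [k0]
        for i in range(ROUNDS - 1):
            new_l, new_k = speck_round(l[i], k[i], i)
            l.append(new_l)
            k.append(new_k)
        return k

    def encrypt(x, y, round_keys):
        for k in round_keys:
            x, y = speck_round(x, y, k)
        return x, y

    # Arbitrary example key and plaintext words.
    rks = expand_key(0x1b1a1918, 0x13121110, 0x0b0a0908, 0x03020100)
    print([hex(w) for w in encrypt(0x3b726574, 0x7475432d, rks)])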

Posted on September 21, 2017 at 5:50 AM • 79 Comments

What the NSA Collects via 702

New York Times reporter Charlie Savage writes about some bad statistics we’re all using:

Among surveillance legal policy specialists, it is common to cite a set of statistics from an October 2011 opinion by Judge John Bates, then of the FISA Court, about the volume of internet communications the National Security Agency was collecting under the FISA Amendments Act (“Section 702”) warrantless surveillance program. In his opinion, declassified in August 2013, Judge Bates wrote that the NSA was collecting more than 250 million internet communications a year, of which 91 percent came from its Prism system (which collects stored e-mails from providers like Gmail) and 9 percent came from its upstream system (which collects transmitted messages from network operators like AT&T).

These numbers are wrong. This blog post will address, first, the widespread nature of this misunderstanding; second, how I came to FOIA certain documents trying to figure out whether the numbers really added up; third, what those documents show; and fourth, what I further learned in talking to an intelligence official. This is far too dense and weedy for a New York Times article, but should hopefully be of some interest to specialists.

Worth reading for the details.

Posted on September 20, 2017 at 6:12 AM • 10 Comments

Apple's FaceID

This is a good interview with Apple’s SVP of Software Engineering about FaceID.

Honestly, I don’t know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can’t be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:

I also quizzed Federighi about the exact way you “quick disabled” Face ID in tricky scenarios—like being stopped by police, or being asked by a thief to hand over your device.

“On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while—we’ll take you to the power down [screen]. But that also has the effect of disabling Face ID,” says Federighi. “So, if you were in a case where the thief was asking to hand over your phone—you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID.”

That squeeze can be of either volume button plus the power button. This, in my opinion, is an even better solution than the “5 clicks” because it’s less obtrusive. When you do this, it defaults back to your passcode.

More:

It’s worth noting a few additional details here:

  • If you haven’t used Face ID in 48 hours, or if you’ve just rebooted, it will ask for a passcode.
  • If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode—it tried to read the people setting the phones up on the podium.)
  • Developers do not have access to raw sensor data from the Face ID array. Instead, they’re given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.
  • You’ll also get a passcode request if you haven’t unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn’t unlocked it in 4 hours.

Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.

Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you’re a researcher or security wonk looking for more, he says it will have “extreme levels of detail” about the security of the system.
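
Taken together, the fallback conditions listed above amount to a small policy: several independent triggers, any one of which forces a passcode. Here is a rough sketch of that logic as I read it; it is my own simplification for illustration, not Apple's implementation, and the parameter names are invented.

    # Rough sketch of the Face ID passcode-fallback conditions described above.
    # A simplification for illustration only, not Apple's actual logic; the
    # parameter names are invented.
    from datetime import datetime, timedelta

    def passcode_required(now, last_faceid_unlock, last_passcode_unlock,
                          last_reboot, failed_faceid_attempts):
        """Return True if the device should demand the passcode instead of Face ID."""
        if last_reboot is not None and (last_faceid_unlock is None
                                        or last_faceid_unlock < last_reboot):
            return True        # just rebooted
        if failed_faceid_attempts >= 5:
            return True        # five failed Face ID attempts
        if last_faceid_unlock is None or now - last_faceid_unlock > timedelta(hours=48):
            return True        # Face ID not used in 48 hours
        if (now - last_passcode_unlock > timedelta(days=6, hours=12)
                and now - last_faceid_unlock > timedelta(hours=4)):
            return True        # no passcode unlock in 6.5 days and no Face ID unlock in 4 hours
        return False

    # Example: recent Face ID use, recent passcode use, no failures -> Face ID still allowed.
    now = datetime(2017, 9, 19, 12, 0)
    print(passcode_required(now, last_faceid_unlock=now - timedelta(hours=1),
                            last_passcode_unlock=now - timedelta(days=2),
                            last_reboot=now - timedelta(days=3),
                            failed_faceid_attempts=0))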

Here’s more about fooling it with fake faces:

Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop’s owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.

Hacking FaceID, though, won’t be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user’s face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face’s 3-D shape—a trick similar to the kind now used to capture actors’ faces to morph them into animated and digitally enhanced characters.

It’ll be harder, but I have no doubt that it will be done.

More speculation.

I am not planning on enabling it just yet.

Posted on September 19, 2017 at 6:44 AM • 43 Comments

Bluetooth Vulnerabilities

A bunch of Bluetooth vulnerabilities are being reported, some pretty nasty.

BlueBorne concerns us because of the medium by which it operates. Unlike the majority of attacks today, which rely on the internet, a BlueBorne attack spreads through the air. This works similarly to the two less extensive vulnerabilities discovered recently in a Broadcom Wi-Fi chip by Project Zero and Exodus. The vulnerabilities found in Wi-Fi chips affect only the peripherals of the device, and require another step to take control of the device. With BlueBorne, attackers can gain full control right from the start. Moreover, Bluetooth offers a wider attacker surface than WiFi, almost entirely unexplored by the research community and hence contains far more vulnerabilities.

Airborne attacks, unfortunately, provide a number of opportunities for the attacker. First, spreading through the air renders the attack much more contagious, and allows it to spread with minimum effort. Second, it allows the attack to bypass current security measures and remain undetected, as traditional methods do not protect from airborne threats. Airborne attacks can also allow hackers to penetrate secure internal networks which are “air gapped,” meaning they are disconnected from any other network for protection. This can endanger industrial systems, government agencies, and critical infrastructure.

Finally, unlike traditional malware or attacks, the user does not have to click on a link or download a questionable file. No action by the user is necessary to enable the attack.

Fully patched Windows and iOS systems are protected; a Linux patch is coming soon.

Posted on September 18, 2017 at 6:58 AM • 45 Comments

Another iPhone Change to Frustrate the Police

I recently wrote about the new ability to disable the Touch ID login on iPhones. This is important because of a weirdness in current US law that protects people’s passcodes from forced disclosure in ways it does not protect actions: being forced to place a thumb on a fingerprint reader.

There’s another, more significant, change: iOS now requires a passcode before the phone will establish trust with another device.

In the current system, when you connect your phone to a computer, you’re prompted with the question “Trust this computer?” and you can click yes or no. Now you also have to enter your passcode. That means that if the police have an unlocked phone, they can scroll through it looking for things, but they can’t download all of its contents onto another computer without also knowing the passcode.

More details:

This might be particularly consequential during border searches. The “border search” exception, which allows Customs and Border Protection to search anything going into the country, is a contentious issue when applied to electronics. It is somewhat (but not completely) settled law, but the fact that the U.S. government can, without any cause at all (not even “reasonable articulable suspicion”, let alone “probable cause”), copy all the contents of my devices when I reenter the country sows deep discomfort in myself and many others. The only legal limitation appears to be a promise not to use this information to connect to remote services. The new iOS feature means that a Customs officer can browse through a device—a time limited exercise—but not download the full contents.

Posted on September 15, 2017 at 6:28 AM • 40 Comments

On the Equifax Data Breach

Last Thursday, Equifax reported a data breach that affects 143 million US customers, about 44% of the population. It’s an extremely serious breach; hackers got access to full names, Social Security numbers, birth dates, addresses, driver’s license numbers—exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, and other businesses vulnerable to fraud.

Many sites posted guides to protecting yourself now that it’s happened. But if you want to prevent this kind of thing from happening again, your only solution is government regulation (as unlikely as that may be at the moment).

The market can’t fix this. Markets work because buyers choose between sellers, and sellers compete for buyers. In case you didn’t notice, you’re not Equifax’s customer. You’re its product.

This happened because your personal information is valuable, and Equifax is in the business of selling it. The company is much more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights.

Its customers are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer—everyone who wants to sell you something, even governments.

It’s not just Equifax. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about you—almost all of them companies you’ve never heard of and have no business relationship with.

Surveillance capitalism fuels the Internet, and sometimes it seems that everyone is spying on you. You’re secretly tracked on pretty much every commercial website you visit. Facebook is the largest surveillance organization mankind has created; collecting data on you is its business model. I don’t have a Facebook account, but Facebook still keeps a surprisingly complete dossier on me and my associations—just in case I ever decide to join.

I also don’t have a Gmail account, because I don’t want Google storing my e-mail. But my guess is that it has about half of my e-mail anyway, because so many people I correspond with have accounts. I can’t even avoid it by choosing not to write to gmail.com addresses, because I have no way of knowing if newperson@company.com is hosted at Gmail.

And again, many companies that track us do so in secret, without our knowledge and consent. And most of the time we can’t opt out. Sometimes it’s a company like Equifax that doesn’t answer to us in any way. Sometimes it’s a company like Facebook, which is effectively a monopoly because of its sheer size. And sometimes it’s our cell phone provider. All of them have decided to track us and not compete by offering consumers privacy. Sure, you can tell people not to have an e-mail account or cell phone, but that’s not a realistic option for most people living in 21st-century America.

The companies that collect and sell our data don’t need to keep it secure in order to maintain their market share. They don’t have to answer to us, their products. They know it’s more profitable to save money on security and weather the occasional bout of bad press after a data loss. Yes, we are the ones who suffer when criminals get our data, or when our private information is exposed to the public, but ultimately why should Equifax care?

Yes, it’s a huge black eye for the company—this week. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

This market failure isn’t unique to data security. There is little improvement in safety and security in any industry until government steps in. Think of food, pharmaceuticals, cars, airplanes, restaurants, workplace conditions, and flame-retardant pajamas.

Market failures like this can only be solved through government intervention. By regulating the security practices of companies that store our data, and fining companies that fail to comply, governments can raise the cost of insecurity high enough that security becomes a cheaper alternative. They can do the same thing by giving individuals affected by these breaches the ability to sue successfully, citing the exposure of personal data itself as a harm.

By all means, take the recommended steps to protect yourself from identity theft in the wake of Equifax’s data breach, but recognize that these steps are only effective on the margins, and that most data security is out of your hands. Perhaps the Federal Trade Commission will get involved, but without evidence of “unfair and deceptive trade practices,” there’s nothing it can do. Perhaps there will be a class-action lawsuit, but because it’s hard to draw a line between any of the many data breaches you’re subjected to and a specific harm, courts are not likely to side with you.

If you don’t like how careless Equifax was with your data, don’t waste your breath complaining to Equifax. Complain to your government.

This essay previously appeared on CNN.com.

EDITED TO ADD: In the early hours of this breach, I did a radio interview where I minimized the ramifications of this. I didn’t know the full extent of the breach, and thought it was just another in an endless string of breaches. I wondered why the press was covering this one and not many of the others. I don’t remember which radio show interviewed me. I kind of hope it didn’t air.

Posted on September 13, 2017 at 12:49 PM • 115 Comments

A Hardware Privacy Monitor for iPhones

Andrew “bunnie” Huang and Edward Snowden have designed a hardware device that attaches to an iPhone and monitors it for malicious surveillance activities, even in instances where the phone’s operating system has been compromised. They call it an Introspection Engine, and their use model is a journalist who is concerned about government surveillance:

Our introspection engine is designed with the following goals in mind:

  1. Completely open source and user-inspectable (“You don’t have to trust us”)
  2. Introspection operations are performed by an execution domain completely separated from the phone’s CPU (“don’t rely on those with impaired judgment to fairly judge their state”)
  3. Proper operation of introspection system can be field-verified (guard against “evil maid” attacks and hardware failures)
  4. Difficult to trigger a false positive (users ignore or disable security alerts when there are too many positives)
  5. Difficult to induce a false negative, even with signed firmware updates (“don’t trust the system vendor”—state-level adversaries with full cooperation of system vendors should not be able to craft signed firmware updates that spoof or bypass the introspection engine)
  6. As much as possible, the introspection system should be passive and difficult to detect by the phone’s operating system (prevent black-listing/targeting of users based on introspection engine signatures)
  7. Simple, intuitive user interface requiring no specialized knowledge to interpret or operate (avoid user error leading to false negatives; “journalists shouldn’t have to be cryptographers to be safe”)
  8. Final solution should be usable on a daily basis, with minimal impact on workflow (avoid forcing field reporters into the choice between their personal security and being an effective journalist)

This looks like fantastic work, and they have a working prototype.

Of course, this does nothing to stop all the legitimate surveillance that happens over a cell phone: location tracking, records of who you talk to, and so on.

BoingBoing post.

Posted on September 11, 2017 at 6:12 AM • 61 Comments

ShadowBrokers Releases NSA UNITEDRAKE Manual

The ShadowBrokers released the manual for UNITEDRAKE, a sophisticated NSA Trojan that targets Windows machines:

Able to compromise Windows PCs running on XP, Windows Server 2003 and 2008, Vista, Windows 7 SP 1 and below, as well as Windows 8 and Windows Server 2012, the attack tool acts as a service to capture information.

UNITEDRAKE, described as a “fully extensible remote collection system designed for Windows targets,” also gives operators the opportunity to take complete control of a device.

The malware’s modules—including FOGGYBOTTOM and GROK—can perform tasks including listening in and monitoring communication, capturing keystrokes and both webcam and microphone usage, the impersonation of users, stealing diagnostics information and self-destructing once tasks are completed.

More news.

UNITEDRAKE was mentioned in several Snowden documents and also in the TAO catalog of implants.

And Kaspersky Labs has found evidence of these tools in the wild, associated with the Equation Group—generally assumed to be the NSA:

The capabilities of several tools in the catalog identified by the codenames UNITEDRAKE, STRAITBAZZARE, VALIDATOR and SLICKERVICAR appear to match the tools Kaspersky found. These codenames don’t appear in the components from the Equation Group, but Kaspersky did find “UR” in EquationDrug, suggesting a possible connection to UNITEDRAKE (United Rake). Kaspersky also found other codenames in the components that aren’t in the NSA catalog but share the same naming conventions; they include SKYHOOKCHOW, STEALTHFIGHTER, DRINKPARSLEY, STRAITACID, LUTEUSOBSTOS, STRAITSHOOTER, and DESERTWINTER.

ShadowBrokers has only released the UNITEDRAKE manual, not the tool itself. Presumably they’re trying to sell that.

Posted on September 8, 2017 at 6:54 AM • 14 Comments

Research on What Motivates ISIS—and Other—Fighters

Interesting research from Nature Human Behaviour: “The devoted actor’s will to fight and the spiritual dimension of human conflict”:

Abstract: Frontline investigations with fighters against the Islamic State (ISIL or ISIS), combined with multiple online studies, address willingness to fight and die in intergroup conflict. The general focus is on non-utilitarian aspects of human conflict, which combatants themselves deem ‘sacred’ or ‘spiritual’, whether secular or religious. Here we investigate two key components of a theoretical framework we call ‘the devoted actor’—sacred values and identity fusion with a group—to better understand people’s willingness to make costly sacrifices. We reveal three crucial factors: commitment to non-negotiable sacred values and the groups that the actors are wholly fused with; readiness to forsake kin for those values; and perceived spiritual strength of ingroup versus foes as more important than relative material strength. We directly relate expressed willingness for action to behaviour as a check on claims that decisions in extreme conflicts are driven by cost-benefit calculations, which may help to inform policy decisions for the common defense.

Posted on September 7, 2017 at 6:05 AM • 43 Comments

Security Vulnerabilities in AT&T Routers

They’re actually Arris routers, sold or given away by AT&T. There are several security vulnerabilities, some of them very serious. They can be fixed, but because these are routers it takes some skill. We don’t know how many routers are affected, and estimates range from thousands to 138,000.

Among the vulnerabilities are hardcoded credentials, which can allow “root” remote access to an affected device, giving an attacker full control over the router. An attacker can connect to an affected router and log-in with a publicly-disclosed username and password, granting access to the modem’s menu-driven shell. An attacker can view and change the Wi-Fi router name and password, and alter the network’s setup, such as rerouting internet traffic to a malicious server.

The shell also allows the attacker to control a module that’s dedicated to injecting advertisements into unencrypted web traffic, a common tactic used by internet providers and other web companies. Hutchins said that there was “no clear evidence” to suggest the module was running but noted that it was still vulnerable, allowing an attacker to inject their own money-making ad campaigns or malware.
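
One practical upshot for owners: bugs like these usually live behind specific TCP services, so you can at least check whether your router is answering on unexpected ports. The sketch below only tests whether a port accepts connections from wherever you run it; it says nothing about whether a service is actually vulnerable, and the port numbers and gateway address are placeholders, not the specific values from the advisory.

    # Sketch: check whether a router accepts TCP connections on a set of ports.
    # The port list and gateway address are illustrative placeholders, not the
    # specific values from the Arris advisory; this tests reachability only.
    import socket

    SUSPECT_PORTS = [22, 8080, 49152]      # placeholders; substitute the advisory's ports

    def open_ports(host, ports, timeout=2.0):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the TCP connect succeeded
                    found.append(port)
        return found

    if __name__ == "__main__":
        gateway = "192.168.1.254"          # a common default gateway address; adjust as needed
        print("Responding ports:", open_ports(gateway, SUSPECT_PORTS) or "none")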

I have written about router vulnerabilities, and why the economics of their production makes them inevitable.

Posted on September 6, 2017 at 6:55 AM • 28 Comments

Security Flaw in Estonian National ID Card

We have no idea how bad this really is:

On 30 August, an international team of researchers informed the Estonian Information System Authority (RIA) of a vulnerability potentially affecting the digital use of Estonian ID cards. The possible vulnerability affects a total of almost 750,000 ID-cards issued starting from October 2014, including cards issued to e-residents. The ID-cards issued before 16 October 2014 use a different chip and are not affected. Mobile-IDs are also not impacted.

My guess is that it’s worse than the politicians are saying:

According to Peterkop, the current data shows this risk to be theoretical and there is no evidence of anyone’s digital identity being misused. “All ID-card operations are still valid and we will take appropriate actions to secure the functioning of our national digital-ID infrastructure. For example, we have restricted the access to Estonian ID-card public key database to prevent illegal use.”

And because this system is so important in local politics, the effects are significant:

In the light of current events, some Estonian politicians called to postpone the upcoming local elections, due to take place on 16 October. In Estonia, approximately 35% of the voters use digital identity to vote online.

But the Estonian prime minister, Jüri Ratas, said at a press conference on 5 September that “this incident will not affect the course of the Estonian e-state.” Ratas also recommended using Mobile-IDs where possible. The prime minister said that the State Electoral Office will decide whether it will allow the usage of ID cards at the upcoming local elections.

The Estonian Police and Border Guard estimates it will take approximately two months to fix the issue with faulty cards. The authority will involve as many Estonian experts as possible in the process.

This is exactly the sort of thing I worry about as ID systems become more prevalent and more centralized. Anyone want to place bets on whether a foreign country is going to try to hack the next Estonian election?

Another article.

EDITED TO ADD (9/18): More details.

Posted on September 5, 2017 at 3:23 PM • 67 Comments

New Techniques in Fake Reviews

Research paper: “Automated Crowdturfing Attacks and Defenses in Online Review Systems.”

Abstract: Malicious crowdsourcing forums are gaining traction as sources of spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect.

Using Yelp reviews as an example platform, we show how a two-phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on “usefulness” metrics by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.
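
The paper's defense relies on the fact that the RNN generation cycle is lossy and subtly distorts low-level statistics of the text. The sketch below is not the authors' method, just a simplified illustration of that general idea: build a character-frequency distribution from known-human reviews and flag reviews that diverge from it too sharply. The threshold and smoothing constant are arbitrary choices.

    # Simplified illustration of distribution-based detection (not the paper's
    # actual defense): flag reviews whose character frequencies diverge sharply
    # from a reference built on known-human reviews.  Threshold and smoothing
    # values here are arbitrary.
    import math
    from collections import Counter

    def char_distribution(text):
        counts = Counter(c for c in text.lower() if c.isalpha() or c == ' ')
        total = sum(counts.values()) or 1
        return {c: n / total for c, n in counts.items()}

    def kl_divergence(p, q, smooth=1e-6):
        """Approximate KL(p || q) over the union of characters, with smoothing."""
        chars = set(p) | set(q)
        return sum(p.get(c, smooth) * math.log(p.get(c, smooth) / q.get(c, smooth))
                   for c in chars)

    def looks_generated(review, reference_dist, threshold=0.5):
        return kl_divergence(char_distribution(review), reference_dist) > threshold

    # Usage sketch: the reference would come from a large corpus of trusted reviews.
    reference = char_distribution("great food friendly staff will come back " * 50)
    print(looks_generated("zzzz qqqq xxxx vvvv zzzz qqqq", reference))   # True: far from reference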

Posted on September 4, 2017 at 7:08 AM • 16 Comments

Russian Hacking Tools Codenamed WhiteBear Exposed

Kaspersky Labs exposed a highly sophisticated set of hacking tools from Russia called WhiteBear.

From February to September 2016, WhiteBear activity was narrowly focused on embassies and consular operations around the world. All of these early WhiteBear targets were related to embassies and diplomatic/foreign affair organizations. Continued WhiteBear activity later shifted to include defense-related organizations into June 2017. When compared to WhiteAtlas infections, WhiteBear deployments are relatively rare and represent a departure from the broader Skipper Turla target set. Additionally, a comparison of the WhiteAtlas framework to WhiteBear components indicates that the malware is the product of separate development efforts. WhiteBear infections appear to be preceded by a condensed spearphishing dropper, lack Firefox extension installer payloads, and contain several new components signed with a new code signing digital certificate, unlike WhiteAtlas incidents and modules.

The exact delivery vector for WhiteBear components is unknown to us, although we have very strong suspicion the group spearphished targets with malicious pdf files. The decoy pdf document above was likely stolen from a target or partner. And, although WhiteBear components have been consistently identified on a subset of systems previously targeted with the WhiteAtlas framework, and maintain components within the same filepaths and can maintain identical filenames, we were unable to firmly tie delivery to any specific WhiteAtlas component. WhiteBear focused on various embassies and diplomatic entities around the world in early 2016—tellingly, attempts were made to drop and display decoy pdf’s with full diplomatic headers and content alongside executable droppers on target systems.

One of the clever things the tool does is use hijacked satellite connections for command and control, helping it evade detection by broad surveillance capabilities like what the NSA uses. We’ve seen Russian attack tools that do this before. More details are in the Kaspersky blog post.

Given all the trouble Kaspersky is having because of its association with Russia, it’s interesting to speculate on this disclosure. Either they are independent, and have burned a valuable Russian hacking toolset. Or the Russians decided that the toolset was already burned—maybe the NSA knows all about it and has neutered it somehow—and allowed Kaspersky to publish. Or maybe it’s something in between. That’s the problem with this kind of speculation: without any facts, your theories just amplify whatever opinion you had previously.

Oddly, there hasn’t been much press about this. I have only found one story.

EDITED TO ADD: A colleague pointed out to me that Kaspersky announcements like this often get ignored by the press. There was very little written about ProjectSauron, for example.

EDITED TO ADD: The text I originally wrote said that Kaspersky released the attack tools, like what Shadow Brokers is doing. They did not. They just exposed the existence of them. Apologies for that error—it was sloppy wording.

Posted on September 1, 2017 at 6:39 AM • 27 Comments
