Crypto-Gram

August 15, 2019

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. Palantir’s Surveillance Service for Law Enforcement
  2. Zoom Vulnerability
  3. Identity Theft on the Job Market
  4. John Paul Stevens Was a Cryptographer
  5. A Harlequin Romance Novel about Hackers
  6. Hackers Expose Russian FSB Cyberattack Projects
  7. Science Fiction Writers Helping Imagine Future Threats
  8. Attorney General William Barr on Encryption Policy
  9. Software Developers and Security
  10. Insider Logic Bombs
  11. Wanted: Cybersecurity Imagery
  12. ACLU on the GCHQ Backdoor Proposal
  13. Another Attack Against Driverless Cars
  14. Facebook Plans on Backdooring WhatsApp
  15. How Privacy Laws Hurt Defendants
  16. Disabling Security Cameras with Lasers
  17. More on Backdooring (or Not) WhatsApp
  18. Regulating International Trade in Commercial Spyware
  19. Phone Pharming for Ad Fraud
  20. Brazilian Cell Phone Hack
  21. AT&T Employees Took Bribes to Unlock Smartphones
  22. Supply-Chain Attack against the Electron Development Platform
  23. Evaluating the NSA’s Telephony Metadata Program
  24. Exploiting GDPR to Get Private Information
  25. Side-Channel Attack against Electronic Locks

Palantir’s Surveillance Service for Law Enforcement

[2019.07.15] Motherboard got its hands on the user’s manual for Palantir Gotham, the software police use to get information on people:

The Palantir user guide shows that police can start with almost no information about a person of interest and instantly know extremely intimate details about their lives. The capabilities are staggering, according to the guide:

  • If police have a name that’s associated with a license plate, they can use automatic license plate reader data to find out where they’ve been, and when they’ve been there. This can give a complete account of where someone has driven over any time period.
  • With a name, police can also find a person’s email address, phone numbers, current and previous addresses, bank accounts, social security number(s), business relationships, family relationships, and license information like height, weight, and eye color, as long as it’s in the agency’s database.
  • The software can map out a suspect’s family members and business associates, and, theoretically, find the above information about them, too.

All of this information is aggregated and synthesized in a way that gives law enforcement nearly omniscient knowledge over any suspect they decide to surveil.

Read the whole article—it has a lot of details. This seems like a commercial version of the NSA’s XKEYSCORE.

Boing Boing post.

Meanwhile:

The FBI wants to gather more information from social media. Today, it issued a call for contracts for a new social media monitoring tool. According to a request-for-proposals (RFP), it’s looking for an “early alerting tool” that would help it monitor terrorist groups, domestic threats, criminal activity and the like.

The tool would provide the FBI with access to the full social media profiles of persons-of-interest. That could include information like user IDs, emails, IP addresses and telephone numbers. The tool would also allow the FBI to track people based on location, enable persistent keyword monitoring and provide access to personal social media history. According to the RFP, “The mission-critical exploitation of social media will enable the Bureau to detect, disrupt, and investigate an ever growing diverse range of threats to U.S. National interests.”


Zoom Vulnerability

[2019.07.16] The Zoom conferencing app has a vulnerability that allows someone to remotely take over the computer’s camera.

It’s a bad vulnerability, made worse by the fact that it remains even if you uninstall the Zoom app:

This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.

On top of this, this vulnerability would have allowed any webpage to DOS (Denial of Service) a Mac by repeatedly joining a user to an invalid call.

Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day.
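
The root of the problem is that the helper is nothing more than an unauthenticated HTTP server on localhost, reachable by any webpage. As a quick self-check, here is a minimal Python sketch that tests whether anything is still listening on the port cited in the public disclosure; the port number (19421) is an assumption that may vary by client version.

    # Minimal check (Python 3, standard library) for the leftover Zoom web
    # server. Port 19421 is the one cited in the public disclosure; it is an
    # assumption here and may differ across client versions.
    import socket

    def local_port_open(port: int, host: str = "127.0.0.1") -> bool:
        """Return True if something on this machine accepts connections on the port."""
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        if local_port_open(19421):
            print("Something is listening on localhost:19421 -- possibly Zoom's helper.")
        else:
            print("Nothing is listening on localhost:19421.")

If the port is still open after you have uninstalled Zoom, the web server is still there.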

Zoom didn’t take the vulnerability seriously:

This vulnerability was originally responsibly disclosed on March 26, 2019. This initial report included a proposed description of a ‘quick fix’ Zoom could have implemented by simply changing their server logic. It took Zoom 10 days to confirm the vulnerability. The first actual meeting about how the vulnerability would be patched occurred on June 11th, 2019, only 18 days before the end of the 90-day public disclosure deadline. During this meeting, the details of the vulnerability were confirmed and Zoom’s planned solution was discussed. However, I was very easily able to spot and describe bypasses in their planned fix. At this point, Zoom was left with 18 days to resolve the vulnerability. On June 24th after 90 days of waiting, the last day before the public disclosure deadline, I discovered that Zoom had only implemented the ‘quick fix’ solution originally suggested.

This is why we disclose vulnerabilities. Now, finally, Zoom is taking this seriously and fixing it for real.

EDITED TO ADD (8/8): Apple silently released a macOS update that removes the Zoom webserver if the app is not present.


Identity Theft on the Job Market

[2019.07.18] Identity theft is getting more subtle: “My job application was withdrawn by someone pretending to be me“:

When Mr Fearn applied for a job at the company he didn’t hear back.

He said the recruitment team said they’d get back to him by Friday, but they never did.

At first, he assumed he was unsuccessful, but after emailing his contact there, it turned out someone had created a Gmail account in his name and asked the company to withdraw his application.

Mr Fearn said the talent assistant told him they were confused because he had apparently emailed them to withdraw his application on Wednesday.

“They forwarded the email, which was sent from an account using my name.”

He said he felt “really shocked and violated” to find out that someone had created an email account in his name just to tarnish his chances of getting a role.

This is about as low-tech as it gets. It’s trivially simple for me to open a new Gmail account using a random first and last name. But because people innately trust email, it works.


John Paul Stevens Was a Cryptographer

[2019.07.19] I didn’t know that Supreme Court Justice John Paul Stevens “was also a cryptographer for the Navy during World War II.” He was a proponent of individual privacy.

EDITED TO ADD (8/12): More on his cryptography career.


A Harlequin Romance Novel about Hackers

[2019.07.19] Really.


Hackers Expose Russian FSB Cyberattack Projects

[2019.07.22] More nation-state activity in cyberspace, this time from Russia:

Per the different reports in Russian media, the files indicate that SyTech had worked since 2009 on a multitude of projects for FSB unit 71330 and for fellow contractor Quantum. Projects include:

  • Nautilus—a project for collecting data about social media users (such as Facebook, MySpace, and LinkedIn).
  • Nautilus-S—a project for deanonymizing Tor traffic with the help of rogue Tor servers.
  • Reward—a project to covertly penetrate P2P networks, like the one used for torrents.
  • Mentor—a project to monitor and search email communications on the servers of Russian companies.
  • Hope—a project to investigate the topology of the Russian internet and how it connects to other countries’ networks.
  • Tax-3—a project for the creation of a closed intranet to store the information of highly-sensitive state figures, judges, and local administration officials, separate from the rest of the state’s IT networks.

BBC Russia, who received the full trove of documents, claims there were other older projects for researching other network protocols such as Jabber (instant messaging), ED2K (eDonkey), and OpenFT (enterprise file transfer).

Other files posted on the Digital Revolution Twitter account claimed that the FSB was also tracking students and pensioners.


Science Fiction Writers Helping Imagine Future Threats

[2019.07.23] The French army is going to put together a team of science fiction writers to help imagine future threats.

Leaving aside the question of whether science fiction writers are better or worse at envisioning nonfictional futures, this isn’t new. The US Department of Homeland Security did the same thing over a decade ago, and I wrote about it back then:

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Science fiction writers are creative, and creativity helps in any future scenario brainstorming. But please, keep the people who actually know science and technology in charge.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.


Attorney General William Barr on Encryption Policy

[2019.07.24] Yesterday, Attorney General William Barr gave a major speech on encryption policy—what is commonly known as “going dark.” Speaking at Fordham University in New York, he admitted that adding backdoors decreases security, but argued that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability, a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation’s nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security, say, by way of illustration, one that protects against 99 percent of foreseeable threats, is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost reducing law enforcements [sic] access to zero percent, the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how—an approach we have derisively named “nerd harder.”

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having, not the fake one about whether or not we can have both security and surveillance.
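
One way to see the shape of that trade-off is to put toy numbers on it. Everything below is invented for illustration, and reading Barr’s percentages as per-user annual risk is only one possible interpretation of his hypothetical:

    # Toy arithmetic only -- every number is invented or taken from Barr's
    # hypothetical, not measured.
    users = 1_000_000_000        # users of a hypothetical consumer platform
    risk_with_backdoor = 0.010   # Barr's "99 percent assurance" -> 1% residual risk
    risk_without = 0.005         # his "99.5 percent" -> 0.5% residual risk
    lawful_requests = 50_000     # invented number of accounts sought per year

    extra_victims = users * (risk_with_backdoor - risk_without)
    accessed = lawful_requests * 0.80   # his "80 percent of the access it might seek"
    print(f"Extra compromised accounts per year: {extra_victims:,.0f}")  # 5,000,000
    print(f"Accounts reached by lawful access:   {accessed:,.0f}")       # 40,000

With these inputs, the “slight” difference between 99 and 99.5 percent is five million additional compromised accounts. The per-user risk increase is small, but everyone pays it; the benefit is real, but narrowly targeted. That asymmetry is the substance of the security-vs.-security debate.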

Barr makes the point that this is about “consumer cybersecurity,” and not “nuclear launch codes.” This is true, but ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, law enforcement officers, nuclear power plant operators, election officials and so on. There’s no longer a difference between consumer tech and government tech—it’s all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA, the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms, especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE, which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003—which seems to have been an NSA operation—and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications—data in transit. The “going dark” debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr’s latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody—even at the cost of law-enforcement access—than it is to allow access at the cost of security. Barr is wrong: it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

EDITED TO ADD: More news articles.

EDITED TO ADD (7/28): Gen. Hayden comments.

EDITED TO ADD (7/30): Good response by Robert Graham.


Software Developers and Security

[2019.07.25] According to a survey: “68% of the security professionals surveyed believe it’s a programmer’s job to write secure code, but they also think less than half of developers can spot security holes.” And that’s a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get developers to make remediation of vulnerabilities a priority. Worse still, 68% of security professionals feel fewer than half of developers can spot security vulnerabilities later in the life cycle. Roughly half of security professionals said they most often found bugs after code is merged in a test environment.

At the same time, nearly 70% of developers said that while they are expected to write secure code, they get little guidance or help. One disgruntled programmer said, “It’s a mess, no standardization, most of my work has never had a security scan.”

Another problem is it seems many companies don’t take security seriously enough. Nearly 44% of those surveyed reported that they’re not judged on their security vulnerabilities.


Insider Logic Bombs

[2019.07.26] Add to the “not very smart criminals” file:

According to court documents, Tinley provided software services for Siemens’ Monroeville, PA offices for nearly ten years. Among the work he was asked to perform was the creation of spreadsheets that the company was using to manage equipment orders.

The spreadsheets included custom scripts that would update the content of the file based on current orders stored in other, remote documents, allowing the company to automate inventory and order management.

But while Tinley’s files worked for years, they started malfunctioning around 2014. According to court documents, Tinley planted so-called “logic bombs” that would trigger after a certain date, and crash the files.

Every time the scripts would crash, Siemens would call Tinley, who’d fix the files for a fee.
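
The mechanism described in the court documents is about as simple as sabotage gets: a script that compares the current date against a hard-coded trigger and deliberately fails afterward. A hypothetical sketch of the pattern (not Tinley’s actual spreadsheet code):

    # Hypothetical sketch of a date-triggered logic bomb -- the pattern the
    # indictment describes, not the actual code from the Siemens spreadsheets.
    from datetime import date

    TRIGGER_DATE = date(2014, 5, 1)  # invented trigger date

    def update_inventory(orders: list) -> list:
        if date.today() >= TRIGGER_DATE:
            # Deliberate, innocuous-looking failure that invites a service call.
            raise IndexError("list index out of range")
        # ...normal order-processing logic would run here...
        return orders

For defenders, the lesson is that a crash that reliably appears after a certain date, and disappears after the contractor’s “fix,” deserves a diff of the code, not just a restart.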


Wanted: Cybersecurity Imagery

[2019.07.29] Eli Sugarman of the Hewlett Foundation laments the sorry state of cybersecurity imagery:

The state of cybersecurity imagery is, in a word, abysmal. A simple Google Image search for the term proves the point: It’s all white men in hoodies hovering menacingly over keyboards, green “Matrix”-style 1s and 0s, glowing locks and server racks, or some random combination of those elements—sometimes the hoodie-clad men even wear burglar masks. Each of these images fails to convey anything about either the importance or the complexity of the topic—or the huge stakes for governments, industry and ordinary people alike inherent in topics like encryption, surveillance and cyber conflict.

I agree that this is a problem. It’s not something I noticed until recently. I work in words. I think in words. I don’t use PowerPoint (or anything similar) when I give presentations. I don’t need visuals.

But recently, I started teaching at the Harvard Kennedy School, and I constantly use visuals in my class. I made those same image searches, and I came up with similarly unacceptable results.

But unlike me, Hewlett is doing something about it. You can help: participate in the Cybersecurity Visuals Challenge.

EDITED TO ADD (8/5): News article. Slashdot thread.


ACLU on the GCHQ Backdoor Proposal

[2019.07.30] Back in January, two senior GCHQ officials proposed a specific backdoor for communications systems. It was universally derided as unworkable—by me, as well. Now Jon Callas of the ACLU explains why.


Another Attack Against Driverless Cars

[2019.07.31] In this piece of research, attackers successfully attack a driverless car system—Renault Captur’s “Level 0” autopilot (Level 0 systems advise human drivers but do not directly operate cars)—by following the cars with drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot’s sensors.

Boing Boing post.


Facebook Plans on Backdooring WhatsApp

[2019.08.01] This article points out that Facebook’s planned content moderation scheme will result in an encryption backdoor into WhatsApp:

In Facebook’s vision, the actual end-to-end encryption client itself, such as WhatsApp, will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as a true wiretapping service.

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Once this is in place, it’s easy for the government to demand that Facebook add another filter—one that searches for communications that they care about—and alert them when it gets triggered.
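
To make the alleged design concrete (and note the correction below: Facebook says it has no such plans), here is a hypothetical sketch of what client-side scanning reduces to. The blacklist, in this model, would be updated from a central server:

    # Hypothetical sketch of the client-side-scanning model the article
    # alleges -- not real WhatsApp or Facebook code.
    BLACKLIST = {"example banned phrase"}  # invented; centrally updated in the alleged design

    def send_message(plaintext: str, encrypt, report_to_server):
        # The scan runs on the device before encryption, so the end-to-end
        # encryption is never technically broken -- it is simply bypassed.
        if any(term in plaintext.lower() for term in BLACKLIST):
            report_to_server(plaintext)  # the "machine-based wiretap" step
        return encrypt(plaintext)

The point of the sketch is that nothing about the encryption changes; the compromise lives entirely in the client.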

Of course alternatives like Signal will exist for those who don’t want to be subject to Facebook’s content moderation, but what happens when this filtering technology is built into operating systems?

The problem is that if Facebook’s model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

I don’t think this will happen—why does AT&T care about content moderation?—but it is something to watch.

EDITED TO ADD (8/2): This story is wrong. Read my correction.


How Privacy Laws Hurt Defendants

[2019.08.02] Rebecca Wexler has an interesting op-ed about an inadvertent harm that privacy laws can cause: while law enforcement can often access third-party data to aid in prosecution, the accused don’t have the same level of access to aid in their defense:

The proposed privacy laws would make this situation worse. Lawmakers may not have set out to make the criminal process even more unfair, but the unjust result is not surprising. When lawmakers propose privacy bills to protect sensitive information, law enforcement agencies lobby for exceptions so they can continue to access the information. Few lobby for the accused to have similar rights. Just as the privacy interests of poor, minority and heavily policed communities are often ignored in the lawmaking process, so too are the interests of criminal defendants, many from those same communities.

In criminal cases, both the prosecution and the accused have a right to subpoena evidence so that juries can hear both sides of the case. The new privacy bills need to ensure that law enforcement and defense investigators operate under the same rules when they subpoena digital data. If lawmakers believe otherwise, they should have to explain and justify that view.

For more detail, see her paper.


Disabling Security Cameras with Lasers

[2019.08.02] There’s a really interesting video of protesters in Hong Kong using some sort of laser to disable security cameras. I know nothing more about the technologies involved.

EDITED TO ADD (8/14): LIDAR from self-driving cars has damaged security cameras before.


More on Backdooring (or Not) WhatsApp

[2019.08.02] Yesterday, I blogged about a Facebook plan to backdoor WhatsApp by adding client-side scanning and filtering. It seems that I was wrong, and there are no such plans.

The only source for that post was a Forbes essay by Kalev Leetaru, which links to a previous Forbes essay by him, which links to a video presentation from a Facebook developers conference.

Leetaru extrapolated a lot out of very little. I watched the video (the relevant section is at the 23:00 mark), and it doesn’t talk about client-side scanning of messages. It doesn’t talk about messaging apps at all. It discusses using AI techniques to find bad content on Facebook, and the difficulties that arise from dynamic content:

So far, we have been keeping this fight [against bad actors and harmful content] on familiar grounds. And that is, we have been training our AI models on the server and making inferences on the server when all the data are flooding into our data centers.

While this works for most scenarios, it is not the ideal setup for some unique integrity challenges. URL masking is one such problem which is very hard to do. We have the traditional way of server-side inference. What is URL masking? Let us imagine that a user sees a link on the app and decides to click on it. When they click on it, Facebook actually logs the URL to crawl it at a later date. But…the publisher can dynamically change the content of the webpage to make it look more legitimate [to Facebook]. But then our users click on the same link, they see something completely different—oftentimes it is disturbing; oftentimes it violates our policy standards. Of course, this creates a bad experience for our community that we would like to avoid. This and similar integrity problems are best solved with AI on the device.
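
The “URL masking” he describes is ordinary cloaking: the publisher’s server keys its response on who is asking. A hypothetical sketch of the publisher’s side (“facebookexternalhit” is Facebook’s crawler user-agent token; the page names and the render stub are invented):

    # Hypothetical sketch of cloaking: serve a clean page to Facebook's
    # after-the-fact crawler and a different one to real users.
    def render(page: str) -> str:
        # Stub standing in for a real template engine.
        return f"<html><!-- {page} --></html>"

    def handle_request(user_agent: str) -> str:
        if "facebookexternalhit" in user_agent.lower():  # Facebook's crawler
            return render("innocuous_page")              # what the crawl sees
        return render("violating_page")                  # what a clicking user sees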

On-device inference might indeed help with problems like that, but it would also hand whatever secret AI sauce Facebook has to every one of its users to reverse engineer—which means it’s probably not going to happen. And it is a dumb idea, for reasons Steve Bellovin has pointed out.

Facebook’s first published response was a comment on the Hacker News website from a user named “wcathcart,” which Facebook’s Nate Cardozo (more on him below) assures me is Will Cathcart, the vice president of WhatsApp. (I have no reason to doubt his identity, but surely there is a more official news channel that Facebook could have chosen to use if they wanted to.) Cathcart wrote:

We haven’t added a backdoor to WhatsApp. The Forbes contributor referred to a technical talk about client side AI in general to conclude that we might do client side scanning of content on WhatsApp for anti-abuse purposes.

To be crystal clear, we have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise which is why we are opposed to it.

Facebook’s second published response was a comment on my original blog post, which has been confirmed to me by the WhatsApp people as authentic. It’s more of the same.

So, this was a false alarm. And, to be fair, Alec Muffett called foul on the first Forbes piece:

So, here’s my pre-emptive finger wag: Civil Society’s pack mentality can make us our own worst enemies. If we go around repeating one man’s Germanic conspiracy theory, we may doom ourselves to precisely what we fear. Instead, we should—we must—take steps to constructively demand what we actually want: End to End Encryption which is worthy of the name.

Blame accepted. But in general, this is the sort of thing we need to watch for. End-to-end encryption only secures data in transit. The data has to be in the clear on the device where it is created, and it has to be in the clear on the device where it is consumed. Those are the obvious places for an eavesdropper to get a copy.

This has been a long process. Facebook desperately wanted to convince me to correct the record, while at the same time not wanting to write something on their own letterhead (just a couple of comments, so far). I spoke at length with Privacy Policy Manager Nate Cardozo, whom Facebook hired last December from EFF. (Back then, I remember thinking of him—and the two other new privacy hires—as basically human warrant canaries. If they ever leave Facebook under non-obvious circumstances, we know that things are bad.) He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this. I am trusting him, while also reminding everyone that Facebook has broken so many privacy promises that they really can’t be trusted.

Final note: If they want to be trusted, Adam Shostack and I gave them a road map.

Hacker News thread.

EDITED TO ADD (8/4): Slashdot covered my retraction.


Regulating International Trade in Commercial Spyware

[2019.08.05] Siena Anstis, Ronald J. Deibert, and John Scott-Railton of Citizen Lab published an editorial calling for regulating the international trade in commercial surveillance systems until we can figure out how to curb human rights abuses.

Any regime of rigorous human rights safeguards that would make a meaningful change to this marketplace would require many elements, for instance, compliance with the U.N. Guiding Principles on Business and Human Rights. Corporate tokenism in this space is unacceptable; companies will have to affirmatively choose human rights concerns over growing profits and hiding behind the veneer of national security. Considering the lies that have emerged from within the surveillance industry, self-reported compliance is insufficient; compliance will have to be independently audited and verified and accept robust measures of outside scrutiny.

The purchase of surveillance technology by law enforcement in any state must be transparent and subject to public debate. Further, its use must comply with frameworks setting out the lawful scope of interference with fundamental rights under international human rights law and applicable national laws, such as the “Necessary and Proportionate” principles on the application of human rights to surveillance. Spyware companies like NSO Group have relied on rubber stamp approvals by government agencies whose permission is required to export their technologies abroad. To prevent abuse, export control systems must instead prioritize a reform agenda that focuses on minimizing the negative human rights impacts of surveillance technology and that ensures—with clear and immediate consequences for those who fail—that companies operate in an accountable and transparent environment.

Finally, and critically, states must fulfill their duty to protect individuals against third-party interference with their fundamental rights. With the growth of digital authoritarianism and the alarming consequences that it may hold for the protection of civil liberties around the world, rights-respecting countries need to establish legal regimes that hold companies and states accountable for the deployment of surveillance technology within their borders. Law enforcement and other organizations that seek to protect refugees or other vulnerable persons coming from abroad will also need to take digital threats seriously.


Phone Pharming for Ad Fraud

[2019.08.06] Interesting article on people using banks of smartphones to commit ad fraud for profit.

No one knows how prevalent ad fraud is on the Internet. I believe it is surprisingly high—here’s an article that places losses between $6.5 and $19 billion annually—and something companies like Google and Facebook would prefer remain unresearched.


Brazilian Cell Phone Hack

[2019.08.07] I know there’s a lot of politics associated with this story, but concentrate on the cybersecurity aspect for a moment. The cell phones of a thousand Brazilians, including senior government officials, were hacked—seemingly by actors much less sophisticated than rival governments.

Brazil’s federal police arrested four people for allegedly hacking 1,000 cellphones belonging to various government officials, including that of President Jair Bolsonaro.

Police detective João Vianey Xavier Filho said the group hacked into the messaging apps of around 1,000 different cellphone numbers, but provided little additional information at a news conference in Brasilia on Wednesday. Cellphones used by Bolsonaro were among those attacked by the group, the justice ministry said in a statement on Thursday, adding that the president was informed of the security breach.

[…]

In the court order determining the arrest of the four suspects, Judge Vallisney de Souza Oliveira wrote that the hackers had accessed Moro’s Telegram messaging app, along with those of two judges and two federal police officers.

When I say that smartphone security equals national security, this is the kind of thing I am talking about.


AT&T Employees Took Bribes to Unlock Smartphones

[2019.08.08] This wasn’t a small operation:

A Pakistani man bribed AT&T call-center employees to install malware and unauthorized hardware as part of a scheme to fraudulently unlock cell phones, according to the US Department of Justice. Muhammad Fahd, 34, was extradited from Hong Kong to the US on Friday and is being detained pending trial.

An indictment alleges that “Fahd recruited and paid AT&T insiders to use their computer credentials and access to disable AT&T’s proprietary locking software that prevented ineligible phones from being removed from AT&T’s network,” a DOJ announcement yesterday said. “The scheme resulted in millions of phones being removed from AT&T service and/or payment plans, costing the company millions of dollars. Fahd allegedly paid the insiders hundreds of thousands of dollars—paying one co-conspirator $428,500 over the five-year scheme.”

In all, AT&T insiders received more than $1 million in bribes from Fahd and his co-conspirators, who fraudulently unlocked more than 2 million cell phones, the government alleged. Three former AT&T customer service reps from a call center in Bothell, Washington, already pleaded guilty and agreed to pay the money back to AT&T.


Supply-Chain Attack against the Electron Development Platform

[2019.08.08] Electron is a cross-platform development system for many popular communications apps, including Skype, Slack, and WhatsApp. Security vulnerabilities in the update system allow someone to silently inject malicious code into applications. From a news article:

At the BSides LV security conference on Tuesday, Pavel Tsakalidis demonstrated a tool he created called BEEMKA, a Python-based tool that allows someone to unpack Electron ASAR archive files and inject new code into Electron’s JavaScript libraries and built-in Chrome browser extensions. The vulnerability is not part of the applications themselves but of the underlying Electron framework—and that vulnerability allows malicious activities to be hidden within processes that appear to be benign. Tsakalidis said that he had contacted Electron about the vulnerability but that he had gotten no response—and the vulnerability remains.

While making these changes required administrator access on Linux and MacOS, it only requires local access on Windows. Those modifications can create new event-based “features” that can access the file system, activate a Web cam, and exfiltrate information from systems using the functionality of trusted applications—including user credentials and sensitive data. In his demonstration, Tsakalidis showed a backdoored version of Microsoft Visual Studio Code that sent the contents of every code tab opened to a remote website.

Basically, the Electron ASAR files aren’t signed or encrypted, so modifying them is easy.
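
Since the framework will not notice a modified archive, detecting tampering falls to the user. A minimal sketch, assuming you recorded a known-good hash at install time; the path below is illustrative and varies by application and platform:

    # Minimal tamper check for an Electron app.asar archive (Python 3, stdlib).
    # Assumes a known-good SHA-256 was recorded at install time; the path is
    # illustrative and differs by application and platform.
    import hashlib

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    KNOWN_GOOD = "replace-with-your-recorded-hash"
    path = "/Applications/Slack.app/Contents/Resources/app.asar"  # example path
    print("MODIFIED" if sha256_of(path) != KNOWN_GOOD else "unchanged")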

Note that this attack requires local access to the computer, which means that an attacker that could do this could do much more damaging things as well. But once an app has been modified, it can be distributed to other users. It’s not a big deal attack, but it’s a vulnerability that should be closed.


Evaluating the NSA’s Telephony Metadata Program

[2019.08.12] Interesting analysis: “Examining the Anomalies, Explaining the Value: Should the USA FREEDOM Act’s Metadata Program be Extended?” by Susan Landau and Asaf Lubin.

Abstract: The telephony metadata program, which was authorized under Section 215 of the PATRIOT Act, remains one of the most controversial programs launched by the U.S. Intelligence Community (IC) in the wake of the 9/11 attacks. Under the program, major U.S. carriers were ordered to provide NSA with daily Call Detail Records (CDRs) for all communications to, from, or within the United States. The Snowden disclosures and the public controversy that followed led Congress in 2015 to end bulk collection and amend the CDR authorities with the adoption of the USA FREEDOM Act (UFA).

For a time, the new program seemed to be functioning well. Nonetheless, three issues emerged around the program. The first concern was over high numbers: in both 2016 and 2017, the Foreign Intelligence Surveillance Court issued 40 orders for collection, but the NSA collected hundreds of millions of CDRs, and the agency provided little clarification for the high numbers. The second emerged in June 2018 when the NSA announced the purging of three years’ worth of CDR records for “technical irregularities.” Finally, in March 2019 it was reported that the NSA had decided to completely abandon the program and not seek its renewal as it is due to sunset in late 2019.

This paper sheds significant light on all three of these concerns. First, we carefully analyze the numbers, showing how forty orders might lead to the collection of several million CDRs, thus offering a model to assist in understanding Intelligence Community transparency reporting across its surveillance programs. Second, we show how the architecture of modern telephone communications might cause collection errors that fit the reported reasons for the 2018 purge. Finally, we show how changes in the terrorist threat environment as well as in the technology and communication methods they employ—in particular the deployment of asynchronous encrypted IP-based communications—have made the telephony metadata program far less beneficial over time. We further provide policy recommendations for Congress to increase effective intelligence oversight.
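
Their first point is easy to make concrete. A UFA order covers not just the target’s call records but also those of the numbers the target communicated with (the statute actually permits two hops; the sketch below simplifies to one), collected daily for the life of the order, so the totals multiply. A toy calculation with invented parameters; the paper’s model is more careful:

    # Toy illustration of how 40 orders can plausibly yield millions of CDRs.
    # All parameters are invented; Landau and Lubin's model is more careful.
    orders = 40
    contacts_per_target = 100   # assumed first-hop numbers per target
    records_per_day = 5         # assumed CDR-generating events per number per day
    days = 180                  # assumed collection period per order

    numbers_collected = orders * (1 + contacts_per_target)  # target plus one hop
    total_cdrs = numbers_collected * records_per_day * days
    print(f"{total_cdrs:,} CDRs")  # 3,636,000 with these inputs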


Exploiting GDPR to Get Private Information

[2019.08.13] A researcher abused the GDPR to get information on his fiancee:

It is one of the first tests of its kind to exploit the EU’s General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

“Generally if it was an extremely large company—especially tech ones—they tended to do really well,” he told the BBC.

“Small companies tended to ignore me.

“But the kind of mid-sized businesses that knew about GDPR, but maybe didn’t have much of a specialised process [to handle requests], failed.”

He declined to identify the organisations that had mishandled the requests, but said they had included:

  • a UK hotel chain that shared a complete record of his partner’s overnight stays
  • two UK rail companies that provided records of all the journeys she had taken with them over several years
  • a US-based educational company that handed over her high school grades, mother’s maiden name and the results of a criminal background check survey.


Side-Channel Attack against Electronic Locks

[2019.08.14] Several high-security electronic locks are vulnerable to side-channel attacks involving power monitoring.
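
The idea behind such attacks, reduced to a toy model: the lock’s power draw depends on how much work the comparison does, so a digit-by-digit check that stops at the first mismatch leaks how long the matching prefix is. A simulation with invented numbers (a real attack measures actual current with an oscilloscope and needs statistics to beat noise):

    # Toy simulation of a power side channel against a digit-by-digit code
    # check. All numbers are invented; real attacks measure current draw.
    SECRET = "4921"  # the code stored in our imaginary lock

    def power_trace(guess: str) -> int:
        # Pretend each successfully compared digit draws one extra unit of
        # power before the check stops at the first mismatch.
        units = 0
        for g, s in zip(guess, SECRET):
            if g != s:
                break
            units += 1
        return units

    # Recover the code one digit at a time by watching which guess draws most power.
    recovered = ""
    while len(recovered) < len(SECRET):
        pad = "0" * (len(SECRET) - len(recovered) - 1)  # keypad requires a full entry
        recovered += max("0123456789", key=lambda d: power_trace(recovered + d + pad))
    print(recovered)  # prints 4921

Because each digit is attacked independently, the search is ten guesses per digit instead of ten thousand for the whole code.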


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.

Copyright © 2019 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.