October 15, 2019
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- Another Side Channel in Intel Chips
- Cracking Forgotten Passwords
- I’m Looking to Hire a Strategist to Help Figure Out Public-Interest Tech
- Revisiting Software Vulnerabilities in the Boeing 787
- New Biometrics
- A Feminist Take on Information Privacy
- Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago
- France Outlines Its Approach to Cyberwar
- Russians Hack FBI Comms System
- Ineffective Package Tracking Facilitates Fraud
- On Chinese “Spy Trains”
- Superhero Movies and Security Lessons
- Supply-Chain Security and Trust
- NSA on the Future of National Cybersecurity
- New Research into Russian Malware
- Measuring the Security of IoT Devices
- Tracking by Smart TVs
- More Cryptanalysis of Solitaire
- Edward Snowden’s Memoirs
- New Unpatchable iPhone Exploit Allows Jailbreaking
- Speakers Censored at AISA Conference in Melbourne
- Illegal Data Center Hidden in Former NATO Bunker
- Cheating at Professional Poker
- Wi-Fi Hotspot Tracking
- New Reductor Nation-State Malware Compromises TLS
- Details on Uzbekistan Government Malware: SandCat
- I Have a New Book: We Have Root
- Factoring 2048-bit Numbers Using 20 Million Qubits
In late 2011, Intel introduced a performance enhancement to its line of server processors that allowed network cards and other peripherals to connect directly to a CPU’s last-level cache, rather than following the standard (and significantly longer) path through the server’s main memory. By avoiding system memory, Intel’s DDIO (short for Data-Direct I/O) increased input/output bandwidth and reduced latency and power consumption.
Now, researchers are warning that, in certain scenarios, attackers can abuse DDIO to obtain keystrokes and possibly other types of sensitive data that flow through the memory of vulnerable servers. The most serious form of attack can take place in data centers and cloud environments that have both DDIO and remote direct memory access enabled to allow servers to exchange data. A server leased by a malicious hacker could abuse the vulnerability to attack other customers. To prove their point, the researchers devised an attack that allows a server to steal keystrokes typed into a protected SSH (secure shell) session established between another server and an application server.
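At its core, the keystroke attack reduces to a timing side channel: an interactive SSH session sends each keystroke in its own packet, so anyone who can recover packet arrival times (here, by watching the shared cache) learns inter-keystroke intervals, which correlate with which keys were pressed. A minimal sketch of that final inference step, using a made-up threshold rather than the researchers' actual statistical model:

```python
# Toy illustration of why one-packet-per-keystroke traffic leaks data.
# The timestamps and the classification threshold below are fabricated;
# published attacks train statistical models over typing corpora.

def inter_keystroke_intervals(timestamps):
    """Millisecond gaps between consecutive keystroke packets."""
    return [round((b - a) * 1000) for a, b in zip(timestamps, timestamps[1:])]

def classify(interval_ms, threshold_ms=120):
    """Crude guess at the digraph type from one interval (hypothetical)."""
    return "alternating-hand?" if interval_ms < threshold_ms else "same-finger?"

# Packet capture times in seconds (fabricated example data).
times = [0.000, 0.095, 0.260, 0.340]
gaps = inter_keystroke_intervals(times)
print(gaps)                         # [95, 165, 80]
print([classify(g) for g in gaps])
```

Real attacks fit far richer models to these intervals; the point is only that arrival times alone carry signal about what was typed.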
[2019.09.18] Expandpass is a string expansion program. It’s “useful for cracking passwords you kinda-remember.” You tell the program what you remember about the password and it tries related passwords.
I learned about it in this article about Phil Dougherty, who helps people recover lost cryptocurrency passwords (mostly Ethereum) for a cut of the recovered value.
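The underlying idea is simple to sketch: you describe the pieces you half-remember, and the tool enumerates every combination for a cracker to try. This is not Expandpass's actual pattern syntax, just the concept in a few lines:

```python
# A sketch of the idea behind Expandpass-style tools: enumerate every
# candidate password from fragments you half-remember. This is not
# Expandpass's actual pattern syntax, just the concept via itertools.
from itertools import product

def expand(*choice_sets):
    """Yield every concatenation, picking one option from each set."""
    for combo in product(*choice_sets):
        yield "".join(combo)

# "It started with 'correct' or 'Correct', maybe a separator,
#  then 'horse', then a digit that was 7 or 9."
candidates = list(expand(["correct", "Correct"],
                         ["", "-", "_"],
                         ["horse"],
                         ["7", "9"]))
print(len(candidates))    # 12
print(candidates[0])      # correcthorse7
```

A real recovery workflow feeds candidates like these into a password cracker or wallet-unlock loop; the combinatorics stay tractable only because you remember most of the structure.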
[2019.09.18] I am in search of a strategic thought partner: a person who can work closely with me over the next 9 to 12 months in assessing what’s needed to advance the practice, integration, and adoption of public-interest technology.
All of the details are in the RFP. The selected strategist will work closely with me on a number of clear deliverables. This is a contract position that could possibly become a salaried position in a subsequent phase, and under a different agreement.
I’m working with the team at Yancey Consulting, who will follow up with all proposers and manage the process. Please email Lisa Yancey at firstname.lastname@example.org.
What our smartphones and relationship abusers share is that they both exert power over us in a world shaped to tip the balance in their favour, and they both work really, really hard to obscure this fact and keep us confused and blaming ourselves. Here are some of the ways our unequal relationship with our smartphones is like an abusive relationship:
- They isolate us from deeper, competing relationships in favour of superficial contact — ‘user engagement’ — that keeps their hold on us strong. Working with social media, they insidiously curate our social lives, manipulating us emotionally with dark patterns to keep us scrolling.
- They tell us the onus is on us to manage their behavior. It’s our job to tiptoe around them and limit their harms. Spending too much time on a literally-designed-to-be-behaviorally-addictive phone? They send company-approved messages about our online time, but ban from their stores the apps that would really cut our use. We just need to use willpower. We just need to be good enough to deserve them.
- They betray us, leaking data / spreading secrets. What we shared privately with them is suddenly public. Sometimes this destroys lives, but hey, we only have ourselves to blame. They fight nasty and under-handed, and are so, so sorry when they get caught that we’re meant to feel bad for them. But they never truly change, and each time we take them back, we grow weaker.
- They love-bomb us when we try to break away, piling on the free data or device upgrades, making us click through page after page of dark pattern, telling us no one understands us like they do, no one else sees everything we really are, no one else will want us.
- It’s impossible to just cut them off. They’ve wormed themselves into every part of our lives, making life without them unimaginable. And anyway, the relationship is complicated. There is love in it, or there once was. Surely we can get back to that if we just manage them the way they want us to?
Nope. Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true.
EDITED TO ADD (9/22): Cindy Cohn echoed a similar sentiment in her essay about John Barlow and his legacy.
This morning, the company announced that they “decrypted two 256-bit asymmetric public keys in approximately 50 seconds from a standard laptop computer.” Really. They did. This keylength is so small it has never been considered secure. It was too small to be part of the RSA Factoring Challenge when it was introduced in 1991. In 1977, when Ron Rivest, Adi Shamir, and Len Adleman first described RSA, they included a challenge with a 426-bit key. (It was factored in 1994.)
The press release goes on: “Crown Sterling also announced the consistent decryption of 512-bit asymmetric public key in as little as five hours also using standard computing.” They didn’t demonstrate it, but if they’re right they’ve matched a factoring record set in 1999. Five hours is significantly less than the 5.2 months it took in 1999, but slower than would be expected if Crown Sterling just used the 1999 techniques with modern CPUs and networks.
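For a sense of scale: at these key sizes, even textbook factoring algorithms succeed quickly, which is why “decrypting” such keys demonstrates nothing about modern RSA. Below is a sketch of Pollard's rho, one of the simplest such algorithms, breaking a toy semiprime; the primes are chosen purely for illustration. A genuine 256-bit modulus falls to publicly available number-field-sieve tools on ordinary hardware:

```python
# Pollard's rho factoring: enough to crack toy RSA moduli instantly.
# (Larger moduli need better algorithms, e.g. the number field sieve,
# but those are also public and run on commodity computers.)
from math import gcd

def pollard_rho(n, c=1):
    """Return a nontrivial factor of composite n (Floyd cycle-finding)."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n           # tortoise: one step
        y = (y * y + c) % n           # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    if d != n:
        return d
    return pollard_rho(n, c + 1)      # cycle gave no factor; new polynomial

# A small RSA-style modulus: the product of two "secret" primes
# (the 10,000th and 100,000th primes, used here for illustration).
p, q = 104729, 1299709
n = p * q
f = pollard_rho(n)
print(sorted((f, n // f)))            # [104729, 1299709]
```

Running time grows roughly with the fourth root of the modulus, so this toy breaks in milliseconds; the point of 2048-bit keys is precisely that no known classical algorithm scales up.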
Is anyone taking this company seriously anymore? I honestly wouldn’t be surprised if this was a hoax press release. It’s not currently on the company’s website. (And, if it is a hoax, I apologize to Crown Sterling. I’ll post a retraction as soon as I hear from you.)
EDITED TO ADD: First, the press release is real. And second, I forgot to include the quote from CEO Robert Grant: “Today’s decryptions demonstrate the vulnerabilities associated with the current encryption paradigm. We have clearly demonstrated the problem which also extends to larger keys.”
People, this isn’t hard. Find an RSA Factoring Challenge number that hasn’t been factored yet and factor it. Once you do, the entire world will take you seriously. Until you do, no one will. And, bonus, you won’t have to reveal your super-secret world-destabilizing cryptanalytic techniques.
EDITED TO ADD (9/21): Others are laughing at this, too.
EDITED TO ADD (9/24): More commentary.
[2019.09.23] In a document published earlier this month (in French), France described the legal framework in which it will conduct cyberwar operations. Lukasz Olejnik explains what it means, and it’s worth reading.
American officials discovered that the Russians had dramatically improved their ability to decrypt certain types of secure communications and had successfully tracked devices used by elite FBI surveillance teams. Officials also feared that the Russians may have devised other ways to monitor U.S. intelligence communications, including hacking into computers not connected to the internet. Senior FBI and CIA officials briefed congressional leaders on these issues as part of a wide-ranging examination on Capitol Hill of U.S. counterintelligence vulnerabilities.
These compromises, the full gravity of which became clear to U.S. officials in 2012, gave Russian spies in American cities including Washington, New York and San Francisco key insights into the location of undercover FBI surveillance teams, and likely the actual substance of FBI communications, according to former officials. They provided the Russians opportunities to potentially shake off FBI surveillance and communicate with sensitive human sources, check on remote recording devices and even gather intelligence on their FBI pursuers, the former officials said.
It’s unclear whether the Russians were able to recover encrypted data or just perform traffic analysis. The Yahoo story implies the former; the NBC News story says otherwise. It’s hard to tell if the reporters truly understand the difference. We do know, from research Matt Blaze and others did almost ten years ago, that at least one FBI radio system was horribly insecure in practice — but not in a way that breaks the encryption. Its poor design just encourages users to turn off the encryption.
[2019.09.25] This article discusses an e-commerce fraud technique in the UK. Because the Royal Mail only tracks packages to the postcode, and not to the address, it’s possible to commit a variety of different frauds. Tracking systems that rely on signatures are not similarly vulnerable.
[2019.09.26] The trade war with China has reached a new industry: subway cars. Congress is considering legislation that would prevent the world’s largest train maker, the Chinese-owned CRRC Corporation, from competing on new contracts in the United States.
Part of the reasoning behind this legislation is economic, and stems from worries about Chinese industries undercutting the competition and dominating key global industries. But another part involves fears about national security. News articles talk about “spy trains,” and the possibility that the train cars might surreptitiously monitor their passengers’ faces, movements, conversations or phone calls.
This is a complicated topic. There is definitely a national security risk in buying computer infrastructure from a country you don’t trust. That’s why there is so much worry about Chinese-made equipment for the new 5G wireless networks.
It’s also why the United States has blocked the cybersecurity company Kaspersky from selling its Russian-made antivirus products to US government agencies. Meanwhile, the chairman of China’s technology giant Huawei has pointed to NSA spying disclosed by Edward Snowden as a reason to mistrust US technology companies.
The reason these threats are so real is that it’s not difficult to hide surveillance or control infrastructure in computer components, and if they’re not turned on, they’re very difficult to find.
Like every other piece of modern machinery, modern train cars are filled with computers, and while it’s certainly possible to produce a subway car with enough surveillance apparatus to turn it into a “spy train,” in practice it doesn’t make much sense. The risk of discovery is too great, and the payoff would be too low. Like the United States, China is more likely to try to get data from the US communications infrastructure, or from the large Internet companies that already collect data on our every move as part of their business model.
While it’s unlikely that China would bother spying on commuters using subway cars, it would be much less surprising if a tech company offered free Internet on subways in exchange for surveillance and data collection. Or if the NSA used those corporate systems for their own surveillance purposes (just as the agency has spied on in-flight cell phone calls, according to an investigation by the Intercept and Le Monde, citing documents provided by Edward Snowden). That’s an easier, and more fruitful, attack path.
We have credible reports that the Chinese hacked Gmail around 2010, and there are ongoing concerns about both censorship and surveillance by the Chinese social-networking company TikTok. (TikTok’s parent company has told the Washington Post that the app doesn’t send American users’ info back to Beijing, and that the Chinese government does not influence the app’s use in the United States.)
Even so, these examples illustrate an important point: there’s no escaping the technology of inevitable surveillance. You have little choice but to rely on the companies that build your computers and write your software, whether in your smartphones, your 5G wireless infrastructure, or your subway cars. And those systems are so complicated that they can be secretly programmed to operate against your interests.
Last year, Le Monde reported that the Chinese government bugged the computer network of the headquarters of the African Union in Addis Ababa. China had built and outfitted the organization’s new headquarters as a foreign aid gift, reportedly secretly configuring the network to send copies of confidential data to Shanghai every night between 2012 and 2017. China denied having done so, of course.
If there’s any lesson from all of this, it’s that everybody spies using the Internet. The United States does it. Our allies do it. Our enemies do it. Many countries do it to each other, with their success largely dependent on how sophisticated their tech industries are.
China dominates the subway car manufacturing industry because of its low prices — the same reason it dominates the 5G hardware industry. Whether these low prices are because the companies are more efficient than their competitors or because they’re being unfairly subsidized by the Chinese government is a matter to be determined at trade negotiations.
Finally, Americans must understand that higher prices are an inevitable result of banning cheaper tech products from China.
We might willingly pay the higher prices because we want domestic control of our telecommunications infrastructure. We might willingly pay more because of some protectionist belief that global trade is somehow bad. But we need to make these decisions to protect ourselves deliberately and rationally, recognizing both the risks and the costs. And while I’m worried about our 5G infrastructure built using Chinese hardware, I’m not worried about our subway cars.
This essay originally appeared on CNN.com.
EDITED TO ADD: I had a lot of trouble with CNN’s legal department with this essay. They were very reluctant to call out the US and its allies for similar behavior, and spent a lot more time adding caveats to statements that I didn’t think needed them. They wouldn’t let me link to this Intercept article talking about US, French, and German infiltration of supply chains, or even the NSA document from the Snowden archives that proved the statements.
[2019.09.27] A paper I co-wrote was just published in Security Journal: “Superheroes on screen: real life lessons for security debates“:
Abstract: Superhero films and episodic shows have existed since the early days of those media, but since 9/11, they have become one of the most popular and most lucrative forms of popular culture. These fantastic tales are not simple amusements but nuanced explorations of fundamental security questions. Their treatment of social issues of power, security and control are here interrogated using the Film Studies approach of close reading to showcase this relevance to the real-life considerations of the legitimacy of security approaches. By scrutinizing three specific pieces — Daredevil Season 2, Captain America: Civil War, and Batman v Superman: Dawn of Justice — superhero tales are framed (by the authors) as narratives which significantly influence the general public’s understanding of security, often encouraging them to view expansive power critically: to luxuriate within omnipotence while also recognizing the possibility as well as the need for limits, be they ethical or legal.
This was my first collaboration with Fareed Ben-Youssef, a film studies scholar. (And with Andrew Adams and Kiyoshi Murata.) It was fun to think about and write.
[2019.09.30] The United States government’s continuing disagreement with the Chinese company Huawei underscores a much larger problem with computer technologies in general: We have no choice but to trust them completely, and it’s impossible to verify that they’re trustworthy. Solving this problem, which is increasingly a national security issue, will require us to both make major policy changes and invent new technologies.
The Huawei problem is simple to explain. The company is based in China and subject to the rules and dictates of the Chinese government. The government could require Huawei to install back doors into the 5G routers it sells abroad, allowing the government to eavesdrop on communications or — even worse — take control of the routers during wartime. Since the United States will rely on those routers for all of its communications, we become vulnerable by building our 5G backbone on Huawei equipment.
It’s obvious that we can’t trust computer equipment from a country we don’t trust, but the problem is much more pervasive than that. The computers and smartphones you use are not built in the United States. Their chips aren’t made in the United States. The engineers who design and program them come from over a hundred countries. Thousands of people have the opportunity, acting alone, to slip a back door into the final product.
There’s more. Open-source software packages are increasingly targeted by groups installing back doors. Fake apps in the Google Play store illustrate vulnerabilities in our software distribution systems. The NotPetya worm was distributed by a fraudulent update to a popular Ukrainian accounting package, illustrating vulnerabilities in our update systems. Hardware chips can be back-doored at the point of fabrication, even if the design is secure. The National Security Agency exploited the shipping process to subvert Cisco routers intended for the Syrian telephone company. The overall problem is that of supply-chain security, because every part of the supply chain can be attacked.
And while nation-state threats like China and Huawei — or Russia and the antivirus company Kaspersky a couple of years earlier — make the news, many of the vulnerabilities I described above are being exploited by cybercriminals.
Policy solutions involve forcing companies to open their technical details to inspection, including the source code of their products and the designs of their hardware. Huawei and Kaspersky have offered this sort of openness as a way to demonstrate that they are trustworthy. This is not a worthless gesture, and it helps, but it’s not nearly enough. Too many back doors can evade this kind of inspection.
Technical solutions fall into two basic categories, both currently beyond our reach. One is to improve the technical inspection processes for products whose designers provide source code and hardware design specifications, and for products that arrive without any transparency information at all. In both cases, we want to verify that the end product is secure and free of back doors. Sometimes we can do this for some classes of back doors: We can inspect source code (this is how a Linux back door was discovered and removed in 2003) or the hardware design, which becomes a cleverness battle between attacker and defender.
This is an area that needs more research. Today, the advantage goes to the attacker. It’s hard to ensure that the hardware and software you examine is the same as what you get, and it’s too easy to create back doors that slip past inspection. And while we can find and correct some of these supply-chain attacks, we won’t find them all. It’s a needle-in-a-haystack problem, except we don’t know what a needle looks like. We need technologies, possibly based on artificial intelligence, that can inspect systems more thoroughly and faster than humans can do. We need them quickly.
The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, “You have to presume a dirty network.” Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?
It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it’s how we can have highly resilient distributed systems like Google’s network even though none of the individual components are particularly good. It’s also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.
Security is a lot harder than reliability. We don’t even really know how to build secure systems out of secure parts, let alone out of parts and processes that we can’t trust and that are almost certainly being subverted by governments and criminals around the world. Current security technologies are nowhere near good enough, though, to defend against these increasingly sophisticated attacks. So while this is an important part of the solution, and something we need to focus research on, it’s not going to solve our near-term problems.
At the same time, all of these problems are getting worse as computers and networks become more critical to personal and national security. The value of 5G isn’t for you to watch videos faster; it’s for things talking to things without bothering you. These things — cars, appliances, power plants, smart cities — increasingly affect the world in a direct physical manner. They’re increasingly autonomous, using A.I. and other technologies to make decisions without human intervention. The risk from Chinese back doors into our networks and computers isn’t that their government will listen in on our conversations; it’s that they’ll turn the power off or make all the cars crash into one another.
All of this doesn’t leave us with many options for today’s supply-chain problems. We still have to presume a dirty network — as well as back-doored computers and phones — and we can clean up only a fraction of the vulnerabilities. Citing the lack of non-Chinese alternatives for some of the communications hardware, already some are calling to abandon attempts to secure 5G from Chinese back doors and work on having secure American or European alternatives for 6G networks. It’s not nearly enough to solve the problem, but it’s a start.
Perhaps these half-solutions are the best we can do. Live with the problem today, and accelerate research to solve the problem for the future. These are research projects on a par with the Internet itself. They need government funding, like the Internet itself. And, also like the Internet, they’re critical to national security.
Critically, these systems must be as secure as we can make them. As former FCC Commissioner Tom Wheeler has explained, there’s a lot more to securing 5G than keeping Chinese equipment out of the network. This means we have to give up the fantasy that law enforcement can have back doors to aid criminal investigations without also weakening these systems. The world uses one network, and there can only be one answer: Either everyone gets to spy, or no one gets to spy. And as these systems become more critical to national security, a network secure from all eavesdroppers becomes more important.
This essay previously appeared in the New York Times.
There are four key implications of this revolution that policymakers in the national security sector will need to address:
The first is that the unprecedented scale and pace of technological change will outstrip our ability to effectively adapt to it. Second, we will be in a world of ceaseless and pervasive cyberinsecurity and cyberconflict against nation-states, businesses and individuals. Third, the flood of data about human and machine activity will put such extraordinary economic and political power in the hands of the private sector that it will transform the fundamental relationship, at least in the Western world, between government and the private sector. Finally, and perhaps most ominously, the digital revolution has the potential for a pernicious effect on the very legitimacy and thus stability of our governmental and societal structures.
He then goes on to explain these four implications. It’s all interesting, and it’s the sort of stuff you don’t generally hear from the NSA. He talks about technological changes causing social changes, and the need for people who understand that. (Hooray for public-interest technologists.) He talks about national security infrastructure in private hands, at least in the US. He talks about a massive geopolitical restructuring — a fundamental change in the relationship between private tech corporations and government. He talks about recalibrating the Fourth Amendment (of course).
The essay is more about the problems than the solutions, but there is a bit at the end:
The first imperative is that our national security agencies must quickly accept this forthcoming reality and embrace the need for significant changes to address these challenges. This will have to be done in short order, since the digital revolution’s pace will soon outstrip our ability to deal with it, and it will have to be done at a time when our national security agencies are confronted with complex new geopolitical threats.
Much of what needs to be done is easy to see — developing the requisite new technologies and attracting and retaining the expertise needed for that forthcoming reality. What is difficult is executing the solution to those challenges, most notably including whether our nation has the resources and political will to effect that solution. The roughly $60 billion our nation spends annually on the intelligence community might have to be significantly increased during a time of intense competition over the federal budget. Even if the amount is indeed so increased, spending additional vast sums to meet the challenges in an effective way will be a daunting undertaking. Fortunately, the same digital revolution that presents these novel challenges also sometimes provides the new tools (A.I., for example) to deal with them.
The second imperative is we must adapt to the unavoidable conclusion that the fundamental relationship between government and the private sector will be greatly altered. The national security agencies must have a vital role in reshaping that balance if they are to succeed in their mission to protect our democracy and keep our citizens safe. While there will be good reasons to increase the resources devoted to the intelligence community, other factors will suggest that an increasing portion of the mission should be handled by the private sector. In short, addressing the challenges will not necessarily mean that the national security sector will become massively large, with the associated risks of inefficiency, insufficient coordination and excessively intrusive surveillance and data retention.
A smarter approach would be to recognize that as the capabilities of the private sector increase, the scope of activities of the national security agencies could become significantly more focused, undertaking only those activities in which government either has a recognized advantage or must be the only actor. A greater burden would then be borne by the private sector.
It’s an extraordinary essay, less for its contents and more for the speaker. This is not the sort of thing the NSA publishes. The NSA doesn’t opine on broad technological trends and their social implications. It doesn’t publicly try to predict the future. It doesn’t philosophize for 6000 unclassified words. And, given how hard it would be to get something like this approved for public release, I am left to wonder what the purpose of the essay is. Is the NSA trying to lay the groundwork for some policy initiative? Some legislation? A budget request? What?
Charlie Warzel has a snarky response. His conclusion about the purpose:
He argues that the piece “is not in the spirit of forecasting doom, but rather to sound an alarm.” Translated: Congress, wake up. Pay attention. We’ve seen the future and it is a sweaty, pulsing cyber night terror. So please give us money (the word “money” doesn’t appear in the text, but the word “resources” appears eight times and “investment” shows up 11 times).
Susan Landau has a more considered response, which is well worth reading. She calls the essay a proposal for a moonshot (which is another way of saying “they want money”). And she has some important pushbacks on the specifics.
I don’t expect the general counsel and I will agree on what the answers to these questions should be. But I strongly concur on the importance of the questions and that the United States does not have time to waste in responding to them. And I thank him for raising these issues in so public a way.
I agree with Landau.
The Russian government has fostered competition among the three agencies, which operate independently from one another, and compete for funds. This, in turn, has resulted in each group developing and hoarding its tools, rather than sharing toolkits with their counterparts, a common sight among Chinese and North Korean state-sponsored hackers.
“Every actor or organization under the Russian APT umbrella has its own dedicated malware development teams, working for years in parallel on similar malware toolkits and frameworks,” researchers said.
“While each actor does reuse its code in different operations and between different malware families, there is no single tool, library or framework that is shared between different actors.”
Researchers say these findings suggest that Russia’s cyber-espionage apparatus is investing a lot of effort into its operational security.
“By avoiding different organizations re-using the same tools on a wide range of targets, they overcome the risk that one compromised operation will expose other active operations,” researchers said.
This is no different from the US. The NSA malware released by the Shadow Brokers looked nothing like the CIA “Vault 7” malware released by WikiLeaks.
The work was done by Check Point and Intezer Labs. They have a website with an interactive map.
- 22 Vendors
- 1,294 Products
- 4,956 Firmware versions
- 3,333,411 Binaries analyzed
- Date range of data: 2003-03-24 to 2019-01-24 (varies by vendor, most up to 2018 releases)
This dataset contains products such as home routers, enterprise equipment, smart cameras, security devices, and more. It represents a wide range of devices found in home, enterprise, or government deployments.
Vendors are Asus, Belkin, DLink, Linksys, Moxa, Tenda, Trendnet, and Ubiquiti.
CyberITL’s methodology is not source code analysis. They look at the actual firmware. And they don’t look for vulnerabilities; they look for secure coding practices that indicate that the company is taking security seriously, and whose lack pretty much guarantees that there will be vulnerabilities. These include address space layout randomization and stack guards.
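As a concrete, if much simplified, example of this kind of binary-level checking: whether a Linux binary was built as a position-independent executable (a prerequisite for effective ASLR) can be read straight out of its ELF header. The sketch below parses only the 16-bit e_type field; CITL's real methodology examines many more hardening features (stack guards, RELRO, and so on) across whole firmware images:

```python
# A much-simplified version of one CITL-style check: whether an ELF
# binary is a position-independent executable (e_type == ET_DYN), a
# prerequisite for effective ASLR. CITL's real analysis inspects many
# more features; this reads only the 16-bit e_type field at offset 16.
import struct

ET_EXEC, ET_DYN = 2, 3

def elf_type(header: bytes):
    """Classify a raw ELF header; returns None if it isn't an ELF file."""
    if header[:4] != b"\x7fELF":
        return None
    fmt = "<H" if header[5] == 1 else ">H"   # EI_DATA: 1 = little-endian
    (e_type,) = struct.unpack_from(fmt, header, 16)
    if e_type == ET_DYN:
        return "PIE/shared"
    if e_type == ET_EXEC:
        return "fixed-position"
    return "other"

# Fabricated little-endian 64-bit ELF header with e_type = ET_DYN:
hdr = b"\x7fELF\x02\x01\x01" + b"\x00" * 9 + struct.pack("<H", ET_DYN)
print(elf_type(hdr))   # PIE/shared
```

In practice you would run a check like this over every extracted binary in a firmware image, reading the first 18 bytes of each file, which is how a single study can cover millions of binaries.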
A summary of their results.
CITL identified a number of important takeaways from this study:
- On average, updates were more likely to remove hardening features than add them.
- Within our 15 year data set, there have been no positive trends from any one vendor.
- MIPS is both the most common CPU architecture and least hardened on average.
- There are a large number of duplicate binaries across multiple vendors, indicating a common build system or toolchain.
Their website contains the raw data.
EDITED TO ADD (10/11): Slashdot thread.
[2019.10.04] In 1999, I invented the Solitaire encryption algorithm, designed to manually encrypt data using a deck of cards. It was written into the plot of Neal Stephenson’s novel Cryptonomicon, and I even wrote an afterword to the book describing the cipher.
I don’t talk about it much, mostly because I made a dumb mistake that resulted in the algorithm not being reversible. Still, for the short message lengths you’re likely to use a manual cipher for, it remains secure and will likely stay that way.
Here’s some new cryptanalysis:
Abstract: The Solitaire cipher was designed by Bruce Schneier as a plot point in the novel Cryptonomicon by Neal Stephenson. The cipher is intended to fit the archetype of a modern stream cipher whilst being implementable by hand using a standard deck of cards with two jokers. We find a model for repetitions in the keystream in the stream cipher Solitaire that accounts for the large majority of the repetition bias. Other phenomena merit further investigation. We have proposed modifications to the cipher that would reduce the repetition bias, but at the cost of increasing the complexity of the cipher (probably beyond the goal of allowing manual implementation). We have argued that the state update function is unlikely to lead to cycles significantly shorter than those of a random bijection.
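For readers who want to experiment with the cipher themselves, here is a minimal Python sketch of Solitaire as originally specified: cards are numbered 1 to 52, the two jokers are 53 and 54, and both jokers count as 53 for all arithmetic. It reproduces the published null-key test vector.

```python
JOKER_A, JOKER_B = 53, 54

def value(card):
    """Both jokers count as 53; everything else is its own number."""
    return 53 if card >= 53 else card

def move_joker(deck, joker, n):
    """Move a joker down n positions, wrapping past the bottom to just
    below the top card (a joker never becomes the top card)."""
    i = deck.index(joker)
    deck.pop(i)
    j = i + n
    if j > len(deck):
        j -= len(deck)
    deck.insert(j, joker)

def solitaire_keystream(deck):
    """Yield keystream numbers (1-52); jokers produce no output."""
    deck = list(deck)
    while True:
        move_joker(deck, JOKER_A, 1)
        move_joker(deck, JOKER_B, 2)
        i, j = sorted((deck.index(JOKER_A), deck.index(JOKER_B)))
        deck = deck[j + 1:] + deck[i:j + 1] + deck[:i]   # triple cut
        v = value(deck[-1])
        deck = deck[v:-1] + deck[:v] + deck[-1:]         # count cut
        out = deck[value(deck[0])]                       # output card
        if out < 53:
            yield out

def encrypt(plaintext, deck):
    ks = solitaire_keystream(deck)
    return "".join(chr(65 + (ord(ch) - 65 + next(ks)) % 26)
                   for ch in plaintext)
```

With an unkeyed deck (1 through 54 in order), `encrypt("AAAAAAAAAA", list(range(1, 55)))` produces "EXKYIZSGEH", matching the first sample output in the published test vectors. Keying the deck, and the repetition bias the paper analyzes, are left out of this sketch.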
[2019.10.07] Ed Snowden has published a book of his memoirs: Permanent Record. I have not read it yet, but I want to point you all towards two pieces of writing about the book. The first is an excellent review of the book and Snowden in general by SF writer and essayist Jonathan Lethem, who helped make a short film about Snowden in 2014. The second is an essay looking back at the Snowden revelations and what they mean. Both are worth reading.
I wanted to learn how Checkm8 will shape the iPhone experience — particularly as it relates to security — so I spoke at length with axi0mX on Friday. Thomas Reed, director of Mac offerings at security firm Malwarebytes, joined me. The takeaways from the wide-ranging interview are:
- Checkm8 requires physical access to the phone. It can’t be remotely executed, even if combined with other exploits.
- The exploit allows only tethered jailbreaks, meaning it lacks persistence. The exploit must be run each time an iDevice boots.
- Checkm8 doesn’t bypass the protections offered by the Secure Enclave and Touch ID.
- All of the above means people will be able to use Checkm8 to install malware only under very limited circumstances. The above also means that Checkm8 is unlikely to make it easier for people who find, steal or confiscate a vulnerable iPhone, but don’t have the unlock PIN, to access the data stored on it.
- Checkm8 is going to benefit researchers, hobbyists, and hackers by providing a way not seen in almost a decade to access the lowest levels of iDevices.
“The main people who are likely to benefit from this are security researchers, who are using their own phone in controlled conditions. This process allows them to gain more control over the phone and so improves visibility into research on iOS or other apps on the phone,” Wood says. “For normal users, this is unlikely to have any effect, there are too many extra hurdles currently in place that they would have to get over to do anything significant.”
If a regular person with no prior knowledge of jailbreaking wanted to use this exploit to jailbreak their iPhone, they would find it extremely difficult, simply because Checkm8 just gives you access to the exploit, but not a jailbreak in itself. It’s also a ‘tethered exploit’, meaning that the jailbreak can only be triggered when connected to a computer via USB and is undone once the device restarts.
[2019.10.08] Two speakers were censored at the Australian Information Security Association’s annual conference this week in Melbourne. Thomas Drake, former NSA employee and whistleblower, was scheduled to give a talk on the golden age of surveillance, both government and corporate. Suelette Dreyfus, lecturer at the University of Melbourne, was scheduled to give a talk on her work — funded by the EU — on anonymous whistleblowing technologies like SecureDrop and how they reduce corruption in countries where that is a problem.
Both were put on the program months ago. But just before the event, the Australian government’s ACSC (the Australian Cyber Security Centre) demanded they both be removed from the program.
It’s really kind of stupid. Australia has been benefiting a lot from whistleblowers in recent years — exposing corruption and bad behavior on the part of the government — and the government doesn’t like it. It’s cracking down on the whistleblowers and reporters who write their stories. My guess is that someone high up in ACSC saw the word “whistleblower” in the descriptions of those two speakers and talks and panicked.
You can read details of their talks, including abstracts and slides, here. Of course, now everyone is writing about the story. The two censored speakers spent a lot of the day yesterday on the phone with reporters, and they have a bunch of TV and radio interviews today.
I am at this conference, speaking on Wednesday morning (today in Australia, as I write this). ACSC used to have its own government cybersecurity conference. This is the first year it combined with AISA. I hope it’s the last. And that AISA invites the two speakers back next year to give their censored talks.
German investigators said Friday they have shut down a data processing center installed in a former NATO bunker that hosted sites dealing in drugs and other illegal activities. Seven people were arrested.
Thirteen people aged 20 to 59 are under investigation in all, including three German and seven Dutch citizens, Brauer said.
Authorities arrested seven of them, citing the danger of flight and collusion. They are suspected of membership in a criminal organization because of a tax offense, as well as being accessories to hundreds of thousands of offenses involving drugs, counterfeit money and forged documents, and accessories to the distribution of child pornography. Authorities didn’t name any of the suspects.
The data center was set up as what investigators described as a “bulletproof hoster,” meant to conceal illicit activities from authorities’ eyes.
Investigators say the platforms it hosted included “Cannabis Road,” a drug-dealing portal; the “Wall Street Market,” which was one of the world’s largest online criminal marketplaces for drugs, hacking tools and financial-theft wares until it was taken down earlier this year; and sites such as “Orange Chemicals” that dealt in synthetic drugs. A botnet attack on German telecommunications company Deutsche Telekom in late 2016 that knocked out about 1 million customers’ routers also appears to have come from the data center in Traben-Trarbach, Brauer said.
EDITED TO ADD (10/9): This is a better article.
But then I start to see things that seem so obvious, but I wonder whether they aren’t just paranoia after hours and hours of digging into the mystery. Like the fact that he starts wearing a hat that has a strange bulge around the brim — one that vanishes after the game when he’s doing an interview in the booth. Is it a bone-conducting headset, as some online have suggested, sending him messages directly to his inner ear by vibrating on his skull? Of course it is! How could it be anything else? It’s so obvious! Or the fact that he keeps his keys in the same place on the table all the time. Could they contain a secret camera that reads electronic sensors on the cards? I can’t see any other possibility! It is all starting to make sense.
In the end, though, none of this additional evidence is even necessary. The gaggle of online Jim Garrisons have simply picked up more momentum than is required and they can’t stop themselves. The fact is, the mystery was solved a long time ago. It’s just like De Niro’s Ace Rothstein says in Casino when the yokel slot attendant gets hit for three jackpots in a row and tells his boss there was no way for him to know he was being scammed. “Yes there is,” Ace replies. “An infallible way. They won.”
According to one poster on TwoPlusTwo, in 69 sessions on Stones Live, Postle has won in 62 of them, for a profit of over $250,000 in 277 hours of play. Given that he plays such a large number of hands, and plays such an erratic and, by his own admission, high-variance style, one would expect to see more, well, variance. His results just aren’t possible even for the best players in the world, which, if he isn’t cheating, he definitely is among. Add to this the fact that it has been alleged that Postle doesn’t play in other nonstreamed live games at Stones, or anywhere else in the Sacramento area, and hasn’t been known to play in any sizable no-limit games anywhere in a long time, and that he always picks up his chips and leaves as soon as the livestream ends.
I don’t really need any more evidence than that. If you know poker players, you know that this is the most damning evidence against him. Poker players like to play poker. If any of the poker players I know had the win rate that Mike Postle has, you’d have to pry them up from the table with a crowbar. The guy is making nearly a thousand dollars an hour! He should be wearing adult diapers so he doesn’t have to take a bathroom break and cost himself $250.
This isn’t the first time someone has been accused of cheating because they are simply playing significantly better than computer simulations predict that even the best player would play.
What distinguishes location-based marketing hotspot providers like Zenreach and Euclid is that the personal information you enter in the captive portal — like your email address, phone number, or social media profile — can be linked to your laptop or smartphone’s Media Access Control (MAC) address. That’s the unique alphanumeric ID that devices broadcast when Wi-Fi is switched on.
MAC addresses alone don’t contain identifying information besides the make of a device, such as whether a smartphone is an iPhone or a Samsung Galaxy. But as long as a device’s MAC address is linked to someone’s profile, and the device’s Wi-Fi is turned on, the movements of its owner can be followed by any hotspot from the same provider.
The defense is to turn Wi-Fi off on your phone when you’re not using it.
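The linkage itself is mundane database plumbing. This toy sketch (invented class and method names, not any vendor's actual API) shows how a single captive-portal signup is enough to turn every later sighting of that MAC, at any of the provider's hotspots, into a location trail:

```python
from dataclasses import dataclass, field

@dataclass
class HotspotNetwork:
    """Toy model of a multi-venue hotspot provider's backend."""
    profiles: dict = field(default_factory=dict)   # MAC -> email
    sightings: list = field(default_factory=list)  # (MAC, venue, time)

    def captive_portal_signup(self, mac, email):
        # One login at any venue permanently links the MAC to a person.
        self.profiles[mac] = email

    def observe(self, mac, venue, ts):
        # Hotspots can log every device that broadcasts its MAC nearby,
        # whether or not it ever joins the network.
        self.sightings.append((mac, venue, ts))

    def trail(self, email):
        # Join sightings against the profile table to get movements.
        macs = {m for m, e in self.profiles.items() if e == email}
        return [(venue, ts) for m, venue, ts in self.sightings
                if m in macs]
```

Modern phones randomize their MAC addresses while scanning, which blunts this somewhat, but a device broadcasting its real MAC with Wi-Fi on is trackable exactly as above; hence the advice to turn Wi-Fi off.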
EDITED TO ADD: Note that the article is from 2018. Not that I think anything is different today….
[2019.10.10] Kaspersky has a detailed blog post about a new piece of sophisticated malware that it’s calling Reductor. The malware is able to compromise TLS traffic by substituting a hacked TLS engine on the fly, “marking” infected TLS handshakes by compromising the underlying random-number generator, and adding new digital certificates. The result is that the attacker can identify, intercept, and decrypt TLS traffic from the infected computer.
The Kaspersky Attribution Engine shows strong code similarities between this family and the COMPfun Trojan. Moreover, further research showed that the original COMpfun Trojan most probably is used as a downloader in one of the distribution schemes. Based on these similarities, we’re quite sure the new malware was developed by the COMPfun authors.
The COMpfun malware was initially documented by G-DATA in 2014. Although G-DATA didn’t identify which actor was using this malware, Kaspersky tentatively linked it to the Turla APT, based on the victimology. Our telemetry indicates that the current campaign using Reductor started at the end of April 2019 and remained active at the time of writing (August 2019). We identified targets in Russia and Belarus.
Turla has in the past shown many innovative ways to accomplish its goals, such as using hijacked satellite infrastructure. This time, if we’re right that Turla is the actor behind this new wave of attacks, then with Reductor it has implemented a very interesting way to mark a host’s encrypted TLS traffic by patching the browser without parsing network packets. The victimology for this new campaign aligns with previous Turla interests.
We didn’t observe any MitM functionality in the analyzed malware samples. However, Reductor is able to install digital certificates and mark the targets’ TLS traffic. It uses infected installers for initial infection through HTTP downloads from warez websites. The fact that the original files on these sites are not infected also points to evidence of subsequent traffic manipulation.
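Conceptually, marking TLS traffic this way works because the 32-byte client random in every TLS handshake is supposed to be unpredictable, so nobody inspects it. A patched PRNG can smuggle a stable host identifier into it that only someone holding the key can recognize. The encoding below is purely illustrative (Reductor's actual scheme is different and is built around victim hardware/software IDs):

```python
import hmac, hashlib, os

TAG_LEN = 4  # bytes of the 32-byte client random sacrificed for the mark

def marked_client_random(host_id: bytes, key: bytes) -> bytes:
    """Looks random to anyone without the key, but tags the host."""
    tag = hmac.new(key, host_id, hashlib.sha256).digest()[:TAG_LEN]
    return os.urandom(32 - TAG_LEN) + tag

def is_marked(client_random: bytes, host_id: bytes, key: bytes) -> bool:
    """What an operator could run on passively collected handshakes."""
    tag = hmac.new(key, host_id, hashlib.sha256).digest()[:TAG_LEN]
    return client_random[-TAG_LEN:] == tag
```

An operator who can see handshakes in transit can then pick out infected hosts without touching the connection itself, which is consistent with Kaspersky's observation that the samples contained no MitM functionality.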
The group’s lax operational security includes using the name of a military group with ties to the SSS to register a domain used in its attack infrastructure; installing Kaspersky’s antivirus software on machines it uses to write new malware, allowing Kaspersky to detect and grab malicious code still in development before it’s deployed; and embedding a screenshot of one of its developer’s machines in a test file, exposing a major attack platform as it was in development. The group’s mistakes led Kaspersky to discover four zero-day exploits SandCat had purchased from third-party brokers to target victim machines, effectively rendering those exploits ineffective. And the mistakes not only allowed Kaspersky to track the Uzbek spy agency’s activity but also the activity of other nation-state groups in Saudi Arabia and the United Arab Emirates who were using some of the same exploits SandCat was using.
There is nothing in this book that is not available for free on my website; but if you’d like these essays in an easy-to-carry paperback book format, you can order a signed copy here. External vendor links, including for ebook versions, here.
We know from Shor’s Algorithm that both factoring and discrete logs are easy to solve on a large, working quantum computer. Both of those are currently beyond our technological abilities. We barely have quantum computers with 50 to 100 qubits. Extending this requires advances not only in the number of qubits we can work with, but in making the system stable enough to read any answers. You’ll hear this called “error rate” or “coherence” — this paper talks about “noise.”
Advances are hard. At this point, we don’t know if they’re “send a man to the moon” hard or “faster-than-light travel” hard. If I were guessing, I would say they’re the former, but still harder than we can accomplish with our current understanding of physics and technology.
I write about all this generally, and in detail, here. (Short summary: Our work on quantum-resistant algorithms is outpacing our work on quantum computers, so we’ll be fine in the short run. But future theoretical work on quantum computing could easily change what “quantum resistant” means, so it’s possible that public-key cryptography will simply not be possible in the long run. That’s not terrible, though; we have a lot of good scalable secret-key systems that do much the same things.)
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books — including his latest, Click Here to Kill Everybody — as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.
Copyright © 2019 by Bruce Schneier.