January 15, 2019
by Bruce Schneier
CTO, IBM Resilient
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram's web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- New Shamoon Variant
- Teaching Cybersecurity Policy
- Congressional Report on the 2017 Equifax Data Breach
- Fraudulent Tactics on Amazon Marketplace
- Drone Denial-of-Service Attack against Gatwick Airport
- MD5 and SHA-1 Still Used in 2018
- Glitter Bomb against Package Thieves
- Human Rights by Design
- Stealing Nativity Displays
- Massive Ad Fraud Scheme Relied on BGP Hijacking
- Click Here to Kill Everybody Available as an Audiobook
- China's APT10
- Long-Range Familial Searching Forensics
- Podcast Interview with Eva Galperin
- New Attack Against Electrum Bitcoin Wallets
- Machine Learning to Detect Software Vulnerabilities
- EU Offering Bug Bounties on Critical Open-Source Software
- Security Vulnerabilities in Cell Phone Systems
- Using a Fake Hand to Defeat Hand-Vein Biometrics
- Why Internet Security Is So Bad
- Upcoming Speaking Engagements
Shamoon is the Iranian malware that was targeted against the Saudi Arabian oil company, Saudi Aramco, in 2012 and 2016. We have no idea if this new variant is also Iranian in origin, or if it is someone else entirely using the old Iranian code base.
[2018.12.18] Peter Swire proposes a pedagogic framework for teaching cybersecurity policy. Specifically, he makes real the old joke about adding levels to the OSI networking stack: an organizational layer, a government layer, and an international layer.
[2018.12.19] The US House of Representatives Committee on Oversight and Government Reform has just released a comprehensive report on the 2017 Equifax hack. It's a great piece of writing, with a detailed timeline, root cause analysis, and lessons learned. Lance Spitzner also commented on this.
Here is my testimony before the House Subcommittee on Digital Commerce and Consumer Protection last November.
[2018.12.20] Fascinating article about the many ways Amazon Marketplace sellers sabotage each other and defraud customers. The opening example: framing a seller for false advertising by buying fake five-star reviews for their products.
Defacement: Sellers armed with the accounts of Amazon distributors (sometimes legitimately, sometimes through the black market) can make all manner of changes to a rival's listings, from changing images to altering text to reclassifying a product into an irrelevant category, like "sex toys."
Phony fires: Sellers will buy their rival's product, light it on fire, and post a picture to the reviews, claiming it exploded. Amazon is quick to suspend sellers for safety claims.
Over the following days, Harris came to realize that someone had been targeting him for almost a year, preparing an intricate trap. While he had trademarked his watch and registered his brand, Dead End Survival, with Amazon, Harris hadn't trademarked the name of his Amazon seller account, SharpSurvival. So the interloper did just that, submitting to the patent office as evidence that he owned the goods a photo taken from Harris' Amazon listings, including one of Harris' own hands lighting a fire using the clasp of his survival watch. The hijacker then took that trademark to Amazon and registered it, giving him the power to kick Harris off his own listings and commandeer his name.
There are more subtle methods of sabotage as well. Sellers will sometimes buy Google ads for their competitors for unrelated products -- say, a dog food ad linking to a shampoo listing -- so that Amazon's algorithm sees the rate of clicks converting to sales drop and automatically demotes their product.
What's also interesting is how Amazon is basically its own government -- with its own rules that its suppliers have no choice but to follow. And, of course, increasingly there is no option but to sell your stuff on Amazon.
Chris Woodroofe, Gatwick's chief operating officer, said on Thursday afternoon there had been another drone sighting which meant it was impossible to say when the airport would reopen.
He told BBC News: "There are 110,000 passengers due to fly today, and the vast majority of those will see cancellations and disruption. We have had within the last hour another drone sighting so at this stage we are not open and I cannot tell you what time we will open.
"It was on the airport, seen by the police and corroborated. So having seen that drone that close to the runway it was unsafe to reopen."
The economics of this kind of thing isn't in our favor. A drone is cheap. Closing an airport for a day is very expensive.
I don't think we're going to solve this with jammers, or with GPS-enabled drones that won't fly over restricted areas. I've seen some technologies that will safely disable drones in flight, but I'm not optimistic about those in the near term. The best defense is probably punitive penalties for anyone doing something like this -- enough to discourage others.
There are a lot of similar security situations, in which the cost to attack is vastly cheaper than 1) the damage caused by the attack, and 2) the cost to defend. I have long believed that this sort of thing represents an existential threat to our society.
EDITED TO ADD (12/23): The airport has deployed some anti-drone technology and reopened.
EDITED TO ADD (1/2): Maybe there was never a drone.
[2018.12.24] Last week, the Scientific Working Group on Digital Evidence published a draft document -- "SWGDE Position on the Use of MD5 and SHA1 Hash Algorithms in Digital and Multimedia Forensics" -- where it accepts the use of MD5 and SHA-1 in digital forensics applications:
While SWGDE promotes the adoption of SHA2 and SHA3 by vendors and practitioners, the MD5 and SHA1 algorithms remain acceptable for integrity verification and file identification applications in digital forensics. Because of known limitations of the MD5 and SHA1 algorithms, only SHA2 and SHA3 are appropriate for digital signatures and other security applications.
This is technically correct: the current state of cryptanalysis against MD5 and SHA-1 allows for collisions, but not for pre-images. Still, it's really bad form to accept these algorithms for any purpose. I'm sure the group is dealing with legacy applications, but I would like it to really push those application vendors to update their hash functions.
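The SWGDE distinction maps directly onto how these functions are used in practice. Migrating a forensic tool from MD5 to SHA-256 is typically a one-line change with Python's standard hashlib; here is a minimal sketch (the function name and chunk size are my own, for illustration):

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Compute a hex digest of a file, streaming in chunks so large
    evidence images don't have to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# MD5 collisions are practical, so two distinct files can be crafted to
# share an MD5 digest. SHA-256 has no known collisions, so a matching
# digest is strong evidence the file is unchanged.
evidence = b"disk image contents"
print(hashlib.md5(evidence).hexdigest())     # legacy: identification only
print(hashlib.sha256(evidence).hexdigest())  # preferred for integrity
```

The collision/pre-image distinction is why SWGDE's position holds for now: matching an existing file's MD5 digest requires a pre-image attack, which remains infeasible; what's broken is creating two files with the same digest in advance.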
[2018.12.25] Stealing packages from unattended porches is a rapidly rising crime, as more of us order more things by mail. One person hid a glitter bomb and a video recorder in a package, posting the results when thieves opened the box. At least, that's what might have happened. At least some of the video was faked, which puts the whole thing into question.
That's okay, though. Santa is faked, too. Happy whatever you're celebrating.
[2018.12.26] Good essay: "Advancing Human-Rights-By-Design In The Dual-Use Technology Industry," by Jonathon Penney, Sarah McKune, Lex Gill, and Ronald J. Deibert:
But businesses can do far more than these basic measures. They could adopt a "human-rights-by-design" principle whereby they commit to designing tools, technologies, and services to respect human rights by default, rather than permit abuse or exploitation as part of their business model. The "privacy-by-design" concept has gained currency today thanks in part to the European Union General Data Protection Regulation (GDPR), which requires it. The overarching principle is that companies must design products and services with the default assumption that they protect privacy, data, and information of data subjects. A similar human-rights-by-design paradigm, for example, would prevent filtering companies from designing their technology with features that enable large-scale, indiscriminate, or inherently disproportionate censorship capabilities—like the Netsweeper feature that allows an ISP to block entire country top level domains (TLDs). DPI devices and systems could be configured to protect against the ability of operators to inject spyware in network traffic or redirect users to malicious code rather than facilitate it. And algorithms incorporated into the design of communications and storage platforms could account for human rights considerations in addition to business objectives. Companies could also join multi-stakeholder efforts like the Global Network Initiative (GNI), through which technology companies (including Google, Microsoft, and Yahoo) have taken the first step toward principles like transparency, privacy, and freedom of expression, as well as to self-reporting requirements and independent compliance assessments.
Members of 3ve (pronounced "eve") used their large reservoir of trusted IP addresses to conceal a fraud that otherwise would have been easy for advertisers to detect. The scheme employed a thousand servers hosted inside data centers to impersonate real human beings who purportedly "viewed" ads that were hosted on bogus pages run by the scammers themselves—who then received a check from ad networks for these billions of fake ad impressions. Normally, a scam of this magnitude coming from such a small pool of server-hosted bots would have stuck out to defrauded advertisers. To camouflage the scam, 3ve operators funneled the servers' fraudulent page requests through millions of compromised IP addresses.
About one million of those IP addresses belonged to computers, primarily based in the US and the UK, that attackers had infected with botnet software strains known as Boaxxe and Kovter. But at the scale employed by 3ve, not even that number of IP addresses was enough. And that's where the BGP hijacking came in. The hijacking gave 3ve a nearly limitless supply of high-value IP addresses. Combined with the botnets, the ruse made it seem like millions of real people from some of the most affluent parts of the world were viewing the ads.
Lots of details in the article.
An aphorism I often use in my talks is "expertise flows downhill: today's top-secret NSA programs become tomorrow's PhD theses and the next day's hacking tools." This is an example of that. BGP hacking -- known as "traffic shaping" inside the NSA -- has long been a tool of national intelligence agencies. Now it is being used by cybercriminals.
EDITED TO ADD (1/2): Classified NSA presentation on "network shaping." I don't know if there is a difference inside the NSA between the two terms.
EDITED TO ADD (1/5): Another article on the same subject.
How the attack works:
- Attacker added tens of malicious servers to the Electrum wallet network.
- Users of legitimate Electrum wallets initiate a Bitcoin transaction.
- If the transaction reaches one of the malicious servers, these servers reply with an error message that urges users to download a wallet app update from a malicious website (GitHub repo).
- User clicks the link and downloads the malicious update.
- When the user opens the malicious Electrum wallet, the app asks the user for a two-factor authentication (2FA) code. This is a red flag, as these 2FA codes are only requested before sending funds, and not at wallet startup.
- The malicious Electrum wallet uses the 2FA code to steal the user's funds and transfer them to the attacker's Bitcoin addresses.
The problem here is that Electrum servers are allowed to trigger popups with custom text inside users' wallets.
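A client-side mitigation follows from that root cause: treat server-supplied error text as untrusted data. This is a hypothetical sketch, not Electrum's actual fix -- the function names and patterns are mine -- showing the idea of rendering such messages as plain text and refusing to display anything that looks like a download link:

```python
import re

# Hypothetical defense: a wallet client that never renders server error
# strings verbatim. Markup is stripped and URLs are redacted, so a
# malicious server can't turn an error popup into a phishing page.
URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)
TAG_RE = re.compile(r"<[^>]+>")

def sanitize_server_message(msg: str, max_len: int = 200) -> str:
    msg = TAG_RE.sub("", msg)                # drop any HTML markup
    msg = URL_RE.sub("[link removed]", msg)  # never show clickable URLs
    return msg[:max_len]                     # bound the popup size

attack = ('Update required: '
          '<a href="https://github.com/evil/electrum">download here</a>')
print(sanitize_server_message(attack))
```

The broader design point is the same one the attack illustrates: any protocol that lets remote peers compose rich UI inside a client is handing them a phishing channel.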
[2019.01.08] No one doubts that artificial intelligence (AI) and machine learning (ML) will transform cybersecurity. We just don't know how, or when. While the literature generally focuses on the different uses of AI by attackers and defenders—and the resultant arms race between the two—I want to talk about software vulnerabilities.
All software contains bugs. The reason is basically economic: The market doesn't want to pay for quality software. With a few exceptions, such as the space shuttle, the market prioritizes fast and cheap over good. The result is that any large modern software package contains hundreds or thousands of bugs.
Some percentage of bugs are also vulnerabilities, and a percentage of those are exploitable vulnerabilities, meaning an attacker who knows about them can attack the underlying system in some way. And some percentage of those are discovered and used. This is why your computer and smartphone software is constantly being patched; software vendors are fixing bugs that are also vulnerabilities that have been discovered and are being used.
Everything would be better if software vendors found and fixed all bugs during the design and development process, but, as I said, the market doesn't reward that kind of delay and expense. AI, and machine learning in particular, has the potential to forever change this trade-off.
The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic -- and research is continuing. There's every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.
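The framing above can be made concrete with a toy sketch: learn token statistics from labeled code snippets, then score new code by how much it resembles the vulnerable examples. This is a deliberately naive illustration -- real vulnerability finders use far richer program representations than bag-of-tokens, and the training snippets below are invented:

```python
import math
import re
from collections import Counter

# Invented training data: classic unsafe C idioms (label 1) versus their
# bounds-checked counterparts (label 0).
TRAIN = [
    ('strcpy(buf, user_input);', 1),
    ('gets(line);', 1),
    ('sprintf(out, fmt, name);', 1),
    ('system(user_cmd);', 1),
    ('strncpy(buf, src, sizeof(buf) - 1);', 0),
    ('fgets(line, sizeof(line), stdin);', 0),
    ('snprintf(out, sizeof(out), "%s", name);', 0),
    ('execv(path, argv);', 0),
]

def tokens(code):
    return re.findall(r"[A-Za-z_]\w*", code)

def train(data):
    counts = {0: Counter(), 1: Counter()}
    for code, label in data:
        counts[label].update(tokens(code))
    return counts

def score(counts, code):
    """Log-odds that a snippet is vulnerable: naive Bayes over tokens
    with add-one smoothing. Positive means 'looks like the unsafe class'."""
    vocab = set(counts[0]) | set(counts[1])
    log_odds = 0.0
    for t in tokens(code):
        p1 = (counts[1][t] + 1) / (sum(counts[1].values()) + len(vocab))
        p0 = (counts[0][t] + 1) / (sum(counts[0].values()) + len(vocab))
        log_odds += math.log(p1 / p0)
    return log_odds

model = train(TRAIN)
print(score(model, 'strcpy(dst, argv[1]);'))                     # > 0: flagged
print(score(model, 'snprintf(dst, sizeof(dst), "%s", argv[1]);'))  # < 0: looks safe
```

The gap between this toy and a useful tool -- semantics, data flow, context across functions -- is exactly the research problem the academic literature is chipping away at.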
Finding vulnerabilities can benefit both attackers and defenders, but it's not a fair fight. When an attacker's ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender's ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.
But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.
Fast-forward a decade or so into the future. We might say to each other, "Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years." Not only is this future possible, but I would bet on it.
Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today's Internet of Things systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.
But if we look far enough into the horizon, we can see a future where software vulnerabilities are a thing of the past. Then we'll just have to worry about whatever new and more advanced attack techniques those AI systems come up with.
This essay previously appeared on SecurityIntelligence.com.
So far, industry and policymakers have largely dragged their feet when it comes to blocking cell-site simulators and SS7 attacks. Senator Ron Wyden, one of the few lawmakers vocal about this issue, sent a letter in August encouraging the Department of Justice to "be forthright with federal courts about the disruptive nature of cell-site simulators." No response has ever been published.
The lack of action could be because it is a big task -- there are hundreds of companies and international bodies involved in the cellular network. The other reason could be that intelligence and law enforcement agencies have a vested interest in exploiting these same vulnerabilities. But law enforcement has other effective tools that are unavailable to criminals and spies. For example, the police can work directly with phone companies, serving warrants and Title III wiretap orders. In the end, eliminating these vulnerabilities is just as valuable for law enforcement as it is for everyone else.
As it stands, there is no government agency that has the power, funding and mission to fix the problems. Large companies such as AT&T, Verizon, Google and Apple have not been public about their efforts, if any exist.
One attraction of a vein based system over, say, a more traditional fingerprint system is that it may be typically harder for an attacker to learn how a user's veins are positioned under their skin, rather than lifting a fingerprint from a held object or high quality photograph, for example.
But with that said, Krissler and Albrecht first took photos of their vein patterns. They used a converted SLR camera with the infrared filter removed; this allowed them to see the pattern of the veins under the skin.
"It's enough to take photos from a distance of five meters, and it might work to go to a press conference and take photos of them," Krissler explained. In all, the pair took over 2,500 pictures over 30 days to perfect the process and find an image that worked.
They then used that image to make a wax model of their hands which included the vein detail.
This is true, and is something I worry will change in a world of physically capable computers. Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.
[2019.01.14] This is a current list of where and when I am scheduled to speak:
- I'm speaking at A New Initiative for Poland in Warsaw, January 16-17, 2019.
- I'm speaking at the Munich Cyber Security Conference (MCSC) on February 14, 2019.
The list is maintained on this page.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books -- including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.
Copyright © 2019 by Bruce Schneier.