Friday Squid Blogging: Far Side Cartoon
Squid, of course.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Researchers have found a major encryption flaw in 100 million Samsung Galaxy phones.
From the abstract:
In this work, we expose the cryptographic design and implementation of Android’s Hardware-Backed Keystore in Samsung’s Galaxy S8, S9, S10, S20, and S21 flagship devices. We reversed-engineered and provide a detailed description of the cryptographic design and code structure, and we unveil severe design flaws. We present an IV reuse attack on AES-GCM that allows an attacker to extract hardware-protected key material, and a downgrade attack that makes even the latest Samsung devices vulnerable to the IV reuse attack. We demonstrate working key extraction attacks on the latest devices. We also show the implications of our attacks on two higher-level cryptographic protocols between the TrustZone and a remote server: we demonstrate a working FIDO2 WebAuthn login bypass and a compromise of Google’s Secure Key Import.
Here are the details:
As we discussed in Section 3, the wrapping key used to encrypt the key blobs (HDK) is derived using a salt value computed by the Keymaster TA. In v15 and v20-s9 blobs, the salt is a deterministic function that depends only on the application ID and application data (and constant strings), which the Normal World client fully controls. This means that for a given application, all key blobs will be encrypted using the same key. As the blobs are encrypted in AES-GCM mode-of-operation, the security of the resulting encryption scheme depends on its IV values never being reused.
Gadzooks. That’s a really embarrassing mistake. GCM needs a new nonce for every encryption. Samsung took a secure cipher mode and implemented it insecurely.
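The attack is worth seeing concretely. GCM encrypts by XORing the plaintext with a keystream derived from the key and the IV, so reusing an IV means two messages share the same keystream. The toy sketch below (using a SHA-256-based keystream as a stand-in for AES-CTR; the names and messages are illustrative, not Samsung’s actual code) shows why an attacker who knows one plaintext can recover the other without the key:

```python
# Sketch: why IV reuse destroys GCM's confidentiality.
# GCM's ciphertext is plaintext XOR keystream(key, IV); this toy
# SHA-256 keystream stands in for AES-CTR (illustration only).
import hashlib

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    return xor(pt, keystream(key, iv, len(pt)))

key = b"k" * 16
iv = b"same-iv-reused!!"          # the flaw: one IV for both messages
pt1 = b"attacker-known plaintext blob"   # e.g., a key blob the attacker created
pt2 = b"secret hardware-protected key"   # the victim's protected material

ct1 = encrypt(key, iv, pt1)
ct2 = encrypt(key, iv, pt2)

# Same (key, IV) => same keystream, so the key cancels out:
assert xor(ct1, ct2) == xor(pt1, pt2)

# Knowing pt1 therefore recovers pt2 with no key at all:
recovered = xor(xor(ct1, ct2), pt1)
assert recovered == pt2
```

Because the Normal World client controls the salt inputs, it can force a victim key blob and an attacker-chosen key blob to be wrapped under the same key and IV, which is exactly the situation above.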
News article.
Pangu Lab in China just published a report of a hacking operation by the Equation Group (aka the NSA). It noticed the hack in 2013, and was able to map it with Equation Group tools published by the Shadow Brokers (aka some Russian group).
…the scope of victims exceeded 287 targets in 45 countries, including Russia, Japan, Spain, Germany, Italy, etc. The attack lasted for over 10 years. Moreover, one victim in Japan is used as a jump server for further attack.
News article.
TechCrunch is reporting—but not describing in detail—a vulnerability in a series of stalkerware apps that exposes personal information of the victims. The vulnerability isn’t in the apps installed on the victims’ phones, but in the website the stalker goes to view the information the app collects. The article is worth reading, less for the description of the vulnerability and more for the shadowy string of companies behind these stalkerware apps.
Nice piece of research:
Abstract: Among the many types of malicious codes, ransomware poses a major threat. Ransomware encrypts data and demands a ransom in exchange for decryption. As data recovery is impossible if the encryption key is not obtained, some companies suffer from considerable damage, such as the payment of huge amounts of money or the loss of important data. In this paper, we analyzed Hive ransomware, which appeared in June 2021. Hive ransomware has caused immense harm, leading the FBI to issue an alert about it. To minimize the damage caused by Hive Ransomware and to help victims recover their files, we analyzed Hive Ransomware and studied recovery methods. By analyzing the encryption process of Hive ransomware, we confirmed that vulnerabilities exist by using their own encryption algorithm. We have recovered the master key for generating the file encryption key partially, to enable the decryption of data encrypted by Hive ransomware. We recovered 95% of the master key without the attacker’s RSA private key and decrypted the actual infected data. To the best of our knowledge, this is the first successful attempt at decrypting Hive ransomware. It is expected that our method can be used to reduce the damage caused by Hive ransomware.
Here’s the flaw:
The cryptographic vulnerability identified by the researchers concerns the mechanism by which the master keys are generated and stored, with the ransomware strain only encrypting select portions of the file as opposed to the entire contents using two keystreams derived from the master key.
The encryption keystream, which is created from an XOR operation of the two keystreams, is then XORed with the data in alternate blocks to generate the encrypted file. But this technique also makes it possible to guess the keystreams and restore the master key, in turn enabling the decode of encrypted files sans the attacker’s private key.
The researchers said that they were able to weaponize the flaw to devise a method to reliably recover more than 95% of the keys employed during encryption.
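The underlying weakness is the classic known-plaintext attack on a reused XOR keystream. The sketch below is a deliberate simplification of the idea (the function names, offsets, and file contents are hypothetical, not Hive’s actual scheme): if files are XORed against keystream bytes drawn from one master key, any file whose contents are guessable leaks the keystream bytes at that position, and those bytes then decrypt other files:

```python
# Simplified sketch of the known-plaintext recovery idea behind the
# Hive analysis (hypothetical model, not Hive's real algorithm).
import os

master_key = os.urandom(64)   # stand-in for Hive's master keystream material

def hive_like_encrypt(data: bytes, offset: int) -> bytes:
    # XOR data against master-key bytes starting at `offset` (simplified).
    return bytes(b ^ master_key[(offset + i) % len(master_key)]
                 for i, b in enumerate(data))

# The attacker knows (or can guess) one file's plaintext, e.g. a
# standard file-format header:
known_pt = b"%PDF-1.7 standard header bytes..."
ct1 = hive_like_encrypt(known_pt, offset=0)

# XORing ciphertext with known plaintext leaks the keystream bytes:
leaked = bytes(c ^ p for c, p in zip(ct1, known_pt))

# Those same bytes decrypt any other file encrypted at that offset:
secret = b"victim's spreadsheet contents ok"
ct2 = hive_like_encrypt(secret, offset=0)
assert bytes(c ^ k for c, k in zip(ct2, leaked)) == secret
```

A properly designed scheme derives an independent key (or at least a unique keystream) per file, so leaking one file’s keystream reveals nothing about the others; Hive’s homebrew construction violated that rule, which is what let the researchers reassemble most of the master key from guessable file content.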
Tarah Wheeler and Josephine Wolff analyze a recent court decision that the NotPetya attacks are not considered an act of war under the wording of Merck’s insurance policy, and that the insurers must pay the $1B+ claim. Wheeler and Wolff argue that the judge “did the right thing for the wrong reasons.”
Here are six beautiful squid videos. I know nothing more about them.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
EDITED TO ADD (2/25): This post accidentally went live on Wednesday, two days early, and people started adding their comments then. I have changed the posting date to the correct one, which means that the comments existing before that time will appear to have been made before the post. I apologize for the confusion.
A good lesson in reading the fine print:
Cignpost Diagnostics, which trades as ExpressTest and offers £35 tests for holidaymakers, said it holds the right to analyse samples from swabs to “learn more about human health”—and sell information on to third parties.
Individuals are required to give informed consent for their sensitive medical data to be used, but customers’ consent for their DNA to be sold was buried in Cignpost’s online documents.
Of course, no one ever reads the fine print.
EDITED TO ADD (3/12): The original story.
The story is an old one, but the tech gives it a bunch of new twists:
Gemma Brett, a 27-year-old designer from west London, had only been working at Madbird for two weeks when she spotted something strange. Curious about what her commute would be like when the pandemic was over, she searched for the company’s office address. The result looked nothing like the videos on Madbird’s website of a sleek workspace buzzing with creative-types. Instead, Google Street View showed an upmarket block of flats in London’s Kensington.
[…]
Using online reverse image searches they dug deeper. They found that almost all the work Madbird claimed as its own had been stolen from elsewhere on the internet—and that some of the colleagues they’d been messaging online didn’t exist.
[…]
At least six of the most senior employees profiled by Madbird were fake. Their identities stitched together using photos stolen from random corners of the internet and made-up names. They included Madbird’s co-founder, Dave Stanfield—despite him having a LinkedIn profile and Ali referring to him constantly. Some of the duped staff had even received emails from him.
Read the whole sad story. What’s amazing is how shallow all the fakery was, and how quickly it all unraveled once people started digging. But until there’s suspicion enough to dig, we take all of these things at face value. And in COVID times, there’s no face-to-face anything.
A Berlin-based company has developed an AirTag clone that bypasses Apple’s anti-stalker security systems. Source code for these AirTag clones is available online.
So now we have several problems with the system. Apple’s anti-stalker security only works with iPhones. (Apple wrote an Android app that can detect AirTags, but how many people are going to download it?) And now non-AirTags can piggyback on Apple’s system without triggering the alarms.
Apple didn’t think this through nearly as well as it claims to have. I think the general problem is one that I have written about before: designers just don’t have intimate threats in mind when building these systems.