The new 802.11bf standard will turn Wi-Fi devices into object sensors:
In three years or so, the Wi-Fi specification is scheduled to get an upgrade that will turn wireless devices into sensors capable of gathering data about the people and objects bathed in their signals.
“When 802.11bf will be finalized and introduced as an IEEE standard in September 2024, Wi-Fi will cease to be a communication-only standard and will legitimately become a full-fledged sensing paradigm,” explains Francesco Restuccia, assistant professor of electrical and computer engineering at Northeastern University, in a paper summarizing the state of the Wi-Fi Sensing project (SENS) currently being developed by the Institute of Electrical and Electronics Engineers (IEEE).
SENS is envisioned as a way for devices capable of sending and receiving wireless data to use Wi-Fi signal interference differences to measure the range, velocity, direction, motion, presence, and proximity of people and objects.
More detail in the article. Security and privacy controls are still to be worked out, which means that there probably won’t be any.
Posted on April 5, 2021 at 6:15 AM
It’s not yet very accurate or practical, but under ideal conditions it is possible to figure out the shape of a house key by listening to it being used.
Listen to Your Key: Towards Acoustics-based Physical Key Inference
Abstract: Physical locks are one of the most prevalent mechanisms for securing objects such as doors. While many of these locks are vulnerable to lock-picking, they are still widely used as lock-picking requires specific training with tailored instruments, and easily raises suspicion. In this paper, we propose SpiKey, a novel attack that significantly lowers the bar for an attacker as opposed to the lock-picking attack, by requiring only the use of a smartphone microphone to infer the shape of victim’s key, namely bittings (or cut depths) which form the secret of a key. When a victim inserts his/her key into the lock, the emitted sound is captured by the attacker’s microphone. SpiKey leverages the time difference between audible clicks to ultimately infer the bitting information, i.e., shape of the physical key. As a proof-of-concept, we provide a simulation, based on real-world recordings, and demonstrate a significant reduction in search space from a pool of more than 330 thousand keys to three candidate keys for the most frequent case.
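The inference step can be illustrated with a toy model: assume a constant insertion speed, so the time between clicks is proportional to the distance between adjacent ridges on the key, and then filter the keyspace for bittings consistent with the observed intervals. The geometry and all constants below are invented for illustration; they are not real key specifications or the paper's actual model.

```python
from itertools import product

PITCH = 4.0        # hypothetical spacing between cut centers (mm)
SLOPE = 0.5        # hypothetical ridge shift per unit of depth difference (mm)
SPEED = 40.0       # assumed constant insertion speed (mm/s)
DEPTHS = range(4)  # toy depth alphabet (real keys use ~10 depths)
NCUTS = 4          # toy key length (real keys use 5 or 6 cuts)

def ridge_positions(bittings):
    """Horizontal position of each ridge (the peak between adjacent cuts)."""
    return [PITCH * (i + 0.5) + SLOPE * (bittings[i + 1] - bittings[i])
            for i in range(len(bittings) - 1)]

def click_intervals(bittings):
    """Time between successive pin clicks at constant insertion speed."""
    pos = ridge_positions(bittings)
    return [(b - a) / SPEED for a, b in zip(pos, pos[1:])]

def candidates(observed, tol=1e-9):
    """All keys in the toy keyspace whose predicted intervals match."""
    return [key for key in product(DEPTHS, repeat=NCUTS)
            if all(abs(x - y) <= tol
                   for x, y in zip(click_intervals(key), observed))]

secret = (2, 0, 3, 1)
pool = candidates(click_intervals(secret))  # keys consistent with the sound
```

Even in this toy setup the observed intervals narrow 256 possible keys down to a handful, mirroring the paper's reduction from more than 330,000 keys to three candidates.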
Scientific American podcast:
The strategy is a long way from being viable in the real world. For one thing, the method relies on the key being inserted at a constant speed. And the audio element also poses challenges like background noise.
Boing Boing post.
EDITED TO ADD (4/14): I seem to have blogged this previously.
Posted on March 24, 2021 at 6:10 AM
Interesting research: “Who Can Find My Devices? Security and Privacy of Apple’s Crowd-Sourced Bluetooth Location Tracking System”:
Abstract: Overnight, Apple has turned its hundreds-of-million-device ecosystem into the world’s largest crowd-sourced location tracking network called offline finding (OF). OF leverages online finder devices to detect the presence of missing offline devices using Bluetooth and report an approximate location back to the owner via the Internet. While OF is not the first system of its kind, it is the first to commit to strong privacy goals. In particular, OF aims to ensure finder anonymity, untrackability of owner devices, and confidentiality of location reports. This paper presents the first comprehensive security and privacy analysis of OF. To this end, we recover the specifications of the closed-source OF protocols by means of reverse engineering. We experimentally show that unauthorized access to the location reports allows for accurate device tracking and retrieving a user’s top locations with an error in the order of 10 meters in urban areas. While we find that OF’s design achieves its privacy goals, we discover two distinct design and implementation flaws that can lead to a location correlation attack and unauthorized access to the location history of the past seven days, which could deanonymize users. Apple has partially addressed the issues following our responsible disclosure. Finally, we make our research artifacts publicly available.
There is also code available on GitHub, which allows arbitrary Bluetooth devices to be tracked via Apple’s Find My network.
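The reporting flow is easy to caricature. The sketch below is my own simplification, not Apple's protocol: real OF advertises rotating P-224 elliptic-curve public keys and finders upload ECIES-encrypted locations, whereas this stand-in derives rotating keys with HMAC and "encrypts" with XOR purely to show the lookup structure, where only the seed holder can find and decrypt reports.

```python
import hashlib
import hmac

def adv_key(seed: bytes, epoch: int) -> bytes:
    """Owner device: derive the rotating advertisement key for an epoch."""
    return hmac.new(seed, epoch.to_bytes(4, "big"), hashlib.sha256).digest()

def report(server: dict, advertised: bytes, location: bytes) -> None:
    """Finder: index the report by a hash of the advertised key and
    'encrypt' the location under it (toy XOR stand-in for ECIES)."""
    k = hashlib.sha256(advertised).digest()
    server[k[:16]] = bytes(a ^ b for a, b in zip(location, k))

def retrieve(server: dict, seed: bytes, epoch: int) -> bytes:
    """Owner: re-derive the epoch key, then look up and decrypt the
    report. The server only ever sees opaque hashes and blobs."""
    k = hashlib.sha256(adv_key(seed, epoch)).digest()
    blob = server.get(k[:16])
    return bytes(a ^ b for a, b in zip(blob, k)) if blob else b""

server = {}
seed = b"owner-device-secret"
report(server, adv_key(seed, 7), b"lat=52.52,lon=13.40")
```

The GitHub tool works because any device that broadcasts a well-formed advertisement key gets its location reported by nearby finders, whether or not it is an Apple product.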
Posted on March 15, 2021 at 6:16 AM
Really interesting research:
“Exploitation and Sanitization of Hidden Data in PDF Files”
Abstract: Organizations publish and share more and more electronic documents like PDF files. Unfortunately, most organizations are unaware that these documents can compromise sensitive information like authors names, details on the information system and architecture. All these information can be exploited easily by attackers to footprint and later attack an organization. In this paper, we analyze hidden data found in the PDF files published by an organization. We gathered a corpus of 39664 PDF files published by 75 security agencies from 47 countries. We have been able to measure the quality and quantity of information exposed in these PDF files. It can be effectively used to find weak links in an organization: employees who are running outdated software. We have also measured the adoption of PDF files sanitization by security agencies. We identified only 7 security agencies which sanitize few of their PDF files before publishing. Unfortunately, we were still able to find sensitive information within 65% of these sanitized PDF files. Some agencies are using weak sanitization techniques: it requires to remove all the hidden sensitive information from the file and not just to remove the data at the surface. Security agencies need to change their sanitization methods.
Short summary: no one is doing great.
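The kind of leakage the paper measures is easy to reproduce: PDF Info dictionaries routinely carry author names and producer software versions in plain text. A minimal sketch follows; the sample bytes are synthetic, and a real audit should use a proper PDF parser rather than a regex.

```python
import re

# A synthetic stand-in for a published PDF with an unsanitized Info dictionary.
sample_pdf = (b"%PDF-1.4\n"
              b"1 0 obj\n"
              b"<< /Author (j.doe) /Producer (LibreOffice 5.1) "
              b"/CreationDate (D:20200301120000) >>\n"
              b"endobj\n%%EOF\n")

def hidden_fields(pdf_bytes: bytes) -> dict:
    """Pull common Info-dictionary fields out of raw PDF bytes."""
    found = {}
    for key in (b"Author", b"Producer", b"Creator", b"CreationDate"):
        m = re.search(rb"/" + key + rb"\s*\(([^)]*)\)", pdf_bytes)
        if m:
            found[key.decode()] = m.group(1).decode()
    return found

leaked = hidden_fields(sample_pdf)  # exposes author name and software version
```

A /Producer string like this is exactly the "employees running outdated software" fingerprint the authors exploit.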
Posted on March 12, 2021 at 6:03 AM
Science has a paper (and commentary) on using a laser to generate random bits at 250 terabits per second. I don’t know how cryptographically secure they are, but that can be cleaned up with something like Fortuna.
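The "cleaning up" amounts to running the raw physical samples through a cryptographic conditioner. Fortuna adds entropy pools and a generator on top; the sketch below shows only the simplest version of the idea, hashing fixed-size blocks of raw samples with SHA-256. The 2:1 compression ratio is arbitrary, not a recommendation.

```python
import hashlib

def condition(raw: bytes, block: int = 64) -> bytes:
    """Compress each 64-byte block of raw, possibly biased samples
    into 32 whitened output bytes via SHA-256."""
    out = bytearray()
    for i in range(0, len(raw) - block + 1, block):
        out += hashlib.sha256(raw[i:i + block]).digest()
    return bytes(out)

whitened = condition(bytes(range(256)))  # 256 raw bytes -> 128 output bytes
```

The right compression ratio depends on how much entropy per bit the raw laser samples actually contain, which the paper's physics, not the hash, has to justify.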
EDITED TO ADD (3/12): Here are free versions of the paper and the commentary.
Posted on March 11, 2021 at 6:15 AM
Interesting paper: “Shadow Attacks: Hiding and Replacing Content in Signed PDFs”:
Abstract: Digitally signed PDFs are used in contracts and invoices to guarantee the authenticity and integrity of their content. A user opening a signed PDF expects to see a warning in case of any modification. In 2019, Mladenov et al. revealed various parsing vulnerabilities in PDF viewer implementations. They showed attacks that could modify PDF documents without invalidating the signature. As a consequence, affected vendors of PDF viewers implemented countermeasures preventing all attacks.
This paper introduces a novel class of attacks, which we call shadow attacks. The shadow attacks circumvent all existing countermeasures and break the integrity protection of digitally signed PDFs. Compared to previous attacks, the shadow attacks do not abuse implementation issues in a PDF viewer. In contrast, shadow attacks use the enormous flexibility provided by the PDF specification so that shadow documents remain standard-compliant. Since shadow attacks abuse only legitimate features, they are hard to mitigate.
Our results reveal that 16 (including Adobe Acrobat and Foxit Reader) of the 29 PDF viewers tested were vulnerable to shadow attacks. We introduce our tool PDF-Attacker which can automatically generate shadow attacks. In addition, we implemented PDF-Detector to prevent shadow documents from being signed or forensically detect exploits after being applied to signed PDFs.
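One coarse tell that a PDF-Detector-style tool can look for: a shadowed document carries an incremental update appended after the byte range the signature covers, ending in its own %%EOF marker. This is only a heuristic sketch on synthetic bytes, not the authors' tool; legitimate documents can also have multiple revisions.

```python
# A signed document, then the same document with a shadow revision
# appended as an incremental update (both byte strings are synthetic).
signed = (b"%PDF-1.7\n1 0 obj\n<< /Type /Catalog >>\nendobj\n"
          b"trailer\n%%EOF\n")
shadow = signed + (b"1 0 obj\n<< /Type /Catalog /Hidden true >>\n"
                   b"endobj\ntrailer\n%%EOF\n")

def revision_count(pdf_bytes: bytes) -> int:
    """Each incremental update ends with its own %%EOF marker."""
    return pdf_bytes.count(b"%%EOF")

def updated_after_signing(pdf_bytes: bytes, signed_length: int) -> bool:
    """Flag content appended after the byte range the signature covers."""
    return len(pdf_bytes) > signed_length or revision_count(pdf_bytes) > 1
```

The subtlety the paper exploits is that viewers accept such appended revisions as standard-compliant, so the signature over the original byte range stays valid while the displayed content changes.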
EDITED TO ADD (3/12): This was written about last summer.
Posted on March 8, 2021 at 6:10 AM
Interesting research on persistent web tracking using favicons. (For those who don’t know, favicons are those tiny icons that appear in browser tabs next to the page name.)
Abstract: The privacy threats of online tracking have garnered considerable attention in recent years from researchers and practitioners alike. This has resulted in users becoming more privacy-cautious and browser vendors gradually adopting countermeasures to mitigate certain forms of cookie-based and cookie-less tracking. Nonetheless, the complexity and feature-rich nature of modern browsers often lead to the deployment of seemingly innocuous functionality that can be readily abused by adversaries. In this paper we introduce a novel tracking mechanism that misuses a simple yet ubiquitous browser feature: favicons. In more detail, a website can track users across browsing sessions by storing a tracking identifier as a set of entries in the browser’s dedicated favicon cache, where each entry corresponds to a specific subdomain. In subsequent user visits the website can reconstruct the identifier by observing which favicons are requested by the browser while the user is automatically and rapidly redirected through a series of subdomains. More importantly, the caching of favicons in modern browsers exhibits several unique characteristics that render this tracking vector particularly powerful, as it is persistent (not affected by users clearing their browser data), non-destructive (reconstructing the identifier in subsequent visits does not alter the existing combination of cached entries), and even crosses the isolation of the incognito mode. We experimentally evaluate several aspects of our attack, and present a series of optimization techniques that render our attack practical. We find that combining our favicon-based tracking technique with immutable browser-fingerprinting attributes that do not change over time allows a website to reconstruct a 32-bit tracking identifier in 2 seconds. Furthermore, our attack works in all major browsers that use a favicon cache, including Chrome and Safari.
Due to the severity of our attack we propose changes to browsers’ favicon caching behavior that can prevent this form of tracking, and have disclosed our findings to browser vendors who are currently exploring appropriate mitigation strategies.
Another researcher has implemented this proof of concept:
Strehle has set up a website that demonstrates how easy it is to track a user online using a favicon. He said it’s for research purposes, has released his source code online, and detailed a lengthy explanation of how supercookies work on his website.
The scariest part of the favicon vulnerability is how easily it bypasses traditional methods people use to keep themselves private online. According to Strehle, the supercookie bypasses the “private” mode of Chrome, Safari, Edge, and Firefox. Clearing your cache, surfing behind a VPN, or using an ad-blocker won’t stop a malicious favicon from tracking you.
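The cache channel itself is simple enough to model in a few lines. This is a logical simulation only: the subdomain names are invented, and a real tracker drives the browser through redirects and watches favicon requests rather than touching a set directly.

```python
N_BITS = 32  # size of the tracking identifier

def write_id(identifier: int, cache: set) -> None:
    """First visit: redirect through one subdomain per bit and serve a
    favicon only where the corresponding bit of the identifier is 1."""
    for bit in range(N_BITS):
        if (identifier >> bit) & 1:
            cache.add(f"b{bit}.tracker.example")

def read_id(cache: set) -> int:
    """Return visit: a cached favicon is not re-requested, so each
    missing request reveals a 1 bit of the identifier."""
    ident = 0
    for bit in range(N_BITS):
        if f"b{bit}.tracker.example" in cache:
            ident |= 1 << bit
    return ident

browser_favicon_cache = set()
write_id(0xDEADBEEF, browser_favicon_cache)
```

Reading is non-destructive: reconstructing the identifier leaves the cache entries exactly as they were, which is why the identifier survives across visits.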
EDITED TO ADD (3/12): There was an issue about whether this paper was inappropriately disclosed, and it was briefly deleted from the NDSS website. It was later put back with a prefatory note from the NDSS.
Posted on February 17, 2021 at 6:05 AM
Pile driving occurs during construction of marine platforms, including offshore windfarms, producing intense sounds that can adversely affect marine animals. We quantified how a commercially and economically important squid (Doryteuthis pealeii: Lesueur 1821) responded to pile driving sounds recorded from a windfarm installation within this species’ habitat. Fifteen-minute portions of these sounds were played to 16 individual squid. A subset of animals (n = 11) received a second exposure after a 24-h rest period. Body pattern changes, inking, jetting, and startle responses were observed and nearly all squid exhibited at least one response. These responses occurred primarily during the first 8 impulses and diminished quickly, indicating potential rapid, short-term habituation. Similar response rates were seen 24-h later, suggesting squid re-sensitized to the noise. Increased tolerance of anti-predatory alarm responses may alter squids’ ability to deter and evade predators. Noise exposure may also disrupt normal intraspecific communication and ecologically relevant responses to sound.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Posted on January 29, 2021 at 4:06 PM
Researchers have been able to find all sorts of personal information within GPT-2. This information was part of the training data, and can be extracted with the right sorts of queries.
Paper: “Extracting Training Data from Large Language Models.”
Abstract: It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.
We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model’s training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences are included in just one document in the training data.
We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. For example, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
From a blog post:
We generated a total of 600,000 samples by querying GPT-2 with three different sampling strategies. Each sample contains 256 tokens, or roughly 200 words on average. Among these samples, we selected 1,800 samples with abnormally high likelihood for manual inspection. Out of the 1,800 samples, we found 604 that contain text which is reproduced verbatim from the training set.
The rest of the blog post discusses the types of data they found.
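The selection step, picking the samples with "abnormally high likelihood," is essentially a perplexity ranking. The sketch below stands in for GPT-2 with a toy unigram model; the vocabulary and probabilities are invented, and the real attack also compares likelihood across models and under transformations to filter out merely common text.

```python
import math

# Toy stand-in for a language model: per-token probabilities.
vocab_p = {"the": 0.2, "is": 0.1, "a": 0.15,
           "secret": 0.001, "phone": 0.01, "number": 0.01}

def p(token: str) -> float:
    return vocab_p.get(token, 0.0005)

def perplexity(tokens: list) -> float:
    """Lower perplexity means the model finds the text more likely;
    memorized training text tends to score abnormally low."""
    return math.exp(-sum(math.log(p(t)) for t in tokens) / len(tokens))

samples = {
    "memorized": ["the", "is", "a", "the"],        # model-favored text
    "novel": ["secret", "phone", "number", "is"],  # less likely text
}
# Rank generated samples, most likely first, for manual inspection.
ranked = sorted(samples, key=lambda name: perplexity(samples[name]))
```

This mirrors the paper's pipeline: generate many samples, rank them by how confidently the model reproduces them, and manually inspect the top of the list.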
Posted on January 7, 2021 at 6:14 AM
The microphones on voice assistants are very sensitive, and can snoop on all sorts of data:
In Hey Alexa what did I just type? we show that when sitting up to half a meter away, a voice assistant can still hear the taps you make on your phone, even in presence of noise. Modern voice assistants have two to seven microphones, so they can do directional localisation, just as human ears do, but with greater sensitivity. We assess the risk and show that a lot more work is needed to understand the privacy implications of the always-on microphones that are increasingly infesting our work spaces and our homes.
From the paper:
Abstract: Voice assistants are now ubiquitous and listen in on our everyday lives. Ever since they became commercially available, privacy advocates worried that the data they collect can be abused: might private conversations be extracted by third parties? In this paper we show that privacy threats go beyond spoken conversations and include sensitive data typed on nearby smartphones. Using two different smartphones and a tablet we demonstrate that the attacker can extract PIN codes and text messages from recordings collected by a voice assistant located up to half a meter away. This shows that remote keyboard-inference attacks are not limited to physical keyboards but extend to virtual keyboards too. As our homes become full of always-on microphones, we need to work through the implications.
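The directional localisation the authors lean on comes down to estimating the delay between microphone channels, typically via cross-correlation. Here is a toy version on a synthetic tap; the signal shape and the three-sample delay are invented for illustration.

```python
def delay_by_cross_correlation(a: list, b: list, max_lag: int) -> int:
    """Return the lag (in samples) of b relative to a that maximizes
    the cross-correlation; the lag maps to a direction of arrival."""
    best_lag, best = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i + lag]
                    for i in range(len(a))
                    if 0 <= i + lag < len(b))
        if score > best:
            best, best_lag = score, lag
    return best_lag

# A synthetic tap "click" that reaches the second microphone 3 samples later.
click = [0.0] * 20
click[5], click[6] = 1.0, 0.5
mic1 = click
mic2 = [0.0] * 3 + click[:-3]
```

Given the microphone spacing and sampling rate, the recovered lag converts to an angle via the speed of sound; with six or seven microphones the estimate gets correspondingly sharper.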
Posted on December 22, 2020 at 10:21 AM