Comments
ResearcherZero • January 3, 2025 12:44 AM
Tracking digital ‘fingerprints’ will undermine consumers’ control over information.
https://tribune.com.pk/story/2517259/google-reintroducing-digital-fingerprinting-to-track-users
Coming soon if Google gets its way…
https://www.forbes.com/sites/zakdoffman/2024/12/21/forget-chrome-google-will-start-tracking-you-and-all-your-smart-devices-in-8-weeks/
ResearcherZero • January 3, 2025 1:30 AM
The data broker files contain 3.6 billion cellphone locations, enabling mass surveillance.
(includes English audio)
ResearcherZero • January 3, 2025 1:54 AM
All of this is possible because of apps sharing location data with third parties.
If you can get 1,000 hits on someone working in national security, learn where they go and who their partner is, and even find their contact details, this is a problem with consequences for everyone.
The SDKs, which are automatically included in apps, share this data instantly with data broker companies as soon as location access is granted to the particular application.
Points of interest, Property, Mobility, Address, Imagery and Demographic data can be purchased by anyone at relatively low prices to analyze the footprints of anybody or anything.
Fog purchases “billions of data points” from some “250 million devices” around the United States, originally sourced from “tens of thousands” of mobile apps.
https://www.eff.org/deeplinks/2022/06/what-fog-data-science-why-surveillance-company-so-dangerous
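A hedged sketch of the SDK mechanism described above, in TypeScript against the standard browser geolocation API; “broker.example/ingest” is a hypothetical endpoint, not any real broker’s:

    // A bundled "analytics" SDK forwards position the moment the host app
    // is granted location access; the app author may never see this code.
    function initTrackingSdk(appId: string): void {
      navigator.geolocation.getCurrentPosition((pos) => {
        void fetch("https://broker.example/ingest", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            appId,
            lat: pos.coords.latitude,
            lon: pos.coords.longitude,
            ts: pos.timestamp,
          }),
        });
      });
    }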
Clive Robinson • January 3, 2025 4:31 AM
@ ALL,
There are two obvious “first glance” reasons for this change,
1, Increase influence/profit
2, Provide new input data for AI
Both of these would directly benefit Alphabet / Google.
That said, there is another issue some will immediately think about. Which is,
3, EU data protection legislation.
This is seen by some as emboldening others in the US, with states like California passing legislation to likewise constrain the rampant surveillance of Internet users.
Thus, for those “some”, such legislation cuts off access to the “clean input” that would otherwise be funnelled into the input corpus of AI systems[1].
Especially as there is an increasing trend to move from central to federated social media, driven by the well-observed “blue bird dive of doom”.
Device fingerprinting currently has to be seen like “On Device Scanning” (something Apple apparently desperately wants). It is in effect an artifact of “device use” rather than of the device itself.
Whilst there is little research currently, it is not hard to see how a user’s choices of apps, services and settings are going to be the same or similar across their devices. Thus even if a user separates out their activities across devices to prevent direct linking, the fingerprints will be sufficiently similar to enable correlation, and thus indirect but probable linking. It is just a variation on what intelligence analysts, investigative journalists, and investigators in law enforcement do.
Now consider this being done en masse: it is well within the capabilities of current AI LLM and ML systems. Thus users would in effect be corralled from federated systems back into central systems, allowing the sorted aggregation of user output into a coherent whole and thus vastly increasing its value as data for AI training.
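A minimal sketch of that correlation step, in TypeScript, assuming each device’s fingerprint has already been reduced to a set of app/service/setting choices (the feature names and the 0.5 threshold are illustrative, not from any real tracker):

    // Link two devices when their fingerprint feature sets overlap strongly.
    function jaccard(a: Set<string>, b: Set<string>): number {
      const inter = [...a].filter((x) => b.has(x)).length;
      const union = new Set([...a, ...b]).size;
      return union === 0 ? 0 : inter / union;
    }

    const phone  = new Set(["dark-theme", "dvorak", "app:signal", "app:fastmail", "dnt:1"]);
    const laptop = new Set(["dark-theme", "dvorak", "app:signal", "app:fastmail", "ublock"]);

    // ~0.67 here; above the illustrative threshold the two devices are
    // flagged as probably the same user, with no direct identifier shared.
    console.log(jaccard(phone, laptop) > 0.5); // true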
I’ve previously warned of this sort of activity,
https://www.schneier.com/blog/archives/2024/06/online-privacy-and-overfishing.html/#comment-437979
As part of the AI business plan of,
“Bedazzle, Beguile, Bewitch, Befriend, and Betray.”
(Read the “overfishing” article in that thread, as it explains why “the frog gets boiled” in the first place, due to another human failing that is part of “short-termism”.)
[1] There is a lot of talk about “synthetic data”, that is, using one AI to generate input for another AI. For some reason people are not thinking this through very critically, or I suspect in some cases at all. AGI output is not human output, and is at best a poor facsimile of it in many ways (think randomised or chaotic semantic-layer changes). For AGI to make useful output, it will need to be filtered through humans to be sanity checked, much as the processes for “guide rails” have to be in place. So ask yourself just how fast such checking could be done on synthetic data.
Jan Doggen • January 3, 2025 8:19 AM
The change from “don’t use” to “full disclosure” is essentially a “just do what you like” licence for advertisers. They will put up pages of notifications and options that nobody bothers to wade through, as so many web sites already do with cookies.
Harass the users enough and they will agree with anything to get rid of the clutter.
finagle • January 3, 2025 9:11 AM
Two questions.
Firstly, how useful is device fingerprinting without cookies, local storage, geolocation or IP tracking?
Secondly, why does a browser ‘app’ have access to any information at all outside the limited information of the sandbox it is in? Limited in this case means things like the CSS media query data needed for presentation, or access to secure payment APIs for purchasing.
Aside from these highly common and non-unique parameters of the sandbox, there should be nothing visible for a browser ‘app’ to fingerprint.
Which comes back to the first point: if sandboxing is effective, and a browser app cannot create a unique fingerprint, then tracking comes back to technologies which are governed by legislation and/or opt-out.
At which point, who cares about Google T&Cs governing a useless technology? The article linked to makes the point that Google thinks device fingerprinting is sufficiently blocked or controlled via other features, and while leaving the T&C in place would be my preferred option, if it is no longer effective, meh.
If sandboxing is not effective, that is the real issue. No browser app should be able to tell where or on what it is running; it should only be able to make requests to the sandbox to do things, not ask the sandbox about the environment the sandbox is in. Otherwise it is not a sandbox.
As an aside, the User-Agent header is already a problem here, insofar as the User-Agent string can, and does, contain information that is NOT relevant to the rendering or behaviour of the app. The main use of the User-Agent nowadays seems to be denial of service by middleware.
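On the first question: even with cookies, storage, geolocation and IP off the table, standard APIs still leak plenty. A hedged sketch in TypeScript of what any script can read today, hashed into a compact identifier (all the properties below are real web APIs; the particular selection is illustrative):

    // Collect a few always-available environment signals and hash them.
    async function environmentSignals(): Promise<string> {
      const signals = [
        navigator.userAgent,
        navigator.language,
        String(navigator.hardwareConcurrency),
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        Intl.DateTimeFormat().resolvedOptions().timeZone,
        String(new Date().getTimezoneOffset()),
      ].join("|");
      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(signals),
      );
      // Hex-encode the hash so it can be stored or compared cheaply.
      return [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

None of these properties was designed for tracking; each exists for a legitimate presentation reason, which is exactly the sandbox complaint above.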
Michael Holly • January 3, 2025 1:46 PM
Devices shed too much information; it’s in their nature. If you want a basis for a cross-platform fingerprint, try fonts. Almost all devices use the same modern font machinery, and apps often load their own fonts (the designer wants the app to look pretty), so almost any device with a custom mix of apps will have a distinguishing mix of fonts. That mix is easy to turn into a fingerprint. It may not be completely unique, but couple it with IP and other data points and you can pretty much tell whether you have seen a previous request from that device.
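A hedged sketch of that probe, using the standard canvas text-measurement trick (the candidate font list is illustrative): render a test string in a candidate font with a generic fallback, and a width change against the fallback alone reveals that the font is installed.

    // Return the subset of candidate fonts installed on this device.
    function detectFonts(candidates: string[]): string[] {
      const ctx = document.createElement("canvas").getContext("2d")!;
      const probe = "mmmmmmmmmmlli";
      ctx.font = "72px monospace";
      const baseline = ctx.measureText(probe).width;
      return candidates.filter((font) => {
        ctx.font = `72px "${font}", monospace`;
        // A different width means the candidate, not the fallback, rendered.
        return ctx.measureText(probe).width !== baseline;
      });
    }

    // The surviving subset becomes one dimension of the fingerprint.
    const fontMix = detectFonts(["Roboto", "SF Pro Text", "Segoe UI", "Ubuntu"]);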
Winter • January 4, 2025 9:04 AM
Device fingerprinting likely falls within the reach of the GDPR, which would limit it severely.
https://www.termsfeed.com/blog/legal-requirements-device-fingerprinting/
Device fingerprints have much more longevity and stability, which makes them a better choice for websites or mobile apps looking to track user data.
The Article 29 Working Party, which is made up of representatives of each of the EU Member States’ Data Protection Authorities, has been pushing to have device fingerprinting made subject to the same EU data protection requirements and regulations as cookies.
Anyhow, who in their right mind would trust a company? Even if they are icons of morality and trustworthiness today, the next wave of employees could, and will, sell out.
Clive Robinson • January 4, 2025 2:30 PM
@ Winter, ALL,
Is amoral behaviour a crime? And if not, should it be when it causes harm to individuals or others?
That is the question that arises from your observation,
“[W]ho in their right mind would trust a company? Even if they are icons of morality and trustworthiness today, the next wave of employees could, and will, sell out.”
You forgot to mention perhaps the worst offenders: the ten-cents-on-the-dollar fire-sale and bankruptcy buyers, who acquire a company’s held data and transfer it to a third party that then seeks to make the best profit from it.
Thus companies should not acquire or hold such information, because any contract or promise they make to you is immediately negated when the “asset” of your information is sold.
Buggy • January 6, 2025 4:39 PM
I fully expect this is already happening, like all the previous times Google got caught with their fingers in the cookie jar after claiming “nuh-uh”. Certainly, Chrome plugins are doing this as well … and this is one of the ways that popular CRMs already identify unique users, so expect most pages you visit that are commercial in nature to be fingerprinting your session.
I manage Google Ads for a few clients. Google tried to force us to use (effectively) server-side page scraping to identify PII from forms, shopping carts, credit card entry, etc. in order to ID a unique user. They claimed they were going to stop using cookies once we switched methods, and then backtracked. When adoption (and effectiveness) of the server-side solution proved low, and anti-trackers were making cookies ineffective, they resorted to fingerprinting, which is trivially easy these days due to the number of variables in the browser and the ease of accessing them: simply make the site unusable without JS, and ID’ing a customer is trivial.
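A hedged illustration of that “trivially easy” claim: the classic canvas fingerprint, where tiny GPU/driver/font-stack differences change the rendered pixels, so the resulting data URL (or a hash of it) is a cheap, fairly stable device identifier. The drawing choices below are illustrative:

    // Render a fixed scene and return the device-specific pixel encoding.
    function canvasFingerprint(): string {
      const canvas = document.createElement("canvas");
      canvas.width = 240;
      canvas.height = 60;
      const ctx = canvas.getContext("2d")!;
      ctx.textBaseline = "top";
      ctx.font = "16px Arial";
      ctx.fillStyle = "#f60";
      ctx.fillRect(10, 10, 100, 30);
      ctx.fillStyle = "#069";
      ctx.fillText("fingerprint test \u{1F600}", 4, 20);
      // Identical code, subtly different pixels on different machines.
      return canvas.toDataURL();
    }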
I’m sure someone is going to say “simply don’t enable JS” … which is like saying “effectively turn off 90% of the commercial internet”, which is worse than what the Chinese do on purpose. So, you’re cool with China’s internet practices, and want more of that?
no_search_4_u • January 7, 2025 12:53 PM
@ Buggy
I see something new.
Today I tried to search Google using a browser with JS off. No search results; instead, a page telling me to turn on JS if I wanted to search, or to download Chrome.
So I went to DDG and did my search.
Then I tried Google again after running the search on DDG. Curiously, Google then worked with JS off.
JS was disabled in the browser the whole time.
CDN Hell • January 7, 2025 11:41 PM
I’ve also noticed the recent JS requirements. My guess is that they didn’t like not seeing the third link I clicked afterwards.
Heck, it’s happening here too! The site only shows week-old comments unless I interact beyond the recent-comments page.
OK, you keep your AI secrets then…
I’ve heard this described in other terms, something like “pre-compliance”?
Who? • January 9, 2025 11:09 AM
I fear I am missing the point here. Fingerprinting has existed for decades and will not magically disappear just because a policy says it should. Corporations have been fingerprinting our devices for years; even those that do not say so, and even those that claim they aren’t, are fingerprinting us too.
We need to be proactive. We must not expect a privacy-friendly policy to be enough.
Google wants to fingerprint us? OK, then choose a browser other than Chrome (or even Ungoogled Chromium), use another search engine, and install add-ons that protect against fingerprinting. Not to mention: use DoT at the operating-system level and DoH at the browser level. Use secure operating systems, as I would not rule out large corporations trying to get into our devices. Compartmentalize browser tabs using extensions like Firefox Multi-Account Containers, and try to be as quiet as possible.
Fly under the radar where possible.
Obviously, patterns will emerge from our behavior and network-related activity; no add-on will easily protect us against that. But it is another matter that has nothing to do with cookies, tracking pixels, device identifiers and so on.
Choose whatever operating system you consider secure (as I said, I do not rule out large corporations hacking our devices to steal information in the near future). My choice is OpenBSD. Try to make it look like Linux or Windows (browsers can be configured to work this way) to present a generic fingerprint that will not easily identify us.
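For the generic-fingerprint point, a minimal sketch as a Firefox user.js; these prefs do exist in current Firefox (verify them for your version), and the resolver URL is a placeholder, not a recommendation:

    // Report a standardized UA, timezone, and canvas behaviour, so the
    // browser blends in with other hardened Firefox installs.
    user_pref("privacy.resistFingerprinting", true);
    // DoH at the browser level; mode 3 means DoH only, no cleartext fallback.
    user_pref("network.trr.mode", 3);
    user_pref("network.trr.uri", "https://dns.example/dns-query"); // placeholder resolver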
Be aware that certain operating systems (macOS, some Linux distributions and, of course, Windows) are not playing on our team, even though they are in the same league. These operating systems are not our friends.
Try to replace stock Android with GrapheneOS or DivestOS.
Use common sense. As I said in the past, do not trust regulations or corporate policies. As an example, the GDPR supposedly protects European citizens’ privacy, but far from it: in effect it protects the data gathered by brokers, as the regulation allows corporations to store “any gathered data that fits its business model”.
Do not trust governments, do not trust laws, do not trust corporations. As said before in this post, they are in the same league as us but obviously not playing on our team.
Be slightly paranoid. Not too much, just enough. Some years ago I was at risk of being jailed, believe it or not, because I published my curriculum vitae and a defense contractor was too determined to hire me.
Privacy and computer security are like a game of chess. Play this game in a clever way.
Clive Robinson • January 13, 2025 12:42 AM
@ Bruce, Winter, ALL,
Related, though technically “off topic”.
Above I mentioned the security concerns of failed/failing companies being sold off at “fire sale” prices and the data on/of their users then being appropriated and monetized.
In the news currently is a company called “Bench” that failed while holding a lot of users’ financial records.
According to this article,
Those who have taken over the remains of Bench are in effect extorting Bench’s customers/users via their data.
It would appear “the moral” of this is,
“If you value your data, ensure you back it up as often as you update it.”
Further, you need to,
“Back it up in a ‘universal import’ form, not a ‘service proprietary storage’ form.”
That means a text-based form such as CSV, so you can then much more easily move to another provider.
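A minimal sketch of that advice in TypeScript: a dependency-free CSV export with proper quoting, so the backup opens in anything (the ledger fields are illustrative):

    // Serialize rows to CSV, quoting any field containing ',' '"' or newline.
    function toCsv(header: string[], rows: string[][]): string {
      const escape = (field: string): string =>
        /[",\n]/.test(field) ? `"${field.replace(/"/g, '""')}"` : field;
      return [header, ...rows].map((row) => row.map(escape).join(",")).join("\n");
    }

    // e.g. dump ledger entries before trusting any provider's own format.
    const backup = toCsv(
      ["date", "payee", "amount"],
      [["2025-01-03", "Acme, Inc.", "-120.00"]],
    );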
Though in all honesty, perhaps the best thing to do is not to use SaaS or Cloud services at all, but to keep your data firmly under your control: not just in an ‘Open Data Format’, but ‘in house’, ‘inside your security perimeter’, and under your ‘disaster recovery plan’ where possible.
Clive Robinson • January 13, 2025 12:35 PM
@ Bruce, ALL,
One aspect of “fingerprinting” is identifying the hardware the user is associated with.
Related to this is the increasing list of hardware vulnerabilities caused by the internal logic of the CPU, the “Xmas gifts that keep giving” that most visibly started with Spectre and Meltdown, and whether the system user/owner has mitigated them or not.
Well, some interesting work has arisen from a suggestion by David Kaplan, a Senior Fellow in security engineering at AMD. Put simply,
“[It’s called] Attack Vector Controls for Linux, [and] it’s a new approach to managing what CPU security mitigations are applied or not based on the class/scope of vulnerabilities rather than managing the mitigations at an individual level.”
Well, as David Kaplan has noted in the recently released third iteration of the Attack Vector Controls patch set for Linux,
“While many users may not be intimately familiar with the details of these CPU vulnerabilities, they are likely better able to understand the intended usage of their system. As a result, unneeded mitigations may be disabled, allowing users to recoup more performance. New documentation is included with recommendations on what to consider when choosing which attack vectors to enable/disable.”
It comes with a handy breakdown of which vulnerability mitigations should be used for each use case of a system.
As most of these CPU hardware-vulnerability mitigations will be detectable by remote “users”, they now stand as a new class of remote hardware fingerprinting.
See the chart in,
https://www.phoronix.com/news/Linux-CPU-Attack-Vector-Control
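Linux already exposes the relevant state locally; a remote fingerprinter has to infer the same states indirectly (for example through timing probes), but a hedged sketch of the underlying signal, run as Node/TypeScript on a Linux box:

    import { readdirSync, readFileSync } from "node:fs";

    // Each file under this directory names one CPU vulnerability and states
    // whether, and how, it is mitigated on this machine.
    const dir = "/sys/devices/system/cpu/vulnerabilities";
    for (const name of readdirSync(dir)) {
      const status = readFileSync(`${dir}/${name}`, "utf8").trim();
      console.log(`${name}: ${status}`); // e.g. "spectre_v2: Mitigation: ..."
    }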
ResearcherZero • January 13, 2025 11:10 PM
@Who
FLoC allows advertisers to side-step consumer control over their user data through what it calls privacy-enhancing technologies (or PETs). This technology allows users to be identified and tracked cross-device using pieces of information about a device’s software or hardware, which, when combined, can uniquely identify a particular device and user.
The data can be obtained through multiple sources and locations including IP, GNSS/GPS, advertising IDs and other identifying sources such as biometrics to uniquely identify users. So, even if you ‘clear all site data’, the organisation using fingerprinting techniques could immediately identify you again.
List of apps:
https://pastejustit.com/atnbotturr
Android packages:
https://gist.github.com/fs0c131y/f498b21cba9ee23956fc7d7629262e9d
Further list of sources of bulk data ingestion:
https://docs.google.com/spreadsheets/d/1Ukgd0gIWd9gpV6bOx2pcSHsVO6yIUqbjnlM4ewjO6Cs/edit?usp=sharing
Chris Drake • January 15, 2025 3:28 AM
I doubt it really is “fingerprinting”. Google owns too much of your software “infrastructure”, and fingerprinting is still really iffy. My guess is they’re adding a new indelible identifier into all devices (Chrome, Android TLS, …), which only they will be able to access, and they’re disguising this as “fingerprinting”.