People-Search Site Removal Services Largely Ineffective

Consumer Reports has a new study of people-search site removal services, concluding that they don’t really work:

As a whole, people-search removal services are largely ineffective. Private information about each participant on the people-search sites decreased after using the people-search removal services. And, not surprisingly, the removal services did save time compared with manually opting out. But, without exception, information about each participant still appeared on some of the 13 people-search sites at the one-week, one-month, and four-month intervals. We initially found 332 instances of information about the 28 participants who would later be signed up for removal services (that does not include the four participants who were opted out manually). Of those 332 instances, only 117, or 35%, were removed within four months.
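As a sanity check on the quoted figures, the removal rate works out as follows (a minimal sketch using only the two numbers Consumer Reports reports: 332 instances found, 117 removed within four months):

```python
# Removal-rate arithmetic from the Consumer Reports figures quoted above.
found = 332    # instances of exposed info for the 28 signed-up participants
removed = 117  # instances gone within four months

removal_rate = removed / found
print(f"Removed: {removal_rate:.0%}")           # roughly 35%
print(f"Still listed: {1 - removal_rate:.0%}")  # roughly 65% remained
```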

Posted on August 9, 2024 at 9:24 AM • 8 Comments

Comments

TimH August 9, 2024 10:13 AM

I tried six opt-out pages after reading the piece, for giggles: the three best and the three worst. They are set up to discourage, with multi-pass CAPTCHAs, and some with email verification followed by another round of multi-pass CAPTCHAs. I gave up on Intelius, as the opt-out page didn’t make sense. And I’m in CA, so by law there should be an obvious opt-out.

Daniel Popescu August 9, 2024 1:27 PM

Well 😁… not at all an expert on this, but: HDDs, NASes, and any other storage device you can think of always get backed up, archived, and restored. Then you have your messy databases, disgruntled employees or business partners who leave with heaps of confidential data, and so on.

The companies themselves trade the data with anybody that wants to buy it: as Bruce (I think) said in one of his books, “data is a commodity”.

Clive Robinson August 9, 2024 3:23 PM

@ ALL,

Re : Who knows how to stay off the Internet and out of data brokers’ DBs?

The report really only covers a tiny fraction of people.

1, Those living in the US
2, Covered by the legislation of just two states.

So take the results with a very large pinch of salt.

Further, the report writers admit up front that the group they selected was not just “too small” but in a way “self-selecting”, so not representative of a larger population.

In a way, those people are likely to be more careful about what information they reveal than “the norm”.

So we could expect their results to be, in effect, artificially better than the norm.

So with,

“Consumer Reports has a new study of people-search site removal services, concluding that they don’t really work”

Concluding they “don’t really work” for a group that is more aware and probably takes more care kind of suggests they don’t work for anyone who would not be seen as “paranoid” by others who are less aware or unaware.

So it raises a question of,

“Would anybody who is aware seriously think that, as long as they are feeding information into the likes of Meta, LinkedIn, Alphabet, Microsoft, and application developers of any sort for ‘Walled Gardens’, they are not going to have any personal, private information taken by these organisations?”

We know, for instance, that the Meta Group is an extraordinarily rapacious grab-everything organisation that forces identifying tags in wherever it can (install any Meta App and your privacy is deader than the first Dodo). Likewise, Google and Microsoft are forcing unique identifiers on people wherever they can. Google in particular has been pronounced guilty by a judge of behaviour whose intent, to many people’s eyes, is deeply shocking.

And all these US corps and organisations try to link such tags to “Personally Identifying Information” (PII)[1] to then sell on, highly profitably.

Heck, Google/Alphabet has just been found guilty in court not just of such and related activities, but of paying the likes of Apple and others billions of dollars to ensure its “spyware” is preeminent on hardware users buy and think they own.

Without doubt the US Government bears a very large responsibility for the behaviours of these Corps, Companies, Organisations, and Individuals.

And they are more than fully aware of it, because when other countries’ corps, companies, organisations, and individuals do similar things, there are cries of “outrage”, “faux-shock”, and similar claims.

Keep your eyes open about Temu, which is seen by some as a “bottom feeder” for many reasons, and which is just now starting to be given the “China Bad” treatment.

As someone who does not reside in the US I do find wry / ironic humour in these behaviours.

But it’s not these “public sources” that worry me; it’s the likes of Palantir and other “Private Data Brokers” that use the data, together with other non-public data, to “build profiles” to sell to Law Enforcement and other Government “Guard Labour” organisations. They do this in part by pushing the data into AI LLM and ML systems. Have a think on the likely consequence of this, especially as this sort of information begets similar information, which then becomes grist in the AI mill of utter nonsense that is called “Hallucinations” by some, but either Soft or Hard Bullshit by those more professional than the pushers of the current AI investor bubble.

[1] The TLA “PII” is more often seen than the full term “Personally Identifying Information”, and thus hides the horror of what gets taken by such criminals due to absent or chronically lax legislation in the US and other jurisdictions. Some people do give indicators,

https://www.investopedia.com/terms/p/personally-identifiable-information-pii.asp

Clive Robinson August 10, 2024 8:22 AM

@ SpaceLiform, JonKnowNothing, ALL,

Re : Xmas Gift keeps giving.

Serious Hardware Security Vulnerability in AMD CPU chips

Above I ask the question,

“Who knows how to stay off the Internet and out of data brokers’ DBs?”

Well, think of those “others” a little more broadly as system attackers, including most “cybercriminals”, but also those you “don’t invite in” via the OS or Apps you have.

The main issue is “how do they get in” and there are various answers depending on how you look at it, some call it “finding a toehold”. But put simply in all cases,

“The attackers use an existing vulnerability to subvert the intended operation of the system user/owner.”

Originally this was seen by most as a “software issue”, and few, except those who had crawled into the gnarly depths of “firmware” in IO devices, microcode, etc. on the other side of the CPU ISA, were aware of the nasties that lurked much further down the computing stack[1].

Having been involved with the design of very high performance CPUs made from “bit-slice” chips in the early 1980s, I was more than aware of the problems not just with microcode but with the lower “Register Transfer Language/logic” (RTL). I moved on to do other things in the late 80s and 90s, with security in critical communications for the petrochemical and satellite fields, and was less focused on CPU issues apart from those to do with “Smart Cards”. However, in May 1995 the IEEE had a symposium on security, and one of the presentations covered such a fault in the 8086 family.

“‘The Intel 80×86 Processor Architecture: Pitfalls for Secure Systems’ published May 1995 at the IEEE Symposium on Security and Privacy”

https://dl.acm.org/doi/10.5555/882491.884240

And as can be seen from the list of papers that cited it, it did not create much apparent interest for the following decades.

Then some students discovered a real doozy of a vulnerability in 2017, which Intel tried to keep very quiet at a time that happened to include the Black Friday to Xmas sales period, when many home computers are sold, with, just by coincidence, a “senior insider” flogging off a big chunk of shares…

Called Meltdown, it quickly gave rise to another discovery or two: Spectre V1 and V2.

I jokingly named such CPU-internal “Go Faster Stripe” faults “The Xmas Gift that keeps giving” and said I expected it to keep giving for half a decade or more (based on my previous knowledge of CPU design).

Well, it’s just been revealed that “The Xmas Gift that keeps giving” has done it again: there is another catastrophic fault in many AMD CPU chips, one that has been there for a decade or even two,

https://www.wired.com/story/amd-chip-sinkclose-flaw/

It’s a “brick of death” type vulnerability, in that if an attacker uses it, you cannot, as the end-of-supply-chain user, “undo it”. As the article notes, one of the discoverers puts it this way:

“Nissim sums up that worst-case scenario in more practical terms: “You basically have to throw your computer away.””

People wonder why I,

1, Still use mid-90s and earlier computers (where these issues do not exist).
2, Advise people to remove all external communication paths from computers that are used for private, sensitive or privileged information.

As was once noted,

“To attack a computer you first have to be able to reach it.”

Hence the old joke about dropping it in a deep sea trench.

But… it looks like I was wrong about “The Xmas Gift that keeps giving”; I thought such “CPU Go Faster Stripe” vulnerabilities would be cleaned up and in effect in the past by now, not still in production…

However, Intel decided not to fix Meltdown/Spectre for performance and other reasons, and other CPU designers and manufacturers appear to have followed suit.

So “segregation” of sensitive information systems is your only mitigation.

I’ve talked about setting up “Energy Gapped” systems in the past decade or more and a quick search on this and other sites will pull them up.

[1] For a more in-depth overview of this see the Jan 2018 article,

https://www.csoonline.com/article/564203/spectre-and-meltdown-explained-what-they-are-how-they-work-whats-at-risk.html

Who? August 10, 2024 3:19 PM

…and here comes the European General Data Protection Regulation (“GDPR”), which supposedly plays on the side of European citizens. However, nothing could be further from the truth, as this European regulation allows any data broker to preserve whatever they find useful for their nightmarish business model.

As one of those data brokers replied, one year ago, to my request to remove personal data from their databases: the GDPR is here to protect their business model, allowing them to preserve anything they want if they think the information is valuable to their business model.

Do not trust regulations, do not trust politicians, and do not trust technology that is not under our complete control. Easy, and it should be obvious by now.

This is war, and governments and corporations around the world are not playing on our side. They never did.

Do you want to protect your valuable personal information? Then hide yourself as much as you can; make it as difficult as possible to retrieve the new “gold”. Once they have information about us, they will never drop it. And governments will support this sick business model, as they are highly interested in getting access to this information too.

britelite77 August 10, 2024 6:33 PM

As someone who doesn’t have access to a “profile” of an individual, I am very curious to see what said profiles look like: what information is there and, more importantly, how accurate that information is. Since the average person can’t see what their “profile” looks like, how can one assess the validity of the information? These profiles could potentially have significant adverse effects on someone’s standing in society.

One of the issues related to the original post is that even if a data broker deletes your information when requested, new incoming information will likely create a new record. New information is being generated and shared 24/7. Let’s say a John Smith with a specific SSN and DOB requests that his information be deleted. When new information about John Smith comes in, the record might be recreated. But what about other John Smiths? What if some of their information is inadvertently mixed with the original John Smith’s? Now you have multiple John Smiths with mixed information spread across multiple records named John Smith. When a request is made for deletion, which John Smith will be deleted? When a profile is built, what information is accurate or inaccurate?
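A toy sketch of that failure mode (purely hypothetical; no broker’s actual pipeline is shown here): if records are keyed on name alone, a deletion is undone by the next ingest, and two people’s data can end up merged into one record.

```python
# Toy sketch (hypothetical, not any broker's real pipeline): why name-only
# matching re-creates and cross-contaminates records after a deletion request.
records = {}  # keyed on name only: the root of the problem

def ingest(name, field, value):
    """Merge incoming data into whatever record shares the name."""
    records.setdefault(name, {})[field] = value

def delete(name):
    """Honor a deletion request for the record under this name."""
    records.pop(name, None)

# John Smith #1 asks for deletion...
ingest("John Smith", "dob", "1980-01-01")
delete("John Smith")

# ...but fresh data about a *different* John Smith recreates the record,
ingest("John Smith", "address", "12 Oak St")
# and later data about the first John Smith is merged into it.
ingest("John Smith", "dob", "1980-01-01")

# One record now mixes two people's attributes.
print(records["John Smith"])  # {'address': '12 Oak St', 'dob': '1980-01-01'}
```

A real pipeline keys on more than a name, but the same ambiguity reappears whenever the matching keys are weaker than a true identity.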

Let’s say an individual has internet service and connects their devices at home via Wi-Fi. Anyone who connects to this Wi-Fi access point, including guests or malicious entities who compromise the connection, can use that internet connection. ISPs are known to collect user information and sell it to data brokers and advertising companies. Are you, as the legal account holder on the ISP account, associated with whatever activities and correlations arise from those potentially unknown parties? Yes. How does this affect a profile?

What if your Google, LinkedIn, Microsoft, etc. account were compromised and used for malicious purposes? Is your name now attached to said activities in your “profile”?

These profiles put together by data brokers are purchased by government agencies. What sort of “list” might someone get put on, depending on what information is collected?

One of the major issues for an individual looking to protect their privacy is that it’s already too late. Your information is already out there. It’s possible you could scrub it from every single US data broker, depending on your knowledge, resources, and standing, but what about data brokers in other countries? They don’t have to follow US laws. A motivated malicious threat actor with resources can simply purchase information from data brokers in another country.

The good news, however, is that data can get stale. Birthdays and Social Security numbers might not change, but emails, home addresses, phone numbers, credit card numbers, etc. can change over time, which can help mitigate risk.

One’s threat model relative to the risk posed by personal information on the internet can vary greatly from one individual to the next. Does a CEO of a defense contractor need the same amount of privacy as a Hostess Twinkies delivery person? Does this mean every citizen shouldn’t be afforded a strong privacy protection law? No. The Twinkies delivery person today might be a CEO or senator tomorrow. The US needs very strong privacy laws. Will this in the short term upend the business model of the internet? Yes. Will a new business model replace it eventually? Yes. Privacy and security aren’t the same thing. However, a lack of privacy leads to insecurity.

jimbeaners August 11, 2024 12:45 PM

I use DeleteMe and Incogni and I know for a fact that my information is no longer on any of these sites.

ResearcherZero August 12, 2024 10:22 PM

@britelite77

Personally, I recommend having a very uncommon name and an uncommon face.

Privacy law is ignored and circumvented by companies. Penalties are small and uncommon.
Data is hoovered up constantly and legislators really drag their feet.

Tech companies have not given users any [real] control over how their data is used.

https://www.reuters.com/technology/x-hit-with-austrian-data-use-complaint-over-ai-training-2024-08-12/

Twitter International is not complying with its obligations under the GDPR

“DPC, working in conjunction with our EU/EEA peer regulators, continue to examine the extent to which the processing complies with the GDPR,” DPC chairman Dr Des Hogan, said.
https://www.rte.ie/news/business/2024/0808/1464108-x-to-pause-using-european-user-data-to-train-ai-systems/

“Twitter appears to have breached a number of other GDPR provisions, including GDPR principles, transparency rules and operational rules. In addition to the lack of a valid legal basis, it’s highly unlikely that Twitter properly distinguishes between data from users in the EU/EEA and other countries where people don’t enjoy GDPR protection.”

https://noyb.eu/en/twitters-ai-plans-hit-9-more-gdpr-complaints

privacy!?

“You’re going to be taken to a privacy policy page. Once this loads, you’re going to click ‘right to object’ … and you’re going to be taken to a form that you’re going to fill out and explain why you don’t want your data taken.”

https://www.snopes.com/news/2024/06/13/meta-ai-training-user-data/

“We believe that Europeans will be ill-served by AI models that are not informed by Europe’s rich cultural, social and historical contributions. Meta is not the first company to do this – we are following the example set by others, including Google and OpenAI, both of which have already used data…”

https://about.fb.com/news/2024/06/building-ai-technology-for-europeans-in-a-transparent-and-responsible-way/

Meta users are automatically opted in to consent to data harvesting.
https://arstechnica.com/tech-policy/2024/06/meta-to-train-undefined-ai-tech-on-facebook-users-posts-pics-in-eu/

Meta claims that there is no option to opt-out at a later point to have your data removed (as foreseen under Article 17 GDPR and the “right to be forgotten”)

https://noyb.eu/en/noyb-urges-11-dpas-immediately-stop-metas-abuse-personal-data-ai
