Privacy for Agentic AI

Sooner or later, it’s going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think it’s worth thinking about the security of that now, while it’s still a nascent idea.

In 2019, I joined Inrupt, a company that is commercializing Tim Berners-Lee’s open protocol for distributed data ownership. We are working on a digital wallet that can make use of AI in this way. (We used to call it an “active wallet.” Now we’re calling it an “agentic wallet.”)

I talked about this a bit at the RSA Conference earlier this week, in my keynote talk about AI and trust. Any useful AI assistant is going to require a level of access—and therefore trust—that rivals what we currently give our email provider, social network, or smartphone.

This agentic wallet is an example of an AI assistant. It will combine personal information about you, transactional data that you are a party to, and general information about the world, and use all of that to answer questions, make predictions, and ultimately act on your behalf. We have early-stage demos of this running right now. Making it work is going to require an extraordinary amount of trust in the system, which means it requires integrity. That’s why we’re building protections in from the beginning.
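
To make that concrete, here is a minimal sketch of the kind of consent-gated access such a wallet might enforce. The class, its fields, and the scope names are hypothetical illustrations for this post, not Inrupt’s actual design:

    from dataclasses import dataclass, field

    # Hypothetical sketch: a wallet holds several classes of personal data,
    # and every agent read passes through an explicit consent check first.
    @dataclass
    class AgenticWallet:
        personal: dict                               # e.g. profile facts
        transactions: list                           # e.g. purchase records
        consents: set = field(default_factory=set)   # scopes the user granted

        def grant(self, scope: str) -> None:
            """The user explicitly grants an agent access to one data scope."""
            self.consents.add(scope)

        def read(self, scope: str):
            """Agents get data only for scopes the user has consented to."""
            if scope not in self.consents:
                raise PermissionError(f"no consent for scope: {scope}")
            return {"personal": self.personal,
                    "transactions": self.transactions}[scope]

    wallet = AgenticWallet(
        personal={"home_city": "Boston"},
        transactions=[{"merchant": "bookstore", "amount": 42.0}],
    )
    wallet.grant("transactions")
    print(wallet.read("transactions"))   # allowed: consent was granted
    # wallet.read("personal")            # would raise PermissionError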

Visa is also thinking about this. It just announced a protocol that uses AI to help people make purchasing decisions.

I like Visa’s approach because it’s an AI-agnostic standard. I worry a lot about lock-in and monopolization of this space, so anything that lets people easily switch between AI models is good. And I like that Visa is working with Inrupt so that the data is decentralized as well. Here’s our announcement about its announcement:

This isn’t a new relationship—we’ve been working together for over two years. We’ve conducted a successful POC and now we’re standing up a sandbox inside Visa so merchants, financial institutions and LLM providers can test our Agentic Wallets alongside the rest of Visa’s suite of Intelligent Commerce APIs.

For that matter, we welcome any other company that wants to engage in the world of personal, consented Agentic Commerce to come work with us as well.

I joined Inrupt years ago because I thought that Solid could do for personal data what HTML did for published information. I liked that the protocol was an open standard, and that it distributed data instead of centralizing it. AI agents need decentralized data. “Wallet” is a good metaphor for personal data stores. I’m hoping this is another step towards adoption.
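
For readers wondering what that looks like on the wire: in Solid, your data lives in a pod you control and is read over plain HTTP, with the pod server enforcing your access-control rules. Here is a minimal sketch of that pattern; the pod URL and token are placeholders, and real Solid clients authenticate via Solid-OIDC rather than a hand-supplied bearer token:

    import requests

    # Placeholder pod resource; in Solid, each user picks their own pod host.
    RESOURCE = "https://alice.example-pod.org/wallet/transactions.ttl"

    # Placeholder credential; real deployments obtain tokens via Solid-OIDC.
    TOKEN = "example-access-token"

    resp = requests.get(
        RESOURCE,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "text/turtle",   # Solid resources are typically RDF
        },
    )

    if resp.status_code == 200:
        print(resp.text)   # the user's data, served from their own pod
    elif resp.status_code in (401, 403):
        print("Denied: the pod owner has not granted this agent access.")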

Posted on May 2, 2025 at 2:04 PM

Comments

ResearcherZero May 3, 2025 3:03 AM

Anthony Jancso, a former Palantir employee, is recruiting and reaching out to Palantir pals for an AI agent project. Potential recruits would not require a security clearance.

It really does look like America will become the United States of Surveillance. One with plenty of contracts for private surveillance companies and little privacy or security.

https://www.wired.com/story/doge-recruiter-ai-agents-palantir-clown-emoji/

DOGE is racing to replace thousands of government employees with AI agents.
https://www.theatlantic.com/technology/archive/2025/03/gsa-chat-doge-ai/681987/

In order to do this work an undergraduate is rewriting regulations using AI tools.
https://www.independent.co.uk/news/world/americas/us-politics/doge-staffer-college-ai-rewrite-regulations-b2743052.html

DOGE is also using AI for surveillance purposes, raising concerns over data ingestion.
https://www.reuters.com/technology/artificial-intelligence/musks-doge-using-ai-snoop-us-federal-workers-sources-say-2025-04-08/

Alessandro May 3, 2025 5:28 AM

Hi, I have been following Solid for quite some time, but I am not sure I understand how these Agentic Wallets work. Does it mean that, somehow, the Wallet (i.e., connected to a Solid pod, I guess?) will contain data about users’ online transactions (e.g., purchases, etc.)? But that would mean that all websites would have to interact with the Solid pods/Wallet?

Sorry for the very – probably silly – question. I love the idea of Solid, and I am very curious.
Thanks!

Clive Robinson May 3, 2025 12:28 PM

@ Bruce, ALL,

Your opening lines,

“Sooner or later, it’s going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think it’s worth thinking about the security of that now, while it’s still a nascent idea.”

are too late, as we are well beyond “nascent ideas”; that is, it’s already happened, and if the 1990s issues are anything to go by it’s going to be a disaster and nightmare combined.

What will be new however is the “control of people” by “surveillance legislation”.

Back in the 1990s, when the Internet as the “World Wide Web” was nascent, the “adult things” that were getting US politicos’ panties in a wad were what was jokingly called “the 3 Gs” (to rhyme with “The Bee Gees”).

Standing for “Girls, Gambling, and Games”, they were a political nightmare, especially in US red states and a whole load of authoritarian nation states.

Way too many activists who had nothing better to do in life put saucepans and similar on their heads and went off “tilting at windmills” to slay the devils that they claimed others should be protected from, whether they wanted to be protected or not, because these “lids” knew best about how people should behave.

As a rule, the agitating types were ultra-conservative, lacking in worldly experience, and oft a certain type of religious that is strongly patriarchal. Yet those given to actually being patriarchal leaders making political mileage all too often turned out to be the types children really should be protected from.

Well, back then “age gating” was the excuse of the conservative types, and in oh so many ways it failed to do what they wanted; in fact, back then it was clear nothing really would work, for significant reasons that remain valid today.

The issue of Age Gates and Constitutional rights and freedoms has just recently come up again, hence this article:

https://arstechnica.com/tech-policy/2025/04/redditor-accidentally-reinvents-discarded-90s-tool-to-escape-todays-age-gates/

The big difference is that, to try and fix “yesterday’s failings”, certain control-freak types are looking at what those with any tech common sense already know is going to be “tomorrow’s failings” disaster… which is,

“Current AI systems with a box of bells and cocktail umbrellas thrown in.”

And all that will be needed to round it out is “the clown car” full of on-the-make politicians and agitating control freaks.

The article has lots of observations that apply to any technology used to “verify” thus “gate or control” people. And these observations all indicate that we are not even close to balancing the needs of society against the ludicrously stupid politicians and control freaks and their desires to control others.

lurker May 3, 2025 4:01 PM

“Sooner or later, it’s going to happen.”

When? Half a century ago I was told robots would be doing all our work, leaving us with much more leisure time, paid for by the productivity of the machines. The one country (China) that has installed enough robots to start approaching that nirvana has found its own people don’t believe it, and are not spending the fruits of automation, thus starving their economy. A side effect of the efficiency of robot factories is that manufacturing has moved away from historically strong industrial countries, and provoked a tariff war.

“AI systems will start acting as agents, doing things on our behalf with some degree of autonomy.”

Why? Sure, there are some people who are so busy that they haven’t got time to do their own shopping. But will AI be more efficient, or effective, than the human agents who are presently doing this work? Cheaper perhaps, but it won’t use current LLM style AI.

The SF shelves are full of dystopian tales of what happens when humans surrender their agency to machines.

Clive Robinson May 3, 2025 8:36 PM

@ lurker,

You say of “Current AI LLM and ML systems” and proposals for “AI Agents”,

“But will AI be more efficient, or effective, than the human agents who are presently doing this work? Cheaper perhaps, but it won’t use current LLM style AI.”

Thus recognising a clear difference between,

1, Current AI
2, AI Agents

With the implication that the former will not be capable of being the latter.

It’s a point I agree with and can show that it’s going to be a major problem imminently. That is,

“Current AI LLM and ML systems are incapable of acting as Future AI Agents, of any capability.”

The reasons for this are not just the lack of incremental improvements in efficiency or effectiveness of current LLM and ML systems; they just are not designed or built to do it. Look at it this way:

“Current delivery bots cannot dance”

Essentially such bots are “a box on wheels”, just like an “ox wagon”; their design prohibits them from the “backflips” and the like that human agents can do.

If we look at what is required to function as an “agent” we come up with a checklist of “required” functions. Likewise we have a capability checklist already for LLM and ML systems. If we were to compare the “required for agent” and “capable with LLM&ML” checklists (there is a toy sketch of this after the list), a couple of obvious questions arise:

1, What would the gap be?
2, Can LLM&ML expand to fill it?
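
As a toy illustration only (the checklist entries below are placeholders I made up, not an agreed taxonomy), the “gap” in question is just a set difference:

    # Hypothetical checklists; every entry is an illustrative placeholder.
    required_for_agent = {
        "plan multi-step tasks", "act in unpredictable environments",
        "verify own outputs", "hold long-term state", "generate fluent text",
    }
    capable_with_llm_ml = {
        "generate fluent text", "summarise documents", "pattern-match inputs",
    }

    gap = required_for_agent - capable_with_llm_ml
    print(f"Gap ({len(gap)} items):")
    for item in sorted(gap):
        print(" -", item)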

The current AI LLM and ML systems are already at or close to their technology minima or capability maxima, depending on which way you want to view it (or, as some say, “they lack moat”[1]). That is, they are literally “in the bag”, or in a cul-de-sac, so going further down it will gain you nothing of desired worth.

As Seth Godin once noted[2],

“If your job is a cul-de-sac, you have to quit or accept the fact that your career is over.”

As a manager of an “agent” you should be aware of this and either pull the agent out of the “dip” or “close them out”; otherwise you end up with “a zombie on the books” eating resources[4].

It applies to “agents” in general not just human agents. So the same applies to technology development.

Some functions or jobs are mechanistic or entirely deterministic; they are essentially “known and characterised”. Thus they can be satisfactorily automated in “predictable” environments. This was known back in the 1980s with “robot research”.

However robots fail when either,

1, The required response is unpredictable or unknown.
2, The environment is unpredictable or unknown.

I’m known for saying that under the Newtonian laws of physics we mostly live under,

“There is no such thing as an accident, just lack of information and/or time.”

Thus those self-driving car “accidents” are not “acts of deity” but “failures of design”.

A worse fate awaits those who want AI Agents for even very simple tasks most humans can do without conscious thought.

There used to be a joke back in the early days of industrial robotics last century in the UK, due in part to a steeplejack called Fred Dibnah and an offhand comment he allegedly made about assistants being of “No bl**dy use” because they could not make a cup of tea.

The fact is any apprentice in a factory in the UK can make a cup of tea, even today, just by being asked in one of hundreds of ways if not more. You would be hard pressed to find any robot anywhere that could do the same.

Thus there is also an “Expectation Gap” between the capabilities of a human agent and a Robot / AI agent. It’s something that few will be able to cross because sadly most are now not as cognitively adept as people were just a generation or so ago.

It’s something that shows up more and more frequently in the work place and why “bringing back manufacturing jobs” to the US is not going to happen even in three or four generations…

As for AI agents, to use them you have to understand how they work at rather more than just the “user interface level”. If you doubt this, try getting someone “in accounts” or similar who uses computers as their job function to make a simple design from scratch on a 3D printer using CAD/CAM software… You will get a tiny feel of what is going to happen over the next decade or so.

[1] The “moat” in question is a more general form of the “economic moat” concept of Warren Buffett,

https://pictureperfectportfolios.com/warren-buffetts-views-on-economic-moats-analysis/

In essence it’s a combined notion of a “perimeter defence” and “defence in depth”: it buys you time and knowledge by which you can fend off attackers.

Unfortunately for current AI LLM and ML system developers, the usual AI issue of an “illusory moat” crops up with each new tiny step, hence the AI bubbles every few years.

For instance, Sam Altman and his investors, including Microsoft, thought that they had a significant or unassailable “moat” due to the way model building needed lots of resources that nobody else had access to. Then at the beginning of the year along came DeepSeek, which ripped off his crown and stomped it into the dirt, revealing it was not just broken but made of gilded brass and glass, not gold and diamonds. That caused the markets to barf and OpenAI to try hiding behind the “outhouse hedge” of Executive Orders and anti-China ire.

https://www.ctol.digital/news/openai-rethinks-china-strategy-as-deepseek-disrupts-ai/

The simple fact is such disruption was quite predictable, and hiding behind EOs or even US legislation will actually not help. It may cause significant collateral damage, such as the EU declaring Google, Microsoft, Meta, and similar AI and its use illegal under various existing legislation to do with personal and private information.

[2] The quote is from Seth’s little book “The Dip” and is about what we now call “strategic quitting” or “getting out ahead of the game or gun”. In effect it requires you to recognise you are in a trap, not a dip, and to get out whilst you have the ability to do so[3].

[3] Seth indicates there is a dip/trap difference with,

“You can know before you start whether or not you have the resources and the will to get to the end. It’s pretty easy to determine whether something is a Cul-de-Sac or a Dip.”

However… The three unanswered problems of that are how can,

3.1, “You know before you start”?
3.2, If “you have the resources”?
3.3, It’s possible “to get to the end”?

That is, achieve an objective before you exhaust your reserves, options, or both. Seth, like many “executive gurus”, does not answer those questions, because the questions are themselves an unsolvable trap (which makes Seth’s comments moot outside of a narrow and oft meaningless viewpoint).

The answer to 3.1 is “you can only know” if what you are thinking of doing has “been tried many times before and written up” such that you have adequate examples to make a valid hypothesis. In engineering it’s what project “History Files” are all about. They are the most essential part of project documentation as they “carry forward” what has been learnt or can be learnt from the project (see the “scientific process” as to why).

The answer to 3.2 is the same as 3.1: that is, “you can only know” the resources required if you know what is required to finish the project. The slight difference is that “process improvements” generally mean fewer resources for a given function over time.

The answer to 3.3 is what the famous Church–Turing work on the “halting problem” is all about: if a function has no known or predictable end point, then it’s not possible to answer any of the questions.
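
For anyone who has not seen it, the standard argument sketches neatly in code. Assume, purely hypothetically, a perfect halts() oracle; the self-referencing program below then forces a contradiction, so no such oracle can exist:

    # Thought experiment, not working code: assume a perfect oracle that
    # returns True iff calling f() would eventually halt.
    def halts(f) -> bool:
        ...  # assumed correct for every f, for the sake of argument

    def paradox():
        # Do the opposite of whatever the oracle predicts about *this* program.
        if halts(paradox):
            while True:   # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately

    # If halts(paradox) is True, paradox() loops forever: the oracle is wrong.
    # If it is False, paradox() halts at once: the oracle is wrong again.
    # Either way, no total, always-correct halts() can exist.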

The catch, of course, was highlighted a few years before by Kurt Gödel, though nobody really noticed at first. In essence Gödel pointed out that the really big problem was that the system/agent did not have the ability to actually recognise traps and resolve them. Gödel showed that for non-simple well-founded logics this was not possible…

Hence the all too frequent application of the human statement,

“Take a blind leap of faith.”

Which most would not buy a book to be told.

[4] Sometimes as a manager it’s desirable to have someone “held in a dip or potential trap”. Most “agents” don’t realise that this is about one-third to two-thirds of their job/project or work function anyway.

We call it “capacity planning” but it can also be “make work”. In both cases its purpose is oft too difficult for many to understand without a lot of background (something DOGiE does not have). To do so you first have to have a very good understanding of the total “process chain”, of which the “value added” and “supply” chains form very small parts. The best way to understand it is to study “natural systems” and their resilience to threats such as supply shortages and attacks by other entities. In human terms we talk about,

“Putting by for a rainy day”
“Building slack into the system”
“Keeping a weather eye”
“Knowing which way the wind is blowing”

In essence they are an integration and prediction process to deal with peaks and troughs in the process chain, such that what appears chaotic can be smoothed out so “trends” can be spotted and used for “planning”.
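
A minimal sketch of that smoothing idea, using made-up demand figures: a simple moving average damps the peaks and troughs so the underlying trend becomes visible:

    # Made-up weekly demand figures: noisy, but trending upward.
    demand = [100, 180, 90, 200, 110, 220, 130, 240, 150, 260]

    def moving_average(series, window=4):
        """Average each value with up to window-1 of its predecessors."""
        out = []
        for i in range(len(series)):
            chunk = series[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    for raw, avg in zip(demand, moving_average(demand)):
        print(f"raw {raw:4d}  smoothed {avg:7.1f}")
    # After the first few values, the smoothed column climbs steadily:
    # the trend that was hidden in the raw peaks and troughs.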

The reason that Spain and Portugal had a major “power outage” a few days ago will eventually be found to be a lack of “slack” in the system that tripped and cascaded, most probably caused by the fact that most grid-connected “renewable energy” systems are designed not to have a “flywheel reserve” equivalent (the reason for this, oddly enough, is both reliability and safety, but that’s a long story).

ResearcherZero May 3, 2025 11:18 PM

stock went up

Visa is the first partner for the platform’s “X Money Account” service.

https://www.business-standard.com/world-news/elon-musk-x-digital-wallet-partnership-visa-instant-payment-125012900312_1.html

gave up May 4, 2025 5:13 AM

Yes, let’s swap ever decreasing human intelligence for Absent Intelligence because it has always been our dream to become complete retards.

Clive Robinson May 4, 2025 7:17 AM

@gave up, ALL,

You say,

“let’s swap ever decreasing human intelligence for Absent Intelligence because it has always been our dream to become complete retards.”

Has it not been that way for a century or more? Go back to the 1890s and read the works of H.G. Wells.

He wrote about exactly what you describe with the Eloi and the Morlocks in “The Time Machine”.

In a way Wells was describing what happened to the Spanish with the gold they plundered from South America.

The wealth meant that the self-entitled did not have to work; they employed others to do it for them. To try to stop the wealth being lost they acquired land and thus became “gentleman farmers” or “lords of the land”.

But to stop the land wealth being dissipated they practised a form of “closed stud book” breeding society, which brought not just genetic disorders but further diminished cognitive ability.

These could easily have been the “Eloi”, indulging in a “courtly pleasures” lifestyle.

Arguably what Wells got wrong was the depiction of the Morlocks. Think of them as the result of what these “Agents” might become…

Others have extended Wells’s ideas, hence we have, by twist and turn, the notion of the “sheeple citizens” and the brutish “authoritarians” as coloured-shirt-wearing “Guard Labour”.

But none of this is really new; history shows the use of “brute force” to establish an “overlord system”. Look up the “Estates of Man”, organised both by imposed social hierarchy,

https://en.m.wikipedia.org/wiki/Estates_of_the_realm

And by locality,

https://en.m.wikipedia.org/wiki/The_Estates

Oh, and also look into the history of “resource wars”. One of the earliest on record is “water wars”, one of which has started yet again in Asia; but also look carefully at corporate USA, where “water rights” are being bought up and droughts created.

A more modern equivalent is “Energy Wars” with the more visible side being the unlawful building of pipelines that then give the opportunity for others to “turn the tap” or “cut / blow them up”.

But perhaps the most modern are the “communications” and “medical / health” resources being used as weapons.

With the wars being fought not nation to nation, but corporation to corporation, with always the targets being the ordinary people.

But also realise that the tyrant of old, “The Church”, is not slain; it’s rising again and wants to regain its position, “advising” as “humble servants” that pull the strings of the puppets on their thrones.

ResearcherZero May 5, 2025 3:55 AM

America might need some kind of privacy to navigate the ruins of its health system. Much of what is being done, apart from deliberate destruction, seems designed to allow private companies, swindlers and cranks to take over government functions for profit.

[hail warning]

RFK Jr. aims to amass Americans’ health records to conduct an ‘autism study’.

https://www.cbsnews.com/news/rfk-jr-autism-study-medical-records/

RFK Jr. is determined to now find “environmental causes” that he claims are responsible.
https://abcnews.go.com/Health/rfk-jr-lays-new-studies-autism-shuts-diagnoses/story?id=120882735

Participation in such registries is normally voluntary, and they are not normally run by the NIH.
https://www.snopes.com/news/2025/04/22/rfk-jr-registry-to-track-autism/

One possible reason for the change of excuse by RFK Jr. is that his claims were untrue.

https://www.nbcnews.com/health/health-news/new-research-contradicts-rfk-jrs-claim-severe-autism-cases-are-rising-rcna202791

Geier was RFK Jr’s pick to lead his ‘autism’ study. David Geier and his father published a draft study claiming that reducing testosterone in children could treat autism. David Geier was disciplined for injecting children with a puberty blocker named Lupron.

https://www.theguardian.com/us-news/ng-interactive/2025/may/04/maga-soft-eugenics

ResearcherZero May 5, 2025 4:10 AM

‘Soft’ eugenics, I’m guessing, is a more subtle way of increasing the number of dead, tortured, and traumatised children, by announcing the methodology over the public airwaves.

Rontea May 6, 2025 5:15 PM

The concept of personalization through standardization presents an intriguing paradox. It suggests that by imposing a uniform framework, we can achieve a more tailored experience, much like how HTML standardized web content to allow for diverse and personalized browsing. This raises questions about the nature of individuality and autonomy in an increasingly interconnected world. If AI agents require decentralized data to function effectively, does this imply a shift in power dynamics, where individuals have greater control over their personal information? The metaphor of a “wallet” for personal data stores evokes a sense of ownership and privacy, yet it also highlights the potential for misuse and the need for robust security measures. Ultimately, this evolution in data management challenges us to reconsider the balance between convenience and control in our digital lives.

adlai chandrasekhar May 16, 2025 6:55 AM

Bruce,

I’m slightly disappointed by the amount of link-chasing that I need to do to follow some of your recent posts; for example, in this one, the word “Solid” only appears once, and it’s not an HTML anchor!

In case other readers are similarly “out of the loop”, it’s the name of the protocol that Bruce and his colleagues at Inrupt are working on; my clue for narrowing my crawl was that the site-internal link with anchor text “digital wallet” has the token “solid” in the resource name.

Eventually I also found the general site of the standards organisation, with this probably being the best starting point for reading up on the project: https://solidproject.org/about

This is probably a case where automated web spiders might find the link the same way one lost human did! Either way, I hope this comment makes future crawls more efficient for any learner.

fungo May 17, 2025 11:50 AM

Imagine a future where an AI agent can shop and buy for you. AI commerce — commerce powered by an AI agent — is going to transform the way consumers around the world shop.

Apparently, in the ideal future I’ll keep shopping even when I’m asleep, or dead.

Clive Robinson May 17, 2025 2:09 PM

@ fungo, ALL,

With regards,

“Apparently, in the ideal future I’ll keep shopping even when I’m asleep, or dead.”

I suspect you’ve heard of the “rent-seeking economy”; it’s a form of theft against those on the lower steps of the socioeconomic ladder.

It was thought up many years ago and was effectively forced out of UK society back in the 1960s, only to return in various ways; it still exists today one way or another.

If you look up “Rachmanism” you will find it’s named after Peter Rachman, who in the late 1950s and early 1960s ran a system of companies that owned property to force immigrants into penury. He also ran several prostitution rackets and a nightclub with the notorious Kray brothers and their various protection rackets.

Eventually politicians who had “turned a blind eye” to what was going on were forced to act by what came out in the “Profumo Affair”, and in the mid-1960s the UK Rent Act came into being to stop Rachmanism and similar “rent-seeking” behaviour. However, in the years since, the legislation has been watered down in favour of “landlords” in various ways. Virtually every elected government promises to make changes for the better, but for some reason they don’t often happen, and get watered down in a different way if they do.

So “rent-seeking businesses” may not be your or my notion of an ideal future, but they are certainly the ideal future of those illicitly acquiring wealth and data that they believe they are not just entitled to but have an “absolute right” to take.

It’s just one of the things that concerns me about “AI Chatbots” being “given agency” as “AI Agents”, because as I note the business plan of Microsoft and other major US corporations is,

“Bedazzle, Beguile, Bewitch, Befriend, and BETRAY”

The betrayal can happen in any number of ways; mostly, I would suggest, “silently”, in that it extracts PII that can be “packaged and sold” to third parties, but also used to enhance the “befriend” function of the chatbot.

But as has been found with “voice assistants”, if you give them the agency to make purchases, not only will they do so “on your command” but also “on the command of others” you cannot even hear. Now consider an AI agent that will throw in a little hallucination… and all too soon you will be the owner of the world’s most expensive old boots or similar moth-eaten underwear, which you will not be able to return for a refund.

Maybe it will decide you might like some,

“$10,000 ‘healing crystals’ that have the magic of ancient Mayan entropy.”

Or some other woo… that in reality is just food colouring dye in table salt or sugar crystals. There are plenty of scams like that going on already (you would be surprised at just how much “Himalayan rock salt” has been made in a cement mixer or similar and has never been within 1,000 km of even the Himalayan foothills).
