AIs as Trusted Third Parties

This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties:

Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them.

When I was writing Applied Cryptography way back in 1993, I talked about human trusted third parties (TTPs). This research postulates that someday AIs could fulfill the role of a human TTP, with added benefits like (1) being able to audit their processing, and (2) being able to delete it and erase their knowledge when their work is done. And the possibilities are vast.

Here’s a TTP problem. Alice and Bob want to know whose income is greater, but don’t want to reveal their income to the other. (Assume that both Alice and Bob want the true answer, so neither has an incentive to lie.) A human TTP can solve that easily: Alice and Bob whisper their income to the TTP, who announces the answer. But now the human knows the data. There are cryptographic protocols that can solve this. But we can easily imagine more complicated questions that cryptography can’t solve. “Which of these two novel manuscripts has more sex scenes?” “Which of these two business plans is a riskier investment?” If Alice and Bob can agree on an AI model they both trust, they can feed the model the data, ask the question, get the answer, and then delete the model afterwards. And it’s reasonable for Alice and Bob to trust a model with questions like this. They can take the model into their own lab and test it a gazillion times until they are satisfied that it is fair, accurate, or whatever other properties they want.
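
To make the structure concrete, here is a minimal sketch, in Python, of what a TCME-style interaction might look like. Everything in it is illustrative: query_model is a placeholder for whichever model Alice and Bob agree to trust and run in a sandboxed, offline environment (it is not a real API), and the single-token output constraint is one way of enforcing the paper’s requirement that nothing beyond the agreed answer can leak.

```python
# Illustrative sketch only. `query_model` stands in for whichever model
# Alice and Bob have agreed on and tested in their own lab; it is assumed
# to run in a sandbox with no network access and no persistent storage,
# and to be torn down as soon as the answer is produced.

def query_model(prompt: str) -> str:
    """Placeholder for a call into the agreed-upon trusted model."""
    raise NotImplementedError("wire up the model both parties trust")


def private_comparison(doc_a: str, doc_b: str, question: str) -> str:
    """Ask the model which of two private inputs better satisfies a
    criterion, constraining the output to a single token ('A' or 'B')
    so the answer cannot leak anything else about the inputs."""
    prompt = (
        f"{question}\n\n"
        f"Document A:\n{doc_a}\n\n"
        f"Document B:\n{doc_b}\n\n"
        "Answer with exactly one character: A or B."
    )
    answer = query_model(prompt).strip().upper()
    if answer not in {"A", "B"}:
        # Enforce the output constraint: reveal nothing rather than risk a leak.
        raise ValueError("model violated the output constraint")
    # After this returns, the model instance, its context, and any logs are
    # deleted, which is the explicit statelessness the paper calls for.
    return answer
```

The income question has the same shape: the model is handed two numbers instead of two manuscripts, and the environment is destroyed once the single-bit answer is out.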

The paper contains several examples where an AI TTP provides real value. This is still mostly science fiction today, but it’s a fascinating thought experiment.

Posted on March 28, 2025 at 7:01 AM • 9 Comments

Comments

Cigaes March 28, 2025 9:09 AM

Or they could agree not on a language model and a prompt but rather on a programming-language interpreter and a trivial program that does the job: query their incomes, give the result of the comparison, forget the info. It looks to me that all AI brings to this is letting Alice and Bob do their shenanigans even if they are incompetent in computing. Plus hype, of course.
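
For instance, the whole job is a few lines. A minimal sketch, assuming both parties are willing to type their numbers into the same trusted machine (getpass just keeps the values from being echoed to the screen):

```python
# A sketch of the kind of trivial program described above: read both
# incomes without echoing them, print only the comparison, keep nothing.
# Assumes the two parties trust the machine (and each other's eyesight).

from getpass import getpass


def main() -> None:
    alice = int(getpass("Alice, enter your income: "))
    bob = int(getpass("Bob, enter your income: "))
    if alice > bob:
        print("Alice's income is greater.")
    elif bob > alice:
        print("Bob's income is greater.")
    else:
        print("The incomes are equal.")
    # Nothing is written to disk; the values go out of scope when main() returns.


if __name__ == "__main__":
    main()
```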

Clive Robinson March 28, 2025 9:20 AM

@ Bruce, ALL,

I started reading the paper and, to be blunt, after the first few pages I came to the conclusion,

“It is probably not sound.”

The reason is the number of assumptions the authors hand-wave over, some of which I happen to know are not, in fact, sound.

Thus I’d treat it with, at a minimum, distinct caution.

lurker March 28, 2025 1:57 PM

@Bruce

“with added benefits like (1) being able to audit their processing,”

Crooked auditors are a fact in the finance industry. Why not here?

“and (2) being able to delete it and erase their knowledge …”

Ah, delete and erase. In the chain of trust, how do you know it is deleted? When? Do you trust the makers of the AI not to have a backdoor? Open-source models may go some way toward solving this part of the puzzle.

Like @Clive I see too much handwaving in the paper. But your final sentence is a good summary:

“This is still mostly science fiction today, but it’s a fascinating thought experiment.”

Clive Robinson March 28, 2025 3:51 PM

@ lurker, Bruce, ALL

With regards,

“Ah, delete and erase. In the chain of trust, how do you know it is deleted? When? Do you trust the makers…”

Let’s just call that the start of a list of “probably not sound” assumptions.

Whilst I would not say “science fiction”, as that generally implies “far future”, it is certainly the germ of an idea that could be proved one way or the other within a very short time.

As for being a thought experiment: yes, by definition it is that. But consider why it’s not more than that currently.

That is, what is holding it back from progressing… Some of those things are holding back a lot more than just this idea. Thus I suspect we will see not just movement but progress.

I’m back at home from the hospital, supposedly “resting”… which means I have the excuse to stick my feet up and catch up on some reading, and this paper will be in that pile.

@ ALL,

If someone could find an AI to tell me what order of “reading material” to follow for maximum utility, I might be interested 😉

ResearcherZero March 29, 2025 12:36 AM

If AI can make me a hamburger when I’m lost in the middle of the desert I’d be impressed.

ResearcherZero March 29, 2025 2:15 AM

Research finds that AI might be a threat to national security.

‘https://www.news.vcu.edu/article/2025/03/ai–could-weaken-national-security-crisis-responses-new-vcu-research-finds

Proper training and established institutional and professional experience will be crucial.
https://carnegieendowment.org/research/2024/06/artificial-intelligence-national-security-crisis

Lessons from Nuclear Energy
https://www.justsecurity.org/108644/united-states-must-avoid-ais-chernobyl-moment/

ResearcherZero March 29, 2025 2:29 AM

Amazon is not really interested in building trust with customers anymore. Yanking the power cord and tossing the device away will restore the on-device privacy setting to your home.

‘https://arstechnica.com/gadgets/2025/03/everything-you-say-to-your-echo-will-be-sent-to-amazon-starting-on-march-28/

ResearcherZero March 31, 2025 6:54 AM

What politicians do not tell you about drug smuggling.

‘https://www.abc.net.au/news/2025-03-31/mark-standen-crime-commission-four-corners/105098992

When people in positions of trust mislead everyone for their own benefit.
https://www.sbs.com.au/news/article/ex-nsw-drug-investigator-loses-appeal/kvb0hueux

Dutch police had begun to suspect an investigator was assisting drug traffickers.
https://markstanden.blogspot.com/2011/03/humble-fax-led-police-to-drug-dealing.html

Standen was charged in 1980 while working for the Customs’ narcotics bureau.
https://www.smh.com.au/national/nsw/cocky-confident-and-living-the-high-life-20110811-1iosl.html

Clive Robinson March 31, 2025 10:22 AM

@ ResearcherZero, ALL,

To be honest, what is surprising about,

“What politicians do not tell you about drug smuggling.”

At the end of the day the only things politicians do are,

1, Try to look like they do something useful.
2, Sound like they know how to solve issues.
3, Argue about legislation, pointlessly in most cases.

And when you get old enough and creaking enough you know that the reality is,

1, They make things worse due to lobbying.
2, They know next to nothing and rarely if ever listen to those that do.
3, Their purpose is to keep party bosses and lobbyists with backhanders happy, not the average voting citizen.

But… consider how many criminals there are in the population.

The UK has something under 70 million residents and at most 70,000 places to lock people up. With there not being enough of the latter, we can say “as a minimum” that one in a thousand people is a criminal or awaiting trial.

A few years ago we had a political scandal involving Tony Blair and “cash for questions”, amongst other things, and a lot of bones fell out of the closet. Taking publicly available figures, the Editor of Private Eye found that one in four politicians were criminals at that time, and he said as much on nearly-live TV in “Have I Got News For You” (a program whose presenter, Angus Deayton, got caught out being involved with drugs).

A UK Chancellor was known by some as “Gidiot” for good reason, yet many in Parliament called him “White Lines”. One MP, during Prime Minister’s Questions, asked if he was for “the coke tax”, and that got broadcast all over the UK and wherever the BBC World Service could be heard.

But getting back to your list of “naughty boys”, did you note that the evidence against them was usually due to “phone intercepts”?

Yup, the intelligence services listen in on all “important people” but don’t say very much…

It’s been noted that Donald Trump’s book emphasizes “leverage” as the way to succeed…

Thus a question might be,

“If phone-intercept evidence was available for years, why were they allowed to carry on, not just doing their day jobs at the public expense but also “filling their boots” with criminal activities?”

Thus a second question might be,

“Who knew and what sort of leverage were they accumulating?”

Oh and toward what end.
