The Role of Humans in an AI-Powered World

As AI capabilities grow, we must delineate the roles that should remain exclusively human. The line seems to be between fact-based decisions and judgment-based decisions.

For example, in a medical context, if an AI were demonstrably better at reading a test result and diagnosing cancer than a human, you would take the AI in a second. You want the more accurate tool. But justice is harder, because justice is inherently a human quality in a way that “Is this tumor cancerous?” is not. That’s a fact-based question. “What’s the right thing to do here?” is a human-based question.

Chess provides a useful analogy for this evolution. For most of history, humans were best. Then, in the 1990s, Deep Blue beat the best human. For a while after that, a good human paired with a good computer could beat either one alone. But a few years ago, that changed again, and now the best computer simply wins. There will be an intermediate period for many applications where the human-AI combination is optimal, but eventually, for fact-based tasks, the best AI will likely surpass both.

The enduring role for humans lies in making judgments, especially when values come into conflict. What is the proper immigration policy? There is no single “right” answer; it’s a matter of feelings, values, and what we as a society hold dear. A lot of societal governance is about resolving conflicts between people’s rights—my right to play my music versus your right to have quiet. There’s no factual answer there. We can imagine machines will help; perhaps once we humans figure out the rules, the machines can do the implementing and kick the hard cases back to us. But the fundamental value judgments will likely remain our domain.
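One way to picture that “machines do the implementing, humans get the hard cases” loop is a short sketch in Python (hypothetical; the rule set, names, and thresholds below are invented for illustration, not any real system):

    # Hypothetical sketch: machines apply the rules humans agreed on;
    # any case the rules don't clearly settle is kicked back to a person.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Case:
        description: str
        facts: dict

    Rule = Callable[[Case], Optional[str]]  # returns a verdict, or None

    def decide(case: Case, rules: list[Rule],
               human: Callable[[Case], str]) -> str:
        for rule in rules:
            verdict = rule(case)
            if verdict is not None:   # a rule settled it: machine decides
                return verdict
        return human(case)            # hard case: escalate to a human

    # Invented noise-complaint rules (my music vs. your quiet):
    def quiet_hours(c: Case) -> Optional[str]:
        return "uphold complaint" if c.facts.get("hour", 12) >= 22 else None

    def first_offense(c: Case) -> Optional[str]:
        return "warning only" if c.facts.get("priors", 0) == 0 else None

    case = Case("music too loud", {"hour": 20, "priors": 3})
    print(decide(case, [quiet_hours, first_offense],
                 lambda c: "needs human judgment"))  # -> needs human judgment

Everything interesting is in what the sketch leaves undefined: agreeing on the rules in the first place, and deciding the cases that get kicked back.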

This essay originally appeared in IVY.

Posted on November 14, 2025 at 7:00 AM • 13 Comments

Comments

Keith Douglas November 14, 2025 9:45 AM

I have heard this sort of argument previously, and it is interesting to consider what the line is supposed to be. If the line is “we’re better at this” or “this is so hard that we had better be conservative (small-c) here,” we would likely want to say so. So what happens, in the first case, when someone claims that system X is better at some ethical, aesthetic, or political matter? Do we reject the claim out of hand, or do we consider it and then add it to the pile of “not human-exclusive”? People used to think that grandmaster-level chess was like this, after all.

I have no answer. I do think about “different failure modes” all the time now: what will happen when X is roughly comparable to humans in some domain, but with fundamentally different failure modes? AI-based vulnerability scanners (like Burp AI) and static code analysis tools are, IMO, harbingers here.

BCS November 14, 2025 9:49 AM

A possible alternate way to say the same thing: AI is effective when there’s a single correct answer and ineffective when there’s either no correct answer or many.

I think that’s equivalent to the original essay, but framed in a way that my brain prefers to think in terms of.

You could also go in the exact opposite direction and say “AI has no ethics”, which is likely also equivalent.

Colin November 14, 2025 10:13 AM

Good article, but I’m not 100% convinced about fact-based decisions v judgement in all cases.

As an example, we may (eventually) have factual evidence that shows self-driving cars kill fewer people overall than human-driven ones. However, they will still kill some people, and it would be understandable if, as a society, we decided that this impact was less acceptable than what we have today. There is a similar discussion here in the UK today about the costs and benefits of “smart motorways”.

It’s not quite the same argument, I realise, but nonetheless the ‘social acceptability’ of a technology needs to be considered as well as the utilitarian view.

Clive Robinson November 14, 2025 11:28 AM

@ Bruce, ALL,

With regards,

“Chess provides a useful analogy for this evolution…

…But a few years ago, that changed again, and now the best computer simply wins[1]

…There will be an intermediate period for many applications where the human-AI combination is optimal[2]

…but eventually, for fact-based tasks, the best AI will likely surpass both.[3]”

Firstly, and taking them in reverse order [3]: it’s not “fact-based tasks” but “rule-based tasks”.

To most people the two would seem the same, but they very much are not. A computer, or AI system if you want to make that distinction, cannot tell what is “factual” or not. However, for simple enough rules it can tell whether something is inside the rules or not, which is a major distinction.

The problem is that most current LLM and ML systems cannot actually play chess to the rules, even though the rules are simple enough for humans to learn and play by… An LLM simply plays in ways that “have been played before” and are therefore in its training data. For positions outside of that, the LLM gets it wrong more often than not. This happens because the AI cannot reason forward; it can only pattern-match previous play.
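To make that concrete: checking whether a move is “inside the rules” is pure computation, with no notion of “factual” involved. A self-contained sketch, simplified to knight moves on an otherwise empty board:

    # Sketch of "inside the rules or not": legality of a knight move.
    # Pure rule application -- nothing here pattern-matches prior games.
    def square(s: str) -> tuple[int, int]:
        """Convert algebraic notation like 'g1' to (file, rank) indices."""
        return ord(s[0]) - ord('a'), int(s[1]) - 1

    def knight_move_legal(frm: str, to: str) -> bool:
        """A knight move is legal iff it displaces by (1,2) or (2,1)."""
        (f1, r1), (f2, r2) = square(frm), square(to)
        return sorted((abs(f1 - f2), abs(r1 - r2))) == [1, 2]

    print(knight_move_legal("g1", "f3"))  # True: within the rules
    print(knight_move_legal("g1", "g3"))  # False: outside the rules

A system that can only reproduce play that “has been played before” has no equivalent of this check, which is the distinction being drawn above.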

As for the “intermediate period” [2], it is likely to be far more than “intermediate”; in fact, it will probably never end. AI cannot reason, and cannot tell when something is truly “optimal”, because current AI does not have sufficient knowledge of “the environment”, and we are unlikely to give it the “free hand” it would need to understand that environment.

As for “simply wins” [1], this is not the way LLM systems will get used; a 1980s “expert system” can do that at considerably less cost and resource usage.

This “chess playing” issue is a very clear example of why we should not go down this path: it does not play to current LLM and ML systems’ strengths.

I can and probably should go on, but I will leave it open to others for now.

Rontea November 14, 2025 12:20 PM

The machine may calculate the weight of the world, but it will never know the weight of a conscience. In the governance of souls and nations, no arithmetic can replace the trembling heart of man.

Rontea November 14, 2025 2:16 PM

@Keith Douglas
“different failure modes”

I also share your fascination with these different failure modes. Our errors are poetic; theirs, mechanical. Perhaps the future will be a strange dialogue between the poetry of human mistakes and the precision of machine missteps. In any case, it is better to think like a steward than a gatekeeper—welcoming what is useful, and learning how to live with what is inevitable.

Rontea November 14, 2025 2:37 PM

@Colin
“I’m not 100% convinced about fact-based decisions v judgement in all cases.”

Facts illuminate the road, but it is the human heart that decides whether to walk it. A society, like a man, is not merely a ledger of risks and benefits; it trembles at the thought of lives lost, even when reason whispers that progress is safer. To weigh technology only with the scales of utility is to forget that we are not machines. Wisdom is often born where cold logic kneels before human dignity.

Rontea November 14, 2025 2:46 PM

@Clive

“It’s not fact-based tasks but rule based tasks”

You speak as if the machines were philosophers, but they are only mirrors polished by human hands. We must not flatter them with the title of reason when all they do is echo our own moves, like a parrot reciting prayers it does not understand. The soul of chess belongs to the man who trembles before the board, not the cold lattice of circuits. And if one day the machine “wins,” it will not be a triumph of thought, but a triumph of repetition—like a clock that strikes the hour without knowing the meaning of time.

A Nonny Bunny November 14, 2025 3:50 PM

“What’s the right thing to do here?” is a human-based question.

Unfortunately, the human-based answer often depends on whether it’s before or after lunch. What the weather is. How bad traffic was. And so on.
Humans are a fickle bunch.

I mean, I have no doubt we can get computers to copy those biases. So there’s no reason to assume AI will do better. But I don’t exactly expect humans to be good at it either.

What is the proper immigration policy? There is no single “right” answer; it’s a matter of feelings, values, and what we as a society hold dear.

If you decide how to aggregate society’s values, then there probably is an objective optimal answer. And arguably that would be a lot easier to find with AI.
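For what it’s worth, that “fix the aggregation rule, then optimize” claim fits in a few lines (a toy sketch; the policy options, value scores, and weights are entirely invented):

    # Toy sketch: once an aggregation rule over society's values is fixed,
    # "the proper policy" reduces to an argmax. All numbers are invented.
    options = {
        "open":       {"economy": 0.9, "security": 0.3, "cohesion": 0.5},
        "points":     {"economy": 0.7, "security": 0.7, "cohesion": 0.6},
        "restricted": {"economy": 0.4, "security": 0.9, "cohesion": 0.4},
    }
    weights = {"economy": 0.5, "security": 0.3, "cohesion": 0.2}

    def welfare(scores: dict) -> float:
        """Weighted sum: one (of many possible) aggregation rules."""
        return sum(weights[v] * scores[v] for v in weights)

    print(max(options, key=lambda o: welfare(options[o])))  # -> points

All of the disagreement, of course, hides in the weights and the scores; choosing them is exactly the value judgment in question.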

The way people find “answers” to these kinds of questions typically starts by throwing most of their values out the window and focusing on just one or two.

But the fundamental value judgments will likely remain our domain.

Well, the domain of the people with enough power and money to ram it down our throats, anyway.

ResearcherZero November 15, 2025 1:52 AM

The other role humans play is to feed data into the machine until they are eventually fired, whether because they are no longer needed or despite being highly competent at their jobs.

The Department of Homeland Security is not required to document or explain its use of surveillance tools. Customs and Border Protection can conduct searches without a warrant and archive the data for any use at any time, and it is a prime target for any adversary to breach.

‘https://cdt.org/insights/the-border-search-device-database-and-ai-how-emerging-tech-could-supercharge-the-dangers-of-an-outdated-warrant-exception/

Brigita Private Limited November 20, 2025 10:44 PM

This is a thoughtful and timely reflection on how AI is reshaping decision-making — especially your distinction between fact-based tasks (where AI may clearly excel) and judgment-based tasks (where humans retain a crucial, irreplaceable role).

At Brigita Private Limited, we strongly believe that as AI grows more powerful, defining this boundary is essential. While machines can deliver accuracy and scale, they don’t inherently possess moral sensitivity or human values. As you point out, questions like “What’s the right thing to do here?” often require deeper societal context, empathy, and collective values.

We also appreciate your analogy with chess — historically, humans competed head-to-head with machines, then collaborated, and finally AI came to dominate in purely fact-based games. This suggests a future where the best outcomes come from hybrid human-AI teams, not from one replacing the other.

Importantly, the enduring role for humans lies not just in making decisions, but in navigating conflicts of values — for example, in policy-making, ethics, or justice. AI can assist in implementation, but humans must remain at the center of value judgments. At Brigita, we’re exploring how AI systems can support—but not replace—human-led judgment by amplifying voices, aggregating feedback, and offering tools for more inclusive deliberation.

Finally, your argument also underscores the need for robust governance and accountability. If we let AI handle only fact-based tasks and reserve value-laden decisions for humans, we need transparency around how these systems are trained, how they make decisions, and who is responsible when they err.

Thank you for this insightful essay. It’s a powerful reminder that, even in an AI-powered world, the human element — our ethics, our judgment, our values — remains indispensable.
