Where AI Provides Value

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.

But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.

AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.

Speed

First, speed. There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.
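As a rough illustration of the task (a sketch only; Pillow is assumed to be installed and the input image is a stand-in), classical interpolation can merely smooth between the pixels it already has. Learned super-resolution models replace that interpolation step with a mapping that fills in plausible detail, and they do it fast enough for video.

```python
# Minimal sketch of the upscaling task using classical interpolation as a
# baseline. AI super-resolution models replace this interpolation step with
# a learned mapping that fills in plausible detail.
from PIL import Image

# A small placeholder image keeps the sketch self-contained; in practice you
# would Image.open() a real low-resolution photo.
low_res = Image.new("RGB", (64, 64), color=(120, 80, 200))

# Classical 4x upscaling: fast, but it can only smooth between existing
# pixels, so fine detail stays soft.
upscaled = low_res.resize(
    (low_res.width * 4, low_res.height * 4),
    resample=Image.Resampling.LANCZOS,
)
upscaled.save("upscaled.png")
```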

AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.

Real-time performance matters in these cases, and the speed of AI is necessary to enable them.

Scale

The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.

AI models can do this for every single product, TV show, website and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price—thousands of times per second.
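To make that decision concrete, here is a hypothetical sketch (the function name, the model's predicted probability, and the prices are invented for illustration) of the calculation an advertiser's model might run each time an impression comes up for auction.

```python
from typing import Optional

def decide_bid(predicted_click_prob: float, value_per_click: float,
               floor_price: float) -> Optional[float]:
    """Bid the impression's expected value if it exceeds the floor price."""
    expected_value = predicted_click_prob * value_per_click
    return expected_value if expected_value > floor_price else None

# Example: a predicted 2% click-through rate on a $3.00-per-click ad makes
# the impression worth $0.06, so it clears a $0.04 floor and a bid is placed.
print(decide_bid(predicted_click_prob=0.02, value_per_click=3.00,
                 floor_price=0.04))  # 0.06
```

The point is not the arithmetic, which a human could do, but that this judgment is made thousands of times per second across millions of impressions.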

Scope

Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.

It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.

Sophistication

Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. The 1990s chess-playing computer systems such as Deep Blue succeeded by thinking a dozen or more moves ahead.
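Here is a self-contained toy sketch of that style of lookahead search. The "game" (players alternately take one to three tokens, and whoever takes the last token wins) is invented purely so the example runs as written; Deep Blue applied the same idea to chess positions with a hand-tuned evaluation function.

```python
# Toy sketch of classical lookahead search: evaluate positions several moves
# ahead and pick the move with the best worst-case outcome.
def best_move(tokens_left: int, depth: int = 12) -> int:
    def value(tokens: int, depth: int, maximizing: bool) -> int:
        if tokens == 0:
            # No tokens left: the previous player took the last one and won.
            return -1 if maximizing else 1
        if depth == 0:
            return 0  # search horizon reached; call the position neutral
        outcomes = [value(tokens - take, depth - 1, not maximizing)
                    for take in (1, 2, 3) if take <= tokens]
        return max(outcomes) if maximizing else min(outcomes)

    moves = [take for take in (1, 2, 3) if take <= tokens_left]
    # Pick the move whose looked-ahead outcome is best for the mover.
    return max(moves, key=lambda take: value(tokens_left - take, depth - 1, False))

print(best_move(10))  # 2: leaving a multiple of 4 is the known winning strategy
```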

Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks take account of complex interactions—often many billions—among many factors. Neural networks now power the best chess-playing models and most other AI systems.

Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold2, the AI model of structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in chemistry in 2024, is another example.

This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids would fold into three-dimensional shapes with a 93 million-parameter model, even though it doesn’t account for physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.

But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.

Context matters

Those are the four dimensions where AI can excel over humans. Accuracy still matters. You wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly—yet accuracy isn’t the differentiator. The AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks. The 4 S’s are sometimes at odds. With a given amount of computing power, you generally have to trade off scale for sophistication.

Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.

For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.

It is this “phase shift,” when changes in degree may transform into changes in kind, where AI’s impact on society is likely to be most keenly felt. All of this points to the places that AI can have a positive impact. When a system has a bottleneck related to speed, scale, scope or sophistication, or when one of these factors poses a real barrier to being able to accomplish a goal, it makes sense to think about how AI could help.

Equally, when speed, scale, scope and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.

Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.

Where the advantage lies

Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

EDITED TO ADD: This essay has been translated into Danish.

Posted on June 17, 2025 at 7:08 AM • 24 Comments

Comments

finagle June 17, 2025 7:44 AM

You missed one.
AI can consume resources and accelerate climate change at an unprecedented rate.
The energy, water and mineral resources used to build and refine models that are used for trivial purposes can destroy the environment and put pressure on systems already stressed. Witness the rush to build nuclear power stations, the supply issues with compute systems, and the problems we already see with data centres diverting essential water supplies.
Using AI for systems which provide power that humans cannot bring to bear is one thing. Using it to undress your school mates, deskill your workforce, downsize your workforce or use it purely for convenience is where the real problem lies. Not in AI, but in the irresponsible and redundant uses of it being pushed by unethical companies.
Far from embracing the rush to the bottom, AI models should be evaluated for their purpose BEFORE they are built, and unless their intended purpose is to fill gaps humans cannot, they should be refused licenses to build.

Clive Robinson June 17, 2025 8:04 AM

@ Bruce,

With regards,

“But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication.”

You’ve left out the most important,

“Repeatability”

Especially “reliable repeatability”

Where AI will score is in two basic areas,

1, Drudge / Makework jobs
2, High end reference based professional work.

The first actually occupies, depending on who you believe, between 1/6th and 2/5ths of the work force.

We’ve seen this eat into jobs involving “guard labour”: first with CCTV to “consolidate and centralise” and so reduce head count, then with automation / AI to replace people and reduce head count even further.

The second is certain types of “professional work” where there are complex rules to be followed, such as accountancy and law.

In essence such professions are actually “a game” like chess or go, and can be fairly easily automated away.

The reason it’s not yet happened is the “hallucination issue”. Which actually arises because of “uncurated input” as training data etc. Which is the norm for current AI LLM and ML systems.

Imagine a “chess machine” that only sees game records of all games. Which includes those where people cheat or break the rules.

The ML can not tell if cheating is happening… So it will include cheats in its “winning suggestions”. Worse, it will “fill in” between “facts” as part of the “curve fitting” process. Which, due to the way input is “tokenised and made into weights”, makes hallucination all too easily possible.

It’s what we’ve seen with those legal persons who have had to work with limited or no access to “legal databases”, and it has caused judges to get a little irritable under the collar.

The same applies to accountancy and tax law, but is going to take a while to “hit the courts”.

With correct input curation and secondary reference checking through authoritative records, these sorts of errors will reduce to acceptable levels.

At which point the human professional in effect becomes redundant.

Though care has to be exercised, some apparently “rules based professions” are actually quite different. Because they essentially require “creativity” for “innovation”. So scientists and engineers, architects and similar “designer / creatives” will gain advantage as AI can reduce the legislative / regulatory lookup / checking burden. In a similar way that more advanced CAD/CAM can do the “drudge work” calculations of standard load tolerances and the like.

pattimichelle June 17, 2025 10:09 AM

Has anyone proven that it’s always possible to detect when AI “hallucinates?” And under what conditions? If an AI is doing something truly important, this is a huge risk.

Clive Robinson June 17, 2025 11:21 AM

@ pattimichelle, ALL,

With regards,

“Has anyone proven that it’s always possible to detect when AI “hallucinates?””

The simple short answer would be,

“No and I would not expect it to be.”

Think about it logically,

Think how humans can be fed untruths to the point they believe them implicitly; it is after all what “National curricula” do. Yet they have never checked whether what they have been told is factual or not. Nor are they likely to, because they have exams to pass. Even then, in a lot of cases they are not capable of checking, for various reasons, not least because information gets withheld or falsified. It’s why there is the saying,

“History belongs to the victors”

Even though most often it’s the nastier belief systems that go on to haunt us down the ages over and over (think fascism or similar totalitarian Government).

Looking in as an outsider and from the perspective of knowing their belief is not factual… Then when they shout out for their beliefs, we politely say that they are “cognitively biased” or predisposed. We further know it’s fairly pointless trying to correct their incorrect / false view point.

Well the same happens with LLMs that have been predisposed in their training process via an ML system that is biased.

As the LLM is based not just on statistics but on the order it’s fed input data, it’s fairly easy to see how even factually correct data can be fed in, in an order that would cause the LLM to effectively believe the opposite of what the data actually indicates. It’s a rarefied version of

“A falsehood is half way around the world before the truth has its boots on.”

So it happens in humans all the time and there is no reason to believe that ML systems that train LLMs do not do the same thing…

And yes it’s worse than “Garbage in Garbage out” (GIGO) because a normal statistical analysis of the input data won’t necessarily show how the ML got biased by the order the data was presented in…

wiredog June 17, 2025 11:52 AM

“Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.” and as Clive mentioned “High end reference based professional work.”

As a programmer with 30 years’ experience I’ve been using some of the LLMs in my work. One thing I’ve noticed is that an LLM often knows about a Python library I’ve never heard of, so when I ask it to write code to compare two Python dictionaries and show me the differences, it tells me about DeepDiff and gives me some example code. Which would have taken hours of research and some luck otherwise.
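Roughly the sort of thing it produced (assuming deepdiff is installed; the dictionaries here are made up):

```python
from deepdiff import DeepDiff

old = {"host": "db1", "port": 5432, "opts": {"ssl": True}}
new = {"host": "db2", "port": 5432, "opts": {"ssl": False}, "pool": 10}

# Reports the changed values (host, opts.ssl) and the added key (pool).
print(DeepDiff(old, new))
```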

The other thing I’ve noticed is that LLMs seem to follow a 90/10 rule: 90% is right on, 10% is whisky tango foxtrot. The 10% seems to arise with lightly or inconsistently documented APIs (AWS, for example…). The thing is, a dev just out of college has the same success rate. So junior devs absolutely can be replaced with LLMs.

But then where will we get the midlevel and senior devs in 5 to 10 years? Accountancy firms are apparently wrestling with this question too.

Matt June 17, 2025 12:08 PM

If any of this was true, we’d see widespread adoption of AI tools by the public, rather than a handful of niche inroads, with most people disgusted by or opposed to these tools (which are mostly unusable garbage) being foisted upon us despite not wanting or asking for them.

The main use cases for AI tools are:
* avoid having to pay wages to workers
* justify racist or other exclusionary/authoritarian decisions by saying “the AI did it, my hands are tied”

Bauke Jan Douma June 17, 2025 1:21 PM

You know you’re being targeted by a prop-generating bot when its lists of qualities “happen” to alliterate.

To wit — the ‘facts’ it propounds and utters are shaped according to propagandistic-linguistic templates (this is the realm of snake-oil and bullshitters).

By doing this, it reveals itself to be simplistic and transparent, and thus … pathetic.

Clive Robinson June 18, 2025 4:57 AM

@ Bruce, ALL,

I don’t know if you read this “The Register” article,

https://www.theregister.com/2025/03/14/ai_running_out_of_juice/

AI running out of juice despite Microsoft’s hard squeezing

But the last couple of lines of the last paragraph,

“Pointless is increasingly becoming the word I use for AI. Yes, it can be helpful when used carefully as a tool, but that’s not what I see happening. Instead, AI’s being used as either a lazy way to create second-rate work or to make work. Businesses are also finally figuring this out. In Gartner Hype Cycle terms, we’re entering the Trough of Disillusionment. I’ll wake you up when we start climbing the Slope of Enlightenment.”

Still indicate the state of play… Though that last sentence could usefully change “when we” to “if we ever” with regard to current LLM, LRM and ML systems…

But also read the comments: some are gems of humour, but others nail the issues.

Take @Tron’s comment of,

“Good point. AI may be able to replace management and politicians without anyone noticing.

Of course that doesn’t mean it is intelligent, capable or should be trusted with anything important.

It cannot replace people with real skills and expertise or anyone who actually does any work.”

And as a summary it’s probably better than the best AI could do 😉

Clive Robinson June 18, 2025 5:27 AM

Oh I forgot to add the thoughts / Statements by Gartner,

‘AI is not doing its job and should leave us alone’ says Gartner’s top analyst

https://www.theregister.com/2025/06/17/ai_not_doing_its_job/

He gives examples of where it works and mostly does not.

Interestingly, where it works is against “drudge / makework” that is almost always a process that can be automated away.

However, what is not mentioned is that this builds up the “Computer says NO” issue… That will amplify any bias, no matter how unintentional, in agentic AI based on current AI LLM, LRM and ML systems.

Which suggests that “putting rails on” by humans may well turn into a growth industry, thus job savings will not be as high as “the promises”; however, we already know that such jobs are very stressful.

Rontea June 18, 2025 1:10 PM

AI’s ability to handle tasks with speed, scale, scope, and sophistication is indeed impressive, but it’s crucial to balance this with the unique strengths of human capabilities, such as creativity and emotional intelligence.

lurker June 18, 2025 2:18 PM

@Clive Robinson, El Reg
“Just go and do it already,” call[ing] for AI to simplify users’ lives by automatically performing tiresome tasks.

Errm, do you trust it that much?

And the final sentence attributing to Albert Camus the point about misnaming things had been dealt with by Confucius two and a half millennia prior.

“Social disorder often stems from failure to call things by their proper names, that is, to perceive, understand, and deal with reality.”

Current AI could do well to follow the precepts of the philosophers, Chinese and Greek.

https://en.wikipedia.org/wiki/Rectification_of_names

not important June 18, 2025 5:43 PM

https://www.yahoo.com/news/chatgpt-linked-mental-health-breakdowns-200812931.html

=Mattel, the maker of Barbie dolls and Hot Wheels cars, has inked a deal with OpenAI to use its AI tools to not only help design toys but power them, Bloomberg reports.

it’s impossible to ignore the inherent riskiness of a young mind developing an attachment to something that pretends to be real and an actual friend. The most high-profile example comes from the death of a 14-year-old boy last year, who died by suicide after falling in love with a companion on the platform Character.AI. AI models are also notorious for making up facts — or hallucinating — and, more importantly, breaking their own guardrails. An AI may be designed to be safe for kids, but there’s no guaranteeing that it won’t disobey its instructions.=

Grima Squeakersen June 19, 2025 7:37 AM

@Clive Robinson re: “The simple short answer would be, “No and I would not expect it to be.”

Exactly. I suppose that conceptually an independent, supervisory AI system could be established and trained on valid data to do that task, but that idea has at least two problems. First, it would be even more resource (electrical power, etc.) intensive than the systems it evaluated; second, imo it creates a classic example of the “who watches the watchers” dilemma. My general comment on Bruce’s analysis is that while he is technically correct, he seems to assume that at some point, a critical mass of entities employing AI will be willing to make the no doubt massive investment in proper training of systems, instead of continuing to grab the low-hanging fruit by siccing the AI engine on the lowest cost resource that can be identified. I strongly suspect that expectation to be wildly overoptimistic.

Grima Squeakersen June 19, 2025 7:59 AM

@lurker re: ““Social disorder often stems from failure to call things by their proper names, that is, to perceive, understand, and deal with reality.””

Ha! I would view AI’s proclivity for language distortion to be the amplification of a pronounced tendency in the internet “knowledge base” in general, as it has evolved over the past several decades. I pride myself on my English language skills (based in verifiable fact: ~55 years ago I earned twin 740 scores in SAT Verbal and English Composition exams) but I have been concerned that extensive internet use had been steadily eroding those skills. As a counter influence, I recently purchased a full 2nd edition copy of the Webster’s New International Dictionary. While no doubt evolution/corruption of a language is an ongoing process, I think that given a choice between 1934 and 1961 (publication date of the 3rd edition) to “freeze” non-technical English, 1934 would have the edge. That huge book is quite cumbersome, but I now have an immutable reference to the English language in use as I grew up.

piglet June 19, 2025 8:09 AM

“If AI recommends glue as a pizza topping, then you’re safe for another day.”

For what definition of “safe”?

piglet June 19, 2025 8:37 AM

wiredog: “But then where will we get the midlevel and senior devs in 5 to 10 years? Accountancy firms are apparently wrestling with this question too.”

That seems an important point. Some people say LLMs are reliable at solving small coding tasks. But those are the tasks that I’d give to junior developers for training. Now those junior developers will just ask an LLM and maybe they will deliver working code, but how will they ever learn to understand the code, so they can correct the mistakes that the LLM makes and tackle bigger tasks that LLMs can’t solve on their own?

This is similar to the predicament of students doing their assignments with LLMs. The output is often trash but even if it isn’t, the biggest loss is that the student doesn’t learn any skills, other than using an LLM. And perhaps the “skill” of writing LLM prompts can get you an entry level job but at some point it will become obvious that you don’t actually know anything about the work you are supposed to be doing.

Apart from that, never underestimate the capacity of fascists to use LLMs for their purposes. It’s horrible.

S. https://bsky.app/profile/paleofuture.bsky.social/post/3lru3trtejs2f

Clive Robinson June 19, 2025 12:05 PM

@ piglet,

With regards,

“For what definition of “safe”?”

Remember as I’ve noted before “glue” can be made from edible protein.

1, Wheat flour glue.
2, Casein milk/cheese glue.
3, Cow/pig foot gelatin glue.
4, Egg glue.

So yes your “Pizza Topping” arguably could be made with two or more glues.

You might want to watch this,

https://m.youtube.com/watch?v=45JhacvmXV8

Where it briefly explains how to not just make the strongest form of flour glue (also called “dope” when used on airframe fabric covering) but how to make it water proof.

You can find basic recipes for these “food glues” in,

https://www.wikihow.com/Make-Glue

Casein is also used for making the earliest form of plastic, and is still used for making high end natural bone / antler buttons and handles on carving knives and forks used at the table.

jelo 117 June 20, 2025 3:05 PM

AI is built to serve human purposes and to assist in getting to the right answer as humans see it. It could be asked to get to any answer whatsoever. So it, and likewise any machine artifact, is not, nor ever will be, intelligent.

So also AI is useful only if the human user is sufficiently expert, at least implicitly by way of society, to intelligently vet the result.

Clive Robinson June 20, 2025 5:09 PM

@ jelo 117, and others,

What will AI fail?

As you note,

“So [Current AI] and likewise any machine artifact is not nor ever will be intelligent.”

That is true for databases as well, but it does not stop them being of significant use “currently”

But we have two broad and one narrow time frames of “Past, Present, and Future”. If we are thoughtful the first two can be predictors of the future.

So what can the past and present tell us about AI and humanity?

Well changing technology is not new, as some will know I’ve an interest in industrial archaeology, from which I pull examples of likely human futures.

Because whilst technology changes rapidly, humans have not really changed over the past millennium. And it is humans that put technology to work as “force multipliers” and the like for certain people to profit at the expense of others.

Now rather than take my word for the fact that Current AI will have a significant and probably irreversible detrimental effect on mankind, how about reading somebody else’s view based on established history,

https://xeiaso.net/blog/2025/rolling-ladder-behind-us/

The simple fact is as technology replaces human skills the skills become lost unless we have cause to carry them forward.

I sometimes half jokingly talk about the loss of an “entry level skill” of the,

“Saggar maker’s bottom knocker”

https://www.thepotteries.org/bottle_kiln/saggar.htm

Given to boys to see if they were worth apprenticing through the craft of making china and glazing it with coloured glass dust to a finished product fit for a lifetime of use.

But it also tells a story of how we now have really crap pottery that barely lasts a few years, not a lifetime as it once did.

Consider current AI as just a machine like a pastry cutter. The skill of pastry making and baking is getting lost in exactly the same way as clay making and firing has now been.

Some would say “so what” but Terry Pratchett’s story about Sam Vimes and his boots still rings true.

You could buy a cheap pair of boots for ten dollars and they would last you a year, but for fifty dollars you could buy a pair of boots that would outlive you.

In five years you would be losing money to cheap shoddy bootmakers as you in effect were pushed into a “rent seeking economy” where you would pay endlessly for shoddy goods.

Look around you at “the cloud” and XaaS: they are all “rent seeking”, “cheap today but forever expensive”.

The idea that has the C-Suite salivating with AI is that not only will it give endless income, it will also kill off the skilled artisans who spend years not just “learning a craft” but “actually moving it forward”.

AI, we know, will not be proficient or skillful; it will always be on the low side of average, and it certainly won’t be able to push the craft forward, just destroy those who can.

That’s the real existential threat of AI: sub-par mediocrity that kills forward movement.

Anyone who tells you otherwise is flying in the face of thousands of years of human behaviour, for the reason of conning people out of money for their self-entitled benefit.

WD Lindberg June 21, 2025 8:38 PM

AI = Artificial inanity … there is nothing intelligent demonstrated by these models. You can tell immediately what is actually in play as soon as the word “training” is used. It is a neural net or something similar making probabilistic predictions based on the database to which it has been exposed. Therefore it can not do anything other than repeat what is in the database. It is a copycat of the highest order.

For clear mathematically based problems with clear boundary conditions and a training database that contains no errors, this type of model can perform very effectively. This type of model cannot effectively extrapolate beyond its training data envelope, and the program must be shut OFF if the inputs exceed the boundary conditions.

That is why it is utterly stupid to apply this type of technology to something as indeterminate as conversational language. There is no such thing as error free training data, the input data often drives the model beyond its boundaries, and those that have foisted this type of program on the public have refused to set clear conditions under which the program will shut down. Therefore it is no surprise that the results provided by the program are often erroneous and sometimes spectacularly erroneous.

Additionally these programs are using subsequent interaction input to become part of their training database. Thus any erroneous material being fed to the program adds to the probability that the resulting output will also be erroneous more often.

This whole public venue of exposure of these models to the public search, blogosphere and social media needs to be shut down. Return it to its proper place of clearly definable applications.

Johan Lammens July 15, 2025 8:31 AM

LLM-based applications can generate very convincing output, and in general it’s not easy to detect when they might be hallucinating or just plain off the wall. One of the reasons for this is that their (vast) training data sets are not known and not referenced with generated output, at least as far as foundational models go. RAG is a different case, and the sources it cites are in fact unknown to the underlying foundation model. For a twist on RAG architecture that could also provide references for foundation model generated output, see https://www.linkedin.com/pulse/rags-orgs-how-make-ai-applications-more-transparent-lammens-ph-d–xh0ef
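As a rough, hypothetical sketch of the basic pattern (a toy keyword matcher stands in for a real retriever, and the documents are invented): retrieve the relevant passages first, then build the prompt around them so the generated answer can cite what it was actually given.

```python
# Hypothetical RAG-style sketch: toy keyword retrieval stands in for a real
# embedding-based retriever; the documents are invented for illustration.
documents = {
    "doc-1": "AlphaFold2 predicts protein structures from amino acid sequences.",
    "doc-2": "Real-time bidding markets price display ads thousands of times per second.",
}

def retrieve(question, top_k=1):
    # Score each document by simple word overlap with the question.
    words = set(question.lower().split())
    scored = sorted(documents.items(),
                    key=lambda item: -len(words & set(item[1].lower().split())))
    return scored[:top_k]

def build_prompt(question):
    # Carry the retrieved sources in the prompt so the answer can cite them.
    sources = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (f"Answer using only the sources below, and cite them by id.\n"
            f"{context}\n\nQuestion: {question}")

print(build_prompt("How are display ads priced?"))
```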
