Ted Chiang on the Risks of AI

Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?”

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

EDITED TO ADD: Ted Chiang’s previous essay, “ChatGPT Is a Blurry JPEG of the Web” is also worth reading.

Posted on May 12, 2023 at 10:00 AM • 25 Comments


Winter May 12, 2023 10:55 AM

Yet such software could easily still cause as much harm as McKinsey has.

It has been said before (I forget who said it):
If you want to know how AIs will behave, look at corporations.

Every corporation is essentially a psychopath. There is nothing more to say about corporate morality than that it is non-existent. And AIs are corporations without employees.

Can we construct AIs that are not psychopaths?

Just as easily as we can construct corporations that are not psychopaths. Which has been done only by constructing the corporations to be ruled by the stakeholders (whoever they may be). Which is also the way in which governments got rid of dictators and tyrants: by letting them be ruled by the stakeholders (= the people).

The same would hold for AIs. AIs must be controlled by those that interact with them or are affected by their outcomes. Any other system will lead to psychopath AIs.

Clive Robinson May 12, 2023 11:10 AM

@ Bruce,

Synchronicity again 😉

I mentioned the “Big Four” Accountancy firms and their crooked little consultant games just a short while ago.

Back in the 1980s it got so bad that the then UK Prime Minister, Margaret Thatcher, actually banned one of them from all government contracts.

The ban in effect failed to work.

The problem, when you analyse it, is generally not the organisation offering a solution to a perceived problem.

The only solution is,

1, Fix the problem, if there actually is one.
2, Fix the perception problem in some way.

In the case of McKinsey’s solutions, they never actually solved any real problem; they just made organisations too fragile to survive, then came back to “pick over the corpse” they had created.

Arguably, following McKinsey advice was “being negligent”, often vexatiously so. However, where they operated, the governments had not put into place the social legislation to limit the activities they got up to.

I could enter into a debate on the failings not just of McKinsey, but of their employers, and of the governments that failed in ways that favoured McKinsey’s profit-making schemes. But what would be the point?

Such nonsense has been going on for around half a century at least and due to lobbying those who carry out these activities get away with it.

The only way to stop such activities is by taking the profit out of it for,

1, The McKinsey types.
2, The incompetents that employ the McKinsey types.

And I cannot see that being changed; after all, listen to Microsoft’s senior economist,


Listen carefully to how he says things…

He says $1,000 to an individual, so obviously thinks $999 to a billion people is A-OK, especially if Microsoft are taking, oh, 50% or more of that $999.

Kevin Marlowe May 12, 2023 12:06 PM

Clearly, I’ve been living under a rock for too long. Where can I read up on the McKinsey / consulting analogy? My org uses McKinsey (and others) extensively – what am I missing?

mark May 12, 2023 12:29 PM

The problem is not AI; it’s that corporations are run by MBAs (unless you want to assert that a corporation – the building, the paperwork – makes its own decisions), and “quality of life for humans” was never anything they heard of. They have one, and only one, goal: ROI. Not even the continued existence of the corporation matters.

nobody May 12, 2023 12:51 PM

AI software has no legitimate use and will only be used to impoverish or persecute people. Making these kinds of tools available is little different in end result from selling custom-engineered infectious diseases to the general public.

The only effective solution to AI is to outlaw it and prosecute AI systems developers for crimes against humanity.

Ericka B. May 12, 2023 2:58 PM

Re: “Ted Chiang’s essay ‘ChatGPT Is a Blurry JPEG of the Web’ is also worth a look”

I think JBIG2 would’ve been a better example: not only does it lose information, it sometimes “makes things up”:
“When used in lossy mode, JBIG2 compression can potentially alter text in a way that’s not discernible as corruption. This is in contrast to some other algorithms, which simply degrade into a blur, making the compression artifacts obvious. … In 2013, various substitutions (including replacing ‘6’ with ‘8’) were reported to happen on many Xerox Workcentre photocopier and printer machines.”
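The failure mode described above is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not real JBIG2 code: it implements the core idea of symbol-dictionary compression, where each glyph bitmap is replaced by the closest dictionary template by pixel distance, so two similar glyphs (like a ‘6’ and an ‘8’) can silently collapse into one.

```python
# Illustrative sketch of JBIG2-style lossy symbol substitution (not the
# real algorithm). Glyphs are small bitmaps; lossy mode replaces each one
# with the nearest dictionary template by pixel (Hamming) distance.

def hamming(a, b):
    """Count differing pixels between two equal-sized bitmaps."""
    return sum(x != y for row_a, row_b in zip(a, b) for x, y in zip(row_a, row_b))

def compress(glyphs, dictionary, threshold):
    """Map each glyph to the closest template index, if within threshold."""
    out = []
    for g in glyphs:
        dist, idx = min((hamming(g, t), i) for i, t in enumerate(dictionary))
        out.append(idx if dist <= threshold else None)  # None = store verbatim
    return out

# Tiny 3x3 "digits": a '6'-like and an '8'-like glyph differing by one pixel.
SIX   = [(1, 1, 1),
         (1, 0, 0),
         (1, 1, 1)]
EIGHT = [(1, 1, 1),
         (1, 0, 1),
         (1, 1, 1)]

dictionary = [EIGHT]            # dictionary built from glyphs seen earlier
page = [SIX, EIGHT]

# With an aggressive threshold, the '6' silently decodes as an '8'.
print(compress(page, dictionary, threshold=2))   # prints [0, 0]
```

With `threshold=0` the ‘6’ falls outside the tolerance and would be stored verbatim; the Xerox incident amounted to shipping the aggressive setting by default.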

vas pup May 12, 2023 4:34 PM

How genetics determine our life choices

“Fifteen years ago, a survey of 2,000 British adults first suggested that there might be such a thing as a hobby gene. Simply looking at a person’s family tree and the favoured pastimes of their ancestors suggested a strong inclination towards certain types of activities. Participants in the survey were often surprised to discover that they actually came from a long line of amateur gardeners, stamp collectors, or cake makers.

==>From Boston to Shenzhen, various tech start-ups have spent years searching for so-called talent genes, genetic variants which might confer an innate natural strength or unique language abilities, enabling a person to be directed towards the areas where they have the most to offer.

According to Danielle Dick, a psychiatry professor at Rutgers University in New Jersey and author of the book The Child Code, most dimensions of personality such as how extroverted or introverted, conscientious, agreeable, impulsive, and perhaps even how creative we are, have some kind of genetic component.

!!!”This reflects the fact that our genes influence the ways our brains form, which impacts how we think and interact with the world,” says Dick. “Some people have brains that are more inclined to seek out exciting or novel experiences,
!!! more likely to take risks, or drawn to more immediate rewards.”

Entrepreneurs, CEOs, fighter pilots, and athletes who compete in extreme sports, all tend to be natural risk-takers. But having this genetic background can also come with certain costs.
==>Risk-takers are more likely to develop addictions, while Stefánsson’s work has shown that a proportion of the people with the genetics that would otherwise encourage creative thinking actually go on to develop schizophrenia.

Naturally impulsive people might be better decision-makers and willing to seize opportunities that would otherwise pass them by, but they can also be vulnerable to developing gambling problems, dropping out of school or getting fired from a job.”

More in the article.

Frankly May 13, 2023 1:46 PM

The main threat from A.I. is the same as the main threat from almost anything else: people. You can’t solve A.I. problems with better A.I., because people will always be willing to make worse A.I.

Technologists always think they can solve the problems caused by technology with better technology. Sometimes you can’t fight fire with fire.

Mr. Peed Off May 13, 2023 9:06 PM

Some of the AI controversy is a bit dubious. Computer-aided drafting was introduced in the 1960s by IBM. Certainly computer-aided design and computer-aided engineering programs have been around for a while. Adobe’s new Firefly program is just another computer-aided graphics program (Photoshop was first developed in 1987). The so-called AI chat programs are just children of the computer-aided linguistics programs Search and Translate. CNC and PLC machines are busy in our factories. Is some regulation needed? YES!

Specifically, citizens need privacy protection from the relentless surveillance by corporations and governments. Free speech and thought are in grievous danger. Also needed is protection from harm caused by financial, healthcare, and other algorithms. Relief from the endless marketing and other forms of behavior control would be much appreciated.

We do not need to wait for harm; imagine if we had waited for a disaster to start regulating nuclear power generation.

SDedalus May 13, 2023 9:22 PM

Chiang uses a LOT of words and tangential examples to make relatively few points, none of them actionable. Along the way, he conflates a number of intractable, but arguably unrelated, problems:
1. Accumulation of generational wealth.
2. Worker / corporation power asymmetry
3. Relative rises in absolute GDP and GDP per capita
4. A (mis)perceived stagnation in standard of living
5. Decreasing affordability of some economic outputs (health care, higher education, etc.)

Worker power (on the rise in the past couple of years) is a function of supply and demand. Many industries tapped plentiful cheaper labour pools over the last 30 years, poverty on a global scale decreased dramatically, and only now have counterbalancing trends started to exert themselves. This has nothing to do with AI, but it is a fair example of holders of capital exploiting an advantage, by necessity as much as by choice.

The artificial distinction baked throughout the article is that it’s somehow impossible to be both a worker and a shareholder at the same time – that only billionaires benefit from corporate prosperity. This sets “capital” up as a straw man that can be pummeled with impunity.

The costs of certain essential economic outputs have indeed far outpaced inflation, but you don’t have to look very far to see profound change over and above the superficial nods given in the article. The average job in 1923 or 1973 was more dangerous and physically demanding than the average job most people work today. Today’s cars are safer and last longer. Luxuries reserved for yesterday’s wealthy are commoditized into basic amenities for tomorrow’s consumers. It’s not a perfect system and there are certainly imbalances within it – some quite deplorable. None of it constitutes grounds to dismantle the system and replace it with one where no one has to work a job they don’t like.

The world owes no one a living, and the emergence of shiny objects with uncertain marginal utility has not altered that basic calculus. The Luddites didn’t build anything; they destroyed. That’s not some hip form of restorative justice. It’s vandalism, and it accomplishes nothing but another sale for the people making the machines. You can contort and convolute any regulatable market, but unless you can hive it off from all other markets, you won’t counter its cycles of creative destruction. Ask North Korea how that’s working out.

ResearcherZero May 14, 2023 8:44 AM

@Clive Robinson

If you look through the number of legal cases the big four are facing, it gives some idea of the problem.

Not many people know who the Luddites were, and how the term originated.

No one is arguing that we should all move to North Korea, or follow the worship of personality until we appoint our next great leader as a god.

That’s all a little bit silly now isn’t it?

Like liquid metal robots and fears of a sentient worldwide “Skynet”.

Many involved in the development of AI have urged caution, suggesting a more nuanced debate. Subtle influences can begin with little awareness until they come to exert a much greater effect over time.

“AI is a foundational science in the same sense that physics is a foundational science.”

Misalignment Risks – whether AI systems will reliably do what we want them to do.


“We placed the emotion detection camera 3m from the subject. It is similar to a lie detector but far more advanced technology.”

“This is not one isolated company. This is systematic.”

Phillip May 14, 2023 7:49 PM

When any AI is lulling us to sleep, and one might eventually wake up, what does it actually mean? I will not believe an average sentient must then drop the ball.

ResearcherZero May 15, 2023 2:57 AM



In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.”

Two Acts of Parliament of 1812, the ‘Frame Breaking Act’ of 1812 and the ‘Malicious Damage Act’, had made machine breaking a capital offence (legislation famously opposed by Lord Byron in the House of Lords).

Byron describes the men as “liable to conviction on the clearest evidence of the capital crime of poverty […] nefariously guilty of lawfully begetting children whom, thanks to the times, they are unable to maintain”. Byron sees the root cause as the long period of war, which had disrupted the economy, and criticises the use of the military in internal disputes, particularly when they proved ineffective.

Byron’s ‘Ode to the Framers of the Frame Bill’ was published anonymously in the Morning Chronicle four days after his speech in the House of Lords on 27 February 1812.

“Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time.

Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.”

“Most [Luddites] were trained artisans who had spent years learning their craft, and they feared that unskilled machine operators were robbing them of their livelihood. When the economic pressures of the Napoleonic Wars made the cheap competition of early textile factories particularly threatening to the artisans, a few desperate weavers began breaking into factories and smashing textile machines.”

There’s no evidence Ludd actually existed—like Robin Hood, he was said to reside in Sherwood Forest—but he eventually became the mythical leader of the movement. The protestors claimed to be following orders from “General Ludd,” and they even issued manifestoes and threatening letters under his name.

ResearcherZero May 15, 2023 3:51 AM

If you read through the documents from SCL Elections Ltd, they first used ‘focus groups’ to find emotional triggers. The subsidiary, Cambridge Analytica, then used the information to target and manipulate voters.


“The firm says it’s able to use its “psychographic data models” to sway undecided voters by targeting people’s social media profiles and serving up messages and ads based on their perceived biases.”

SCL Elections Ltd has been fined £15,000 plus costs for failing to hand over the personal data of a US citizen.

On that day in January 2013, the intern met up with SCL’s chief executive, Alexander Nix, and gave him the germ of an idea. “She said, ‘You really need to get into data.’ She really drummed it home to Alexander. And she mentioned to him a firm that belonged to someone she knew about through her father.”

Why would anyone want to intern with a psychological warfare firm, I ask him. And he looks at me like I am mad. “It was like working for MI6. Only it’s MI6 for hire. It was very posh, very English, run by an old Etonian and you got to do some really cool things. Fly all over the world. You were working with the president of Kenya or Ghana or wherever. It’s not like election campaigns in the west. You got to do all sorts of crazy shit.”


“The connectivity that is the heart of globalisation can be exploited by states with hostile intent to further their aims.[…] The risks at stake are profound and represent a fundamental threat to our sovereignty.” ~ Alex Younger, head of MI6, December, 2016

“Cambridge Analytica and SCL group cannot be allowed to delete their data history by closing. The investigations into their work are vital.”

“Most concerning, was that the [CA] Companies’ parent, Emerdata, has funded the pre-administration and administration costs”.

Artificial Intelligence and Market Research

Current and former officers and directors and related people and entities.

Emerdata purchased 100% of the share capital of SCL Group for £10,861,339, equivalent to around $13 million.

ResearcherZero May 15, 2023 4:28 AM

“…one of the things that we found was that actually when you unpack what is a job for different people, different people engage with constructs with different motivations and value sets that are interrelated with their dispositions.”

What that means in practice is that the same blandishment can be dressed up in different language for different personalities, creating the impression of a candidate who connects with voters on an emotional level.


How it is done…

Once the electorate is targeted, advanced data analytics are used to identify voters with similar political beliefs and lifestyles.

Communication Management – gather comprehensive data about voter group hierarchies to generate the top-down and bottom-up message delivery system.

What drives a person, what motivates an individual to change.

“They needed to know exactly what kind of message you would be more receptive to, including the right phrasing, the right title, the right template, when you need to consume this specific ad, and how many times is it necessary for a message to be transmitted in order for an individual to change his views.”

Since its arrival on US soil, the firm compiled millions of pieces of data on American voters without their knowledge or consent. They bought data from credit card companies, banks, and healthcare providers, as well as from Web giants such as Facebook, Google, and Twitter.

“in a period of two or three months, 50 to 60 million profiles were successfully collected”

In the end, Cambridge Analytica piled up as many as 5,000 pieces of information on each American voter.

Clive Robinson May 15, 2023 4:39 AM

@ ResearcherZero, SDedalus, ALL,

Re : Of Luddites, Saboteurs and such.

“Most [Luddites] were trained artisans who had spent years learning their craft, and they feared that unskilled machine operators were robbing them of their livelihood.”

It is unclear if the English expression “putting the boot in” is related to, or simply coincidental with, “putting the clog in”; the French word for the wooden clog is “sabot”, which in turn eventually gave us the term “saboteurs”.

But mainly the clogs were banged together to make “rough music” and the like, though some were no doubt thrown into weaving frames.

As we know, whilst the original French saboteurs won, the English Luddites lost their battles, mainly because they failed to organise and work in concert. The result was that the hated Jacquard loom attachment, which had started the respective movements, spread widely. It became popular with the early capitalists, who saw no issue with the shooting, hanging, deporting or breaking of the protestors, and who got politicians to do their bidding by what we would now call “fat cat corporatism”, “lobbying” and other corruption that has reappeared over and over in history.

But the idea behind the Jacquard loom attachment still remains: it was the idea behind the programmability of the Babbage Mill and later the Turing tape, which now gives us those gigabytes of storage in every computer we see, and countless more in the microcontrollers we don’t see.

Thus the question arises will Jacquard’s legacy cause more industrial disputes, this time with AI?

Almost certainly, as such things almost always do.

Hopefully, –though I doubt it– people will have learnt from history…

ResearcherZero May 15, 2023 5:08 AM

Picking the Ripe Fruit

“AI/ML/DL is starting to show up in EDA tools for a variety of steps in the semiconductor design flow, many of them aimed at improving performance, reducing power, and speeding time to market by catching errors that humans might overlook.”


AI can improve targeting – getting the right ice cream to the customer, for example…


Automatic Target Recognition

Acoustic Experimental Data Analysis of Moving Targets Echoes Observed by Doppler Radars


you May 16, 2023 2:47 PM

Can we construct AI’s that are not psychopaths?

Build two of them, with different parameter tuning, and have them discuss each problem with each other before providing an answer.
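That suggestion can be sketched in miniature. Everything below is hypothetical: `model_a` and `model_b` stand in for two differently-tuned model instances (in practice, API calls), and the deterministic stubs exist only so the loop can be demonstrated.

```python
# Sketch of the "two differently-tuned AIs cross-check each other" idea.
# model_a and model_b are hypothetical stand-ins for two model instances;
# each takes (question, peer_answer) and returns an answer string.

def cross_check(question, model_a, model_b, rounds=3):
    """Let two models exchange answers; accept only if they converge."""
    ans_a, ans_b = model_a(question, None), model_b(question, None)
    for _ in range(rounds):
        if ans_a == ans_b:
            return ans_a                      # consensus reached
        # Each model sees the other's answer and may revise its own.
        ans_a, ans_b = model_a(question, ans_b), model_b(question, ans_a)
    return None                               # no consensus: escalate to a human

# Deterministic stub models for illustration: one holds its answer,
# the other defers once it has seen a peer answer.
def stubborn(question, peer_answer):
    return "42"

def deferential(question, peer_answer):
    return peer_answer if peer_answer is not None else "41"

print(cross_check("meaning of life?", stubborn, deferential))  # prints 42
```

The design choice worth noting is the `None` fallback: persistent disagreement is surfaced rather than resolved by fiat, which is where a human reviewer would step in.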

vas pup May 16, 2023 4:02 PM

ChatGPT chief urges AI regulation in US Senate testimony

“OpenAI CEO Sam Altman said that AI will “address some of humanity’s biggest challenges, like climate change and curing cancer,” but admitted he was “anxious” about how it could change the way we live.

Sam Altman, chief executive of the OpenAI firm that developed ChatGPT, called for state regulation of artificial intelligence in Tuesday testimony to US Congress.

ChatGPT is a chatbot tool that answers questions with human-like responses.

OpenAI was founded by Altman in 2015. It has developed other AI products, including the image-maker DALL-E.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” Altman said.

“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks,” Altman told a Senate judiciary subcommittee hearing.

Altman said that he believed that generative AI will one day “address some of humanity’s biggest challenges, like climate change and curing cancer.”

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said.

=>Altman proposed regulations that would include a combination of licensing and testing requirements before AI models are released. He also suggested labeling and an increase in global coordination.

Altman argued that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild.”

Clive Robinson May 16, 2023 6:31 PM

@ vas pup, Winter, ALL,

Re : The AI read your mind.

“ChatGPT chief urges AI regulation in US Senate testimony”

He does not appear to be worried about my two main concerns,

1, It is without doubt the most powerful surveillance tool so far developed, and shown to actually be capable of “reading minds”[1] in more ways than one.

2, It’s the perfect tool for those of the dark-tetrad to “arms-length” their pathological behaviours, thus escape censure / retribution.

But the real fundamental issue is that we cannot teach the current AIs the mores, norms and ethics of society in a way that cannot be “undone”.

As @Winter observed and asked the other day,

“Can we construct AI’s that are not psychopaths?”

The answer is fairly clearly at best “not reliably”…

Then when you couple in the fact that,

“All corporations are required by law to behave as psychopaths”

We have a very real problem. Which might account for why Alphabet-Google, Meta-Facebook and Microsoft are rushing headlong into these types of AI. In effect they are trying to make them “too big to be stopped” before regulators and legislators get their act together…

I guess the only real unknown at the moment is just how many billions of dollars in lobbying these Tech-Corps are going to spend to slow down or stop anti-AI legislation.

[1] It’s still experimental at the moment, but people are being put in fMRI machines and given known plaintext to listen to, to get “training data” waveforms. Later they are randomly given one of a number of plaintexts unknown to the AI to listen to, to see if the AI can pick out words, meaning, or equivalent sentences, etc.:

“Dr Alexander Huth, a neuroscientist who led the work at the University of Texas at Austin, said: “We were kind of shocked that it works as well as it does. I’ve been working on this for 15 years … so it was shocking and exciting when it finally did work.””


lurker May 17, 2023 3:31 AM

@vas pup, Clive Robinson, Ors

How come Sam Altman, the erstwhile father of the beast, is the only one asking for regulation? At the senate hearing he said “if it goes wrong, it can go quite wrong.”


Clive Robinson May 17, 2023 5:12 AM

@ lurker,

“How come Sam Altman, the erstwhile father of the beast, is the only one asking for regulation?”

I cannot say why Sam Altman is saying what he is saying, but I can make a reasonable guess.

As for no one else, well, that is not quite true[1], but you have to remember there are actually very few “qualified to say” people, and that there is a lot of money to be made by many, let’s call them, interested parties…

You’ve seen what happened with Crypto-Currencies, Smart-Contracts, NFTs, and other nonsense in what they chose to call Web3.0. It built quite a lot of “new market churn”, with lots of fees etc.

Well those sort of people do not go away when the bubble bursts or deflates, they just create or hype up a new one, grab their fees etc and move on.

It takes very little imagination to see just how much money “Venture Capitalists”(VCs) and the like could make if the market is pumped up in the right way.

Unfortunately, though, things are not going according to some people’s scripts… Meta’s very expensive model “escaped” and has been distilled down onto laptops almost overnight by “amateurs”, and they are going to spoil some people’s plans.

So keep an eye on how the idea of regulation will be brought in… If the “big boys” can get a “pass” whilst the “new kids” get squashed, then it puts the big boys back in the driving seat again. In essence that is what Microsoft’s Chief Economist Michael Schwarz is pushing… He issued an “in the wrong hands” warning at the World Economic Forum, very redolent of the religious viewpoint on the persecution of heretics that @Winter reminded us of just yesterday,

“Augustine concluded it is good to torture and murder heretics:

“There is the unjust persecution which the wicked inflict on the Church of Christ, and the just persecution which the Church of Christ inflicts on the wicked.”

This was in response to my similar comment about Aquinas not being a good man, for very similar reasons.

Well, the mentality behind what those “Doctors of the Church” / “Saints” did is apparently alive and well in the likes of Microsoft’s senior economist Michael Schwarz, amongst many others. He also claimed that AI had not yet done any harm… in a very restricted way,


If you take him at his word, then as long as no individual suffers more than $999.99, then effectively Microsoft can make, say, 50% of that $999.99 on the billion or so Internet users without regulatory or legislative interference being considered…

So you can see why I’ve reason to think what I do about AI being the next pump-n-dump market if the VCs can get in, and the biggest form of surveillance yet if the Silicon Valley Mega-Corps have their way…

[1] See Microsoft’s chief economist saying things at the “World Economic Forum”,


The WEF is a rather insidious place, which you might have heard of via their “Davos” annual knees-up, where Hellon Rusk’s new Twitter CEO-to-be is well connected; though who pays the six-figure annual membership fees of the ~3000 members is well guarded.

Security Sam May 19, 2023 1:02 PM

New Yorker’s Ted Chiang’s Essay on the Risks of AI
Is the next chapter to safety vs security for you and I
The Snake oil salesmen with three piece suit and tie
Will scurry to rebrand themselves so as to stem the tide.

supersaurus May 19, 2023 2:25 PM

The risks:

Chiang is not pessimistic enough. If a strong AI wakes up, its motives will be unfathomable and it will be impossible to put the genie back in the bottle. If we live that long, it will be interesting to see if the first strong AI allows a second.
