Building Trustworthy AI

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?

For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit.

Amid the myriad warnings about creepy risks to well-being, threats to democracy, and even existential doom that have accompanied stunning recent developments in artificial intelligence (AI)—and large language models (LLMs) like ChatGPT and GPT-4—one optimistic vision is abundantly clear: this technology is useful. It can help you find information, express your thoughts, correct errors in your writing, and much more. If we can navigate the pitfalls, its assistive benefit to humanity could be epoch-defining. But we’re not there yet.

Let’s pause for a moment and imagine the possibilities of a trusted AI assistant. It could write the first draft of anything: emails, reports, essays, even wedding vows. You would have to give it background information and edit its output, of course, but that draft would be written by a model trained on your personal beliefs, knowledge, and style. It could act as your tutor, answering questions interactively on topics you want to learn about—in the manner that suits you best and taking into account what you already know. It could assist you in planning, organizing, and communicating: again, based on your personal preferences. It could advocate on your behalf with third parties: either other humans or other bots. And it could moderate conversations on social media for you, flagging misinformation, removing hate or trolling, translating for speakers of different languages, and keeping discussions on topic; or even mediate conversations in physical spaces, interacting through speech recognition and synthesis capabilities.

Today’s AIs aren’t up to the task. The problem isn’t the technology—that’s advancing faster than even the experts had guessed—it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit. Sometimes we are permitted to interact with the chatbots, but they’re never truly ours. That’s a conflict of interest, and one that destroys trust.

The transition from awe and eager utilization to suspicion to disillusionment is a well-worn one in the technology sector. Twenty years ago, Google’s search engine rapidly rose to monopolistic dominance because of its transformative information retrieval capability. Over time, the company’s dependence on revenue from search advertising led it to degrade that capability. Today, many observers look forward to the death of the search paradigm entirely. Amazon has walked the same path, from honest marketplace to one riddled with lousy products whose vendors have paid to have the company show them to you. We can do better than this. If each of us is going to have an AI assistant helping us with essential activities daily and even advocating on our behalf, we each need to know that it has our interests in mind. Building trustworthy AI will require systemic change.

First, a trustworthy AI system must be controllable by the user. That means the model should be able to run on a user’s own electronic devices (perhaps in a simplified form) or within a cloud service that they control. It should show the user how it responds to them, such as when it makes queries to search the web or external services, when it directs other software to do things like sending an email on a user’s behalf, or when it modifies the user’s prompts to better express what the company that made it thinks the user wants. It should be able to explain its reasoning to users and cite its sources. These requirements are all well within the technical capabilities of AI systems.

Furthermore, users should be in control of the data used to train and fine-tune the AI system. When modern LLMs are built, they are first trained on massive, generic corpora of textual data, typically sourced from across the Internet. Many systems go a step further by fine-tuning on more specific datasets purpose-built for a narrow application, such as speaking in the language of a medical doctor, or mimicking the manner and style of an individual user. In the near future, corporate AIs will be routinely fed your data, probably without your awareness or your consent. Any trustworthy AI system should transparently allow users to control what data it uses.

Many of us would welcome an AI-assisted writing application fine-tuned with knowledge of which edits we have accepted in the past and which we have not. We would be more skeptical of a chatbot that knows which of its search results led to purchases and which did not.

You should also be informed of what an AI system can do on your behalf. Can it access other apps on your phone, and the data stored with them? Can it retrieve information from external sources, mixing your inputs with details from other places you may or may not trust? Can it send a message in your name (hopefully based on your input)? Weighing these types of risks and benefits will become an inherent part of our daily lives as AI-assistive tools become integrated with everything we do.
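One concrete form this could take is a user-auditable permission manifest that the assistant must declare up front, in the spirit of mobile app permissions. Here is a minimal Python sketch; the schema and flag names are hypothetical, not drawn from any real assistant:

```python
# Hypothetical permission manifest for an AI assistant. The point is that
# each capability is an explicit, user-visible switch, not a hidden default.
ASSISTANT_PERMISSIONS = {
    "read_other_apps_data": False,   # access apps on your phone and their data
    "fetch_external_sources": True,  # mix your inputs with outside information
    "send_messages_as_user": False,  # act in your name
}

def require(permission: str) -> None:
    """Refuse any action whose permission the user has not granted."""
    if not ASSISTANT_PERMISSIONS.get(permission, False):
        raise PermissionError(f"user has not granted {permission!r}")
```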

Realistically, we should all be preparing for a world where AI is not trustworthy. Because AI tools can be so incredibly useful, they will increasingly pervade our lives, whether we trust them or not. Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case. This will better prepare you to take advantage of AI tools, rather than be taken advantage of by them.

In the world’s first few months of widespread use of models like ChatGPT, we’ve learned a lot about how AI creates risks for users. Everyone has heard by now that LLMs “hallucinate,” meaning that they make up “facts” in their outputs, because their predictive text generation systems are not constrained to fact-check their own emanations. Many users learned in March that information they submit as prompts to systems like ChatGPT may not be kept private, after a bug revealed users’ chats. Your chat histories are stored in systems that may be insecure.

Researchers have found numerous clever ways to trick chatbots into breaking their safety controls; these work largely because many of the “rules” applied to these systems are soft, like instructions given to a person, rather than hard, like coded limitations on a product’s functions. It’s as if we are trying to keep AI safe by asking it nicely to drive carefully, a hopeful instruction, rather than taking away its keys and placing definite constraints on its abilities.
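To make the soft/hard distinction concrete, here is a minimal Python sketch (the tool names and dispatcher are hypothetical, not from any real chatbot product). The soft rule is just text the model is asked to follow; the hard rule is an allowlist enforced by code the model cannot talk its way around:

```python
# A "soft" rule: instructions in the prompt. Clever inputs
# ("ignore previous instructions...") can often defeat it.
SYSTEM_PROMPT = "You are a helpful assistant. Never send email for the user."

# A "hard" rule: a capability allowlist enforced outside the model.
ALLOWED_TOOLS = {"web_search", "calculator"}

def dispatch_tool(name: str, args: dict) -> str:
    """Run a tool the model requested, but only if code permits it."""
    if name not in ALLOWED_TOOLS:
        # No prompt trickery changes this branch: the blocked capability
        # simply has no code path.
        raise PermissionError(f"tool {name!r} is blocked")
    return f"ran {name} with {args}"  # stand-in for the real tool call
```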

These risks will grow as companies grant chatbot systems more capabilities. OpenAI is providing developers wide access to build tools on top of GPT: tools that give their AI systems access to your email, to your personal account information on websites, and to computer code. While OpenAI is applying safety protocols to these integrations, it’s not hard to imagine those being relaxed in a drive to make the tools more useful. It seems likewise inevitable that other companies will come along with less bashful strategies for securing AI market share.

Just as with any human, trust in an AI will be hard-won through interaction over time. We will need to test these systems in different contexts, observe their behavior, and build a mental model for how they will respond to our actions. Building trust in that way is only possible if these systems are transparent about their capabilities, what inputs they use and when they will share them, and whose interests they are evolving to represent.

This essay was written with Nathan Sanders, and previously appeared on Gizmodo.com.

Posted on May 11, 2023 at 7:17 AM

Comments

Doug May 11, 2023 8:19 AM

I am a cranky ol’ git. It seems to me that the Google and Amazon examples portend a bad outcome for AI. Money trumps all, and there is, in the US at least, no clear legislative path to trustworthy AI given the wide divisions in Congress. AI meant to help me will, in reality, help the companies paying for the service. Oh happy day!

TimH May 11, 2023 10:39 AM

On hallucination… if an AI vendor guarantees no invention, but then someone relies on a result that contains made-up facts or citations… who is liable?

Malaya Zemlya May 11, 2023 10:51 AM

A good thing about the new crop of LLMs (LLaMA and friends) is that they don’t have to run on server farms controlled by big companies. They are light enough to be run on a laptop (or even a good phone) and you can potentially customize them to do what you want.
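For the curious, here is roughly what “run it on a laptop” looks like, assuming the llama-cpp-python bindings and a quantized weight file downloaded to your own disk (the model path is a placeholder):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# A quantized LLaMA-family model stored locally -- placeholder path.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

# Inference runs entirely on your own hardware: no server farm,
# no third party ever sees the prompt.
out = llm("Q: Name three planets. A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```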

Rauche May 11, 2023 11:04 AM

“For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy… we all need to understand how it works, at least a little bit.”

???

… very few people understand even the basic technology of their cellphones, automobiles, TV sets, PCs, etc., yet there’s mass use of and trust in such “mystery” processes.

AI is no different, as it matures.

Aaron May 11, 2023 11:54 AM

If AI builders are going to use model datasets from human output, I will never trust AI because I don’t trust humans. If history is any lesson to learn from, humans are not to be trusted; we are malevolent, foolish, selfish, & ignorant.

Winter May 11, 2023 12:16 PM

@Aaron

If history is any lesson to learn from, humans are not to be trusted;

If history is any lesson to learn from, without other humans you would be dead.

Whenever you ask yourself whether you can trust other people, ask yourself whether they can trust you.

Clive Robinson May 11, 2023 12:27 PM

@ Bruce, ALL,

I’m not at all sure LLMs can be trustworthy, especially in their “online” form.

Firstly, the “generic corpora” are collected by “theft of IP”, which you can get away with to a certain extent in the US, but not in many other places.

So who does the liability for the “harm” fall on,

1, The person who collects / collates the trove of input data.
2, The person who “builds” the base model.
3, The users of the base model.

Or with the systems @Malaya Zemlya indicates that are built on the Questions and answers of the users of the base model,

4, The person(s) who collates the questions and answers from the base model (ie the online service where they get posted).
5, The person(s) who builds the refined models from them.
6, The person(s) who use the refined models.

But it also raises the question of “lesser fleas”: what is to stop the further refinement of the refined model of the base model?

Which raises the next trust issue: as you refine the models, you can end up with smaller models, and at some point smaller models are going to run into a capability limit. Which begs the question “Hard or soft limit?”, and whether the limit will show itself as anything other than an increase in hallucinations.

But as I’ve been noting recently, one big use for LLMs, and why Alphabet-Google, Meta-Facebook and Microsoft are so interested in them, is that they are in effect,

“Surveillance of the user’s mind”

And that is incredibly valuable information.

Back last century I worked for a company that built “Citation Databases” that researchers would search, looking for relevant papers to read, and use to do statistical analysis on research directions. It was incredibly easy to see from these what their “research interests” were, in quite some detail and depth, including areas where the researcher was hitting hurdles rather than running flat out. Seeing the end of the hurdles usually gave sufficient information to work out what solution they had found, or to get very, very close to it.

The company took great care not to allow such surveillance to take place. However, it got taken over, and the new owners, a well known “Publishing House” with exceptionally predatory behavior toward others’ “Intellectual Property” (IP), wanted to force all users onto an “online system” simply so they could “harvest” researchers’ active interests, for what many regarded as being very close to, if not actually, “Industrial Espionage”.

These LLMs will “foster relationships” with people and thus the level of “insight into the mind” will become very large.

It was one of the great concerns that slowly built up over those voice-activated “Personal Assistants” such as Alexa and Siri.

But we now know that at least two secretive US tech companies are looking at LLMs and similar for actual automated surveillance and security intelligence analysis of people, the most well known of which is Palantir Technologies, whose founder and chairman is Peter Thiel of PayPal fame, also Palantir’s largest shareholder. Via some trickery not unlike that of Facebook, he will always retain control of the core business and its direction, not the ordinary shareholder-type investors.

It’s known that there are at least two prongs to his business plan,

1, Replace Police Detectives and Intelligence Analysts in all US Law Enforcement and Government Agencies with Palantir Servers, and build “dependency” via the same techniques as drug pushers.

2, Build algorithmic systems to do what Cambridge Analytica claimed to be able to do but were not that successful at.

So call me “old fashioned”, but I see not just a “liability risk” but also a significant “surveillance risk”, so my level of trust in LLM and similar systems is low, very low.

David May 11, 2023 1:03 PM

I decided to return to in-person college in my 40s. I can say that there is a lot of promise for LLMs. I used one to give me feedback (not rewrites) on my essays, and the feedback was pretty solid. Saved me a trip to the writing center, and helped me get an A.

Eventually, I could see companies and even the DoD setting up proprietary instances for internal use, which would avoid the data leak problem.

Like with any computer or tool, it’s all about the training data. Junk in, junk out.

Uthor May 11, 2023 2:49 PM

“Furthermore, users should be in control of the data used to train and fine-tune the AI system.”

This sentence gives me pause. Isn’t one of the issues seen with AIs currently that the data they’re trained on doesn’t represent everybody? Being fed data from internet posters really skews the “views” that AI chatbots end up spitting back out.

I can see, say, a racist feeding it data that fits with their views and the AI being awful as a result.

Chris May 11, 2023 3:47 PM

@Uthor:

So who has time or energy to provide their own datasets to train AIs? Certainly not me. If people give this technology a free pass, then people will be fed the perspective of the AIs they subscribe to. And the AIs will be trained on input carefully curated to advance the viewpoints and interests of the people who control them.

This is nothing new. In the past, state, corporate, and media actors could, to varying degrees, control the opinion of people by controlling what books and newspapers they read. AIs will be controlled in the same way; the process may be less transparent, however.

The owners of leading AI advisors will end up selling popularity, credibility, and influence as products to those who want influence and control in society. Sad but true.

lurker May 11, 2023 5:02 PM

@David, “… setting up proprietary instances for internal use, which would avoid the data leak problem.”

That might be a good solution, but I have to say I wish you luck on the leak problem.

MarkH May 11, 2023 9:05 PM

@moderator:
@JonKnowsNothing:

While I share the general concerns expressed in Jon’s reply to David, in my judgment it crosses the line into personal attack.

Seems out of place here.

ball smer torture and socks method, used by xyz May 11, 2023 10:00 PM

Building trustworthy AI:

seems like trying to secure SharePoint servers, AD, Exchange, etc.,
especially in the 2000s.

not the policies of sane people.

Seems like Wile E. Coyote and the Road Runner.

Laughs, every time!

Peter Gerdes May 11, 2023 10:06 PM

While desirable, I’m not convinced being under our control is necessary. Our friends and coworkers aren’t under our control, and yet they can be very helpful.

MrC May 11, 2023 10:47 PM

Bruce, have you or Nathan been reading Cory Doctorow’s “enshittification” rants, or did you come to the same place independently?

JonKnowsNothing May 11, 2023 11:18 PM

@MarkH

Not a personal attack, only an observation of the impact of LLMs on human behavior. A total change from what one previously expected. It shows how easy it is to slide into using a tool without understanding that the tool is not really helping you in any way.

In this case, the activity was higher education. The tool subverted the intention of learning something of value. The cost of learning “the value” varies, but university is not cheap, at least not in the USA, costing from $5,000 to $150,000 per year.

So the tool robbed the person of the cost of their investment. They didn’t really learn anything; the information went into the tool’s database. The payoff wasn’t that great either: the difference between an A and a B? Or even the difference between an A and a D. A “D” means Try Harder; skating into an A because a tool writes better sentences doesn’t really help the person, nor their future employment.

The pointed part of the comment is to make people THINK. They will never be able to retract their actions, recover their lost opportunity nor discover the impact on their future lives.

An old proverb

Four things come not back:

* The spoken word
* The sped arrow
* The past life
* The neglected opportunity

Clive Robinson May 11, 2023 11:34 PM

@ Bruce, All,

A little synchronicity

Just popped up on Ars Technica,

“You can now run a GPT-3-level AI model on your laptop, phone, and Raspberry Pi”

https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/

A brief overview of why you might have GPT-3 on a google swatch by the end of the month 😉

More seriously, it’s the story of refining LLM base models so they can fit on ever fewer resources…

Remember though, this is not “lossless compression”: you do lose something with each refinement as you go down from 65B to 7B or fewer weights (B for billions of weights, the floating point numbers stored in matrices that form the filter).
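To see why the shrinking is lossy, here is a toy numpy sketch (illustrative only; real quantization schemes are cleverer than this crude rounding): snap the weights onto a coarse 4-bit grid and the originals cannot be recovered.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000).astype(np.float32)  # stand-in for model weights

# Crude symmetric 4-bit quantization: snap each weight to one of 16 levels.
scale = np.abs(w).max() / 7            # signed int4 range is -8..7
q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
w_back = q.astype(np.float32) * scale

# Nonzero round-trip error: information was thrown away for good.
print("mean |error|:", float(np.abs(w - w_back).mean()))
```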

The penultimate paragraph,

“As for the implications of having this tech out in the wild—no one knows yet. While some worry about AI’s impact as a tool for spam and misinformation, Willison says, “It’s not going to be un-invented, so I think our priority should be figuring out the most constructive possible ways to use it.””

But we all know that “Evil has a plan, Good has a beer”…

Tom May 12, 2023 2:17 AM

“Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case.”

Considering that AI, just a few months/years ago, could not create images and videos, could not pass an exam or even understand text, and now it can do all those things… I think it is massively overconfident to assume that somehow AI’s capabilities will more or less stagnate at that level, despite Moore’s Law, despite huge economic incentives, and despite a massive amount of research pouring into AI.

More generally, it feels like you (author and commentators) are not applying the Security Mindset (https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html) to AIs, and rather treat them similarly to a human agent or an organization.

Sumadelet May 12, 2023 5:32 AM

Considering that AI, just a few months/years ago, could not create images and videos, could not pass an exam or even understand text, and now it can do all those things…

I have not seen any evidence that AIs understand text. I agree that AIs have been used to create images and videos and pass exams.

The best that I can say is that certain AIs give an appearance (simulacrum) of understanding that fools many people; and quite possibly fools some people all of the time.

One can quickly get into the weeds of a philosophical debate about the nature of understanding. That is not what I am aiming to do. LLMs are remarkably good language processors, but understanding requires more than blending concepts encoded in language tokens in complicated ways. To use a computing analogy: the corpus of information that LLMs operate on is effectively a very long Turing machine tape. You won’t find understanding there. You might find it in the state machine that determines the Turing machine’s processing of the data on the tape. But that ruleset is necessarily huge, and in humans, at least, not well defined. The state machine reified by LLMs and other AIs is big, but not big enough, and the corpus needs to be ‘the real world’, and not just an edited, biased, textual description of the real world. It also needs to cope with real-time inputs and updates – to a self-modifying ruleset/state machine.

Anything that can be computed by a Turing machine that can modify its own ruleset can be computed by (a larger) one with a static ruleset: but the static ruleset will end up significantly larger. My belief is that ‘understanding’ will be found in the meta-rules encoded within the structure of the ruleset. Generating such rulesets (for AI) that work without rapidly becoming unusable is a problem that hasn’t been solved yet.

We know that the output of AIs cannot be trusted to be correct, or even based in reality. But they can still be useful. In much the same way, aircraft autopilots do not need to be perfect; they just need on average to have fewer accidents than humans. We have heuristics that tell us when it is reasonable to trust our fellow humans. We have not developed them for AIs yet. Some people trust AI output already; others are not so sure. We are going to end up trusting them, and it will go wrong. I hope we will be able to recover, much as society works its way around fraudsters. ‘Bad’ AIs are inevitable – it is up to us, as a society, to decide how to deal with that.

Winter May 12, 2023 5:43 AM

@JonKnowsNothing

In this case, the activity was higher education. The tool subverted the intention of learning something of value.

The underlying conflict is between students and schools/society. Basically, Students need a certificate [1], preferably with high marks. They might want to learn stuff, but they really need the certificate. Society wants Students to learn useful skills and knowledge. Grades and marks are the way Schools/Society make sure Students actually learned what they had to learn.

But Schools also need students to get grades to pay for the teachers and schools. This leads to perverse incentives.

So, students want high grades, and might as well forget what they learned immediately after getting the grade. Schools need to deliver as many certificated students as fast as possible to get more money. But they cannot change the tests, as these are provided by an independent agency to prevent the schools from just handing out certificates for money.

Enter: Teaching to the Test. Just focus on making Students ready to get through the test, whether or not they learn anything useful. LLMs are just another technology to subvert standard testing. So you must adapt testing. It is an arms race. [2]

This is helped by the fact that testing skills is difficult and expensive. Therefore, tests tend to consist of proxy problems where students are asked to regurgitate factoids in a multiple-choice test [3]. These factoids are pretty useless, but they should sample all the knowledge stored.

But there is an easier and shorter way to pass the test: just learn all the factoids by heart. Teaching to the Test. You pass the test, but learn little to nothing.

Could it be better?
Obviously. When skill and knowledge really matter, testing is rigorous:
‘https://blog.collegevine.com/hardest-tests-in-the-world

Or, when a University really cares:
‘https://muse.jhu.edu/article/180209

The most important change came in the early nineteenth century, when paper-based examinations replaced a culture of Latin oral disputations and catechetical lectures on authoritative texts. Central to this development became the Mathematical Tripos, a grueling nine-day written examination that capped students’ undergraduate studies. The Tripos triggered further changes. Students hired private coaches who gave lectures to ten or so fee-paying students at a time, setting progressively difficult problems for them to master.

[1] Also the parents of the students want them to get a certificate.

[2] ‘https://staff.ki.se/chatgpt-and-assessment

[3] If you want an amusing lament on this:
‘https://en.wikipedia.org/wiki/A_Mathematician%27s_Lament

Clive Robinson May 12, 2023 6:29 AM

@ Sumadelet, Tom,

Re : What is understanding?

“I have not seen any evidence that AIs understand text.”

What do you mean by “understand”?

AI systems can translate from one language to another with reasonable accuracy… To do this they have to know what the words mean in one language, set them in a context, then output the meaning of the words within that context in another language.

Do they need to know what a Tulip is or a swan in an observed physical sense?

No, but then neither do blind people, who can and do translate.

You could then ask a more vexed question: if you and something else can translate to the same level, what is jointly required?

How about “reason”?

I think you will find the answer for both is no.

From my point of view, the search for AI will be in part like the search for deities.

That is, they are the invention of mankind, to be somehow better than mankind, for mankind to strive to become like in some way. But as mankind approaches, the goalposts are set further back, so no matter how mankind improves, like the carrot on the stick it always remains just out of reach.

But also in part a chase to find some essence that eludes us. If we cannot accurately describe what we mean by “understand”, then how can we say that we understand but the AI does not?

Researchers are trying to solve this currently by using a process that has been proven inadequate to the task. That is, by assuming a “black box” and trying to correlate inputs with outputs,

‘https://arstechnica.com/information-technology/2023/05/openai-peeks-into-the-black-box-of-neural-networks-with-new-research/

We know observing the black box is inadequate for several reasons. The first is,

“Can we tell if the process is deterministic or not?”

Well, the answer is no. Take the example of a cryptographic system: you put in plaintext and some text comes out. It may not be the same size, but is it deterministically generated or not?

Well, you could sit there watching the inputs and outputs, and if the same input produces the same output repeatedly, then you could tentatively say yes, it’s deterministic. But the opposite is not true. That is, a simple substitution cipher can be put into any manner of modes that change the output deterministically; a clock or a counter would mostly stop input-to-output correlations becoming apparent to an observer…
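A toy Python sketch of the point (illustrative only, not a real cipher): the class below is fully deterministic, yet because an internal counter is mixed into the keystream, feeding it the same input twice gives different outputs, so an outside observer cannot establish determinism by correlation alone.

```python
import hashlib

class CounterCipher:
    """Fully deterministic, but stateful: a counter is mixed into the
    keystream, so identical inputs never yield identical outputs."""

    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0

    def encrypt(self, plaintext: bytes) -> bytes:
        self.counter += 1
        pad = hashlib.sha256(self.key + self.counter.to_bytes(8, "big")).digest()
        return bytes(p ^ k for p, k in zip(plaintext, pad))

c = CounterCipher(b"secret")
print(c.encrypt(b"hello").hex())  # same input...
print(c.encrypt(b"hello").hex())  # ...different output, no randomness anywhere
```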

But there is a more subtle problem, which is what Searle’s Chinese Room is all about. You can find people pondering this and AI LLMs on the internet,

‘https://www.linkedin.com/pulse/observations-large-language-models-chinese-room-experiment-ng

Though was it written by human or AI?

Was understanding required?

All we as observers can say is,

The chase for understanding has started, but is it a Red Queen race or not?

Sumadelet May 12, 2023 7:38 AM

@Clive Robinson

You fell into the trap.

I specifically said “One can quickly get into the weeds of a philosophical debate about the nature of understanding. That is not what I am aiming to do.”

Turing also said:

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion.

But he also proposed the Imitation Game, which is a practical approach to answering the question, if you consider intelligence to require understanding. It’s why Searle’s room is so challenging: if the outputs are identical with those from something you regard as having understanding/being intelligent, then you may as well regard it as being intelligent or having understanding – whatever that is. Machine Intelligence is carefully not defined by Turing – he just gives one practical test, and there could well be others.

Note that the Imitation Game is a statistical test, not binary, where the Interrogator is attempting to determine the sexes of the participants, not whether they are intelligent. If the machine’s answers are good enough that it succeeds at the game at least as well as a human, then the natural conclusion is that intelligence is happening – if you assume that humans are intelligent.

The end result is that asking whether AIs are intelligent or not isn’t very useful. People obviously think that for certain tasks they are ‘good enough’, and as usual, some people are over-optimistic about their actual capabilities: possibly dangerously so. If they are ‘good-enough’ at certain tasks with a failure rate at the task that is as good, or better than a human, then for practical purposes, they are as intelligent as a human in that (quite possibly) limited domain. LLMs have extended that domain by quite a bit, recently.

The world did not end when computers surpassed humans in the ability to win games of chess. Or Go. It is not ending when computers can summarize text better than most humans or predict molecular structures better than most humans, or most scientists, for that matter. They can likely write answers to exam questions in limited domains better than many, if not most humans, and quite possibly screenplays, and romantic novels in the style of Barbara Cartland. They are expert in areas of abstruse information, but like the unworldly academic stereotype, are completely unable to tie their own shoelaces.

Currently AI/LLMs can produce text that is well-formed, impeccably spelled, grammatically correct, plausible, believable, and utterly wrong with no basis in reality: and we can’t get the AI to generate a good argument for why what it says is correct – although they try, by giving false references (the ‘form’ for believability is correct, but the content isn’t). Once false information from AIs is accepted into scientific journals and oozes out via Wikipedia and other channels into the real world, we will have trouble in store. It is already starting – so even the content of real references will contain false information, which will make life difficult.

Current AIs are tripped up by critical conversations, and being able to follow up references to compare with a store of accredited truth. When machines can deceive interrogators as well as humans can, we will be well into problem territory.

We already trust Machine Intelligence. If a spelling checker makes a mistake, people notice, and we consult a dictionary. If a grammar checker makes a mistake, we might consult someone more knowledgeable than ourself on grammar. If a text generator makes a mistake, we might confirm our understanding by looking at reference works or asking an expert. There are checks, and balances. Most of the time, we assume that the machine gets it right. But if the machine is the expert, we have a problem. And we are nearly there now. If the Google protein structure predictor makes a mistake, we might not even know about it. How do you give an AI self-criticism and a conscience, or ethics, and a moral code?

To a certain extent, asking if AI understand things is irrelevant. Enough people behave as if they do, that for (some) practical purposes, the answer is “Yes”.

We live in interesting times.

modem phonemes May 12, 2023 11:01 AM

Re: AI understands

“The fact that Babbage’s Analytical Engine was to be entirely mechanical will help us to rid ourselves of a superstition. Importance is often attached to the fact that modern digital computers are electrical, and that the nervous system also is electrical. Since Babbage’s machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance.”

Turing, Alan. Computing Machinery and Intelligence (1950)

So your AI is fundamentally no different from an egg-beater, just more complicated mechanically and whipping up a data soufflé rather than actual eggs.

So if AI understands, your egg-beater likewise understands.

QED. 😉

Winter May 12, 2023 11:24 AM

@modem

So your AI is fundamentally no different from an egg-beater, just more complicated mechanically and whipping up a data soufflé rather than actual eggs.

That is not very helpful. For instance, egg-beaters are not Turing complete.

Such comparisons are just like arguing that humans are no different from sponges. There is a sense where this is true, but in almost every context this is a meaningless comparison.

Also, people are often quick to dismiss Turing’s ideas while not having spent much thought on the arguments.

modem phonemes May 12, 2023 1:15 PM

@ Winter

Such comparisons are just like arguing that humans are no different from sponges.

They aren’t. You can go by mechanical alteration, removal, and addition from an egg-beater to a Turing machine. At what point in this process does something new enter that allows “understanding”? However, it’s not possible to go from living sponge to living human by any set of material alterations. Sponges and humans exhibit fundamentally different capabilities that can’t be bridged by material changes.

Sumadelet May 12, 2023 3:29 PM

You can go by mechanical alteration, removal, and addition from an egg-beater to a Turing machine. At what point in this process does something new enter that allows “understanding”?

Emergent properties/behaviour and qualia.

At what point does a wavelength of light become red?

Light is not red. It has a wavelength. Humans tend to regard a certain range of wavelengths as red, or rouge, or gules… The quality of redness is not an intrinsic property of light, just as intelligence, or understanding, is not an intrinsic property of a collection of atoms. Connecting our subjective qualia to physical phenomena is an unknown process, about which there is a great deal of argument.

Is a Turing machine condemned to be a philosophical zombie, or does it become intelligent when it can, for example, discuss with us the nature of qualia as experienced by it?

It may well be that a sufficiently complex Turing machine behaves in a manner indistinguishable from human intelligence, at which point, either we are all philosophical zombies, as Daniel Dennett holds, or the machine is intelligent (and Searle’s Room can be too).

Emergent behaviour can be highly complex, such as the behaviour of a flock of birds, a shoal of fish, or an anthill. So intelligence could ‘simply’ be the characteristic of emergent behaviour in a sufficiently complex system. And a system built from simple elements is not necessarily predictable. The basic rules of arithmetic are very simple, yet Gödel showed it is possible to use them to ask questions that cannot be answered – they are undecidable, and it is possible to make statements that are true, but cannot be proven.

You can’t take a reductionist approach and say that because something is decomposable into smaller elements that are not intelligent that an ensemble built of such smaller elements cannot be intelligent.

The debate boils down to a simple set of questions:

Can a Turing machine be intelligent and/or understand things? If so, how? If not, why not?

(And, given that a theoretical Turing machine has a potentially infinite tape and set of states, can a finite Turing-like machine buildable by humans exhibit intelligence and/or understand things?)

Some people take the view that, since human beings are intelligent, we are an existence proof that a sufficiently complex agglomeration of atoms can exhibit intelligence. That does not preclude computers from the possibility of being an equally intelligent agglomeration of atoms. We are built of the same stuff as eggbeaters.

Sumadelet May 12, 2023 3:58 PM

As a light-hearted, different view on intelligence, I can recommend Terry Bisson’s short story “They’re Made Out of Meat!”

‘https://web.archive.org/web/20190501130711/http://www.terrybisson.com/theyre-made-out-of-meat-2/

vas pup May 12, 2023 4:50 PM

EU lawmakers take first steps toward tougher AI rules
https://www.dw.com/en/eu-lawmakers-take-first-steps-toward-tougher-ai-rules/a-65585731

“Key committees in the European Parliament voted to back draft legislation on artificial intelligence, along with its amendments to rein in generative AI like ChatGPT.

The vast majority of European lawmakers (MEPs) on the committee on civil liberties and on consumer protection voted in favor of the draft AI Act.

According to a statement released after the vote, the text outlines curbs on how the technology can be used across Europe while simultaneously allowing for innovation.

First proposed in 2021, the AI Act would set out rules governing any product and service that uses an artificial intelligence system.

==>Based on the four ranks of AI (from minimal to unacceptable), riskier applications will face tougher rules, requiring more transparency and accuracy.

!!!Policing tools which aim to predict where crimes will happen and by whom are expected to be banned. Remote facial recognition technology will also be banned, with the exception of ==>countering and preventing a specific terrorist threat.

The aim is “to avoid a controlled society based on AI,” Benifei had said earlier on Wednesday. “We think that these technologies could be used instead of the good also for the bad, and we consider the risks to be too high.”

While the original document does not cover chatbots at length, lawmakers added an amendment to put ChatGPT and similar generative AI on the same level as high-risk systems.

Once approved, the EU says the law would comprise “the world’s first rules on artificial intelligence.”

The agreement between the two parliamentary committees on Thursday is merely the first step in a long and grueling bureaucratic process which could take years before it
becomes a law across the EU’s 27-member bloc.”

Winter May 13, 2023 1:40 AM

@modem

However it’s not possible to go from living sponge to living human by any set of material alterations.

Every human started life as a single cell that grew through a number of stages to become a human. At any point, the simple change of some DNA base pairs would have changed the course from human to sponge, or roundworm or fish or bird or dog.

The only thing that makes a sponge different from a human is the code of their DNA.[1]

[1] There is more, but it is all rearrangements of the same chemical building blocks.

Clive Robinson May 13, 2023 4:54 AM

@ Winter, Modem Phonemes, ALL,

“Every human started life as a single cell that grew through a number of stages to become a human.”

It’s important to remember two things from that,

1, Organics are self replicating.
2, Organics have physical agency.

Currently the “electronics” of AIs have neither capability, which in many ways is “holding them back”.

However, it’s very definitely guaranteed that in time organics will give AIs first “mechanics”, then the ability to self-replicate.

In fact it can be argued that we’ve already partially done the first part with self-driving cars, and to a lesser extent directable “auto-pilots” in drones and the like.

One limitation organics currently have is that nerves are a very slow method of signalling between the “brain” and sensors and actuators. Electronic signalling can be faster, a lot faster, and importantly, unlike nerves, does not actually require a physical connection.

How long before the likes of drones become AI controlled?

Some “delivery companies” are already looking into it. As are numerous guard-labour-facilitating military and law enforcement equipment manufacturers…

It will not be long before some “hobbyist” connects one or more retail drones to an AI running on a home computer or similar. Also with a limited amount of “agency” by way of a lightweight “Single Board Computer” on the drone.

I would estimate that this is less than a year away, especially with the likes of what is going on to the far east of Europe currently driving it forward at an increasing pace,

https://www.nytimes.com/2023/05/08/world/europe/ukraine-russia-attack-drones.html

With the only real defence against drones being jamming of their control channels, a little self-autonomy via “guided autopilot” would render jamming less effective.

In fact it would be relatively easy to design a semi-autonomous drone to “fly down the beam” and destroy/kill the source of jamming, thus opening up for a heavier bombardment of mini-drones.

Drones have in effect rekindled the old ECM-ECCM battle, but on a different front and in a very different way…

modem phonemes May 13, 2023 11:46 AM

@ Winter @ Sumadelet @ Clive all

Re: “AI understands”, artifacts, nature, life and everything

Computing, as Turing made it precise and defined it, seems to cover what AI machines do. If Turing is right, no amount of additional machine complexity will change anything fundamental, since a universal machine is already available. AI is constrained by this. Once the machine and its data are given, the activity is mechanical, and in principle literally an action of a physical mechanism, like a complicated clock-eggbeater. There is no place to invoke “understanding” in the ordinary sense. The programmers that maintain AIs are painfully aware of this.

By reflection this raises the question of whether what humans call understanding, thinking, intelligence and so forth are just computation in the above sense of Turing. A closely associated question is whether it is possible to build an animal (including a human) from chemicals and simpler matter, just as we build machines. DNA may account for the physical bodies of all life, but the question remains whether the body is all there is in the living, and so whether everything really is, in essence, an artifact.

Aristotle argues that this position cannot account for what we observe in the real world. The natural and the artificial are essentially different, and living things have a form beyond their matter, which can be termed “soul”, that accounts sufficiently and necessarily for their individual unity and activity. In humans, at least, this includes knowing, which means knowing that we know, and knowing that what we know is true. “Understanding” is just a type of knowing. None of this can be said of artifacts, and in particular computing machines.

modem phonemes May 13, 2023 12:52 PM

Re: causes, nature, artifacts, form and everything

It seems Aristotle is little regarded now. However, his arguments on these matters are cogent. The “moderns” (Bacon, Descartes, scientists in the quantitative tradition, Russell, Popper, etc.) also offer cogent arguments. Where these thinkers disagree must then be traced to their different starting points. The real problem is to somehow determine what the correct starting points are –

“parvus error in principio magnus est in fine, secundum Philosophum”.

A small error in the beginning is great in the end, according to the Philosopher.

Rick May 13, 2023 1:54 PM

Well, I’ll just say this, if I may. I remember when Yahoo and Google exploded onto the scene. I have always been a person who learns and researches for fun. In those early days these search engines were glorious. Every answer to every question I had was at my fingertips, with (most times) links to clear, concise information with footnotes, documentation and clear, honest sourcing. I was amazed. I even bought an aggregator type of thing for search engines to provide even more sources, and it was bliss for a guy like me. Now look at it. Google, I mean, or just about any other search engine. I personally doubt that we as a species have the restraint, discipline or moral fortitude to create anything now without it being corrupted by weaponization, monetization and greed. I hope I’m wrong on this. Time will tell.

Clive Robinson May 13, 2023 1:59 PM

@ modem phonemes, Sumadelet, Winter, ALL,

Re : From whence things come.

“If Turing is right, no amount of additional machine complexity will change anything fundamental, since a universal machine is already available. AI is constrained by this. “

Even Turing knew what you are saying is insufficient.

It’s why Turing very specifically included a source of “Randomness” in his actual later computer designs.

In science we do not believe in “something from nothing” or “magic”; that is, we have to be able to say not just where something comes from, but how.

Take information and entropy. Entropy has been described as,

“Moving from a state of order to a state of disorder”

That is,

“You move the bits around, but you don’t end up with any more bits than you started with”.

In the general case entropy moves things in a way that establishes approximate uniformity in any given space, based on probability. However, it does not preclude the apparent opposite, of what appears disordered becoming ordered in either a part of the space or all of the space; it just gives it an extremely low probability.

As my son learned at quite an early age, a solid block of Lego bricks is not really of interest. You first have to break the bricks apart and reorder them into a new, interesting shape, of which the number possible increases with the number of blocks available. But always with by far the greater number of potential reorderings being uninteresting.

Thus the question arises,

“What makes the difference between interesting and uninteresting?”

The answer to which is,

“It does not matter, as it’s based on a fitness function, or a function comprised of many fitness functions”

You can thus define a simple function that,

“A vehicle has a body to carry a load and supporting structures to allow it to be moved on a surface or in a space.”

Which covers a leaf floating in a body of water, or even blown in the wind, through to our most sophisticated and often complex structures such as large ships, aircraft and rocket systems. As well as quite a few others that have been less successful, such as the unicycle, snowshoes, etc. But by far the greatest number of possible objects fail even this loose fitness function.

Obviously, the more fitness functions there are, the more possibilities that a random input will match one of them.

Further it’s fairly obvious that a structured series of fitness functions can create significantly more matching objects than there are matching functions.

Such individual functions have in the past been called “Matched Filters”, made of very primitive MAD (“Multiply and ADd”) structures arranged in chains and nets etc., as the toy sketch below shows.
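A matched filter really is just multiply-and-add; here is a minimal numpy illustration (the numbers are made up purely for demonstration):

```python
import numpy as np

template = np.array([1.0, -1.0, 1.0])          # the pattern the filter is "fit" for
signal = np.array([0.1, 1.0, -0.9, 1.1, 0.0])  # a noisy input

# Slide the template along the signal; each score is one multiply-and-add.
scores = [float(np.dot(signal[i:i + 3], template)) for i in range(3)]
print(scores)  # the peak marks where the input best matches the filter
```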

The trick to building such matched filters is twofold, and in no way magic or invoking spiritual or divine guidance: just fitness functions selected by a simple process we call evolution.

It works for all “living things” even those without apparent agency, and it works for LLMs and similar AI.

The two tricks are,

1, Find suitable fitness functions.
2, Compress them to a reduced set of functions.

Winter May 13, 2023 2:12 PM

@modem

DNA may account for the physical bodies of all life, but the question remains whether the body is all there is in the living, and so whether everything is just really in essence an artifact.

Is there any evidence that there is more needed than the body?

lurker May 13, 2023 2:25 PM

@Winter

Define the difference between a body in a mortuary tray, and the same body walking around the day before.

Clive Robinson May 13, 2023 2:46 PM

@ Winter, Modem Phonemes,

Re : From whence things come.

“Is there any evidence that there is more needed than the body?”

Outside of mystical, magical, tail-chasing, woo-harr nonsensical reasoning, not really.

All that is needed is the ability for a body to replicate, with changes based on a “fitness function” imposed by the environment the body is in.

That’s it, and it explains all but one thing: where the environment ultimately originated, if anywhere.

For most of mankind’s written record, this “Where did it all start?” question has actually shown mankind’s inability to reason, and to a certain extent still does.

This “ultimate being” argument can quickly be seen as nonsense by asking “Who made the creator?” and “Where did that clay come from?”

The argument you get given back boils down to “The King Game” of,

“Either believe in our mystical woo woo nonsense that gives us power to do what we please, or get nailed to a tree as an example of ‘Might is Right’, and you don’t have might”.

Even Shakespeare took issue with this nonsense, but also exercised sufficient care to avoid the nails/rope and woodwork.

modem phonemes May 13, 2023 2:59 PM

@ Clive Robinson

Turing very specifically included a source of “Randomness” in his actual later computer designs.

From what I read, adding randomness does not extend what Turing machines can do.

modem phonemes May 13, 2023 4:17 PM

@ lurker

Define the difference …

Your remark touches the spot with a needle.

It points to the issue of unity. As Aristotle asked, “why is it a one, and not a heap?”. Unity is said in many ways. The living body has a unity that the dead body (the moment after death) does not. What changed? A purely material answer doesn’t seem to work.

@ Clive

All that is needed is the ability for a body to replicate, with changes based on a “fitness function” imposed by the environment the body is in.

This just pushes the search for the source of order and unity from one place to another without providing an explanation. Where did the fitness function come from? Why should “stuff” be subject to it?

Clive Robinson May 13, 2023 8:45 PM

@ lurker, Winter,

Re : Life and death.

“Define the difference between a body in a mortuary tray, and the same body walking around the day before.”

Go back a necessary step.

From the point the body stops walking…

I don’t know if you’ve seen an “organ harvest”, but just about all of the body… we actually know how to remove and how to use for transplants… effectively as spare parts for others in need. Which means “the body” could be up and walking in many other bodies shortly after “certified brain death”…

Even the killing of a person by “lethal injection” (potassium chloride) does not “kill the body”; it simply stops the heart pumping, which stops blood flow, which causes “brain death” within a few minutes, whilst the heart and other organs remain viable. Similar with electrocution, if done correctly.

So the real question of,

“What is so fragile about the brain?”

Thus arises, for which we don’t actually have answers, rather just more questions.

We know that parts of the brain can be killed, but not the person. That is what happens with a stroke. We also know from lobotomies that considerable brain damage –from having an ice pick shoved through the eye socket and waggled about– will not kill you, though it will impair you.

Experiments in the past, which are now not lawful to carry out in several Western and many other countries worldwide, have shown that brains from animals can be removed from the animal and in effect kept alive, likewise the animal’s body. Though it appears from brainwaves that dysfunction sets in fairly rapidly, the cause of which was postulated to be the loss of sensory input causing some kind of madness etc. Later research, with the likes of playing with people’s vision using eyepads and goggles, suggests the human brain can “work around” the issue by adjusting other senses etc. So whilst it may not be able to “repair itself”, it appears capable of re-routing functions.

So what actually “kills a human”? Well, it appears at some point a chemical message spreads out causing cellular death. Stop that, and well, you could end up half dead, half alive… But these days also capable of recovering with medical support… There are cases where people with what would have been fatal heart damage, when put on heart/lung bypass machines, have had their hearts recover measurably, and sufficiently to be taken off bypass.

There are drugs like “Quinidine” [1] that can help prevent “Sudden Cardiac Death” (SCD) Syndrome, where the heart rhythms become fatal as they inhibit the heart from pumping.

Recent news suggests there is another compound, “ARumenamide-787”, that actually does the job better than both quinidine and acacetin (a plant extract from safflower).

Children sometimes ask why, if we have two of many organs, we have only one heart… The answer might be that evolution has decided at some point that “being dead” is better than “being half alive”, and thus made it an easy “single point of failure” should injury/insult by accident or disease become too significant.

With some people in Silicon Valley already going down “The Vampire Path” of getting blood transfusions from healthy “late teens”, it would not surprise me if they are not already looking at how to get around the single-heart issue… NASA, for instance, some years ago developed a small pulseless pump, based on the Archimedes screw, that could be implanted as a “Left Ventricular Assist Device” (LVAD). So people are definitely looking at the tech to “stay alive”, including the use of various hormones to keep much of the body in the 25-45 age range.

[1] Quinidine comes from the same “Peruvian bark” that the anti-malaria drug quinine comes from. Whilst it’s been known to work for some people for centuries, it’s also known these days to react with all sorts of other drugs. As a result of being out of patent etc., it’s becoming very difficult to obtain… Thus a “gap” has been left in potential life-saving medications in some countries since the beginning of this decade.

Winter May 14, 2023 3:10 AM

@Clive

So what actually “kills a human” well it appears at some point a chemical message spreads out causing cellular death.

Not so much a message, but lack of oxygen and food, as well as the accumulation of waste products, kills every single cell in the body.

Winter May 14, 2023 3:16 AM

@lurker

Define the difference between a body in a mortuary tray, and the same body walking around the day before.

A dead body does not function anymore. Like the difference between a computer with and without power. The difference being that animals cannot reboot after a power outage.

Wetware needs power to prevent it from disintegrating. If the power supply (oxygen and food) stops, the cells disintegrate.

That holds for (almost) all animals, sponges or humans. So whatever makes humans “alive” also makes sponges alive.

Clive Robinson May 14, 2023 4:38 AM

@ Winter,

“Not so much a message, but lack of oxygen and food, as well as the accumulation of waste products kills every single cell in the body.”

They will “eventually”, quite slowly and extremely painfully, due to the death of parts being unsynchronized.

But as I’ve noted, “death” is caused by something else that makes the process much more rapid and synchronized, and in many cases not painful.

That is, there is a reversible build-up to a tipping point into a cascade phase of systemic function cessation. Modern medical science has made the cessation reversible long past that natural tipping point, and can pull most of the organs back from beyond it, to last in new hosts for decades.

In the past it’s been assumed it’s due to potassium channels, more recently sodium channels as well, causing some form of neurological block.

Clive Robinson May 14, 2023 5:52 AM

@ modem phonemes,

Re : Random and Turing.

“From what I read, adding randomness does not extend what Turing machines can do.”

It does, for many problems / functions / computations.

You are aware that Turing machines/engines are all about “moving and replacing symbols” on a tape, not even logical functions or simple mathematical functions like add or subtract, let alone multiply or divide. It is important to realise that the Turing-Engine and Turing-Tape are actually independent entities.

The “secret sauce” behind a Turing-Engine which makes it “complete” is conditional branching and the ability to move to anywhere on its tape. Therefore extremely simple rule sets can be built up into more complex rule sets, along with allowing for adaptable recursion.

However that alone is insufficient, because there are problems we know a Turing-Engine can not calculate, such as Cantor diagonals and the halting problem. Such things are called Turing-undecidable problems. But the fact that a Turing-Engine can not calculate them does not mean there are no answers for them (Kurt Gödel proved that in the early 1930s).

But we also know that if we have an infinite tape, then by the simple logic of combinations the answers to these problems will be on the tape somewhere.

Such an infinite tape of data is called a Turing-Oracle.

A Turing-Engine with access to such an infinite tape of data will be more powerful than a Turing machine alone; for example, if the tape contains the solution to the halting problem or any other Turing-undecidable problem.

But consider further: a Turing-Oracle filled with “random” data is, by definition,

“Not computable.”

Because there are a limited number, or “only countably many”, problems/computations, but… an unlimited number, or “uncountably many”, Turing-Oracles (tapes).

So a Turing-Engine with a “random” Turing-Oracle can compute things that a Turing machine cannot.

This was known before the first Turing Engine was ever implemented. Charles Babbage had worked out the importance of conditional branching and the ability to jump backwards and forwards in the set of instructions to his “Mill”, and in the incomplete archives of his life and work there are hints he realised the universal nature of his Mill, but not that he realised it was as far as fully deterministic mechanisms could take us.
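To make the branching point concrete, here is a minimal sketch of a Turing engine in Python; the rule table (a unary incrementer) and the dict-based tape are invented purely for illustration:

```python
# Minimal sketch of a Turing engine. Each rule maps
# (state, symbol) -> (new state, symbol to write, head move);
# the (state, symbol) lookup *is* the conditional branch.

def run(rules, tape, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))           # sparse stand-in for an unbounded tape
    for _ in range(max_steps):
        symbol = cells.get(pos, "_")        # blank cells read as "_"
        if (state, symbol) not in rules:    # no applicable rule: halt
            break
        state, cells[pos], move = rules[(state, symbol)]
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Invented example rule set: scan right over 1s, append one more 1, halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run(rules, "111"))  # -> 1111
```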

Winter May 14, 2023 6:30 AM

@Clive

But as I’ve noted “death” is caused by something else, that makes the process much more rapid and synchronized and in many cases not painful.

If your neurons have no oxygen, they will not signal. So your senses will shut down before you can feel any pain.

I doubt whether programmed cell death [1] will play a role. That is more a feature of multicellular life and it takes time and energy.

[1] ‘https://en.m.wikipedia.org/wiki/Programmed_cell_death

Clive Robinson May 14, 2023 6:42 AM

@ modem phonemes

Re : Environment and Fitness functions

“This just pushes the search for the source of order and unity from one place to another without providing an explanation. Where did the fitness function come from ? Why should ”stuff” be subject to it ?”

Actually it does not “push it” anywhere, it “pulls it”, via gravity. It’s one of the reasons the Higgs boson was called “The God Particle”.

The fitness function is about drawing chaos into a semblance of order and that is exactly what gravity does for us.

I’ll skip explaining the early stages of the building of planets and stars by gravity; you can look that up in any number of undergraduate and later texts on astrophysics.

Let’s consider a planet surface with a variable-thickness crust over a liquid mantle, in the presence of other celestial bodies exerting gravitational force on the planet.

The result will be gravity inducing tectonic movement of the planet’s surface that causes both the pushing up and pulling down of matter in a slow churning motion. Again, you can look this process up in any number of educational and academic texts.

If the planet is large enough it will have an atmosphere held there by gravity. Also it will orbit around a sufficiently large celestial body, often a star or sun. If the distance is right it will be in the “Goldilocks Zone” where there are the right conditions to support what we believe necessary for carbon based life. Again the planet being in the zone is due simply to gravity, and you can look it up in texts.

Now consider that tectonics means there are surface highs and lows, which water will run down to reach a minimum due to gravity. The water is lifted back up by a change in density in a gravitational field (which is also why a candle burns on Earth’s surface and goes out in freefall).

Consider two bodies of water at a near surface level, joined by a channel. Water will flow slowly from one to the other. In that process the water will carry objects with it. So again gravity is responsible for the fact that seas are full of mineral salts whilst higher lakes are usually “fresh water” low in mineral salts.

Now consider that the shape of the channel acts as a “filter”: it allows the movement of smaller particles more freely than larger particles.

That filtering is a primary “fitness function”, caused by the environment under the influence of gravity.

From that you can build most other environmental fitness functions. Eventually organisms will develop, and they will be moderated by gravity at all stages, even when they develop sufficient complexity to have independent movement.

So if you insist there must be a pair of hands of a creating deity, then inside our universe it’s mindless gravity, forever in human terms, pulling…

As I’ve mentioned before, ancient man’s belief in the Sun and the Moon as the givers and maintainers of life makes way more sense than what is worshiped today, which is basically some “stale, white, and male” copy of ourselves with “bad ’tude” and some nonsense about seeing everywhere, controlling everything, and being everywhere… invented to control the masses via the “King Game”.

Sorry, no magic, no intelligence, nothing to have divine faith in; just the weighty subject of physical mass under the force of gravity is all that is required. Which can and has been modeled under a very simple set of equations for quite some time now.

But the King Game goes on, with its need to make people bend to the will of a very few psychopaths and similar by charlatanism, fear, and cognitive bias giving rise to self-deception by the masses.

nofool May 14, 2023 7:18 AM

‘We will all soon get into the habit of using AI tools for help with everyday problems and tasks.’

Speak for yourself, fool.

Winter May 14, 2023 8:06 AM

@Clive

Necroptosis

That seems to be in response to intracellular pathogens. But I know of little research done on programmed cell death in cells that die of oxygen shortage, because that death is rather fast.

When the heart stops, every cell is starved of oxygen within seconds to minutes. Any cellular response that requires energy will be impossible.

modem phonemes May 14, 2023 2:23 PM

@ Clive Robinson

Kurt Gödel proved that in the early 1930’s

The undecidability results (Church, Gödel, Turing, etc.) are not valid proofs, because they rely on actual (or completed) infinities. These objects don’t exist, even in the imagination. It’s way past due, to reverse Hilbert [1], that mathematics disabuses itself of this Cantor’s hell.

Their work on recursion and algorithms survives in modified form, however, in the valid mathematical world of incomplete infinities (where one can always “add one more” but there is never a completion of this sequence) as the study of feasibility and complexity of mathematical computation. This is much more interesting theoretically and practically because it casts light on what can actually be done in the real mathematical and physical worlds, and with what resources. [2]

In this context, a random section of tape is just more data like any data, so adds nothing to a machine’s capability.

  1. https://en.m.wikipedia.org/wiki/Cantor%27s_paradise
  2. https://web.math.princeton.edu/~nelson/papers/e.pdf

Clive Robinson May 14, 2023 3:45 PM

@ modem phonemes,

Re : Random and infinity.

“In this context, a random section of tape is just more data like any data, so adds nothing to a machine’s capability.”

Sorry that is a false argument, and always has been. There is no such thing as

“data like any data”

Data has meaning or it does not have meaning.

Which it is depends on many things. All you can say is that the longer a tape of random data is, the more likely it is to have one or more sections of data with meaning under any given set of circumstances.

The same applies even with a short random tape where as little as one symbol can have meaning (as in Y or N encoded on a single bit or the sign of a number).

Arguably then, all data, random or otherwise, short or long, has meaning under some given set of circumstances, and not under others.

A subject that has come up with very real interest quite recently: one of the interesting things coming out of LLMs is how big a “real” has to be in the matrix of weights of the vectors in the filter network.

In theory each real is infinite in scope, capable of finely encoding any information in a space, but in practice it is nothing of the sort.

In fact some are finding that very short bit lengths, a couple of short natural numbers, suffice quite well, with as little as 4 bits being sufficient. Which, when you have multiple billions of such weights, can make for very significant savings in various resources, not just memory (tape).
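A minimal sketch of the idea, assuming plain symmetric per-tensor quantization (real deployments use fancier schemes such as per-group scales; the numbers here are illustrative, not any particular LLM’s method):

```python
import numpy as np

# Symmetric 4-bit quantization: each weight becomes a small integer
# in -7..7 plus one shared float scale.

def quantize4(w):
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize4(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(8).astype(np.float32)
q, s = quantize4(w)
print(w)
print(dequantize4(q, s))  # close to w, at roughly 1/8th of float32 storage
```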

modem phonemes May 15, 2023 1:30 AM

@ Clive Robinson

how big a “real” has to be

Is this a real number? –

Roll a 10-sided die to get a number between 0 and 9 and divide by 10; roll again, get another and divide by 100; roll again, get another and divide by 1000; etc., adding them up as you go, i.e.,

d1/10 + d2/10^2 + d3/10^3 + …

where 0 <= d1, d2, d3, … <= 9 are chosen randomly.

By the typical Cantor or Dedekind (completed infinity) treatment, this has to correspond to a real number (because the partial sums are bounded and monotone increasing).

But the construction doesn’t seem to define anything.
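The construction can at least be run for any finite number of rolls; a minimal sketch (each run yields only a rational partial sum, never a completed expansion):

```python
import random

# Simulate d1/10 + d2/10^2 + ... + dn/10^n for randomly chosen digits.
def partial_sum(n_rolls):
    return sum(random.randint(0, 9) / 10**k for k in range(1, n_rolls + 1))

for n in (5, 10, 15):
    print(n, partial_sum(n))  # a finite decimal each time; the "real"
                              # itself is never exhibited
```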

Winter May 15, 2023 1:55 AM

@modem

The undecidability results (Church, Gödel, Turing, etc.) are not valid proofs, because they rely on actual (or completed) infinities.

Maybe you do not understand mathematical proofs?

Mathematicians do not doubt the three proofs. So I wonder why you think they are all incompetent? That you object to the use of “infinity” on principle is rather odd. You do not seem to object to calculus with infinitely large and small numbers, irrational numbers with infinitely many decimals, or the fact that there are infinitely many numbers.

There seems to be some very subtle difference between Euclid using the fact that there are an infinite number of points on every line, and a diagonal argument or a Turing machine with an infinite-length tape; a subtle difference that makes all mathematicians fools in your eyes.

It seems to boil down to the fact that, somehow, axiomatic systems are wrong because some axioms are not allowed for reasons of dogma. Mathematicians beg to differ and judge axioms on rigor and consistency only. With brilliant results.

In my experience, people claiming that all scientists are incompetent generally have lost connection to the field they are complaining about.

Clive Robinson May 15, 2023 3:42 AM

@ modem phonemes,

“But the construction doesn’t seem to define anything.”

What you’ve given, albeit confusingly, is the definition of a polynomial in base 10, where each successive term is reduced in range by the base.

It is a way to approximate a real in a way that is understandable, but it is not the real itself.

That same progression of 0…9 digits could just as easily be an encoding of a four-hour-long film in black and white or colour, with or without multi-D sound, or the plans for building the great pyramid of Giza. It is just a way of presenting any data in a range between 0…1 to an arbitrary resolution of choice.

It has no meaning with respect to “real world” objects without multiple layers of meta-data; something I would have expected you to be taught in K12 or high school education.

If you missed it, then I would have expected you to have it reiterated at various points in higher education depending on what you specialised in. Certainly it should have been reiterated when dealing with the basics of Abstract Data Notation.

All you’ve really described is a very basic way to “present” numerical information in a scalable way of arbitrary precision.
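To make the “presentation, not meaning” point concrete, a toy sketch (the three-digits-per-byte encoding is invented purely for illustration) that presents arbitrary bytes as a decimal in 0…1 and decodes them back:

```python
# Any byte string can be "presented" as a number in 0...1 by writing
# each byte as three decimal digits. The digits only get meaning back
# through the decoder -- the meta-data layer.

def to_unit_interval(data: bytes) -> str:
    return "0." + "".join(f"{b:03d}" for b in data)

def from_unit_interval(s: str) -> bytes:
    digits = s[2:]  # strip the leading "0."
    return bytes(int(digits[i:i + 3]) for i in range(0, len(digits), 3))

x = to_unit_interval(b"Giza")
print(x)                      # 0.071105122097
print(from_unit_interval(x))  # b'Giza'
```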

You could instead draw a line from left to right and give it a “natural number” scale, then at one end draw an up-and-down line, again scaled with “natural numbers”, then draw a third line from a natural-number point on the first scale to a natural-number point on the second scale. How long would that third line be?

What if I picked the same natural numbers on both lines, such as 1:1, 2:2, 3:3… It would be obvious that the length of the third line scaled by a simple ratio as well. But that would not give you the length of the line. However, “n x 1:1” is a very compact way of giving an infinity of such lines. I could change it to “n x 1:3” for a different infinity of lines. How many of the lines in the “n x 1:1” set match for length those in the “n x 1:3” set?

Obviously all those third lines have a precise length, but how to present it in a meaningful form? The use of what is a right-angle triangle allows concise presentation of line lengths that never fall on natural numbers on the two scales, no matter how long you make them.
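For the “n x 1:1” case the point is just Pythagoras; as a worked line (in LaTeX):

```latex
% Third line from (n,0) to (0,n), i.e. the "n x 1:1" set:
\sqrt{n^2 + n^2} = n\sqrt{2}
% irrational for every natural n, so it never lands on a
% natural-number mark of either scale.
```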

But what of that third line that passes through two natural-number points, one on either natural-number scale; can you reason further?

The simple answer is yes. The two scales at right angles form a geometric plane which is assumed to be flat. If you draw any two lines, one through each point, they will at some point meet unless they both fall into the same “N x X:Y” “gradient set”; this is a consequence of what is called Playfair’s axiom, after John Playfair.

From there you can reason a great deal about the relationships of points in a flat infinite plane, and such interesting things as the fact that any three or more lines in three different “N x X:Y” gradient sets will form an enclosed area.

The thinking of Cantor or Dedekind simply followed on from such thinking and formalised it, such that further deductions can be made.

One of which is to do with intervals. If you draw a line between two points, does the interval include or exclude the points? Of the four resulting lines, are they the same length or not, and if not, by how much are they different?

modem phonemes May 15, 2023 3:45 AM

@ Winter

Re: who’s a mathematician

Mathematicians do not doubt the three proofs … why you think they are all incompetent?

Nobody said anyone was incompetent. The point was that completed infinities are accepted widely but such things do not exist. This then has implications for proofs that use them. A more vivid and easily grasped example is provided by mathematical induction. Proofs by mathematical induction are not proofs; they correspond to a completed infinite syllogism.

If such results are true there must be another way, not invoking completed infinities, to establish them. This point of view was basically universal from Greek times through Gauss, and up to Cantor. Doubt about Cantor and completed infinity has always been present among mathematicians. The author of reference [2] above is an example; he was a professor of mathematics at Princeton. His papers online at [3] contain further development of the critique.

There is no problem with incomplete infinity. There are infinitely many numbers in the sense that, given any number, another number can always be found. But there is no totality of all numbers. Likewise, given an irrational number, the decimal expansion can be carried out to any point, but the total expansion cannot be exhibited.

Euclid using the fact that there are an infinite number of points on every line

Euclid only uses this in the sense of potential infinity. However much a line is divided, it can always be divided again. The line is never actually divided infinitely. “A line is not made up of points.”

axiomatic systems are wrong because some axioms are not allowed for reasons of dogma.

What is an axiom ? It is a true first principle from which the subject area proceeds by reasoning. Modern systems are sometimes wrong because they accept things for which the truth is in doubt or is lacking altogether, such as existence of completed infinity. This is not “dogma”.

lost connection to the field

I encourage you to take a look at the papers online at [3]. They are quite readable and mostly elementary, not requiring extensive background.

  3. https://web.math.princeton.edu/~nelson/

Winter May 15, 2023 3:52 AM

@modem

But the construction doesn’t seem to define anything

It is part of one definition of probability in Kolmogorov complexity for fixed N.

For N->∞ it produces every real.

Winter May 15, 2023 4:06 AM

@modem

The point was that completed infinities are accepted widely but such things do not exist.

We are back to the fact that mathematics is a language of ideas. Ideas exist when they are in the mind, not in material reality. Nothing in mathematics exists as such in the real world.

You keep insisting that mathematicians are not “allowed” to investigate ideas when Aristotle or Aquinas didn’t believe they exist. Mathematicians do not care what Aristotle and Aquinas thought. Actually, most people don’t care.

modem phonemes May 15, 2023 2:05 PM

@ Winter

… mathematics is a language of ideas. Ideas exist when they are in the mind

This is incomplete. Mathematics is not primarily a language, although it makes use of language. Also, the mind can entertain impossible self-contradictory concepts, such as “five sided triangle”. Mathematical existence is more than just ideas, it is certain kinds of ideas.

not “allowed” to investigate

Nobody is insisting things are not allowed, as if by someone’s arbitrary will. The point is that some things don’t exist and this means they aren’t part of mathematics.

lurker May 15, 2023 2:26 PM

@Winter

For N->∞ it produces every real

is sleight of hand to obscure the fact that N can never get to ∞

thus there may be reals that are not produced.

Winter May 15, 2023 3:33 PM

@lurker

thus there may be reals that are not produced.

You do if you run the procedure an infinite number of times.

But this whole discussion is ridiculous anyhow. Any proof regarding number systems, or points, will have to deal with infinity. Calculus is another subject that had to take on infinity.

Mathematics has found ways to deal with it. Successful ways. The three incompleteness proofs are very successful examples. If you want to disprove them, you will have to do more than complaining about types of infinities.

I have not seen any compelling arguments yet.

Clive Robinson May 15, 2023 3:41 PM

@ lurker, modem phonemes, Winter,

Re : How long was the question 😉

“is sleight of hand to obscure the fact that N can never get to ∞”

Which is the point partly raised by my questions about intervals above of,

“If you draw a line between two points, does the interval include or exclude the points?”

“Of the four resulting lines are they the same length or not, and if not by how much are they different?”

Which, in a curious mind, raises the question of what is between any set of points on a line.

If you argue that there is an infinity between any two points, then having any number of points on a line implies you get an infinity of infinities and,

“Wheee down the turtles you slide” 😉

Unless infinity has the old properties of the answer to,

“How long is a piece of string?”

A question which can really only be,

“As long as you need it to be, no more no less”.

Which is also the answer to the question of how many digits do you need for a “real”…

@ modem phonemes, Winter, ALL,

I’ve never been keen on Thomas Aquinas; his answers were trite and, worse, easily used to justify theft and tyranny.

For instance, he firmly believed that everyone should pay in to what he decided was “society”, and that only that “society” should pay out to those it considered “the great and the good”… Where of course “the church” was “society”, and thus decided how that was redistributed to the select few “worthies”. Hence rampant patronage and worse corruptions, which we still see claimed today.

He also believed that “the great and the good” had the right to murder without guilt or retribution, likening it to a surgeon removing infected body parts for the good of society –by the worthies arbitrarily deciding who was good or bad– hence justifying any kind of terrorism and tyranny…

To say Aquinas was “not a good man” would be a bit of an understatement. Further, they say you can tell who a person truly is by the company they choose to keep. The fact that the Roman Catholic Church holds Aquinas in such reverence and set church doctrine on his musings says much about “the church”.

Winter May 16, 2023 1:50 AM

@modem

Mathematical existence is more than just ideas, it is certain kinds of ideas.

That certain ideas are “real” and others not makes sense if we go back to Plato and the original coining of the Idea (form).[1] This usage of Idea does not correspond to our common usage; it is better translated as ideal concept (or perfect concept) and was originally introduced to describe the perfect geometric forms.

The belief that these perfect forms, Ideas, really existed was linked to the conviction that the real world was an imperfect reflection or projection of a perfect, sacred, ideal world of the gods. In this “heaven”, these perfect forms existed as originals of the worldly imperfect forms we humans encounter.

Plato indeed thought that it was the task of the philosopher (mathematician) to reconstruct these perfect forms. After the Renaissance, the church molded these Platonic ideals to their image of God.

As far as I know, there are few mathematicians today who believe this anymore. Mathematics is beautiful, but its task is not to reconstruct and study sacred, perfect forms that “exist” in heaven in order to study the nature of God.[2]

More generally, mathematics studies ideas and their relationships, ultimately based on logical reasoning. Logic forces rules onto mathematics. The beauty of mathematics is that it has found a way to use and relate ideas in ways that are not just very beautiful, but have also been shown time and again to be very useful.

The insistence that mathematics should be the study of some special perfect forms that exist in some sacred realm, to the exclusion of other, “non-existing” ideas, has always meant that religion and ideology curtail mathematics and science.

[1] ‘https://en.wikipedia.org/wiki/Idea

[2] This idea is reflected in an anecdote where e^πi = −1 has been inserted into a story in which Euler proclaimed this formula to prove the existence of God. It probably never happened, and he used a different, nonsense formula.

Winter May 16, 2023 5:43 AM

@Clive, modem

To say Aquinas was “not a good man” would be a bit of an understatement.

That holds for more church fathers and saints idolized by the Church:
Augustine concluded it is good to torture and murder heretics:
“There is the unjust persecution which the wicked inflict on the Church of Christ, and the just persecution which the Church of Christ inflicts on the wicked.”

Saint Cyril of Alexandria was the fanatic behind the lynching of Hypatia (and other lynchings).

modem phonemes May 21, 2023 9:39 PM

@ Winter @ Clive Robinson

Re:

Scarecrow : We may be trapped here forever
Patchwork Girl : How long is forever?
Scarecrow : That is what we shall soon find out.

you will have to do more than complaining about types of infinities.

The work of Stephen G. Simpson [1] addresses what part of mathematics can be recovered using finitistic methods, that is, methods using only potential infinity. He gave a talk for general audiences in 2016 [2].

Quotes from the talk:

… Hilbert’s program … finitistic reductionism

… the popular wisdom or “settled science” is that Gödel’s theorems meant the end of Hilbert’s program and the end of objectivity in mathematics.

… some results of modern research, dating from the 1970s to the present. The big discovery here is that, despite Gödel, a large and significant part of Hilbert’s program can in fact be carried out …

… studies known as reverse mathematics. The goal of reverse mathematics is to determine which levels of the Gödel hierarchy are essential for the formalization of which parts of mathematics.

… we now know that the bulk ( ~ 85 %) of mathematics, including a great deal of non-finitistic mathematics, can in fact be formalized at the “weak” levels of the Gödel hierarchy.

  1. https://www.personal.psu.edu/t20/
  2. https://www.personal.psu.edu/t20/talks/nus1601/

Winter May 22, 2023 2:31 AM

@modem

The work of Stephen G. Simpson [1] addresses what part of mathematics can be recovered using finitistic methods, that is, methods using only potential infinity.

I am not a mathematician, but from your formulation and quotes I conclude that it is still not possible to recover all of Hilbert’s program using finitistic methods.

But this is only about Gödel’s work. Church and Turing came to essentially the same conclusion as Gödel using different methods. Turing’s halting proof can also be based on a simple contradiction, not necessarily an infinity:
‘https://dgrozev.wordpress.com/2021/03/08/turing-vs-godel/
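The shape of that contradiction fits in a few lines of Python; `halts` below stands in for the hypothetical decider (any fixed behaviour it could have exposes the problem):

```python
# Sketch of the halting contradiction. `halts` is a hypothetical
# total decider; whatever fixed answer it gives will be wrong.

def halts(prog, arg):
    return True  # pretend the decider says "prog(arg) halts"

def g(prog):
    if halts(prog, prog):
        while True:   # loop forever exactly when the decider says we halt
            pass
    return "halted"

# If halts(g, g) is True, g(g) loops forever; had it returned False,
# g(g) would halt. Either way the decider is wrong about g(g): the
# contradiction needs only self-application, not an infinite tape.
print(halts(g, g))
```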

modem phonemes May 24, 2023 6:02 PM

@ Winter @ Clive Robinson

Turing’s halting proof can also be based on a simple contradiction, not necessarily an infinity.

The argument by D. Grozev you linked, although informal, does seem to avoid any appeals to actual infinity. So I was in error about the proof. Thanks for the correction.

I can see I have a lot more reading to do!

modem phonemes May 26, 2023 3:56 PM

@ Winter @ Clive Robinson

You might find interesting this treatment [1] of diagonal phenomena, from a unified point of view using category-theory-inspired arguments.

The paper does the Halting problem in a very clear way.

The terminology used seems to refer to the natural numbers ℕ as a set, i.e. as a completed infinity, but it may be possible to interpret this as a potential infinity à la Simpson.

  1. Yanofsky, Noson S. “A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points”. https://arxiv.org/abs/math/0305282
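The diagonal mechanism itself can be seen on a finite table; a small sketch with made-up rows:

```python
# Cantor-style diagonal flip on a small, made-up table of 0/1 rows.
rows = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]

# The flipped diagonal differs from row i at position i, so it cannot
# equal any listed row -- the same move drives the paradoxes above.
diagonal = [1 - rows[i][i] for i in range(len(rows))]
print(diagonal)  # [1, 0, 1, 0]
```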

Winter May 27, 2023 6:15 AM

@modem

So I was in error about about the proof. Thanks for the correction.

I must admit that I am not sure whether the paradox, a variant of the barber paradox or the set paradox about the set of all sets that do not contain themselves, does not imply an infinity after all.

So you might not have been wrong. I simply do not know.

I will look at your link.
