Chatbots and Human Conversation

For most of history, communicating with a computer has not been like communicating with a person. In their earliest years, computers required carefully constructed instructions, delivered through punch cards; then came a command-line interface, followed by menus and options and text boxes. If you wanted results, you needed to learn the computer’s language.

This is beginning to change. Large language models—the technology undergirding modern chatbots—allow users to interact with computers through natural conversation, an innovation that introduces some baggage from human-to-human exchanges. Early on in our respective explorations of ChatGPT, the two of us found ourselves typing a word that we’d never said to a computer before: “Please.” The syntax of civility has crept into nearly every aspect of our encounters; we speak to this algebraic assemblage as if it were a person—even when we know that it’s not.

Right now, this sort of interaction is a novelty. But as chatbots become a ubiquitous element of modern life and permeate many of our human-computer interactions, they have the potential to subtly reshape how we think about both computers and our fellow human beings.

One direction that these chatbots may lead us in is toward a society where we ascribe humanity to AI systems, whether abstract chatbots or more physical robots. Just as we are biologically primed to see faces in objects, we imagine intelligence in anything that can hold a conversation. (This isn’t new: People projected intelligence and empathy onto the very primitive 1960s chatbot, Eliza.) We say “please” to LLMs because it feels wrong not to.

Chatbots are growing only more common, and there is reason to believe they will become ever more intimate parts of our lives. The market for AI companions, ranging from friends to romantic partners, is already crowded. Several companies are working on AI assistants, akin to secretaries or butlers, that will anticipate and satisfy our needs. And other companies are working on AI therapists, mediators, and life coaches—even simulacra of our dead relatives. More generally, chatbots will likely become the interface through which we interact with all sorts of computerized processes—an AI that responds to our style of language, every nuance of emotion, even tone of voice.

Many users will be primed to think of these AIs as friends, rather than the corporate-created systems that they are. The internet already spies on us through systems such as Meta’s advertising network, and LLMs will likely join in: OpenAI’s privacy policy, for example, already outlines the many different types of personal information the company collects. The difference is that the chatbots’ natural-language interface will make them feel more humanlike—reinforced with every politeness on both sides—and we could easily miscategorize them in our minds.

Major chatbots do not yet alter how they communicate with users to satisfy their parent company’s business interests, but market pressure might push things in that direction. Reached for comment about this, a spokesperson for OpenAI pointed to a section of the privacy policy noting that the company does not currently sell or share personal information for “cross-contextual behavioral advertising,” and that the company does not “process sensitive Personal Information for the purposes of inferring characteristics about a consumer.” In an interview with Axios earlier today, OpenAI CEO Sam Altman said future generations of AI may involve “quite a lot of individual customization,” and “that’s going to make a lot of people uncomfortable.”

Other computing technologies have been shown to shape our cognition. Studies indicate that autocomplete on websites and in word processors can dramatically reorganize our writing. Generally, these recommendations result in blander, more predictable prose. And where autocomplete systems give biased prompts, they result in biased writing. In one benign experiment, positive autocomplete suggestions led to more positive restaurant reviews, and negative autocomplete suggestions led to the reverse. The effects could go far beyond tweaking our writing styles to affecting our mental health, just as with the potentially depression- and anxiety-inducing social-media platforms of today.

The other direction these chatbots may take us is even more disturbing: into a world where our conversations with them result in our treating our fellow human beings with the apathy, disrespect, and incivility we more typically show machines.

Today’s chatbots perform best when instructed with a level of precision that would be appallingly rude in human conversation, stripped of any conversational pleasantries that the model could misinterpret: “Draft a 250-word paragraph in my typical writing style, detailing three examples to support the following point and cite your sources.” Not even the most detached corporate CEO would likely talk this way to their assistant, but it’s common with chatbots.

If chatbots truly become the dominant daily conversation partner for some people, there is an acute risk that these users will adopt a lexicon of AI commands even when talking to other humans. Rather than speaking with empathy, subtlety, and nuance, we’ll be trained to speak with the cold precision of a programmer talking to a computer. The colorful aphorisms and anecdotes that give conversations their inherently human quality, but that often confound large language models, could begin to vanish from the human discourse.

For precedent, one need only look at the ways that bot accounts already degrade digital discourse on social media, inflaming passions with crudely programmed responses to deeply emotional topics; they arguably played a role in sowing discord and polarizing voters in the 2016 election. But AI companions are likely to be a far larger part of some users’ social circle than the bots of today, potentially having a much larger impact on how those people use language and navigate relationships. What is unclear is whether this will negatively affect one user in a billion or a large portion of them.

Such a shift is unlikely to transform human conversations into cartoonishly robotic recitations overnight, but it could subtly and meaningfully reshape colloquial conversation over the course of years, just as the character limits of text messages affected so much of colloquial writing, turning terms such as LOL, IMO, and TMI into everyday vernacular.

AI chatbots are always there when you need them to be, for whatever you need them for. People aren’t like that. Imagine a future filled with people who have spent years conversing with their AI friends or romantic partners. Like a person whose only sexual experiences have been mediated by pornography or erotica, they could have unrealistic expectations of human partners. And the more ubiquitous and lifelike the chatbots become, the greater the impact could be.

More generally, AI might accelerate the disintegration of institutional and social trust. Technologies such as Facebook were supposed to bring the world together, but in the intervening years, the public has become more and more suspicious of the people around them and less trusting of civic institutions. AI may drive people further toward isolation and suspicion, always unsure whether the person they’re chatting with is actually a machine, and treating them as inhuman regardless.

Of course, history is replete with people claiming that the digital sky is falling, bemoaning each new invention as the end of civilization as we know it. In the end, LLMs may be little more than the word processor of tomorrow, a handy innovation that makes things a little easier while leaving most of our lives untouched. Which path we take depends on how we train the chatbots of tomorrow, but it also depends on whether we invest in strengthening the bonds of civil society today.

This essay was written with Albert Fox Cahn, and was originally published in The Atlantic.

Posted on January 26, 2024 at 7:09 AM

Comments

Kenneth January 26, 2024 9:41 AM

We’re not ready for this. We’ve barely scratched the surface of the repercussions of smart phones and social media. And so far I don’t think that has been a net positive for humanity as a whole.

R.Cake January 26, 2024 10:02 AM

The topic discussed here only applies to about one third or maybe half of humanity: the part that regularly interfaces with computers.
And especially the younger generations in this part of humanity may increasingly lack practical experience in how to communicate, how to find compromises, and how to interact with other humans in practice. Maybe on the whole they will still figure it out eventually, but there may be statistically relevant differences from previous generations.
Mind you, I am quite sure the same can be said for past generations in which large numbers of people were affected by other large-scale influences on their minds, for example long wars and devastating epidemics. These certainly must also have influenced how people interact and communicate (e.g. far more cautiously).

CdrJameson January 26, 2024 11:18 AM

It might well make a difference to add a ‘please’ to your prompt, as that would have different associations from the training data.

Edward Bernays January 26, 2024 11:24 AM

History is replete with people elevating whatever hype the elites want placed in public view.

First there was quantum computing, then bitcoin, and now AI… always a bright new shiny thing to dazzle eyeballs and tickle taste buds. Fast food journalism.

Presumed Innocent January 26, 2024 12:13 PM

Computers Make Mistakes and AI Will Make Things Worse — The Law Must Recognize That

“…one aspect of the scandal has attracted comparatively little attention: that the laws of England and Wales presume that computer systems do not make errors, which makes it difficult to challenge computer output. National and regional governments around the world where these laws exist need to review them, as there are implications for a new generation of IT systems — namely those using artificial intelligence (AI). Companies are augmenting IT systems with AI to enhance their decision-making. It is inconceivable to think that this is happening under legal systems that presume computer evidence is reliable. Until such laws are reviewed, more innocent people are at risk of being denied justice when AI-enhanced IT systems are found to be in error.”

Full article: https://www.nature.com/articles/d41586-024-00168-8

Chris Bonatti January 26, 2024 12:54 PM

Most engineers are well aware of the maxim that English is a poor specification language. That’s why we’ve tried to come up with more specific, mathematically rigorous languages like ABNF, ASN.1, and XML Schema (arguably with somewhat inconsistent outcomes). Moving into a world where human language is a primary control interface for a computer is a SCARY THOUGHT!

Sean January 26, 2024 3:13 PM

I have already gotten a few phone calls where there is an AI voice, somewhat past the uncanny valley, with a synthesised background noise that leads you to believe it is a call centre. It runs through the basic cold-calling questions with somewhat acceptable listening ability, and it has a name as well, though it does say it is an AI, and it sounds female. No doubt it was modelled on one of the call centre operators who was good at conversion, whose recordings were used, likely without her permission and likely after she left, to train the AI voice model. It was done to reduce the number of human operators, so that those who are left always have a person on the line who might actually buy the product or service being offered.

I’ll bet this becomes popular very rapidly, calling through the entire mobile number space, and likely all the landline space as well, using cheap VOIP rates to reduce costs. After all, an AI system just needs some space in a data centre, likely already present with lots of room for the extra servers, and it runs 24/7 without needing breaks, leave, or even a living wage, and there is no way it will go on strike or form a union.

JonKnowsNothing January 26, 2024 5:25 PM

@Ismar, All

re: conversations with bots may become more interesting and useful

Consider

Conversation:

Implies a two-way discussion with another sentient entity. A person can talk to their dog and interpret the dog’s response to “Want a biscuit?”, but to have a conversation requires (in theory) two or more sentient entities discussing the same topic.

AI has no concept of topic; it returns a randomized string of words matching a filter. The person, though, interprets this as a conversation. There is no conversation; it is a monologue.

Interesting:

Since all AI systems produce a significant number of false responses, the interesting part may be figuring out how much of the output is false. This is a popular pastime with some sentient entities: feeding in query words to find out where the trigger point is, after which they can generate oodles of interesting but false replies.

Useful:

Not likely.

The next time you have surgery, are you really going to be happy if the surgeon decides to ask AI “Where do I make the incision”? Hopefully the surgeon will request an AI query on providing anesthesia before making the cut, but don’t count on it.

Are you really going to be happy with going to a restaurant and having the Chef ask AI how to cook your meal? Probably not, not if you are paying for it.

AI usefulness is limited to areas where False and True are meaningless, such as LEA automated enforcement: traffic fines issued by AI, or arrest warrants from AI facial ID. These situations do not require any truth or falsity. The ticket is all that matters; the arrest is all that matters. You, as a sentient entity, may have to pay a fine, go to jail, or pay exorbitant legal fees in order to “prove” the AI made an error “for your case”. This leaves all the other tickets and arrests intact, generating large sums for governments and corporations, and maintains the narrative that AI is Not Wrong (the infallibility of computers).

As AI creates nothing new, you might be better entertained reading a novel written by a real sentient entity… which you will now have to hunt a bit harder to find.

Erdem Memisyazici January 26, 2024 7:43 PM

We say “please” to LLMs because it feels wrong not to.

There is some self-preservation there as well, actually. If we were to order a robot around in anger, or with rudeness we ourselves recognize, over time we would damage our own character. Many people don’t know this, but when we inflict psychological damage on a third party, we also damage ourselves in the process.

This is most prominent, albeit an extreme example, in the torturer-and-tortured scenario. Not only does the tortured suffer PTSD but so does the torturer, which is really nature’s way of keeping us decent, that is, of avoiding neurological damage.

I have in fact expected people to say please to their IoT devices since the Jetsons were on TV. That being said, if a child grows up treating an LLM like a family member, that will indeed come with its own set of problems.

Say you buy your child a dog, and some sort of domestic-dog-looking Boston Dynamics robot with a sophisticated LLM mapped to dog responses. Is the child going to feel the same toward the robot as they do toward the dog? What if they see you turn off the robot and turn it back on, but the dog dies the same day?

A lifelike fakeness that you would feel disappointed to have feelings for is definitely going to be something kids have to deal with more and more. You’d be wise to teach your kids about science and technology before then.

Imagine if you said, “the dog went to heaven” and the following conversation occurred:

Child, “But the robot came back, I saw you open it up.”

You, “Yea but honey, I restarted the robot.”

Child, “Restart my dog too!”

You, “I can’t restart the dog.”

Pick from the following options:

A) You find the rotting corpse of the dog on your kid’s desk.

B) The kid hates dogs, prefers robots you can restart.

C) Your child is mad at you for lying.

D) You show your child how robots work.

echo January 26, 2024 8:24 PM

The key issue is human interaction. It’s been driven out of the workplace and online, to a degree, because of failures of social and economic policy. Whether it’s AI replacing so-called high-value jobs or chatbots replacing what are almost certainly low-level functions, it’s one of those “you need to know when to say no” things.

Trust in governments and large organisations is low. At the interface level you can have high persuasion but low buy-in, because people know or suspect something is not genuine. This changes our emotional response at a deep level. For example: you can produce an atom-by-atom duplicate of a Picasso, but it won’t be a Picasso. You can produce a convincing movie scene that balances good use of practicals with CGI replacing the expensive or dangerous parts of the stunt, and people won’t be as immersed. Because they suspect a scene may be more CGI than real, a risky stunt requiring well-choreographed action, where one mistake could kill the stunt workers, does not get the same buy-in. Show them the stunt was done for real and their emotional buy-in is there.

And yes, people are social animals, and yes, how we reason is influenced by the forms we take on board via institutions, training, or industry defaults, or simply by copying others to communicate more easily, fit in, or avoid conflicts that escalate.

You can see this play out on this blog every day, and in many professional or retail situations.

Reading through others’ previous comments covering business interests, accountability and usefulness, and PTSD, they’re pretty spot on. These issues already existed prior to chatbots. Chatbots simply add another layer of escalation, deceit, and self-interest.

The thing is, if we say “no”, then instead of the rest of us paying directly and indirectly for the cost of chatbots and the R&D behind them, the cost will be carried by business speculators or the state. That will act as a brake, because nobody will be able to hide the billions this is costing.

Okay, so the world is taken over by chatbots. Where is this life of high-concept high living we were promised? Exactly… In a world with 1% unemployment, will taxes have to run at 99% to pay for it, or is it just an elite few who get to enjoy it while the rest of us are sterilised and herded into concentration camps?

Clive Robinson January 27, 2024 3:20 AM

@ Bruce, Ismar, ALL,

Re: Filling in the background in more ways than you might expect.

LLMs are said to be “Stochastic Parrots”, which from a minimal technical perspective they certainly are. Small neural networks demonstrate this, and it was known from the Expert Systems work that started in the 1980s. That work came about as people tried to automatically build the “Decision Graphs” that up to that point had been generated by human “Domain Experts”.

One reason for the desire to automate was that some on the “Soft AI” side felt the experts were “faking it” or “deluding themselves”, because the graphs they produced had a habit of being useless and giving only random results. As I will indicate, the experts were not faking it, and the real problem is only just dawning on a very few. You will see it become mainstream knowledge over the next few years, in an “intuitive / gut feeling” way for most, and as some very serious multidimensional modelling for others[1].

They found that both approaches were lacking in getting results right. Put simply, the more deterministic the domain subject, the better the “Decision Tree” results were. Basic physics is highly deterministic; philosophy pertaining to the likes of religion is not. Draw a line between the two and most human activities fall on it, in an order that runs from deterministic to random, passing through chaotic. This I’ve said a few times in the past on this blog, along with the fact that certain human activities fall to the side of this line, like mathematics (which encompasses logic), and you can draw a similar line for those, which is all about “modelling” or “reasoning”. In essence mathematics is the set of moulding tools we use to get understanding, and for most of its timeline it has been highly deterministic but also constrained and limiting. In more recent times, since WWII, chaos and randomness have been added to mathematics and it has really moved on in great strides for “reasoning”.

This should be telling people something… That is, we should not be looking at maths and reasoning on a timeline, but on the same deterministic-to-random (through chaotic) line that human knowledge domains can be placed on.

Depending on skills/familiarity, people might need to sit and think about that. As the two lines run approximately parallel to each other, the question arises of,

“Just how many of these types of line exist running effectively in parallel?”

Why should we ask this?

Well, I’ve pointed out that LLMs are really “Digital Signal Processing” (DSP) systems acting as “Matched Filters”, which they are, though few want to admit it. It’s a subject that is highly technical, and whilst we do have mathematical tools to assist, they are both primitive and highly deterministic, and you generally do not find people with skill bases in both DSP and AI that have depth in both subjects. Say LLMs are large “matched filters” and they won’t even blink or give you “a look of askance”… If you can find such a person, try it and see 😉

For those not steeped in DSP and the part of its knowledge domain that matched filters fall in, a very brief “get you started”:

Science is all about gaining knowledge from measurement. To do that, the systems of measurement or “measurands” need certain characteristics to work and to “pull signals from the noise”, importantly in ways that are not just consistent but usable for comparison and prediction. Which means they must have a predictable number line that can be scaled. The usual scalings are linear and log, and what gets measured is amplitude or phase against time and its logical inverse, frequency (once wavelength).

A “filter” in its most basic form is explained as a resonator or tuned circuit. It stores energy at a specific frequency and not at others. The more selective it is, the higher the “Quality factor” or “Q”. That translates to how it responds to frequencies adjacent to its specific or resonant frequency. Often it’s given in one of two forms: “the half-power bandwidth” or “the skirt bandwidth at a specific attenuation”, often -60dB (i.e. 1/1000 of amplitude or 1/1000000 of power), at a known “impedance” or load that takes energy out of the resonator at a rate related to “power”, which in turn relates to “work”.

For reasons it’s not necessary to go into, the curve of Q, or its amplitude of response against frequency, is “bell shaped” and looks very similar to “the normal distribution curve” we all should have been shown in high school maths and science when talking about what we used to call “Sadistics” (a word play on statistics).

Now, the thing about resonators is that they can be “coupled together” in various ways to make “filters”. The mathematics of this gets complex, as both amplitude and phase and their derivatives are involved, as well as the “complex impedance” of the “load”. But simplistically, you can draw each resonator’s “loaded Q curve” on the frequency line, and the resulting shape of adding the curves gives an approximation to the filter’s “amplitude response”, or, in times past when filters were used for just one purpose, its “selectivity”.

In modern times filters do far more complicated tasks, and can pull out “signals” that match frequency and amplitude in sequence and time in the presence of lots of noise. These can be very complex but can match spoken words… You can flip the idea upside down and analyse a signal on a “waterfall” frequency spectrum against time[2].

From this you can calculate, from the “frequency weights in time” after re-scaling in all three dimensions, what the spoken word is, irrespective of the speaker’s “spoken colour”.
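(As a minimal illustrative sketch of such a “waterfall”, assuming only numpy and scipy are to hand, and with entirely made-up signal parameters:)

```python
# Sketch: a "waterfall" (spectrogram) of a test signal -- frequency content
# against time, the representation speech recognisers work from.
import numpy as np
from scipy.signal import spectrogram

fs = 8000                                  # sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)              # two seconds of samples
# A toy "utterance": a tone that steps from 440 Hz to 880 Hz halfway through.
sig = np.sin(2 * np.pi * np.where(t < 1.0, 440, 880) * t)
sig += 0.1 * np.random.randn(len(t))       # plus some noise to pull it out of

f, times, Sxx = spectrogram(sig, fs=fs, nperseg=256)
# Sxx[i, j] is the power at frequency f[i] during time slice times[j];
# the strongest bin per slice tracks the tone as it steps in frequency.
peaks = f[Sxx.argmax(axis=0)]
print(peaks[:5], peaks[-5:])               # roughly 440 Hz early, 880 Hz late
```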

From the late 1970s, when 8-bit computers were only just sufficiently powerful, this was the basis of “Speech Recognition” in the commercial arena (it was way further ahead in the basements of big SigInt agencies, who had the budgets for custom supercomputers, see “Project ECHELON”[3], to analyse cross-border phone calls for an initially very limited set of “key words”).

Now consider what a “spectrum” is in a more general form. You can put related words on a line and this becomes a 2D spectrum. Simplistically this is what was done in later forms of “speech recognition”. Now turn it into a waterfall by adding a time dimension, and to a limited extent sentences could be automatically recognised.

Flip it and add a little randomness and it can pick words to complete sentences. Which is where you can see the notion of “parroting” comes in, and thus the expression “Stochastic Parrot”. However, it’s insufficient because it is still lacking something (and that is one of the reasons for hallucinations).

Consider the following incomplete sentence,

“Hello everyone, today I am going to talk about space and…”

What would be the next word?

You’ve no idea, because “space” is such a general word that it gives little or no context as to which “knowledge domain” it is being used within.

So now add another “spectrum”, one of, say, “context”.

But this is where LLMs and AI in general fail against humans, for a couple of reasons.

Firstly, the computers are currently “blind to the world” around them. Whilst they may have knowledge, they “have no agency” to resolve this. Worse, their context is only language, not physical. For humans the opposite is true: sensing comes first, and building context takes the rest of our lives.

Because of this, humans can easily realise that the sentence so far is probably the first line of the introduction to a talk/lecture that would give the “context” in which “space” is being used.

However the computer, unless “keyed in” in some way, would not. Humans, who have “real world senses” and “Physical Agency”, would probably know the context before hearing those words, as they would have been keyed in / informed through another “communications side channel”, such as an ad they had seen, a course timetable, or even just the title of the talk on the door “as they walked by” at some prior point in time.

This lack of physical agency, and the “communications side channels” it gives, is a very significant drawback in building and training neural networks.

Which brings us to the second issue: until recently, “additional dimensional spectrums” were not really much of a concern except in theory, because of a lack of storage and CPU cycles.

Both issues,

1, Computing power.
2, Physical Agency.

Are at a transition or “tipping point” currently.

Matched filters also underwent a tipping point of computing power. The problem was that for many applications the “filter shape” needed to change “dynamically”. Whilst the physical change could be done by adding an attenuator to the output of each resonator envelope detector, two problems were not so easy:

1, Adjusting phase.
2, Following dynamically.

In DSP, adjusting the amplitude out of the resonator detector in a matched filter detector just requires a simple multiplier or “weight” (phase is quite a bit more complicated, as complex values and stability criteria are involved).

The real issue is computing first a difference function, then inverting it and applying it to the “weights” to get the desired characteristics for the filter.

This problem was solved some time ago, and the first place most people felt its effects was in “echo cancellation” on long-distance phone calls.

This ability to make not just “adjustable matched filters” but “real-time adjustable matched filters” took DSP out of the niche application area and into the everyday consumer market in less than half a decade. These days such filters are in just about everything where you find a 16-bit or above microcontroller, including your washing machine. In a practical sense they have replaced “Fuzzy Logic” in any kind of device with “Physical Agency”.
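(For the curious, a minimal sketch of the idea behind such self-adjusting filters, an LMS adaptive filter of the kind used for echo cancellation; numpy is assumed, and the tap count, step size and signals are purely illustrative:)

```python
# Sketch: an LMS adaptive filter -- the filter "weights" are nudged
# continuously so its output tracks a desired signal (here, an unknown echo).
import numpy as np

rng = np.random.default_rng(0)
n_taps, mu, n_samples = 8, 0.05, 5000

true_echo = rng.normal(size=n_taps)                      # unknown echo path
x = rng.normal(size=n_samples)                           # far-end signal we send
d = np.convolve(x, true_echo, mode="full")[:n_samples]   # echo we hear back

w = np.zeros(n_taps)                                     # adjustable weights
for n in range(n_taps, n_samples):
    x_win = x[n - n_taps + 1:n + 1][::-1]                # newest sample first
    y = w @ x_win                                        # predicted echo
    e = d[n] - y                                         # residual echo (error)
    w += mu * e * x_win                                  # LMS weight update

print(np.round(w, 2))          # converges toward true_echo
print(np.round(true_echo, 2))
```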

In AI, the notion of “adjustable matched filters” is called “Machine Learning” (ML), and at the moment it’s not really at the stage where it can be considered “real time”, but that will change, and change very rapidly, especially in “weapons systems” and “surveillance systems”.

What we call LLM AI will get this augmentation in “real time” in fairly short order (probably within a year or two at the most for all LLM sizes/power, due to political influence and labour costs).

This will allow not just a context spectrum to be added, but to be added dynamically, along with other spectrums like “mores, morals and ethics”. Which should remove a big chunk of the MSM-portrayed LLM deficiencies for which “sweatshop labour” is currently being used.

Is there some “magic line” where intelligence emerges with the addition of more such spectrums?

Well not where the computer has no physical agency is what I would say. Because human language is a very poor substitute for physical experience.

As noted by others, English is an awful language in many ways and is very far from fully deterministic; many would say “it is lazy” or “it is chaotic”. Which, funnily, makes it a more useful language than those that are closer to fully deterministic, because as a side effect it makes “language games”, and thus “higher reasoning”, more encompassing.

Remember that all these “number line”-like spectrums I’m talking about run from “fully deterministic, through chaotic, to fully random”?

In AI, the strange-sounding events you hear about, like “hallucination”, are “up the line” away from deterministic.

Thus there are three issues to think about as you apply more spectrums to your AI,

1, Where do you select your bias point between fully deterministic and random?
2, Are the spectrums independent, and thus truly multidimensional?
3, How is each spectrum applied?

How to apply each new spectrum is going to be an interesting question.

In current computers the only numbers that exist are “unsigned integers” held as “bags of bits”, thus the reality of the spectrum number lines is that they will effectively be integers as well. That is, discrete not continuous. In school we get told something along the lines of,

“Discrete data is counted,
Continuous data is measured”

And then they talk of eggs in a box and tape measures as examples, even using empty places in an egg box as “minus numbers” or “turning the tape measure around”… OK, it’s faster than four or more human millennia of experience, but it leaves a lot to be desired.

For instance, the question

“Do we add them, multiply them, or both? And if both, in what order?”

is not just of basic interest in getting your maths homework right.

Currently in DSP we use “atomic instructions” called MAD or MAC, with which we multiply / scale by the “weights” and add/accumulate to a single output value. In DSP and neural networks we then apply a nonlinear function to that value (via a bit-chop or lookup table) for each neuron in all the layers of the neural network.
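(A minimal sketch of that MAC-plus-nonlinearity step, assuming numpy; the sizes and the tanh nonlinearity are illustrative choices, not from any particular system:)

```python
# Sketch: one "neuron" as a multiply-accumulate (MAC) followed by a
# nonlinearity -- the same primitive repeated across every layer of a network.
import numpy as np

def neuron(inputs, weights, bias):
    acc = np.dot(weights, inputs) + bias   # MAC: multiply by weights, accumulate
    return np.tanh(acc)                    # nonlinear "squashing" of the sum

rng = np.random.default_rng(1)
x = rng.normal(size=16)                    # inputs from the previous layer
w = rng.normal(size=16) * 0.25             # the learned "weights" (the knowledge)
print(neuron(x, w, bias=0.1))
```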

Arguably even at this level it is the wrong way around and many things are lost because of it.

Take a random number generator that produces a flat distribution of two-digit numbers from 00 to 99. What do you expect as a result when you add just two outputs?

Well, the obvious near doubling of the number line length, from 00 to 198. But what of the distribution?

Well, it is sure not flat any more; it’s triangular, like a pyramid, with 00 and 198 each having only one combination of a pair of RNG outputs (i.e. 00+00 and 99+99). But what of the addition of three RNG outputs?

This is where a bell-like curve emerges, starting in the middle of the number line (which is now 000 to 300-3 long). Keep adding RNG outputs and the tails of the bell curve start emerging (see Vol. 2 of Donald Knuth for a discussion of this).

The thing is, all the integer values on the number line have combinations.

However, use multiplication, and for a pair of RNG outputs the number line is now 0000 to 9999, but only a fraction of the integers on it are reachable and the distribution is seriously skewed[4].
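(A quick numpy sketch of that point, purely illustrative: adding uniform outputs piles the mass up in the middle, while multiplying them skews the distribution and leaves much of the number line unreachable:)

```python
# Sketch: what happens to a flat 00-99 distribution when outputs are
# added versus multiplied.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.integers(0, 100, size=1_000_000) for _ in range(3))

pair_sum = a + b         # range 0..198, triangular distribution
triple_sum = a + b + c   # range 0..297, already bell-like
pair_prod = a * b        # range 0..9801, heavily skewed toward small values

for name, v in [("a+b", pair_sum), ("a+b+c", triple_sum), ("a*b", pair_prod)]:
    print(f"{name}: mean={v.mean():7.1f}  median={np.median(v):7.1f}  "
          f"distinct values={len(np.unique(v))}")
# The sums cover essentially every integer in their ranges; the product
# reaches only a fraction of 0..9801, and its median sits well below its mean.
```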

For ordinary DSP, doing MAD is not really an issue, because the weights are really only used for scaling values, and as little as 4 bits of magnitude is often enough to function acceptably (and a 256-entry lookup table makes a very fast multiplier, faster even than an adder, as there is no bit-to-bit carry propagation delay involved, something those now doing “algorithms in FPGAs” appreciate).

But in LLMs the weights store the knowledge…

Something people need to think about fairly carefully.

[1] As an analogy, we all know about gravity intuitively, but can you go from pencil and ruler and straight lines to build a model to navigate a spacecraft around the solar system? Well, that only requires a few dimensions and about 20 minutes of whiteboard time in a lecture to get to basic closed orbital calculations.

[2] A “waterfall” display is a generalised term for a 3D display of a 2D “spectrum” (often amplitude against frequency) against a third quantity, often time. For speech analysis, as used in phonetics and cryptanalysis, see spectrogram / audiogram / sonograph / voicegram / voice print and the related tools. Have a look at a “quick tutorial” for one such phonetics analysis tool,

https://web.stanford.edu/dept/linguistics/corpora/material/PRAAT_workshop_manual_v421.pdf

[3] Before the Internet came from ARPANET, electronic communications for most people was by phone call. Unless the country was like Albania or East Germany, the number of phone lines and phone calls easily exceeded the human resources available to “hoover them up” for non-targeted surveillance. The NSA and GCHQ, however, were very much into the “hoover it up” mentality of “collect it all” by then, and the likes of Boeing, Cray, IBM, Sperry and others were manufacturing the computers and signal-processing equipment as a spin-off of defence work on radar and similar radiant-energy systems used to detect missiles and the like, and even the “round tank hatches” for MLRS systems back in the 1980s, where “matched filters” predominated. Project ECHELON was a code word for this unlawful “hoover it up and word-recognise” programme in Europe. Though some have indicated they think ECHELON stands for “European Community HEL ON the wire”, they are either being ironic or are conspiracy nuts who have heard someone being ironic.

[4] See Benford’s and Zipf’s Laws on Wikipedia and similar to start getting a feel for why there are problems you really might not expect. Or, to get “the real dirt” on it, dig into the area of computer science covering the design and development of “Maths Libraries” and “Semi-Numerical Algorithms” relating to computer usage where register sizes and the like are restricted.

Winter January 27, 2024 5:49 AM

@Clive

Well I’ve pointed out that LLMs are really “Digital Signal Processing” (DSP) systems acting as “Matched Filters”, which they are, though few want to admit it.

People have pointed out to me that humans are really “big roundworms”, with more cells and neurons. These people are basically right. But this view is very unhelpful in understanding human behavior, say, in comment sections of blogs.

It is much more fruitful to look at AI as an extension of statistical modelling. It is in the same family as CART, Random Trees, SVM, and HMM, but able to process and model much larger datasets and systems.

I never hear people complain about the inadequacy of support vector machines. But many seem to feel threatened by LLMs.
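(To make the family resemblance concrete, a minimal scikit-learn sketch, purely illustrative: a support vector machine extracting statistical structure from a handful of made-up sentences, the same kind of modelling LLMs perform at vastly larger scale:)

```python
# Sketch: a tiny text classifier -- an SVM pulling statistical structure
# out of text, the family of modelling that LLMs scale up by orders of magnitude.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["the patch fixes the buffer overflow",
         "attackers exploited the overflow remotely",
         "the soup needs more salt and pepper",
         "simmer the onions until golden"]
labels = ["security", "security", "cooking", "cooking"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["they exploited an unpatched overflow",
                     "add salt to the simmering onions"]))
```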

As for the commonalities between human and LLM learning of language, the idea of stochastic learning of speech and language in humans is very well developed. The roots of these ideas go back to Popper and even Hume. With hindsight, it should not be a surprise at all that stochastic models can capture this type of regularity in language.

Clive Robinson January 27, 2024 7:46 AM

@ Winter,

Re: What AI might be and what is perceived.

“It is much more fruitful to look at AI as an extension of statistical modelling.”

You don’t get to hear those in the MSM, or for that matter the likes of OpenAI, Google etc., talk about “statistical modelling”, in part for two reasons:

1, Who’s afraid of the big bad algorithm?
2, Who’s going to invest billions in a bunch of numbers?

The public and investors need something to be scared of, and it has to be tangible and all-powerful…

If you look at IBM and its “Quantum System One”, one of its newer quantum computers, they really went to town with the “look and feel” of it,

https://en.m.wikipedia.org/wiki/IBM_Q_System_One

Having had the 9ft glass cube casing designed by

https://en.m.wikipedia.org/wiki/Map_Project_Office

Whilst not quite as enigmatic as the Black Monolith from the film 2001, it’s certainly a piece of hi-tech eye candy.

The thing is, your average human does not invest in intangible algorithms, because they don’t have that

“Show us what you’ve got” factor.

And creating VC investor bubbles to rake in the billions, if not trillions, needs marketing.

I won’t say who it was, but a person working at NASA who had to “sell the program to Congress” said they always used to take them to the “computer hall”, which is a “Cathedral to Technology” and impresses the heck out of people.

Try selling a box of floppy disks for even a million and well…

Microsoft and Google certainly know, as does AWS: computer halls and the like are “wallet lubricant” when it comes to getting investor and big-company cash. Thus stylish racks with dark-blue glass doors and flashing LEDs behind them are worth spending more on than the electronics inside…

The real-estate people say it’s all about

“Location, location, location.”

But,

“The Tech Bro Knows,
It’s Rack after Rack,
Gleaming in light subdued,
that grabs the h@es”

Winter January 27, 2024 9:01 AM

@Clive

You don’t get to hear those in the MSM, or for that matter the likes of OpenAI, Google etc., talk about “statistical modelling”, in part for two reasons

I do not have to participate in this hype on their terms. As wise people have said, “Don’t wrestle with pigs. You both get filthy and the pig likes it.”

ML are useful statistical modelers that extract deep structures in texts, pictures, and sounds. LLMs are systems orders of magnitude larger and more powerful than what we were used to. They are powerful tools. Powerful tools are always beneficial and dangerous at the same time.

We will have to deal with them, one way or another.

Clive Robinson January 27, 2024 12:20 PM

@ Winter, emily’s post, ALL,

Re: at our time of life and to spring chickens.

“As wise people have said, ‘Don’t wrestle with pigs. You both get filthy and the pig likes it.’”

I note it does not say if the person enjoys it or not 😉

Every man deserves three things in life,

1, A shed that is his domain.
2, A hobby to pass the time when not in the shed.
3, The time and good fortune to enjoy them both, oh, and an understanding companion who has a light touch with the frying pan (for what, I cannot say).

And for the spring chicks, “the similar but same”: a happy smile and a cheerful smirk when required.

But back to the less serious things in life and the vexed questions caused by AI and crystal balls,

“We will have to deal with them, one way or another.”

Especially if some “Ejyit” gives them “physical agency”.

My thoughts, though, are that maybe there is no “magic” called intelligence; it just comes about when a certain complexity is passed and a new spectrum comes into effect, spiced by chaos and a little randomness.

As you discussed, the notion of randomness in language is not new; likewise in thought. How we get it and call it inspiration is something that has occupied those of a philosophical turn since even before philosophy was invented.

I’m sure even our distant primate ancestors occasionally thought “If I take this stick and shove it up his…” or the equivalent; in social groupings it has a certain inevitability.

But as has been observed in the past in various ways,

“What are we, but organic machines, running on complex rules, acting on complex data, as simply as we can?”

@Emily’s Post, pointed out just the other day,

https://www.schneier.com/blog/archives/2024/01/side-channels-are-common.html/#comment-431442

That Lie group theory has an infinitesimal generator that in theory gives “wriggle-room” for all things.

Need I point out that “wriggle-room” implies not just redundancy but randomness, work, and thus purpose?

Which brings us to,

“ML are useful statistical modelers that extract deep structures in texts, pictures, and sounds.”

It’s work, it has purpose, and it very definitely has redundancy. Which leaves randomness…

If it is not inherent in the machine or its rules, it must either not exist, or come from the data.

As I’ve pointed out in the past with black boxes, you as an observer only see what is outside the box. I, however, know what is in the box.

Whilst I know whether it’s thrown dice (thus truly random) or the encryption of a counter (thus fully deterministic), you do not.

Thus the question: can you, given enough time and output, find out statistically?

Claude Shannon definitely thought it was possible, and unicity distance loosely gives a rather small bound on the point at which, seeing output from the black box, you can in theory determine it with the right test.

Thus, in looking for “intelligence”, are we really “looking for the right test”, or something else?

And if the latter, what?

As I’ve noted in the past, we as humans historically have an apparently pointless quest to find “something better than ourselves, and become or better it”. In the process, almost as a side effect, we have developed what we now call STEM, and through it made a better and a worse world simultaneously, not just for us but for all life on this planet.

How would an external unrelated observer treating Earth as a black box see it?

emily’s post January 27, 2024 10:23 PM

@ Clive Robinson

Need I point out that “wriggle-room” implies not just redundancy but randomness, work, and thus purpose?

Apparently there was a conference, Geometric Science of Information 2019 (also 2013, 2015, 2017), where the applicable modern maths discussed included Lie theory. The proceedings were published in

  1. Progress in Information Geometry https://link.springer.com/book/10.1007/978-3-030-65459-7

whatdoiknow January 28, 2024 1:36 AM

Surprised that nobody tried the obvious:

Me:
Should I say “please” when making requests to ChatGPT?

ChatGPT:
While it’s not strictly necessary to use the word “please” when making requests to ChatGPT, adding polite language can contribute to a more positive and respectful interaction. Using “please” is a courteous way to make a request, and it helps create a friendly and considerate tone. Remember that polite language is generally appreciated in any form of communication, including interactions with AI models like ChatGPT.

Clive Robinson January 28, 2024 2:29 AM

@ emily’s post,

I made the mistake not so long ago of asking a friend a question about the dynamic stability of a flow within a space…[1] As I was “not seeing it”, let alone “feeling it”.

The answer was, shall I say, moving over my head at a rate of knots that old Admiral Beaufort would have deemed not survivable, and thus, “as a sensible man would not go there”, did not scale it in…

Whilst my friend did not say “it’s easy”, she did say something that, distilled down, sounded a lot like “it’s not that hard”, and said this would give some pointers,

http://math.columbia.edu/~agmps22/pdfs/Phong-Talk.pdf

To make things easier to get to grips with…

It will be a few years before she’s even half as old as I am, yet I get that sinking feeling that I once had when I moved to a new secondary school in a different district, listening to a maths teacher who assumed everyone had learned the basics of calculus the previous year… You know, that “I’m from Earth not planet X, and we eat cornflakes for breakfast not numeriōs…” feeling.

Sometimes she forgets I’m more a hammer-and-nails type, not one that scrawls on a whiteboard and leaves traces that a confused arthritic spider might consider a fair morning stroll.

[1] I’m a curious soul at heart, and whilst I’m reasonably happy with fields, string theory is something I’m sufficiently curious about to, as another friend notes, “see what all the noise is about!”. I’m happy with Phase Space,

https://en.m.wikipedia.org/wiki/Phase_space

And well, Symplectic Geometry is just an extension of the idea, how hard can it be, right?… You know those old charts with “here be dragons”? Yup.

ResearcherZero January 29, 2024 12:19 AM

@Clive, winter, all

Chatbots remind me of Clippy, or of people asking how to disable/remove it, along with BonziBuddy: essentially some kind of spyware or malware that wants to collect information for commercial gain. At least this was long confined to Windows.

Microsoft has made this worse through the addition of ‘apps’, a move that looks purposely designed to collect yet more information. Anti-competitive platform tilt as well.

This kind of design increases the attack surface, as it reduces user control and adds ever more vectors of information flow which may not be immediately obvious. It greatly increases the difficulty of preventing leakage and configuration issues.

With a locked-down network some features of Windows won’t work, which is more of a bonus, as Office and Outlook have a number of vulnerabilities, but it also breaks functionality due to Microsoft’s NCSI implementation. Microsoft leaking DNS is one more example of a poor design choice. (NCSI, the Network Connectivity Status Indicator, makes a connection to Microsoft servers each time a network adapter is enabled, to check connectivity.)

Microsoft products have areas of application that they are useful for, just as other platforms have their strengths and weaknesses, but the setup is horrendous.

It takes hours to set up a Windows install, update it, and lock it down; days, even, depending on the environment and configuration. Linux/Unix-based systems at least have a much more secure and faster setup by default, and a far easier process for locking down the system.

It would be a far wiser choice to let users decide for themselves what to install, rather than jam so much needless extra rubbish into a platform that most will never use (or don’t want to use). Removing the crap should not also introduce leakage and vulnerabilities.

Winter January 29, 2024 3:57 AM

Partly on topic:

How To Bell The AI Cat?
‘https://www.techdirt.com/2024/01/26/how-to-bell-the-ai-cat/
It starts with an instructive parable:

The mice finally agreed how they wanted the cat to behave, and congratulated each other on the difficult consensus. They celebrated in lavish cheese island retreats and especially feted those brave heroes who promised to place the bells and controls.

The heroes received generous funding, with which they first built a safe fortress in which to build and test the amazing bells they had promised. Experimenting in safety without actually touching any real cats, the heroes happily whiled away many years.

As wild cats ran rampant, the wealthy and wise hero mice looked out from their well-financed fortresses watching the vicious beasts pouncing and polishing off the last scurrying ordinaries. Congratulating each other on their wisdom of testing the controls only on tame simulated cats, they mused over the power of evolution to choose those worthy of survival…

Then there follows some discussion of real-life policies:

Deciding how we want AIs to behave may be useful as an aspirational goal, but it tempts us to spend all our time on the easy part, and perhaps cede too much power up front to those who claim to have the answers.

Clive Robinson January 29, 2024 5:02 AM

@ Winter,

Re : Assumptions can be deadly.

With regards,

“… perhaps cede too much power up front to those who claim to have the answers.”

I was once told,

“Either there are known answers or there are not. If not, everything is at best an assumption, at worst nonexistent.”

So the question arises,

“What if there can not be an answer?”

Which is the case with those pushing AI for obscene gain.

The ordinary person needs to realise that there is no such thing as “Artificial General Intelligence” (AGI); it’s a made-up marketing term supported by the arm-wavery of “you know ‘everything’”, the sort of thing Douglas Adams so gently pointed out with “42”.

The purpose of AI is quite singular when you analyze it,

“Surveillance by befriending and betrayal.”

By “befriending”, AI is designed to worm its way into people’s heads, into their trust and thus their confidence, in ways the people do not know or understand. Then the AI betrays the person’s confidences to its operators, who greatly profit from the betrayal.

In short, “Social Networking” in the hands of Zuckerberg and Co has extracted about as much PPI as it can, and people no longer trust the all-seeing Meta & Co. Thus people are moving to “federated” systems, where they hope there is no “One Ring” or “Palantir”.

Thus the problem,

“How to keep the PPI big money Gravy Train at full speed?”

Well by the oldest betrayers in the book,

“The obsequious humble servant, who betrays.”

Sir Arthur wrote a Sherlock Holmes story about servants betraying their employers to a blackmailer. In the story, the blackmailer comes to a sticky end.

The thing is, if we let the AI cat out of the bag, there will be no sticky end for it.

Thus people who don’t learn quickly will suffer the fate evolution reserves for all that “are not fit”.

Over-trust is a very human problem, and it’s built into us not for our individual survival, but for the survival of the species via social behaviour.

It’s just a variation on,

“Individual Rights v Social Responsibility”.

AI will not have “Social Responsibility” as long as just one person with sufficient resources sees their “Individual Rights” as all that is important… Which is what Corporatism and the American Dream are really all about,

“Steal what you can and death to the rest!”

Winter January 29, 2024 7:14 AM

@Clive

“Steal what you can and death to the rest!”

I think that is more or less the gist of how the linked article describes the view of the current stewards of AI.

vas pup January 30, 2024 7:29 PM

Neuralink’s telepathy brain chip: How weird is it?
https://www.dw.com/en/neuralinks-telepathy-brain-chip-how-weird-is-it/a-65227626

“As well as treating health issues, Musk has said he wants to link the brain with computers to allow information and memories from deep inside the mind to be downloaded, like in the 1999 science fiction film “The Matrix.”

Musk has also said he wants to provide people with “super vision” and achieve
human telepathy, which he said would help humanity prevail in a war against
artificial intelligence.

The fundamental problem is that we don’t really know where or how thoughts are
stored in the brain. We can’t read thoughts if we don’t understand the
neuroscience behind them, Gallego told DW.

!!!What the electrodes are really recording is a motor plan — more precisely, the end result of all the processing in different parts of the brain (sensory, linguistic, cognitive) required to move or speak.

So BCIs aren’t really recording your thoughts, but rather the brain’s plan to
move a finger here, a leg there or to open your mouth to make an “aah” sound.

…demonstrated a different capability of BCIs. Rather than using electrodes to
record from the brain and interpret intended movements, they instead stimulated the brain with tiny currents to produce sensation, said Gallego.

In Copeland’s case, a BCI called the Utah array was implanted into his brain to
improve the functioning of a disabled part of his nervous system.

The device, produced by a Neuralink rival, was implanted into his sensory cortex and connected with sensors on the end of his robotic hand.

When Copeland shook hands with Obama, those sensors sent signals causing
electrodes in the sensory cortex to stimulate the “hand” region of the brain,
allowing Copeland to “feel” the president’s hand.

For example, what are the consequences of privacy breaches when the data in
question relate to people’s thoughts? How can we ensure that a lack of access does not exacerbate societal inequity? What happens when this information can be directly input into the brain?” said Valle.

After all, it’s the role of science fiction to prepare us for what might come in the future.

Warnings about surveillance and technological control were all there in early 20th-century novels, such as “Brave New World” and “1984.” Have we listened to them?”

To the last question, the answer is YES and NO. It is not enough for us to listen; rather, governments at all levels should listen and act in a way that matches their declarations on the subject.

