Personal AI Assistants and Privacy

Microsoft is trying to create a personal digital assistant:

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.

I wrote about this AI trust problem last year:

One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—­or at least like an animal. It will interact with the whole of your existence, just like another person would.

[…]

And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.

It will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate.

We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.

[…]

All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.

The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.

We are going to need some sort of public AI to counterbalance all of these corporate AIs.

EDITED TO ADD (5/24): Lots of comments about Microsoft Recall and security:

This:

Because Recall is “default allow” (it relies on a list of things not to record) … it’s going to vacuum up huge volumes and heretofore unknown types of data, most of which are ephemeral today. The “we can’t avoid saving passwords if they’re not masked” warning Microsoft included is only the tip of that iceberg. There’s an ocean of data that the security ecosystem assumes is “out of reach” because it’s either never stored, or it’s encrypted in transit. All of that goes out the window if the endpoint is just going to…turn around and write it to disk. (And local encryption at rest won’t help much here if the data is queryable in the user’s own authentication context!)

This:

The fact that Microsoft’s new Recall thing won’t capture DRM content means the engineers do understand the risk of logging everything. They just chose to preference the interests of corporates and money over people, deliberately.

This:

Microsoft Recall is going to make post-breach impact analysis impossible. Right now IR processes can establish a timeline of data stewardship to identify what information may have been available to an attacker based on the level of access they obtained. It’s not trivial work, but IR folks can do it. Once a system with Recall is compromised, all data that has touched that system is potentially compromised too, and the ML indirection makes it near impossible to confidently identify a blast radius.

This:

You may be in a position where leaders in your company are hot to turn on Microsoft Copilot Recall. Your best counterargument isn’t threat actors stealing company data. It’s that opposing counsel will request the recall data and demand it not be disabled as part of e-discovery proceedings.

Posted on May 23, 2024 at 7:00 AM46 Comments

Comments

Daniel Popescu May 23, 2024 7:33 AM

Hmm…:), and pun intended: I wonder if Mr. Schwartzenegger would have anything to say about this, as we all know how Scarlet Johansson’s voice was missused with ChatGPT.

noname May 23, 2024 7:56 AM

Abject horror: 😱 this is how many feel about the Recall ‘feature’ on Copilot+PC.

Recall takes a screenshot of your active screen every few seconds and saves it locally on your PC. Your screenshots will by analyzed by an AI model and searchable. What could possibly go wrong?

jbmartin6 May 23, 2024 8:10 AM

it’s the same problem I have with the idea of brain implants. The benefits are potentially huge, but who do you trust to write the software for it?

Chris J Rose May 23, 2024 8:36 AM

Another problem, related to trust in the space of a personal AI, is how to trust it to be safe in the face of adversarial input. Imagine a personal AI given responsibility over your email, that can send and receive on your behalf as some kind of executive assistant.

Prompt injection takes on entirely new kinds of danger in this case; malicious senders could achieve phishing at incredible scale.

Michael Singer May 23, 2024 8:51 AM

“… we need trustworthy AI”. — I am not convinced that we understand trust well enough yet.

“We are going to need some sort of public AI to counterbalance all of these corporate AIs.”. — that hypothesis has so many assumptions I’m going to assume it’s a joke.

Rene Bastien May 23, 2024 9:08 AM

I see many issues with what Microsoft is proposing. Can we trust Microsoft that data will solely be stored locally? Which encryption algorithm will be used to encrypt the data? Where will the encryption key be stored? Will the key be encrypted, and how/where? Storing the key locally makes things easier for someone with nefarious intentions. Storing the key in the cloud makes things easier for Microsoft, if it were to copy the data outside of the local device. Yes, the notion of trust is essential in this application, and I for one do not trust Microsoft.

Winter May 23, 2024 9:28 AM

Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.

There is a wonderful Dark Mirror episode The Entire History of You about this “record everything” [1]. It does not end well.

I would like to remind everyone that a perfect memory is not a boost to your happiness, on the contrary. It leads to exhaustion and depression [2].

[1] ‘https://en.wikipedia.org/wiki/The_Entire_History_of_You

[2] ‘https://time.com/5045521/highly-superior-autobiographical-memory-hsam/
‘https://www.weforum.org/agenda/2015/12/why-perfect-memory-might-not-be-so-perfect/

Jeff May 23, 2024 10:35 AM

I really don’t think this so-called “AI” push has anything to do with providing a working service to people. In my opinion, it’s just Yet Another Way to extract personal information which can be sold on. People world-wide were starting to demand companies desist from harvesting their personal information. Laws have been passed. But renaming it “AI” has convinced millions of people to voluntarily give up that same data, and more.

The fact is that there is no evidence that the “zero task” problem will ever be solved. So all the fanciful descriptions about a robot that can anticipate your desires will remain science fiction. In the meantime, tech companies have performed a lateral arabesque to dodge the current privacy restrictions.

noname May 23, 2024 10:40 AM

@Rene Bastien

Those are excellent questions about how Microsoft will manage Recall (encryption, key storage, access, trust).

At an even more basic level, I am worried some users may not even be aware of this feature, its risks, and how to pause Recall or turn it on and off.

According to MS, at the moment:

https://support.microsoft.com/en-us/windows/privacy-and-control-over-your-recall-experience-d404f672-7647-41e5-886c-a3c59680af15

Recall will be turned on by default unless users elect otherwise during Copilot+ PC setup.

MS says there will be a Recall snapshot icon on the system tray letting you know when Windows is saving snapshots.

They say users can turn Recall on or off by going to Settings > Privacy & security > Recall & snapshots. Users can also pause snapshots using the icon in the system tray.

Enterprise IT admins can disable Recall using group policy or mobile device management policy. In this case, users can’t re-enable it.

Here's looking at your work kid May 23, 2024 10:45 AM

@ALL

It’s been said on this blog back in Feb that AI is a fraud, and a new way to steal peoples personal and private information

https://www.schneier.com/blog/archives/2024/02/microsoft-is-spying-on-users-of-its-ai-tools.html

Also that the Microsoft, OpenAI and Google business plan with AI is

“Bedazzle, Beguile, Bewitch, befriend, and Betrayal.”

Does anyone see anything that makes it untrue?

But think on this carefully

“To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited”

How long do you think it will be before employers will demand access to the data for “reviews” and get AI to decide the worth of an employee?

pauline May 23, 2024 11:24 AM

I wrote about this AI trust problem last year […] You’re going to want it with you 24/7, constantly training on everything you do.

No, I won’t, and I don’t see this as an “A.I.” problem. This is just more data being stored without user knowledge or consent, which I’ve always objected too; ever since I found out Windows 95 and its programs were keeping “most recently used” lists of my files. I was glad when I got an NT-based version and could write-protect those registry branches. Then I moved away from Windows, but found Linux programs did the same thing: .bash_history, for example (and quite a bit later, with no warning, .lesshst; I run most stuff under BubbleWrap these days, to avoid such surprises).

There’s also always been talk that programs could eventually be tracking changes constantly, so users wouldn’t “have to” save files. I think all such things are a bad idea, and contribute to a general uneasiness: nobody really knows who’s tracking what, but we feel like someone’s watching “over our shoulder”, that there’s a good chance what we do will be recorded.

If people did actually want such data stored, and did explicitly enable these features… well, the Microsoft feature encrypts its data and keeps it local, so I don’t really see a big problem. Maybe it’ll be reluctant to say certain words, like “Linux” and “Belgium”, but such things are true even of real “fiduciaries”; my lawyer’s probably not gonna tell me that their practice is a cheap version of the one across the street, and I’d be better off there if I could afford it. Maybe the co-pilot will use the data to show ads; it’d be annoying, but I’m told that Windows is already showing ads, so it wouldn’t really be an “A.I.” betrayal.

vas pup May 23, 2024 4:27 PM

How to find hidden cameras in an Airbnb, according to a security expert
https://www.yahoo.com/lifestyle/hidden-cameras-airbnb-according-security-162855421.html

“Our rental house had an overwhelming number of places to conceal a camera – in books, in musical instruments, in the eye of a giraffe sculpture. But LaSorsa said these are not realistic hiding places because they don’t have an enduring power source.

“Furniture and decorative items are much less of a concern because they would be battery operated,” he said, “and most battery-operated devices only last a matter of hours.”

LaSorsa suggested checking the devices accessing internet in the rental home by using a free app such as AirPort Utility, which manages and displays WiFi networks. To demonstrate, he stood by the carbon monoxide detector and scanned the list of connections on his phone. The homeowner’s Netgear network appeared, but so did several outliers that contained a gobbledygook of letters and numbers, such as “G419637LGWMW.” The jig was up.

“Why would a carbon monoxide detector have WiFi?” he asked. “There again is a telltale sign that it’s more than what it appears to be.”

After identifying dubious objects, LaSorsa performed several investigative procedures that would confirm – or deny – the presence of hidden cameras.

He unplugged the items and turned them over, looking for a mysterious QR code.

“This isn’t a manufacturer’s sticker with a serial number that you’re going to register with a company for a warranty,” he said. “So what is the purpose of this? The QR is to connect the WiFi to the apps.”

To confirm his suspicions, he pulled out his cellphone flashlight. He waved the light over the face of the alarm clock and noticed a glimmer inside a tiny hole left of the time display.

“As I move the light around, it’s glistening at me,” he said. “And when I hold the light right in front of it, you can see a lens right there.”

I had to squint to see the lens that was barely larger than the period at the end of this sentence. For easier viewing, LaSorsa pulled out a radio frequency finder with a lens detector and aimed it at the clock.

“It’s going to alert you to the lens that’s in there and confirm that it is a lens,” he said as I peered through the viewfinder.

A red light in the lens detector blinked, exposing the alarm clock’s ulterior motives.

If you need one more piece of proof to close the case, you could use the same palm-size RF detector as LaSorsa. The gadget, which costs from $20 to hundreds of dollars, determines an object’s radio frequency output. It will recognize RF energy from 20 megahertz to six gigahertz, which is a blessing and a curse.

Nearly every type of electronic device – cellphones, walkie-talkies, baby monitors, Bluetooth speakers, hidden cameras – transmits radio frequency. But if you remove or unplug all of the competing gadgets and the RF count is still high, you can assume a surveillance camera is in your midst.

A more surefire test is to use a thermal detector (about $250) to gauge the amount of heat the suspected item emits. Hidden cameras are cauldrons, apparently. LaSorsa affixed an InfiRay, which resembled a doll-size digital camera, to his phone. For a baseline, he held it up to a legitimate smoke detector. The image on his screen was a cool green. When he positioned the gadget by the USB charger and air freshener, the blob burned bright red.”

Here's looking at your work kid May 23, 2024 6:24 PM

@vas pup

Nothing in what you’ve put up above has not already been said on this blog multiple times years ago in fact over a decade for some of it.

Search for ‘cats eyes’, ‘lamping’, ‘red eye’, ‘180 degree reflection’, ‘internal reflection’ and the use of IR LEDs in TV controllers along with digital cameras / mobile phones that do not have IR filters and ‘thermal imaging’ / FLIR devices.

Along with tricks to make any RF signal give it’s self away.

All to ve found in a search of Internet Archives, in which it has also been noted that this blog is usually eight years or more on average ahead of other sources on practical security information.

echo May 23, 2024 7:27 PM

Data privacy and also AI judgement issues have already been mentioned by people. One thing which bugged me which I didn’t post about is the possible impact on severe breaches of privacy, the impact on regulated professions, and especially the impact on vulnerable people. Courtesy of @noname pointing out EU AI Act Article 5 (which I haven’t read) it explicitly raises these issues.

https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240408-prohibited-ai-practices-a-deep-dive-into-article-5-of-the-european-unions-ai-act

* Subliminal, manipulative and deceptive systems. AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully use manipulative or deceptive techniques that materially distort people’s behavior by appreciably impairing their ability to make informed decisions. Such systems cause people to make decisions that they would not have otherwise taken, [likely] resulting in significant harm.

* Exploiting vulnerabilities. AI systems that exploit people’s vulnerabilities due to their age, disability, or social or economic situation. Such systems also distort people’s behavior, [likely] resulting in significant harm.

This may make people jumpy. Lawyers in general, psychiatric medical professionals, and NGO’s representing vulnerable people might object to AI without more research into its consequences.

* Biometric categorization. AI systems that categorize individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Importantly, the processing of biometric data for the purpose of uniquely identifying an individual is subject to strict restrictions under the GDPR. Such processing is prohibited unless one of the limited exceptions applies, such as the data subject’s explicit consent.

Ooof. This is going to be fun.

echo May 24, 2024 3:05 AM

In a post that appears to have vanished @noname asked about the EU AI act Article 5.

Their post didn’t vanish because their post was never in this topic because I copied MY contents across to put in THIS topic after I replied to a question THEY asked ME.

And what’s with the constant name changing and sounding like a sneery AI who is replicating already known information? I also don’t appreciate being talked past like I don’t exist when I am the one who brought the issue up. If you are such a smartypants why can’t you come up with something original yourself, or am I going to have to put with another week of bot attack whose AI is built on a database of Clive quotes and text extracts for asking this? Who in god’s name puts scare quotes around treadmill? There’s definately something funny going on here.

Winter May 24, 2024 4:28 AM

It is clear how your privacy is valued by Microsoft Recall:

Recall respects user privacy by not capturing content from InPrivate web browsing sessions or digital rights management (DRM) protected material.

That is it, DRM is safe, your data are not. Note, by default Recall only excludes InPrivate web browsing in Edge. Whether it will honor privacy in other apps is still an open question.

Microsoft claims you can change what is not stored the settings, but we all know how changing settings works: it never does what you think it does.

‘https://www.cnbctv18.com/technology/explainer-how-does-microsofts-recall-feature-work-19415392.htm

Winter May 24, 2024 4:54 AM

@echo

am I going to have to put with another week of bot attack whose AI is built on a database of Clive quotes and text extracts for asking this?

I too noticed that Clive lives on in the comments section of this blog.

echo May 24, 2024 6:11 AM

@Winter

It is clear how your privacy is valued by Microsoft Recall:

Looking at all of Microsoft’s OS and internet initiatives as a whole it seems they have been inching towards creating a platform which is able to lock everything down. This isn’t a surprise I know. It’s just interesting seeing what dropped this past few weeks. DNS lockdowns. Web browser lockdowns. They whiff of being experiments or trial balloons to see what they can get away with.

Some “security” environments favour top down hierarchicalism and zero ability to leak anything. That’s fine to a degree in military and security agencies but security also depends on employing staff with certain levels of decency and social acceptance. These are protection mechanisms which stop tyrannical management behaving in egregious ways. That’s what bothers me about the march towards “security”. Microsoft management being eager to facilitate “security” for big business et al is the 1% talking to the 1%. So again the question “security but security for whom?”

“Perfect security” is fine in theory but if you lack a culture of human rights or whistleblower protection or where hire and fire business practices can be used to intimidate people someone will abuse it. This last item also makes me feel very twitchy about AI “personal assistants”. Nobody wants a snitch but also they lack judgement and sophisticated tacit knowledge – one size fits all rote learned echo chambers never end well.

If “perfect security” and “AI personal assistants” stopped children in warzones having their arms and legs blown off or exposed miscarriages of justice I’d be more convinced. I just don’t get this sense of priority from the “1%” and their hangers on. The world has enough dual use amoral toys. Why can’t we have something good for a change with a clear social benefit?

echo May 24, 2024 7:09 AM

https://techcrunch.com/2024/05/22/metas-new-ai-council-is-comprised-entirely-of-white-men/
https://www.oversightboard.com/meet-the-board/
https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities

TechCrunch notes that the council features “only white men on it.” This “differs from Meta’s actual board of directors and its Oversight Board, which is more diverse in gender and racial representation,” reports TechCrunch.

“It’s telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. … it’s been proven time and time again that AI isn’t like other products. It’s a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups.”

All the robotic minds the sociopathic entrepreneurs are commissioning need are a powdered wig and clay pipe and they’re done if they get away with it! Mind you given Meta employed Nick Clegg as an “ethics” advisor what else does anyone expect? Now he’s president of global affairs? I wouldn’t trust that slippery shape shifter with my lunch money!

The lack of experts in ethics and DEI or with lived experience is a problem. I can spot some problems with AI development and policies straight away so people who know ten times more than I do must be tearing their hair out. You also need to be really careful not just to tick boxes with representation. Just because someone is from a protected category doesn’t mean they’re not reinforcing the system themselves. They are about and that’s another bun fight where oppressed classes can play a role in perpetuating oppression. Academia also has its share of polarised politically motivated charlatans who will write any old nonsense if you pay them enough or it aligns with their biases. You need to watch that too.

I’m glad I’m not the only one bringing these problems up. There is some good policy making and journalism about and expertise. Just not always where it’s most needed though and that’s the problem…

Gert-Jan May 24, 2024 7:25 AM

There are trust issue with this, at different levels.

Several have already been mentioned or hinted to:
– can we trust this software to do what it is supposed to do? (probably depends on who you ask, I suspect the field is still too immature)
– can we trust that updated versions will be just as secure as the previous version? (no, there’s a history of mistakes, scope creep and opportunism)
– can we trust that the data stays protected and stored locally (time will tell)

But it goes further. Your private data only remains private as long as it remains private. If the police raids your house, or your government (or the US government in the case of Microsoft) has some other (legal) reason to demand your data, it is no longer private. In court, you might not even be able to refute whatever nonsense “your” AI will spout. Your friend just became your enemy. Your spouse may have to right not to incriminate you, but that right does not extend to any software system.

Bruce calls for public AI. That could potentially address some of these concerns, as it would allow the user to keep control of the AI and drive requirements to mimize risk.

Anonymous May 24, 2024 8:57 AM

“AI will help people navigate bureaucracy”
AIs could be the solution to administrative burdens that have become passive-aggressive forms of benefit denial.

“Government is who enables trust”
We need clear AI laws before we lose trust in AI technology completely because of the “murky consent” of the current privacy policies.

Here's looking at your nonsense kid May 24, 2024 2:19 PM

@echo
@moderator

@echo you always start things of by attacking someone, not the other way around.

Usually you start by imagining some fake-crime or faux-slight against yourself.

The psychiatric profession has a couple of descriptions for your pathology.

The fact you start and try to pretend to be a victim would be comical if you were not so utterly predictable.

You believe incorrectly that you should have the entitlement of making any false accusation you chose and not being called on it.

So you then feel entitled to

  1. Shout people down
  2. Make wildly false allegations
  3. Have the last word
  4. Be as bullying as you can behind your faux identity.

When that bullying behaviour fails you scream for the moderator to

  1. Bail you out
  2. Expunge your falsities
  3. Not have to change your behaviour

So the fact that someone who stands up and rebuts your stream of invective and will not let you get away with false accusations is a crime in your eyes.

I suppose you are going to deny you said the following

“I’m getting fed up with this troll openly posting disinformation and harassment. I’m also getting fed up with moderation doing nothing and other people beginning to copy this beyond blatant troll in to conversation like nothing is happening.”

Or the much worse that can ve found in the archives and brought back for you to try to lie or worm your way out of.

It is your bullying and accusations that have driven away valued longterm thoughtful and well respected posters.

You are clearly harming this blog, it’s reputation and that of it’s host with your diatribe of invective and other nonsense.

The fact that you won’t see this and just carry on with any old Marxist nonsense by ream after ream says much about you.

My view is bullies should be stood upto on all occasions, that such false behaviour and false facts should be called out for the protection of others and that it should be left clear for everyone to see what a phony you really are.

You are without doubt not deserving of any respect and many have pointed this out, but your mental pathology will not let you look in the mirror honestly.

vas pup May 24, 2024 4:42 PM

MIT
https://www.technologyreview.com/2021/08/25/1032111/conscious-ai-can-machines-think/

“Machines with minds are mainstays of science fiction—the idea of a robot that
somehow replicates consciousness through its hardware or software has been around so long it feels familiar.

Such machines don’t exist, of course, and maybe never will. Indeed, the concept of a machine with a subjective experience of the world and a first-person view of itself goes against the grain of mainstream AI research.

It collides with questions about the nature of consciousness and self—things we
still don’t entirely understand. Even imagining such a machine’s existence raises serious ethical questions that we may never be able to answer. What rights would such a being have, and how might we safeguard them? And yet, while conscious machines may still be mythical, their very possibility shapes how we think about the machines we are building today.

As Christof Koch, a neuroscientist studying consciousness, has put it: “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.”

…what goes on inside other people’s heads is forever out of reach. No matter how strong my conviction that other people are just like me—with conscious minds at
work behind the scenes, looking out through those eyes, feeling hopeful or
tired—impressions are all we have to go on. Everything else is guesswork.

“Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it.”

…it is logically possible that a being can act intelligent when there is nothing going on “inside.”

But intelligence and consciousness are different things: intelligence is about
doing, while consciousness is about being. The history of AI has focused on the
former and ignored the latter. If a machine ever did exist as a conscious being, how would we ever know? The answer is entangled with some of the biggest
mysteries about how our brains—and minds—work.

One of the problems with testing a machine’s apparent consciousness is that we don’t really have a good idea of what it means for anything to be conscious.

Emerging theories from neuroscience typically group things like attention,
memory, and problem-solving as forms of “functional” consciousness: in other words, how our brains carry out the activities with which we fill our waking
lives.

But there’s another side to consciousness that remains mysterious. First-person, subjective experience—the feeling of being in the world—is known as “phenomenal” consciousness. Here we can group everything from sensations like pleasure and pain, to emotions like fear and anger and joy, to the peculiar private experiences of hearing a dog bark or tasting a salty pretzel or seeing a blue door.

Philosophers like Chalmers suggest that consciousness cannot be explained by
today’s science. Understanding it may even require a new physics—perhaps one that includes a different type of stuff from which consciousness is made.

Information is one candidate. Chalmers has pointed out that explanations of the
universe have a lot to say about the external properties of objects and how they interact, but very little about the internal properties of those objects.

A theory of consciousness might require cracking open a window into this hidden
world.

Today’s AI is nowhere close to being intelligent, never mind conscious. Even
the most impressive deep neural networks —such as DeepMind’s game-playing AlphaZero or large language models like OpenAI’s GPT-3—are totally mindless.

…as Turing predicted, people often refer to these AIs as intelligent machines, or talk about them as if they truly understood the world—simply because they can appear to do so.

There is a lot of hype about natural-language processing, says Bender. But that
word “processing” hides a mechanistic truth.

For all their sophistication, today’s AIs are intelligent in the same way a
calculator might be said to be intelligent: they are both machines designed to convert input into output in ways that humans—who have minds—choose to
interpret as meaningful. While neural networks may be loosely modeled on brains, the very best of them are vastly less complex than a mouse’s brain.

Another way of approaching the question is by considering cephalopods, especially octopuses. These animals are known to be smart and curious—it’s no coincidence Bender used them to make her point. But they have a very different kind of intelligence that evolved entirely separately from that of all other
intelligent species. The last common ancestor that we share with an octopus was
probably a tiny worm-like creature that lived 600 million years ago. Since then, the myriad forms of vertebrate life—fish, reptiles, birds, and mammals among them—have developed their own kinds of mind along one branch, while cephalopods developed another.

It’s no surprise, then, that the octopus brain is quite different from our own.

Instead of a single lump of neurons governing the animal like a central control unit, an octopus has multiple brain-like organs that seem to control each arm separately. For all practical purposes, these creatures are as close to an alien intelligence as anything we are likely to meet. And yet Peter Godfrey-Smith, a philosopher who studies the evolution of minds, says that when you
come face to face with a curious cephalopod, there is no doubt there is a
conscious being looking back.

In humans, a sense of self that persists over time forms the bedrock of our
subjective experience. We are the same person we were this morning and last week and two years ago, back as far as we can remember. We recall places we visited, things we did. This kind of first-person outlook allows us to see ourselves as agents interacting with an external world that has other agents in it—we understand that we are a thing that does stuff and has stuff done to it.

Whether octopuses, much less other animals, think that way isn’t clear.

In a similar way, we cannot be sure if having a sense of self in relation to the world is a prerequisite for being a conscious machine.

Machines cooperating as a swarm may perform better by experiencing themselves as parts of a group than as individuals, for example. At any rate, if a potentially conscious machine were ever to exist, we’d run into the same problem assessing whether it really was conscious that we do when trying to determine intelligence: as
Turing suggested, defining intelligence requires an intelligent observer. In other words, the intelligence we see in today’s machines is projected on them by us—in a very similar way that we project meaning onto messages written by Bender’s octopus or GPT-3. The same will be true for consciousness: we may claim to see it, but only the machines will know for sure.

If AIs ever do gain consciousness (and we take their word for it), we will have
important decisions to make. We will have to consider whether their subjective
experience includes the ability to suffer pain, boredom, depression, loneliness, or any other unpleasant sensation or emotion. We might decide a degree of suffering is acceptable, depending on whether we view these AIs more like livestock or humans.

Many researchers, including Dennett, think that we shouldn’t try to make conscious machines even if we can. The philosopher Thomas Metzinger has gone as far as calling for a moratorium on work that could lead to consciousness, even if it isn’t the intended goal.

Could an AI be expected to behave ethically itself, and would we punish it if it didn’t? These questions push into yet more thorny territory, raising problems about free will and the nature of choice.

Animals have conscious experiences and we allow them certain rights, but they do not have responsibilities. Still, these boundaries shift over time. With conscious
machines, we can expect entirely new boundaries to be drawn.

As Dennett has argued, we want our AIs to be tools, not colleagues. “You can turn them off, you can tear them apart, the same way you can with an automobile,” he says. “And that’s the way we should keep it.”

44 52 4D CO+2 May 24, 2024 7:30 PM

We recall places we visited, things we did. This kind of first-person outlook allows us to see ourselves as agents interacting with an external world that has other agents in it—we understand that we are a thing that does stuff and has stuff done to it.

Whether octopuses, much less other animals, think that way isn’t clear.

This is either taken out of context or very bad phrasing. Anyone with a close personal relationship with many different kinds of animals will know that this is clear. They recall places they visited yesterday and years ago. They know how to get what they want from their human companions, and they know when they are being taken to the vet.

Whether we could say the same of an autonomous machine may be a different question because one could claim they are just programmed to do so. But then, could you ask the same about humans and their animal companions?

Here's looking at your work kid May 24, 2024 8:00 PM

@pauline

My apologies as nobody picked up on your

“No, I won’t, and I don’t see this as an “A.I.” problem. This is just more data being stored without user knowledge or consent, which I’ve always objected too”

It is an AI problem in as much as it is that which is

“Seeing all and saving all to reveal all.”

But the reality is as you note,

“There’s also always been talk that programs could eventually be tracking changes constantly, so users wouldn’t “have to” save files. I think all such things are a bad idea, and contribute to a general uneasiness: nobody really knows who’s tracking what, but we feel like someone’s watching “over our shoulder”, that there’s a good chance what we do will be recorded.”

There are actually two issues to think about,

  1. The storage of data
  2. The communication of data

Because even if stored in plaintext it is useless to another party / entity unless it can be communicated to them.

So bad as it is, it is not the saved data that is the real issue as far as “betrayal” goes but the ability of it being communicated.

Back when servants saw their masters and mistresses 24*365 they were mostly prevented from communicating sensitive information by not being allowed off of the estate etc. But with the use of Town Houses this segregation was nolonger possible, and the inevitable happened.

Thus the masters and mistresses had to learn that they actually lived in a large goldfish bowl with piranhas surrounding them all the time. And much like modern Royalty surrounded by newspaper editors and owners on the make and behaving unlawfully they have no freedom to be normal people which is extraordinarily stressful even for very short periods of time.

For us normal mortals living surrounded by assistants is both novel and the dangers mostly not considered or just unknown. Which is why we inevitably get stories of criminals being betrayed by Fitness bands / watches and the like.

People need to realise that first Google, then Amazon, and now Microsoft want “agents of betrayal” to

“whisper in their ear all the secrets of the incautious.”

As that is where they see their see their primary income stream coming from.

But “incautious” hides in all sorts of places and ways. Take for instance your thought,

“If people did actually want such data stored, and did explicitly enable these features… well, the Microsoft feature encrypts its data and keeps it local, so I don’t really see a big problem.”

You are not seeing the actual problem. Yes the data might be stored on your hard drive and it might be encrypted. But to work the encryption keys have to be available to the software etc.

Those keys are just data that can be communicated back to those Corporates along with copies of the encrypted files.

It’s the communications path you need to break and Microsoft is doing everything it can to stop you achieving that break…

pauline May 24, 2024 8:38 PM

@Here’s looking at your work kid,

So bad as it is, it is not the saved data that is the real issue as far as “betrayal” goes but the ability of it being communicated.

That’s technically correct, but I don’t think this distinction is worth much. It’s like Bruce’s point that personal data is a “toxic asset”: the mere existence of the data leaves the possibility of accidental “communication”. Maybe a family member walks up to an unlocked computer and sees something that I’d been very careful to not save. Maybe malware leaks it. Maybe Windows will itself become malware (…moreso…) in a future release, and will decide to send it to “the cloud” for some reason. Maybe a court demands my encryption keys.

I know I’m not good enough at operational security to protect such extremely intimate data. As such, I don’t want it to ever be stored, even “securely”. I disable browser history and encrypt my swap partition for a reason.

You are not seeing the actual problem. Yes the data might be stored on your hard drive and it might be encrypted. But to work the encryption keys have to be available to the software etc.

Well, okay, what I meant was that I see a giant problem, but the problem is more in the goal than the implementation. Once you’ve decided to run un-auditable software from a company with a long history of anti-user activity—and, for some reason, record everything you do—it’s not clear how such a data-logging feature could be designed much better. I could throw around vague buzzwords like “homomorphic encryption” or “hypervisor sandboxing”, but till someone comes up with a concrete design, what can we do but encrypt this data that should never exist? (There are actually people who want it, though; Stephen Wolfram’s been storing every keystroke since the 1980s, for example.)

echo May 25, 2024 1:02 AM

@moderator

https://www.schneier.com/blog/archives/2024/05/personal-ai-assistants-and-privacy.html/#comment-437294

A page of troll/AI personal attack.

https://www.schneier.com/blog/archives/2024/05/personal-ai-assistants-and-privacy.html/#comment-437306

Someone tell me this isn’t a mentally ill person or an AI. It’s just weird communication recycling archived blog comment content and part of a pattern of behaviour, and not the same entity who carried out a week long attack previously.

Here's looking at your work kid May 25, 2024 4:31 AM

@44 52 4D CO+2

(One day I’ll look it up 😉

But back to the question at hand.

“They recall places they visited yesterday and years ago. They know how to get what they want from their human companions, and they know when they are being taken to the vet.”

Yes they do

‘But why do they do?’

Recalling places and the location of things is a survival mechanism. Some humans excell at it and can navigate just by visual clues even an unknown environment. However a few others are terrible, in the most part because they have not needed to develope the skills.

As for getting things out of those “who care for them” even babies develope such skills long prior to being able to crawl or talk.

As for being ‘taken to the vet’ that is an interesting one

‘As they know before they go out the door’

Suggesting they ‘read those who care for them’ in some way.

These are all “Survival skills” for creatures with exercised freedom of movement and some measure of free will. What others call “agency”.

Which brings us around to AI and

“Whether we could say the same of an autonomous machine may be a different question because one could claim they are just programmed to do so.”

No not yet, as has been mentioned on this blog in the past ‘current AI systems do not have perception nor environment aware agency’.

Thus they can not do those survival things that creatures can.

“But then, could you ask the same about humans and their animal companions?”

The funny thing is the evidence shows that the ‘animal companions’ actually quite rationally chose to become ‘domesticated’ and live within human societies. Because it gave them advantages they would not otherwise get.

A funny thought for you. I used to know a woman who had what appeared to be two entirely separate vocations, training show jumping horses and teaching the use of software to groups of corporate drone workers. She was very good at both, and we got chatting one day during a lunch break on a training course on project management we were both teaching on.

She made an observation that in training horses and humans were very similar. That is you pushed gently to get them to want to do what you were training them to do. And the secret technique in both cases was a little competition and food treats… Yup she baked cookies for both horses and humans. One got them from her pocket the other from a tin. I asked if she ever got the cookies mixed up. To which she replied

“often, but a little extra roughage has never done anyone any harm.”

Here's looking at your work kid May 25, 2024 5:07 AM

@Pauline

With regards “storage” and “communication” they are two of the three things you can do with “information” the third being “processing”.

As far as encryption is concerned and especially on ICT Systems “storage” or “data at rest” and “communications” or “data in transit” are two very different things. Because they have very different characteristics thus methods.

Which brings us to your

“That’s technically correct, but I don’t think this distinction is worth much. It’s like Bruce’s point that personal data is a “toxic asset”: the mere existence of the data leaves the possibility of accidental “communication”.”

Consider things not as ‘side by side’ with an either or choice but layered like an onion where you have to address both.

Which brings us to,

“I know I’m not good enough at operational security to protect such extremely intimate data. As such, I don’t want it to ever be stored, even “securely”. I disable browser history and encrypt my swap partition for a reason.”

Firstly nobody and I mean nobody is good at OpSec, and anyone that claims they are either knows to little or is a charlatan on the make (of which there are a lot).

So on balance you are probably a lot better than most.

The easy but pointless advice is “Never do XXX” for two reasons

  1. Life always throws up exceptions
  2. New or improved attacks happen

The only way to behave sensibly is

“To plan for what you can foresee in a general not specific way.”

As has been pointed out on this blog in the past

“They are not fire drills but evacuation plans in practice.”

Whilst Fire has been the most likely cause to evacuate a place, there are other reasons such as flood, earthquake, bomb threat, terrorism attack, etc.

Rather than make plans for just fire make a plan that covers as much as you can of all of them.

The chances are “General” will also cover what you have not foreseen where as “Specific” often does not.

Look at it this way, you do not want to ve going into air-raid style bomb shelters if the threat is flooding or earthquake. General planning would pull that up at the planning stage, specific planning not so much.

Whilst Bruce is right “Data is toxic” there are times such as information processing where you have no choice but to have it around. An in-depth knowledge of the way ICT-systems currently work tells you that to “process” information you implicitly do the other two of the information triad of “storage” and “communication”.

So as all three are foreseeable, a general plan would cover all three.

ResearcherZero May 25, 2024 6:28 AM

@noname

Memory often leaves data sitting in blocks for years before it is written over. Many operating systems also leave data in place to optimise different applications which defeats many of the benefits of ASLR.

AI can be used to address many issues. It can be part of a layered defence. It will take time for it to be more effective.

CoPilot can be switched off. That is always something to keep in mind. Other solutions are also usually available.

Paper records have long remained vulnerable to physical threats, theft, ageing, misplacement, and image capture.

The classified documents that ended up on a thumb drive and laptop belonging to an aide working for Trump’s PAC, were scans.

They probably were a little concerned when they realised that it was top secret info.

lurker May 25, 2024 2:31 PM

@echo

It seems obvious that Clive is running some sort of an experiment where he is using several different AI models to “formulate a response to this post in the style of Clive Robinson.” The machines have a varying ability to correct his typos and rambling grammar, but they have absorbed enough material to well copy his boring repetition and inability to suffer fools gladly. I hesitate to suggest a motive for this experiment.

44 52 4D CO+2 May 25, 2024 11:04 PM

An autonomous machine, such as a drone, both “does stuff and has stuff done to it.”

It is directed by a human to go to specific target locations.

It has in it’s memory banks, topological maps (and other sensory data) to ensure it is in the correct place without additional human input. Even though it may never have been there before. (written language)

It has the ability to avoid harsh weather and deploy countermeasures if locked-on. It will automatically return to base if it doesn’t have enough fuel to continue the defined mission.

It also has records of the impact it made – before imagery of the target location and after the missile launch.

I am pretty confident that the drone does not have a sense of self

Here's looking at your work kid May 26, 2024 12:09 AM

@44 52 4D CO+2

“I am pretty confident that the drone does not have a sense of self”

Or many other attributes we see in mammals and other higher functioning creatures.

As I noted above

‘No not yet, as has been mentioned on this blog in the past ‘current AI systems do not have perception nor environment aware agency’.’

The implication of what you say about drones above is they do have perception and are environment aware and have self protection mechanisms.

But first off the drones are not AI systems. They might be the analogue of a creatures body and sensory organs but do not have the capabilities of mind.

The drone is in many ways is not much of an advancement over the 1980’s maze solving “Micro Mice” That in sone cases were ZX81 8bit home computers with the keyboard sawn off and mounted over a battery, a couple of motors, and some sensing feelers.

In otherways those 1980’s micro mice are ahead of the drone you describe in that they have no inbuilt maps to follow.

Something you might not know about early non ballistic path missiles that came before what we now call “cruise missiles”. They navigated in an almost identical way to that I used to do in a boat in the dark with the echo sounder depth gauge.

In some ways they had it easier as they did not have to account for tides and their changing hights.

What you do is establish your hight above the “chart datum” and your current true speed and direction and your assumed position on the chart. then use the echo sounder or radar altimeter to determine the distance to the seabed or ground directly below.

As the seabed distance changes you use this to update your assumed position on the chart. To make things easier you “contour follow” that is you navigate by following a contour line by zig zaging across it to hunt it out.

The first few times you navigate a boat in the dark this way is nerve wracking but you get the feel of it and your confidence in it grows.

Well those pre cruise missile systems worked by contour following. They had no knowledge of the environment just the distance below and what direction and speed they were logging up.

long time obeserver May 26, 2024 5:28 AM

@lurker

It seems obvious that Clive is running some sort of an experiment where he is using several different AI models to “formulate a response to this post in the style of Clive Robinson.” The machines have a varying ability to correct his typos and rambling grammar, but they have absorbed enough material to well copy his boring repetition and inability to suffer fools gladly. I hesitate to suggest a motive for this experiment.

If any AI is involved (I doubt it) it even creates new spelling errors, e.g. “exceptance” for “acceptance”. Some others remain, like Clive’s classic “it’s self”. Some seem to be avoided, like the archaic “compleat”.

Anyway these posts are easy to detect, not only the spelling but the also the layout, punctuation, subject matter and intent are quite typical.

What motivates this ? Apparently the same narcissistic self-entitlement that has been harming decent conversation on this blog for years.

Winter May 26, 2024 9:38 AM

@ long time obeserver
@lurker

What motivates this ?

We can never be sure what motivates a person. But if we are right about who and what is going on, we can speculate.

What I did observe in the past is that he was unable to admit he was wrong or had “lost” an argument.[1]

His long running feud with @echo was never resolved and at one point he suddenly stopped posting under his customary handle. For some reason he could not or didn’t want to post under this handle anymore.

There is the possibility that he switched to continue his feud under different names and possibly using automatically generated content.

Using such technically generated text would also allow for a certain deniability. Because, there could always be some other entity that grabbed the opportunity to wreak havoc on the blog using the existing published texts.

This is all pure movie-threat speculation which tickles my conspiracy and spy-vs-spy sense.

[1] The idea an argument is “lost” if you learn you were wrong, or you had incorrect information, is childish. You live and learn, which means you once knew less and drew the wrong conclusion. Now you come to better conclusions.

fib May 27, 2024 9:25 AM

The entity hides behind many handles stoking a personal feud targeting one of the regular commenters on this blog

It’s the other way round. A very regular poster, a person who stands out for his knowledge and culture, in addition to civility and excellent humor, demonstrated for almost two decades now, is forced to use a tactic of obfuscation to defend himself from the incessant harassment of a commentator whose mission is clearly to disrupt the operation of this peaceful blog.

This blog used to be my refuge from the Internet barbarities. Now I see the barbarians finally approaching the gate.

Here's looking at your work kid May 27, 2024 11:06 AM

@fib
@moderator
@ALL

“This blog used to be my refuge from the Internet barbarities. Now I see the barbarians finally approaching the gate.”

And many are turning up their coat collars as a “bitter wind from the east”[1] blows and all it brings with it.

Likewise many are now using thread specific subject lines or similar in the Name field, which is perfectly acceptable behaviour based on the “posting guidelines” for this blog.

Which not only allows anonymous posting, but ‘to prevent confusion’ prefers that people not use “Anonymous” but use something more individual or unique.

Obviously to also reduce confusion you have to have consistency in the unique anonymous identity you use within a thread, further such consistency stops accusations of inadvertent “sock puppetry”.

But there is also the now very evident issue of Cyber-Stalking to carry out Cyber-Bullying.

If someone uses a unique identifier across threads then they are all to easily Cyber-Stalked across the blog and as you note certain individuals have done just this for years. Their only intent in this Cyber-Stalking is Cyber-bullying.

Both of which are very much real crimes in several jurisdictions and can carry tariffs of fines and or imprisonment. This is known by and applies to the criminal Cyber-Stalker and Cyber-Bully neither of which are protected by US Free Speech legislation as has been made abundantly clear by the US Government going after whistleblowers and those who “put in public” by what they thought was free speech but is not.

In fact changes in legislation in the US and EU also call into doubt Section 230 in the Communications Decency Act 1996 provisions for web site owners when Cyber-Stalking and Cyber-Bullying and other such crimes are involved.

[1] From Charles Dickens, Bleak House

The wind’s in the east… I am always conscious of an uncomfortable sensation now and then when the wind is blowing in the east.

Winter May 27, 2024 11:13 AM

@fib

A very regular poster, a person who stands out for his knowledge and culture, in addition to civility and excellent humor, demonstrated for almost two decades now, is forced to use a tactic of obfuscation to defend himself from the incessant harassment of a commentator whose mission is clearly to disrupt the operation of this peaceful blog.

We don’t know what happened. For me, there are two likely hypotheses:

  • This person was banned for bad behavior and now resorts to underhanded tactics to fight out his feud while evading the moderator
  • This person stopped writing on this blog for personal reasons and someone else is impersonating his style to continue his feud

Which hypothesis is better supported by evidence is not clear to me.

Whatever the real reason behind all this, digital harassment is wrong either way.

If some commenter is behaving out of bounds, you notify the moderator and ignore them. That is what the comments policy tells us to do. Do not feed the Trolls.

In all cases, the moderator decides unilaterally. Any tactics to evade moderation are to me a clear sign you should be banned.

Winter May 27, 2024 11:35 AM

@fib

This blog used to be my refuge from the Internet barbarities. Now I see the barbarians finally approaching the gate.

I too deplore how the comment section of this blog has become a battle ground for personal feuds. It seems only two entities are enough to flood the comment section with their mutual hate.

But we have seen this before in even a more massive wave.

I, for myself, will limit my responses to notifying the moderator of personal attacks, insults, and disinformation. That way both the objectionable content and their responses can be easily found and removed. The moderator can see who uses a genuine identifier and act accordingly.

Grima Squeakersen May 28, 2024 8:21 AM

noname said: “They say users can turn Recall on or off by going to Settings > Privacy & security > Recall & snapshots.”

Do you think that MS can be trusted to do even that much? Could there be an obfuscated service and a db encrypted with a non-user-accessible key that will continue to record this information, and make it available (only) to MS?

Winter May 28, 2024 10:14 AM

@Handle Hopper

to make people think a crime is being committed by the person he choses to hang it on.

As you are anonymous and have no fixed handle, not even Anonymous, and the various handles you do use tend to be aggressive and/or insulting, I prefer to refer to you as “Handle Hopper”. It describes you well as you are the only Handle Hopper around, and it identifies your comments without repeating attacks or slurs.

Winter May 28, 2024 11:47 AM

@query

Have you thought about disabling comments?

I think he has. But that is probably one of the aims of the troll. If it cannot have the comments section the way it wants it, no one should have it.

I hope @Bruce will have some patience.

query May 28, 2024 12:32 PM

@Winter

If a leading expert like Bruce can’t maintain order on his website, what does that say?

Maybe it is time to require logins with some sort of verified identity to ward off people who want to hide behind their keyboards while talking big.

query May 28, 2024 12:39 PM

Maybe Clive, echo, etc should have theur own blogs instead of using the comments here as a platform for their own content. That would be more fair than fighting here to see who can dominate.

Winter May 28, 2024 1:35 PM

@query

Maybe it is time to require logins with some sort of verified identity

I remember @Bruce writing that this would be against his ethics as he is a champion of privacy. I support him in this.

Leave a comment

Login

Allowed HTML <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre> Markdown Extra syntax via https://michelf.ca/projects/php-markdown/extra/

Sidebar photo of Bruce Schneier by Joe MacInnis.