AI and the Evolution of Social Media

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by hundreds of millions of dollars, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviors.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

This essay was written with Nathan Sanders, and was originally published in MIT Technology Review.

Posted on March 19, 2024 at 7:05 AM

Comments

kiwano March 19, 2024 9:53 AM

In terms of regulating the influence of advertisers, I have a hunch that a very simple but also very effective policy change would be to explicitly identify algorithmic content generation, and any sort of curation, as sufficiently creative not to enjoy Section 230 protections. Because of the potentially chilling effect on speech this would have, immunity could still be obtained by maintaining certain basic standards of journalistic integrity, the most obviously applicable of which would be an ethical screen (AKA Chinese wall) between ad sales and editorial/curatorial/algorithm-development decisions.

As a side-effect, I’m reasonably confident that such a requirement for an ethical screen would do more to limit the harms caused by online pornography than any of the ID-checking requirements currently being debated (without even having to open up debates around the boundaries between pornography and sex-ed material or art, which would also be improved by being subject to the same Chinese-wall requirements).

jelo 117 March 19, 2024 9:58 AM

SocialMedia™
is to human social relations as AstroTurf™
is to real grass.

SocialMedia™, as lived, shows that the Enlightenment™ presumption that one can talk at any time to anyone about anything is false.

AI™ adds a layer of modern glossy automotive MilkyPastel™ enamel and a HelloKitty™ cast of KutesyNesse™ as a disguise to hide the house of cards.

Alan Kaminsky March 19, 2024 11:17 AM

Not that long ago, the U.S. Supreme Court ruled that corporations are “persons”, that political contributions are “speech”, and therefore, by the First Amendment, corporations can spend unlimited amounts of money to buy elections.

The same thing will happen with AI, especially given the current extreme right-wing makeup of the court. AI chatbots will be declared “persons”, their output will be declared “speech”, and therefore, by the First Amendment, government cannot impose any restrictions or regulations on what AI does.

JonKnowsNothing March 19, 2024 12:20 PM

@Alan Kaminsky, All

re: [AI] output will be declared “speech”

It is speech, possibly under the USA First Amendment, but not in the way many would think.

AI knows nothing. It is a grab bag of data bits scraped from websites and pirated from uploaded audio and video.

So, the stuff that AI regurgitates is a scrambled bag of randomized data bits that came from somewhere else. Since AI is not intelligent, it cannot create new speech; it can only rearrange data bits.

Those data bits, as originally created and placed online, are protected speech, depending on local laws, as not all speech is protected in the USA or other countries.

So one can expect the legal folks to argue that each chunk of data bits is protected, and therefore the entirety of the item is protected too.

  • 100 large works scraped by AI
  • 10 words selected from 10 of these larger works
    • the, the, the, and, in, cat, dog, rat, corner, ate
  • each word was protected in situ
  • concatenating the 10 words by sum() is protected
    • The cat and rat ate the dog in the corner

It’s a hallucination (HAIL).
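
A toy sketch of that “rearranging data bits” picture in code (purely illustrative, with made-up source snippets; real LLMs predict tokens from learned statistics rather than literally shuffling stored words):

    import random

    # hypothetical stand-ins for the "100 large works scraped by AI"
    scraped_works = [
        "the cat sat in the corner",
        "the dog ate the rat",
        "and in the end the dog ran",
    ]

    # pull the individual "data bits" (words) out of each protected work
    words = [w for work in scraped_works for w in work.split()]

    # rearrange ten of them into a sentence that appears in none of the sources
    random.seed(0)
    print(" ".join(random.sample(words, 10)))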

William March 19, 2024 2:23 PM

With time and hindsight, I do hope the truth about social media limps in.

First, I know I was reading studies at least as far back as 2010 and 2011 that showed how Facebook use cultivated OCD in users. The signs were there; we just chose to ignore them. I know this because I shared them with my wife, having noticed changes in her well-being that I associated with her social media use. She angrily dismissed my observations, but by the end of 2012 she was completely incapacitated by OCD. How many lives were shattered in the following decade before anyone started to take this threat seriously?

Second, while social media may have assisted people in communicating, I’m pretty sure the Arab Spring would have happened regardless of social media use once Obama cut the spending that had been propping up the regimes. As soon as their guards’ paychecks stopped clearing, word would have spread like wildfire, no technology required. Unfortunately, the Arab Winter is probably a more realistic harbinger for the promise of social media’s ability to enable people to come together for the common good.

Looking forward, while AI provides many of the same promises and hazards offered by social media, probably the worst is what it will do to people’s agency as the use of AI replaces literacy and critical thinking. What world will be born of a generation raised to use AI to do their homework, taught by teachers who use AI to do their grading, with everyone coming together on social media dictated by advertisers?

Wannabe techguy March 19, 2024 4:58 PM

“current extreme right-wing makeup of the court.” Oh here we go. How about “extreme left wing”?

Winter March 19, 2024 5:53 PM

@Wannabe techguy

current extreme right-wing makeup of the court.

The court bans abortion. 69% of Americans support abortion. Sounds like the court is at the right end of the spectrum.

Other decisions were also at that part of the political spectrum.

vas pup March 19, 2024 7:00 PM

@Bruce said “These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.” That is how money, rather than the electorate, really rules legislators.

For the same reason, the suggestion “In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.” requires assigning more power to the FTC.

Check this out: Federal Trade Commission Chair on U.S. Innovation
https://www.c-span.org/video/?534204-1/federal-trade-commission-chair-us-innovation

Moreover, some companies reasonably feel shielded from antitrust enforcement by the FTC because they cooperate with the whole set of LEAs and the IC in collecting, selling, and passing data on US citizens without a search warrant, e.g. AT&T. Just my opinion.

Clive Robinson March 19, 2024 8:23 PM

@ Bruce, ALL,

Re : Dark Tetrad and falsehoods.

“Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.”

As I’ve commented over the past few years,

1, Technology is agnostic to use
2, It is the Directing mind that decides how it is used.
3, It is the Observers of the use that decide if it is good or bad.

The US in particular has taken an issue from Europe and made it worse, a lot worse.

It does not matter what you call it: “self-entitlement”, “greed”, and many other names or words exist for it.

As others have noted, some sovereign nations have enshrined “self-entitlement / greed” into law, and some say the US has done this with “shareholder benefit”.

Actually it’s not what the legislation says… but how its ambiguity has been interpreted, by various people.

I’m aware people do not like the idea of the “Dark Tetrad” of what are effectively mental defects when compared to the norm.

But the simple fact is that as the old saying has it,

“The lunatics are running the asylum.”

Or if others prefer more moderate terminology,

“The hawks own the dove cote”

History has shown over and over that such people are “allowed” to grab control by the general population, which is why people very much need to consider the spectrum implied by,

“Individual Rights v Social Responsibilities”

The “self-entitled” view their “individual rights” as paramount and act in a way where the benefits of others’ “Social Responsibilities” become the “self-entitled’s” right.

Consider the use of roads and waterways. They are a “social good” that should be funded by all for the benefit of all, thus bringing society forward, again for the benefit of all.

That is not how the self entitled view it. They assume it is their benefit to use as they want, but that not only should they not pay for it, they should have the right to dictate who else should have benefit or not.

So roads paid for from taxation that should be for the benefit of society get used by the lorries of the self-entitled, doing a thousand times the damage a family car does. Yet the self-entitled insist not only that they should get priority on road construction, but that they should not pay for it.

Likewise the use of waterways. The self-entitled pollute and claim it as a right for themselves, yet insist that others should not contaminate their inward water flow but should clean up their outward water flow.

They also claim as theirs water that falls on other people’s land and buildings, and use “Guard Labour”, paid for from the “public purse” they do not contribute to, to enforce this.

This is the reality of the “free market” that “neo-cons” and “capitalists” espouse every way they can, including the indoctrination of children in the education system, long before the children’s developing minds can understand right/wrong, good/bad.

Knowing this, why would you expect the “Venture Capitalist” (VC) created bubbles to be anything but “self-entitled” greed?

A greed that has historically been seen before with so many other technologies, going back long before wind-powered sailing vessels. Look at the history of “water rights wars” and the like as dams and sluices were invented.

We know darn well every new technology will be used for harm/bad, but we happily allow it to be so, as long as “we” can delude ourselves that we can benefit at the expense of others, never learning that almost every time we will get “scalped” in some way by the Hawks.

lurker March 19, 2024 11:46 PM

@Bruce

Stan Freberg would have breezed past the FDA tobacco advertising regulations when he used the Marlboro Man as Santa Claus, both sleeves rolled up, and a tattoo on each arm. One said Merry Christmas, the other said Less Tars.

The descendants of Freberg’s satirical targets are alive and well in Silicon Valley. They’ll weasel and winkle and get around any regulations brain-dead legislators can come up with.

ResearcherZero March 20, 2024 12:24 AM

Senators claim they won’t repeat the mistakes made with social media when regulating AI.

‘https://www.technologyreview.com/2024/03/13/1089729/lets-not-make-the-same-mistakes-with-ai-that-we-made-with-social-media/

Tech companies spend millions in Washington to get their way.
https://fortune.com/2024/02/01/social-media-senate-hearing-mark-zuckerberg-facebook-whistleblower-families/

Meta’s job is to “build industry-leading tools” and “make money.”

‘https://edition.cnn.com/2024/02/01/tech/social-media-regulation-bipartisan-support/index.html

So what happens when you get this wrong and fail to legislate or regulate?

“Shockingly, both sides placed their opponents about 20 to 30 points below fully human, on average. When asked how they thought the other side viewed them, people said their rivals would put them 60 points below fully human. Their perceptions of the other side’s contempt were grossly exaggerated.”

‘https://www.gsb.stanford.edu/insights/many-americans-dont-see-their-political-rivals-people-can-be-fixed

One fear is that this kind of dehumanisation leads to violence.

Another is that it leads people to believe in conspiracy theories that further demonise the people they disagree with.
https://www.npr.org/2020/10/18/925069809/the-consequences-of-politics-dehumanizing-language

Clive Robinson March 20, 2024 5:55 AM

@ ResearcherZero, ALL,

Re : More respect for the mutt.

“When asked how they thought the other side viewed them, people said their rivals would put them 60 points below fully human. Their perceptions of the other side’s contempt were grossly exaggerated.”

Exaggerated but not wrong…

And that is the nub of the problem.

If I think you think I’m subhuman, then it’s understandable that I would think the less of you.

And so the downward spiral picks up speed and eventually drills out a new “ground zero”, with each side pointing the finger at the other.

We’ve seen these “blood feuds” before in history going back thousands of years in Europe and the Middle East, right through to the current day, and we know where they go and what happens. The bullets go faster and the bombs get bigger, and the piles of corpses get higher.

Part of the reason the Muslim faith exists was to bring an end to blood feuds… caused by the same problems in Christianity before it, and what went before that. The “one God, the True God, My God” nonsense of authoritarian control by certain types of people, for whom others’ faith is just another way to subjugate and gain power over them. We see it with many cults, big and small, that pretend to be religions.

Did the Muslim faith succeed? Well, to a certain extent and for a time, yes it did… but then so did others before it. In all cases though, when greed and rivalry bring out the worst in us, that which most cannot imagine, it fails and it fails badly. It happens when we do not question, do not hold to account, and thus allow the worst of us to become preeminent, and so we now have the problems we do. The same applies to the “orthodox” or “Eastern” version of the Holy Roman Empire and what came before that.

This is not a new observation, nor am I the only one who has made it over the centuries.

As has been noted by many in many ways down the years,

“The price of freedom is eternal vigilance”

For if we fail at that, and we apparently always do, then the equivalent of slavery, serfdom, or worse oppression befalls us. With the observed result of,

“The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants”

Will come into play and worse than our worst nightmares befalls us. And as history shows revenge begets revenge and the whole vicious cycle repeats with the turning of the wheel of history in what appears an endless rut. Aided in the past century by technical sophistication and mastery of things that scant years ago were thought impossible.

Can mankind destroy itself by way of greed and self-entitlement?

Yes, but not in the way most imagine. Our greed causes us to do many stupid things. Destroying wild areas to ranch beef or produce other food inefficiently and mostly irresponsibly, we come into contact with creatures that carry pathogens we have no defences against, and we get pandemics and people die. They are becoming more frequent and they are getting worse, and as it turns out our “cures are worse than the curse”, but highly profitable for some.

The solution is actually quite simple: we need to change the way we live, not just with respect to each other but to the environment we live in.

“Will we change? Will we be allowed to change?”

I suspect “No” in both cases, thus the rut will get ever deeper…

cybershow March 20, 2024 6:33 AM

I enjoyed this, and it inspired me to finish a post by way of reply to Bruce and some other authors that’s been on my mind. It’s about how we talk about – or rather “around” – these social problems.

Words Betray Us

Peter March 20, 2024 9:49 AM

@Bruce and @All.
Regarding this subject: maybe, just maybe, for the first time the EU has done something on time:

“On Wednesday March 13, EU Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation. “

https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law

The USA has also started to adopt laws regarding AI, at the federal and state level, although not as far-reaching as the EU’s:
https://www.thomsonreuters.com/en-us/posts/legal/legalweek-2024-ai-regulation/

So, where will this get us, and is this enough and on time?

vas pup March 20, 2024 7:13 PM

@Clive said “As I’ve commented over the past few years,

1, Technology is agnostic to use
2, It is the Directing mind that decides how it is used.
3, It is the Observers of the use that decide if it is good or bad.”

On 3: Observers are sometimes so biased and unobjective that we can’t trust their judgment on what is good and what is bad.

It is not like the observation of elementary particles, which change their behavior when observed regardless of the observer’s demographics, ideology, religion, you name it (or has no test proved otherwise?). Say a judge (male) has hemorrhoids, or had a bad morning argument with his wife: could he be rougher in his decisions? Or a female judge during different parts of the menstrual cycle (which objectively affects emotional state)?

Recently, in day-to-day life, we can often see how right these statements are: “The lunatics are running the asylum.” and
“The end times will come, when nine sick people will come to one healthy person and say: – You’re sick because you’re not like us.”

Clive Robinson March 20, 2024 8:42 PM

@ vas pup,

Re : The 90% Rule.

“The end times will come, when nine sick people will come to one healthy person and say: – You’re sick because you’re not like us.”

Back in the early 1950s, a US SciFi author made an observation based on the criticism by proponents of other types of fiction that SciFi was rubbish, or the equivalent thereof.

He made a statement that has a universality about it,

“Ninety percent of everything is crap!”

Whilst you might disagree with the 9/10ths, the simple fact is that in nearly every creative domain, by far the majority is bad.

What many don’t realise is that the inverse, by being self-similar, gives you the percentage-of-a-percentage rule,

“Ten percent of everything is good; of that ten percent, ten percent is very good; and of that ten percent of ten percent, ten percent is excellent; and so on.”

This percentage-of-a-percentage rule is actually the exponential curve, and due to being self-similar it is a universal curve where you scale either the X, the Y, or both axes to fit it. It has other curious properties, one being “log scales” that enable multiplication to be done by addition. It also pops up in maths as a fundamental almost everywhere. It’s been said that,

“Nature is all about circles and growth”

And “e” is found underneath them both.
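
A quick sketch of the arithmetic behind this (my notation, not the commenter’s): keeping “the best ten percent” n times over leaves the fraction

    \[
    0.1^{n} = 10^{-n} = e^{-n \ln 10},
    \]

an exponential in n, which is where e comes in. And a log scale turns that repeated multiplication into addition, since \(\log(ab) = \log a + \log b\).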

But the thing is, it does not require “nine sick people”; just one will do, if we give them time to grow their power.

It’s a sickness endemic in politics, where the minority take time but subjugate the masses bit by bit. We give it fancy names like gerrymandering, but the point is the same: they gain power in little steps. So when they do come knocking on your door, as they will, you no longer have either your rights or the opinion of the majority to defend yourself with.

It is the true tyranny of the minority, who like all cowards are bullies: they come against you, an individual, like a pack of feral dogs. They may not outnumber the masses, but they always outnumber an individual; thus one by one the majority get picked off by the minority.

As @Winter has pointed out a number of times, in the US currently it is women who are the target of a very few individuals best described as mentally sick, who work through others that are as sick mentally but in different ways. Yes, they claim they are doing “God’s Work” or similar nonsense, but their real purpose is to be seen to be better than their peers. Some call it “prideful”; I call it dangerous, because they are usually just foolish puppets for others who really are socially undesirable in just about every way imaginable by ordinary people.

Winter March 24, 2024 11:59 AM

@R White

It’s extreme to say elections in the United States can be stolen

Without evidence, yes.

It is extreme to say your candidate won the elections if the count and recount showed he lost. It is extreme to storm Parliament violently shouting “hang Pence” to prevent the winning candidate from being installed.

It is also extreme to threaten to kill the health advisor of the president.

Anonymous March 27, 2024 9:39 AM

“yep, 11 figures”

I am not sure if this is an error, but 300 billion has 12 figures.
