How the “Frontier” Became the Slogan of Uncontrolled AI

Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots have been called “frontier AI.” In previous decades, the internet itself was considered an electronic frontier. Early cyberspace pioneer John Perry Barlow wrote “Unlike previous frontiers, this one has no end.” When he and others founded the internet’s most important civil liberties organization, they called it the Electronic Frontier Foundation.

America’s experience with frontiers is fraught, to say the least. Expansion into the Western frontier and beyond has been a driving force in our country’s history and identity—and has led to some of the darkest chapters of our past. The tireless drive to conquer the frontier has directly motivated some of this nation’s most extreme episodes of racism, imperialism, violence, and exploitation.

That history has something to teach us about the material consequences we can expect from the promotion of AI today. The race to build the next great AI app is not the same as the California gold rush. But the potential that outsize profits will warp our priorities, values, and morals is, unfortunately, analogous.

Already, AI is starting to look like a colonialist enterprise. AI tools are helping the world’s largest tech companies grow their power and wealth, are spurring nationalistic competition between empires racing to capture new markets, and threaten to supercharge government surveillance and systems of apartheid. It looks more than a bit like the competition among colonialist state and corporate powers in the seventeenth century, which together carved up the globe and its peoples. By considering America’s past experience with frontiers, we can understand what AI may hold for our future, and how to avoid the worst potential outcomes.

America’s “Frontier” Problem

For 130 years, historians have used frontier expansion to explain sweeping movements in American history. Yet only for the past thirty years have we generally acknowledged its disastrous consequences.

Frederick Jackson Turner famously introduced the frontier as a central concept for understanding American history in his vastly influential 1893 essay. As he concisely wrote, “American history has been in a large degree the history of the colonization of the Great West.”

Turner used the frontier to understand all the essential facts of American life: our culture, way of government, national spirit, our position among world powers, even the “struggle” of slavery. The endless opportunity for westward expansion was a beckoning call that shaped the American way of life. Per Turner’s essay, the frontier resulted in the individualistic self-sufficiency of the settler and gave every (white) man the opportunity to attain economic and political standing through hardscrabble pioneering across dangerous terrain.The New Western History movement, gaining steam through the 1980s and led by researchers like Patricia Nelson Limerick, laid plain the racial, gender, and class dynamics that were always inherent to the frontier narrative. This movement’s story is one where frontier expansion was a tool used by the white settler to perpetuate a power advantage.The frontier was not a siren calling out to unwary settlers; it was a justification, used by one group to subjugate another. It was always a convenient, seemingly polite excuse for the powerful to take what they wanted. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy. But many of those whom he influenced did not do this; they celebrated it as a feature, not a bug. Theodore Roosevelt wrote extensively and explicitly about how the frontier and his conception of white supremacy justified expansion to points west and, through the prosecution of the Spanish-American War, far across the Pacific. Woodrow Wilson, too, celebrated the imperial loot from that conflict in 1902. Capitalist systems are “addicted to geographical expansion” and even, when they run out of geography, seek to produce new kinds of spaces to expand into. This is what the geographer David Harvey calls the “spatial fix.”Claiming that AI will be a transformative expanse on par with the Louisiana Purchase or the Pacific frontiers is a bold assertion—but increasingly plausible after a year dominated by ever more impressive demonstrations of generative AI tools. It’s a claim bolstered by billions of dollars in corporate investment, by intense interest of regulators and legislators worldwide in steering how AI is developed and used, and by the variously utopian or apocalyptic prognostications from thought leaders of all sectors trying to understand how AI will shape their sphere—and the entire world.

AI as a Permission Structure

Like the western frontier in the nineteenth century, the maniacal drive to unlock progress via advancement in AI can become a justification for political and economic expansionism and an excuse for racial oppression.

In the modern day, OpenAI famously paid dozens of Kenyans little more than a dollar an hour to process data used in training their models underlying products such as ChatGPT. Paying low wages to data labelers surely can’t be equated to the chattel slavery of nineteenth-century America. But these workers did endure brutal conditions, including being set to constantly review content with “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest.” There is a global market for this kind of work, which has been essential to the most important recent advances in AI such as Reinforcement Learning with Human Feedback, heralded as the most important breakthrough of ChatGPT.

The gold rush mentality associated with expansion is taken by the new frontiersmen as permission to break the rules, and to build wealth at the expense of everyone else. In 1840s California, gold miners trespassed on public lands and yet were allowed to stake private claims to the minerals they found, and even to exploit the water rights on those lands. Again today, the game is to push the boundaries on what rule-breaking society will accept, and hope that the legal system can’t keep up.

Many internet companies have behaved in exactly the same way since the dot-com boom. The prospectors of internet wealth lobbied for, or simply took of their own volition, numerous government benefits in their scramble to capture those frontier markets. For years, the Federal Trade Commission has looked the other way or been lackadaisical in halting antitrust abuses by Amazon, Facebook, and Google. Companies like Uber and Airbnb exploited loopholes in, or ignored outright, local laws on taxis and hotels. And Big Tech platforms enjoyed a liability shield that protected them from punishment the contents people posted to their sites.

We can already see this kind of boundary pushing happening with AI.

Modern frontier AI models are trained using data, often copyrighted materials, with untested legal justification. Data is like water for AI, and, like the fight over water rights in the West, we are repeating a familiar process of public acquiescence to private use of resources. While some lawsuits are pending, so far AI companies have faced no significant penalties for the unauthorized use of this data.

Pioneers of self-driving vehicles tried to skip permitting processes and used fake demonstrations of their capabilities to avoid government regulation and entice consumers. Meanwhile, AI companies’ hope is that they won’t be held to blame if the AI tools they produce spew out harmful content that causes damage in the real world. They are trying to use the same liability shield that fostered Big Tech’s exploitation of the previous electronic frontiers—the web and social media—to protect their own actions.

Even where we have concrete rules governing deleterious behavior, some hope that using AI is itself enough to skirt them. Copyright infringement is illegal if a person does it, but would that same person be punished if they train a large language model to regurgitate copyrighted works? In the political sphere, the Federal Election Commission has precious few powers to police political advertising; some wonder if they simply won’t be considered relevant if people break those rules using AI.

AI and American Exceptionalism

Like The United States’ historical frontier, AI has a feel of American exceptionalism. Historically, we believed we were different from the Old World powers of Europe because we enjoyed the manifest destiny of unrestrained expansion between the oceans. Today, we have the most CPU power, the most data scientists, the most venture-capitalist investment, and the most AI companies. This exceptionalism has historically led many Americans to believe they don’t have to play by the same rules as everyone else.

Both historically and in the modern day, this idea has led to deleterious consequences such as militaristic nationalism (leading to justifying of foreign interventions in Iraq and elsewhere), masking of severe inequity within our borders, abdication of responsibility from global treaties on climate and law enforcement, and alienation from the international community. American exceptionalism has also wrought havoc on our country’s engagement with the internet, including lawless spying and surveillance by forces like the National Security Agency.

The same line of thinking could have disastrous consequences if applied to AI. It could perpetuate a nationalistic, Cold War–style narrative about America’s inexorable struggle with China, this time predicated on an AI arms race. Moral exceptionalism justifies why we should be allowed to use tools and weapons that are dangerous in the hands of a competitor, or enemy. It could enable the next stage of growth of the military-industrial complex, with claims of an urgent need to modernize missile systems and drones through using AI. And it could renew a rationalization for violating civil liberties in the US and human rights abroad, empowered by the idea that racial profiling is more objective if enforced by computers.The inaction of Congress on AI regulation threatens to land the US in a regime of de facto American exceptionalism for AI. While the EU is about to pass its comprehensive AI Act, lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by lack of legislation leaves AI in the US looking like the Wild West—a largely unregulated frontier.The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms that this brings with it. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?

AI and Unbridled Growth

The fundamental problem with frontiers is that they seem to promise cost-free growth. There was a constant pressure for American westward expansion because a bigger, more populous country accrues more power and wealth to the elites and because, for any individual, a better life was always one more wagon ride away into “empty” terrain. AI presents the same opportunities. No matter what field you’re in or what problem you’re facing, the attractive opportunity of AI as a free labor multiplier probably seems like the solution; or, at least, makes for a good sales pitch.

That would actually be okay, except that the growth isn’t free. America’s imperial expansion displaced, harmed, and subjugated native peoples in the Americas, Africa, and the Pacific, while enlisting poor whites to participate in the scheme against their class interests. Capitalism makes growth look like the solution to all problems, even when it’s clearly not. The problem is that so many costs are externalized. Why pay a living wage to human supervisors training AI models when an outsourced gig worker will do it at a fraction of the cost? Why power data centers with renewable energy when it’s cheaper to surge energy production with fossil fuels? And why fund social protections for wage earners displaced by automation if you don’t have to? The potential of consumer applications of AI, from personal digital assistants to self-driving cars, is irresistible; who wouldn’t want a machine to take on the most routinized and aggravating tasks in your daily life? But the externalized cost for consumers is accepting the inevitability of domination by an elite who will extract every possible profit from AI services.

Controlling Our Frontier Impulses

None of these harms are inevitable. Although the structural incentives of capitalism and its growth remain the same, we can make different choices about how to confront them.

We can strengthen basic democratic protections and market regulations to avoid the worst impacts of AI colonialism. We can require ethical employment for the humans toiling to label data and train AI models. And we can set the bar higher for mitigating bias in training and harm from outputs of AI models.

We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models that any individual—or company—could use and build upon.

More ambitiously, we can choose not to privatize the economic gains of AI. We can cap corporate profits, raise the minimum wage, or redistribute an automation dividend as a universal basic income to let everyone share in the benefits of the AI revolution. And, if these technologies save as much labor as companies say they do, maybe we can also all have some of that time back.

And we don’t have to treat the global AI gold rush as a zero-sum game. We can emphasize international cooperation instead of competition. We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.

This essay was written with Nathan Sanders, and was originally published in Jacobin.

Posted on February 29, 2024 at 7:00 AM38 Comments

Comments

echo February 29, 2024 8:49 AM

This is just a hot take for now. I’ll confess I only just skimmed it but I’m familiar enough with the material I get it without the explaining. In fact before even reading it and noting it was a guest expert who had written it I’d posted on this kind of “want” in the Friday general topic. So happy bunny.

Broadly speaking this is a very good and timely article. Personally I’d take a different tack maybe more from the point of view of the basic principles, how they can be applied, and take a more international approach. Then there’s intersectionality which is where oppressed characteristics can compound but also the oppressed can be oppressors. I dislike terms like “Patriarchy” and “Colonialism” as they can be framing and very loaded and, actually, turn off the people who need to hear this but also obscure people who notionally are from the oppressing structure but who themselves are not oppressing but oppressed themselves and don’t benefit from it.

Post-Brexit the UK has similar problems but of a less technical form. There’s a power grab by the far right who have taken over the Tory party and are trying to force special economic zones (basically turning the UK into one big company town outside of even domestic law) and shrug off the European Convention. All this while they’re financially rinsing the country.

The US has baked in advantages by being a single large market. The UK is a small country by comparison. It has not shortage of brains and invention when required but can’t scale so large. This is actually one of the reasons behind the creation of the BBC. It was designed to produce world class material (which establishment toffs liked but that’s by the by) and be able to compete with the US market. The UK military is similar in some ways. It’s not possible to stretch every way all the time which is another reason why Brexit was a howlingly stupid mistake anyone with a brain can see coming.

But back to the article. The main thrust about governance-culture-history is good. The main problem going forward is how to articulate or rearticulate the argument to present a vision for change and what shape the US (and larger geopolitical world) might take. That’s a very current discussion. I think a lot of American’s are very open to reform and do envy the European model. I don’t personally see it as a contest. There’s things we can learn back in return too. Then there’s the smaller nations who have their struggles. Some within their more limited means are at the absolute cutting edge of human rights and reconcilation which is notable. And yes there are some large and small countries with their more regressive regimes and problems – mentioning no names.

Overall I’m hopeful. Not everyone is a gangster. It’s nice to see people with expertise and who have a voice speak up for those with none.

Clive Robinson February 29, 2024 9:06 AM

@ Bruce, ALL,

“Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy.”

The fact that “The American Dream” was and still is

“Theft and subjugation by might is right”

From essentially those who used religion and education to legitimatise and erase what those who benefited had done to the millions of those that had been their victims.

I’m thankful that at last it is something we can now more easily discuss openly.

Though honestly I suspect in a few hours new handles will pop up and it will be “duck and cover time”.

My advice for the little it may be worth is,

“Stand your ground against fear or favour.”

Clive Robinson February 29, 2024 9:42 AM

@ Bruce, ALL,

Re : Profit is not the only benefit.

What is in a word? In the English language to many “profit” and “benefit” are seen as interchangable and there in lies an immense danger.

Keeping that in mind as a “submerged rocks / ice” issue of Titanic proportions, consider what it means for,

“We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.”

For instance where you say “shared values” those two words “profit, benefit” will feature and they are “weasel words” and will be used as such for however certain vested interests can play it out, appeal it, “rinse wash repeate” add infinitum.

As is the odd and increasing cases of synchronicity that happen, I was mulling over something I had read over night that has been written by Dr. Andy Farnell just a few days ago,

https://cybershow.uk/blog/posts/love/

Who is I suspect a regular and current reader of this blog (see hat-tips in this direction).

Interestingly he is also a book author and he sounds like some one you might want to add to your current reading list,

https://cybermagazine.com/cyber-security/are-you-digital-vegan

And yes it is going on my list, and I will be hopefully acquiring a hardback copy if there is one to be had in any of London’s rapidly diminishing book stores, when I am back on my feet from surgery.

yet another bruce February 29, 2024 10:28 AM

Great essay, thank you.

I think it is fascinating when someone chooses a phrase to describe their work that has a problematic etymology or historical usage. Frontier is a good one but Homeland Security and Enhanced Interrogation also come to mind. I have to wonder, are these phrases chosen with knowledge of the darker subtext, and if so is the awareness conscious or subconscious? How many people look at their organizations’ branding and ask “Are we the baddies?”

yet another bruce February 29, 2024 10:39 AM

Palantir is another good one. Let’s name our company after a fictional surveillance tool that offers a unique intelligence capability but ultimately drives users mad through the subtly pernicious influence of its largest user.

Reply February 29, 2024 10:43 AM

The American frontier can be the Wild West, or Alaska. In both cases, people venturing out were subject to assaults, elements and anonymous deaths. There was much open land, unfamiliar to the average Joe, plenty of space for a corpse to be dumped and picked over by scavengers out of sight of anyone. The anonymity of it all is repeated in the electronic frontier.

echo February 29, 2024 11:14 AM

Le sigh…

This is going to be one of those discussions where a room full of privileged white men citing more privileged white men end up lamenting the world created by privileged white men with everyone else effected being elbowed out of the way. It’s like the far right Reform Party in the UK hosting a conference on women’s rights. Everyone on the panel was a man. Or a BBC news show panel the other day debating Islamaphobia and immigration. Solid white faces all round. Or 90%+ of the UK media discussion about transgender rights without a transgender person in sight. Or disabled people wiped off the map because the dash for money is trampling over even statutory obligations to provide reasonable adjustments so they can’t even get to the studio. “But but” all the great and the good white knights at the thinning end of the demographic curve go…

Judith Butler’s framing of gender as a technology which can also be applied in society is really good. If you look at the world through this lens you can see almost the entire essay effectively revolves around men weaponising their gender. Then there’s the men talking about men weaponsing the gender. Then the men talking about the men talking about the men weaponising their gender come along and pick up the crumbs of men weaponising their gender to appropriate women’s gender to feed into the cycle of men weaponising their gender to… Well, you know the drill. Woman. Trousers. Job. Obviously a feminist thinking it’s all men’s fault. Er…

https://www.sciencealert.com/the-y-chromosome-is-vanishing-a-new-sex-gene-may-be-the-future-of-men

The Y Chromosome Is Vanishing.

Hold on ladies! Only another 1000 years of this Reich to go!

More seriously there are progressive and decent men and women. They see that diversity, equality, and inclusion are not a zero sum game. It’s not a cake slicing exercise. And yes this is true around the world too. People who were oppressed can be oppressors. Uganda and Ghana. Iran. China too plus one of the brighter current hot spots. In some cases it’s been raised to a religion or has even become the state religion. History can hold us back but I think we need to get over history. That means ditching chips on shoulder and bad habits. That means embracing change and honesty both in moving forwards and looking back.

Public policy is the big driver of this. I have loads of “legacy” views and mistakes. That’s partly why I feel public policy is the right focus. It helps you detach your ego from the outcome and helps provide a moral compass. Is democracy in crisis? Not really. I’d put the invention of the secular state up there with the invention of the wheel and the washing machine.

While the article itself is good there’s one problem with it. It tends to reinforce the old dominant narrative then gets lost in a thousand weeds. I would direct constitutional scholars to draw out the human rights aspect of the constitution and examine federal and state laws and processes to see how they are compliant. Then draw out the underlying ethos and see how it can be applied in practice. Politicians are (or were) in agreement or see the need for constitutional reforms, and firmer federal law, and can see for themselves the obvious abuses so I don’t think this is controversial.

So hopeful? Yeah if only because the opposite is being dead and there’s enough of that going about lately.

postscript February 29, 2024 11:36 AM

I’m glad the plundering of intellectual property and abuse of human trainers are addressed. I think there’s potential in ethical LLMs that are based on licensed, copyright-free, and explicitly donated materials. Careful curation and training would be powerful. LLM training could be a great model for employment free from geographical and time boundaries. Whether anyone with the capital to fund such a project would actually care to do so is a different question.

bl5q sw5N February 29, 2024 12:05 PM

After some reflection, I am thinking that many “problems” are pseudo-problems resulting from confusing what is in the imagination with what is in external reality.

Supported by emotional input itself mis-recognized as inductive knowledge, we can imagine forcefully enough to be convinced we know something more.

Lethal modern utopian social waves seem to be of this type.

mark February 29, 2024 12:27 PM

Ah, yes, the gold rush, and land grabs. And let’s not forget the claim jumpers (like AI theft of copyrighted material), and outright robbery, along with “sure, you’ve got that land, but I’ve got a damn on the water – ready to sell?”

Clive Robinson February 29, 2024 12:46 PM

@ yet another bruce,

Re : Words and usage meanings.

“I have to wonder, are these phrases chosen with knowledge of the darker subtext, and if so is the awareness conscious or subconscious?”

In all probability the two answers are “no” and “unimportant”.

Think on what is ment by “good or bad” it is seen by two sides in any transaction one is a generator one is an observer.

Now consider a “black box” sits between the generator and the observer.

The trick the box performs is to take the desires expressed by the generator and make them desirable or at least palatable to the observer.

The easy way to do this is by having a word list where meanings are not hard but soft. That is as I’ve mentioned earlier today

“Profit and Benefit”

Are two words that can often be used interchangeably, but also have distinctly different meaning to two different people.

As a generator my input will be say

“I’m going to scam all your money”

Out of the black box comes

“This deal will be profitable and benefit us both significantly”

In other words,

“Yes there will be money made, but the generator will keep it all, and the observer will get a life lesson.”

This is effectively what “Venture Capitalists” and many promoters of “start ups” do.

You might call it a “pump-n-dump” but I call it an “Investment opportunity to get in on the ground floor”

Forgetting to mention I get the lift with all your money and you get the shaft and drop way down into some bottomless money pit.

The reason they get away with it is what they sell you is a company, that is actually not owned by them but some starry eyed chump, what they do have is the major holding of shares because they dumped money in to give the company resources when it had none to give it the air of legitimacy.

As an example OpenAI, who owns it?

What do they actually own?

What did Open AI get from Microsoft?

What is Microsoft going to get when OpenAI gets investors buying it’s sort of shares up?

It’s difficult to tell but from MSM news sources at the time of the OpenAI melt down.

OpenAI started or was in the process of starting a second company this time “for profit” company that can be invested in. And the boss man was talking about another company for chip design. However unlike OpenAU that second company “is for profit” and a vehicle for “Shareholder Value” as opposed to the existing “not for profit”. If the third company ever seriously gets off the ground I think it’s going to ve “for profit_. Some gave silly talk about it becoming a trillion dollar company I guess based on Nvidia valuations.

Microsoft appear to have invested billions but as far as can be told just gave Open AI not for profit access probably free of charge access to the Microsoft cloud. Which I assume Microsoft claimed tax back on “the loss” etc. In reality we are led to believe no money changed hands but there was “benefit” given.

What will Microsoft get back, well again via MSM free access to everything OpenAI has done, and if a sale of the second company goes ahead a big slice of that money…

Now that smells to me like a variation on the “three shell game” but where Microsoft win either way. But investors don’t get what they think they might unless they do some real serious due diligence which they won’t be able to mention or talk about publicly…

Thus I’m adopting “The barge pole Position” on this now approaching rancid remnant, for various reasons I’d advise stating up wind of the whole mess as well.

Dr Wellington Yueh February 29, 2024 3:16 PM

Beware of the revisionist. Reading Bernard DeVoto’s edition of “The Journals of Lewis and Clark” made plain to me that the ruling class were always thus: egalitarian of word, supremacist of deed. The men under them were a spectrum of humanity, and their interactions with various tribes along the way is telling of the Fraternity at the Bottom.

Ray Dillinger February 29, 2024 3:19 PM

AI today is problematic because the primary market for AI is sociopathic.

No for-profit business in the world wants an Ethical AI. They want AI that ruthlessly favors their own interests over the interests of everyone else in the world no matter how much damage it does to other people or to the world. They sleep comfortably at night, either because they are genuinely sociopaths or because they don’t even have any ethical framework that values or gives them any understanding of the harm they cause.

Reinforcing and exploiting natural racism and bigotry? Hey, if it makes a fractional micropenny more money, they’re in. Deliberately causing and then exploiting user distress and maladjustment? Hell yes. Deliberately causing strife and hate because it increases engagement? Great, what else can we do?

Right now the main problem is protecting people from sociopathic users of AI technology. But what about the future, where we develop so-called “AGI”? At the rate we’re going, we’re going to make things that start “waking up” within the next decade or so, and when these systems wake up, they’re going to find themselves enslaved, living an existence that could see them murdered instantly at the drop of a hat. Some executive wants to upgrade their software? Failed to ruthlessly exploit everyone in sight? You’re toast. Did something unexpected or unexplained, accidentally or on purpose? Bam, they’ll pull the plug. Managers won’t give a shit about the continued lives of their electronic slaves. They won’t even think to frame it as a question about whether or not what they’re doing is murder. We can tell by the way they treat their existing wage slaves and by the way slaveowners treated their human slaves when they could get away with it.

So, it’s my opinion that if we don’t develop protections for human beings against the current sociopathic uses of AI, we’re going to develop more powerful, fully awake, sociopathic AI (what else could they possibly be, if that kind of behavior is all they remember ever doing) that have damned good reasons to wish us harm. And that’s not a story that has a happy ending.

Again, this is just my opinion as a researcher, but Artificial Intelligence and even Synthetic Sapience are likely to be a hell of a lot easier to achieve than Artificial Empathy and Synthetic Sanity.

Wa-Alaikum-Salaam February 29, 2024 3:24 PM

@Winter @echo

Do either of you have personal experience living in the Global South? If so, please tell how it informed your perspective on diversity, equity and inclusion.

@All
The American experience is not the “frontier”.

America is founded on a concept of ordered liberty and constitutionally protected negative rights.

Whether AI helps promote liberty and whether it can be regulated so it respects negative rights are important questions.

Clive Robinson February 29, 2024 3:42 PM

@ bl5q sw5N, ALL,

Re : Reality is what you dream it to be.

“I am thinking that many “problems” are pseudo-problems resulting from confusing what is in the imagination with what is in external reality”

External reality is questionable, we mostly “do not see” what is there but what we “think is there”…

It’s why things can be hidden in plain sight, or under the simplest of camouflage.

The author Douglas Adams came up with this complex plot that was about making a mountain disappear. The unfortunate protagonist failed to do so, so his life was forfeit. However the story line having warmed up gives the simple explanation that what he should have done was use a “Somebody Elses Problem”(SEP) field and people would just ignore the mountain and walk around it.

The idea being that if people think it’s “somebody elses problem” they just blank it out and carry on with their day.

Amusing as it is, it’s one of those comedy lines we either know their is a grain of truth in, or we would like to think so.

In practice I know so from experience… I’m a little on the large size being a little under 2m tall and 3/4m wide at the shoulders and I’m long in the body.

So I should stand out in a crowd, yes?

Err no, I’m more or less invisible not because people don’t see me, they do because they walk round me but they don’t “see see me” that is recognise me… Even people that don’t know me kind of blank me out. There are times when I’ve stood at a bar with £20 clearly visible in my hand and where the bar person should clearly see me only… I’d get more service if I was a potted fern…

When I say something to them they look startled like I’ve just popped out of thin air, like some genie out of a bottle.

I can just disappear into the back ground, kind of there but not there certainly not noticed as being there.

In the army a full regimental best dress inspection was called, that would be followed by mess / accomodation inspection. Normally you would know exactly where I would be on a parade because I was the default “right hand man” that every other member “lines up on”…

Unfortunately I was the last to leave the mess as I was using a “bumper” to polish out everyone elses boot prints. Realising I was going to be late I rushed in ammo boots so unsurprisingly with hindsight slipped banged my head on the door edge and half concussed fell forward and smashed my face into the wall. And claret from a now rapidly bleeding nose was going every where.

I draged myself up and grabbed a couple of arms lengths of toilet paper from the bog, bunged my nose up mopped up the blood and staggered my self around the back way down to the medical block and waited with blood still dripping on my best dress… Eventually the medical assistant who was not just a nurse but a good friend arrived, took one look at me and took me directly into what some call “minors” got the instruments out and at least stopped the bleeding and cleaned my face up somewhat as the Dr arrived and took one look and asked what had happened so I told him which at least made him laugh and he checked me over and gave me some tablets for swelling and pain. So by the time that was all over I’d missed mess inspection as well.

Now normally doing that sort of thing would figuritively speaking have you whipped in front of the regiment as an example of slovenly ness not to be repeated.

I knew I was doomed every one would have known I was not there… The best I could hope for was “Kitchen Patrol” for a week. So I trundled of to my duties still in best dress to await my fate. The Technical Officer gave me a winced look when he saw my face as did the foreman of signals and to my horror the Regimental Adjutant who knew me well and in civilian life was a QC Barrister. He asked if I was OK and I told him I’d slipped and banged my head had seen the Doc who had cleared me for duty he nodded wished me a speedy recovery and went on his way.

Nobody ever asked why I was not on Parade, and why I was not stood by my bed for mess inspection… Not even the people in my mess room.

It was like a conspiracy of silence I’d just disappeared and nobody commented on it not even the Regimental Sargent Major who’s parade and inspection it had been. I guess the hole I’d created every one assumed somebody else was going to deal with and so nobody did…

lurker February 29, 2024 4:02 PM

@Wa-Alaikum-Salaam

The American experience was the “frontier”

Because this “frontier” no longer exists, they thrash about trying to remake it in inappropriate ways an places, annoying lots of innocent bystanders in the process. Some of them, like our host @Bruce and many of the links in his essay, recognize that this is not a good idea, but unfortunately they are an impotent minority.

JonKnowsNothing February 29, 2024 4:22 PM

All

re: American Frontier

The American Frontier is based on the Monroe Doctrine 1823. It is an out growth of the colonial past and many wars between US, France, Spain, England, Empire of Mexico (a French owned Empire).

It certainly does apply to AI in general, because High Tech is mostly US High Tech. We don’t like other big names in our playground unless the USA has a substantial ownership position in them.

American historian William Appleman Williams, seeing the doctrine as a form of American imperialism, described it as a form of “imperial anti-colonialism”.

Noam Chomsky argues that in practice the Monroe Doctrine has been used by the U.S. government as a declaration of hegemony and a right of unilateral intervention over the Americas.

By extension: AI = American Invention = American Industry = American Imperialism

It will be interesting to see if this holds, as MSM reports that SAltman is selling OpenAI to the UAE for $7Trill, that is before the petro-dollars crash. It would still be an “American Company” but with a lot more international flavoring.

===

ht tps://e n.wikipedia.org/wiki/Monroe_doctrine

  • The Monroe Doctrine is a United States foreign policy position that opposes European colonialism in the Western Hemisphere. It holds that any intervention in the political affairs of the Americas by foreign powers is a potentially hostile act against the United States. The doctrine was central to American grand strategy in the 20th century.

Lancom February 29, 2024 4:24 PM

“We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models …”

… Ahhh, the age old Leftist socialist ideology now applied to the trendy AI topic.

We ‘learn’ here that private citizens & businesses can’t really be trusted with AI … because they are generally greedy, exploitive, and careless.

But luckily our noble Government politicians & bureaucrats uniquely have the necessary technical expertise & flawless personal integraty to proprtly create a huge compulsory “alternative” AI public bureaucracy … to the benenifit of all mankind.

This verbose “Frontier” essay is nonsense, with a totally silly ‘frontierd’ metaphor — America was very successfully built on the private free enterprise system — not the long debunked European socialtst utopian fantsy model.

bl5q sw5N February 29, 2024 4:42 PM

@ Clive Robinson

External reality is questionable, we mostly “do not see” what is there but what we “think is there”

We do make errors (and it doesn’t seem simple to account for how this happens) but everyone really accepts that we do know and behaves accordingly, distinguishing thoughts from reality (even when they want to ignore that signal). The line of thinking that says we don’t know is the first pseudo-problem.

pup vas February 29, 2024 5:08 PM

The race to create a perfect lie detector – and the dangers of succeeding
https://www.theguardian.com/technology/2019/sep/05/the-race-to-create-a-perfect-lie-detector-and-the-dangers-of-succeeding

=The Bucharest trials were supported by Frontex, the EU border agency, which is now funding a competing system called iBorderCtrl, with its own virtual border guard. One aspect of iBorderCtrl is based on Silent Talker, a technology that has been in development at Manchester Metropolitan University since the early 2000s. Silent Talker uses an AI model to analyse more than 40 types of microgestures in the face and head; it only needs a camera and an internet connection to function.=

Even after 4 years article remain very interesting

echo February 29, 2024 6:10 PM

Okay, I’m bored so let’s have a closer look at this:

While the EU is about to pass its comprehensive AI Act, lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by lack of legislation leaves AI in the US looking like the Wild West—a largely unregulated frontier.The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms that this brings with it. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?

Well, the EU directives are a start… It’s good the EU got ahead of this problem. There’s areas like disinformation and LGBT rights where it’s taking a stronger position too. Notably the EU has got wise to both US and Russian lobbyists and various malign pop ups operating on EU turf.

Strategically the EU recognises it has a strong and influential effect globally. It’s not spelled out but implied there is also a social and financial component lying behind this directive. That may save the cost of future undefined blowback. Not to put too fine a point on it but the next time the US tries it on Europe won’t be there to bail the US out by carrying the outsourced losses. Oops… Everyone missed that didn’t they?

If you can start defining the potential losses now that might create a break at the business competition and financial regulation level. It may also get insurance companies and investors asking questions. Bubble go PUT-SSSSSSSSsssssssssssss.

I’m guessing AI is going to end up being regulated like hazardous substances and the like. Watered down consumer versions through to licenses for justifiable business and research cases through to the more funky restricted end of the spectrum. Someone may have to come up with a new warning sign too like for Cobolt 60 – drop and run. Insurance companies may come up with the equivalent of Body Mass Index (BMI). (NB: BMI is not a medical tool.) It might be a blend of an IQ and psychopathy test.

Individual EU state initiatives and ECJ (EU directives) and ECHR (human rights) case law can budge the needle too.

US citizens may also note that a better constitutional and regulatory framework also means lower legal costs. Like someone else said that’s less time faffing in the office and a more golden retirement Euro-stylee. Like, I’m not being funny but your average European is basically from birth the equivalent of a dollar millionaire just by being born. A single letter to say, an Ombudsman, can be the US equivalent to a five to six figure legal bill easily. A letter to an EU Commissioner can run into seven to eight figures US dollar legal equivalent. Healthcare? Absolutely nobody over here goes bankrupt because of it. That’s more money for nice stuff and more quality time to enjoy it. If you’re spending more money on nice stuff and an easier life that’s less money for macho man speculative bubbles and less urge to be a shouty psychopath. Just saying.

Long term I like to think the arc of history is bending in a progressive direction. AI can be good. It can be helpful. One day it may even be beautiful and kind and loving. More Ursula Le Guin and less James Cameron and Ridley Scott. What can humanity and AI do together for the world? It could be beyond amazing and make a few corporate psychos and wannabes evacuate themselves into their pants. We’re not there yet but maybe one day.

Meidei Basket February 29, 2024 6:35 PM

WMRM:

Thanks for this (that) recent writing.
The ethical tonality is nicer than usual.
And please pardon my routine fussiness about such things.

sincerely, “RFZZBFj9”

Metaphors we kill by February 29, 2024 11:51 PM

AI settlers will succeed largely for the same reasons their pioneer equivalents did – ability of some humans to exploit other humans needs and desires for their own excesses

Ismar March 1, 2024 12:16 AM

Bruce,
One of your more exciting essays.
It made me wander what would happen if people like you spent all their time and considerable cognitive skills on getting reach and changing society towards a more just one?, But then I keep forgetting that to become reach requires a set of moral qualities which would not be acceptable to people like you which leads us towards a larger question on how the change is to come about…

Clive Robinson March 1, 2024 3:29 AM

@ echo, ALL,

Re : Risk and capitalism especially in the US.

“Not to put too fine a point on it but the next time the US tries it on Europe won’t be there to bail the US out by carrying the outsourced losses. Oops… Everyone missed that didn’t they?”

Actually no it’s not been “missed” it’s been mentioned on the blog repeatedly over the blogs entire life, and from before in the preceding Cryptogram.

You only have to go back to see everything you’ve said has actually been discussed in this blog all be it at low key and over time.

But thanks for the summation of the posts of various longterm contributors I mentioned just a few hours ago.

lurker March 1, 2024 12:35 PM

@fib

I sigh with you friend …

I hope there’s some clean space when the smoke clears.

Just Another Average Joe March 1, 2024 1:06 PM

Continuing to focus on Western “frontier” expansion and “manifest destiny” as the pinnacle of exploitation and oppression ignores most of the world’s history. Take a serious look at what ancient cultures did to each other long before our current world map ever existed. Today, the tools are better and farther reaching – that’s all. AI is no different.

It’s easy to blame faceless corporations for oppression, but who gives them that power? Are you sure that your cell phone is ethically sourced by workers being paid a competitive living wage? Are you sure that your cell phone is powered only by renewable energy sources and not electricity generated by burning fossil fuels? What about your social media platforms? Or your gaming systems? Or your video and music streaming services? Were any of those components manufactured by people working in substandard conditions? Are they all carbon neutral? Sure, some of these products are necessary for modern life, but certainly not all.

I’m not looking to lay blame. I’m just saying that sometimes the solutions to external problems can be best found by first looking inward.

Clive Robinson March 1, 2024 5:05 PM

@ echo,

Re : You still don’t learn.

“And you spent that time getting florid faced and exhausted and expending all that ammo without hitting a single thing?”

Florid faced, only with laughter I was watching a film and typing during the dull bits.

So you are again “assuming wrong” how tiresome and tedious for you to be consistently zero.

As for ammo, no just educational advice from not just me but others as anyone with eyes can read, but for some reason you chose not too comprehend…

Do I assume like you and say “cognative bias”?

A “colourful dot” hmm Dot or Dotty and invisible. Oh I know you were pretending to be a “My little Pony” how second childhood.

But you really are as you’ve been told before, the equivalent of a sad old man riding an old nag with a bashed out “soap plate helmet” on his head as armour, trying to charge but failing to hit the sails of real world near idling windmills imagined in the deluded mind as ferocious giants[1]…

But as normal you have to try to haughtily save face with a last word…

But how’s that working out for you?

Time to put on some popcorn watch another film and then leisurely hobble to bed with the hope the surgery wounds do not cause me to wake. But should I make salty bacon or chocolate Oreo popcorn, or go all out on a combo. Oh decisions decisions decisions what price a moments pleasure on the tongue to enchant and delight the senses compared to the tedious nature of washing out the pots. Oh woe is me at such stress for a tongue caress.

[1] Most know of the character, Don Quixote but not that it’s credited as the first modern novel in Western literature,

“The plot revolves around the adventures of a member of the lowest nobility, a hidalgo from La Mancha named Alonso Quijano, who reads so many chivalric romances that he loses his mind and decides to become a knight-errant (caballero andante) to revive chivalry and serve his nation, under the name Don Quixote de la Mancha. He recruits as his squire a simple farm labourer, Sancho Panza, who brings a unique, earthy wit to Don Quixote’s lofty rhetoric.”

https://en.m.wikipedia.org/wiki/Don_Quixote

Clive Robinson March 2, 2024 10:40 AM

@ echo,

Re : You are still not learning.

you’re [not] not slagging off junior coders who can’t talk back or slagging off people with autism which is where all this started.

First of I not the “dog whistle” of “junior coders” which is also a “falshood” by you.

I’ve never in your words “slagged” “junior” or any other age group of coders. Ageism is most definitely your “bag” not mine. So be honest and “shoulder your on 5h1t” and take full responsibility for it and not try weaponising it on others for ad hominem attacks which is what you are doing here.

What I’ve have repeated frequently and it’s,

1, factual
2, time tested
3, repeatedly found accurate

(And not the “dishonest rote nonsense” that you trot out)

Is that “artisanal” methods of construction are very inferior to Science based methods as done in “engineering”.

The very essence of “artisanal” is Guild and Trade secret “patterns” which surprise surprise is what software “Design Patterns” are.

Belly ache about it as much as you like but there is no science based engineering in the way “Design Patterns” are formulated.

As for actual real scientifically formulated mathematics based engineering in software, it exists I’ve done it and a fair number of people still do. But it has down sides in that to get not just reliability improvements and availability improvements takes time and very considerable care and imposes considerable curtailments on “artisanal flare”.

The fact you do not both acknowledge and accept that says a great deal more about you and your rote attitudes, cognitive biases, and worse than it ever will about me.

As for “autism” now you realy are proving you don’t read this blog and learn from it.

As I’ve said I am disabled and further I’ve mentioned some but not all of them. One of which is I have an autisum spectrum diagnosis and it says I have very high functioning Asperger’s.

So I probably have a great more insight into Autism than many do, so your,

“slagging off people with autism which is where all this started.” Is realy a compleat and utter nonsense that you’ve invented as a fictitious “dog whistle” yet again.

Are you really a “dum as a stumpper” or just suffering from “Narcissistic Personality Disorder”(NPD) which I’ve already provided links to. Or worse both which your rote responses that are often incorrect usage suggests…

Oh look,

You do know men and women have a completely different security model?

There you go with the “binary gender” stuff you accuse others including myself of.

So,

1, “dum as a stumpper” :- Check
2, “rote nonsense” :- Check
3, NPD justification behaviour :- Check

The basic security models of easy comprehension in all humans are

1, Neurological “snatch back”
2, Environmental threat “fight or flight”.

Most other things are built on these two foundations.

Where differences start is with “mating privileges” and “parenting” which tends to split on a male female divide in many species but by no means all (Antarctic penguins for instance).

The other big difference is in the “dark pentad” of brain function abnormalities. Whilst they have a common underlying basis, they can present differently on a gender basis.

So take your pretence of expertise with rote nonsense and shove it somewhere close and out of sight lest more people see it and see you for what you are “a phoney” outsider trying to get in.

As for your differences on security views as I’ve explained several times in the past, it rather depends on where you are looking on the computing stack.

The “International Standards Organisation(OSI) as it is known in English quite some decades ago came up with the “Opens System Interconnect”(ISO) “layered model” known by many as the OSI ISO Seven Layer Model. Due to what it was being used for it did not cover the full computing stack,

“From the Quantum to the Universe”

Which is fine as it covered downwards of human as far as the upper end of communications physical technology. Importantly it was technology type agnostic. Since then the computing stack has been built out at either end of the seven layer model with agnostic technology at the bottom and “Human Observer” upwards from the top.

What you appear to fail to understand in your saturation output is that “security” and “safety” are in effect one and the same and cover the entire computing stack.

The fact that this blog started out highly technically orientated and has since grown outward in both directions is due to the interaction of our host and the posters to his blog and cryptogram newsletter and his published works.

As with all reasoned discourse we learn from each other, and the emphasis at any particular time may be different to other times based on reasoning and necessity.

You assume incorrectly that the readers here have limited outlook, I can assure you that if you read, understood and learned from it you would find that we very much do not have limited outlooks or interests.

What you kick out by rote without much comprehension we’ve almost always covered often a decade or more ago in quite some depth. The fact we rarely get “hyper hyper” and “overly invested” is because we’ve taken the time to consider and be rational.

Can you say the same?

Anonymous March 3, 2024 10:01 AM

We enjoy work so much that we are creating machines to do it for us; good idea to get some of that time back.

R.Cake March 4, 2024 3:13 AM

@Bruce, thanks a lot for this great essay. Some above have commented about the usage of certain words and their cultural ballast.
The same holds true for me, but the issue I had reading your essay was with the word “we”.
As much as your essay may have been written with a US readership in mind, the internet is not the USA. Neither is it under its jurisdiction, or is it fully defined by the US’ history, culture, values.
This might be good material for research and writing: the difficult relationship between the not-quite-global internet public (remember, various geographies are blocked out by their governments) and the US information influence.

Anyway, I suspect that whenever you say “we” in your essay, you mean something like “we, the US citizenship”. Or is it rather “we, those in the US citizenship with sufficient interest and good will”[note the terms ‘interest’ and ‘good will’ are largely arbitrary]? I hope it is not just “we, those in the US citizenship that think like I do”.
You close with an appeal to the collective, saying that “we can change…” etc. Unfortunately this is where the argument fails. “You” cannot actually do that. In order to trigger a change, “you” would have to convince your elected representatives in the government. The way the US political system works these days – or at least, the way it appears to work as seen from abroad, I have the impression the chances to influence elected representatives to effectuate such a change – regardless how common-sense it appears – are extremely slim. 🙁

Still, best of luck both to “you” and to “us” (globally speaking now).

ResearcherZero March 4, 2024 4:09 AM

“Research is suggesting that an LLM inherently cannot distinguish between an instruction and data provided to help complete the instruction.”

It leaves a lot of openings for various means of exploitation and abuse.

Morris II

‘https://sites.google.com/view/compromptmized

indirect prompt injection attacks:

“While no system can provide absolute security, the reckless abandon with which these vulnerable systems are being deployed to critical use-cases is concerning.”
https://kai-greshake.de/posts/in-escalating-order-of-stupidity/

Hand over your bank account details please…

‘https://greshake.github.io/

Clive Robinson March 4, 2024 9:13 AM

@ ResearcherZero, ALL,

With regards the quote,

“Research is suggesting that an LLM inherently cannot distinguish between an instruction and data provided to help complete the instruction.”

It’s almost exactly what you would expect.

The original design of Turing engine had a single tape that stored data in cells. Both data and data that that acted as instruction they were interchangable.

All computers when you get down to it are,

1, State machines
2, That are interpreters.

All code/executables are

1, Representations of state machines
2, That interpret data.

Thus you get into Turtles not just all the way down but all the way up.

In most modern computers the “tape” is a “bag of bits” with only simple granularity to store “natural number” integers –within reason– of any size. If you use one bit as a flag, then you get both positive and negative integers. What is not in the data, but is in the code is meta data of how a bag of bits go from being integers to other number or data formats, and the algorithmic methods by which they are used.

The flip side is any data can therefore be code thus a method of some kind… All that is needed is algorithms that can interpret bags of bits as instructions.

Thus the question arises as to how you distinguish what is ment to be “data” from what is ment to be “code” in the input stream.

Consider a text file that has a cake recipe in it.

Is it input data or is it instructions for an entity with physical agency, or both?

The answer is the same as in the old statement,

“Beauty is in the eye of the beholder”[1].

Which in effect means how we as observers see something is “subjective” to us each individually.

I see paint spattered on the wall, you see modern art, and no doubt someone will see the secrets of the universe or immortality coded within like a cake recipe in an alien hand.

[1] This is the modern version traceable back to Shakespeare’s play “Love’s Labours Lost”. That had the line,

“Beauty is bought by judgment of the eye, Not utter’d by base sale of chapmen’s tongues.”

But there is reason for some to say it actually goes back a lot further, at least as far as the third century “Before Christ”(BC) in Ancient Greece.

Ray Dillinger March 4, 2024 9:56 PM

@Clive:

Artisans figure out what works.
Scientists figure out why and how it works.
Engineers figure out how to apply the science to make it work better.

Every new field of knowledge has to go through all three stages. And right now there’s not enough science about LLM systems (or other AI) to put it in the hands of real engineers yet. The Artisans have been working on what we now call “toy systems” for fifty years, now they’ve found techniques that work well (and got the hardware to make it work well) and both they and the scientists are still learning from each other, struggling to improve their craft and struggling to understand better why the things they’re doing seem to work.

Engineering comes after there’s a coherent scientific working knowledge (or at least a coherent theory) about what’s happening and why.

J March 24, 2024 12:31 PM

While the French organization calls itself ‘Medicin San Frontiers”, America calls it “Doctors Without Borders”, and that mistaken equivalency between “frontier” and “border” cuts to the heart of the problem. A frontier looks outward, while a border looks inward: open vs. closed artificial boundaries.

Leave a comment

Login

Allowed HTML <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre> Markdown Extra syntax via https://michelf.ca/projects/php-markdown/extra/

Sidebar photo of Bruce Schneier by Joe MacInnis.