AI-Generated Law

On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to “regularly suggest updates” to the law and “accelerate the issuance of legislation by up to 70%.” AI would create a “comprehensive legislative plan” spanning local and federal law and would be connected to public administration, the courts, and global policy trends.

The plan was widely greeted with astonishment. This sort of AI legislating would be a global “first,” with the potential to go “horribly wrong.” Skeptics fear that the AI model will make up facts or fundamentally fail to understand societal tenets such as fair treatment and justice when influencing law.

The truth is, the UAE’s idea of AI-generated law is not really a first and not necessarily terrible.

The first instance of enacted law known to have been written by AI was passed in Porto Alegre, Brazil, in 2023. It was a local ordinance about water meter replacement. Council member Ramiro Rosário was simply looking for help in generating and articulating ideas for solving a policy problem, and ChatGPT did well enough that the bill passed unanimously. We approve of AI assisting humans in this manner, although Rosário should have disclosed that the bill was written by AI before it was voted on.

Brazil was a harbinger but hardly unique. In recent years, there has been a steady stream of attention-seeking politicians at the local and national level introducing bills that they promote as being drafted by AI or letting AI write their speeches for them or even vocalize them in the chamber.

The Emirati proposal is different from those examples in important ways. It promises to be more systemic and less of a one-off stunt. The UAE has promised to spend more than $3 billion to transform into an “AI-native” government by 2027. Time will tell if it is also different in being more hype than reality.

Rather than being a true first, the UAE’s announcement is emblematic of a much wider global trend of legislative bodies integrating AI assistive tools for legislative research, drafting, translation, data processing, and much more. Individual lawmakers have begun turning to AI drafting tools as they traditionally have relied on staffers, interns, or lobbyists. The French government has gone so far as to train its own AI model to assist with legislative tasks.

Even asking AI to comprehensively review and update legislation would not be a first. In 2020, the U.S. state of Ohio began using AI to do wholesale revision of its administrative law. AI’s speed is potentially a good match to this kind of large-scale editorial project; the state’s then-lieutenant governor, Jon Husted, claims it was successful in eliminating 2.2 million words’ worth of unnecessary regulation from Ohio’s code. Now a U.S. senator, Husted has recently proposed to take the same approach to U.S. federal law, with an ideological bent promoting AI as a tool for systematic deregulation.

The dangers of confabulation and inhumanity—while legitimate—aren’t really what makes the potential of AI-generated law novel. Humans make mistakes when writing law, too. Recall that a single typo in a 900-page law nearly brought down the massive U.S. health care reforms of the Affordable Care Act in 2015, before the Supreme Court excused the error. And, distressingly, the citizens and residents of nondemocratic states are already subject to arbitrary and often inhumane laws. (The UAE is a federation of monarchies without direct elections of legislators and with a poor record on political rights and civil liberties, as evaluated by Freedom House.)

The primary concern with using AI in lawmaking is that it will be wielded as a tool by the powerful to advance their own interests. AI may not fundamentally change lawmaking, but its superhuman capabilities have the potential to exacerbate the risks of power concentration.

AI, and technology generally, is often invoked by politicians to give their project a patina of objectivity and rationality, but it doesn’t really do any such thing. As proposed, AI would simply give the UAE’s hereditary rulers new tools to express, enact, and enforce their preferred policies.

Mohammed’s emphasis that a primary benefit of AI will be to make law faster is also misguided. The machine may write the text, but humans will still propose, debate, and vote on the legislation. Drafting is rarely the bottleneck in passing new law. What takes much longer is for humans to amend, horse-trade, and ultimately come to agreement on the content of that legislation—even when that politicking is happening among a small group of monarchic elites.

Rather than expeditiousness, the more important capability offered by AI is sophistication. AI has the potential to make law more complex, tailoring it to a multitude of different scenarios. The combination of AI’s research and drafting speed makes it possible for it to outline legislation governing dozens, even thousands, of special cases for each proposed rule.

But here again, this capability of AI opens the door for the powerful to have their way. AI’s capacity to write complex law would allow the humans directing it to dictate their exacting policy preference for every special case. It could even embed those preferences surreptitiously.

Since time immemorial, legislators have carved out legal loopholes to narrowly cater to special interests. AI will be a powerful tool for authoritarians, lobbyists, and other empowered interests to do this at a greater scale. AI can help automatically produce what political scientist Amy McKay has termed “microlegislation“: loopholes that may be imperceptible to human readers on the page—until their impact is realized in the real world.

But AI can be constrained and directed to distribute power rather than concentrate it. For Emirati residents, the most intriguing possibility of the AI plan is the promise to introduce AI “interactive platforms” where the public can provide input to legislation. In experiments across locales as diverse as Kentucky, Massachusetts, France, Scotland, Taiwan, and many others, civil society within democracies is innovating and experimenting with ways to leverage AI to help listen to constituents and construct public policy in a way that best serves diverse stakeholders.

If the UAE is going to build an AI-native government, it should do so for the purpose of empowering people and not machines. AI has real potential to improve deliberation and pluralism in policymaking, and Emirati residents should hold their government accountable to delivering on this promise.

Posted on May 15, 2025 at 7:00 AM

Comments

K.S. May 15, 2025 8:51 AM

The UAE, by using AI to write legislation, is opening a novel attack surface against its own infrastructure (laws). Unless they develop and train their own AI on their own hardware, they are just asking for foreign actors to interfere.

Clive Robinson May 15, 2025 10:47 AM

@ ALL,

The question “Can this be done?” very much depends on what “this” covers.

Certain types of legislation are “formulaic” and start from one or two basic premises. This can be seen in already-passed legislation, so the “language based” equivalent of “next likely word” will be able to duplicate it, as well as point out non-formulaic or unique formulations in current legislation and regulation.

Anything beyond that is really beyond current LLM & ML systems, so will require “human input”.

Because this is where the notions of

1, Context
2, Security
3, Environment

will be top of the list of “err umms”.

LLMs and ML systems have no concept of “Environment”, so they can not grasp “Context” in the way humans do.

Oh, and as for “Security”: @K.S. above is indicating just one small aspect of the security issue. Devastating as that could be, there is a lot more that could be as bad if not worse.

Consider another angle those in certain Nations have in recent times become aware of.

The probability that issues will be detected and brought out into the open is very roughly proportional to the number of people involved who are not “invested” in the meme/cognitive bias of those with a self-entitled agenda, an agenda that is made moral by a few who should not be given such abilities, and then made diktat by those with the power to do so. The use of current AI in effect removes all the “non-invested individuals” who might otherwise be an impediment to the self-entitled plans to, at the very least, “rights strip society”.

Bernie Birmbaum May 15, 2025 10:53 AM

The word “should” in the last paragraph is straining awfully hard, considering that autocratic governments have absolutely no incentive in any way to diminish their power. Which is to say that the thoughts expressed are an exercise in wishful thinking that is not grounded in reality.

Might as well have ended the essay with “… and I want a pony.”

Tony May 15, 2025 12:20 PM

Presumably AI can also be used by the political opposition in a country to examine the text of proposed laws and highlight the loopholes.

mark May 15, 2025 12:56 PM

This is insane. What next, replace legislators with AI?

What would make sense is to use the AI to generate scenarios that would result from the law being put into place. Including how many ways the law could go wrong…

Bob Paddock May 15, 2025 12:57 PM

Could someone with better AI foo than I get the so-called “Intelligence” to analyze the contents of the Cornell US Code Law Library?

All I get is a description of the library, i.e. analogs to the building, the chairs, the mahogany table, etc. Nothing about the actual laws of any value.

‘https://www.law.cornell.edu/uscode/text

Clive Robinson May 15, 2025 7:16 PM

@ ALL,

Some think I’m too hard on the AI Boosting Corps and their various paid-for mouthpieces, including those in Government (see my previous negative comments about an ex-UK-PM and their incompetent grandstanding at Bletchley).

Apparently, saying the Corps’ business plans are about surveillance, based on a plan of,

“Bedazzle, Beguile, Bewitch, Befriend, and BETRAY”[1]

Is, according to some, unfair / unkind / un-something… Or, as they say in parts of Asia, “spitting in someone else’s ‘rice bowl'”.

But ask them the acid question,

“What has AI done for you today?”

And you get handwaving at best.

Well someone has asked a different question,

“If AI is so good at coding…
where are the open source contributions?”

And the answer, it appears, is limp err-umms at best. Not even at “the dog ate my homework” level of believability,

https://pivot-to-ai.com/2025/05/13/if-ai-is-so-good-at-coding-where-are-the-open-source-contributions/

[1] From what I can remember the first time I posted “their business plan” on this blog was just a year ago,

https://www.schneier.com/blog/archives/2024/06/online-privacy-and-overfishing.html

When things were, to put it politely, a bit frenetic, and even I was accused of being an AI…

Clive Robinson May 15, 2025 8:33 PM

@ ALL,

Related to AI security or lack there of, and hot off the press today is the question,

Is a stochastic parrot still more reality-based than Elon Musk?

As some of you might have heard, Grok had a brain-fart day and pushed out far-right conspiracy theories yet again, but this time as an answer to nearly all queries made to Grok[1]. Then, when complaints were made publicly, the behaviour stopped as suddenly as it started…

So obviously so that it’s near impossible to conclude it was not ham-fistedly pushed by someone “up the X tree” with “hand of god” abilities…

So who could it be?

Some think it’s Hellon Rusk having a throw the toys out of the pram day again…

And some are being a little more polite about it,

https://pivot-to-ai.com/2025/05/15/even-elon-musk-cant-make-grok-claim-a-white-genocide-in-south-africa/

They then go into the technical, political, and social issues of what is in effect propaganda pushing by “bias inducing” actions.

Clearly demonstrating what harms a “top of the tree” individual with “hand of god” rights over AI systems could do.

Importantly though, this time it was done so ham-fistedly it was “obvious to all” who cared to think about it rationally. But what of next time when it’s done with a little more care?

We’ve seen AI bias in systems used to rate re-offending, and how difficult it is to stop bias getting into current AI LLM and ML systems, along with how easily they can be used to implement extraordinarily harmful agendas “at arm’s length”, hidden behind “commercial confidentiality”.

Now ask the question of,

“How unbiased would a current AI LLM and ML system used to generate legislation be in the hands of a de facto autocrat?”

The answer to many would be “not good”, and depends on your views with regard to,

“Individual Rights v. Social Responsibilities”

[1] See some of the gory details of Grok’s “chain saw, right arm in the air salute” idiocy,

https://techcrunch.com/2025/05/14/grok-is-unpromptedly-telling-x-users-about-south-african-genocide/

Agammamon May 15, 2025 10:01 PM

‘Skeptics fear that the AI model will make up facts or fundamentally fail to understand societal tenets such as fair treatment and justice when influencing law.’

This is Dubai we’re talking about. Homosexuality is a capital crime. There is no freedom of speech.

They don’t care about fair treatment nor do they subscribe to our idea of justice to start with.

Clive Robinson May 15, 2025 11:32 PM

@ Agammamon, ALL,

With regards,

“They don’t care about…”

When I was young, oh, about a lifetime ago 😉 a common phrase used was,

“It only takes one bad apple to spoil the whole barrel.”

We’ve seen in more recent times bad legislation in the UK spread to Australia, where they made it worse, and since then several other countries and federations of states have taken these ideas on board for legislation as well…

Thus the rot is spreading around this little green apple we live on.

The way to stop rot happening and ruining everything, is not to let it start.

Most of the world does not yet have the ability to set up their own LLM and ML systems, nor do they have the resources to build and run such systems.

So we are at a pivotal point in time: the technology will become more capable and less resource-intensive, and thus more widely available.

When that happens the opportunities we have now will have gone and with it any potential to limit harms.

Thus we should figuratively take the opportunity to “pick, wash, and wax our apples” before filling the barrel.

As most reading here will know, ideas can be declared “weapons” and strictly regulated by international law. If it can be done for cryptography and its products, then why should we not do the same for current and future AI?

One way would be by treaties that have to be not just signed but taken into national / state legislation in return for access with oversight to the technology.

Not foolproof by any means but enough of a deterrent to tip the balance.

You could put forward Cicero’s maxim of,

“Inter arma enim silent leges”

But the solution to that has always been to prevent “in times of war” from happening. That is, for states to stand together against aggressors so that there is no advantage to starting a war.

A big part of that is “trade” so yes it might sound like a circular argument, but think more of a spiral going up, lifting like a rising tide.

Peter A. May 16, 2025 6:29 AM

There are already too many laws on the books, and the output rate of legislating bodies is overwhelming the comprehension abilities of the best legal counsels. Trying to speed it up in any way, AI or no AI, is purely insane. We need to slow down legislating, and speed up de-legislating, that is, striking laws out.

Of course authoritarian, or just authoritarian-leaning, governments would love to eliminate or marginalize the role of elected representatives. It has already happened in many countries and in many respects. Members of various chambers have in a sense already been turned into button-pressing monkeys, regardless of the best efforts of at least some of them. They are at times physically unable not only to comprehend the drafts they vote on, but even to read them in full. In fact, they vote on the title of the act, or on a few-page synopsis, or on a few minutes of verbal agitation by the proponent, not on its contents and in no way on all possible consequences thereof.

lurker May 16, 2025 2:15 PM

UAE? I thought all their laws were written 1300 years ago, done and dusted …

Bauke Jan Douma May 16, 2025 9:56 PM

But the solution to that has always been to prevent “in times of war” happening. That is for states to stand together against aggressors so that it’s of no advantage to start war.

That ship sailed when the UK, i.e. Tony Blair (Labour!), went ahead with its criminal war of aggression that destroyed Iraq, killing hundreds of thousands and causing millions to flee.

Felix May 17, 2025 12:54 PM

I don’t follow the argument. If drafting isn’t a bottleneck for legislation, and AI doesn’t make legislation faster, how does AI result in more “sophistication” or “tailoring”?

It covers more scenarios than lawmakers think of? Even if AI were capable of being more creative and realistic in coming up with scenarios than the users, this claim is paradoxical. Either (1) the AI faithfully applies a general principle that the legislator articulated to specific scenarios, in which case it’s redundant with legislation that simply states the general principle, or (2) the AI poorly or erroneously applies the general principle, in which case you just get loads of slop. Neither case results in more sophistication.

Clive Robinson May 17, 2025 2:51 PM

@ Agammamon, ALL,

A little more on the propagation of “bad law” for surveilling and privacy invasion.

It looks like supposedly “Neutral Switzerland” is going in the opposite direction to the EU[1].

https://www.techradar.com/vpn/vpn-privacy-security/we-would-be-less-confidential-than-google-proton-threatens-to-quit-switzerland-over-new-surveillance-law

It’s a bad direction to go in and Switzerland will in effect become the “legal ‘backdoor’ of Europe” due to the treaties it has with the EU and Member States on law.

[1] Although Switzerland is in the heart of continental Europe geographically, it’s not really “in Europe” politically, or in some areas economically. That is, it can be seen as part of the “trade union”, but not the “political union”. Thus it has a degree of freedom over which laws it takes or not from the EU Council and the other “member states”. It’s too complicated for a simple one-paragraph explanation, so,

https://en.m.wikipedia.org/wiki/Switzerland–European_Union_relations

Gives a few more but by no means all details.

lurker May 18, 2025 3:59 AM

@Clive Robinson
re: Proton threatens to leave Switzerland

and the article repeats several times “leave Switzerland”, but go where? eSwatini? The moon’s about the only place left without strong fingers on its collar, and that might not be for much longer …

ResearcherZero May 21, 2025 2:33 AM

@Clive Robinson

Unaccountability Machines?

Do you mean to say that people like Elon Musk would make things up or blindly parrot a blind parrot without bothering to check the output of said blind parrot?

It does appear that the individuals in question have functioning eyes mounted in their faces. This may point to the former rather than the latter: that they knowingly mislead.

Another lawyer claims they failed to confirm legal material prepared using AI, after a judge indicated that the cases and evidence they submitted do not in fact exist at all.

‘https://www.ndtv.com/world-news/canadian-lawyer-uses-ai-to-draft-fake-cases-faces-contempt-8394213

Mike Lindell’s attorney among those presenting incorrect and fictional citations to court.
https://mashable.com/article/mypillow-lawsuit-ai-lawyer-filing

piglet June 4, 2025 3:39 AM

@Clive: Interestingly, the new surveillance rules in question are proposed as part of a regulatory change that wouldn’t require parliamentary approval. And the minister in charge of communications (Albert Rösti) is a pro-Russian right wing extremist. But he would need a majority of the Federal Council to agree, which seems unlikely if there is significant pushback from mainstream parties.

It will be very important to watch how this turns out.

piglet June 4, 2025 3:49 AM

“The first instance of enacted law known to have been written by AI was passed in Porto Alegre, Brazil, in 2023. It was a local ordinance about water meter replacement. Council member Ramiro Rosário was simply looking for help in generating and articulating ideas for solving a policy problem, and ChatGPT did well enough that the bill passed unanimously.”

It’s totally unclear what this means. Did he ask the chatbot how to solve the problem (seems unlikely), or did the chatbot help with structuring the text?

“AI’s speed is potentially a good match to this kind of large-scale editorial project; the state’s then-lieutenant governor, Jon Husted, claims it was successful in eliminating 2.2 million words’ worth of unnecessary regulation from Ohio’s code.”

Great, a corrupt politician claims something that seems absurd on its face and nobody verifies the claim. Maybe Axios’ reporting was done by a chatbot?

“Now a U.S. senator, Husted has recently proposed to take the same approach to U.S. federal law, with an ideological bent promoting AI as a tool for systematic deregulation.”

We actually know that Elon Musk’s unconstitutional band of incompetent government wreckers used AI tools and look how well that worked. The problem here isn’t “AI” per se but the corrupt intention behind its use, but it’s also glaringly obvious that “AI” isn’t capable of the miracles gullible people (including apparently Bruce) think it is.

piglet June 4, 2025 3:59 AM

Also agree with Felix above but there’s also this:

“As proposed, AI would simply give the UAE’s hereditary rulers new tools to express, enact, and enforce their preferred policies.”

Sure, that’s probably true, in the same sense that any other tool in the hands of the rulers will be used to enforce their preferences (what else?). But this is also totally redundant in an absolute monarchy where the ruler’s power is not really restricted by any legal limits. All of this reeks of mainly a PR stunt to draw the attention of gullible plutocrats who like the idea of AI-generated law.

Clive Robinson June 4, 2025 6:42 AM

@ piglet,

With regards,

“The problem here isn’t “AI” per se but the corrupt intention behind its use, but it’s also glaringly obvious that “AI” isn’t capable of the miracles”

You and I appear to coincide with our thinking on this.

For a while now I’ve been warning that the most likely use of current AI LLM and ML systems is to put bias in, to justify political mantra “at arm’s length”.

A case of think what it is you want then get the AI to make plausible sounding arguments for it.

But it also gives a new variant of the “The Computer says NO” excuse, which, believe it or not, went all the way up the British court system. It was basically British Gas harassing a customer repeatedly, even though she had no supply and they knew it. British Gas tried to wriggle out of it by saying it was a computer error and that staff were “just following procedure / orders”.

One of the Justices importantly pointed out that British Gas’s system was a “creation of man”, thus the creators, i.e. British Gas management, were liable for any and all defects, because they had failed to act on a clearly recognised failing…

Unfortunately if you look up RoboDebt you will see just how malicious and harmful this “arms lengthening” “political mantra” can be.

As for AI, AGI, etc. and miracles: no, the current AI LLM and ML systems are incapable, beyond any reasonable doubt.

Because they can be shown to be little more than a database and statistical engine. At best they are a form of John Searle’s “Chinese Room argument”, thought up in the 1970s and formally presented in a paper in 1980,

https://plato.stanford.edu/entries/chinese-room/

All too often, raw LLMs are less performant than a 1980s spell checker.

In normal use, because English is full of redundancies at many semantic levels, they can add or remove redundant language, to “puff up” or “abstract / summarize down”.

When “puffing up” they can give the text a “style”, which, if not specified in the prompt directions, tends towards a “marketing speak” style.

As a tool, current AI LLMs do have functions that can help reduce some workloads. But the propensity to “Soft Bullshit / Hallucinate” means that,

“What you gain on the swings, you lose on the roundabout.”

Having to “check and double check” the outputs can actually be more of a time sink than any potential time saving.

Where they can make significant time savings is in “testing”; in this respect they are a little like “fuzzing”. Provided you have a well-defined test / filter function, they can churn away quite rapidly to find,

“Non obvious but easy solutions”

As they can spot patterns humans who work in the field would not think of.

But it can go laughably wrong. Someone wanted to search for whales in photographs, so they curated a bunch of images that had whales in them as a training set. Unfortunately the AI started churning out pictures of ships as well, and sometimes just odd-shaped waves. It turns out that all the training-data images of whales also had clouds in them as well as a whale. So the AI recognised clouds as the primary pattern and “some object” as the secondary pattern.

So time savings / increases in productivity can be quite illusory.

Not something multi-billions of investment dollars wants to hear…

