Wisconsin Governor Hacks the Veto Process

In my latest book, A Hacker’s Mind, I wrote about hacks as loophole exploiting. This is a great example: The Wisconsin governor used his line-item veto powers—supposedly unique in their specificity—to change a one-year funding increase into a 400-year funding increase.

He took this wording:

Section 402. 121.905 (3) (c) 9. of the statutes is created to read: 121.903 (3) (c) 9. For the limit for the 2023-24 school year and the 2024-25 school year, add $325 to the result under par. (b).

And he deleted these words, numbers, and punctuation marks (shown in brackets here):

Section 402. 121.905 (3) (c) 9. of the statutes is created to read: 121.903 (3) (c) 9. For the limit for the 2023-[24 school year and the 20]24[-]25 school year, add $325 to the result under par. (b).

What survives reads “For the limit for the 2023-2425 school year,” extending the increase through the year 2425.

Seems to be legal:

Rick Champagne, director and general counsel of the nonpartisan Legislative Reference Bureau, said Evers’ 400-year veto is lawful in terms of its form because the governor vetoed words and digits.

“Both are allowable under the constitution and court decisions on partial veto. The hyphen seems to be new, but the courts have allowed partial veto of punctuation,” Champagne said.

Definitely a hack. This is not what anyone thinks about when they imagine using a line-item veto.

And it’s not the first time. I don’t know the details, but this was certainly the same sort of character-by-character editing:

Mr Evers’ Republican predecessor once deploying it to extend a state programme’s deadline by one thousand years.

A couple of other things:

One, this isn’t really a 400-year change. Yes, that’s what the law says. But it can be repealed. And who knows what a dollar will be worth—or whether dollars will even be in use—that many decades from now.

And two, from now on, all Wisconsin lawmakers will have to be on the alert for this sort of thing. All contentious bills will be examined for the possibility of this sort of delete-only rewriting. This sentence could have been reworded, for example:

For the 2023-2025 school years, add $325 to the result under par. (b).

The problem is, of course, that legalese developed over the centuries to be extra wordy in order to limit disputes. If lawmakers need to state things in minimally viable language, that will increase court battles later. And even that isn’t enough. Bills can be thousands of words long. If arbitrary characters can be glued together by deleting enough other characters, bills can say anything the governor wants.
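As a concrete illustration of that worry: a sentence can be manufactured by deletions alone exactly when it is a subsequence of the bill’s text. Here is a minimal sketch in Python; the function name and the reconstructed post-veto sentence are mine, for illustration only.

```python
def reachable_by_deletion(bill: str, target: str) -> bool:
    """Return True if `target` can be produced from `bill` purely by deleting
    characters, i.e., if `target` is a subsequence of `bill`."""
    chars = iter(bill)
    return all(ch in chars for ch in target)

original = ("For the limit for the 2023-24 school year and the 2024-25 "
            "school year, add $325 to the result under par. (b).")
vetoed = ("For the limit for the 2023-2425 school year, "
          "add $325 to the result under par. (b).")

print(reachable_by_deletion(original, vetoed))                    # True: deletions suffice
print(reachable_by_deletion(original, "add $999 to the result"))  # False: no stray 9s to glue together
```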

The real solution is to return the line-item veto to what we all think it is: the ability to remove individual whole provisions from a law before signing it.

Posted on July 10, 2023 at 7:24 AM

Friday Squid Blogging: Giant Squid Nebula

Pretty:

A mysterious squid-like cosmic cloud, this nebula is very faint, but also very large in planet Earth’s sky. In the image, composed with 30 hours of narrowband image data, it spans nearly three full moons toward the royal constellation Cepheus. Discovered in 2011 by French astro-imager Nicolas Outters, the Squid Nebula’s bipolar shape is distinguished here by the telltale blue-green emission from doubly ionized oxygen atoms. Though apparently surrounded by the reddish hydrogen emission region Sh2-129, the true distance and nature of the Squid Nebula have been difficult to determine. Still, a more recent investigation suggests Ou4 really does lie within Sh2-129 some 2,300 light-years away. Consistent with that scenario, the cosmic squid would represent a spectacular outflow of material driven by a triple system of hot, massive stars, cataloged as HR8119, seen near the center of the nebula. If so, this truly giant squid nebula would physically be over 50 light-years across.
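A quick back-of-the-envelope check of that size, assuming a full moon spans about half a degree of sky and taking the quoted distance of 2,300 light-years:

```python
import math

# Small-angle estimate of the nebula's physical extent.
# Assumption: a full moon spans roughly 0.5 degrees; the quote gives
# "nearly three full moons" and a distance of about 2,300 light-years.
distance_ly = 2300
angular_size_deg = 3 * 0.5
extent_ly = distance_ly * math.radians(angular_size_deg)
print(f"approximate extent: {extent_ly:.0f} light-years")  # about 60, consistent with "over 50"
```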

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on July 7, 2023 at 5:08 PM

The AI Dividend

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds.

You are owed profits for your data that powers today’s AI, and we have a way to make that happen. We call it the AI Dividend.

Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech companies produce output from generative AI that was trained on public data, they would pay a tiny licensing fee, by the word or pixel or relevant unit of data. Those fees would go into the AI Dividend fund. Every few months, the Commerce Department would send out the entirety of the fund, split equally, to every resident nationwide. That’s it.

There’s no reason to complicate it further. Generative AI needs a wide variety of data, which means all of us are valuable—not just those of us who write professionally, or prolifically, or well. Figuring out who contributed to which words the AIs output would be both challenging and invasive, given that even the companies themselves don’t quite know how their models work. Paying the dividend to people in proportion to the words or images they create would just incentivize them to create endless drivel, or worse, use AI to create that drivel. The bottom line for Big Tech is that if their AI model was created using public data, they have to pay into the fund. If you’re an American, you get paid from the fund.

Under this plan, hobbyists and American small businesses would be exempt from fees. Only Big Tech companies—those with substantial revenue—would be required to pay into the fund. And they would pay at the point of generative AI output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party services via Application Programming Interfaces.

Our proposal also includes a compulsory licensing plan. By agreeing to pay into this fund, AI companies will receive a license that allows them to use public data when training their AI. This won’t supersede normal copyright law, of course. If a model starts producing copyrighted material beyond fair use, that’s a separate issue.

Using today’s numbers, here’s what it would look like. The licensing fee could be small, starting at $0.001 per word generated by AI. A similar type of fee would be applied to other categories of generative AI outputs, such as images. That’s not a lot, but it adds up. Since most of Big Tech has started integrating generative AI into products, these fees would mean an annual dividend payment of a couple hundred dollars per person.
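Here is a rough sketch of that arithmetic. The fee rate is the proposal’s; the annual volume of AI-generated output and the US population are illustrative assumptions, not figures from the essay.

```python
# Back-of-the-envelope dividend arithmetic under stated assumptions.
fee_per_word = 0.001        # dollars per AI-generated word (the proposed rate)
words_per_year = 70e12      # assumed annual AI-generated words across Big Tech (illustrative)
us_population = 335e6       # approximate number of US residents

fund = fee_per_word * words_per_year     # total fees collected per year
dividend = fund / us_population          # split equally among residents
print(f"annual fund: ${fund / 1e9:.0f} billion; dividend per person: ${dividend:.0f}")
```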

The idea of paying you for your data isn’t new, and some companies have tried to do it themselves for users who opted in. And the idea of the public being repaid for use of their resources goes back to well before Alaska’s oil fund. But generative AI is different: It uses data from all of us whether we like it or not, it’s ubiquitous, and it’s potentially immensely valuable. It would cost Big Tech companies a fortune to create a synthetic equivalent to our data from scratch, and synthetic data would almost certainly result in worse output. They can’t create good AI without us.

Our plan would apply to generative AI used in the US. It also only issues a dividend to Americans. Other countries can create their own versions, applying a similar fee to AI used within their borders. Just as an American company collects VAT for services sold in Europe but not in the US, each country can independently manage its own AI policy.

Don’t get us wrong; this isn’t an attempt to strangle this nascent technology. Generative AI has interesting, valuable, and possibly transformative uses, and this policy is aligned with that future. Even with the fees of the AI Dividend, generative AI will be cheap and will only get cheaper as technology improves. There are also risks—both everyday and esoteric—posed by AI, and the government may need to develop policies to remedy any harms that arise.

Our plan can’t make sure there are no downsides to the development of AI, but it would ensure that all Americans will share in the upsides—particularly since this new technology isn’t possible without our contribution.

This essay was written with Barath Raghavan, and previously appeared on Politico.com.

Posted on July 7, 2023 at 7:11 AM

Belgian Tax Hack

Here’s a fascinating tax hack from Belgium (listen to the details here, episode #484 of “No Such Thing as a Fish,” at 28:00).

Basically, it’s about a music festival on the border between Belgium and Holland. The stage was in Holland, but the crowd was in Belgium. When the copyright collector came around, they argued that they didn’t have to pay any tax because the audience was in a different country. Supposedly it worked.

Posted on July 6, 2023 at 7:03 AM

Class-Action Lawsuit for Scraping Data without Permission

I have mixed feelings about this class-action lawsuit against OpenAI and Microsoft, claiming that it “scraped 300 billion words from the internet” without either registering as a data broker or obtaining consent. On the one hand, I want this to be a protected fair use of public data. On the other hand, I want us all to be compensated for our uniquely human ability to generate language.

There’s an interesting wrinkle here. A recent paper showed that using AI-generated text to train another AI invariably “causes irreversible defects.” From a summary:

The tails of the original content distribution disappear. Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions. We call this effect model collapse.

Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data.

This is the same idea that Ted Chiang wrote about: that ChatGPT is a “blurry JPEG of all the text on the Web.” But the paper includes the math that proves the claim.
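The quoted effect is easy to see in a toy setting. The sketch below is not the paper’s math, just an illustration: each generation fits a Gaussian to a finite sample drawn from the previous generation’s fit, and the estimated distribution steadily narrows toward a spike while its mean drifts.

```python
import numpy as np

# Toy "model collapse": each generation learns a Gaussian from n samples
# produced by the previous generation's model. Finite-sample estimation
# error compounds, so the fitted variance shrinks toward zero over time.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0      # generation 0: the original human-generated data
n = 100                   # samples available to each generation

for generation in range(1, 2001):
    samples = rng.normal(mu, sigma, n)          # data produced by the last model
    mu, sigma = samples.mean(), samples.std()   # the next model is fit to that data
    if generation % 500 == 0:
        print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.2e}")
```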

What this means is that text from before last year—text that is known to be human-generated—will become increasingly valuable.

Posted on July 5, 2023 at 7:14 AM

Self-Driving Cars Are Surveillance Cameras on Wheels

Police are already using self-driving car footage as video evidence:

While security cameras are commonplace in American cities, self-driving cars represent a new level of access for law enforcement — and a new method for encroachment on privacy, advocates say. Crisscrossing the city on their routes, self-driving cars capture a wider swath of footage. And it’s easier for law enforcement to turn to one company with a large repository of videos and a dedicated response team than to reach out to all the businesses in a neighborhood with security systems.

“We’ve known for a long time that they are essentially surveillance cameras on wheels,” said Chris Gilliard, a fellow at the Social Science Research Council. “We’re supposed to be able to go about our business in our day-to-day lives without being surveilled unless we are suspected of a crime, and each little bit of this technology strips away that ability.”

[…]

While self-driving services like Waymo and Cruise have yet to achieve the same level of market penetration as Ring, the wide range of video they capture while completing their routes presents other opportunities. In addition to the San Francisco homicide, Bloomberg’s review of court documents shows police have sought footage from Waymo and Cruise to help solve hit-and-runs, burglaries, aggravated assaults, a fatal collision and an attempted kidnapping.

In all cases reviewed by Bloomberg, court records show that police collected footage from Cruise and Waymo shortly after obtaining a warrant. In several cases, Bloomberg could not determine whether the recordings had been used in the resulting prosecutions; in a few of the cases, law enforcement and attorneys said the footage had not played a part, or was only a formality. However, video evidence has become a lynchpin of criminal cases, meaning it’s likely only a matter of time.

Posted on July 3, 2023 at 7:04 AM

The US Is Spying on the UN Secretary General

The Washington Post is reporting that the US is spying on the UN Secretary General.

The reports on Guterres appear to contain the secretary general’s personal conversations with aides regarding diplomatic encounters. They indicate that the United States relied on spying powers granted under the Foreign Intelligence Surveillance Act (FISA) to gather the intercepts.

The article has lots of details about different conversations, all based on the classified documents leaked on Discord by Jack Teixeira.

There will probably be a lot of faux outrage at this, but spying on foreign leaders is a perfectly legitimate use of the NSA’s capabilities and authorities. (If the NSA didn’t spy on the UN Secretary General, we should fire it and replace it with a more competent NSA.) It’s the bulk surveillance of whole populations that should outrage us.

Posted on June 30, 2023 at 7:02 AM

Redacting Documents with a Black Sharpie Doesn’t Work

We have learned this lesson again:

As part of the FTC v. Microsoft hearing, Sony supplied a document from PlayStation chief Jim Ryan that includes redacted details on the margins Sony shares with publishers, its Call of Duty revenues, and even the cost of developing some of its games.

It looks like someone redacted the documents with a black Sharpie — but when you scan them in, it’s easy to see some of the redactions. Oops.

I don’t particularly care about the redacted information, but it’s there in the article.

Posted on June 29, 2023 at 10:37 AM
