New Mexico’s Meta Ruling and Encryption

Mike Masnick points out that the recent New Mexico court ruling against Meta has some bad implications for end-to-end encryption, and security in general:

If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.

But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors: choices made by people, not by the platform’s design.

The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?

And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.

In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.

The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.

The essay has a lot more: about Section 230, about competition in this space, about the myopic nature of the ruling. Go read it.

Posted on April 6, 2026 at 3:09 PM

Comments

Eniac April 7, 2026 4:49 AM

@SocraticGadfly

Publishers with greater liability self-censor more. 230 was actually good for speech. Too bad our skins are so thin that we prefer safety and not being offended to street smarts and liberty. Could we continue to comment and discuss here without 230? Is that a risk our host would take?

Clive Robinson April 7, 2026 7:30 AM

@ Bruce, ALL,

With regards,

“One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger.”

This is a “nonsense” that was always going to happen, which I’ve mentioned several times in the past.

It’s predicated on the difference between being a “publisher” and a “common carrier”.

The former carries a “duty of care”; the latter instead has immunity.

What started the “rot” was the carriage of newspapers by reduced-rate mail. Obviously the “carrier” (the postal service) wanted full immunity for what it carried and delivered to an end customer. Others, however, on the principle of

“Who has the deepest pockets and least ability to defend them gets attacked first”,

claimed that the carriers were “publishers” under the “agent’s principle”.

It never got satisfactorily resolved so it’s unsurprising it keeps coming up.

I would have a small wager that neither the State nor the US Supreme Court is going to get involved in this one, as it’s “a grenade hurled on the floor” and nobody wants to be close to that…

Everyone just wants the mess to go away, especially the current Executive.

Clive Robinson April 7, 2026 9:01 AM

@ Bruce,

With regards,

“… about Section 230, about competition in this space, about the myopic nature of the ruling.”

It needs to be said that Section 230 is,

“The last man standing”

Out of a quite substantial chunk of legislation, and the fact that it will now probably die along with the rest of it is, well…

Yes it served a purpose and to a certain extent certainly still does.

But idiot lawyers wanting to “make a name” or something else ludicrously similar are going to cull it eventually.

But the simple fact is E2EE is now dead and almost forgotten, replaced with “Compulsory Client Side Scanning” at the OS layer or lower on consumer and commercial devices. The spy is embedded worse than a leech.

Thus the old battle is in effect over and they won it simply because technology has moved on…

Thus we should re-group and fight again in a different way.

It’s why I’ve been saying for quite some time that we need to get the “security end point” off of the device that has the “communications end point” to prevent such “compulsory client side scanning”.

But those intent on surveilling en masse know that for at least 9 out of 10 people “convenience always trumps security”…

The reason this is such a danger is that at some point everyone will reveal information that they should not, and by then it will be “too late” to get it back…

Thus the danger is not the present but the future.
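The idea of moving the “security end point” off the communications device can be sketched in code. Below is a toy illustration (not real cryptography, and not any particular product): the plaintext is encrypted on a separate device that never touches the network, so any scanner embedded in the phone or OS only ever sees ciphertext. The one-time-pad XOR used here is purely for demonstration; a real design would use an authenticated cipher and a proper key exchange.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with a random key of equal length.
    # The key must be truly random, kept secret, and never reused.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

# --- sender's offline device (holds the key, never on the network) ---
message = b"meet at noon"
key = secrets.token_bytes(len(message))
ciphertext = otp_encrypt(message, key)

# --- phone / untrusted channel (the "communications end point") ---
# Any client-side scanner here sees only random-looking bytes.
transmitted = ciphertext

# --- recipient's offline device (key shared out of band) ---
recovered = otp_decrypt(transmitted, key)
assert recovered == message
```

The point is architectural, not cryptographic: because encryption and decryption happen on devices the scanning software cannot reach, scanning the communications device reveals nothing about the plaintext.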

Rontea April 7, 2026 9:47 AM

From a security and privacy perspective, the incentives are grim. Smaller platforms, which often push innovation, won’t have the resources to endure protracted legal battles. Instead of designing better features or experimenting with new models, they’ll prioritize defensive lawyering and risk avoidance. Worse, attempts to mitigate liability may push platforms toward more intrusive monitoring of user behavior, further eroding privacy.

Algo Free April 7, 2026 3:02 PM

I think NM attorney general’s case against Meta’s “design choices create liability” is better argued, not from an “encryption decision” standpoint, but from a “proprietary algorithm choices” standpoint.

Social media platforms become “publishers” and are thus liable for the content they serve, when they deploy their own algorithms into the system that promote content based on marketing profiles.

I’m sure Meta (and X, etc) wants people to think that they can’t be held liable for content that users upload. And to the extent that they don’t edit or “censor” user-created content that may be true.

But as soon as they put their thumb on the scale by promoting (or de-platforming) content based on a user’s perceived (and tracked and a/b tested for marketability) psychology or demographics, they’ve introduced liability for harms caused by such algorithms.

As a counter-example, an algorithm-free social media platform like the Fediverse (Mastodon) isn’t doing anything to track or manipulate its users, so cannot be held liable for content. Users are free to upload what they want and “bad” content is easily blocked by users and instance moderators.

Since nothing is promoted and users are not tracked or otherwise profiled for marketing purposes, the platform incurs no “harms” or liability for user content.

Seems to me this distinction would address the problem more directly.

lurker April 7, 2026 6:21 PM

@ Algo Free
re Fediverse “But nothing is promoted and users are not tracked … ”

“Why? What’s the percentage in that?” [Stan Freberg Green Christmas 1957]

Seventy years on and Madison Ave still rules the airwaves/websites. Zuckerberg et al are interested only in getting eyeballs in front of advertising, and if hands go into pockets to buy the advertised product those eyeballs become worth more. They’re not interested in any intellectual activity that might happen behind those eyeballs.

Mr Z’s first product was a titillation meter [Facemash: Hot or Not?]. I’m not aware he actually monetised it, but at the time he was interrupted by more pressing legal problems.

David Dyer-Bennet April 9, 2026 12:33 AM

Making any tool better makes it work better for bad users / evil people. Sadly, that probably doesn’t mean that the line of argument used to reach the New Mexico verdict won’t persist, at least for a while.
