LLMs and Text-in-Text Steganography
Turns out that LLMs are really good at hiding text messages in other text messages.
Derek Jones • May 11, 2026 8:48 AM
One of my attempts to shroud human-detectable meaning from LLMs was to make phonological changes to words. I expected word tokenization to make it difficult for LLMs to decode sentences such as the following:
“phashyon es cycklyq. chuyldren donth wanth tew weywr chloths vat there pairent weywr. pwroggwrammyng languij phashyon hash phricksionz vat inycially inqloob impleementaision suppoort, lybrareyz (whych sloa doun adopsion, ant wunsh establysht jobz ol avaylable too suppourt ecksysting kowd (slowyng doun va demighz ov a langguij).”
In practice, even small 4-billion-parameter models handle these changes with ease.
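The kind of transformation described above can be sketched as a small set of ordered substitution rules. The rules below are hypothetical illustrations chosen to mimic the quoted examples ("fashion" → "phashion"); the commenter's actual mapping is not given:

```python
import re

# Hypothetical phonological substitution rules, in the spirit of the
# quoted text above. These two rules are illustrative assumptions, not
# the commenter's actual mapping.
RULES = [
    (r"f", "ph"),         # fashion -> phashion
    (r"tion\b", "sion"),  # friction -> phricsion (after the f rule runs)
]

def obfuscate(text: str) -> str:
    """Apply each substitution rule, in order, to the whole string."""
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text)
    return text

print(obfuscate("fashion is cyclic"))  # phashion is cyclic
```

One plausible reason this fails as an LLM-evasion tactic: subword tokenizers simply split the unfamiliar spellings into smaller familiar fragments, so the distorted words remain recoverable from context rather than becoming opaque.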
Privacy • May 11, 2026 8:07 AM
To hide text, try white text on a white background. The human eye won’t see it, but the computer will. If you want to test this on your own machine rather than in the wild, try reformatting the hard drive from the command line.