Entries Tagged "steganography"
Read this essay by Randy Farmer, a pioneer of virtual online worlds, explaining something called Disney’s ToonTown.
Designers of online worlds for children wanted to severely restrict the communication that users could have with each other, lest somebody say something that’s inappropriate for children to hear.
Randy discusses various approaches to this problem that were tried over the years. The ToonTown solution was to restrict users to something called “Speedchat,” a menu of pre-constructed sentences, all innocuous. They also gave users the ability to conduct unrestricted conversations with each other, provided they both knew a secret code string. The designers presumed the code strings would be passed only to people a user knew in real life, perhaps on a school playground or among neighbors.
Users found ways to pass code strings to strangers anyway. This page describes several protocols, using gestures, canned sentences, or movement of objects in the game.
After you read the ways above to make secret friends, look here. Another way to make secret friends with toons you don’t know is to form letters/numbers with the picture frames in your house. You may see toons around who have a lot of picture frames at their toon estates; they are usually looking for secret friends. This is how to do it! So, let’s say you wanted to make secret friends with a toon named Lily. Your “pretend” secret friend code is 4yt 56s.
- You: *Move frames around in house to form a 4.* “Okay.”
- Her: “Okay.” She has now written the first character down on a piece of paper.
- You: *Move frames around to form a y.* “Okay.”
- Her: “Okay.” She has now written the second character down on paper.
- You: *Move frames around in house to form a t.* “Okay.”
- Her: “Okay.” She has now written the third character down on paper.
- You: *Do nothing.* “Okay.” This signals a space.
- Repeat the process.
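The exchange above is a character-at-a-time covert channel built entirely out of allowed moves: the only in-band utterance is the canned “Okay,” and a turn with no frame movement stands for a space. A toy sketch of the sender’s side (the function name and string conventions here are mine, not from the game):

```python
# Toy model of the picture-frame spelling protocol described above.
# In the game the sender physically rearranges frames and both players
# say the canned "Okay." after each character.

def frame_messages(code: str):
    """Yield the sender's action for each character of a secret-friend code."""
    for ch in code:
        if ch == " ":
            yield "do nothing"                 # an empty turn signals a space
        else:
            yield f"arrange frames into '{ch}'"

if __name__ == "__main__":
    for step, action in enumerate(frame_messages("4yt 56s"), 1):
        print(f"step {step}: {action}; both say 'Okay.'")
```

The receiver simply writes down one character per confirmed turn, exactly as Lily does in the transcript.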
Randy writes: “By hook, or by crook, customers will always find a way to connect with each other.”
It’s not cryptography—despite the name—but it’s interesting:
DNA-based watermarks using the DNA-Crypt algorithm
The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information. However, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms.
The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions, and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein.
The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to other steganographic algorithms.
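The “least significant base” idea is directly analogous to least-significant-bit image steganography. Here is a toy illustration, not DNA-Crypt itself: it blindly overwrites the third (wobble) base of each codon with two bits of payload, whereas the real algorithm restricts itself to changes that leave the encoded protein unchanged and adds the mutation-correction codes described above.

```python
# Toy "least significant base" embedding, analogous to LSB image
# steganography. NOT the DNA-Crypt algorithm: a real implementation only
# touches synonymous third-codon positions and adds error correction.

BASES = "ACGT"                               # each base carries 2 bits (00..11)

def embed(dna: str, bits: str) -> str:
    """Overwrite the third base of successive codons with 2 bits each."""
    dna = list(dna)
    positions = range(2, len(dna), 3)        # wobble position of each codon
    for pos, i in zip(positions, range(0, len(bits), 2)):
        dna[pos] = BASES[int(bits[i:i + 2], 2)]
    return "".join(dna)

def extract(dna: str, nbits: int) -> str:
    """Read the payload back out of the wobble positions."""
    out = []
    for pos in range(2, len(dna), 3):
        out.append(format(BASES.index(dna[pos]), "02b"))
    return "".join(out)[:nbits]

seq = "ATGGCTAAACGTTTAGGA"                   # 6 codons -> room for 12 bits
stego = embed(seq, "101100")
assert extract(stego, 6) == "101100"
```

The cover sequence is changed only at positions where the cell has the most redundancy, which is why a mutation there can silently destroy the payload and why the paper pairs the scheme with correction codes.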
This paper introduces JitterBugs, a class of inline interception mechanisms that covertly transmit data by perturbing the timing of input events likely to affect externally observable network traffic. JitterBugs positioned at input devices deep within the trusted environment (e.g., hidden in cables or connectors) can leak sensitive data without compromising the host or its software. In particular, we show a practical Keyboard JitterBug that solves the data exfiltration problem for keystroke loggers by leaking captured passwords through small variations in the precise times at which keyboard events are delivered to the host. Whenever an interactive communication application (such as SSH, Telnet, instant messaging, etc) is running, a receiver monitoring the host’s network traffic can recover the leaked data, even when the session or link is encrypted. Our experiments suggest that simple Keyboard JitterBugs can be a practical technique for capturing and exfiltrating typed secrets under conventional OSes and interactive network applications, even when the receiver is many hops away on the Internet.
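The covert channel works by quantizing inter-keystroke delays. A minimal sketch, assuming (as in the paper) that each delay is nudged so that its value modulo a timing window lands in the lower half for a 0 bit and the upper half for a 1 bit; the window size and delays below are invented for illustration:

```python
# Sketch of a JitterBug-style timing channel: encode one bit per
# keystroke by adding a small delay so that (delay mod W) falls in the
# lower half-window for 0 and the upper half-window for 1.

W = 20.0                                     # timing window in milliseconds

def jitter(delay_ms: float, bit: int) -> float:
    """Add the minimal positive jitter so the delay encodes `bit`."""
    target_lo = 0.0 if bit == 0 else W / 2   # start of the target half-window
    r = delay_ms % W
    shift = (target_lo + W / 4 - r) % W      # aim for the middle of the half
    return delay_ms + shift

def decode(delay_ms: float) -> int:
    """Recover the bit from an observed inter-keystroke delay."""
    return 0 if (delay_ms % W) < W / 2 else 1

delays = [113.0, 87.0, 240.0, 61.0]          # natural typing gaps (made up)
bits = [1, 0, 1, 1]
sent = [jitter(d, b) for d, b in zip(delays, bits)]
assert [decode(d) for d in sent] == bits
```

Because only the residue modulo the window matters, the added jitter stays tiny compared to natural typing variation, which is what lets the channel survive all the way to a remote observer of the (even encrypted) interactive session.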
Seems that some squid can hide messages in their skin:
In the animal world, squid are masters of disguise. Pigmented skin cells enable them to camouflage themselves—almost instantaneously—from predators. Squid also produce polarized skin patterns by regulating the iridescence of their skin, possibly creating a “hidden communication channel” visible only to animals that are sensitive to polarized light.
Mäthger and Hanlon’s findings present the first anatomical evidence for a “hidden communication channel” that can remain masked by typical camouflage patterns. Their results suggest that it might be possible for squid to send concealed polarized signals to one another while staying camouflaged to fish or mammalian predators, most of which do not have polarization vision.
My favorite security stories are from the natural world. Evolution results in some of the most interesting security countermeasures.
Some years ago I did some design work on something I called a Deniable File System. The basic idea was the fact that the existence of ciphertext can in itself be incriminating, regardless of whether or not anyone can decrypt it. I wanted to create a file system that was deniable: where encrypted files looked like random noise, and where it was impossible to prove either the existence or non-existence of encrypted files.
This turns out to be a very hard problem for a whole lot of reasons, and I never pursued the project. But I just discovered a file system that seems to meet all of my design criteria—Rubberhose:
Rubberhose transparently and deniably encrypts disk data, minimising the effectiveness of warrants, coercive interrogations and other compulsive mechanisms, such as U.K. RIP legislation. Rubberhose differs from conventional disk encryption systems in that it has an advanced modular architecture, self-test suite, is more secure, portable, utilises information hiding (steganography / deniable cryptography), works with any file system and has source freely available.
The devil really is in the details with something like this, and I would hesitate to use this in places where it really matters without some extensive review. But I’m pleased to see that someone is working on this problem.
Next request: A deniable file system that fits on a USB token, and leaves no trace on the machine it’s plugged into.
I simply don’t have the science to evaluate this claim:
Since conventional sound waves disperse when traveling through a medium, the possibility of focusing sound waves could have applications in several areas. In cryptography, for example, when sending a secret message, the sender could ensure that only one location would receive the message. Interceptors at other locations would only pick up noise due to unfocused waves. Other potential uses include antisubmarine warfare and underwater communications that benefit from targeted signaling.
Many color laser printers embed secret information in every page they print, essentially to identify the printer that produced the page—and, by extension, you. Here, the EFF has cracked the code of the Xerox DocuColor series of printers.
The DocuColor series prints a rectangular grid of 15 by 8 minuscule yellow dots on every color page. The same grid is printed repeatedly over the entire page, but the repetitions of the grid are offset slightly from one another so that each grid is separated from the others. The grid is printed parallel to the edges of the page, and the offset of the grid from the edges of the page seems to vary. These dots encode up to 14 7-bit bytes of tracking information, plus row and column parity for error correction. Typically, about four of these bytes were unused (depending on printer model), giving 10 bytes of useful data. Below, we explain how to extract serial number, date, and time from these dots. Following the explanation, we implement the decoding process in an interactive computer program.
Because of their limited contrast with the background, the forensic dots are not usually visible to the naked eye under white light. They can be made visible by magnification (using a magnifying glass or microscope), or by illuminating the page with blue instead of white light. Pure blue light causes the yellow dots to appear black. It can be helpful to use magnification together with illumination under blue light, although most individuals with good vision will be able to see the dots distinctly using either technique by itself.
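The decoding process the EFF describes is essentially a parity check followed by a column read-out. The layout assumptions in this sketch (top row as a per-column parity dot, the remaining seven dots of a column as one 7-bit value, odd parity) are simplifications of mine for illustration; the EFF guide documents the actual field positions and meanings:

```python
# Toy decoder in the spirit of the EFF DocuColor write-up. Assumptions
# (mine): the grid arrives as 8 rows of 15 bits, the top row holds a
# parity dot for each column, and each column's lower 7 dots form one
# 7-bit value, with odd parity over the whole column.

def decode_columns(grid):
    """Return the 7-bit value of each column after checking odd parity."""
    values = []
    for col in zip(*grid):                   # iterate over the 15 columns
        if sum(col) % 2 != 1:                # odd-parity check per column
            raise ValueError("parity error in column")
        value = 0
        for bit in col[1:]:                  # skip the top parity dot
            value = (value << 1) | bit       # read remaining dots MSB-first
        values.append(value)
    return values
```

With the raw grid recovered (by photographing the page under blue light), the serial number, date, and time then fall out of specific columns, as the interactive program on the EFF page demonstrates.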
EDITED TO ADD: News story here.
EDITED TO ADD: And another.
Remember all those stories about the terrorists hiding messages in television broadcasts? They were all false alarms:
The first sign that something was amiss came a few days before Christmas Eve 2003. The US department of homeland security raised the national terror alert level to “high risk”. The move triggered a ripple of concern throughout the airline industry and nearly 30 flights were grounded, including long hauls between Paris and Los Angeles and subsequently London and Washington.
But in recent weeks, US officials have made a startling admission: the key intelligence that prompted the security alert was seriously flawed. CIA analysts believed they had detected hidden terrorist messages in al-Jazeera television broadcasts that identified flights and buildings as targets. In fact, what they had seen were the equivalent of faces in clouds – random patterns all too easily over-interpreted.
It’s a signal-to-noise issue. If you look at enough noise, you’re going to find signal just by random chance. It’s only signal that rises above random chance that’s valuable.
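The base-rate arithmetic is easy to make concrete (the numbers below are invented, purely for illustration):

```python
# Back-of-the-envelope version of the signal-to-noise point: even a tiny
# per-item false-positive rate produces many "detections" by chance alone
# once you scan enough noise.

frames_examined = 1_000_000      # e.g. video frames scanned for hidden data
false_positive_rate = 1e-4       # chance a random frame "matches" a pattern

expected_false_hits = frames_examined * false_positive_rate
print(expected_false_hits)       # 100 spurious "messages" from pure noise
```

Any real hidden signal has to stand out against that background of a hundred chance hits before a detection means anything.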
And the whole notion of terrorists using steganography to embed secret messages was ludicrous from the beginning. It makes no sense to communicate with terrorist cells this way, given the wide variety of more efficient anonymous communications channels.
I first wrote about this in September of 2001.
The politics is certainly interesting, but I am impressed with Felt’s tradecraft. Read Bob Woodward’s description of how he would arrange secret meetings with Felt.
I tried to call Felt, but he wouldn’t take the call. I tried his home in Virginia and had no better luck. So one night I showed up at his Fairfax home. It was a plain-vanilla, perfectly kept, everything-in-its-place suburban house. His manner made me nervous. He said no more phone calls, no more visits to his home, nothing in the open.
I did not know then that in Felt’s earliest days in the FBI, during World War II, he had been assigned to work on the general desk of the Espionage Section. Felt learned a great deal about German spying in the job, and after the war he spent time keeping suspected Soviet agents under surveillance.
So at his home in Virginia that summer, Felt said that if we were to talk it would have to be face to face where no one could observe us.
I said anything would be fine with me.
We would need a preplanned notification system—a change in the environment that no one else would notice or attach any meaning to. I didn’t know what he was talking about.
If you keep the drapes in your apartment closed, open them and that could signal me, he said. I could check each day or have them checked, and if they were open we could meet that night at a designated place. I liked to let the light in at times, I explained.
We needed another signal, he said, indicating that he could check my apartment regularly. He never explained how he could do this.
Feeling under some pressure, I said that I had a red cloth flag, less than a foot square—the kind used as warnings on long truck loads—that a girlfriend had found on the street. She had stuck it in an empty flowerpot on my apartment balcony.
Felt and I agreed that I would move the flowerpot with the flag, which usually was in the front near the railing, to the rear of the balcony if I urgently needed a meeting. This would have to be important and rare, he said sternly. The signal, he said, would mean we would meet that same night about 2 a.m. on the bottom level of an underground garage just over the Key Bridge in Rosslyn.
Felt said I would have to follow strict countersurveillance techniques. How did I get out of my apartment?
I walked out, down the hall, and took the elevator.
Which takes you to the lobby? he asked.
Did I have back stairs to my apartment house?
Use them when you are heading for a meeting. Do they open into an alley?
Take the alley. Don’t use your own car. Take a taxi to several blocks from a hotel where there are cabs after midnight, get dropped off and then walk to get a second cab to Rosslyn. Don’t get dropped off directly at the parking garage. Walk the last several blocks. If you are being followed, don’t go down to the garage. I’ll understand if you don’t show. All this was like a lecture. The key was taking the necessary time—one to two hours to get there. Be patient, serene. Trust the prearrangements. There was no fallback meeting place or time. If we both didn’t show, there would be no meeting.
Felt said that if he had something for me, he could get me a message. He quizzed me about my daily routine, what came to my apartment, the mailbox, etc. The Post was delivered outside my apartment door. I did have a subscription to the New York Times. A number of people in my apartment building near Dupont Circle got the Times. The copies were left in the lobby with the apartment number. Mine was No. 617, and it was written clearly on the outside of each paper in marker pen. Felt said if there was something important he could get to my New York Times—how, I never knew. Page 20 would be circled, and the hands of a clock in the lower part of the page would be drawn to indicate the time of the meeting that night, probably 2 a.m., in the same Rosslyn parking garage.
The relationship was a compact of trust; nothing about it was to be discussed or shared with anyone, he said.
How he could have made a daily observation of my balcony is still a mystery to me. At the time, before the era of intensive security, the back of the building was not enclosed, so anyone could have driven in the back alley to observe my balcony. In addition, my balcony and the back of the apartment complex faced onto a courtyard or back area that was shared with a number of other apartment or office buildings in the area. My balcony could have been seen from dozens of apartments or offices, as best I can tell.
A number of embassies were located in the area. The Iraqi Embassy was down the street, and I thought it possible that the FBI had surveillance or listening posts nearby. Could Felt have had the counterintelligence agents regularly report on the status of my flag and flowerpot? That seems highly unlikely, if not impossible.