It’s not cryptography—despite the name—but it’s interesting:
DNA-based watermarks using the DNA-Crypt algorithm
The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms.
The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle: the least significant base in the case of DNA-Crypt, and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes, such as the Hamming code or the WDH code. Mutations, which occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides, using a set of heuristics over three input dimensions, whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein.
The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt, like all steganographic algorithms, produces few mismatches between the sequences.
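To make the least-significant-base principle concrete, here's a toy sketch (hypothetical code, not the actual DNA-Crypt implementation): give each base a 2-bit value and store one watermark bit in the low bit of each codon's third base. Real DNA-Crypt restricts itself to substitutions that leave the translated protein unchanged; this sketch skips that check.

```python
# Toy sketch of the "least significant base" idea, not the real DNA-Crypt.
# Bases get 2-bit values; one watermark bit is written into the low bit of
# each codon's third base (A<->C and G<->T swaps). A real implementation
# would only make substitutions that preserve the encoded protein.

BASE_VAL = {"A": 0, "C": 1, "G": 2, "T": 3}
VAL_BASE = {v: k for k, v in BASE_VAL.items()}

def embed(seq: str, bits: str) -> str:
    """Hide one watermark bit per codon in the third-base position."""
    whole = len(seq) - len(seq) % 3
    codons = [seq[i:i + 3] for i in range(0, whole, 3)]
    assert len(bits) <= len(codons), "sequence too short for watermark"
    out = []
    for codon, bit in zip(codons, bits):
        v = (BASE_VAL[codon[2]] & 0b10) | int(bit)  # overwrite the low bit
        out.append(codon[:2] + VAL_BASE[v])
    out.extend(codons[len(bits):])
    return "".join(out) + seq[whole:]               # keep any trailing bases

def extract(seq: str, nbits: int) -> str:
    """Read the watermark back from the third base of each codon."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    return "".join(str(BASE_VAL[c[2]] & 1) for c in codons[:nbits])
```

For example, embedding the bits 1011 in ATGGCTAAAGGT yields ATTGCGAACGGT, from which extract recovers 1011; only third-position bases change.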
Posted on June 8, 2007 at 11:47 AM •
…despite the use of encryption, a passive eavesdropper can still learn private information about what someone is watching via their Slingbox Pro.
First, in order to conserve bandwidth, the Slingbox Pro uses something called variable bitrate (VBR) encoding. VBR is a standard approach for compressing streaming multimedia. At a very abstract level, the idea is to only transmit the differences between frames. This means that if a scene changes rapidly, the Slingbox Pro must still transmit a lot of data. But if the scene changes slowly, the Slingbox Pro will only have to transmit a small amount of data—a great bandwidth saver.
Now notice that different movies have different visual effects (e.g., some movies have frequent and rapid scene changes, others don’t). The use of VBR encodings therefore means that the amount of data transmitted over time can serve as a fingerprint for a movie. And, since encryption alone won’t fully conceal the number of bytes transmitted, this fingerprint can survive encryption!
We experimented with fingerprinting encrypted Slingbox Pro movie transmissions in our lab. We took 26 of our favorite movies (we tried to pick movies from the same director, or multiple movies in a series), and we played them over our Slingbox Pro. Sometimes we streamed them to a laptop attached to a wired network, and sometimes we streamed them to a laptop connected to an 802.11 wireless network. In all cases the laptop was one hop away.
We trained our system on some of those traces. We then took new query traces for these movies and tried to match them to our database. For over half of the movies, we were able to correctly identify the movie over 98% of the time. This is well above the less than 4% accuracy that one would get by random chance.
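The fingerprinting idea above can be sketched in a few lines (this is an illustration, not the classifier from the paper): reduce each stream to a bytes-per-second trace, then match a query trace to the reference with the highest normalized correlation. Encryption hides packet contents but not sizes or timing, which is all this uses.

```python
# Sketch of throughput-trace fingerprinting for VBR streams. Each movie is
# represented by a bytes-per-second time series; a query trace is matched
# to the reference whose normalized correlation with it is highest.

import math

def normalize(trace):
    mean = sum(trace) / len(trace)
    sd = math.sqrt(sum((x - mean) ** 2 for x in trace) / len(trace)) or 1.0
    return [(x - mean) / sd for x in trace]

def correlation(a, b):
    """Pearson-style correlation over the overlapping prefix of two traces."""
    n = min(len(a), len(b))
    na, nb = normalize(a[:n]), normalize(b[:n])
    return sum(x * y for x, y in zip(na, nb)) / n

def identify(query, references):
    """Return the name of the reference trace that best matches the query."""
    return max(references, key=lambda name: correlation(query, references[name]))
```

A movie with rapid cuts produces a spiky trace, a slow one a flat trace, and even noisy measurements of the same movie correlate far better with the right reference than with the wrong one.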
More details in the paper.
Posted on June 4, 2007 at 1:24 PM •
This paper, from February’s International Journal of Health Geographics, (abstract here), analyzes the consequences of a nuclear attack on several American cities and points out that burn unit capacity nationwide is far too small to accommodate the victims. It says just training people to flee crosswind could greatly reduce deaths from fallout.
The effects of 20 kiloton and 550 kiloton nuclear detonations on high priority target cities are presented for New York City, Chicago, Washington D.C. and Atlanta. Thermal, blast and radiation effects are described, and affected populations are calculated using 2000 block level census data. Weapons of 100 Kts and up are primarily incendiary or radiation weapons, able to cause burns and start fires at distances greater than they can significantly damage buildings, and to poison populations through radiation injuries well downwind in the case of surface detonations. With weapons below 100 Kts, blast effects tend to be stronger than primary thermal effects from surface bursts. From the point of view of medical casualty treatment and administrative response, there is an ominous pattern where these fatalities and casualties geographically fall in relation to the location of hospital and administrative facilities. It is demonstrated that a staggering number of the main hospitals, trauma centers, and other medical assets are likely to be in the fatality plume, rendering them essentially inoperable in a crisis.
Among the consequences of this outcome would be the probable loss of command-and-control, mass casualties that will have to be treated in an unorganized response by hospitals on the periphery, as well as other expected chaotic outcomes from inadequate administration in a crisis. Vigorous, creative, and accelerated training and coordination among the federal agencies tasked for WMD response, military resources, academic institutions, and local responders will be critical for large-scale WMD events involving mass casualties.
I’ve long said that emergency response is something we should be spending money on. This kind of analysis is both interesting and helpful.
Posted on April 6, 2007 at 10:24 AM •
WEP (Wired Equivalent Privacy) was the protocol used to secure wireless networks. It’s known to be insecure and has been replaced by Wi-Fi Protected Access, but it’s still in use.
This paper, “Breaking 104 bit WEP in less than 60 seconds,” is the best attack against WEP to date:
We demonstrate an active attack on the WEP protocol that is able to recover a 104-bit WEP key using fewer than 40,000 frames with a success probability of 50%. In order to succeed in 95% of all cases, 85,000 packets are needed. The IV of these packets can be randomly chosen. This is an improvement in the number of required frames by more than an order of magnitude over the best known key-recovery attacks for WEP. On an IEEE 802.11g network, the number of frames required can be obtained by re-injection in less than a minute. The required computational effort is approximately 2^20 RC4 key setups, which on current desktop and laptop CPUs is negligible.
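To see why 2^20 key setups is negligible, here is the standard RC4 key-scheduling algorithm, which is the per-candidate-key work the abstract is counting (the KSA itself is the published algorithm; the timing claim is a rough estimate). Each setup is just 256 additions and swaps, so roughly a million candidate keys amount to seconds of work on a desktop CPU.

```python
# Standard RC4 key-scheduling algorithm (KSA): the unit of work behind the
# "2^20 RC4 key setups" figure. Each setup does 256 additions and swaps to
# turn a key into RC4's initial 256-byte state permutation.

def rc4_ksa(key: bytes) -> list:
    """Produce RC4's initial state permutation S from a key."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    return S
```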
Posted on April 4, 2007 at 12:46 PM •
Interesting article: Neil M. Richards & Daniel J. Solove, “Privacy’s Other Path: Recovering the Law of Confidentiality,” 96 Georgetown Law Journal, 2007.
The familiar legend of privacy law holds that Samuel Warren and Louis Brandeis “invented” the right to privacy in 1890, and that William Prosser aided its development by recognizing four privacy torts in 1960. In this article, Professors Richards and Solove contend that Warren, Brandeis, and Prosser did not invent privacy law, but took it down a new path. Well before 1890, a considerable body of Anglo-American law protected confidentiality, which safeguards the information people share with others. Warren, Brandeis, and later Prosser turned away from the law of confidentiality to create a new conception of privacy based on the individual’s “inviolate personality.” English law, however, rejected Warren and Brandeis’s conception of privacy and developed a conception of privacy as confidentiality from the same sources used by Warren and Brandeis. Today, in contrast to the individualistic conception of privacy in American law, the English law of confidence recognizes and enforces expectations of trust within relationships. Richards and Solove explore how and why privacy law developed so differently in America and England. Understanding the origins and developments of privacy law’s divergent paths reveals that each body of law’s conception of privacy has much to teach the other.
Posted on March 19, 2007 at 6:39 AM •
Ross Anderson and Tyler Moore just published their survey paper on the economics of information security. (Here are the slides from the conference talk, and a shorter version from Science.)
Posted on February 9, 2007 at 7:47 AM •
Peter Gutmann’s “A Cost Analysis of Windows Vista Content Protection” is fascinating reading:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called “premium content”, typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it’s not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server). This document analyses the cost involved in Vista’s content protection, and the collateral damage that this incurs throughout the computer industry.
Executive Executive Summary
The Vista Content Protection specification could very well constitute the longest suicide note in history.
It contains stuff like:
Denial-of-Service via Driver Revocation
Once a weakness is found in a particular driver or device, that driver will have its signature revoked by Microsoft, which means that it will cease to function (details on this are a bit vague here, presumably some minimum functionality like generic 640×480 VGA support will still be available in order for the system to boot). This means that a report of a compromise of a particular driver or device will cause all support for that device worldwide to be turned off until a fix can be found. Again, details are sketchy, but if it’s a device problem then presumably the device turns into a paperweight once it’s revoked. If it’s an older device for which the vendor isn’t interested in rewriting their drivers (and in the fast-moving hardware market most devices enter “legacy” status within a year or two of their replacement models becoming available), all devices of that type worldwide become permanently unusable.
Read the whole thing.
And here’s commentary on the paper.
Posted on December 26, 2006 at 1:56 PM •
Microsoft has a new anti-phishing service in Internet Explorer 7 that will turn the address bar green and display the website owner’s identity when surfers visit on-line merchants previously approved as legitimate. So far, so good. But the service is only available to corporations, not to sole proprietorships, partnerships, or individuals.
Of course, if a merchant’s bar doesn’t turn green it doesn’t mean that they’re bad. It’ll be white, which indicates “no information.” There are also yellow and red indications, corresponding to “suspicious” and “known fraudulent site.” But small businesses are worried that customers will be afraid to buy from non-green sites.
That’s possible, but it’s more likely that users will learn that the marker isn’t reliable and start to ignore it.
Any white-list system like this has two sources of error: false positives, where phishers get the marker, and false negatives, where legitimate merchants don’t. Any such system has to deal effectively with both.
EDITED TO ADD (12/21): Research paper: “Phinding Phish: An Evaluation of Anti-Phishing Toolbars,” by L. Cranor, S. Egelman, J. Hong, and Y. Zhang.
Posted on December 21, 2006 at 6:58 AM •
Absolutely fascinating paper: “A Platform for RFID Security and Privacy Administration.” The basic idea is that you carry a personalized device that jams the signals from all the RFID tags on your person until you authorize otherwise.
This paper presents the design, implementation, and evaluation of the RFID Guardian, the first-ever unified platform for RFID security and privacy administration. The RFID Guardian resembles an “RFID firewall”, enabling individuals to monitor and control access to their RFID tags by combining a standard-issue RFID reader with unique RFID tag emulation capabilities. Our system provides a platform for coordinated usage of RFID security mechanisms, offering fine-grained control over RFID-based auditing, key management, access control, and authentication capabilities. We have prototyped the RFID Guardian using off-the-shelf components, and our experience has shown that active mobile devices are a valuable tool for managing the security of RFID tags in a variety of applications, including protecting low-cost tags that are unable to regulate their own usage.
As Cory Doctorow points out, this is potentially a way to reap the benefits of RFID without paying the cost:
Up until now, the standard answer to privacy concerns with RFIDs is to just kill them—put your new US Passport in a microwave for a few minutes to nuke the chip. But with an RFID firewall, it might be possible to reap the benefits of RFID without the cost.
General info here. They’ve even built a prototype.
Posted on December 11, 2006 at 6:20 AM •
A new paper describes a timing attack against RSA, one that bypasses existing security measures against these sorts of attacks. The attack described is optimized for the Pentium 4, and is particularly suited for applications like DRM.
Meta moral: If Alice controls the device, and Bob wants to control secrets inside the device, Bob has a very difficult security problem. These “side-channel” attacks—timing, power, radiation, etc.—allow Alice to mount some very devastating attacks against Bob’s secrets.
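Here's a toy illustration of the kind of leak timing attacks exploit (far simpler than the paper's attack, which gets around existing countermeasures): naive square-and-multiply modular exponentiation performs an extra multiply only for the 1 bits of the secret exponent, so total running time is correlated with the key.

```python
# Toy timing side channel: left-to-right square-and-multiply with a
# secret-dependent branch. The extra multiply on 1 bits of the exponent
# makes running time correlate with the secret key's Hamming weight.

def modexp_leaky(base: int, exponent: int, modulus: int) -> int:
    """Naive modular exponentiation with a key-dependent code path."""
    result = 1
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus       # always: square
        if bit == "1":                             # secret-dependent branch
            result = (result * base) % modulus     # only for 1 bits: multiply
    return result
```

Averaged over many runs, exponents with more 1 bits take measurably longer; that statistical signal is what a timing attack recovers, which is why real implementations use constant-time code and blinding.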
I’m going to write more about this for Wired next week, but for now you can read the paper, the Slashdot thread, and the essay I wrote in 1998 about side-channel attacks (also this academic paper).
Posted on November 21, 2006 at 7:24 AM •