Entries Tagged "forensics"


Is iPhone Security Really this Good?

Simson Garfinkel writes that the iPhone has such good security that the police can’t use it for forensics anymore:

Technologies the company has adopted protect Apple customers’ content so well that in many situations it’s impossible for law enforcement to perform forensic examinations of devices seized from criminals. Most significant is the increasing use of encryption, which is beginning to cause problems for law enforcement agencies when they encounter systems with encrypted drives.

“I can tell you from the Department of Justice perspective, if that drive is encrypted, you’re done,” Ovie Carroll, director of the cyber-crime lab at the Computer Crime and Intellectual Property Section in the Department of Justice, said during his keynote address at the DFRWS computer forensics conference in Washington, D.C., last Monday. “When conducting criminal investigations, if you pull the power on a drive that is whole-disk encrypted you have lost any chance of recovering that data.”

Yes, I believe that full-disk encryption—whether Apple’s FileVault or Microsoft’s BitLocker (I don’t know what the iOS system is called)—is good; but its security is only as good as the password the user chooses.

The iPhone always supported a PIN lock, but the PIN wasn’t a deterrent to a serious attacker until the iPhone 3GS. Because those early phones didn’t use their hardware to perform encryption, a skilled investigator could hack into the phone, dump its flash memory, and directly access the phone’s address book, e-mail messages, and other information. But now, with Apple’s more sophisticated approach to encryption, investigators who want to examine data on a phone have to try every possible PIN. Examiners perform these so-called brute-force attacks with special software, because the iPhone can be programmed to wipe itself if the wrong PIN is provided more than 10 times in a row. This software must be run on the iPhone itself, limiting the guessing speed to 80 milliseconds per PIN. Trying all four-digit PINs therefore requires no more than 800 seconds, a little more than 13 minutes. However, if the user chooses a six-digit PIN, the maximum time required would be 22 hours; a nine-digit PIN would require 2.5 years, and a 10-digit PIN would take 25 years. That’s good enough for most corporate secrets—and probably good enough for most criminals as well.
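The arithmetic checks out. A quick sketch of the calculation, assuming the quoted 80 milliseconds per on-device guess:

```python
# Worst-case time to try every all-digit PIN at 80 ms per guess,
# the on-device rate quoted above.
GUESS_SECONDS = 0.080

for digits in (4, 6, 9, 10):
    keyspace = 10 ** digits              # number of possible PINs
    seconds = keyspace * GUESS_SECONDS
    years = seconds / (365 * 24 * 3600)
    print(f"{digits}-digit PIN: {seconds:>13,.0f} s  (~{years:,.1f} years)")
```

This reproduces the figures above: 800 seconds, 22 hours, roughly 2.5 years, and roughly 25 years.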

Leaving aside the user practice questions—my guess is that very few users, even those with something to hide, use a ten-digit PIN—could this possibly be true? In the introduction to Applied Cryptography, almost 20 years ago, I wrote: “There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files.”

Since then, I’ve learned two things: 1) there are a lot of gradients to kid sister cryptography, and 2) major government cryptography is very hard to get right. It’s not the cryptography; it’s everything around the cryptography. I said as much in the preface to Secrets and Lies in 2000:

Cryptography is a branch of mathematics. And like all mathematics, it involves numbers, equations, and logic. Security, palpable security that you or I might find useful in our lives, involves people: things people know, relationships between people, people and how they relate to machines. Digital security involves computers: complex, unstable, buggy computers.

Mathematics is perfect; reality is subjective. Mathematics is defined; computers are ornery. Mathematics is logical; people are erratic, capricious, and barely comprehensible.

If, in fact, we’ve finally achieved something resembling this level of security for our computers and handheld computing devices, this is something to celebrate.

But I’m skeptical.

Another article.

Slashdot has a thread on the article.

EDITED TO ADD: More analysis. And Elcomsoft can crack iPhones.

Posted on August 21, 2012 at 1:42 PM

Biases in Forensic Science

Some errors in forensic science may be the result of the biases of the examiners:

Though they cannot prove it, Dr Dror and Dr Hampikian suspect the difference in contextual information given to the examiners was the cause of the different results. The original pair may have subliminally interpreted ambiguous information in a way helpful to the prosecution, even though they did not consciously realise what they were doing.

[…]

This one example does not prove the existence of a systematic problem. But it does point to a sloppy approach to science. According to Norah Rudin, a forensic-DNA consultant in Mountain View, California, forensic scientists are beginning to accept that cognitive bias exists, but there is still a lot of resistance to the idea, because examiners take the criticism personally and feel they are being accused of doing bad science. According to Dr Rudin, the attitude that cognitive bias can somehow be willed away, by education, training or good intentions, is still pervasive.

Posted on January 31, 2012 at 11:13 AM

Full-Disk Encryption Works

According to researchers, full-disk encryption is hampering police forensics.

The authors of the report suggest there are some things law enforcement can do, but they all must happen before a drive is buttoned up by encryption. Specifically, they say that law enforcement should stop turning computers off to bring them to another location for study; doing so means a password will be needed to read the encrypted data, and in some cases it causes the data to be automatically destroyed. Fortunately, there are some tools forensics experts can use to gather data while a machine sits untouched, such as copying everything in memory to a separate disk. The team also suggests that law enforcement check whether a drive is encrypted before scanning it with their own software; blindly scanning an encrypted drive will likely result in a lot of wasted time.
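That last suggestion, checking for encryption before scanning, is commonly done with an entropy test, since well-encrypted data is statistically indistinguishable from random bytes. A minimal sketch of the idea (my illustration, not from the paper; the device path and threshold are hypothetical):

```python
# Triage sketch: sample raw sectors and estimate Shannon entropy.
# Uniformly near-8 bits/byte across the device suggests whole-disk
# encryption (or compression), so a plaintext scan would be wasted time.
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def sample_entropy(path: str, sector=4096, samples=64, stride=1 << 24):
    scores = []
    with open(path, "rb") as dev:
        for i in range(samples):
            dev.seek(i * stride)
            chunk = dev.read(sector)
            if not chunk:        # ran off the end of the device
                break
            scores.append(entropy_bits_per_byte(chunk))
    return scores

scores = sample_entropy("/dev/sdb")   # hypothetical device node
if scores and min(scores) > 7.9:
    print("Uniformly high entropy: drive is likely encrypted.")
```

High entropy alone can also mean compression, so in practice examiners corroborate with known volume signatures; LUKS, BitLocker, and FileVault volumes all have recognizable headers.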

Paper, behind a paywall.

Posted on December 1, 2011 at 1:44 PM

Identifying Speakers in Encrypted Voice Communication

I’ve already written how it is possible to detect words and phrases in encrypted VoIP calls. Turns out it’s possible to detect speakers as well:

Abstract: Most of the voice over IP (VoIP) traffic is encrypted prior to its transmission over the Internet. This makes the identity tracing of perpetrators during forensic investigations a challenging task since conventional speaker recognition techniques are limited to unencrypted speech communications. In this paper, we propose techniques for speaker identification and verification from encrypted VoIP conversations. Our experimental results show that the proposed techniques can correctly identify the actual speaker 70-75% of the time among a group of 10 potential suspects. We also achieve more than a 10-fold improvement over random guessing in identifying a perpetrator in a group of 20 potential suspects. An equal error rate of 17% is achieved for speaker verification on the CSLU speaker recognition corpus.
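For context, random guessing among N suspects succeeds 1/N of the time, so the reported figures translate as follows:

```python
# What the abstract's numbers imply relative to random guessing.
print(f"10 suspects: baseline {1/10:.0%}; reported 70-75% is "
      f"{0.70 * 10:.1f}-{0.75 * 10:.1f}x better than chance")
print(f"20 suspects: baseline {1/20:.0%}; a >10-fold improvement "
      f"implies accuracy above {10 / 20:.0%}")
```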

Posted on September 16, 2011 at 12:31 PM

Software as Evidence

Increasingly, chains of evidence include software steps. It’s not just the RIAA suing people—and getting it wrong—based on automatic systems to detect and identify file sharers. It’s forensic programs used to collect and analyze data from computers and smartphones. It’s audit logs saved and stored by ISPs and websites. It’s location data from cell phones. It’s e-mails and IMs and comments posted to social networking sites. It’s tallies from digital voting machines. It’s images and metadata from surveillance cameras. The list goes on and on. We in the security field know the risks associated with trusting digital data, but this evidence is routinely assumed by courts to be accurate.

Sergey Bratus is starting to look at this problem. His paper, written with Ashlyn Lembree and Anna Shubina, is “Software on the Witness Stand: What Should it Take for Us to Trust it?”

We discuss the growing trend of electronic evidence, created automatically by autonomously running software, being used in both civil and criminal court cases. We discuss trustworthiness requirements that we believe should be applied to such software and platforms it runs on. We show that courts tend to regard computer-generated materials as inherently trustworthy evidence, ignoring many software and platform trustworthiness problems well known to computer security researchers. We outline the technical challenges in making evidence-generating software trustworthy and the role Trusted Computing can play in addressing them.

From a presentation he gave on the subject:

Constitutionally, criminal defendants have the right to confront accusers. If software is the accusing agent, what should the defendant be entitled to under the Confrontation Clause?

[…]

Witnesses are sworn in and cross-examined to expose biases & conflicts—what about software as a witness?

Posted on April 19, 2011 at 6:47 AM

Computational Forensics

Interesting article from IEEE Spectrum:

During two years of deliberation by the National Academy’s forensic science committee (of which I was a member), a troubling picture emerged. A large part of current forensics practice is skill and art rather than science, and the influences present in a typical law-enforcement setting are not conducive to doing the best science. Also, many of the methods have never been scientifically validated. And the wide variation in forensic data often makes interpretation exceedingly difficult.

[…]

So how might greater automation of classical forensics techniques help? New algorithms and software could improve things in a number of ways. One important area is to quantify the chance that the evidence is unique by applying various probability models.
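A simple example of such a probability model is the random-match probability used for DNA profiles: under independence assumptions, the frequency of a full profile is the product of the per-locus genotype frequencies. A sketch with made-up allele frequencies (the locus names are real CODIS markers, but the numbers are illustrative):

```python
# Random-match probability: the chance a random, unrelated person shares
# a DNA profile, assuming Hardy-Weinberg and independence across loci.
# Allele frequencies below are made up for illustration.
profile = {
    "D3S1358": (0.25, 0.15),   # heterozygous: alleles with freqs p, q
    "vWA":     (0.11, 0.20),
    "FGA":     (0.05, 0.14),
}

rmp = 1.0
for p, q in profile.values():
    rmp *= 2 * p * q           # heterozygote genotype frequency is 2pq

print(f"Random-match probability: about 1 in {1 / rmp:,.0f}")
```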

[…]

Computational forensics can also be used to narrow down the range of possible matches against a database of cataloged patterns. To do that, you need a way to quantify the similarity between the query and each entry in the database. These similarity values are then used to rank the database entries and retrieve the closest ones for further comparison. Of course, the process becomes more complicated when the database contains millions or even hundreds of millions of entries. But then, computers are much better suited than people to such tedious and repetitive search tasks.
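To make that ranking step concrete, here is a minimal sketch (my example, with synthetic feature vectors): each cataloged pattern is reduced to a numeric feature vector, every entry is scored against the query by cosine similarity, and the top matches are returned for human comparison.

```python
# Rank database entries by cosine similarity to a query feature vector,
# returning the top-k candidates for closer (human) examination.
import numpy as np

def top_k_matches(query: np.ndarray, database: np.ndarray, k: int = 5):
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ q                       # cosine similarity per entry
    best = np.argsort(scores)[::-1][:k]   # indices of the k closest entries
    return [(int(i), float(scores[i])) for i in best]

# Toy data: 1,000 cataloged patterns, 32 features each (hypothetical).
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 32))
query = database[42] + rng.normal(scale=0.1, size=32)  # noisy copy of #42
print(top_k_matches(query, database))    # entry 42 should rank first
```

At millions of entries the exhaustive scan gives way to approximate nearest-neighbor indexes, but the principle is the same.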

Posted on December 20, 2010 at 11:48 AM

Kahn, Diffie, Clark, and Me at Bletchley Park

Saturday, I visited Bletchley Park to speak at the Annual ACCU Security Fundraising Conference. They had a stellar line-up of speakers this year, and I was pleased to be a part of the day.

Talk #1: “The Art of Forensic Warfare,” Andy Clark. Riffing on Sun Tzu’s The Art of War, Clark discussed the war—the back and forth—between cyber attackers and cyber forensics. This isn’t to say that we’re at war, but today’s attacker tactics are increasingly sophisticated and warlike. Additionally, the pace is greater, the scale of impact is greater, and the subjects of attack are broader. To defend ourselves, we need to be equally sophisticated and—possibly—more warlike.

Clark drew parallels from some of the chapters of Sun Tzu’s book combined with examples of the work at Bletchley Park. Laying plans: when faced with an attacker—especially one of unknown capabilities, tactics, and motives—it’s important to both plan ahead and plan for the unexpected. Attack by stratagem: increasingly, attackers are employing complex and long-term strategies; defenders need to do the same. Energy: attacks increasingly start off simple and get more complex over time; while it’s easier to detect primary attacks, secondary techniques tend to be more subtle and harder to detect. Terrain: modern attacks take place across a very broad range of terrain, including hardware, OSs, networks, communication protocols, and applications. The business environment under attack is another example of terrain, equally complex. The use of spies: not only human spies, but also keyloggers and other embedded eavesdropping malware. There’s a great World War II double-agent story about Eddie Chapman, codenamed ZIGZAG.

Talk #2: “How the Allies Suppressed the Second Greatest Secret of World War II,” David Kahn. This talk is from Kahn’s article of the same name, published in the Oct 2010 issue of The Journal of Military History. The greatest secret of World War II was the atom bomb; the second greatest secret was that the Allies were reading the German codes. But while there was a lot of public information in the years after World War II about Japanese codebreaking and its value, there was almost nothing about German codebreaking. Kahn discussed how this information was suppressed, and how historians writing World War II histories never figured it out. No one imagined as large and complex an operation as Bletchley Park; it was the first time in history that something like this had ever happened. Most of Kahn’s time was spent in a very interesting Q&A about the history of Bletchley Park and World War II codebreaking.

Talk #3: “DNSSec, A System for Improving Security of the Internet Domain Name System,” Whitfield Diffie. Whit talked about three watersheds in modern communications security. The first was the invention of the radio. Pre-radio, the most common communications security device was the code book. This was no longer enough when radio caused the amount of communications to explode. In response, inventors took the research in Vigenère ciphers and automated them. This automation led to an explosion of designs and an enormous increase in complexity—and the rise of modern cryptography.

The second watershed was shared computing. Before the 1960s, the security of computers was the physical security of computer rooms. Timesharing changed that. The result was computer security, a much harder problem than cryptography. Computer security is primarily the problem of writing good code. But writing good code is hard and expensive, so functional computer security is primarily the problem of dealing with code that isn’t good. Networking—and the Internet—isn’t just an expansion of computing capacity. The real difference is how cheap it is to set up communications connections. Setting up these connections requires naming: both IP addresses and domain names. Security, of course, is essential for this all to work; DNSSec is a critical part of that.

The third watershed is cloud computing, or whatever you want to call the general trend of outsourcing computation. Google is a good example. Every organization uses Google search all the time, which probably makes it the most valuable intelligence stream on the planet. How can you protect yourself? You can’t, just as you can’t whenever you hand over your data for storage or processing—you just have to trust your outsourcer. There are two solutions. The first is legal: an enforceable contract that protects you and your data. The second is technical, but mostly theoretical: homomorphic encryption that allows you to outsource computation of data without having to trust that outsourcer.
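Fully homomorphic encryption remains mostly theoretical, but partially homomorphic schemes have existed for years. A toy sketch of Paillier, which is additively homomorphic, so an untrusted server can sum encrypted values without ever seeing them (the primes are insecurely small, for illustration only):

```python
# Toy Paillier cryptosystem (additively homomorphic).
# WARNING: tiny primes, illustration only; real keys are 2048+ bits.
import math, random

p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael's lambda for n = p*q
mu = pow(lam, -1, n)               # valid because we use g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:     # blinding factor must be coprime to n
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

a, b = encrypt(123), encrypt(456)
total = (a * b) % n2               # multiplying ciphertexts adds plaintexts
assert decrypt(total) == (123 + 456) % n
print("sum computed under encryption:", decrypt(total))
```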

Diffie’s final point is that we’re entering an era of unprecedented surveillance possibilities. It doesn’t matter if people encrypt their communications, or if they encrypt their data in storage. As long as they have to give their data to other people for processing, it will be possible to eavesdrop on it. Of course the methods will change, but the result will be an enormous trove of information about everybody.

Talk #4: “Reconceptualizing Security,” me. It was similar to this essay and this video.

Posted on November 9, 2010 at 6:01 AM

Tracking Location Based on Water Isotope Ratios

Interesting:

…water molecules differ slightly in their isotope ratios depending on the minerals at their source. …researchers found that water samples from 33 cities across the United States could be reliably traced back to their origin based on their isotope ratios. And because the human body breaks down water’s constituent atoms of hydrogen and oxygen to construct the proteins that make hair cells, those cells can preserve the record of a person’s travels.

Here’s the paper.

Posted on July 5, 2010 at 10:00 AM

