September 2012 Archives
A proposal to replace cryptography's Alice and Bob with Sita and Rama:
Any book on cryptography invariably involves the characters Alice and Bob. It is always Alice who wants to send a message to Bob. This article replaces the dramatis personae of cryptography with characters drawn from Hindu mythology.
Kay Hamacher and Stefan Katzenbeisser, "Public Security: Simulations Need to Replace Conventional Wisdom," New Security Paradigms Workshop, 2011.
Abstract: Is more always better? Is conventional wisdom always the right guideline in the development of security policies that have large opportunity costs? Is the evaluation of security measures after their introduction the best way? In the past, these questions were frequently left unasked before the introduction of many public security measures. In this paper we put forward the new paradigm that agent-based simulations are an effective and most likely the only sustainable way for the evaluation of public security measures in a complex environment. As a case-study we provide a critical assessment of the power of Telecommunications Data Retention (TDR), which was introduced in most European countries, despite its huge impact on privacy. Up to now it is unknown whether TDR has any benefits in the identification of terrorist dark nets in the period before an attack. The results of our agent-based simulations suggest, contrary to conventional wisdom, that the current practice of acquiring more data may not necessarily yield higher identification rates.
Both the methodology and the conclusions are interesting.
This is the first one discovered, I think.
It's probably too late for me to affect the final decision, but I am hoping for "no award."
It's not that the new hash functions aren't any good, it's that we don't really need one. When we started this process back in 2006, it looked as if we would be needing a new hash function soon. The SHA family (which is really part of the MD4 and MD5 family) was under increasing pressure from new types of cryptanalysis. We didn't know how long the various SHA-2 variants would remain secure. But it's 2012, and SHA-512 is still looking good.
Even worse, none of the SHA-3 candidates is significantly better. Some are faster, but not orders of magnitude faster. Some are smaller in hardware, but not orders of magnitude smaller. When SHA-3 is announced, I'm going to recommend that, unless the improvements are critical to their application, people stick with the tried and true SHA-512. At least for a while.
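For anyone taking that advice, SHA-512 is already in every mainstream crypto library. A minimal sketch, using Python's hashlib purely as an example:

```python
import hashlib

# SHA-512 is part of the SHA-2 family and ships with standard libraries;
# no SHA-3 finalist is needed to get a strong, well-analyzed hash today.
digest = hashlib.sha512(b"message to hash").hexdigest()
print(digest)              # 128 hex characters, i.e. a 512-bit digest
assert len(digest) == 128
```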
I don't think NIST is going to announce "no award"; I think it's going to pick one. And of the five remaining, I don't really have a favorite. Of course I want Skein to win, but that's out of personal pride, not for some objective reason. And while I like some more than others, I think any would be okay.
Well, maybe there's one reason NIST should choose Skein. Skein isn't just a hash function, it's the large-block cipher Threefish and a mechanism to turn it into a hash function. I think the world actually needs a large-block cipher, and if NIST chooses Skein, we'll get one.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Interesting article on how the NSA is approaching risk in the era of cool consumer devices. There's a discussion of the president's network-disabled iPad, and the classified cell phone that flopped because it took so long to develop and was so clunky. Turns out that everyone wants to use iPhones.
Levine concluded, "Using commercial devices to process classified phone calls, using commercial tablets to talk over wifi -- that's a major game-changer for NSA to put classified information over wifi networks, but that's what we're going to do." One way that would be done, he said, was by buying capability from cell carriers that have networks of cell towers, in much the same way small cell providers and companies like OnStar do.
Interestingly, Levine described an agency that is being forced to adopt a more realistic and practical attitude toward risk. "It used to be that the NSA squeezed all risk out of everything," he said. Even lower levels of sensitivity were covered by Top Secret-level crypto. "We don't do that now -- it's levels of risk. We say we can give you this, but can ensure only this level of risk." Partly this came about, he suggested, because the military has an inherent understanding that nothing is without risk, and is used to seeing things in terms of tradeoffs: "With the military, everything is a risk decision. If this is the communications capability I need, I'll have to take that risk."
A recent Ars Technica article made the point that password crackers are getting better, and therefore passwords are getting weaker. It's not just computing speed; we now have many databases of actual passwords we can use to create dictionaries of common passwords, or common password-generation techniques. (Example: dictionary word plus a single digit.)
This really isn't anything new. I wrote about it in 2007. Even so, the article has caused a bit of a stir since it was published. I didn't blog about it then, because I was waiting for Joe Bonneau to comment. He has, in a two-part blog post that's well worth reading.
Password cracking can be evaluated on two nearly independent axes: power (the ability to check a large number of guesses quickly and cheaply using optimized software, GPUs, FPGAs, and so on) and efficiency (the ability to generate large lists of candidate passwords accurately ranked by real-world likelihood using sophisticated models). It's relatively simple to measure cracking power in units of hashes evaluated per second or hashes per second per unit cost. There are details to account for, like the complexity of the hash being evaluated, but this problem is generally similar to cryptographic brute force against unknown (random) keys and power is generally increasing exponentially in tune with Moore's law. The move to hardware-based cracking has enabled well-documented orders-of-magnitude speedups.
Cracking efficiency, by contrast, is rarely measured well.
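To make the power/efficiency distinction concrete, here is a toy sketch of the efficiency side: generating candidate guesses in rough likelihood order, including the dictionary-word-plus-single-digit pattern mentioned above. The word list and use of SHA-256 are hypothetical, chosen only for illustration; real crackers use millions of leaked passwords and whatever hash the target system uses.

```python
import hashlib
import itertools

# Hypothetical mini-dictionary; real crackers rank millions of leaked passwords.
common_words = ["password", "letmein", "dragon", "monkey"]

def candidates():
    """Yield guesses in rough likelihood order: bare dictionary words
    first, then the common 'word plus a single digit' pattern."""
    yield from common_words
    for word, digit in itertools.product(common_words, "0123456789"):
        yield word + digit

def crack(target_hash):
    """Return (password, number of guesses tried) or (None, None)."""
    for i, guess in enumerate(candidates(), start=1):
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess, i
    return None, None

target = hashlib.sha256(b"monkey7").hexdigest()
print(crack(target))  # ('monkey7', 42)
```

Power is how fast the inner hash comparison runs; efficiency is how early in the `candidates()` ordering the real password appears. A GPU rig improves the first, a better-ranked guess list improves the second.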
It's a known theft tactic to swallow what you're stealing. It works for food at the supermarket, and it also can work for diamonds. Here's a twist on that tactic:
Police say he could have swallowed the stone in an attempt to distract the diamond's owner, Suresh de Silva, while his accomplice stole the real gem.
Mr de Silva told the BBC that the Chinese men had visited the stall twice and he believed the diamond theft occurred during the first visit and not the second one, when the man swallowed the stone.
He insisted the man was trying to swap a fake stone for the real one and only swallowed the stone when he panicked after Mr de Silva apprehended him and alerted police.
This reminds me of group pickpocket tactics against tourists: the person who steals the wallet quickly passes it to someone else, so if the victim grabs the attacker, the wallet is long gone.
Two of my books can be seen in the background in CBS' new Sherlock Holmes drama, Elementary. Copies of Schneier on Security and Secrets & Lies are prominently displayed on Sherlock Holmes' bookshelf. You can see them in the first few minutes of the pilot episode. The show's producers contacted me early on to ask permission to use my books, so it didn't come as a surprise, but it's still a bit of a thrill.
Here's a listing of all the books visible on the bookshelf.
This sort of attack will become more common as banks require two-factor authentication:
Tatanga checks the user's account details, including the number of accounts, supported currency, and balance/limit details. It then chooses the account from which it could steal the highest amount.
Next, it initiates a transfer.
At this point Tatanga uses a Web Inject to trick the user into believing that the bank is performing a chipTAN test. The fake instructions request that the user generate a TAN for the purpose of this "test" and enter the TAN.
Note that the attack relies on tricking the user, which isn't very hard.
This statistical research says once per decade:
Abstract: Quantities with right-skewed distributions are ubiquitous in complex social systems, including political conflict, economics and social networks, and these systems sometimes produce extremely large events. For instance, the 9/11 terrorist events produced nearly 3000 fatalities, nearly six times more than the next largest event. But, was this enormous loss of life statistically unlikely given modern terrorism's historical record? Accurately estimating the probability of such an event is complicated by the large fluctuations in the empirical distribution's upper tail. We present a generic statistical algorithm for making such estimates, which combines semi-parametric models of tail behavior and a non-parametric bootstrap. Applied to a global database of terrorist events, we estimate the worldwide historical probability of observing at least one 9/11-sized or larger event since 1968 to be 11-35%. These results are robust to conditioning on global variations in economic development, domestic versus international events, the type of weapon used and a truncated history that stops at 1998. We then use this procedure to make a data-driven statistical forecast of at least one similar event over the next decade.
Article about the research.
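The core of the method can be caricatured in a few lines: resample the empirical event-size distribution, and convert a per-event tail probability into the chance of seeing at least one catastrophic event over a long record. This toy version uses synthetic Pareto-distributed data and a made-up threshold (nothing from the paper's actual dataset or its semi-parametric tail models), just to show the plain nonparametric bootstrap at work:

```python
import random

random.seed(0)

# Hypothetical heavy-tailed event sizes (e.g. fatalities per attack);
# a Pareto draw stands in for the real empirical distribution.
events = [int(random.paretovariate(2.0) * 10) for _ in range(5000)]
threshold = 500  # made-up "catastrophic" size

def prob_at_least_one(sample, threshold):
    """P(at least one event >= threshold somewhere in a record of
    len(sample) events), from the per-event tail frequency."""
    p_single = sum(s >= threshold for s in sample) / len(sample)
    return 1 - (1 - p_single) ** len(sample)

# Nonparametric bootstrap: resample with replacement, re-estimate.
boot = []
for _ in range(200):
    resample = random.choices(events, k=len(events))
    boot.append(prob_at_least_one(resample, threshold))

boot.sort()
print("approx. 90% interval:", round(boot[10], 2), "to", round(boot[189], 2))
```

The paper's 11-35% range for a 9/11-sized event is exactly this kind of interval, computed with more careful tail modeling on the real terrorism record.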
Nice essay on the futility of trying to prevent another 9/11:
"Never again." It is as simplistic as it is absurd. It is as vague as it is damaging. No two words have provided so little meaning or context; no catchphrase has so warped policy discussions that it has permanently confused the public's understanding of homeland security. It convinced us that invulnerability was a possibility.
The notion that policies should focus almost exclusively on preventing the next attack has also masked an ideological battle within homeland-security policy circles between "never again" and its antithesis, commonly referred to as "shit happens" but in polite company known as "resiliency." The debate isn't often discussed this way, and not simply because of the bad language. Time has not only eased the pain of that day, but there have also been no significant attacks. "Never again" has so infiltrated public discourse that to even acknowledge a trend away from prevention is considered risky, un-American. Americans don't do "Keep Calm and Carry On." But if they really want security, the kind of security that is sustainable and realistic, then they are going to have to.
There's a lot of good material in this essay.
The "Australia's Security Nightmares: The National Security Short Story Competition" is part of Safeguarding Australia 2012.
To aid the national security community in imagining contemporary threats, the Australian Security Research Centre (ASRC) is organising Australia's Security Nightmares: The National Security Short Story Competition. The competition aims to produce a set of short stories that will contribute to a better conception of possible future threats and help defence, intelligence services, emergency managers, health agencies and other public, private and non-government organisations to be better prepared. The ASRC competition also aims to raise community awareness of national security challenges, and lead to better individual and community resilience.
New, unpublished writers are encouraged to enter the competition.
The first prize is $1000; the second prize is $500, and the third prize is $300.
Entrants need to write a short story with a security scenario as the story plot line or as the essential backdrop. An Australian context to the story is required, and the story needs to be set between today and 2020. While the story is to be fictional, it needs to be grounded in a plausible, coherent and detailed security situation. Rather than just describing an avalanche of frightening events, writers are encouraged to focus on the consequences and challenges posed by their scenarios, and tease out what the official and public responses would be. Such stories provide more useful insights for those planning to face security threats.
(And while we're on the topic, here's a video of the 100 greatest movie threats. Not movie-plot threats -- threats from actual movies.)
Well, new to us:
You see, an EMV payment card authenticates itself with a MAC of transaction data, for which the freshly generated component is the unpredictable number (UN). If you can predict it, you can record everything you need from momentary access to a chip card to play it back and impersonate the card at a future date and location. You can as good as clone the chip. It's called a "pre-play" attack. Just like most vulnerabilities we find these days, some in industry already knew about it but covered it up; we have indications the crooks know about this too, and we believe it explains a good portion of the unsolved phantom withdrawal cases reported to us for which we had until recently no explanation.
There's a lot:
Advance tickets are required to enter this public, outdoor memorial. To book them, you’re obliged to provide your home address, email address, and phone number, and the full names of everyone in your party. It is “strongly recommended” that you print your tickets at home, which is where you must leave explosives, large bags, hand soap, glass bottles, rope, and bubbles. Also banned: "personal wheeled vehicles," including but not limited to bicycles, skateboards, and scooters, and anything else deemed inappropriate. Anyone age 13 or older must carry photo ID, to be displayed "when required and/or requested."
Once at the memorial you must go through a metal detector and your belongings must be X-rayed. Officers will inspect your ticket -- that invulnerable document you nearly left on your printer -- at least five times. One will draw a blue line on it; 40 yards (and around a dozen security cameras) later, another officer will shout at you if your ticket and its blue line are not visible.
I'm one of the people commenting on whether this all makes sense.
I especially appreciated the last paragraph:
The Sept. 11 memorial’s designers hoped the plaza would be "a living part" of the city -- integrated into its fabric and usable "on a daily basis." I thought that sounded nice, so I asked Schneier one last question. Let’s say we dismantled all the security and let the Sept. 11 memorial be a memorial like any other: a place where citizens and travelers could visit spontaneously, on their own contemplative terms, day or night, subject only to capacity limits until the site is complete. What single measure would most guarantee their safety? I was thinking about cameras and a high-tech control center, "flower pot"-style vehicle barriers, maybe even snipers poised on nearby roofs. Schneier’s answer? Seat belts. On the drive to New York, or in your taxi downtown, buckle up, he warned. It’s dangerous out there.
So, what did he get wrong? First of all, the Stuxnet worm did not escape into the wild. The analysis of initial infections and propagations by Symantec shows, in fact, that it never was widespread, that it affected computers in closely connected clusters, all of which involved collaborators or companies that had dealings with each other. Secondly, it couldn't have escaped over the Internet, as Sanger's account maintains, because it never had that capability built into it: it can only propagate over [a] local-area network, over removable media such as CDs, DVDs, or USB thumb drives. So it was never capable of spreading widely, and in fact the sequence of infections is always connected by a close chain.

Another thing that Sanger got wrong ... was the notion that the worm escaped when an engineer connected his computer to the PLCs that were controlling the centrifuges and his computer became infected, which then later spread over the Internet. This is also patently impossible because the software that was resident on the PLCs is the payload that directly deals with the centrifuge motors; it does not have the capability of infecting a computer because it doesn't have any copy of the rest of the Stuxnet system, so that part of the story is simply impossible.

In addition, the explanation offered in his book and in his article is that Stuxnet escaped because of an error in the code, with the Americans claiming it was the Israelis' fault that suddenly allowed it to get onto the Internet because it no longer recognized its environment. Anybody who works in the field knows that this doesn't quite make sense, but in fact the last version, the last revision to Stuxnet, according to Symantec, had been in March, and it wasn't discovered until June 17. And in fact the mode of discovery had nothing to do with its being widespread in the wild, because in fact it was discovered inside computers in Iran that were being supported by a Belarus antivirus company called VirusBlokAda.
EDITED TO ADD (9/14): Comment from Larry Constantine.
I'm trying to separate cloud security hype from reality. To that end, I'd like to talk to a few big corporate CSOs or CISOs about their cloud security worries, requirements, etc. If you're willing to talk, please contact me via e-mail. Eventually I will share the results of this inquiry. Thank you.
In this story, we learn that hackers got their hands on a database of 12 million Apple Unique Device Identifiers (UDIDs) by hacking an FBI laptop.
My question isn't about the hack, but about the data. Why does an FBI agent have user identification information about 12 million iPhone users on his laptop? And how did the FBI get their hands on this data in the first place?
For its part, the FBI denies everything:
In a statement released Tuesday afternoon, the FBI said, "The FBI is aware of published reports alleging that an FBI laptop was compromised and private data regarding Apple UDIDs was exposed. At this time there is no evidence indicating that an FBI laptop was compromised or that the FBI either sought or obtained this data."
Apple also denies giving the database to the FBI.
Okay, so where did the database come from? And are there really 12 million, or only one million?
If you've been hacked, you're not going to be informed:
DeHart said his firm would not be contacting individual consumers to notify them that their information had been compromised, instead leaving it up to individual publishers to contact readers as they see fit.
A team of security researchers from Oxford, UC Berkeley, and the University of Geneva say that they were able to deduce digits of PINs, birth months, areas of residence and other personal information by presenting 30 headset-wearing subjects with images of ATMs, debit cards, maps, people, and random numbers in a series of experiments. The paper, titled "On the Feasibility of Side-Channel Attacks with Brain Computer Interfaces," represents the first major attempt to uncover potential security risks in the use of the headsets.
This is a new development in spyware.
Yet another biometric: eye twitch patterns:
...a person's saccades, their tiny, but rapid, involuntary eye movements, can be measured using a video camera. The pattern of saccades is as unique as an iris or fingerprint scan but easier to record and so could provide an alternative secure biometric identification technology.
Probably harder to fool than iris scanners.
Photo of Bruce Schneier by Per Ervland.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.