Blog: October 2012 Archives
There's a lot of hype and hyperbole in this story, but here's the interesting bit:
According to Ronald Kessler, the author of the 2009 book In the President’s Secret Service, Navy stewards gather bedsheets, drinking glasses, and other objects the president has touched (they are later sanitized or destroyed) in an effort to keep would-be malefactors from obtaining his genetic material. (The Secret Service would neither confirm nor deny this practice, nor would it comment on any other aspect of this article.) And according to a 2010 release of secret cables by WikiLeaks, Secretary of State Hillary Clinton directed our embassies to surreptitiously collect DNA samples from foreign heads of state and senior United Nations officials. Clearly, the U.S. sees strategic advantage in knowing the specific biology of world leaders; it would be surprising if other nations didn’t feel the same.
The rest of the article is about individually targeted bioweapons.
I have a hard time getting worked up about this story:
I have X'd out any information that you could use to change my reservation. But it's all there: PNR, seat assignment, flight number, name, etc. But what is interesting is the bolded three on the end. This is the TSA Pre-Check information. The number indicates the number of beeps: 1 beep, no Pre-Check; 3 beeps, yes Pre-Check. On this trip, as you can see, I am eligible for Pre-Check. Also, this information is not encrypted in any way.
What terrorists or really anyone can do is use a website to decode the barcode and get the flight information, put it into a text file, change the 1 to a 3, then use another website to re-encode it into a barcode. Finally, using a commercial photo-editing program or any program that can edit graphics, they can replace the barcode in their boarding pass with the new one they created. Even more scary is that people can do this to change names. So if they have a fake ID, they can use this method to make a valid boarding pass that matches their fake ID. The really scary part is that this will get past the TSA document checker, because the scanners the TSA uses are just barcode decoders; they don't check against the real-time information. So the TSA document checker will not pick up on the alterations. This means that, as long as they sub in a 3, they can always use the Pre-Check line.
What a dumb way to design the system. It would be easier -- and far more secure -- if the boarding pass checker just randomly chose 10%, or whatever percentage they want, of PreCheck passengers to send through regular screening. Why go through the trouble of encoding it in the barcode and then reading it?
And -- of course -- this means that you can still print your own boarding pass.
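The root of the vulnerability is that the barcode's contents are unauthenticated: anyone can edit the fields and re-encode them. Even without tying the checkpoint scanners to reservation databases, the described edit would be detectable if the payload carried a cryptographic signature. Here's a minimal sketch of that idea in Python; the field layout, delimiter, and key are hypothetical illustrations, not the real boarding-pass format:

```python
import hmac, hashlib

SECRET_KEY = b"airline-signing-key"  # hypothetical; a real key would live in airline/TSA hardware

def sign_pass(payload: str) -> str:
    """Append a truncated HMAC tag so any edit to the fields invalidates the pass."""
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return payload + "|" + tag

def verify_pass(signed: str) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload, _, tag = signed.rpartition("|")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expected)

original = sign_pass("DOE/JOHN|UA123|SEA-ORD|PRE:1")
tampered = original.replace("PRE:1", "PRE:3")  # the edit described above
print(verify_pass(original), verify_pass(tampered))  # True False
```

Of course, a signature only proves the barcode wasn't altered; it says nothing about whether the named passenger should get Pre-Check, which is why random secondary screening is the simpler design.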
On the other hand, I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security. So I don't feel any less safe because of this vulnerability.
Still, I am surprised. Is this the same in other countries? Lots of countries scan my boarding pass before allowing me through security: France, the Netherlands, the UK, Japan, even Uruguay at Montevideo Airport when I flew out of there yesterday. I always assumed that those systems were connected to the airlines' reservation databases. Does anyone know?
I'm not sure what to think about this story:
Six Italian scientists and an ex-government official have been sentenced to six years in prison over the 2009 deadly earthquake in L'Aquila.
A regional court found them guilty of multiple manslaughter.
Prosecutors said the defendants gave a falsely reassuring statement before the quake, while the defence maintained there was no way to predict major quakes.
The 6.3 magnitude quake devastated the city and killed 309 people.
These were all members of the National Commission for the Forecast and Prevention of Major Risks, and some of Italy's most prominent and internationally respected seismologists and geological experts. Basically, the problem was that they failed to hedge their bets against the earthquake. In a press conference just before the earthquake, they incorrectly assured locals that there was no danger. This, according to the court, was equivalent to manslaughter.
No, it doesn't make any sense.
David Rothery, of the UK's Open University, said earthquakes were "inherently unpredictable".
"The best estimate at the time was that the low-level seismicity was not likely to herald a bigger quake, but there are no certainties in this game," he said.
Even the defendants were confused:
Another, Enzo Boschi, described himself as "dejected" and "desperate" after the verdict was read.
"I thought I would have been acquitted. I still don't understand what I was convicted of."
I do. He was convicted because the public wanted revenge -- and the scientists were their most obvious targets.
Needless to say, this is having a chilling effect on scientists talking to the public. Enzo Boschi, president of Italy's National Institute of Geophysics and Volcanology (INGV) in Rome, said: "When people, when journalists, asked my opinion about things, I used to tell them, but no more. Scientists have to shut up." Also, as part of their conviction, those scientists are prohibited from ever holding public office again.
From a security perspective, this seems like the worst possible outcome. The last thing we want of our experts is for them to refuse to give us the benefits of their expertise.
To be fair, the verdict isn't final. There are always appeals in Italy, and at least one level of appeal is certain in this case. Everything might be overturned, but I'm sure the chilling effect will remain, regardless.
As someone who constantly makes predictions about security that could potentially affect the livelihood and lives of those who listen to them, this really made me stop and think. Could I be arrested, or sued, for telling people that this particular security product is effective when in fact it is not? I am forever minimizing the risks of terrorism in general and airplane terrorism in particular. Sooner or later, there will be another terrorist event. Will that make me guilty of manslaughter as well? Italy is a long way away, but everything I write on the Internet reaches there.
EDITED TO ADD (11/13): Here is an article in "New Scientist" that gives the prosecutor's side of things. According to the prosecutor, this case was not about prediction. It was about communication. It wasn't about the odds of the quake, it was about how those odds were communicated to the public.
Peter Swire and Yianni Lagos have pre-published a law journal article on the risks of data portability. It specifically addresses an EU data protection regulation, but the security discussion is more general.
...Article 18 poses serious risks to a long-established E.U. fundamental right of data protection, the right to security of a person's data. Previous access requests by individuals were limited in scope and format. By contrast, when an individual's lifetime of data must be exported 'without hindrance,' then one moment of identity fraud can turn into a lifetime breach of personal data.
They have a point. If you're going to allow users to download all of their data with one command, you might want to double- and triple-check that command. Otherwise it's going to become an attack vector for identity theft and other malfeasance.
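One way to blunt that attack vector is to make a bulk export a slow, noisy operation: require fresh re-authentication, notify the account owner, and impose a cooling-off delay during which the request can be cancelled. A minimal sketch of such a policy gate (the class, messages, and timing are hypothetical, not any provider's actual mechanism):

```python
COOLING_OFF = 24 * 3600  # seconds; gives the real owner time to notice and cancel

class ExportGate:
    """Gate a bulk 'export everything' request behind re-auth and a delay."""

    def __init__(self):
        self.pending = {}  # user -> timestamp of the approved request

    def request_export(self, user: str, reauthenticated: bool, now: float) -> str:
        if not reauthenticated:
            return "denied: fresh password/2FA check required"
        self.pending[user] = now
        # in a real system: notify all of the account's contact addresses here,
        # so a hijacked session can't quietly drain a lifetime of data
        return "pending: export available after cooling-off period"

    def collect_export(self, user: str, now: float) -> str:
        started = self.pending.get(user)
        if started is None:
            return "denied: no pending request"
        if now - started < COOLING_OFF:
            return "denied: still in cooling-off period"
        return "ok: begin export"
```

The delay trades convenience for containment: one stolen password no longer converts instantly into a lifetime breach.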
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
A lot of the debate around President Obama's cybersecurity initiative centers on how much of a burden it would be on industry, and how that should be financed. As important as that debate is, it obscures some of the larger issues surrounding cyberwar, cyberterrorism, and cybersecurity in general.
It's difficult to have any serious policy discussion amid the fear mongering. Secretary Panetta's recent comments are just the latest; search the Internet for "cyber 9/11," "cyber Pearl Harbor," "cyber Katrina," or -- my favorite -- "cyber Armageddon."
There's an enormous amount of money and power that results from pushing cyberwar and cyberterrorism: power within the military, the Department of Homeland Security, and the Justice Department; and lucrative government contracts supporting those organizations. As long as cyber remains a prefix that scares, it'll continue to be used as a bugaboo.
But while scare stories are more movie-plot than actual threat, there are real risks. The government is continually poked and probed in cyberspace, from attackers ranging from kids playing politics to sophisticated national intelligence gathering operations. Hackers can do damage, although nothing like the cyberterrorism rhetoric would lead you to believe. Cybercrime continues to rise, and still poses real risks to those of us who work, shop, and play on the Internet. And cyberdefense needs to be part of our military strategy.
Industry has definitely not done enough to protect our nation's critical infrastructure, and the federal government may need to get more involved. This should come as no surprise; the economic externalities in cybersecurity are so great that even the freest free market would fail.
For example, the owner of a chemical plant will protect that plant from cyber attack up to the value of that plant to the owner; the residual risk to the community around the plant will remain. Politics will color how government involvement looks: market incentives, regulation, or outright government takeover of some aspects of cybersecurity.
None of this requires heavy-handed regulation. Over the past few years we've heard calls for the military to better control Internet protocols; for the United States to be able to "kill" all or part of the Internet, or to cut itself off from the greater Internet; for increased government surveillance; and for limits on anonymity. All of those would be dangerous, and would make us less secure. The world's first military cyberweapon, Stuxnet, was used by the United States and Israel against Iran.
In all of this government posturing about cybersecurity, the biggest risk is a cyber-war arms race; and that's where remarks like Panetta's lead us. Increased government spending on cyberweapons and cyberdefense, and an increased militarization of cyberspace, is both expensive and destabilizing. Fears lead to weapons buildups, and weapons beg to be used.
I would like to see less fear mongering, and more reasoned discussion about the actual threats and reasonable countermeasures. Pushing the fear button benefits no one.
"Quantitative Analysis of the Full Bitcoin Transaction Graph," by Dorit Ron and Adi Shamir:
Abstract. The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of account owners, how they acquire and how they spend their Bitcoins, the balance of Bitcoins they keep in their accounts, and how they move Bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange-looking long chains and fork-merge structures in the transaction graph.
The paper has been submitted to the 2013 Financial Cryptography conference.
New report from the Presidential Commission for the Study of Bioethical Issues.
It's called "Privacy and Progress in Whole Genome Sequencing." The Commission described the rapid advances underway in the field of genome sequencing, but also noted growing concerns about privacy and security. The report lists twelve recommendations to improve current practices and to help safeguard privacy and security, including using deidentification wherever possible.
Interesting paper: "Before We Knew It: An Empirical Study of Zero-Day Attacks In The Real World," by Leyla Bilge and Tudor Dumitras:
Abstract: Little is known about the duration and prevalence of zero-day attacks, which exploit vulnerabilities that have not been disclosed publicly. Knowledge of new vulnerabilities gives cyber criminals a free pass to attack any target of their choosing, while remaining undetected. Unfortunately, these serious threats are difficult to analyze, because, in general, data is not available until after an attack is discovered. Moreover, zero-day attacks are rare events that are unlikely to be observed in honeypots or in lab experiments.
In this paper, we describe a method for automatically identifying zero-day attacks from field-gathered data that records when benign and malicious binaries are downloaded on 11 million real hosts around the world. Searching this data set for malicious files that exploit known vulnerabilities indicates which files appeared on the Internet before the corresponding vulnerabilities were disclosed. We identify 18 vulnerabilities exploited before disclosure, of which 11 were not previously known to have been employed in zero-day attacks. We also find that a typical zero-day attack lasts 312 days on average and that, after vulnerabilities are disclosed publicly, the volume of attacks exploiting them increases by up to 5 orders of magnitude.
This is important:
Previously, Apple had all but disabled tracking of iPhone users by advertisers when it stopped app developers from utilizing Apple mobile device data via UDID, the unique, permanent, non-deletable serial number that previously identified every Apple device.
For the last few months, iPhone users have enjoyed an unusual environment in which advertisers have been largely unable to track and target them in any meaningful way.
In iOS 6, however, tracking is most definitely back on, and it's more effective than ever, multiple mobile advertising executives familiar with IFA tell us. (Note that Apple doesn't mention IFA in its iOS 6 launch page).
EDITED TO ADD (10/15): Apple has provided a way to opt out of the targeted ads and also to disable the location information being sent.
Earlier this month, a retired New York City locksmith was selling a set of "master keys" on eBay:
Three of the five are standard issue for members of the FDNY, and the set had a metal dog tag that was embossed with an FDNY lieutenant's shield number, 6896.
The keys include the all-purpose "1620," a master firefighter key that with one turn could trap thousands of people in a skyscraper by sending all the elevators to the lobby and out of service, according to two FDNY sources. And it works for buildings across the city.
That key also allows one to open locked subway entrances, gain entry to many firehouses and get into boxes at construction jobs that house additional keys to all areas of the site.
The ring sold to The Post has two keys used by official city electricians that would allow access to street lamps, along with the basement circuit-breaker boxes of just about any large building.
Of course there's the terrorist tie-in:
"With all the anti-terrorism activities, with all the protection that the NYPD is trying to provide, it's astounding that you could get hold of this type of thing," he said.
He walked The Post through a couple of nightmare scenarios that would be possible with the help of such keys.
"Think about the people at Occupy Wall Street who hate the NYPD, hate the establishment. They would love to have a set. Wouldn't it be nice to walk in and disable Chase's elevators?" he said.
Or, he said, "I could open the master box at construction sites, which hold the keys and the building plans. Once you get inside, you can steal, vandalize or conduct terrorist activities."
The Huffington Post piled on:
"We cannot let anyone sell the safety of over 8 million people so easily," New York City Public Advocate Bill de Blasio said in a statement. "Having these keys on the open market literally puts lives at risk. The billions we've spent on counter-terrorism have been severely undercut by this breech [sic]."
Sounds terrible. But -- good news -- the locksmith has stopped selling them. (On the other hand, the press has helpfully published a photograph of the keys, so you can make your own, even if you didn't win the eBay auction.)
I found only one story that failed to hype the threat.
The current bit of sensationalism aside, this is fundamentally a hard problem. Master keys are only useful if they're widely applicable -- and if they're widely applicable, they need to be distributed widely. This means that 1) they can't be kept secret, and 2) they're very expensive to update. I could easily imagine an electronic lock solution that would be much more adaptable, but electronic locks come with their own vulnerabilities, since the electronics are something else that can fail. I don't know if a more complex system would be better in the end.
I was reviewed in Science:
Thus it helps to have a lucid and informative account such as Bruce Schneier's Liars and Outliers. The book provides an interesting and entertaining summary of the state of play of research on human social behavior, with a special emphasis on trust and trustworthiness.
Free from preoccupations and personal attachments to any of the scientific disciplines working on the topic, he has compiled a well-structured overview of what research can tell us about how trust and trustworthiness accumulate (although some academic readers may find their publications presented in an unexpected context). This he enlivens by adding real-life experiences on how to build trust and keep trustworthiness alive.
I am amused by the parenthetical comment.
Apple's map application shows more of Taiwan than Google Maps:
The Taiwanese government/military, like many others around the world, requests that satellite imagery providers, such as Google Maps, blur out certain sensitive military installations. Unfortunately, Apple apparently didn't get that memo.
According to reports, the Taiwanese defence ministry hasn't filed a formal request with Apple yet but thought it would be a great idea to splash this across the media and bring everyone's attention to the story. Obviously it would be terribly embarrassing if some unscrupulous person read the story and then found various uncensored military installations around Taiwan and posted photos of them.
Photos at the link.
Not computer networks, networks in general:
Findings so far suggest that networks of networks pose risks of catastrophic danger that can exceed the risks in isolated systems. A seemingly benign disruption can generate rippling negative effects. Those effects can cost millions of dollars, or even billions, when stock markets crash, half of India loses power or an Icelandic volcano spews ash into the sky, shutting down air travel and overwhelming hotels and rental car companies. In other cases, failure within a network of networks can mean the difference between a minor disease outbreak or a pandemic, a foiled terrorist attack or one that kills thousands of people.
Understanding these life-and-death scenarios means abandoning some well-established ideas developed from single-network studies. Scientists now know that networks of networks don’t always behave the way single networks do. In the wake of this insight, a revolution is under way. Researchers from various fields are rushing to figure out how networks link up and to identify the consequences of those connections.
Efforts by Havlin and colleagues have yielded other tips for designing better systems. Selectively choosing which nodes in one network to keep independent from the second network can prevent “poof” moments. Looking back to the blackout in Italy, the researchers found that they could defend the system by decoupling just four communications servers. “Here, we have some hope to make a system more robust,” Havlin says.
This promise is what piques the interest of governments and other agencies with money to fund deeper explorations of network-of-networks problems. It’s probably what attracted the attention of the Defense Threat Reduction Agency in the first place. Others outside the United States are also onboard. The European Union is spending millions of euros on Multiplex, putting together an all-star network science team to create a solid theoretical foundation for interacting networks. And an Italian-funded project, called Crisis Lab, will receive 9 million euros over three years to evaluate risk in real-world crises, with a focus on interdependencies among power grids, telecommunications systems and other critical infrastructures.
Eventually, Dueñas-Osorio envisions that a set of guidelines will emerge not just for how to simulate and study networks of networks, but also for how to best link networks up to begin with. The United States, along with other countries, have rules for designing independent systems, he notes. There are minimum requirements for constructing buildings and bridges. But no one says how networks of networks should come together.
It's a pretty good primer of current research into the risks involved in networked systems, both natural and artificial.
This is a fascinating story of a CIA burglar, who worked for the CIA until he tried to work against the CIA. The fact that he stole code books and keys from foreign embassies makes it extra interesting, and the complete disregard for the Constitution at the end makes it extra scary.
In the never-ending arms race between systems to prove that you're a human and computers that can fake it, here's a captcha that tests whether you have human feelings.
Instead of your run-of-the-mill alphanumeric gibberish, or random selection of words, the Civil Rights Captcha presents you with a short blurb about a Civil Rights violation and asks you how you feel about it. Ostensibly robots (and trolls) won't make it through because they'll remark that a human rights activist's murder makes them feel "aroused" instead of "upset." And bots will still have to make it past standard Captcha hurdles before they can even pick one of the choices.
The easy way to attack this system is to create a library with all the correct answers.
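Because the pool of blurbs is necessarily finite, an attacker only needs a human to label each prompt once; after that the bot answers from the cache for free. A toy sketch of the attack (the prompts, choices, and labeling step are all made up for illustration):

```python
answer_library = {}  # prompt -> emotion a human once labeled as "correct"

def solve(prompt: str, choices: list, ask_human=None) -> str:
    """Answer from the cached library; fall back to a one-time human labeling pass."""
    if prompt not in answer_library:
        # each blurb in the finite pool only ever needs to be labeled once
        answer_library[prompt] = ask_human(prompt, choices) if ask_human else choices[0]
    return answer_library[prompt]

label = lambda prompt, choices: "upset"  # stand-in for the one-time human step
print(solve("An activist was jailed.", ["aroused", "upset", "happy"], label))  # upset
print(len(answer_library))  # 1; every later encounter with this blurb is free
```

The cost of breaking the scheme is thus bounded by the size of the blurb pool, not by any machine's inability to feel.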
How soon before Deckard has to come to our house to administer a test?
Neat book illustration.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
On a NIST-sponsored hash function mailing list, Jesse Walker (from Intel; also a member of the Skein team) did some back-of-the-envelope calculations to estimate how long it will be before we see a practical collision attack against SHA-1. I'm reprinting his analysis here, so it reaches a broader audience.
According to E-BASH, the cost of one block of a SHA-1 operation on already deployed commodity microprocessors is about 2^14 cycles. If Stevens' attack of 2^60 SHA-1 operations serves as the baseline, then finding a collision costs about 2^14 * 2^60 ~ 2^74 cycles.
A core today provides about 2^31 cycles/sec; the state of the art is 8 = 2^3 cores per processor for a total of 2^3 * 2^31 = 2^34 cycles/sec. A server typically has 4 processors, increasing the total to 2^2 * 2^34 = 2^36 cycles/sec. Since there are about 2^25 sec/year, this means one server delivers about 2^25 * 2^36 = 2^61 cycles per year, which we can call a "server year."
There is ample evidence that Moore's law will continue through the mid 2020s. Hence the number of doublings in processor power we can expect between now and 2021 is:
3/1.5 = 2 times by 2015 (3 = 2015 - 2012)
6/1.5 = 4 times by 2018 (6 = 2018 - 2012)
9/1.5 = 6 times by 2021 (9 = 2021 - 2012)
So a commodity server year should be about:
2^61 cycles/year in 2012
2^2 * 2^61 = 2^63 cycles/year by 2015
2^4 * 2^61 = 2^65 cycles/year by 2018
2^6 * 2^61 = 2^67 cycles/year by 2021
Therefore, on commodity hardware, Stevens' attack should cost approximately:
2^74 / 2^61 = 2^13 server years in 2012
2^74 / 2^63 = 2^11 server years by 2015
2^74 / 2^65 = 2^9 server years by 2018
2^74 / 2^67 = 2^7 server years by 2021
Today Amazon rents compute time on commodity servers for about $0.04/hour ~ $350/year. Assume compute rental fees remain fixed while server capacity keeps pace with Moore's law. Then, since log2(350) ~ 8.4, the cost of the attack will be approximately:
2^13 * 2^8.4 = 2^21.4 ~ $2.77M in 2012
2^11 * 2^8.4 = 2^19.4 ~ $700K by 2015
2^9 * 2^8.4 = 2^17.4 ~ $173K by 2018
2^7 * 2^8.4 = 2^15.4 ~ $43K by 2021
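The whole back-of-the-envelope estimate can be reproduced in a few lines of Python, using only the figures above (2^74 attack cycles, a 2^61-cycle server year in 2012, one Moore's-law doubling every 18 months, and ~$350 = 2^8.4 per rented server year):

```python
import math

ATTACK_CYCLES = 2**74              # 2^14 cycles/block * 2^60 SHA-1 operations (Stevens)
SERVER_YEAR_2012 = 2**61           # cycles one commodity server delivers per year in 2012
DOLLARS_PER_SERVER_YEAR = 2**8.4   # ~ $350/year of rented compute, assumed constant

for year in (2012, 2015, 2018, 2021):
    doublings = (year - 2012) / 1.5  # Moore's law: one doubling every 18 months
    server_years = ATTACK_CYCLES / (SERVER_YEAR_2012 * 2**doublings)
    cost = server_years * DOLLARS_PER_SERVER_YEAR
    print(f"{year}: 2^{math.log2(server_years):.0f} server years, ~${cost:,.0f}")
```

Running it gives 2^13 server years (~$2.77M) in 2012 down to 2^7 server years (~$43K) by 2021, matching the figures above.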
A collision attack is therefore well within the range of what an organized crime syndicate can practically budget by 2018, and a university research project by 2021.
Since this argument only takes into account commodity hardware and not instruction-set improvements (e.g., ARMv8 specifies SHA-1 instructions), other commodity computing devices with even greater processing power (e.g., GPUs), and custom hardware, the need to transition away from SHA-1 where collision resistance is required is probably more urgent than this back-of-the-envelope analysis suggests.
Any increase in the number of cores per CPU, or the number of CPUs per server, also affects these calculations. Also, any improvements in cryptanalysis will further reduce the complexity of this attack.
The point is that we in the community need to start the migration away from SHA-1 and to SHA-2/SHA-3 now.
It's a fine choice. I'm glad that SHA-3 is nothing like the SHA-2 family; something completely different is good.
Congratulations to the Keccak team. Congratulations -- and thank you -- to NIST for running a very professional, interesting, and enjoyable competition. The process has increased our understanding about the cryptanalysis of hash functions by a lot.
I know I just said that NIST should choose "no award," mostly because too many options make for a bad standard. I never thought they would listen to me, and -- indeed -- only made that suggestion after I knew it was too late to stop the choice. Keccak is a fine hash function; I have absolutely no reservations about its security. (Or the security of any of the four SHA-2 functions, for that matter.) I have to think more before I make specific recommendations for specific applications.
Again: great job, NIST. Let's do a really fast stream cipher next.
Among other findings in this CBO report:
Funding for homeland security has dropped somewhat from its 2009 peak of $76 billion, in inflation-adjusted terms; funding for 2012 totaled $68 billion. Nevertheless, the nation is now spending substantially more than what it spent on homeland security in 2001.
Note that this is just direct spending on homeland security. This does not include DoD spending -- which would include the costs of the wars in Iraq and Afghanistan -- and Department of Justice spending. John Mueller estimates that we have spent $1.1 trillion over the ten years between 2002 and 2011.
This story sounds pretty scary:
Developed by Robert Templeman at the Naval Surface Warfare Center in Indiana and a few buddies from Indiana University, PlaceRaider hijacks your phone's camera and takes a series of secret photographs, recording the time and the phone's orientation and location with each shot. Using that information, it can reliably build a 3D model of your home or office, and let cyber-intruders comb it for personal information like passwords on sticky notes, bank statements lying out on the coffee table, or anything else you might have lying around that could wind up the target of a raid at a later date.
It's just a demo, of course, but it's easy to imagine what this could mean in the hands of criminals.
Yes, I get that this is bad. But it seems to be a mashup of two things. One, the increasing technical capability to stitch together a series of photographs into a three-dimensional model. And two, an Android bug that allows someone to remotely and surreptitiously take pictures and then upload them. The first thing isn't a problem, and it isn't going away. The second is bad, irrespective of what else is going on.
EDITED TO ADD (10/1): I mistakenly wrote this up as an iPhone story. It's about the Android phone. Apologies.