Friday Squid Blogging: Squid Art
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
The National Academies Press has published Crisis Standards of Care: A Systems Framework for Catastrophic Disaster Response.
When a nation or region prepares for public health emergencies such as a pandemic influenza, a large-scale earthquake, or any major disaster scenario in which the health system may be destroyed or stressed to its limits, it is important to describe how standards of care would change due to shortages of critical resources. At the 17th World Congress on Disaster and Emergency Medicine, the IOM Forum on Medical and Public Health Preparedness sponsored a session that focused on the promise of and challenges to integrating crisis standards of care principles into international disaster response plans.
Okay, so he doesn’t use that term. But he explains how a magician’s inherent ability to detect deception can be useful to science.
We can’t make magicians out of scientists—we wouldn’t want to—but we can help scientists “think in the groove”—think like a magician. And we should.
We are not scientists—with a few rare but important exceptions, like Ray Hyman and Richard Wiseman. But our highly specific expertise comes from knowledge of the ways in which our audiences can be led to quite false conclusions by calculated means psychological, physical and especially sensory, visual being rather paramount since it has such a range of variety.
The fact that ours is a concealed art as well as one designed to confound persons of average and advanced thinking skills—our typical audience—makes it rather immune to ordinary analysis or solutions.
I’ve observed that scientists tend to think and perceive logically by using their training and observational skills—of course—and are thus often psychologically insulated from the possibility that there might be chicanery at work. This is where magicians can come in. No matter how well educated, or how basically intelligent, trained, or observant a scientist may be, s/he may be a poor judge of a methodology employed in deliberate deception.
Here’s my essay on the security mindset.
This is the most intelligent thing I’ve read about the JetBlue incident where a pilot had a mental breakdown in the cockpit:
For decades, public safety officials and those who fund them have focused on training and equipment that has a dual-use function for any hazard that may come our way. The post-9/11 focus on terrorism, with all the gizmos that were bought in its name, was a moment of frenzy, and sometimes inconsistent with sound public policy. Over time, there was a return to security measures that were adaptable (dual or multiple use) to any threat and more sustainable in a world that has its fair share of both predictable and utterly bizarre events.
The mental condition of airline pilots is a relevant factor in their annual or bi-annual physicals. (FAA rules differ on the number of physicals required, based on the type of plane being flown.) But believing that the system is flawed because it didn’t predict the breakdown of one of 450,000 certified pilots is a myopic reaction.
In many ways, though, this kind of incident was anticipated. The system envisions pilot incapacitation—physical, mental, or possibly, as in the campy movie “Snakes on a Plane,” a slithering foe.
That is, after all, why we have copilots.
The whole essay is worth reading.
Good article on the current battle for Internet governance:
The War for the Internet was inevitable—a time bomb built into its creation. The war grows out of tensions that came to a head as the Internet grew to serve populations far beyond those for which it was designed. Originally built to supplement the analog interactions among American soldiers and scientists who knew one another off-line, the Internet was established on a bedrock of trust: trust that people were who they said they were, and trust that information would be handled according to existing social and legal norms. That foundation of trust crumbled as the Internet expanded. The system is now approaching a state of crisis on four main fronts.
The first is sovereignty: by definition, a boundary-less system flouts geography and challenges the power of nation-states. The second is piracy and intellectual property: information wants to be free, as the hoary saying goes, but rights-holders want to be paid and protected. The third is privacy: online anonymity allows for creativity and political dissent, but it also gives cover to disruptive and criminal behavior—and much of what Internet users believe they do anonymously online can be tracked and tied to people’s real-world identities. The fourth is security: free access to an open Internet makes users vulnerable to various kinds of hacking, including corporate and government espionage, personal surveillance, the hijacking of Web traffic, and remote manipulation of computer-controlled military and industrial processes.
Symantec deliberately “lost” a bunch of smart phones with tracking software on them, just to see what would happen:
Some 43 percent of finders clicked on an app labeled “online banking.” And 53 percent clicked on a filed named “HR salaries.” A file named “saved passwords” was opened by 57 percent of finders. Social networking tools and personal e-mail were checked by 60 percent. And a folder labeled “private photos” tempted 72 percent.
Collectively, 89 percent of finders clicked on something they probably shouldn’t have.
Meanwhile, only 50 percent of finders offered to return the gadgets, even though the owner’s name was listed clearly within the contacts file.
[…]
Some might consider the 50 percent return rate a victory for humanity, but that wasn’t really the point of Symantec’s project. The firm wanted to see if—even among what seem to be honest people—the urge to peek into someone’s personal data was just too strong to resist. It was.
EDITED TO ADD (4/13): Original study.
Turns out the password can be easily bypassed:
XRY works by first jailbreaking the handset. According to Micro Systemation, no ‘backdoors’ created by Apple are used; instead, it makes use of security flaws in the operating system, the same way that regular jailbreakers do.
Once the iPhone has been jailbroken, the tool then goes on to ‘brute-force’ the passcode, trying every possible four digit combination until the correct password has been found. Given the limited number of possible combinations for a four-digit passcode—10,000, ranging from 0000 to 9999—this doesn’t take long.
Once the handset has been jailbroken and the passcode guessed, all the data on the handset, including call logs, messages, contacts, GPS data and even keystrokes, can be accessed and examined.
One of the morals is to use an eight-digit passcode.
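To see why passcode length matters, here is a minimal sketch of the arithmetic: each extra digit multiplies the search space by ten. The guesses-per-second figure is an assumed illustrative rate, not a measured speed for XRY or any real tool.

```python
# Illustrative only: brute-force search space for numeric passcodes.
# GUESSES_PER_SECOND is an assumption for the sake of the example.
GUESSES_PER_SECOND = 10

for digits in (4, 6, 8):
    combinations = 10 ** digits  # e.g. 0000-9999 for four digits
    worst_case_hours = combinations / GUESSES_PER_SECOND / 3600
    print(f"{digits} digits: {combinations:,} combinations, "
          f"worst case ~{worst_case_hours:,.1f} hours")
```

At the assumed rate, a four-digit code falls in minutes, while an eight-digit code would take on the order of years in the worst case, which is the point of the advice above.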
EDITED TO ADD (4/13): This has been debunked. The 1Password blog has a fairly lengthy post discussing the details of the XRY tool.
Paul Ceglia’s lawsuit against Facebook is fascinating, but that’s not the point of this blog post. As part of the case, there are allegations that documents and e-mails have been electronically forged. I found this story about the forensics done on Ceglia’s computer to be interesting.
This article talks about legitimate companies buying zero-day exploits, including the fact that “an undisclosed U.S. government contractor recently paid $250,000 for an iOS exploit.”
The price goes up if the hack is exclusive, works on the latest version of the software, and is unknown to the developer of that particular software. Also, more popular software results in a higher payout. Sometimes, the money is paid in instalments, which keep coming as long as the hack does not get patched by the original software developer.
Yes, I know that vendors will pay bounties for exploits. And I’m sure there are a lot of government agencies around the world who want zero-day exploits for both espionage and cyber-weapons. But I just don’t see that much value in buying an exploit from random hackers around the world.
These things only have value until they’re patched, and a known exploit—even if it is just known by the seller—is much more likely to get patched. I can much more easily see a criminal organization deciding that the exploit has significant value before that happens. Government agencies are playing a much longer game.
And I would expect that most governments have their own hackers who are finding their own exploits. One, cheaper. And two, only known within that government.
Here’s another story, with a price list for different exploits. But I still don’t trust this story.