Schneier on Security
A blog covering security and security technology.
April 2010 Archives
I didn't know this:
A Squid is a motorcycle rider who, experienced or not, rides outside his abilities and sets poor examples by attire, propriety, and general behavior on the motorcycle.
115 questions in the test.
This is funny:
The world has been placed on a heightened security alert following reports that New Age terrorists have harnessed the power of homeopathy for evil. "Homeopathic weapons represent a major threat to world peace," said President Barack Obama, "they might not cause any actual damage but the placebo effect could be quite devastating."
It's a little too close to reality, though.
Ally Bank wants its customers to invent their own personal secret questions and answers; the idea is that an operator will read the question over the phone and listen for an answer. Ignoring for the moment the problem of the operator now knowing the question/answer pair, what are some good pairs? Some suggestions:
Q: Do you know why I think you're so sexy?
Okay, now it's your turn.
The U.S. is developing a weapon capable of striking anywhere on the planet within an hour. The article talks about the possibility of modifying Trident missiles -- problematic because they would be indistinguishable from nuclear weapons -- and using the Mach 5–capable X-51 hypersonic cruise missile.
Interesting technology, but we really need to think through the political ramifications of this sort of thing better.
EDITED TO ADD (5/13): Report on the policy implications.
Nice essay by sociologist Frank Furedi on worst-case thinking, exemplified by our reaction to the Icelandic volcano:
I am not a natural scientist, and I claim no authority to say anything of value about the risks posed by volcanic ash clouds to flying aircraft. However, as a sociologist interested in the process of decision-making, it is evident to me that the reluctance to lift the ban on air traffic in Europe is motivated by worst-case thinking rather than rigorous risk assessment. Risk assessment is based on an attempt to calculate the probability of different outcomes. Worst-case thinking -- these days known as 'precautionary thinking' -- is based on an act of imagination. It imagines the worst-case scenario and then takes action on that basis. In the case of the Icelandic volcano, fears that particles in the ash cloud could cause aeroplane engines to shut down automatically mutated into a conclusion that this would happen. So it seems to me to be the fantasy of the worst-case scenario rather than risk assessment that underpins the current official ban on air traffic.
Hiding your valuables in common household containers is an old trick.
Diversion safes are ordinary containers designed to hide your valuables in plain sight. Common diversion safes include fake brand name containers for soda pop, canned fruit, home cleaners, or even novels. Diversion safes have removable tops or bottoms so that you can put your goods in them, and the safes are weighted so that they appear normal when handled.
These are relatively inexpensive, although it's cheaper to make your own.
From Lance Spitzner:
In January of this year the National Highway Traffic Safety Administration released a report called "Analyzing the First Years of the Ticket or Click It Mobilizations"... While the report is focused on the use of seat belts, it has fascinating applications to the world of security awareness. The report focuses on 2000 - 2006, when most states in the United States began campaigns (called Ticket or Click It) promoting and requiring the use of seat belts. Just like security awareness, the goal of the campaign was to change behaviors, specifically to get people to wear their seat belts when driving... The campaigns were very successful, resulting in a 20-23% increase in seat belt use regardless of which statistics they used. The key finding of the report was that enforcement and not money spent on media were key to results. The states that had the strongest enforcement had the most people using seat belts. The states with the weakest enforcement had the lowest seat belt usage.
This blog entry should serve as a model for open and transparent security self-reporting. I'm impressed.
They were afraid that they might contain pipe bombs.
This is the correct reaction:
In any case, I suspect someone somewhere just panicked at the possibility that something might explode near the President on his watch, since the whole operation has the finesse of a teenage stoner shoving his pot paraphernalia under the bed and desperately trying to clear the air with a copy of "Maxim" when he hears his parents coming home.
When asked by Gothamist, their precinct contact replied: "No, they just did this because the president was coming and they didn't want anything on the sidewalks. You're not supposed to lock your bike to signposts anyway, they have those new bike racks you're supposed to use."
I'll bet you anything that they didn't leave the bicycles that were locked to the racks.
Nasty scam, where the user is pressured into accepting a "pre-trial settlement" for copyright violations. The level of detail is impressive.
The editor of the Freakonomics blog asked me to write about this topic. The idea was that they would get several opinions, and publish them all. They spiked the story, but I already wrote my piece. So here it is.
In deciding what to do with Gray Powell, the Apple employee who accidentally left a secret prototype 4G iPhone in a California bar, Apple needs to figure out how much of the problem is due to an employee not following the rules, and how much of the problem is due to unclear, unrealistic, or just plain bad rules.
If Powell sneaked the phone out of the Apple building in a flagrant violation of the rules -- maybe he wanted to show it to a friend -- he should be disciplined, perhaps even fired. Some military installations have rules like that. If someone wants to take something classified out of a top secret military compound, he might have to secrete it on his person and deliberately sneak it past a guard who searches briefcases and purses. He might be committing a crime by doing so, by the way. Apple isn't the military, of course, but if their corporate security policy is that strict, it may very well have rules like that. And the only way to ensure rules are followed is by enforcing them, and that means severe disciplinary action against those who bypass the rules.
Even if Powell had authorization to take the phone out of Apple's labs -- presumably someone has to test drive the new toys sooner or later -- the corporate rules might have required him to pay attention to it at all times. We've all heard of military attachés who carry briefcases chained to their wrists. It's an extreme example, but demonstrates how a security policy can allow for objects to move around town -- or around the world -- without getting lost. Apple almost certainly doesn't have a policy as rigid as that, but its policy might explicitly prohibit Powell from taking that phone into a bar, putting it down on a counter, and participating in a beer tasting. Again, if Apple's rules and Powell's violation were both that clear, Apple should enforce them.
On the other hand, if Apple doesn't have clear-cut rules, if Powell wasn't prohibited from taking the phone out of his office, if engineers routinely ignore or bypass security rules and -- as long as nothing bad happens -- no one complains, then Apple needs to understand that the system is more to blame than the individual. Most corporate security policies have this sort of problem. Security is important, but it's quickly jettisoned when there's an important job to be done. A common example is passwords: people aren't supposed to share them, unless it's really important and they have to. Another example is guest accounts. And doors that are supposed to remain locked but rarely are. People routinely bypass security policies if they get in the way, and if no one complains, those policies are effectively meaningless.
Apple's unfortunately public security breach has given the company an opportunity to examine its policies and figure out how much of the problem is Powell and how much of it is the system he's a part of. Apple needs to fix its security problem, but only after it figures out where the problem is.
EDITED TO ADD (4:26): In comments, people are reporting that the master password doesn't work. Near as I can tell, those are all recent downloads. So either they took out the feature, or changed the password.
EDITED TO ADD (5/13): More info.
Just published: Special Publication (SP) 800-122, "Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)."
It's 60 pages long; I haven't read it.
An odd burglary prevention tool:
If a burglar breaks in, the system floods the business with a dense fog similar to what's used in theaters and nightclubs. An intense strobe light blinds and disorients the crook.
EDITED TO ADD (4/21): I blogged about the same thing in 2007, though that version was marketed to homeowners. It's interesting how much more negative my reaction is to fog as a home security device than as a security device to protect retail stock.
There's a lot out there on this topic. I've already linked to danah boyd's excellent SXSW talk (and her work in general), my essay on privacy and control, and my talk -- "Security, Privacy, and the Generation Gap" -- which I've given four times in the past two months.
Last week, two new papers were published on the topic.
"Youth, Privacy, and Reputation" is a literature review published by Harvard's Berkman Center. It's long, but an excellent summary of what's out there on the topic:
Conclusions: The prevailing discourse around youth and privacy assumes that young people don't care about their privacy because they post so much personal information online. The implication is that posting personal information online puts them at risk from marketers, pedophiles, future employers, and so on. Thus, policy and technical solutions are proposed that presume that young people would not put personal information online if they understood the consequences. However, our review of the literature suggests that young people care deeply about privacy, particularly with regard to parents and teachers viewing personal information. Young people are heavily monitored at home, at school, and in public by a variety of surveillance technologies. Children and teenagers want private spaces for socialization, exploration, and experimentation, away from adult eyes. Posting personal information online is a way for youth to express themselves, connect with peers, increase popularity, and bond with friends and members of peer groups. Subsequently, young people want to be able to restrict information provided online in a nuanced and granular way.
"How Different Are Young Adults from Older Adults When it Comes to Information Privacy Attitudes & Policy?" from the University of California Berkeley, describes the results of a broad survey on privacy attitudes.
Conclusion: In policy circles, it has become almost a cliché to claim that young people do not care about privacy. Certainly there are many troubling anecdotes surrounding young individuals’ use of the internet, and of social networking sites in particular. Nevertheless, we found that in large proportions young adults do care about privacy. The data show that they and older adults are more alike on many privacy topics than they are different. We suggest, then, that young-adult Americans have an aspiration for increased privacy even while they participate in an online reality that is optimized to increase their revelation of personal data.
They're both worth reading for anyone interested in this topic.
This is an excellent read:
I wouldn't have believed you if you'd told me 20 years ago that America would someday be routinely firing missiles into countries it’s not at war with. For that matter, I wouldn't have believed you if you'd told me a few months ago that America would soon be plotting the assassination of an American citizen who lives abroad.
He goes on to discuss Obama's authorization of the assassination of Anwar al-Awlaki, an American living in Yemen. He speculates on whether or not this is illegal, but spends more time musing about the effectiveness of assassination, referring to a 2009 Security Studies paper by Jenna Jordan, "When Heads Roll: Assessing the Effectiveness of Leadership Decapitation": "She studied 298 attempts, from 1945 through 2004, to weaken or eliminate terrorist groups through 'leadership decapitation' -- eliminating people in senior positions."
From the paper's conclusion:
The data presented in this paper show that decapitation is not an effective counterterrorism strategy. While decapitation is effective in 17 percent of all cases, when compared to the overall rate of organizational decline, decapitated groups have a lower rate of decline than groups that have not had their leaders removed. The findings show that decapitation is more likely to have counterproductive effects in larger, older, religious, and separatist organizations. In these cases decapitation not only has a much lower rate of success, the marginal value is, in fact, negative. The data provide an essential test of decapitation’s value as a counterterrorism policy.
Back to the article:
Particularly ominous are Jordan's findings about groups that, like Al Qaeda and the Taliban, are religious. The chances that a religious terrorist group will collapse in the wake of a decapitation strategy are 17 percent. Of course, that’s better than zero, but it turns out that the chances of such a group fading away when there's no decapitation are 33 percent. In other words, killing leaders of a religious terrorist group seems to increase the group's chances of survival from 67 percent to 83 percent.
Read the whole thing.
I thought this comment, from former senator Gary Hart, was particularly good.
As a veteran of the Senate Select Committee to Investigate the Intelligence Services of the U.S. (so-called Church committee), we discovered at least five official plots to assassinate foreign leaders, including Fidel Castro with almost demented insistence. None of them worked, though the Diem brothers in Vietnam and Salvador Allende in Chile might argue otherwise. In no case did it work out well for the U.S. or its policy. Indeed, once exposed, as these things inevitably are, the ideals underlying our Constitution and the nation's prestige suffered incalculable damage. The issue is principle versus expediency. Principle always suffers when expediency becomes the rule. We simply cannot continue to sacrifice principle to fear.
Additional commentary from The Atlantic.
EDITED TO ADD (4/22): The Church Committee's report on foreign assassination plots.
EDITED TO ADD (5/13): Stratfor
Now, how would CYBERCOM respond to that situation and under what authorities?"
Answer: That would be the responsibility of the Department of Homeland Security (DHS) and the FBI.
Alexander was repeatedly asked about the privacy and civil liberties impact of his new role, and gave answers that were, well, full of platitudes but essentially uninformative.
He also played up the threat, saying that U.S. military networks are seeing "hundreds of thousands of probes a day," whatever that means.
Prior to the hearing, Alexander answered written questions from the committee. Particularly interesting are his answers to questions 24 and 27.
The majority of the funding for the multi-billion dollar Comprehensive National Cybersecurity Initiative (CNCI) is contained in the classified National Intelligence Program budget, which is reviewed and approved by the congressional intelligence committees. Almost all important aspects of the CNCI remain highly classified, including the implementation plan for the Einstein 3 intrusion detection and prevention system. It is widely perceived that the Department of Homeland Security is actually likely to simply extend the cyber security system that the NSA developed for DOD into the civilian and even the private sector for defense of critical infrastructure. DOD is creating a sub-unified Cyber Command with the Director of NSA as its Commander.
24a) In your view, are we risking creating the perception, at home and abroad, that the U.S. government’s dominant interests and objectives in cyberspace are intelligence- and military-related, and if so, is this a perception that we want to exist?
(U) No, I don’t believe we are risking creating this perception as long as we communicate clearly to the American people—and the world—regarding our interests and objectives.
24b) Based on your experience, are the American people likely to accept deployment of classified methods of monitoring electronic communications to defend the government and critical infrastructure without explaining basic aspects of how this monitoring will be conducted and how it may affect them?
(U) I believe the government and the American people expect both NSA and U.S. Cyber Command to support the cyber defense of our nation. Our support does not in any way suggest that we would be monitoring Americans.
(U) I don’t believe we should ask the public to accept blindly some unclear “classified” method. We need to be transparent and communicate to the American people about our objectives to address the national security threat to our nation—the nature of the threat, our overall approach, and the roles and responsibilities of each department and agency involved—including NSA and the Department of Defense. I am personally committed to this transparency, and I know that the Department of Defense, the Intelligence Community, and the rest of the Administration are as well. What needs to remain classified, and I believe that the American people will accept this as reasonable, are the specific foreign threats that we are looking for and how we identify them, and what actions we take when they are identified. For these areas, the American people have you, their elected representatives, to provide the appropriate oversight on their behalf.
(U) Remainder of answer provided in the classified supplement.
24c) What are your views as to the necessity and desirability of maintaining the current level of classification of the CNCI?
(U) In recent months, we have seen an increasing amount of information being shared by the Administration and the departments and agencies on the CNCI and cybersecurity in general, which I believe is consistent with our commitment to transparency. I expect that trend to continue, and personally believe and support this transparency as a foundational element of the dialogue that we need to have with the American people on cybersecurity.
27. Designing the Internet for Better Security
Cyber security experts emphasize that the Internet was not designed for security.
27a) How could the Internet be designed differently to provide much greater inherent security?
(U) The design of the Internet is—and will continue to evolve—based on technological advancements. These new technologies will enhance mobility and, if properly implemented, security. It is in the best interest of both government and industry to consider security more prominently in this evolving future Internet architecture. If confirmed, I look forward to working with this Committee, as well as industry leaders, academia, the services, and DOD agencies on these important concerns.
27b) Is it practical to consider adopting those modifications?
(U) Answer provided in the classified supplement.
27c) What would the impact be on privacy, both pro and con?
(U) Answer provided in the classified supplement.
The Electronic Privacy Information Center has filed a Freedom of Information Act request for that classified supplement. I doubt we'll get it, though.
The U.S. Cyber Command was announced by Secretary of Defense Robert Gates in June 2009. It's supposed to be operational this year.
In 2006, writing about future threats to privacy, I described a life recorder:
A "life recorder" you can wear on your lapel that constantly records is still a few generations off: 200 gigabytes/year for audio and 700 gigabytes/year for video. It'll be sold as a security device, so that no one can attack you without being recorded.
I can't find a quote right now, but in talks I would say that this kind of technology would first be used by groups of people with diminished rights: children, soldiers, prisoners, and the non-lucid elderly.
It's been proposed:
With GPS capabilities built into phones that can be made ever smaller, and the ability for these phones to transmit both sound and video, isn't it time to think about a wearable device that could be used to call for help and accurately report what was happening?
Just one sentence on the security and privacy issues:
Indeed, privacy concerns need to be addressed so that stalkers and predators couldn't compromise the device.
What I can't figure out is why. To me, it seems easier for the cameras to stream live video than prerecorded images.
Last weekend I was in New York, and saw posters on the subways warning people about real guns painted to look like toys. And today I find these pictures from the Baltimore police department. Searching, I find this 2006 article from New York.
I had no idea this was a thing.
Interesting study: "Patients, Pacemakers, and Implantable Defibrillators: Human Values and Security for Wireless Implantable Medical Devices," Tamara Denning, Alan Borning, Batya Friedman, Brian T. Gill, Tadayoshi Kohno, and William H. Maisel.
Abstract: Implantable medical devices (IMDs) improve patients' quality of life and help sustain their lives. In this study, we explore patient views and values regarding their devices to inform the design of computer security for wireless IMDs. We interviewed 13 individuals with implanted cardiac devices. Key questions concerned the evaluation of 8 mockups of IMD security systems. Our results suggest that some systems that are technically viable are nonetheless undesirable to patients. Patients called out a number of values that affected their attitudes towards the systems, including perceived security, safety, freedom from unwanted cultural and historical associations, and self-image. In our analysis, we extend the Value Sensitive Design value dams and flows technique in order to suggest multiple, complementary systems; in our discussion, we highlight some of the usability, regulatory, and economic complexities that arise from offering multiple options. We conclude by offering design guidelines for future security systems for IMDs.
This idea, by Stuart Schechter at Microsoft Research, is -- I think -- clever:
Abstract: Implantable medical devices, such as implantable cardiac defibrillators and pacemakers, now use wireless communication protocols vulnerable to attacks that can physically harm patients. Security measures that impede emergency access by physicians could be equally devastating. We propose that access keys be written into patients' skin using ultraviolet-ink micropigmentation (invisible tattoos).
It certainly is a new way to look at the security threat model.
Chris Hoofnagle has a new paper: "Internalizing Identity Theft." Basically, he shows that one of the problems is that lenders extend credit even when credit applications are sketchy.
From an article on the work:
Using a 2003 amendment to the Fair Credit Reporting Act that allows victims of ID theft to ask creditors for the fraudulent applications submitted in their names, Mr. Hoofnagle worked with a small sample of six ID theft victims and delved into how they were defrauded.
This is a textbook example of an economic externality. Because most of the cost of identity theft is borne by the victim -- even with the lender reimbursing the victim if pushed to -- the lenders make the trade-off that's best for their business, and that means issuing credit even in marginal situations. They make more money that way.
If we want to reduce identity theft, the only solution is to internalize that externality. Either give victims the ability to sue lenders who issue credit in their names to identity thieves, or pass a law with penalties if lenders do this.
Among the ways to move the cost of the crime back to issuers of credit, Mr. Hoofnagle suggests that lenders contribute to a fund that will compensate victims for the loss of their time in resolving their ID theft problems.
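Hoofnagle's externality argument can be made concrete with a toy expected-value calculation. All the numbers below are invented for illustration: when the fraud loss falls mostly on the victim, approving a marginal application is profitable; once the lender is made to bear the full cost, the same application is a money-loser and gets declined.

```python
def lender_expected_profit(p_fraud, revenue, fraud_loss_to_lender):
    """Expected profit on one marginal credit application:
    profit from a legitimate account, minus the lender's share of the
    loss if the application turns out to be fraudulent."""
    return (1 - p_fraud) * revenue - p_fraud * fraud_loss_to_lender

# Hypothetical numbers: 10% fraud risk, $200 revenue per good account.
externalized = lender_expected_profit(0.10, 200, 50)    # victim bears most cost
internalized = lender_expected_profit(0.10, 200, 2500)  # lender made liable
```

With the cost externalized the expected profit is positive, so the rational lender approves; internalizing the cost flips the sign, which is exactly the incentive change the post argues for.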
John Adams argues that our irrationality about comparative risks depends on the type of risk:
With "pure" voluntary risks, the risk itself, with its associated challenge and rush of adrenaline, is the reward. Most climbers on Mount Everest know that it is dangerous and willingly take the risk. With a voluntary, self-controlled, applied risk, such as driving, the reward is getting expeditiously from A to B. But the sense of control that drivers have over their fates appears to encourage a high level of tolerance of the risks involved.
This isn't a new result, but it's vital to understand how people react to different risks.
Nice analysis by John Mueller and Mark G. Stewart:
There is a general agreement about risk, then, in the established regulatory practices of several developed countries: risks are deemed unacceptable if the annual fatality risk is higher than 1 in 10,000 or perhaps higher than 1 in 100,000 and acceptable if the figure is lower than 1 in 1 million or 1 in 2 million. Between these two ranges is an area in which risk might be considered "tolerable."
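The quoted regulatory practice can be written as a simple classifier. The exact cutoffs below (1 in 10,000 and 1 in 1 million) are one choice from within the ranges Mueller and Stewart describe, not values they endorse specifically:

```python
def risk_band(annual_fatality_risk):
    """Classify an annual fatality risk into the three regulatory bands
    described above. Cutoffs (1e-4 and 1e-6) are assumptions chosen from
    within the quoted ranges."""
    if annual_fatality_risk > 1e-4:
        return "unacceptable"
    if annual_fatality_risk < 1e-6:
        return "acceptable"
    return "tolerable"
```

For example, a 1-in-100,000 (1e-5) risk falls in the middle "tolerable" band.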
Says Matt Blaze:
A decade ago, I observed that commercial certificate authorities protect you from anyone from whom they are unwilling to take money. That turns out to be wrong; they don't even do that much.
Scary research by Christopher Soghoian and Sid Stamm:
Abstract: This paper introduces a new attack, the compelled certificate creation attack, in which government agencies compel a certificate authority to issue false SSL certificates that are then used by intelligence agencies to covertly intercept and hijack individuals' secure Web-based communications. We reveal alarming evidence that suggests that this attack is in active use. Finally, we introduce a lightweight browser add-on that detects and thwarts such attacks.
Even more scary, Soghoian and Stamm found that hardware to perform this attack is being produced and sold:
At a recent wiretapping convention, however, security researcher Chris Soghoian discovered that a small company was marketing internet spying boxes to the feds. The boxes were designed to intercept those communications -- without breaking the encryption -- by using forged security certificates, instead of the real ones that websites use to verify secure connections. To use the appliance, the government would need to acquire a forged certificate from any one of more than 100 trusted Certificate Authorities.
Matt Blaze has the best analysis. Read his whole commentary; this is just the ending:
It's worth pointing out that, from the perspective of a law enforcement or intelligence agency, this sort of surveillance is far from ideal. A central requirement for most government wiretapping (mandated, for example, in the CALEA standards for telephone interception) is that surveillance be undetectable. But issuing a bogus web certificate carries with it the risk of detection by the target, either in real-time or after the fact, especially if it's for a web site already visited. Although current browsers don't ordinarily detect unusual or suspiciously changed certificates, there's no fundamental reason they couldn't (and the Soghoian/Stamm paper proposes a Firefox plugin to do just that). In any case, there's no reliable way for the wiretapper to know in advance whether the target will be alerted by a browser that scrutinizes new certificates.
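Blaze's point that browsers could scrutinize new or changed certificates is essentially certificate pinning. Here is a minimal trust-on-first-use sketch of the idea; it is not the Soghoian/Stamm add-on, whose actual design I'm not reproducing:

```python
import hashlib

# hostname -> SHA-256 fingerprint of the DER-encoded certificate first seen
pins = {}

def fingerprint(der_cert_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert_bytes).hexdigest()

def check_certificate(host, der_cert_bytes):
    """Trust-on-first-use pinning: remember the first certificate seen for
    a host, and flag any later change. A change could be a compelled
    certificate -- or a routine key rotation, so a real tool must let the
    user decide rather than hard-fail."""
    fp = fingerprint(der_cert_bytes)
    if host not in pins:
        pins[host] = fp
        return "pinned"
    return "ok" if pins[host] == fp else "CHANGED"
```

This is why the wiretapper can't be sure of staying undetected: any client keeping even this much state notices when a previously visited site suddenly presents a different certificate.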
CRN Magazine named me as one of its security superstars of 2010.

Last month at the RSA Conference, I gave a talk titled "Security, Privacy, and the Generation Gap." It was pretty good, but it was the first time I gave that talk in front of a large audience -- and its newness showed.
Last week, I gave the same talk again, at the CACR Higher Education Security Summit at Indiana University. It was much, much better the second time around, and there's a video available.
Dueling has a rational economic basis.
New cryptanalysis of the proprietary encryption algorithm used in the Digital Enhanced Cordless Telecommunications (DECT) standard for cordless phones.
Abstract. The DECT Standard Cipher (DSC) is a proprietary 64-bit stream cipher based on irregularly clocked LFSRs and a non-linear output combiner. The cipher is meant to provide confidentiality for cordless telephony. This paper illustrates how the DSC was reverse-engineered from a hardware implementation using custom firmware and information on the structure of the cipher gathered from a patent. Beyond disclosing the DSC, the paper proposes a practical attack against DSC that recovers the secret key from 2^15 keystreams on a standard PC with a success rate of 50% within hours, and somewhat faster when a CUDA graphics adapter is available.
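To see what "irregularly clocked LFSRs with a non-linear output combiner" means in practice, here is a toy keystream generator in that general style. The register sizes, feedback taps, clocking rule, and combiner below are all invented for illustration; this is NOT the actual DSC.

```python
# Toy illustration of an irregularly clocked LFSR keystream generator.
# All parameters here are invented; this is not the real DSC.

def lfsr_step(state: int, taps: int, nbits: int) -> int:
    """One Fibonacci LFSR step: feedback is the parity of the tapped bits."""
    fb = bin(state & taps).count("1") & 1
    return (state >> 1) | (fb << (nbits - 1))

def keystream(seed1: int, seed2: int, seed3: int, n: int) -> list:
    """Generate n bits from three majority-clocked LFSRs."""
    sizes = [17, 19, 21]                 # register lengths (invented)
    taps = [0x12001, 0x40013, 0x100005]  # feedback taps (invented)
    clk = [8, 10, 11]                    # clocking-bit positions (invented)
    seeds = [seed1, seed2, seed3]
    r = [(seeds[i] % (1 << sizes[i])) or 1 for i in range(3)]
    out = []
    for _ in range(n):
        bits = [(r[i] >> clk[i]) & 1 for i in range(3)]
        maj = 1 if sum(bits) >= 2 else 0
        for i in range(3):
            # Irregular clocking: only registers agreeing with the
            # majority bit are stepped this round.
            if bits[i] == maj:
                r[i] = lfsr_step(r[i], taps[i], sizes[i])
        t = [(r[i] >> (sizes[i] - 1)) & 1 for i in range(3)]
        # Non-linear output combiner: majority of the three top bits.
        out.append((t[0] & t[1]) ^ (t[1] & t[2]) ^ (t[0] & t[2]))
    return out
```

The irregular clocking is what makes ciphers of this family hard to analyze algebraically -- and what the attack in the paper exploits statistically.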
Air marshals are being arrested faster than air marshals are making arrests.
Actually, there have been many more arrests of Federal air marshals than that story reported, quite a few for felony offenses. In fact, more air marshals have been arrested than the number of people arrested by air marshals.
At a news conference at the National Press Club, WikiLeaks said it had acquired the video from whistle-blowers in the military and viewed it after breaking the encryption code. WikiLeaks released the full 38-minute video as well as a 17-minute edited version.
And this quote from the WikiLeaks Twitter feed on Feb 20th:
Finally cracked the encryption to US military video in which journalists, among others, are shot. Thanks to all who donated $/CPUs.
Surely this isn't NSA-level encryption. But what is it?
Note that this is intended to be a discussion about the cryptanalysis, not about the geopolitics of the event.
People intent on preventing a Moscow-style terrorist attack against the New York subway system are proposing a range of expensive new underground security measures, some temporary and some permanent.
They should save their money - and instead invest every penny they're considering pouring into new technologies into intelligence and old-fashioned policing.
Intensifying security at specific stations only works against terrorists who aren't smart enough to move to another station. Cameras are useful only if all the stars align: The terrorists happen to walk into the frame, the video feeds are being watched in real time and the police can respond quickly enough to be effective. They're much more useful after an attack, to figure out who pulled it off.
Installing biological and chemical detectors requires similarly implausible luck - plus a terrorist plot that includes the specific biological or chemical agent that is being detected.
What all these misguided reactions have in common is that they're based on "movie-plot threats": overly specific attack scenarios. They fill our imagination vividly, in full color with rich detail. Before long, we're envisioning an entire story line, with or without Bruce Willis saving the day. And we're scared.
It's not that movie-plot threats are not worth worrying about. It's that each one - Moscow's subway attack, the bombing of the Oklahoma City federal building, etc. - is too specific. These threats are infinite, and the bad guys can easily switch among them.
New York has thousands of possible targets, and there are dozens of possible tactics. Implementing security against movie-plot threats is only effective if we correctly guess which specific threat to protect against. That's unlikely.
A far better strategy is to spend our limited counterterrorism resources on investigation and intelligence - and on emergency response. These measures don't hinge on any specific threat; they don't require us to guess the tactic or target correctly. They're effective in a variety of circumstances, even nonterrorist ones.
The result may not be flashy or outwardly reassuring - as are pricey new scanners in airports. But the strategy will save more lives.
The 2006 arrest of the liquid bombers - who wanted to detonate liquid explosives to be brought onboard airliners traveling from England to North America - serves as an excellent example. The plotters were arrested in their London apartments, and their attack was foiled before they ever got to the airport.
It didn't matter if they were using liquids or solids or gases. It didn't even matter if they were targeting airports or shopping malls or theaters. It was a straightforward, although hardly simple, matter of following leads.
Gimmicky security measures are tempting - but they're distractions we can't afford. The Christmas Day bomber chose his tactic because it would circumvent last year's security measures, and the next attacker will choose his tactic - and target - according to similar criteria. Spend money on cameras and guards in the subways, and the terrorists will simply modify their plot to render those countermeasures ineffective.
Humans are a species of storytellers, and the Moscow story has obvious parallels in New York. When we read the word "subway," we can't help but think about the system we use every day. This is a natural response, but it doesn't make for good public policy. We'd all be safer if we rose above the simple parallels and the need to calm our fears with expensive and seductive new technologies - and countered the threat the smart way.
This essay originally appeared in the New York Daily News.
In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy's and Larry Ellison's comments from a few years earlier, and you've got a whole lot of tech CEOs proclaiming the death of privacy--especially when it comes to young people.
It's just not true. People, including the younger generation, still care about privacy. Yes, they're far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr and having intimate conversations on Twitter. But they take steps to protect their privacy and vociferously complain when they feel it has been violated. They're not technically sophisticated about privacy and make mistakes all the time, but that's mostly the fault of companies and Web sites that try to manipulate them for financial gain.
To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it's no longer private. But that's not how privacy works, and it's not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone's e-mail conversations--your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.
People's relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they're thinking about it, and less likely to if they're thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control--and privacy--users have over their data. We all give up our privacy when we're not thinking about it.
Group behavior matters; we're more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it's gone. Even if we don't have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.
Here's the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users' information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information shared in more ways, more publicly means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.
You can see these forces in play with Google's launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but--and Google knew this--changing options is hard and most people accept the defaults, especially when they're trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.
Facebook tried a similar control grab when it changed people's default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.
Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no true privacy change in that users could not see more information than before; the change was in control--or arguably, just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.
With all this privacy erosion, those CEOs may actually be right--but only because they're working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that's where we socialize these days, and it's hard--especially for the younger generation--to opt out. As long as privacy isn't salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There's no malice on anyone's part here; it's just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can't rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.
This essay originally appeared on Forbes.com.
This seems like science fiction to me:
The camera uses the same "red eye" effect from camera flashes to project it hundreds of meters, allowing it to identify binoculars, sniper scopes, cameras and even human eyeballs that are staring at you....
It'll protect your secrets from your kid sister, unless she's smarter than that.
Looks cool, though.
This is a little hokey, but better them than the NSA:
The National Cybersecurity Awareness Campaign Challenge Competition is designed to solicit ideas from industry and individuals alike on how best we can clearly and comprehensively discuss cybersecurity with the American public.
Deadline is end of April, if you want to submit something. "Winners of the Challenge will be invited to an event in Washington D.C. in late May or early June." I wonder what kind of event.
Is MI5 playing a joke on us?
Female homicide bombers are being fitted with exploding breast implants which are almost impossible to detect, British spies have reportedly discovered.
Radical Islamist plastic surgeons could be carrying out the implant operations in lawless areas of Pakistan, security sources are said to have warned.
They also could be having tea with their families. They could be building killer robots with lasers shooting out of their eyes.
I love the poor Photoshop job in this article from The Sun.
Perhaps we should just give up. When this sort of hysterical nonsense becomes an actual news story, the terrorists have won.
Once upon a time, men and women throughout the land lived in fear. This caused them to do foolish things that made them feel better temporarily, but didn't make them any safer. Gradually, some people became less fearful, and less tolerant of the foolish things they were told to submit to. The lords who ruled the land tried to revive the fear, but with less and less success. Sensible men and women from all over the land were peering behind the curtain, and seeing that the emperor had no clothes.
Thus it came to pass that the lords decided to appeal to the children. If the children could be made more fearful, then their fathers and mothers might also become more fearful, and the lords would remain lords, and all would be right with the order of things. The children would grow up in fear, and thus become accustomed to doing what the lords said, further allowing the lords to remain lords. But to do this, the lords realized they needed Frightful Fables and Fear-Mongering Fairytales to tell the children at bedtime.
That's this year's contest. Make your submissions short and sweet: 400 words or less. Imagine that someone will be illustrating this story for young children. Submit your entry in comments; deadline is May 1. I'll choose several semifinalists, and then you all will vote for the winner. The prize is a signed copy of my latest book, Cryptography Engineering. And if anyone seriously wants to illustrate this, please contact me directly -- or just go for it and post a link.
Thank you to loyal reader -- and frequent reader of my draft essays -- "grenouille," who suggested this year's contest.
And good luck!
The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner. The Fourth Movie-Plot Threat Contest rules and winner.
EDITED TO ADD (4/1): I'm looking for entries in the form of a fairytale or fable. Plot summaries and descriptions won't count as entries, although you are welcome to post them and comment on them -- and use them if others post them.
EDITED TO ADD (5/15): Voting is now open here.