Blog: August 2008 Archives
Mr Jetley said he first realised his security password had been changed when a call centre staff member told him his code word did not match with the one on the computer.
“I thought it was actually quite a funny response,” he said.
“But what really incensed me was when I was told I could not change it back to ‘Lloyds is pants’ because they said it was not appropriate.
“The rules seemed to change, and they told me it had to be one word, so I tried ‘censorship’, but they didn’t like that, and then said it had to be no more than six letters long.”
Lloyds claims that it fired the employee responsible for this, but what I want to know is how the employee got a copy of the man’s password in the first place. Why isn’t it stored only in hashed form on the bank’s computers?
How secure can the bank’s computer systems be if employees are allowed to look at and change customer passwords at whim?
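For what it’s worth, the standard practice is to store only a salted, slow hash of each password, so that even an employee with full database access never sees the password itself. A minimal sketch in Python (the algorithm choice and iteration count here are illustrative, not anything Lloyds actually uses):

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); only these are stored, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("Lloyds is pants")
assert check_password("Lloyds is pants", salt, digest)
assert not check_password("censorship", salt, digest)
```

With a scheme like this, a call-center system can verify a code word without any employee ever being able to read or copy it.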
It’s a man-in-the-middle attack. “The Internet’s Biggest Security Hole” (the title of that first link) is that interior relays have always been trusted even though they are not trustworthy.
EDITED TO ADD (9/12): This is worth reading.
A plane was forced to land when a passenger had an extreme allergic reaction to a leaking jar of mushroom soup, it was revealed today.
The soup fell on the man from an overhead locker on a Ryanair flight to Dublin from Budapest.
He reportedly suffered allergic swelling in his neck and struggled to breathe, forcing staff to seek emergency medical treatment.
It’s unclear if this error is random or systematic. If it’s random—a small percentage of all votes are dropped—then it is highly unlikely that this affected the outcome of any election. If it’s systematic—a small percentage of votes for a particular candidate are dropped—then it is much more problematic.
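A quick simulation shows why the distinction matters (the vote totals and the 1% drop rate are made up for illustration):

```python
import random

random.seed(1)

def margin_after_drop(votes_a=51_000, votes_b=49_000, drop=0.01, systematic=False):
    """Return candidate A's margin after 1% of ballots are lost.

    Random loss hits both candidates' ballots in proportion;
    systematic loss hits only candidate A's ballots."""
    a = sum(1 for _ in range(votes_a) if random.random() > drop)
    if systematic:
        b = votes_b  # B's ballots are untouched
    else:
        b = sum(1 for _ in range(votes_b) if random.random() > drop)
    return a - b

# Random loss barely moves a 2,000-vote margin (both sides shrink ~1%);
# systematic loss against A erodes it by roughly drop * votes_a, about 510 votes.
print(margin_after_drop(systematic=False))
print(margin_after_drop(systematic=True))
```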
Ohio is trying to sue:
Ohio Secretary of State Jennifer Brunner is seeking to recover millions of dollars her state spent on the touch-screen machines and is urging the state legislature to require optical scanners statewide instead.
In a lawsuit, Brunner charged on Aug. 6 that touch-screen machines made by the former Diebold Election Systems and bought by 11 Ohio counties “produce computer stoppages” or delays and are vulnerable to “hacking, tampering and other attacks.” In all, 44 Ohio counties spent $83 million in 2006 on Diebold’s touch screens.
In other news, election officials sometimes take voting machines home for the night.
My 2004 essay: “Why Election Technology is Hard.”
It’s all about the captions:
…doctored photographs are the least of our worries. If you want to trick someone with a photograph, there are lots of easy ways to do it. You don’t need Photoshop. You don’t need sophisticated digital photo-manipulation. You don’t need a computer. All you need to do is change the caption.
The photographs presented by Colin Powell at the United Nations in 2003 provide several examples. Photographs that were used to justify a war. And yet, the actual photographs are low-res, muddy aerial surveillance photographs of buildings and vehicles on the ground in Iraq. I’m not an aerial intelligence expert. I could be looking at anything. It is the labels, the captions, and the surrounding text that turn the images from one thing into another.
Powell was arguing that the Iraqis were doing something wrong, knew they were doing something wrong, and were trying to cover their tracks. Later, it was revealed that the captions were wrong. There was no evidence of chemical weapons and no evidence of concealment.
There is a larger point. I don’t know what these buildings were really used for. I don’t know whether they were used for chemical weapons at one time, and then transformed into something relatively innocuous, in order to hide the reality of what was going on from weapons inspectors. But I do know that the yellow captions influence how we see the pictures. “Chemical Munitions Bunker” is different from “Empty Warehouse” which is different from “International House of Pancakes.” The image remains the same but we see it differently.
Change the yellow labels, change the caption and you change the meaning of the photographs. You don’t need Photoshop. That’s the disturbing part. Captions do the heavy lifting as far as deception is concerned. The pictures merely provide the window-dressing. The unending series of errors engendered by falsely captioned photographs are rarely remarked on.
In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.
The “Oyster card” used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston “T” was the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start—despite facing an open-and-shut case of First Amendment prior restraint.
The U.S. court has since seen the error of its ways—but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.
The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors—who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.
Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It wasn’t until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this “responsible disclosure” protocol, it’s the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.
Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems and many other security devices.
Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers—sometimes young students—who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.
This preference for secrecy comes from confusing a vulnerability with information about that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys don’t do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information won’t leak out even if the research results are suppressed. These assumptions are all incorrect.
The problem isn’t the researchers; it’s the products themselves. Companies will only design security as good as what their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”
In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities don’t get fixed, and customers are no wiser. Security research is stifled, and security technology doesn’t improve. The only beneficiaries are the bad guys.
If you’ll forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we don’t pay kidnappers: It encourages more kidnappings. Yet in every kidnapping case, there’s someone—a spouse, a parent, an employer—with a good reason why, in this one case, we should make an exception.
The reason we want researchers to publish vulnerabilities is because that’s how security improves. But in every case there’s someone—the Massachusetts Bay Transit Authority, the locksmiths, an election machine manufacturer—who argues that, in this one case, we should make an exception.
We shouldn’t. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It’s how we learn about security, and how we improve future security.
This essay previously appeared on Wired.com.
EDITED TO ADD (8/26): Matt Blaze has a good essay on the topic.
EDITED TO ADD (9/12): A good legal analysis.
Interesting: the solution to one problem causes another.
“The rigorous studies clearly show red-light cameras don’t work,” said lead author Barbara Langland-Orban, professor and chair of health policy and management at the USF College of Public Health. “Instead, they increase crashes and injuries as drivers attempt to abruptly stop at camera intersections.”
Comprehensive studies from North Carolina, Virginia, and Ontario have all reported cameras are associated with increases in crashes. The study by the Virginia Transportation Research Council also found that cameras were linked to increased crash costs. The only studies that conclude cameras reduced crashes or injuries contained “major research design flaws,” such as incomplete data or inadequate analyses, and were always conducted by researchers with links to the Insurance Institute for Highway Safety. The IIHS, funded by automobile insurance companies, is the leading advocate for red-light cameras since insurance companies can profit from red-light cameras by way of higher premiums due to increased crashes and citations.
And, of course, the agenda of the government is to increase revenue due to fines:
A 2001 paper by the Office of the Majority Leader of the U.S. House of Representatives reported that red-light cameras are “a hidden tax levied on motorists.” The report came to the same conclusions that all of the other valid studies have, that red-light cameras are associated with increased crashes and that the timings at yellow lights are often set too short to increase tickets for red-light running. That’s right, the state actually tampers with the yellow light settings to make them shorter, and more likely to turn red as you’re driving through them.
In fact, six U.S. cities have been found guilty of shortening the yellow light cycles below what is allowed by law on intersections equipped with cameras meant to catch red-light runners. Those local governments have completely ignored the safety benefit of increasing the yellow light time and decided to install red-light cameras, shorten the yellow light duration, and collect the profits instead.
The cities in question include Union City, CA, Dallas and Lubbock, TX, Nashville and Chattanooga, TN, and Springfield, MO, according to Motorists.org, which collected information from reports from around the country.
Starting September 27th: a 36-foot-long, 330-lb female and a 20-foot-long, 100-lb male.
Abstract—We reverse engineer copyright enforcement in the popular BitTorrent file sharing network and find that a common approach for identifying infringing users is not conclusive. We describe simple techniques for implicating arbitrary network endpoints in illegal content sharing and demonstrate the effectiveness of these techniques experimentally, attracting real DMCA complaints for nonsense devices, e.g., IP printers and a wireless access point. We then step back and evaluate the challenges and possible future directions for pervasive monitoring in P2P file sharing networks.
Webpage on the research.
There’s no profile:
MI5 has concluded that there is no easy way to identify those who become involved in terrorism in Britain, according to a classified internal research document on radicalisation seen by the Guardian.
The main findings include:
• The majority are British nationals and the remainder, with a few exceptions, are here legally. Around half were born in the UK, with others migrating here later in life. Some of these fled traumatic experiences and oppressive regimes and claimed UK asylum, but more came to Britain to study or for family or economic reasons and became radicalised many years after arriving.
• Far from being religious zealots, a large number of those involved in terrorism do not practise their faith regularly. Many lack religious literacy and could actually be regarded as religious novices. Very few have been brought up in strongly religious households, and there is a higher than average proportion of converts. Some are involved in drug-taking, drinking alcohol and visiting prostitutes. MI5 says there is evidence that a well-established religious identity actually protects against violent radicalisation.
• The “mad and bad” theory to explain why people turn to terrorism does not stand up, with no more evidence of mental illness or pathological personality traits found among British terrorists than is found in the general population.
• British-based terrorists are as ethnically diverse as the UK Muslim population, with individuals from Pakistani, Middle Eastern and Caucasian backgrounds. MI5 says assumptions cannot be made about suspects based on skin colour, ethnic heritage or nationality.
• Most UK terrorists are male, but women also play an important role. Sometimes they are aware of their husbands’, brothers’ or sons’ activities, but do not object or try to stop them.
• While the majority are in their early to mid-20s when they become radicalised, a small but not insignificant minority first become involved in violent extremism over the age of 30.
• Far from being lone individuals with no ties, the majority of those over 30 have steady relationships, and most have children. MI5 says this challenges the idea that terrorists are young men driven by sexual frustration and lured to “martyrdom” by the promise of beautiful virgins waiting for them in paradise. It is wrong to assume that someone with a wife and children is less likely to commit acts of terrorism.
• Those involved in British terrorism are not unintelligent or gullible, and nor are they more likely to be well-educated; their educational achievement ranges from total lack of qualifications to degree-level education. However, they are almost all employed in low-grade jobs.
They break planes:
Citing sources within the aviation industry, ABC News reports an overzealous TSA employee attempted to gain access to the parked aircraft by climbing up the fuselage… reportedly using the Total Air Temperature (TAT) probes mounted to the planes’ noses as handholds.
“The brilliant employees used an instrument located just below the cockpit window that is critical to the operation of the onboard computers,” one pilot wrote on an American Eagle internet forum. “They decided this instrument, the TAT probe, would be adequate to use as a ladder.”
They harass innocents:
James Robinson is a retired Air National Guard brigadier general and a commercial pilot for a major airline who flies passenger planes around the country.
He has even been certified by the Transportation Security Administration to carry a weapon into the cockpit as part of the government’s defense program should a terrorist try to commandeer a plane.
But there’s one problem: James Robinson, the pilot, has difficulty even getting to his plane because his name is on the government’s terrorist “watch list.”
It’s easy to sneak by them:
The third-grader has been on the watch list since he was 5 years old. Asked whether he is a terrorist, he said, “I don’t know.”
Though he doesn’t even know what a terrorist is, he is embarrassed that trips to the airport cause a ruckus, said his mother, Denise Robinson.
Denise Robinson says she tells the skycaps her son is on the list, tips heavily and is given boarding passes. And booking her son as “J. Pierce Robinson” also has let the family bypass the watch list hassle.
And here’s how to sneak lockpicks past them.
EDITED TO ADD (8/21): Ha ha ha ha:
Even though its inspector’s actions caused nine American Eagle planes to be grounded in Chicago this week, the Transportation Security Administration says it may pursue action against the airline for…
And a step in the right direction:
A federal appeals court ruled this week that individuals who are blocked from commercial flights by the federal no-fly list can challenge their detention in federal court.
The TCP/IP protocols were conceived during a time that was quite different from the hostile environment they operate in now. Yet a direct result of their effectiveness and widespread early adoption is that much of today’s global economy remains dependent upon them.
While many textbooks and articles have created the myth that the Internet Protocols (IP) were designed for warfare environments, the top level goal for the DARPA Internet Program was the sharing of large service machines on the ARPANET. As a result, many protocol specifications focus only on the operational aspects of the protocols they specify and overlook their security implications.
Though Internet technology has evolved, the building blocks are basically the same core protocols adopted by the ARPANET more than two decades ago. During the last twenty years many vulnerabilities have been identified in the TCP/IP stacks of a number of systems. Some were flaws in protocol implementations which affect only a reduced number of systems. Others were flaws in the protocols themselves affecting virtually every existing implementation. Even in the last couple of years researchers were still working on security problems in the core protocols.
The discovery of vulnerabilities in the TCP/IP protocols led to reports being published by a number of CSIRTs (Computer Security Incident Response Teams) and vendors, which helped to raise awareness about the threats as well as the best mitigations known at the time the reports were published.
Much of the effort of the security community on the Internet protocols did not result in official documents (RFCs) being issued by the IETF (Internet Engineering Task Force) leading to a situation in which “known” security problems have not always been addressed by all vendors. In many cases vendors have implemented quick “fixes” to protocol flaws without a careful analysis of their effectiveness and their impact on interoperability.
As a result, any system built in the future according to the official TCP/IP specifications might reincarnate security flaws that have already hit our communication systems in the past.
Producing a secure TCP/IP implementation nowadays is a very difficult task, partly because no single document can serve as a security roadmap for the protocols.
There is clearly a need for a companion document to the IETF specifications that discusses the security aspects and implications of the protocols, identifies the possible threats, proposes possible counter-measures, and analyses their respective effectiveness.
This document is the result of an assessment of the IETF specifications of the Internet Protocol from a security point of view. Possible threats were identified and, where possible, counter-measures were proposed. Additionally, many implementation flaws that have led to security vulnerabilities have been referenced in the hope that future implementations will not incur the same problems. This document does not limit itself to performing a security assessment of the relevant IETF specification but also offers an assessment of common implementation strategies.
Whilst not aiming to be the final word on the security of the IP, this document aims to raise awareness about the many security threats based on the IP protocol that have been faced in the past, those that we are currently facing, and those we may still have to deal with in the future. It provides advice for the secure implementation of the IP, and also insights about the security aspects of the IP that may be of help to the Internet operations community.
Feedback from the community is more than encouraged to help this document be as accurate as possible and to keep it updated as new threats are discovered.
Contrary to popular belief, homicide due to mental illness is declining, at least in England and Wales:
The rate of total homicide and the rate of homicide due to mental disorder rose steadily until the mid-1970s. From then there was a reversal in the rate of homicides attributed to mental disorder, which declined to historically low levels, while other homicides continued to rise.
Remember this the next time you read a newspaper article about how scared everyone is because some patients escaped from a mental institution:
We are convinced by the media that people with serious mental illnesses make a significant contribution to murders, and we formulate our approach as a society to tens of thousands of people on the basis of the actions of about 20. Once again, the decisions we make, the attitudes we have, and the prejudices we express are all entirely rational, when analysed in terms of the flawed information we are fed, only half chewed, from the mouths of morons.
At this moment, Adi Shamir is giving an invited talk at the Crypto 2008 conference about a new type of cryptanalytic attack called “cube attacks.” He claims very broad applicability to stream and block ciphers.
My personal joke—at least I hope it’s a joke—is that he’s going to break every NIST hash submission without ever seeing any of them. (Note: The attack, at least at this point, doesn’t apply to hash functions.)
EDITED TO ADD (8/19): AES is immune to this attack—the degree of the algebraic polynomial is too high—and all the block ciphers we use have a higher degree. But, in general, anything that can be described with a low-degree polynomial equation is vulnerable: that’s pretty much every LFSR scheme.
EDITED TO ADD (8/19): The typo that amused you all below has been fixed. And this attack doesn’t apply to any block cipher—DES, AES, Blowfish, Twofish, anything else—in common use; their degree is much too high. It doesn’t apply to hash functions at all, at least not yet—but again, the degree of all the common ones is much too high. I will post a link to the paper when it becomes available; I assume Adi will post it soon. (The paper was rejected from Asiacrypt, demonstrating yet again that the conference review process is broken.)
EDITED TO ADD (8/19): Adi’s coauthor is Itai Dinur. Their plan is to submit the paper to Eurocrypt 2009. They will publish it as soon as they can, depending on the Eurocrypt rules about prepublication.
EDITED TO ADD (9/14): The paper is online.
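For the curious, the core trick is easy to demonstrate on a toy example: XOR-summing a low-degree black-box polynomial over all values of a chosen “cube” of public variables cancels every term except a “superpoly” that, if the degree is low enough, is linear in the key bits. The five-variable polynomial below is invented for illustration; it is not from the actual Dinur–Shamir paper:

```python
from itertools import product

def f(x1, x2, x3, k1, k2):
    # A toy low-degree polynomial over GF(2), treated as a black box:
    # f = x1*x2*k1 ^ x1*x2*x3 ^ x1*k2 ^ x2 ^ k1
    return (x1 & x2 & k1) ^ (x1 & x2 & x3) ^ (x1 & k2) ^ x2 ^ k1

def superpoly(cipher, cube_vars, fixed, key):
    """XOR-sum the black box over all 2^d assignments of the cube variables."""
    total = 0
    for bits in product((0, 1), repeat=len(cube_vars)):
        args = dict(fixed)
        args.update(zip(cube_vars, bits))
        total ^= cipher(args["x1"], args["x2"], args["x3"], *key)
    return total

secret = (1, 0)  # (k1, k2): unknown to the attacker, queried as a black box
# Summing over the cube {x1, x2} with x3 fixed to 0 cancels everything except
# the superpoly of the x1*x2 term, which here is k1 ^ x3 = k1.
recovered_k1 = superpoly(f, ("x1", "x2"), {"x3": 0}, secret)
print(recovered_k1)  # 1, i.e. k1
```

The real attack does this for many cubes at once, collecting enough linear equations in the key bits to solve for the key. The point about degree in the post above is exactly why it fails against AES: the algebraic degree is far too high for any practical cube to produce a linear superpoly.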
This is interesting:
Exactly who was behind the cyberattack is not known. The Georgian government blamed Russia for the attacks, but the Russian government said it was not involved. In the end, Georgia, with a population of just 4.6 million and a relative latecomer to the Internet, saw little effect beyond inaccessibility to many of its government Web sites, which limited the government’s ability to spread its message online and to connect with sympathizers around the world during the fighting with Russia.
In Georgia, media, communications and transportation companies were also attacked, according to security researchers. Shadowserver saw the attack against Georgia spread to computers throughout the government after Russian troops entered the Georgian province of South Ossetia. The National Bank of Georgia’s Web site was defaced at one point. Images of 20th-century dictators as well as an image of Georgia’s president, Mr. Saakashvili, were placed on the site. “Could this somehow be indirect Russian action? Yes, but considering Russia is past playing nice and uses real bombs, they could have attacked more strategic targets or eliminated the infrastructure kinetically,” said Gadi Evron, an Israeli network security expert. “The nature of what’s going on isn’t clear,” he said.
In addition to D.D.O.S. attacks that crippled Georgia’s limited Internet infrastructure, researchers said there was evidence of redirection of Internet traffic through Russian telecommunications firms beginning last weekend. The attacks continued on Tuesday, controlled by software programs that were located in hosting centers controlled by Russian telecommunications firms. A Russian-language Web site, stopgeorgia.ru, also continued to operate and offer software for download used for D.D.O.S. attacks.
Welcome to 21st century warfare.
“It costs about 4 cents per machine,” Mr. Woodcock said. “You could fund an entire cyberwarfare campaign for the cost of replacing a tank tread, so you would be foolish not to.”
Illegally diverting water is terrorism:
South Australian Premier Mike Rann says the diversion of water from the Paroo River in Queensland is an act of terrorism during a water crisis.
Anonymously threatening people with messages on playing cards, like the Joker in The Dark Knight, is terrorism:
Giles County deputies arrest two county teenagers they say made terroristic threats to people on playing cards.
Investigators say 18-year olds Brian Stafford and Justin Dirico left eight threatening playing cards at the Pearisburg Wal-Mart on Saturday, August 9th. The cards read “9 people will die” and “9 people will suffer” with the date 8-15-08.
A ninth card was found on a car at the Dairy Queen on Sunday, August 10th.
EDITED TO ADD (8/26): In the UK, walking on a bicycle path is terrorism.
An index of fiction.
The site was inspired by Margaret Atwood’s infamous comment that Oryx and Crake isn’t really science fiction, because science fiction is “talking squids in outer space.” This prompted a hunt for science fiction which actually did feature talking squids in outer space.
They said—and it’s almost too stupid to believe—that:
the balaclava “could be used to conceal someone’s identity or could be used in the course of a criminal act”.
Don’t they realize that balaclavas are for sale everywhere in the UK? Or that scarves, hoods, handkerchiefs, and dark glasses could also be used to conceal someone’s identity?
The game sounds like it could be fun, though:
Each player starts as an empire filled with good intentions and a determination to liberate the world from terrorists and from each other.
Then the reality of world politics kicks in and terrorist states emerge.
Andrew said: “The terrorists can win and quite often do and it’s global anarchy. It sums up the randomness of geo-politics pretty well.”
In their cardboard version of realpolitik George Bush’s “Axis of Evil” is reduced to a spinner in the middle of the board, which determines which player is designated a terrorist state.
That person then has to wear a balaclava (included in the box set) with the word “Evil” stitched on to it.
In the middle of a sensationalist article about risks to children and how giving them cell phones can help, there’s at least one person who gets it.
Since the 1999 Columbine High School shootings and the 9/11 terrorist attacks, many parents feel better having a way to contact their children. But hundreds of students on cell phones during an emergency can cause problems for responders.
“There’s a huge difference between feeling safer and being safer,” says Kenneth Trump, president of National School Safety and Security Services.
According to Trump, students’ cell phone use during emergencies can do three things: increase the spread of rumors about the situation, expedite parental traffic at a scene that needs to be controlled and accelerate the overload of cell-phone systems in the area.
Tom Hautton, an attorney for the National School Board Association, said that cell phones in schools also can lead to classroom distractions, text-message cheating and inappropriate photographs and videos being spread around campus.
We are just naturally inclined to make irrational security decisions when it comes to our children.
I don’t know any of the details, but this seems like a good use of data mining:
Mr Tancredi said Verisign’s fraud detection kit would help “decrease the time between the attack being launched and the brokerage being able to respond”.
Before now, he said, brokerages relied on counter measures such as restrictive stock trading or analysis packages that only spotted a problem when money had gone.
Verisign’s software is a module that brokers can add to their in-house trading system that alerts anti-fraud teams to look more closely at trades that exhibit certain behaviour patterns.
“What this self-learning behavioural engine does is look at the different attributes of the event, not necessarily about the computer or where you are logging on from but about the actual transaction, the trade, the amount of the trade,” said Mr Tancredi.
“For example have you liquidated all of your assets in stock that you own in order to buy one penny stock?” he said. “Another example is when a customer who normally trades tech stock on Nasdaq all of a sudden trades a penny stock that has to do with health care and is placing a trade four times more than normal.”
This is a good use of data mining because, as I said previously:
Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms.
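To make that concrete, here’s a toy sketch of profile-based flagging along the lines Tancredi describes: an unfamiliar sector, or a trade several times the customer’s normal size. This is a static rule, not Verisign’s self-learning engine, and all the names and thresholds are invented:

```python
from statistics import mean

def flag_trade(history, trade, size_multiple=4.0):
    """Flag a trade that breaks the customer's own profile: an unfamiliar
    sector, or an amount several times the customer's typical trade size."""
    reasons = []
    if trade["sector"] not in {t["sector"] for t in history}:
        reasons.append("unfamiliar sector")
    typical = mean(t["amount"] for t in history)
    if trade["amount"] > size_multiple * typical:
        reasons.append(f"amount over {size_multiple}x typical")
    return reasons

history = [{"sector": "tech", "amount": a} for a in (900, 1100, 1000)]
print(flag_trade(history, {"sector": "tech", "amount": 1200}))
# []
print(flag_trade(history, {"sector": "healthcare", "amount": 4500}))
# ['unfamiliar sector', 'amount over 4.0x typical']
```

Note that the profile is per-customer and the attack class is well defined, which is exactly why this fits the criteria above: false alarms are cheap (an analyst takes a second look) and the behavior being hunted is specific.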
Another news article here.
Some reality to counter the hype.
The Bottom Line
While there has been much consternation and alarm-raising over the potential for widespread proliferation of biological weapons and the possible use of such weapons on a massive scale, there are significant constraints on such designs. The current dearth of substantial biological weapons programs and arsenals by governments worldwide, and the even smaller number of cases in which systems were actually used, seems to belie—or at least bring into question—the intense concern about such programs.
While we would like to believe that countries such as the United States, the United Kingdom and Russia have halted their biological warfare programs for some noble ideological or humanitarian reason, we simply can’t. If biological weapons were in practice as effective as some would lead us to believe, these states would surely maintain stockpiles of them, just as they have maintained their nuclear weapons programs. Biological weapons programs were abandoned because they proved to be not as effective as advertised and because conventional munitions proved to provide more bang for the buck.
The UK has made public its previously classified National Risk Register.
The National Risk Register is intended to capture the range of emergencies that might have a major impact on all, or significant parts of, the UK. It provides a national picture of the risks we face, and is designed to complement Community Risk Registers, already produced and published locally by emergency planners. The driver for this work is the Civil Contingencies Act 2004, which also defines what we mean by emergencies, and what responsibilities are placed on emergency responders in order to prepare for them. Further information about the Act can be found on the UK Resilience website.
Seems like the greatest threat to national security is a flu pandemic.
Seems like the procedure has changed:
Mr. Peters nodded, and then looked down at the sheet which I had filled out and signed. “I’m going to have to make some calls to verify your identity.”
He pulled out a cell phone. I had assumed that we would be going to some separate screening room, but that wasn’t the case. He stood facing the silver table, and I leaned back against it. So this was the dreaded interview. People walked past us with bags and luggage.
“Hello,” he said. “Security.” Long pause. It sounded like he was transferred. He said a number that I think had the same number of digits as a phone number. Then he said a shorter number. “No, she doesn’t.” He wrote something in small letters on the form. Then he spelled my name over the phone. “D-A-V-I-D-O-F-F. That’s Indigo Delta… yes.”
He looked at me. “What’s the name of a street that you lived on prior to your current address?”
“Inman,” he repeated. There was a pause. “Where did you live in 2004?”
“Hmm…” I said. “New Mexico? I think? Maybe Massachusetts.”
He conferred with the person on the phone. “That’s fine.” He hung up.
“All right,” he said. “You’re going to go through full security screening.” He wrote “SSSS” in red marker on my printed boarding pass. He handed my form to one of the officers at the podium, and then gestured to the first screening line. “Right here.”
This only works if you’ve lost your ID, not if you refuse to show it.
Obama has a cyber security plan.
It’s basically what you would expect: Appoint a national cyber security advisor, invest in math and science education, establish standards for critical infrastructure, spend money on enforcement, establish national standards for securing personal data and data-breach disclosure, and work with industry and academia to develop a bunch of needed technologies.
I could comment on the plan, but with security the devil is always in the details—and, of course, at this point there are few details. But since he brought up the topic—McCain supposedly is “working on the issues” as well—I have three pieces of policy advice for the next president, whoever he is. They’re too detailed for campaign speeches or even position papers, but they’re essential for improving information security in our society. Actually, they apply to national security in general. And they’re things only government can do.
One, use your immense buying power to improve the security of commercial products and services. One property of technological products is that most of the cost is in the development of the product rather than the production. Think software: The first copy costs millions, but the second copy is free.
You have to secure your own government networks, military and civilian. You have to buy computers for all your government employees. Consolidate those contracts, and start putting explicit security requirements into the RFPs. You have the buying power to get your vendors to make serious security improvements in the products and services they sell to the government, and then we all benefit because they’ll include those improvements in the same products and services they sell to the rest of us. We’re all safer if information technology is more secure, even though the bad guys can use it, too.
Two, legislate results and not methodologies. There are a lot of areas in security where you need to pass laws, where the security externalities are such that the market fails to provide adequate security. For example, software companies who sell insecure products are exploiting an externality just as much as chemical plants that dump waste into the river. But a bad law is worse than no law. A law requiring companies to secure personal data is good; a law specifying what technologies they should use to do so is not. Mandating software liabilities for software failures is good, detailing how is not. Legislate for the results you want and implement the appropriate penalties; let the market figure out how—that’s what markets are good at.
Three, broadly invest in research. Basic research is risky; it doesn’t always pay off. That’s why companies have stopped funding it. Bell Labs is gone because nobody could afford it after the AT&T breakup, but the root cause was a desire for higher efficiency and short-term profitability—not unreasonable in an unregulated business. Government research can be used to balance that by funding long-term research.
Spread those research dollars wide. Lately, most research money has been redirected through DARPA to near-term military-related projects; that’s not good. Keep the earmark-happy Congress from dictating how the money is spent. Let the NSF, NIH and other funding agencies decide how to spend the money and don’t try to micromanage. Give the national laboratories lots of freedom, too. Yes, some research will sound silly to a layman. But you can’t predict what will be useful for what, and if funding is really peer-reviewed, the average results will be much better. Compared to corporate tax breaks and other subsidies, this is chump change.
If our research capability is to remain vibrant, we need more science and math students with decent elementary and high school preparation. The declining interest is partly from the perception that scientists don’t get rich like lawyers and dentists and stockbrokers, but also because science isn’t valued in a country full of creationists. One way the president can help is by trusting scientific advisers and not overruling them for political reasons.
Oh, and get rid of those post-9/11 restrictions on student visas that are causing so many top students to do their graduate work in Canada, Europe and Asia instead of in the United States. Those restrictions will hurt us immensely in the long run.
Those are the three big ones; the rest is in the details. And it’s the details that matter. There are lots of serious issues that you’re going to have to tackle: data privacy, data sharing, data mining, government eavesdropping, government databases, use of Social Security numbers as identifiers, and so on. It’s not enough to get the broad policy goals right. You can have good intentions and enact a good law, and have the whole thing completely gutted by two sentences sneaked in during rulemaking by some lobbyist.
Security is both subtle and complex, and—unfortunately—doesn’t readily lend itself to normal legislative processes. You’re used to finding consensus, but security by consensus rarely works. On the internet, security standards are much worse when they’re developed by a consensus body, and much better when someone just does them. This doesn’t always work—a lot of crap security has come from companies that have “just done it”—but nothing but mediocre standards come from consensus bodies. The point is that you won’t get good security without pissing someone off: The information broker industry, the voting machine industry, the telcos. The normal legislative process makes it hard to get security right, which is why I don’t have much optimism about what you can get done.
And if you’re going to appoint a cyber security czar, you have to give him actual budgetary authority. Otherwise he won’t be able to get anything done, either.
This essay originally appeared on Wired.com.
This is huge:
Two security researchers have developed a new technique that essentially bypasses all of the memory protection safeguards in the Windows Vista operating system, an advance that many in the security community say will have far-reaching implications not only for Microsoft, but also on how the entire technology industry thinks about attacks.
In a presentation at the Black Hat briefings, Mark Dowd of IBM Internet Security Systems (ISS) and Alexander Sotirov of VMware Inc. will discuss the new methods they’ve found to get around Vista protections such as Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP) and others by using Java, ActiveX controls and .NET objects to load arbitrary content into Web browsers.
By taking advantage of the way that browsers, specifically Internet Explorer, handle active scripting and .NET objects, the pair have been able to load essentially whatever content they want into a location of their choice on a user’s machine.
EDITED TO ADD (8/11): Here’s commentary that says this isn’t such a big deal after all. I’m not convinced; I think this will turn out to be a bigger problem than that.
Since its birth 12 years ago after a fatal kidnapping in Texas, Amber Alert has quickly become one of the best-known tools in the national law enforcement arsenal. The warnings are familiar to anyone who watches cable TV news, especially during the summer, when the drumbeat of abduction stories seems to increase. Last year, 227 alerts were issued nationwide, each galvanizing interest in the local community and flooding police with tips. While the particulars of the state systems differ, the goal is the same: to disperse news of a kidnapping as widely and quickly as possible, in the hope that someone will spot the kidnapper before a child is harmed.
The program’s champions say that its successes have been dramatic. According to the National Center for Missing and Exploited Children, more than 400 children have been saved by Amber Alerts. Of the 17 children Massachusetts has issued alerts on since it created its system in 2003, all have been safely returned.
These are encouraging statistics—but also deeply misleading, according to some of the only outside scholars to examine the system in depth. In the first independent study of whether Amber Alerts work, a team led by University of Nevada criminologist Timothy Griffin looked at hundreds of abduction cases between 2003 and 2006 and found that Amber Alerts—for all their urgency and drama—actually accomplish little. In most cases where they were issued, Griffin found, Amber Alerts played no role in the eventual return of abducted children. Their successes were generally in child custody fights that didn’t pose a risk to the child. And in those rare instances where kidnappers did intend to rape or kill the child, Amber Alerts usually failed to save lives.
According to a recent court ruling, we are all subject to the provisions of the DMCA, but the government is not:
The Court of Federal Claims that first heard the case threw it out, and the new Appellate ruling upholds that decision. The reasoning behind the decisions focuses on the US government’s sovereign immunity, which the court describes thusly: “The United States, as [a] sovereign, ‘is immune from suit save as it consents to be sued . . . and the terms of its consent to be sued in any court define that court’s jurisdiction to entertain the suit.'”
In the case of copyright law, the US has given up much of its immunity, but the government retains a few noteworthy exceptions. The one most relevant to this case says that when a government employee is in a position to induce the use of the copyrighted material, “[the provision] does not provide a Government employee a right of action ‘where he was in a position to order, influence, or induce use of the copyrighted work by the Government.'” Given that Davenport used his position as part of the relevant Air Force office to get his peers to use his software, the case fails this test.
But the court also addressed the DMCA claims made by Blueport, and its decision here is quite striking. “The DMCA itself contains no express waiver of sovereign immunity,” the judge wrote, “Indeed, the substantive prohibitions of the DMCA refer to individual persons, not the Government.” Thus, because sovereign immunity is not explicitly eliminated, and the phrasing of the statute does not mention organizations, the DMCA cannot be applied to the US government, even in cases where the more general immunity to copyright claims does not apply.
It appears that Congress took a “do as we say, not as we need to do” approach to strengthening digital copyrights.
The headline says it all: “‘Fakeproof’ e-passport is cloned in minutes.”
Does this surprise anyone? This is what I wrote about electronic passports two years ago in The Washington Post:
The other security mechanisms are also vulnerable, and several security researchers have already discovered flaws. One found that he could identify individual chips via unique characteristics of the radio transmissions. Another successfully cloned a chip. The State Department called this a “meaningless stunt,” pointing out that the researcher could not read or change the data. But the researcher spent only two weeks trying; the security of your passport has to be strong enough to last 10 years.
This is perhaps the greatest risk. The security mechanisms on your passport chip have to last the lifetime of your passport. It is as ridiculous to think that passport security will remain secure for that long as it would be to think that you won’t see another security update for Microsoft Windows in that time. Improvements in antenna technology will certainly increase the distance at which they can be read and might even allow unauthorized readers to penetrate the shielding.
It was really big news yesterday, but I don’t think it’s that much of a big deal. These crimes are still easy to commit and it’s still too hard to catch the criminals. Catching one gang, even a large one, isn’t going to make us any safer.
If we want to mitigate identity theft, we have to make it harder for people to get credit, make transactions, and generally do financial business remotely:
The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. This is what’s been in the news recently: ChoicePoint, LexisNexis, Bank of America, and so on. But data privacy is more than just fraud. Whether it is the books we take out of the library, the websites we visit, or the contents of our text messages, most of us have personal data on third-party computers that we don’t want made public. The posting of Paris Hilton’s phone book on the Internet is a celebrity example of this.
The second issue is the ease with which a criminal can use personal data to commit fraud. It doesn’t take much personal information to apply for a credit card in someone else’s name. It doesn’t take much to submit fraudulent bank transactions in someone else’s name. It’s surprisingly easy to get an identification card in someone else’s name. Our current culture, where identity is verified simply and sloppily, makes it easier for a criminal to impersonate his victim.
Proposed fixes tend to concentrate on the first issue—making personal data harder to steal—whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.
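To illustrate the kind of transaction-level detection I mean, here is a toy sketch (my own example, not any bank’s actual system) that flags charges deviating sharply from an account’s spending history. Real systems use far richer signals—merchant, geography, velocity—but the principle is the same: catch the fraudulent transaction, regardless of how the criminal got the data.

```python
# Toy fraud screen: flag transactions that deviate sharply from an
# account's spending history. Purely illustrative; production systems
# use many more features than the dollar amount alone.
from statistics import mean, stdev

def flag_anomalies(history, new_transactions, threshold=3.0):
    """Return transactions more than `threshold` standard deviations
    above the account's historical mean amount."""
    mu = mean(history)
    sigma = stdev(history)
    return [t for t in new_transactions if (t - mu) / sigma > threshold]

past = [42.0, 18.5, 60.0, 25.0, 33.0, 47.5, 29.0, 55.0]
incoming = [38.0, 2500.0, 41.0]
print(flag_anomalies(past, incoming))  # only the $2500 charge stands out
```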
I am, however, impressed that we managed to pull together the police forces from several countries to prosecute this case.
London’s Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won’t be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.
Here’s the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the “Mifare Classic” chip, is used in hundreds of other transport systems as well—Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro—and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world.
The security of Mifare Classic is terrible. This is not an exaggeration; it’s kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.
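To put “kindergarten cryptography” in perspective: the Crypto-1 cipher on the Mifare Classic uses a 48-bit key. A back-of-the-envelope calculation—just the arithmetic of exhaustive search, not the published attacks, which are much cleverer and faster than brute force—shows why that key length alone is hopeless (the guess rate below is an assumed figure for dedicated hardware):

```python
# How long would a brute-force search of a 48-bit keyspace take?
# The guesses-per-second figure is an assumption for purpose-built
# hardware; the published Mifare attacks are far faster than this.
keyspace = 2 ** 48                  # Crypto-1 uses a 48-bit key
guesses_per_second = 10 ** 9        # assumed: one billion tries/sec

seconds = keyspace / guesses_per_second
days = seconds / 86400
print(f"{keyspace:,} keys, exhausted in about {days:.1f} days")
```

Compare that with the 128-bit keys that were already standard in open, peer-reviewed designs at the time.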
The group that broke Mifare Classic is from Radboud University Nijmegen in the Netherlands. They demonstrated the attack by riding the Underground for free, and by breaking into a building. Their two papers (one is already online) will be published at two conferences this autumn.
The second paper is the one that NXP sued over. They called disclosure of the attack “irresponsible,” warned that it will cause “immense damages,” and claimed that it “will jeopardize the security of assets protected with systems incorporating the Mifare IC.” The Dutch court would have none of it: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”
Exactly right. More generally, the notion that secrecy supports security is inherently flawed. Whenever you see an organization claiming that design secrecy is necessary for security—in ID cards, in voting machines, in airport security—it invariably means that its security is lousy and it has no choice but to hide it. Any competent cryptographer would have designed Mifare’s security with an open and public design.
Secrecy is fragile. Mifare’s security was based on the belief that no one would discover how it worked; that’s why NXP had to muzzle the Dutch researchers. But that’s just wrong. Reverse-engineering isn’t hard. Other researchers had already exposed Mifare’s lousy security. A Chinese company even sells a compatible chip. Is there any doubt that the bad guys already know about this, or will soon enough?
Publication of this attack might be expensive for NXP and its customers, but it’s good for security overall. Companies will only design security as good as their customers know to ask for. NXP’s security was so bad because customers didn’t know how to evaluate security: either they didn’t know what questions to ask, or they didn’t know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.
It’s unclear how this break will affect Transport for London. Cloning takes only a few seconds, and the thief only has to brush up against someone carrying a legitimate Oyster card. But it requires an RFID reader and a small piece of software which, while feasible for a techie, are too complicated for the average fare dodger. The police are likely to quickly arrest anyone who tries to sell cloned cards on any scale. TfL promises to turn off any cloned cards within 24 hours, but that will hurt the innocent victim who had his card cloned more than the thief.
The vulnerability is far more serious to the companies that use Mifare Classic as an access pass. It would be very interesting to know how NXP presented the system’s security to them.
And while these attacks only pertain to the Mifare Classic chip, it makes me suspicious of the entire product line. NXP sells a more secure chip and has another on the way, but given the number of basic cryptography mistakes NXP made with Mifare Classic, one has to wonder whether the “more secure” versions will be sufficiently so.
This essay originally appeared in the Guardian.
From the Dilbert blog:
They then said that I could not fill it out—my manager had to. I told them that my manager doesn’t work in the building, nor does anyone in my management chain. This posed a problem for the crack security team. At last, they formulated a brilliant solution to the problem. They told me that if I had a grocery bag in my office I could put the laptop in it and everything would be okay. Of course, I don’t have grocery bags in my office. Who would? I did have a windbreaker, however. So I went up to my office, wrapped up the laptop in my windbreaker, and went back down.
People put in charge of implementing a security policy are more concerned with following the letter of the policy than they are about improving security. So even if what they do makes no sense—and they know it makes no sense—they have to do it in order to follow “policy.”
They’re all here:
Via a Freedom of Information Act request (which involved paying $700 and waiting almost 4 years), The Memory Hole has obtained blank copies of most forms used by the National Security Agency.
Most are not very interesting, but I agree with Russ Kick:
They range from the exotic to the pedestrian, but even the most prosaic form shines some light into the workings of No Such Agency.
Stealing databases of personal information is never good, but this doesn’t make a bit of difference to airport security. I’ve already written about the Clear program: it’s a $100-a-year program that lets you cut the security line, and nothing more. Clear members are no more trusted than anyone else.
None of this is security. Absolutely none of it.
EDITED TO ADD (8/7): The laptop has been found. Turns out it was never stolen:
The laptop was found Tuesday morning in the same company office where it supposedly had gone missing, said spokeswoman Allison Beer.
“It was not in an obvious location,” said Beer, who said an investigation was under way to determine whether the computer was actually stolen or had just been misplaced.
Why in the world do these people not use full-disk encryption?
Soldiers were deployed throughout Italy on Monday to embassies, subway and railway stations, as part of broader government measures to fight violent crime here for which illegal immigrants are broadly blamed.
The conservative government of Silvio Berlusconi won elections in April while promising to crack down on petty crime and illegal immigrants. The new patrols of soldiers, who are not empowered to make arrests, do not seem aimed only at illegal immigrants, though the patrols were deployed to centers where illegal immigrants are housed.
“Security is something concrete,” Mr. La Russa said on Monday. The troops, he said, will be a “deterrent to criminals.”
That reminds me of one of my favorite logical fallacies: “We must do something. This is something. Therefore, we must do it.” It does seem largely to be a demonstration of “doing something” by the Berlusconi government. The legitimate police, of course, think it’s a terrible idea.
“You need to be specially trained to carry out some kinds of controls,” said Nicola Tanzi, the secretary of a trade union that represents Italian police officers. “Soldiers just aren’t qualified.”
He also questioned whether the $93.6 million that will be spent for the extra deployment, called Operation Safe Streets, might not have been better used to increase the budgets for Italy’s police and military.
A grisly slaying on a Greyhound bus has prompted calls for tighter security on Canadian bus lines, despite the company and Canada’s transport agency calling the stabbing death a tragic but isolated incident.
Greyhound spokeswoman Abby Wambaugh said bus travel is the safest mode of transportation, even though bus stations do not have metal detectors and other security measures used at airports.
“Hearing about this incident really worries me,” said Donna Ryder, 56, who was waiting Thursday at the bus depot in Toronto.
“I’m in a wheelchair and what would I be able to do to defend myself? Probably nothing. So that’s really scary.”
Ryder, who was heading to Kitchener, Ont., said buses are essentially the only way she can get around the province, as her wheelchair won’t fit on Via Rail trains. As it is her main option for travel, a lack of security is troubling, she said.
“I guess we’re going to have to go the airline way, maybe have a search and baggage check, X-ray maybe,” she said.
“Really, I don’t know what you can do about security anymore.”
Of course, airplane security won’t work on buses.
But—more to the point—this essay I wrote on overreacting to rare risks applies here:
People tend to base risk analysis more on personal story than on data, despite the old joke that “the plural of anecdote is not data.” If a friend gets mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than abstract crime statistics.
We give storytellers we have a relationship with more credibility than strangers, and stories that are close to us more weight than stories from foreign lands. In other words, proximity of relationship affects our risk assessment. And who is everyone’s major storyteller these days? Television.
Which is why Canadians are talking about increasing security on long-haul buses, and not Americans.
EDITED TO ADD (8/4): Look at this headline: “Man beheads girlfriend on Santorini island.” Do we need airport-style security measures for Greek islands, too?
EDITED TO ADD (8/5): A surprisingly refreshing editorial:
Here is our suggestion for what ought to be done to upgrade the security of bus transportation after the knife killing of Tim McLean by a fellow Greyhound bus passenger: nothing. Leave the system alone. Mr. McLean could have been murdered equally easily by a random psychopath in a movie theatre or a classroom or a wine bar or a shopping mall—or on his front lawn, for that matter. Unless all of those venues, too, are to be included in the new post-Portage la Prairie security crackdown, singling out buses makes no sense.
There’s a quote attributed to me here:
Well-known author and expert on security, Bruce Schneier, born in 1963, maintains “Terrorists can only take my life. Only my government can take my freedom.”
I don’t think I’ve ever said that. It certainly doesn’t sound like something I would say. It’s not in any of my books. It’s not in any of the essays I’ve written.
So I Googled the quote. Here it is being used as a sig in December 2001, without attribution. The real source must be at least as old as that. The immediate source might be this blog. Possibly, it might come from this comment to my blog, reworded and attributed to me:
Surely the man who trades freedom for security theatre deserves both freedom and security less than the first man!
I like that quote, “we must remember that we have more power than our enemies to worsen our fate”. Terrorists can, at most, take away my life. They can never take away my freedom. Only my government has the power to do that.
Anyone have any better theories?
Amazing. The U.S. government has published its policy: they can take your laptop anywhere they want, for as long as they want, and share the information with anyone they want:
Federal agents may take a traveler’s laptop or other electronic device to an off-site location for an unspecified period of time without any suspicion of wrongdoing, as part of border search policies the Department of Homeland Security recently disclosed. Also, officials may share copies of the laptop’s contents with other agencies and private entities for language translation, data decryption, or other reasons, according to the policies, dated July 16 and issued by two DHS agencies, US Customs and Border Protection and US Immigration and Customs Enforcement.
DHS officials said that the newly disclosed policies—which apply to anyone entering the country, including US citizens—are reasonable and necessary to prevent terrorism.
The policies cover ‘any device capable of storing information in digital or analog form,’ including hard drives, flash drives, cell phones, iPods, pagers, beepers, and video and audio tapes. They also cover ‘all papers and other written documentation,’ including books, pamphlets and ‘written materials commonly referred to as “pocket trash…”’
It’s not the policy that’s amazing; it’s the fact that the government has actually made it public.
Here’s the actual policy.
Although honestly, the best thing is probably to keep your encrypted archives on some network drive somewhere, and download what you need after you cross the border.
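A minimal sketch of that workflow, using OpenSSL’s command-line tool (the filenames are hypothetical, and the passphrase is supplied inline only so the example runs non-interactively—in real use you’d omit `-pass` and let openssl prompt, or use GnuPG):

```shell
# Encrypt before traveling; the cleartext never crosses the border with you.
echo "sensitive notes" > documents.txt
tar czf documents.tar.gz documents.txt
openssl enc -aes-256-cbc -salt -pass pass:correct-horse-battery \
    -in documents.tar.gz -out documents.tar.gz.enc
rm documents.tar.gz documents.txt      # carry only the ciphertext, or nothing

# ...upload documents.tar.gz.enc to a server you control; after crossing,
# download it and decrypt:
openssl enc -d -aes-256-cbc -pass pass:correct-horse-battery \
    -in documents.tar.gz.enc -out documents.tar.gz
tar xzf documents.tar.gz
cat documents.txt                      # the notes are back
```

Customs can still image the laptop, of course; they just won’t find anything on it worth imaging.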
When Indian police investigating bomb blasts which killed 42 people traced an email claiming responsibility to a Mumbai apartment, they ordered an immediate raid.
But at the address, rather than seizing militants from the Islamist group which said it carried out the attack, they found a group of puzzled American expats.
In a cautionary tale for those still lax with their wireless internet security, police believe the email about the explosions on Saturday in the west Indian city of Ahmedabad was sent after someone hijacked the network belonging to one of the Americans, 48-year-old Kenneth Haywood.
Of course, the terrorists could have sent the e-mail from anywhere. But life is easier if the police don’t raid your apartment.
EDITED TO ADD (8/1): My wireless network is still open. But, honestly, the terrorists are more likely to use the open network at the coffee shop up the street and around the corner.