Blog: August 2008 Archives
Mr Jetley said he first realised his security password had been changed when a call centre staff member told him his code word did not match the one on the computer.
“I thought it was actually quite a funny response,” he said.
“But what really incensed me was when I was told I could not change it back to ‘Lloyds is pants’ because they said it was not appropriate.
“The rules seemed to change, and they told me it had to be one word, so I tried ‘censorship’, but they didn’t like that, and then said it had to be no more than six letters long.”
Lloyds claims that it fired the employee responsible, but what I want to know is how the employee got a copy of the man’s password in the first place. Why isn’t it stored only in encrypted form on the bank’s computers?
How secure can the bank’s computer systems be if employees are allowed to look at and change customer passwords at whim?
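The fix here is standard practice: never store the password itself, only a salted, slow hash of it. An employee (or an attacker who steals the database) then sees only the hash, which can verify a password but not reveal it. A minimal sketch in Python, using the standard library's PBKDF2 (the parameter choices are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash; only (salt, digest) is stored,
    never the password itself."""
    if salt is None:
        salt = os.urandom(16)  # unique per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash from the candidate password and compare
    in constant time; the original password is never looked up."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)
```

With a scheme like this, a call-centre employee could check a customer’s password but could neither read it nor quietly swap it for a joke of their own without triggering a reset the customer would notice.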
It’s a man-in-the-middle attack. “The Internet’s Biggest Security Hole” (the title of that first link) is that interior relays have always been trusted even though they are not trustworthy.
EDITED TO ADD (9/12): This is worth reading.
A plane was forced to land when a passenger had an extreme allergic reaction to a leaking jar of mushroom soup, it was revealed today.
The soup fell on the man from an overhead locker on a Ryanair flight to Dublin from Budapest.
He reportedly suffered allergic swelling in his neck and struggled to breathe, forcing staff to seek emergency medical treatment.
It’s unclear if this error is random or systematic. If it’s random — a small percentage of all votes are dropped — then it is highly unlikely that this affected the outcome of any election. If it’s systematic — a small percentage of votes for a particular candidate are dropped — then it is much more problematic.
Ohio is trying to sue:
Ohio Secretary of State Jennifer Brunner is seeking to recover millions of dollars her state spent on the touch-screen machines and is urging the state legislature to require optical scanners statewide instead.
In a lawsuit, Brunner charged on Aug. 6 that touch-screen machines made by the former Diebold Election Systems and bought by 11 Ohio counties “produce computer stoppages” or delays and are vulnerable to “hacking, tampering and other attacks.” In all, 44 Ohio counties spent $83 million in 2006 on Diebold’s touch screens.
In other news, election officials sometimes take voting machines home for the night.
My 2004 essay: “Why Election Technology is Hard.”
It’s all about the captions:
…doctored photographs are the least of our worries. If you want to trick someone with a photograph, there are lots of easy ways to do it. You don’t need Photoshop. You don’t need sophisticated digital photo-manipulation. You don’t need a computer. All you need to do is change the caption.
The photographs presented by Colin Powell at the United Nations in 2003 provide several examples. Photographs that were used to justify a war. And yet, the actual photographs are low-res, muddy aerial surveillance photographs of buildings and vehicles on the ground in Iraq. I’m not an aerial intelligence expert. I could be looking at anything. It is the labels, the captions, and the surrounding text that turn the images from one thing into another.
Powell was arguing that the Iraqis were doing something wrong, knew they were doing something wrong, and were trying to cover their tracks. Later, it was revealed that the captions were wrong. There was no evidence of chemical weapons and no evidence of concealment.
There is a larger point. I don’t know what these buildings were really used for. I don’t know whether they were used for chemical weapons at one time, and then transformed into something relatively innocuous, in order to hide the reality of what was going on from weapons inspectors. But I do know that the yellow captions influence how we see the pictures. “Chemical Munitions Bunker” is different from “Empty Warehouse” which is different from “International House of Pancakes.” The image remains the same but we see it differently.
Change the yellow labels, change the caption and you change the meaning of the photographs. You don’t need Photoshop. That’s the disturbing part. Captions do the heavy lifting as far as deception is concerned. The pictures merely provide the window-dressing. The unending series of errors engendered by falsely captioned photographs is rarely remarked on.
In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.
The “Oyster card” used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston “T” was the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start — despite facing an open-and-shut case of First Amendment prior restraint.
The U.S. court has since seen the error of its ways — but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.
The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors — who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.
Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It wasn’t until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this “responsible disclosure” protocol, it’s the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.
Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems and many other security devices.
Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers — sometimes young students — who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.
This preference for secrecy comes from confusing a vulnerability with information about that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys don’t do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information won’t leak out even if the research results are suppressed. These assumptions are all incorrect.
The problem isn’t the researchers; it’s the products themselves. Companies will only design security as good as what their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”
In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities don’t get fixed, and customers are no wiser. Security research is stifled, and security technology doesn’t improve. The only beneficiaries are the bad guys.
If you’ll forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we don’t pay kidnappers: It encourages more kidnappings. Yet in every kidnapping case, there’s someone — a spouse, a parent, an employer — with a good reason why, in this one case, we should make an exception.
The reason we want researchers to publish vulnerabilities is because that’s how security improves. But in every case there’s someone — the Massachusetts Bay Transit Authority, the locksmiths, an election machine manufacturer — who argues that, in this one case, we should make an exception.
We shouldn’t. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It’s how we learn about security, and how we improve future security.
This essay previously appeared on Wired.com.
EDITED TO ADD (8/26): Matt Blaze has a good essay on the topic.
EDITED TO ADD (9/12): A good legal analysis.
Interesting: the solution to one problem causes another.
“The rigorous studies clearly show red-light cameras don’t work,” said lead author Barbara Langland-Orban, professor and chair of health policy and management at the USF College of Public Health. “Instead, they increase crashes and injuries as drivers attempt to abruptly stop at camera intersections.”
Comprehensive studies from North Carolina, Virginia, and Ontario have all reported cameras are associated with increases in crashes. The study by the Virginia Transportation Research Council also found that cameras were linked to increased crash costs. The only studies that conclude cameras reduced crashes or injuries contained “major research design flaws,” such as incomplete data or inadequate analyses, and were always conducted by researchers with links to the Insurance Institute for Highway Safety. The IIHS, funded by automobile insurance companies, is the leading advocate for red-light cameras since insurance companies can profit from red-light cameras by way of higher premiums due to increased crashes and citations.
And, of course, the agenda of the government is to increase revenue due to fines:
A 2001 paper by the Office of the Majority Leader of the U.S. House of Representatives reported that red-light cameras are “a hidden tax levied on motorists.” The report came to the same conclusions that all of the other valid studies have, that red-light cameras are associated with increased crashes and that the timings at yellow lights are often set too short to increase tickets for red-light running. That’s right, the state actually tampers with the yellow light settings to make them shorter, and more likely to turn red as you’re driving through them.
In fact, six U.S. cities have been found guilty of shortening the yellow light cycles below what is allowed by law on intersections equipped with cameras meant to catch red-light runners. Those local governments have completely ignored the safety benefit of increasing the yellow light time and decided to install red-light cameras, shorten the yellow light duration, and collect the profits instead.
The cities in question include Union City, CA, Dallas and Lubbock, TX, Nashville and Chattanooga, TN, and Springfield, MO, according to Motorists.org, which collected information from reports from around the country.
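Why shortening the yellow matters is simple kinematics. The standard traffic-engineering approach sets the yellow interval to cover driver reaction time plus the time needed to brake comfortably from the approach speed; cut the interval below that and some drivers can neither stop in time nor clear the intersection. A back-of-the-envelope sketch (the reaction time and deceleration values are typical textbook figures, not any city’s actual settings):

```python
def min_yellow_seconds(speed_kmh, reaction_time=1.0, decel=3.0):
    """Minimum yellow interval: perception-reaction time plus the
    time to brake comfortably from the approach speed (flat grade
    assumed; decel in m/s^2)."""
    v = speed_kmh / 3.6          # convert km/h to m/s
    return reaction_time + v / (2 * decel)
```

At 50 km/h this gives roughly 3.3 seconds, and faster roads need longer yellows still. A city that trims a 4-second yellow to 3 seconds on a 70 km/h approach has, by construction, created a zone where red-light violations (and rear-end crashes from panic braking) are inevitable.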