September 15, 2001
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe or unsubscribe, see below.
Copyright (c) 2001 by Counterpane Internet Security, Inc.
In this issue:
- 11 September 2001
- NSA's Dual Counter Mode
- Corrections and Explanations
- Crypto-Gram Reprints
- Counterpane Internet Security News
- The Doghouse: Shyfile
- New Microsoft Root Certificate Program
- Comments from Readers
Both sides of the calendar debate were wrong; the new century began on 11 September 2001.
All day I fielded phone calls from reporters looking for the "computer security angle" to the story. I couldn't find one, although I expect several to come out of the aftermath.
Calls for increased security began immediately. Unfortunately, the quickest and easiest way to satisfy those demands is by decreasing liberties. This is always shortsighted; real security solutions that preserve the free society we all hold dear do exist, but they're harder to find and require reasoned debate. Strong police forces without Constitutional limitations might appeal to those wanting immediate safety, but the reality is the opposite. Laws that limit police power can increase security, by enforcing honesty, integrity, and fairness. It is our very liberties that make our society as safe as it is.
In times of crisis it's easy to disregard these liberties or, worse, to actively attack them and stigmatize those who support them. We've already seen government proposals for increased wiretapping capabilities and renewed rhetoric about encryption limitations. I fully expect more automatic surveillance of ordinary citizens, limits on information flow and digital-security technologies, and general xenophobia. I do not expect much debate about their actual effectiveness, or their effects on freedom and liberty. It's easier just to react. In 1996, TWA Flight 800 exploded and crashed in the Atlantic. Originally people thought it was a missile attack. The FBI demanded, and Congress passed, a law giving law enforcement greater abilities to expel aliens from the country. Eventually we learned the crash was caused by a mechanical malfunction, but the law still stands.
We live in a world where nation states are not the only institutions which wield power. International bodies, corporations, non-governmental organizations, pan-national ethnicities, and disparate political groups all have the ability to affect the world in an unprecedented manner. As we adjust to this new reality, it is important that we don't become the very forces we abhor. I consider the terrorist attacks on September 11th to be an attack against America's ideals. If our freedoms erode because of those attacks, then the terrorists have won.
The ideals we uphold during a crisis define who we are. Freedom and liberty have a price, and that price is constant vigilance, so that they are not taken from us in the name of security. Ben Franklin said something that was often repeated during the American Revolutionary War: "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." It is no less true today.
Calls to ban encryption:
Re-emergence of Carnivore:
Erosions of civil liberties are coming:
"Americans must rethink how to safeguard the country without bartering away the rights and privileges of the free society that we are defending. The temptation will be great in the days ahead to write draconian new laws that give law enforcement agencies - or even military forces - a right to undermine the civil liberties that shape the character of the United States. President Bush and Congress must carefully balance the need for heightened security with the need to protect the constitutional rights of Americans."
- The New York Times, 12 Sep 01
"Our values, our resolve, our commitment, our sense of community will serve us well. I am confident that, as a nation, we will seek and serve justice. Our Nation, my neighbors and friends in Vermont demand no less, but we must not let the terrorists win. If we abandon our democracy to battle them, they win. If we forget our role as the world's leader to defeat them, they win. And we will win. We will maintain our democracy, and with justice, we will use our strength."
- Sen. Patrick Leahy, 12 Sep 01
"History teaches that grave threats to liberty often come in times of urgency, when constitutional rights seem too extravagant to endure."
- Justice Thurgood Marshall, 1989
Last month I mentioned that NIST is soliciting new modes of operation for AES. One of the modes submitted was "Dual Counter Mode," by Mike Boyle and Chris Salter of the National Security Agency. Within days of publication, there were at least two successful cryptanalyses of the mode. (One of them is linked to below. The other, by Phillip Rogaway, has not been published.) Just over a week later, NSA withdrew the mode from consideration.
As cryptographers, we tend to be sloppy with the word "break." There's a world of difference between an academic break and a practical break. I wrote about this extensively during the AES process. Academic breaks are the ones that force you to change things in the design process; practical breaks force you to change things in fielded equipment. This work is clearly of the academic-break variety.
This is not to minimize it; it's great work. Donescu, Gligor, and Wagner broke both the security and integrity properties of the mode. That is, they showed that the NSA claims for this mode were not true.
NSA documentation stated that they had been working on this mode for 18 months. Honestly, I don't know whether to believe them. DCM is poorly specified, a sloppiness that I would have expected the NSA to catch as the mode moved from office to office for review.
The meta-moral here is that security proofs are important. For 20 years, people tried to build "two-for-the-price-of-one" modes that gave integrity for free, and essentially all such schemes were broken. As a result, the conventional wisdom was that this was hopeless. Then Charanjit Jutla published a paper showing that it was possible; he gave a proof of correctness. Since then, all new "two-for-one" modes from the research community come with proofs, whereas the NSA's mode came with no proof. When taken in the context of all the previous failed attempts to build such a mode, it should not be terribly surprising to find that yet another unproven mode got broken. The lesson is clear. If you're going to build a new mode of operation, for heaven's sake, make sure you've got a proof of security for it.
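The malleability that sinks so many of these modes is easy to demonstrate. Here is a minimal Python sketch -- a generic counter-style stream cipher built from a hash, not DCM itself -- showing that encryption alone provides no integrity: an attacker can make a controlled change to the plaintext without knowing the key. Any "two-for-one" mode claiming to fix this needs a proof.

```python
# Illustration only: counter-mode-style encryption is malleable,
# so confidentiality by itself provides no integrity.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Generate a keystream by hashing key || nonce || counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt by XORing data with the keystream."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"k" * 16, b"n" * 8
plaintext = b"pay $0000100 to alice"
ct = ctr_xor(key, nonce, plaintext)

# Attacker flips ciphertext bits without knowing the key:
delta = bytes(x ^ y for x, y in zip(b"$0000100", b"$9999999"))
forged = ct[:4] + bytes(c ^ d for c, d in zip(ct[4:12], delta)) + ct[12:]

print(ctr_xor(key, nonce, forged))  # b'pay $9999999 to alice'
```

The forged ciphertext decrypts cleanly to an amount of the attacker's choosing, which is exactly the kind of property an integrity-providing mode must provably rule out.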
In summary, the attacks are real and impressive, but not practical in the real world. Still, the NSA really should have caught them. They were right to withdraw the mode.
NIST Proposed Modes:
NSA's Dual Counter Mode:
The cryptanalysis paper:
Other documentation, including NSA's retraction:
I've received some reports that Dmitry Sklyarov handed out copies of Elcomsoft's eBook-security-cracking software at DefCon. If this is true, it certainly muddies the whole issue. It doesn't make the DMCA any better a law, but distributing such software within the U.S. is clearly a violation of it.
There has been a vicious rumor floating around that Ed Felten signed an NDA before he evaluated the various RIAA technologies, and he violated that NDA by publishing. This is completely false. The RIAA had a click-through license on their Web page, stating that any researchers breaking the technology would have to sign an NDA to claim the prize. Felten never claimed the prize, and never signed an NDA. Here is the agreement:
I've also received considerable mail about my news item poking fun at Microsoft for sending out signed security bulletins that fail verification. I know that PGP uses a web of trust, and that the only way to verify a signature is to have the public key signed by some mutually trusted party. My complaint with Microsoft is that they sign their bulletins with a public key that is unsigned, by anybody. Hence it is impossible to verify the signature.
One way to think of PGP is that every user is also a CA. By asking someone to sign your key, you are asking that person to be a CA on your behalf. By choosing to trust keys signed by someone, you are choosing to trust him or her as a CA. When Microsoft uses a public key without any signatures, they bypass this mechanism. You can check the signature on Microsoft security bulletins; the signature might compute as valid, but you still have no way of knowing that the signing key actually belongs to Microsoft. You can use some non-cryptographic means to verify the key -- you got it off the Microsoft Web site, all Microsoft bulletins are signed with the same key, etc. -- but that has a set of non-cryptographic risks that you may not want to accept. Were you fooled by those fake Microsoft service bulletins or CNN news reports that have gone by in the last year or so?
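The validity check at the heart of the web of trust can be sketched in a few lines of Python. This is a toy model with made-up key names, not PGP's actual code: a signature may verify against a key, but the key is only as trustworthy as the introducers who signed it.

```python
# Hypothetical sketch of PGP-style key validity. A key is "valid"
# only if at least one introducer I trust has signed it. Names
# and fingerprints here are invented for illustration.

# key fingerprint -> set of fingerprints that signed it
signatures = {
    "MSFT-KEY": set(),           # signed by nobody at all
    "ALICE-KEY": {"MY-KEY"},     # I signed Alice's key myself
    "BOB-KEY": {"ALICE-KEY"},    # Alice vouched for Bob
}
trusted_introducers = {"MY-KEY", "ALICE-KEY"}  # keys I treat as CAs

def key_is_valid(fingerprint: str) -> bool:
    """Valid iff some trusted introducer signed this key."""
    return bool(signatures.get(fingerprint, set()) & trusted_introducers)

print(key_is_valid("BOB-KEY"))   # True: Alice, whom I trust, signed it
print(key_is_valid("MSFT-KEY"))  # False: the signature on a bulletin
                                 # may verify, but the key's ownership
                                 # cannot be established
```

A key with no signatures, like the Microsoft key in this model, can never become valid no matter how many bulletins it signs.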
The way for Microsoft to solve this is to get their key signed by a variety of trusted organizations, thereby immersing itself in PGP's web of trust. The fact that Microsoft doesn't understand this is what I was poking fun at.
On the other hand, someone else pointed out that SANS and CERT advisories suffer from the same problem. Oops.
The "Code Red III" report out of Korea seems to have been a false alarm, at least at the time I wrote that piece on Code Red last month. But since then, there's been at least one other Code Red variant, so my point stands.
And about a zillion people sent e-mail to tell me that the Code Red worm was not based on an eEye vulnerability announcement, but on an older worm without a sexy name. (Oddly enough, eEye never contacted me to complain.) I was wrong, and I unjustly laid some of the blame for Code Red onto eEye. Apologies to them.
A much larger issue -- and also one I received a boatload of mail about -- is whether people and companies who disclose vulnerabilities are blameworthy when hackers write exploits that target that vulnerability. More than one person pointed out the irony of me blaming eEye in one part of Crypto-Gram, and then praising Sklyarov in another. One person's reaction was included as comments in a piece of exploit code (+10 points for style, but I prefer e-mail).
In general, I am a strong proponent of full disclosure. It is a long-standing tradition in cryptography, and has done more to improve computer and network security than anything previously tried. However, there is a fine line between full disclosure and arming criminals, and it's not always obvious where that line is. Sometimes I'm not even sure that I know. And when it is not obvious, I tend to side with openness.
I plan on talking about this in a future issue of Crypto-Gram.
Full disclosure and the Window of Exposure:
Open-source and security:
Factoring a 512-bit number:
GCHQ (the British military crypto research facility that independently invented public-key cryptography in the 1970s) has a new identity-based public-key scheme. The details, and source code, are published.
Microsoft's initial Code Red patch didn't necessarily work. I believe this has been fixed in later versions of the patch.
This story about computerized car-security devices doesn't talk about the safety issues. What if there's a short circuit in the car while the real owner is driving, and the door locks go into "prisoner" mode? What if the engine-kill feature in a stolen car kicks in while the car is on a busy highway, simply because there's a police station nearby? I don't think computer security is reliable enough for this kind of thing.
Viruses and instant messaging:
A Senate report on the NSA says they're not doing well in the face of technological advances.
Niels Ferguson says he's broken Intel's HDCP Digital Video Encryption System, but won't reveal details because he's afraid of being arrested under the DMCA. I know Niels, and have always found his honesty unimpeachable. If he says he can break HDCP, he can. Intel can claim that the break is only theoretical, not practical, and nothing to worry about, but I believe Niels is telling the truth. Kind of sounds like the security world before full disclosure: as long as the details stay hidden, the affected companies can simply lie and say there's nothing to worry about.
Ferguson's essay on the topic:
Yet another 802.11 wireless security flaw:
AirSnort is a new hacking tool for wireless networks. It grabs network traffic and recovers the master encryption password, using some of the vulnerabilities previously published. One interesting feature is that you don't need to sniff all the packets in one sitting. Instead, you can accumulate small increments over a long period of time to accomplish the same goal. This can even be done with a program that runs when the laptop is apparently asleep. That is, someone can quietly be sniffing your network, even when they aren't actively using their computer. Maybe they're just sitting in your lobby, waiting for a job interview, or something equally innocuous.
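The accumulation idea can be sketched as follows. This is a hypothetical illustration of merging attack statistics across independent capture sessions, not AirSnort's actual code; the per-key-byte "votes" and their counts are invented. The point is that the statistics are additive, so many short sniffing sessions add up to one long one.

```python
# Hypothetical sketch: statistical votes for each key byte,
# gathered from weak packets, merged across capture sessions.
# Vote counts below are made up for illustration.
from collections import Counter

def merge_sessions(sessions):
    """Combine per-key-byte vote counters from independent captures."""
    totals = {}
    for session in sessions:
        for byte_index, votes in session.items():
            totals.setdefault(byte_index, Counter()).update(votes)
    return totals

def guess_key(totals, threshold=10):
    """Guess each key byte once enough votes have accumulated."""
    return {
        i: votes.most_common(1)[0][0]
        for i, votes in totals.items()
        if sum(votes.values()) >= threshold
    }

# Two short sessions, days apart -- neither is enough on its own:
monday = {0: Counter({0x3A: 4, 0x11: 1}), 1: Counter({0x7F: 3})}
friday = {0: Counter({0x3A: 7, 0x22: 2}), 1: Counter({0x7F: 8, 0x01: 2})}

totals = merge_sessions([monday, friday])
print(guess_key(totals))  # {0: 58, 1: 127} -- key bytes recovered
```

Neither session alone crosses the threshold, but their merged totals do, which is why intermittent sniffing from a lobby works just as well as one long capture.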
The next step is for someone to port it to a wireless PDA.
The crackdown on people with security knowledge continues. Brian West, working at an ISP, was testing a banner advertisement at a local newspaper. He inadvertently discovered a massive security hole in the newspaper's Web software. After calling the newspaper to notify them, he was arrested by the FBI. The more you read about this story, the weirder it is.
FBI's side of the story:
A good essay supporting full disclosure:
Flash worms. Thirty seconds to infect the Internet. (This is an improvement on the Warhol Worm idea I linked to last month, which could infect the Internet in fifteen minutes.)
Really interesting paper on the likely future development of worms and viruses.
Secure your computer by filling it with concrete. It works, too.
The "European Copyright Directive," basically the European equivalent of the DMCA:
One insurance company charges you more if you use Microsoft IIS:
Under the DMCA, you're guilty until proven innocent.
For more than five years, McDonald's prize games have been rigged. This is a great story about internal attacks and how damaging they can be.
The NSA has released a new version of Security Enhanced Linux, based on the 2.4.9 kernel.
Interesting article about the liability of hacking victims. If, for example, a brokerage company's network is hacked, preventing its customers from making trades, can those customers sue the brokerage company?
See these more scholarly papers on the same topic:
Yet another Code Red variant. I hope no one thinks these sorts of worms are ever going to go away.
Flooz hack. People were buying Flooz with fake credit card numbers, and then redeeming the currency at legitimate merchants:
Really good opinion piece on the fallacy of perfect security and the importance of risk management:
The newly accused spy Brian Regan appears to have chosen a bad crypto scheme to protect his secrets; the FBI was able to decrypt his files. My guess is that he chose a good crypto algorithm, but used a guessable key.
Here's an interesting risk. Increasingly, people are taking corporate PCs for home use; for example, as compensation in lieu of pay from failing dot-coms. And they're not changing the security settings. Hence, computers that are configured for use behind a firewall are now being used with no firewall.
My favorite quote: "Most [people] are totally unaware that computers can't just be turned on and left to their own devices."
Russian hackers and organized crime:
Nasty new virus. This one is disguised as an e-mail warning from Microsoft: they fake the e-mail "from" address. The text tries to convince the user to run the executable, which then encrypts all executable files, rendering them unusable.
Security flaw found in the Gauntlet firewall. This is a big deal: someone discovered a vulnerability in this brand of firewall that would allow a hacker to take control of the box. Network Associates shouldn't be singled out here; vulnerabilities have been discovered in other security products. Again, I want to stress the importance of real-time monitoring and defense in depth. It's the only way to get resilient security.
We're already seeing attacks against Windows XP, and the product hasn't even been released yet.
Interesting Code Red footnote. It seems the FBI discovered a test version of the worm in April, but decided not to warn people. Here's a good example of how full disclosure could have helped.
Another worm, a "cure" for Code Red:
Online casinos hacked:
More IDSs vulnerable to Unicode attacks:
Son of DMCA. The SSSCA aims to make copyright protection mandatory in all computer hardware. This would be a disaster on all fronts: for privacy, for security, and for technology's unfettered ability to prosper.
Text of bill:
This is something I predicted months ago:
Bruce Schneier submitted a brief in the Felten vs. RIAA case:
Schneier is speaking at the ConSec conference in Austin, TX, on 24 September:
Schneier is speaking at the ISSE Conference in London, on 26 September:
Counterpane Internet Security, Inc. provides Managed Security Monitoring services to large and medium enterprise networks. I formed the company because I believe that expert human monitoring is the only way to provide effective security, and that outsourcing this kind of service is the only cost-effective way to provide it. This CERT document provides a different perspective on the difficulty of an enterprise managing network security by itself, and discusses some of the implications of Counterpane's service. (Actually, the whole site is really useful, but the particular document goes through a specific methodology for building a reasonable enterprise-wide audit infrastructure. I believe that anyone who seriously considers doing this themselves will quickly see the benefits -- both in cost and quality -- of outsourcing to Counterpane.)
There's even a contest. If you manage to decrypt a specific file, they'll give you $1000. The contest expires in 2011; no word about the company.
"New root certificates are no longer available with Microsoft Internet Explorer. Any new roots accepted by Microsoft are available to Windows XP clients through Windows Update. When a user visits a secure Web site (that is, by using HTTPS), reads a secure e-mail (that is, S/MIME), or downloads an ActiveX control that uses a new root certificate, the Windows XP certificate chain verification software checks the appropriate Windows Update location and downloads the necessary root certificate. To the user, the experience is seamless. The user does not see any security dialog boxes or warnings. The download happens automatically, behind the scenes."
This is the kind of thing that worries me. What exactly is this process? What happens when it fails? Why does everyone have to trust a root certificate just because Microsoft does? And if the user doesn't see any security dialog boxes or warnings, the effects of any failure are likely to be catastrophic.
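As I read the quote, the verification logic amounts to something like the following sketch. The function names, root names, and fallback behavior are my reading of Microsoft's description, not their actual implementation: an unknown root triggers a silent download, and the chain is then accepted without the user ever being asked.

```python
# Hedged sketch of the silent root-fetch behavior described above.
# All names here are hypothetical, not Microsoft's real API.

local_roots = {"VeriSign Root"}  # roots shipped with the OS

def fetch_root_from_update(name):
    """Stand-in for the Windows Update download step."""
    published_roots = {"VeriSign Root", "NewVendor Root"}
    return name if name in published_roots else None

def chain_trusted(chain_root: str) -> bool:
    """Decide whether a certificate chain's root is trusted."""
    if chain_root in local_roots:
        return True
    # The worrying part: an unknown root is fetched and trusted
    # automatically, with no dialog box or warning to the user.
    fetched = fetch_root_from_update(chain_root)
    if fetched is not None:
        local_roots.add(fetched)
        return True
    return False

print(chain_trusted("NewVendor Root"))  # True -- accepted silently
```

In this model the trust decision has moved entirely from the user to whoever controls the update server, which is precisely the centralization of risk discussed below.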
For a while now I have talked about the differences between vulnerability and risk. Something can be very vulnerable, but not risky because there is so little at stake. As Microsoft continues to centralize security, authentication, permissions, etc., risk rises dramatically because the effect of a single vulnerability is magnified.
From: Ryan Russell <ryan@securityfocus.com>
Subject: eEye and Code Red
It sounds an awful lot like you've been listening to people who keep trying to convince us that eEye essentially released an exploit. They didn't. Have you looked at their post? <http://www.securityfocus.com/archive/1/191873>
Despite a comment in there about releasing a test exploit later, they didn't. What they did do was give an academic discussion about how to preserve shell code in a Unicode conversion environment, with no code samples. Based on what you had to say about Sklyarov later in the same Crypto-Gram, you would appear to be in support of such discussions.
Now take a look at the Code Red disassembly. Notice that the general techniques discussed in the Bugtraq post were not used.
The hints that eEye gave didn't help the people who needed help writing their overflows, and the people who understood what they were saying didn't need help in the first place. The Code Red authors (the first version) didn't need any help from eEye, that's for sure. The comments eEye made about the exploit were little more than an in-joke for people who already knew enough to figure it out themselves. This would be like you dropping hints about S-box design; it wouldn't do a thing to help me replicate Twofish.
I'll repeat what I said when you made similar disparaging comments about Mixter in the wake of the first DDoS attacks: "Would you rather have had the month to act before Code Red was written, or not? Would you rather have had the Microsoft patch, IDS rules, and details of the level of access gained by the hole, or not?"
I agree with you about the complexity and speed of the Internet. We're eventually going to get to the point where the next big worm is using 0-day exploits that we haven't seen before. In fact, there was a blip on the radar not too long ago; the precursor to Code Red, actually. It used an unknown exploit. Fortunately, the author hadn't perfected his spreading technique yet.
Given the choice, I'll take the month's warning, please. I'd rather have eEye tell me than Mr. Worm.
> eEye is going too far.
They did free research for Microsoft, waited for Microsoft to release a patch, didn't release an exploit, created a product that can help block these kinds of attacks, wrote a free scanner to help people look for vulnerable servers, stayed up all night on several occasions to disassemble two worms for free (and the world was in serious need of those disassemblies at that point) and then did what they could to help spread the word to the media to try and get everyone to patch their servers. What do you want from these guys?
From: Seth Arnold <sarnold@wirex.com>
Subject: eEye and Code Red
I am baffled why you badmouthed eEye in this month's crypto-gram. Marc Maiffret at eEye posted a note to bugtraq, which should explain why eEye is the last group that should be blamed for Code Red. (Mainly, that Code Red used a different technique to exploit the problem than the one discussed by eEye.)
Note especially that eEye coordinated a release with Microsoft, to ensure that a patch was available *before* release. Note that Microsoft would not have known about the .ida vulnerability without eEye alerting Microsoft. (Read: we have no idea how long this problem has been known in the hacker community.)
I am especially surprised at your position for one very basic reason: you are underestimating the hacking community. You of all people should know that keeping the details closely hidden will work for only so long before attackers get all the details. If eEye could find this problem, chances are good hundreds or even thousands of people could have found the problem just as easily -- had any of them bothered to check.
From: "Ian Alexander" <ialexandertouchnetnw.com>
Subject: Microsoft and Code Red
It's funny that Culp at Microsoft told the Eastside Journal that only a few MSN servers were affected. A friend of mine who works at Microsoft in Redmond was telling me how their corporate network was toast for close to a full day. It hit Microsoft a lot harder than they were comfortably divulging to the newspaper.
From: Kee Hinckley <nazgul@somewhere.com>
Subject: Code Red and Personal Web Servers
One thing you don't mention about Code Red: it has triggered something which I've been expecting for a while -- the segregation of the Internet. A number of ISPs, mostly cable but some DSL, have blocked incoming port 80 connections and show no signs of ever turning them back on. Last week Mediaone/AT&T/@Home turned off incoming port 80 with no warning whatsoever. To the press they claimed that consumers weren't allowed to run Web servers anyway (something that is contradicted by their Terms of Service, which specifically states that customers may run Web or ftp servers at their own risk). To their customers they've said nothing at all.
The fact of the matter is that keeping a server secure on the Internet is more than most individuals and small companies can handle. I spend a significant amount of time every day tracking security lists, and while I can justify it as part of my consulting business, most people can't. I think what we're going to see is two types of Internet access. Companies with IT staffs will be able to afford a raw Internet connection, but most people are going to sit behind ISP firewalls that block virtually all incoming connections and many outgoing ones. The Internet as the great equalizer, where "anyone can publish what they want," is going to fade away. Perhaps some ISPs will provide limited open connections for extra cost, but the configuration and support issues are such that I think it's unlikely.
From: "Enright, John" <johnemail.webaccess.net>
Subject: Software Liability
There are several fundamental differences between tires and software. Tires must hold up under a very limited, well-defined set of tests. For a complicated software system, one simply can't test everything, or all possible uses of the system; let alone try to think of all ways a hacker might break through to create a security problem in one's software. You yourself have said many times that hacking is a different mindset; the security algorithm designer isn't generally able to break his/her own system as well as others will be able to. It's difficult for the builder to wear both hats. Most large software companies/corporations have quality testing departments, but we can't expect them to catch everything.
The next very fundamental difference here is something you just pointed out: tires are directly related to the safety of persons. Software generally does not fall into this category, unless you start looking at the medical field. Where lives may be at stake, you can bet that software is very, very well tested (and there may indeed be accountability in this area).
You know, in a roundabout way, the market is more of a driving force to make sure that a company's tires are of sufficient quality than the accountability regulations themselves. Firestone/Bridgestone (and similarly Ford) are undergoing very large financial burdens because of this. It's not just the accounting for poor quality, it's the damage they incur from such a public incident and loss of reputation. That will stick with them for years to come.
So I guess we're comparing hardware safety regulations to software security flaws. I don't see them as very similar. Maybe if you more clearly defined what you mean by "accountability," I would better understand you. It's absurd to demand accountability for every piece of software out there. The lawyers would have a field day, and the software industry would just stop. You'd have lawsuits on everything from carpal tunnel syndrome, to eyestrain (from font size, etc.), to headaches, to security breaches. Then software companies would start getting around this by requiring that you sign some sort of waiver (more than that absurd and annoying EULA everyone clicks through without reading now). Accountability would have to be clearly defined. It's a bureaucratic nightmare.
From: Tim Kordas <timk@filanet.com>
Subject: Lockpicks and the DMCA
Lock picks are an excellent example. In all jurisdictions that I'm familiar with, lock picking tools are on a list of "burglar's tools" (which also includes things like vice-lock pliers, screwdrivers and flashlights). Possession of burglar's tools alone isn't illegal. Instructional manuals about how to pick locks aren't illegal (and are clearly protected by the First Amendment), and are widely available. Buying locks to pick isn't illegal either.
Possession of lock picking tools along with "criminal intent" to use them in the commission of a crime is the illegal part; possession also can be taken into account in sentencing on conviction (apparently they make the possessor seem like a professional criminal). Presumably publishing papers, and giving talks about the advantages or disadvantages of some particular locking technology would be allowed (provided it didn't qualify as libel).
The DMCA seems to grant an unprecedented set of previously nonexistent "rights" to inventors of copyright technology. The provisions in the law that make exceptions for "legitimate security research" specify that such research cannot take the form of material which can be used to circumvent the security measures! The easiest (if not the best) way to show, in a legitimate security research context, that any particular security measure is inadequate is to provide at least a detailed description of a means to circumvent it.
There is no practical way of allowing legitimate security research to proceed while disallowing invention of circumvention measures.
From: jjordan@unitechsys.com (Jason Jordan)
Subject: Adobe, Elcomsoft, and the DMCA
This is a scary quote from U.S. Attorney General John Ashcroft: "There are many people of poor and evil motivations who are seeking to disrupt business and government and exploit any vulnerabilities in the digital universe."
There are many people of good intentions who are seeking to help business and government by exploiting vulnerabilities in the digital universe. Let's face it: if what Sklyarov did was "of poor and evil motivations," he would have kept it quiet and not gone public with it. He basically told Adobe that their e-book security was flawed. Instead of saying something like "thanks, we'll improve it," Adobe said "government, help!" Instead of taking the time and effort to build a security system correctly, companies build third-rate security systems and depend upon laws to protect them from those who point out the flaws. If this was the way security worked, we'd still have doorknobs on safes in banks, and turning a doorknob would be illegal.
From: Jeff Tucker <jefft@wciatl.com>
Subject: Microsoft's Untrusted PGP Signatures
>This story is too weird for words. Microsoft
>adds PGP signatures at the bottom of its security
>bulletins, for verification. But if you try to
>verify the signatures, they fail.
This one I think you've got wrong. The problem is a ridiculous choice of words chosen by PGP (now Network Associates), not Microsoft. This particular issue (not specific to Microsoft) has been hashed out many times on the PGP newsgroup.
The problem is that in PGP there is no central key issuing authority, as you know. Everything is based on the "web of trust." Thus, I will personally trust a few keys. If a person who I trust has signed your key, I will trust you.
The problem arises because of the terminology PGP uses. If it decodes the signature, but the key isn't trusted, PGP says that the key is "Invalid." Obviously, this is ridiculous. A better choice of words would be untrusted, unverified, whatever.
Microsoft can't do anything about this. PGP ships by default with NO keys trusted. Keys can be manually trusted by individual users. Thus, if you download the MS key from their secure Web site and figure it's the right key, you really should sign the key with your own key. This will allow you to trust the key, and the Invalid goes away. If you were to send me a message with your own key today, I'd get the same thing: Invalid. Unless one of the 20 keys I trust happened to sign yours.
This has led to an amazing example, though, of how users can't be trusted to understand cryptography and identity verification. At least one person on the PGP newsgroup said that he knew how to get rid of all the Invalids: he just routinely signs every key on his keyring and, voila, the Invalid is gone. He might as well uninstall PGP; it would have the same effect.
From: Jaap-Henk Hoepman <hoepman@cs.utwente.nl>
Subject: Council of Europe CyberCrime Treaty
Although the CoE treaty on cybercrime has improved, I would not call it "kind of neat" yet. The treaty still makes it illegal to produce, sell, etc. devices whose primary purpose is to circumvent protection schemes without right. As you state elsewhere in the same issue of Crypto-Gram:
> Elcomsoft created and marketed a product that circumvented
> Adobe's product. This kind of software is often required in
> Russia, where people have a legal right to make personal backups.
There is really no way to tell whether the primary purpose of a device is making personal backups or making illegal copies. So the CoE CyberCrime treaty suffers from the same problem as the DMCA in this respect.
More worrisome is the fact that the treaty fails to consistently require dual criminality as a condition for mutual assistance. This means that in certain cases, states have to cooperate in a criminal investigation against a citizen of that state even if that state does not consider the act under investigation a criminal offence.
From: Adam Wagman <adam.wagman@cognex.com>
Subject: SirCam Worm
A worm that disseminates people's personal information was first described in John Brunner's 1975 novel, "The Shockwave Rider." Brunner coined the term "worm" in that novel.
From: Steve Andrews <sjandrews@chantilley.com>
Subject: Chantilley Data Security
I've read your recent comments in Crypto-Gram and have posted a reply:
In my reply I finish by inviting you to come to us in the UK and examine our technology, or alternatively we can come to you. I do hope you will accept my invitation so we can prove to you that our technology is sound and that we do know what we're doing.
From: Eric Young <eay@pobox.com>
Subject: Computer/Telephony Integration
The telco security/privacy rot has already begun in Australia. One of the long distance carriers -- AAPT <www.aapt.com.au> -- allows one to view one's account information online, make payments, list all phone calls, etc. Your login name and password are your account/customer number and your invoice number, which happen to appear on consecutive lines of your monthly paper statement.
What is worse is that this lets you sign up for ISP services, change all your phone numbers over to AAPT, etc. It is also not possible to opt out: every AAPT subscriber is automatically on the Web. I sent e-mail complaining about this and asking how I could opt out. I got no response, but there is now a mechanism to change the password, though only to 5-8 characters, with no spaces, symbols, or case sensitivity. When I originally complained, there was no way to change the "password" at all.
Total access for the effort of intercepting one monthly statement from the mailbox. And any statement, no matter how old, will give a working login/password.
From: John Giannandrea <jg@meer.net>
Subject: Computer/Telephony Integration
One thing you omitted from your "Phone Hacking: The Next Generation" article is VoiceXML, a new markup language from the W3C that will control all manner of automated voice interactions: <http://www.w3.org/Voice/>. The combination of VoiceXML and VoIP will be particularly vulnerable to attack.
Subject: Computer/Telephony Integration
A private telephone line is protected by the fourth amendment. The Internet is a shared network, with no implied privacy. If these two are mixed, do we all lose our fourth amendment rights on the telephone?
From: Markus Schumacher <ms@ito.tu-darmstadt.de>
Subject: Computer/Telephony Integration
We identified some vulnerabilities in an H.323-based IP telephony scenario that incorporates gatekeepers and gateways. During our tests we could:
* get control of an end-system
* perform DoS-Attacks using IP telephony signaling
* perform DoS-Attacks using general means
* attack the users' privacy
* attack infrastructure components
These vulnerabilities are not due to limitations or shortcomings of the technology, architectures, or signaling protocols. Rather, the main causes are severe flaws in the design, implementation, and policies of the products.
You can find more details here:
Subject: Copyright in History
Sorry to butt in to a conversation among IT professionals, but as a professor of medieval British literature I have to take issue with Richard Straub's interpretation of the history of publishing and copyright.
The argument that copyright was not needed by Chaucer or Shakespeare is irrelevant in Chaucer's case, and misleading in Shakespeare's. Chaucer did not write with any expectation that anyone, including himself, would ever make money selling copies of his works -- except a manuscript shop which might make one to order every now and then. Contrary to the assertion that medieval books were all expensive and hand-copied by monks "in papal states" (whatever that is supposed to imply), secular manuscript shops were mass-producing books of wide interest as early as the 1100s in Paris, as well as books needed for university students. Money was made by those who made the physical books and sold them, but not by those who wrote them -- the critical problem from a modern point of view.
Shakespeare would have liked to have copyright protection, since in his time plays were routinely stolen and performed without remuneration to the company which created them. The only defense was to restrict each actor to his own copy of the play he was performing, one which included only his own part and no others. Garbled copies of plays nonetheless would be cobbled together and shown in performances which were not sanctioned by, and returned no box office receipts to, the actual authors.
Cervantes, author of "Don Quixote," watched helplessly as his book became one of the most popular ever known in Europe without a dime coming his way (from the original, that is). But since the 18th century, it has been possible for people to make money as writers of drama or of novels -- only because they receive payment for their work, passed on by booksellers, publishers, or what have you under some sort of contractual arrangement. The proposal that "rich citizens can afford to commission works directly" sets the clock right back to Chaucer's time.
Subject: Copyright and Radio
As a long-time member of the Musicians' Union, I'd like to contradict your statement that when radio was invented, folks didn't bemoan the fact that people could listen to things for free. Actually, the Musicians' Union was highly concerned about copyright issues surrounding radio broadcasts, both for the performers and for the composers of the works. What it established (after lengthy negotiations) was a scheme whereby funds were provided for airplay (in very small amounts per play) and placed into pools that could be used to pay royalties and to fund live performances. Later, when the phonograph became popular, and more recently when electronic synthesizers did, musicians were again concerned that live music would be replaced by recordings. Some of this came true, but recorded music and synthesis have also created other opportunities for musicians and expanded audience interest in concerts. So although I agree with you that there needs to be a better model for distributing copyrighted works, and that leaving it to the market might even have beneficial side effects, the example you gave about radio is not true to what actually happened in the music industry.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>. Back issues are available on <http://www.counterpane.com>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of "Secrets and Lies" and "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He served on the board of the International Association for Cryptologic Research, EPIC, and VTW. He is a frequent writer and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 2000 companies world-wide.