Crypto-Gram

February 15, 2002

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com.

Copyright (c) 2002 by Counterpane Internet Security, Inc.


In this issue:
      Microsoft and “Trustworthy Computing”
      Judging Microsoft
      Crypto-Gram Reprints
      News
      Counterpane News
      Oracle’s “Unbreakable” Database
      Comments from Readers

Microsoft and “Trustworthy Computing”

Bill Gates is correct in stating that the entire industry needs to focus on achieving trustworthy computing. He’s right when he says that it is a difficult and long-term challenge, and I hope he’s right when he says that Microsoft is committed to that challenge. I don’t know for sure, though. I can’t tell if the Gates memo represents a real change in Microsoft, or just another marketing tactic. Microsoft has made so many empty claims about their security processes—and the security of their processes—that when I hear another one I can’t help believing it’s more of the same flim-flam.

Anyone remember last November, when Microsoft VP Jim Allchin said in an eWeek interview that all buffer overflows were eliminated in Windows XP? Or that it installed in a minimalist way, with features turned off by default? Not only was the UPnP vulnerability in an unneeded feature that was enabled by default; it was a buffer overflow. Anyone remember Scott Culp complaining about how people caused “information anarchy” by releasing details about Microsoft security vulnerabilities, and touting how fast Microsoft was at patching problems? There’s a new vulnerability in IE that Microsoft is busy ignoring. Or when Culp said that the UPnP vulnerability was “the first network-based, remote compromise” in Windows, conveniently ignoring Code Red, Nimda, and the dozens of others that came before?

But let’s hope that the Gates memo is more than a headline grab, and represents a sea change within Microsoft. If that’s the case, I applaud the company’s decision. It’s a difficult one. Putting security ahead of features is not easy. Microsoft is going to have to say things like: “We’re going to put the entire .NET initiative on hold, probably for years, while we work the security problems out.” They’re going to have to stop all development on operating system features while they go through their existing code, line by line, fixing vulnerabilities, eliminating insecure functionality, and adding security features. Security works best when it’s designed into the system from the beginning, so a lot of what they’ve already done is going to have to be rewritten.

It’s going to take work to make this stick. Microsoft has built a monopoly business by throwing features into their products and dealing with the problems later. It’s what they do naturally. It’s what all software developers do naturally. Some pretty strong leadership is required to reverse this mentality: to delay release schedules, pare down functionality, and potentially lose short-term market share.

And they’re going to have to reverse their mentality of treating security problems as public-relations problems. I’d like to see honesty from Microsoft about their security problems. No more pretending that problems aren’t real if they’re not accompanied by exploit code, and attacking the security researcher if they are. No more pretending security problems aren’t caused by bad code in the first place. No more claiming that XP is the most secure operating system ever, simply because it’s the one they want to sell.

While we congratulate Microsoft for this change, let’s not forget the two forces that led them to this decision. Don’t think it’s some magnanimous gesture for the good of the Internet; Microsoft is too smart to spend all those resources out of the goodness of their heart. Give the credit to the full disclosure movement, which has repeatedly shown that Microsoft’s security is far worse than it claims. Analysts like Gartner have recommended that enterprises switch away from Microsoft IIS and delay installing Windows XP, both because of security concerns. It’s the full disclosure movement that allowed Gartner, and everyone else, to accurately assess the risks of Microsoft software. Microsoft knows that it doesn’t have a future unless it can convince the public that Windows XP and .NET are secure, safe, and trustworthy. Keeping vulnerabilities secret will only reduce the pressure on Microsoft, allowing them to revert to pretending that they’re secure when they’re really not.

Also give credit to the increasingly loud calls for software liability. More and more experts and industry groups and advisory panels are supporting the notion that software be held to the same liability rules as any other consumer product. It makes no sense that Firestone can produce a tire with a systemic flaw and be liable, while Microsoft can produce an operating system with a new systemic flaw discovered every week and not be liable. I think Gates sees this liability juggernaut on the horizon, and is doing his best to dodge it.

Security is a process, not a product. It’s an endless, arduous, thankless process. I have no illusions that Microsoft can make its products secure with a press announcement and a month of developer training, but it’s a start. The technical difficulties are immense—some of what Microsoft needs to do is beyond the abilities of current science—but Microsoft has the resources to tackle them. And because of Microsoft’s monopoly software position, their actions can significantly affect the security of the Internet. During the decade in which they ignored security, security steadily worsened. At least if they’re headed in the direction of trustworthy computing, they’re likely to get closer.

Gates memo:
<http://zdnet.com.com/2100-1104-817343.html>

Craig Mundie’s (Microsoft VP) commentary:
<http://news.com.com/2010-1078-818543.html>

Commentary and Analysis:
<http://www.infowarrior.org/articles/2002-02.html>
<http://www.securityfocus.com/columnists/54>
<http://zdnet.com.com/2100-1107-819752.html>
<http://www.zdnet.com/anchordesk/stories/story/…>
<http://www.zdnet.com/anchordesk/stories/story/…>
<http://www.pbs.org/cringely/pulpit/pulpit20020117.html>
<http://www.informationweek.com/story/IWK20020131S0004>

A Q&A with Steve Ballmer, where he demonstrates that he doesn’t get it yet. My favorite quote: “I think the core code is OK. It’s not like the core design is bad…” My optimism is fading.
<http://news.com.com/2008-1082-830229.html>

On the other hand, this is good news from Microsoft:
<http://www.theregister.co.uk/content/55/23922.html>

Bill Joy’s commentary:
<http://news.com.com/2010-1072-831385.html>

And Microsoft hired Scott Charney as their new Chief Security Officer. I was hoping they would choose someone who knew about software security, rather than a former hacker prosecutor and IP lawyer.
<http://www.securityfocus.com/columnists/59>

This essay originally appeared on news.com:
<http://news.com.com/2010-1078-818611.html>

A large SNMP vulnerability has been announced, affecting hundreds of products. This vulnerability has been known in the security community since at least October, but was withheld from the public so that vendors would have time to patch their products. I’ll write more about this next month.
<http://www.counterpane.com/alert-snmp.html>
<http://www.cert.org/advisories/CA-2002-03.html>
<http://www.ee.oulu.fi/research/ouspg/protos/testing/…>
<http://www.counterpane.com/pr-snmp.html>


Judging Microsoft

Last month, Bill Gates published a company-wide memo outlining a new strategic direction for Microsoft. Comparing this shift to the one the company made when it embraced the Internet, Gates elevated security to Microsoft’s highest priority. By focusing on what he called “Trustworthy Computing,” Gates plans on transforming Microsoft into a company that produces software that is available, reliable, and secure.

“We must lead the industry to a whole new level of Trustworthiness in computing.”—Bill Gates internal memo, 15 January 2002.

Trust is not something that can be handed out; it has to be earned. And trustworthiness is a worthy goal in computing. But unlike performance goals or feature lists, progress toward it is hard to measure. How can we determine if one piece of software is more secure than another? Or offers better data integrity than another? Or is less likely to contain undiscovered vulnerabilities? How do we know if Microsoft is really committed to security, or if this is just another performance for the press and public? It’s not as easy as measuring clock speeds or comparing feature lists; security problems often don’t show up in beta tests.

As longtime security experts, we’d like to suggest some concrete ways to evaluate Microsoft’s (and anybody else’s) progress towards trustworthiness. These are specific and measurable changes that we would like Microsoft to make. This is not intended to be an exhaustive list; building secure software requires much more than what we delineate here. Our goal is to provide a list of measurable recommendations, so that the community can judge Microsoft’s sincerity.

Some of our recommendations are easier to implement than others, but if Microsoft is serious about security and wants to take a true leadership position, they can’t shirk any of them. Some of our changes are easier to verify than others, but it is our goal that all of them be independently measurable. In the end, the pronouncements and press releases don’t mean a thing. In security, what matters are results.

If we can distill our recommendations into a single paradigm, it’s one of simplicity. Complexity is the worst enemy of security, and systems that are loaded with features, capabilities, and options are much less secure than simple systems that do a few things reliably. Clearly Windows is, and always will be, a complex operating system. But there are things Microsoft can do to make even that complex system simpler and more secure. Microsoft must focus its programmers on designing secure software, on building things right the first time.

I. Data/Control Path Separation

“Security models should be easy for developers to understand and build into their applications.” -Bill Gates memo on security, 15 January 2002.

He’s right. And one of the simplest, strongest, and safest models is to enforce a rigid separation of data and code. The commingling of data and code is responsible for a great many security problems, and Microsoft has been the Internet’s worst offender. Here’s one example: Originally, e-mail was text only, and e-mail viruses were impossible. Microsoft changed that by having its mail clients automatically execute commands embedded in e-mail. This paved the way for e-mail viruses, like Melissa and LoveBug, that automatically spread to people in the victims’ address books. Microsoft must reverse the security damage by removing this functionality from its e-mail clients and many of its other products. This rigid separation of data from code needs to be applied to all products.
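
The principle fits in a few lines. Here is a minimal sketch in C (hypothetical function and type names; this is not how any Microsoft product is built): a mail handler that treats every message part as inert data, so there is nothing for a virus to trigger.

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical handler sketching rigid data/code separation:
     every message part is rendered or ignored, never executed. */
  void handle_part(const char *mime_type, const char *body)
  {
      if (strncmp(mime_type, "text/", 5) == 0) {
          fputs(body, stdout);    /* data is displayed as data */
      } else {
          /* Active content (scripts, executables) is never run
             automatically; the user must save and open it herself. */
          printf("[attachment of type %s not displayed]\n", mime_type);
      }
  }

  int main(void)
  {
      handle_part("text/plain", "Meeting moved to 3 p.m.\n");
      handle_part("application/x-vbscript", "' LoveBug-style payload");
      return 0;
  }

The design choice is that nothing arriving over the network is ever handed to an interpreter automatically; running code requires a deliberate, separate action by the user.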

Microsoft has compounded the problem by blurring the distinction between the desktop and the Internet. This has led to numerous security vulnerabilities, based on different pieces of the operating system using system resources differently. Microsoft should revisit these design decisions.

We recommend the following modifications in the next release of these Microsoft products. In short: make actions visible to the user, and provide a sandboxed environment. This should be a release focused only on removing insecure features and adding security.

Office: Macros should not be stored in Office documents. Macros should be stored separately, as templates, which should not be openable as documents. The programs should provide a visual interface that walks the user through what the macros do, and should limit what macros not signed by a corporate IT department can do.

Internet Explorer: IE should support a complete separation of data and control. Java and JavaScript should be modified so they cannot use external programs in arbitrary ways. ActiveX should eliminate all controls that are marked “safe for scripting.”

E-mail: E-mail applications should not support scripting. (At the very least, they should stop supporting it by default.) E-mail scripts should be attached as a separate MIME attachment. There should be limits on what scripts not signed by a corporate IT department can do.

.NET: .NET should have a clear delineation of what can act and what cannot. The security community has learned a lot about mobile code security from Java. Mobile code is very dangerous, but it’s here to stay. For mobile code to survive, it should be redesigned with security as a primary feature.

Microsoft’s implementation of SOAP, a protocol that runs over HTTP precisely so it can bypass firewalls, should be withdrawn. According to the Microsoft documentation: “Since SOAP relies on HTTP as the transport mechanism, and most firewalls allow HTTP to pass through, you’ll have no problem invoking SOAP endpoints from either side of a firewall.” It is exactly this feature-above-security mindset that needs to go. It may be that SOAP offers sufficient security mechanisms and a proper separation of code and data; even so, Microsoft promotes it precisely for its ability to avoid security measures.
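
Why does tunneling over HTTP defeat firewalls? A deliberately naive C sketch of the port-based filtering most firewalls perform (hypothetical logic, not any real firewall’s code) makes it obvious: the decision never inspects the payload, so anything wrapped in HTTP inherits HTTP’s free pass.

  #include <stdio.h>

  /* A naive port-based packet filter: it asks only "which port?",
     never "what is in the payload?". SOAP on port 80 is therefore
     indistinguishable from ordinary Web browsing. */
  int allow_packet(int dst_port)
  {
      switch (dst_port) {
      case 80:    /* HTTP  */
      case 443:   /* HTTPS */
          return 1;           /* allowed, whatever the payload is */
      default:
          return 0;           /* blocked */
      }
  }

  int main(void)
  {
      printf("RPC on port 135: %s\n", allow_packet(135) ? "pass" : "drop");
      printf("SOAP on port 80: %s\n", allow_packet(80) ? "pass" : "drop");
      return 0;
  }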

II. Default Configurations

“Our products should emphasize security right out of the box.” -Gates memo.

Microsoft software, by default, installs with many more features than most users need or want. This makes the software more vulnerable than necessary. There are many recent examples of this. The recent Universal Plug-and-Play bugs work even if you don’t know what UPnP does, or whether or not you’re using it. The SuperCookie bug in Windows Media Player works even if you don’t use WMP. Code Red successfully attacks IIS installations, even in Windows setups that aren’t being used as a Web server.

Additionally, features must be installable one by one. In UNIX, for example, a Web server and an ftp server are separate, and must be installed separately. With IIS, installing a Web server not only installs a Web server, but also an ftp server, a Gopher server, and Bill himself probably doesn’t know what else.

It’s not enough to give users the ability to turn off unneeded features. Users don’t even know which features are turned on, much less how to turn them off, and the features might accidentally get turned on again. The best prevention for attacks against a feature is for the feature not to be there.

We recommend that the next release of all Microsoft products have default installations with the most minimal feature set possible, and that additional features require special installation activity to make them work. We also recommend that this installation be visible to the user, so that the user knows the features are there. We recommend that Microsoft ensure that all features can be installed and uninstalled separately, as well as in common packages. We recommend that unneeded features not be installed, instead of being installed and disabled. Additional controls should be implemented to allow a corporate IT department to prohibit certain features from being installed.

We also recommend that .NET come with the ability to use configurations from a variety of sources, including Microsoft, its competitors, and public interest/advocacy groups like the Electronic Frontier Foundation.

III. Separation of Protocols and Products

“As software has become ever more complex, interdependent and interconnected, our reputation as a company has in turn become more vulnerable.” -Gates memo.

Today Microsoft builds large, complex services that intermingle many smaller services. For example, the Microsoft file-sharing protocol contains file sharing, registry sharing, remote editing, printer sharing, password management, and a host of other services. If a user wants one of those services, he has to install them all. These need to be split into separate services, running on separate bits of server software, so that a user can choose which to install where. Absent that, the complexity of the software grows to demonstrably insecure levels.

We recommend that Microsoft separate functionality so that the user can install only the specific functions they need. We also recommend that Microsoft provide, and allow others to provide, a variety of pre-bundled functions. Most users don’t want to install individual functions, and will rely on others to tell them what they need.

IV. Building Secure Software

“So now, when we face a choice between adding features and resolving security issues, we need to choose security.” -Gates memo

Commercial software is full of bugs, and some of those bugs harbor security vulnerabilities. This is not meant to excuse Microsoft’s long-standing apathy towards security; it’s merely a statement of fact. These bugs are caused by bad software specification, design, and implementation. Much of what is discussed above (data/command separation, default configurations, separate software for separate protocols) has the effect of minimizing the effects of software bugs by reducing the amount of software on a computer. However, there will still be a great deal of software on any computer, and that software needs to be resilient to attack. This means that the software doesn’t easily break when attacked. And if it does break, the system as a whole doesn’t fall apart. Today, we can worry that a single bug in Windows will render a server completely insecure, or a single bug in IIS will expose all the data in .NET. Today Microsoft software is brittle; it needs to be resilient.

There is much Microsoft can do to make its software more resilient, and our recommendations could go on for pages. But generally speaking, certain features are more fragile than others. We recommend the following:

Microsoft should drop all plans for automatic software updates via the Internet until they can be done securely and reliably. Today there are too many problems with updates and patches to allow them to occur without the user’s knowledge, and too many problems with authentication to prevent others from maliciously using the capability to attack systems.
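
If automatic updating ever does return, the minimum bar is easy to state in code. A sketch of the control flow only (signature_ok() and install() are hypothetical stand-ins, not Windows Update’s actual design): authenticate first, fail closed, and keep the user informed.

  #include <stdio.h>

  /* Hypothetical stand-ins: a real updater would verify a public-key
     signature against a vendor key already on the machine, and would
     install atomically with a way to roll back. */
  static int signature_ok(const char *package, const char *sig)
  {
      (void)package;
      (void)sig;
      return 0;               /* stub: fail closed by default */
  }

  static int install(const char *package)
  {
      printf("installing %s (with user consent)\n", package);
      return 0;
  }

  int apply_update(const char *package, const char *sig)
  {
      if (!signature_ok(package, sig)) {
          fprintf(stderr, "update rejected: bad or missing signature\n");
          return -1;          /* never install unauthenticated code */
      }
      return install(package);
  }

  int main(void)
  {
      return apply_update("patch-q1234.exe", "patch-q1234.sig") ? 1 : 0;
  }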

Microsoft should eliminate all centralized customer databases in its .NET services. These databases are too dangerous to keep in one place; the ramifications of a security breach are too great.

Microsoft is already moving towards signing code files. While we recommend that Microsoft continue this practice, we also recommend that Microsoft not rely on code signing for security. Signed code does not equal trustworthy code, something the security community graphically demonstrated through the many ActiveX vulnerabilities. Microsoft should drop the code-signing security paradigm in favor of the sandbox paradigm.
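
The difference between the two paradigms also fits in a few lines. In this minimal sketch (hypothetical types and names, not Microsoft’s or Java’s actual mechanism), a signature check happens once at load time and answers “who wrote this?”; a sandbox check happens on every action and answers “is this allowed?”.

  #include <stdio.h>

  /* Hypothetical sandbox policy: downloaded code starts with no
     privileges, and every action is checked against the policy,
     regardless of who signed the code. */
  typedef struct {
      int may_read_files;
      int may_write_files;
      int may_open_network;
  } policy_t;

  static const policy_t untrusted = { 0, 0, 0 };   /* default: nothing */

  int request_network(const policy_t *p)
  {
      if (!p->may_open_network) {
          fprintf(stderr, "denied: network access not granted\n");
          return 0;
      }
      return 1;
  }

  int main(void)
  {
      /* A valid signature could at most justify granting a wider
         policy; it is never itself a grant of privilege. */
      request_network(&untrusted);
      return 0;
  }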

Today, too many Microsoft server components run as Administrator. When a service runs as Administrator, it is much easier for a security flaw to result in the machine being fully compromised. In UNIX, servers are often designed to run as a normal user. This should be the default configuration for Microsoft servers as well.
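
The UNIX idiom is simple. A minimal sketch (assuming a POSIX system with an unprivileged “nobody” account, and started as root): perform the one operation that requires root, such as binding a low port, then permanently drop privileges before touching any network input.

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <pwd.h>
  #include <sys/types.h>

  int main(void)
  {
      /* ...privileged setup goes here, e.g., binding port 80... */

      struct passwd *pw = getpwnam("nobody");   /* unprivileged account */
      if (pw == NULL) {
          perror("getpwnam");
          exit(1);
      }

      /* Drop the group first, then the user: once setuid() succeeds,
         the process no longer has the right to call setgid(). */
      if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
          perror("dropping privileges");
          exit(1);
      }

      /* From here on, a flaw in request handling compromises only
         the "nobody" account, not the whole machine. */
      printf("now running as uid %d\n", (int)getuid());
      return 0;
  }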

All other Microsoft features should be evaluated for resilience. Those that are too risky should be removed until they can be rewritten and secured.

V. Transparency and Auditability

“If there is any way we can better protect important data and minimize downtime, we should focus on this.” -Gates memo.

Too much of the Microsoft operating system operates invisibly, without the user’s knowledge. Too much of it functions without leaving audit records of what happened. With each successive version of the Microsoft operating system, it has become increasingly difficult for a user to control his own system or to examine what his system is doing. This has disastrous security ramifications, and should be reversed.

We recommend that Microsoft add strong auditing capabilities to all products, both operating systems and applications software. We recommend that Microsoft provide configuration tools along with its operating system, as well as tools for an IT department to manage the configurations of its computers.

We would also like to see Microsoft abandon the Registry in favor of a less opaque and more user-friendly system. In particular: undocumented registry keys take effect as soon as they are created; keys confusingly contain both other keys and values, which makes it hard to decide what to modify; and there are no change-management or history features that would let users make changes without fear. Microsoft has often created systems that are inviting and easy to use; it has not done so here.

VI. Advance Publication of Protocols and Designs

“There are many changes Microsoft needs to make as a company to ensure and keep our customers’ trust at every level-from the way we develop software, to our support efforts, to our operational and business practices.” -Gates memo.

If there’s one thing that security experts have learned over the years, it’s that any system, when initially proposed, will have security bugs. The only reliable remedy is to publish system details early, and let the security community examine them. Microsoft needs to publish specifications for protocols in advance and encourage public comment. This is doubly important for security protocols and systems. If a portion of the software is critical to security, then there is no way to achieve trustworthiness without publication. Publication does not ensure security, but it’s an unavoidable step in the process. We’re not suggesting that Microsoft must give up all proprietary rights to its protocols and interfaces, or allow anyone to implement or use its standards. We are saying that they must be public, not secret.

The published specifications must be complete, readable, and generally available. It’s not sufficient to make the specifications available to specific researchers, or to people who have signed non-disclosure agreements or paid for the privilege. Again, this is not easy from a business point of view, but if Microsoft is serious about putting security first, it needs to engage rather than ignore the security community. And Microsoft should wait for that public review before implementing those specifications in products.

We recommend that all protocols and interfaces used in Microsoft software be immediately published, and a one-year moratorium be placed on all non-security modifications to those protocols. We also recommend that Microsoft publish any new protocols or interfaces at least one year before implementing them in products.

In addition to making its protocols and interfaces public, we suggest that Microsoft consider making its entire source code public. We’re not advocating that Microsoft make its products open source, but if they really want to impress everyone about their newfound security religion, they will make their code available for inspection. Honestly we don’t expect Microsoft will do this. It’s too much of a cultural change for them to even consider.

VII. Engaging the Community

“Compensation plans of Microsoft product engineers, such as raises and bonuses, will also be tied to how secure their products are.” -Associated Press article on Gates memo, 15 January 2002.

Tying security to compensation is the best way to effect a cultural change inside Microsoft. We feel that Microsoft needs to go further, and reward not only Microsoft employees but independent researchers. Microsoft can no longer threaten, insult, or belittle independent researchers who find vulnerabilities in their products.

Microsoft needs both automated security reviews and evaluations by security experts. A great deal of work in this area has already been done outside Microsoft. We recommend that Microsoft devote resources towards comprehensive security reviews for all of its code, using security experts both inside and outside the company. We also recommend that Microsoft set up an independent body to evaluate security vulnerabilities found by researchers outside the company.

Conclusion

“Eventually, our software should be so fundamentally secure that customers never even worry about it.” -Gates memo.

Our recommendations are by no means comprehensive. There’s substantially more involved in building secure software than the seven items we list here. These items are intended to be near-term milestones; they’re recommendations more about implementation than about architecture. Buffer overflows, everyone’s favorite whipping boy, are a comparatively easy implementation-level problem to fix. Higher-level constructs, such as implementing a scripting engine or securing inter-process communications, are more complicated design-level issues. But if Microsoft doesn’t start with the simpler stuff, they’re never going to get to the hard stuff.

Security isn’t easy, nor is it something that you can bolt onto a product after the fact. Making security Microsoft’s first priority will require a basic redesign of the way the company produces and markets software. It will involve a difficult cultural transition inside Microsoft. It will involve Microsoft setting aside short-term gains in order to achieve long-term goals. It’s a difficult goal, and we believe that Microsoft can do it. We hope that they remain committed.

Amusing comments:
<http://slashdot.org/comments.pl?…>
<http://slashdot.org/comments.pl?…>

This essay was written with Adam Shostack. Comments were provided by Steve Bellovin, Jon Callas, Crispin Cowan, Greg Guerin, Paul Lalonde, Gary McGraw, David Wagner, Sean Rooney, and Elizabeth Zwicky. It originally appeared on Security Focus:
<http://www.securityfocus.com/news/315>


Crypto-Gram Reprints

Hard-drive-embedded copy protection:
<http://www.schneier.com/crypto-gram-0102.html#1>

A semantic attack on URLs:
<http://www.schneier.com/crypto-gram-0102.html#7>

E-mail filter idiocy:
<http://www.schneier.com/crypto-gram-0102.html#8>

Air gaps:
<http://www.schneier.com/crypto-gram-0102.html#9>

Internet voting vs. large-value e-commerce:
<http://www.schneier.com/crypto-gram-0102.html#10>

Distributed denial-of-service attacks:
<http://www.schneier.com/…>

Publicizing vulnerabilities:
<http://www.schneier.com/…>

Recognizing crypto snake-oil:
<http://www.schneier.com/…>


News

The Web is becoming ever more vulnerable to attacks and viruses. Security measures are constantly being outstripped by new threats.
<http://www.ananova.com/news/story/sm_501457.html>

Measuring Internet security risk:
<http://zdnet.com.com/2100-1105-819713.html>

Good article on surveillance measures in the wake of 9/11:
<http://www.latimes.com/news/nationworld/nation/…>

SatireWire on Microsoft breakup. Funny stuff.
<http://www.satirewire.com/news/jan02/patchsoft.shtml>

Government fearmongering. The FBI warned law enforcement and high-tech companies to be on guard for possible terrorist activity that could use or affect the Internet. This could happen at some random time and in some random location. Basically, the FBI wants everyone to worry, but they’re not sure about what.
<http://www.usatoday.com/life/cyber/tech/2002/01/17/…>

Long and interesting interview with Gene Spafford, about the infosec threat landscape; privacy; the challenges of digital certificates, CRLs, public key infrastructure standards and interoperability; key escrow, backup and recovery; identity fraud; trust on the Internet; and the problems of security education today. Sample quote: “Security doesn’t work as an add-on. It really needs to be built-in from the beginning.”
<http://pkiforum.com/books/interview_spafford.html>

12% of all online databases were breached in 2001. At least, 12% were known to be breached. Who knows about the others?
<http://www.newsbytes.com/news/02/173832.html>

Should it be unlawful to produce insecure software products?
<http://news.bbc.co.uk/hi/english/sci/tech/…>
<http://slashdot.org/articles/02/01/16/1534252.shtml>

Should software companies be liable for insecure software products?
<http://news.com.com/2100-1023-821266.html>
<http://www.zdnet.com/anchordesk/stories/story/…>

2600 Magazine is appealing the DeCSS decision:
<http://zdnet.com.com/2110-1104-814246.html>

Microsoft security patches sometimes cause more problems than they solve.
<http://www.eweek.com/article/…>
This is something you should expect to be fixed, almost immediately, under Microsoft’s new security focus. It is also one of the reasons why automatic download and installation of patches is a bad idea.

Excellent essay from last October on Microsoft and bad (insecure) software design. The author tells me that the original title was “Developer Arrogance and Buffer Overflows,” which captures the feel of the essay a little better.
<http://www.osopinion.com/perl/story/14306.html>

Last summer, Microsoft claimed that they eradicated all buffer overflows in Windows XP.
<http://www.pcweek.co.uk/News/1125281>
<http://www.theregister.co.uk/content/archive/21316.html>

Here’s an example of a hack causing a company to go out of business. Cloud Nine, a UK ISP, ceased operations this week after being the victim of a DoS attack. The network needed to be rebuilt as a result of the attack, and the company’s insurance wouldn’t cover the repairs.
<http://www.ispreview.co.uk/cgi-bin/ispnews/…>
<http://www.ispreview.co.uk/ispnews/comments/…>
<http://www.wired.com/news/business/0,1367,50171,00.html>

Really interesting anecdote about the prevalence of casual software piracy:
<http://www.AmbrosiaSW.com/cgi-bin/ubb/…>

Clever identity-theft scam. Victim gets an e-mail “confirming” an eBay order, saying that the victim’s credit-card will be charged unless he cancels. The cancellation Web page asks for all sorts of personal information: credit-card number, Social Security number, bank name, address, phone, etc. There’s other cleverness in how the data is harvested.
<http://www.newsbytes.com/news/02/173962.html>

Good essay on software security:
<http://www.securityfocus.com/infocus/1541>

The Atlantic, generally a good magazine, published a terrible essay on unbreakable encryption and its effects on a terrorist-filled world. The author completely misses the point that security is more than mathematical encryption, and that security failures are more often in procedure than in mathematics. The author even says it at one point: “Signals intelligence is not completely dead, of course.” But then he goes on, oblivious to his understatement.
<http://www.theatlantic.com/issues/2002/02/budiansky.htm>

“I have never, ever, said that I was totally, or even mostly, a complete innocent.”—Kevin Mitnick
<http://www.wired.com/news/politics/0,1283,50298,00.html>

Here’s how increased electronic identity helps terrorists. They commit identity theft, too.
<http://www.boston.com/dailyglobe2/031/nation/…>


Counterpane News

Bruce Schneier is speaking at a Senate briefing on CyberSecurity, at the U.S. Capitol on February 14th. The public is invited.
<http://www.counterpane.com/pr-briefing.html>

Bruce Schneier is speaking three times at the RSA Conference:
His main talk: “Fixing Network Security by Hacking the Business Climate,” Wednesday at 8:00 AM
Schneier is moderating the Cryptographer’s Panel, Tuesday at 10:45 AM
Schneier is participating in a panel on security liabilities (this is going to be interesting) on Friday at 8:00 AM

Schneier is giving a presentation on Counterpane’s Managed Security Monitoring service in a variety of cities: San Jose (2/20), DC (2/25), Detroit (2/26), Honolulu (3/8), Phoenix (3/12), San Diego (3/13), Philadelphia (3/14), Chicago (3/20), New York (3/22), and Boston (3/26). If you’re interested in attending, please visit:
<http://www.counterpane.com/seminars.html>

Counterpane announces strong 2001 performance:
<http://www.counterpane.com/pr-growth.html>

Counterpane has announced a bunch of new resellers:
<http://www.counterpane.com/pr-cr.html>

Excellent radio interview with CEO Tom Rowley:
<http://www.counterpane.com/pr-ceocast.html>


Oracle’s “Unbreakable” Database

Last November, Oracle started touting its security with an “Unbreakable” ad campaign and the slogan: “Oracle9i. Unbreakable. Can’t break it. Can’t break in.” This was a ludicrous claim then, but I decided to wait until it was actually broken before writing about it.

Well, it’s been broken. In several places. Using some pretty basic attacks. Unbreakable, it’s not.

On the one hand, I (and most people reading this newsletter) always knew that. We knew that the claims were exaggerated. We knew that the Oracle marketing department was lying. But it’s a sad commentary on the state of security discourse that Oracle wasn’t immediately laughed out of the room. Oracle9i won’t ever be unbreakable, unless the company makes some major changes in the way they design and develop software.

On the other hand, maybe it’s not just hubris. Maybe Oracle management actually believed that their product was unbreakable. Maybe they’re that clueless about security. If that’s the case, the problems run deeper than they look. The problem with believing your product is unbreakable is that you don’t bother to secure it in depth. If you think your walls are impenetrable, you’re not going to bother with guards and alarms and anything else. This is the case with Oracle9i. The attacks completely take over the database. Once the attacker has broken the “unbreakable” security, there’s nothing else to stop him.

In their backpedaling, Oracle has said that “unbreakable” didn’t mean what normal people take the word to mean. Oracle’s security chief, Mary Ann Davidson, claims that the campaign “speaks to” fourteen independent security evaluations that Oracle’s database server passed. This, to me, is the real story here. What good is a security evaluation, what good are FOURTEEN different security evaluations, if none of them can catch something as trivial as a buffer overflow? Security is hard. Think of a chain; any single weak link can break the chain. Buffer overflows are an obvious link: easy to avoid, easy to test for, easy to fix. Catching all buffer overflows doesn’t make your software secure; it’s the price of admission. The hard stuff is really hard.
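
For readers who have never seen one, the entire bug class fits in a dozen lines of C (a generic illustration, not Oracle’s actual flaw): a fixed-size buffer plus an unbounded copy, and the one-line discipline that prevents it.

  #include <stdio.h>
  #include <string.h>

  void vulnerable(const char *input)
  {
      char buf[64];
      strcpy(buf, input);     /* no bounds check: input longer than
                                 64 bytes overwrites the stack */
      printf("%s\n", buf);
  }

  void fixed(const char *input)
  {
      char buf[64];
      strncpy(buf, input, sizeof(buf) - 1);   /* copy at most 63 bytes */
      buf[sizeof(buf) - 1] = '\0';            /* and always terminate */
      printf("%s\n", buf);
  }

  int main(int argc, char *argv[])
  {
      if (argc > 1)
          fixed(argv[1]);     /* swap in vulnerable() and a long
                                 argument to see the crash */
      return 0;
  }

Testing for this is just as mechanical as fixing it, which is what makes evaluations that miss it so damning.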

So, I tried to find the fourteen independent security evaluations. I wanted to make fun of them: “Look at the fourteen security evaluations that don’t even guarantee buffer-overflow-free code.” Unfortunately, I could only find five: TCSEC, ITSEC, Common Criteria, Russian Criteria, and FIPS 140-1. Oracle marketing turned five into fourteen by counting multiple levels of TCSEC and ITSEC as independent security evaluations, and counting identical evaluations of different Oracle products as independent security evaluations. I don’t know about you, but when I hear “fourteen different,” I don’t think it means “five different, some of them multiple times with different products or different levels.” Seems like Oracle has trouble with math as well as with English.

“Unbreakable” has a meaning. It means that it can’t be broken. It doesn’t mean “Unbreakable, except by people who know how to break things.” It doesn’t mean “Passes five or so questionable security evaluations, but is still vulnerable to buffer overflows.” I don’t care who Larry Ellison is; he can’t rewrite the dictionary.

The breaks:
<http://www.securityfocus.com/columnists/45>

Oracle’s backpedaling:
<http://www.securityfocus.com/news/309>
<http://www.businessweek.com/bwdaily/dnflash/jan2002/…>

Oracle’s “fourteen” security certifications:
<http://www.oracle.com/ip/deploy/database/oracle9i/…>


Comments from Readers

From: Geoff Lane <zzassgl twirl.mcc.ac.uk>
Subject: The Usefulness of Identity

Markus Kuhn’s description [Crypto-gram Jan 15, 2002] of his EU passport not being considered as sufficiently identifying for the rental of mere video tapes is interesting. Far from illustrating the usefulness of identity cards, it demonstrates why identity cards are mostly useless in civilian life.

The shop rejected the passport _because_ all it does is provide some certainty of the identity of the holder (I’m ignoring the fact that in most countries, passports are issued after almost no external checks of identity). What the shop was after was a _billing_ address that it could use should the tapes be lost or stolen. The shop didn’t care who was actually standing there renting tapes so long as they had an address that was known to be able to pay bills—utility bills are particularly favoured because even crooks need electricity, gas, and water, and will normally pay their utility bills. Of course, obtaining other people’s utility bills is not exactly difficult, so this isn’t very secure; but it is much better than accepting a passport or identity card which includes neither an address nor any indication of the ability to pay bills.

As far as I can see, most calls for identity cards coming from commerce and government are the result of a desire to reduce fraud of various kinds (the anti-terrorist arguments are trivially dismissed). The thinking seems to go, “X has taken goods and/or services and not paid for them—if X had to present an identity card this would not happen.” Of course this is just confusing identity with honesty, not the same thing at all.

So, identity cards are useless for anti-crime and anti-terrorism, and are no good as credit references. Indeed, anyone who has thought about it knows that identity cards alone are not even useful in proving someone’s identity. (For that you need the famous trusted third party which usually reduces to calls for huge computer systems, millions of terminals and vast expense with little return—anyone familiar with government computer projects knows the chances that such a project would succeed within time and budget limits are remote.)

From: Robert Searle <robert.searle tait.co.nz>
Subject: Re: Software and Liability

I think that anyone wanting to come into this debate should read Les Hatton’s Master of Law thesis on the applicability of UK laws to software.

<http://www.oakcomp.co.uk/TechPub.html>

There is a software spectrum from shrink-wrapped to completely bespoke software systems.

Shrink-wrapped software is marked by almost complete lack of user/customer input to the requirements process. Bespoke software is marked by the total involvement of user/customer input to the requirements process.

Shrink-wrapped software therefore has a low duty of care and a high merchantability requirement on the supplier; i.e., because the supplier cannot reasonably determine the use of the software, they cannot determine precisely what duties they have to the user but the software must be of saleable quality in some sense.

Bespoke software has a high duty of care and a low merchantability requirement on the supplier; i.e., the supplier must take specific care to meet the expectations of the customer and is bound by a contractual obligation for acceptability.

The thesis also contains indications from marine law which could be used as precedent that software companies must supply their workers with tools to ensure quality and must also ensure that the tools provided are used.

Finally, after a very long discussion of whether software is a good (thing) or a service (expertise), I think that Mr. Hatton determines that shrink-wrapped software is neither, because the common law only allows a customer to return a good within a limited time, after which it becomes “used” and the reasonableness argument assumes that any failure is more likely to have developed after creation as a consequence of use. Goods (in UK law) are also supposed to be free of minor defects unless the defects are specifically identified to the customer. Mr. Hatton argues forcefully that the software state of the art, even at the very best suppliers, is incapable of meeting this standard (and that it is impossible in general to predict the effects of a defect, citing the single missed line on the AT&T switches which cut phone service to New York). Contracted-for bespoke software might be argued to be a service.

In comparing UK law with US law in the software area, Mr. Hatton notes that US law appears to have missed the distinctions between shrink-wrapped and bespoke software and is overprotective to the supplier.

Finally, the normal expectations of a contract are that it is a last resort between the involved parties and is devised to return both parties to the state which existed before the contract. In software projects, failure is almost inevitable; the contract will therefore be used, and both parties should try to make sure it is constructed in such a way that both can take something from what has been achieved, even if it is less than was originally desired.

From: AMurray cmp.com
Subject: Liability for insecure software

You wrote “Until software companies are held liable for the code they produce, they will continue to pack their software with needless features and neglect to consider their associated security ramifications.”

Holding software companies liable sounds appealing, but I think it may cause more problems than it fixes. For one, if you think Microsoft is secretive about vulnerabilities now, what happens when those vulnerabilities could lead to lawsuits? Rather than create more incentive to be proactive, I think the threat of liability would give software companies more incentive to be secretive. For instance, how would legal liability affect the creation and distribution of patches? If a company has to release a patch, is that a legal admission of a defective product?

I think software companies should be held liable in the marketplace, rather than in court. I’m willing to bet that after all the pain Code Red and Nimda caused to network administrators, they’ll think twice about buying Microsoft products again. And if they continue to buy Microsoft products, they should be fully aware of the risks that entails.

I’m not saying that Microsoft should be relieved of any responsibility for buggy code and for products that emphasize features to the detriment of security, but I also don’t think the company will survive if they keep putting out a shoddy product. I say let market forces act as a check on bad software rather than trial lawyers or the government.

From: Nathan Myers <ncm-nospam cantrip.org>
Subject: Microsoft security P.R. upheaval

If Microsoft’s claimed change of policy about the security of their software is, in fact, a sham, we should see detectable consequences. As you noted in your news.com article, any actual change must result in a major slowdown in releases of new products and product features.

Before any such change (or lack of one) is evident, though, the first hint must be a change in their P.R. approach to discovered holes. Until now their spin has been that security holes just don’t matter very much. They posted patches on their (indifferently maintained) site, but wouldn’t do anything so expensive as recalling the faulty product from the distribution channel, or notifying affected customers, or offering refunds (never mind paying customers’ expenses).

Now that security holes have been officially recognized, they can’t be treated as merely cosmetic—the equivalent of a Cracker Jack box with no toy—but a real response is expensive. If the new security focus is a sham, expect to see more official denial. Most security holes will get only P.R. treatment, portrayed as “ordinary” bugs, or blamed on incompetent users, insufficient firewall protection, or “terrorist” hackers. There might be a quota, where no more than four holes per year may be treated as (expensively) real, while the rest are officially buried.

Their problem is that secure software isn’t just software that has been audited for buffer overflows. Software is so complex that almost any fault can have mysterious consequences, any of which may (also) be a security hole. As the OpenBSD Project has explained for years, the only secure software is correct, reliable software. You don’t get that by adding a security officer or auditor to each product team. It takes a complete overhaul of the software production process, and a complete turnaround in the attitudes of the entire engineering and engineering management staff. Without such a wholesale overhaul, the flow of bugs and (consequent) security holes will continue unabated, despite any management prohibition.

I sat next to a Microsoft coder (and sometime manager) on a flight from Seattle recently. He explained that as long as a coder’s bug count was below some level, the bugs could be ignored, and the coder could continue implementing new features. If the bug count crossed the threshold, he would have to stop until it was brought back down—not to zero, just to the limit. This systematic tolerance for faults of all kinds is why their software is so bad today, and it won’t change quickly. Nothing in the press release suggested that they saw security as inextricably connected with reliability.

In the meantime, P.R. games are far cheaper, and arguably more effective. Is the problem really that Microsoft products are shabby and insecure, or that they are now perceived so? Everybody who would like to continue business-as-usual will say it’s the latter. They will play up the effectiveness of Microsoft’s “responsiveness” to security holes, and pretend that “effective response” is a substitute for shipping reliable code to begin with. Reliable code, after all, doesn’t generate fawning press, or indeed any press at all.

I saw a similar process in action, starkly, sixteen years ago. IBM and HP had both introduced their first PCs with internal 10-megabyte disk drives. The HPs cost a little more. IBM offered theirs with a “service contract” at about twice the price difference. Over the course of the next year *all* the IBM drives failed—which, it turned out later, IBM had expected—while HP’s mostly survived. IBM got reams of favorable press about how good their service was, for replacing the drives on the spot (albeit only for customers who had bought the service contract!). IBM came away with a reputation for good customer service. HP got creamed.

In summary, if the new security policy is a sham, expect to see Microsoft engage in periodic, massively orchestrated “responses” to selected embarrassments, and to become much more reticent about the rest. Expect no change in their warranty disclaimers. Expect analyst reports proclaiming that MS products are now *more* secure than the competition. The effect will be a net decrease in the ability of their customers to maintain secure servers, yet if the P.R. campaign succeeds, most customers will perceive the “security problem” as solved, and continuing reports as stubbornly persistent old news.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography. Back issues are available on <http://www.schneier.com/crypto-gram.html>.

To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of “Secrets and Lies” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide.

<http://www.counterpane.com/>
