Thoughts on the Security of qmail

Dan Bernstein wrote an interesting paper on the security lessons he's learned from qmail.

My views of security have become increasingly ruthless over the years. I see a huge amount of money and effort being invested in security, and I have become convinced that most of that money and effort is being wasted. Most "security" efforts are designed to stop yesterday's attacks but fail completely to stop tomorrow's attacks and are of no use in building invulnerable software. These efforts are a distraction from work that does have long-term value.

Very interesting stuff, some counter to conventional security wisdom.

I have become convinced that this "principle of least privilege" is fundamentally wrong. Minimizing privilege might reduce the damage done by some security holes but almost never fixes the holes. Minimizing privilege is not the same as minimizing the amount of trusted code, does not have the same benefits as minimizing the amount of trusted code, and does not move us any closer to a secure computer system.

Posted on November 16, 2007 at 6:47 AM • 43 Comments

Comments

William Morriss • November 16, 2007 7:38 AM

I'm not sure I agree with the author's argument that money is being wasted because "Most 'security' efforts are designed to stop yesterday's attacks but fail completely to stop tomorrow's attacks and are of no use in building invulnerable software." It seems to me that there's good value in defending yourself from yesterday's attacks. For example, you can avoid being victimized by people looking for soft targets using known exploits. Invulnerable software might be nice, but the pursuit of (likely unattainable) invulnerable software shouldn't distract from the value of learning from the past.

I posted in more detail, including some discussion of legal implications of security at http://ephemerallaw.blogspot.com/2007/11/...

simpleton • November 16, 2007 7:53 AM

Thank you for the insight, Bill. I don't think anyone has considered that point before.

Guillaume • November 16, 2007 8:40 AM

I'm not sure about the least privilege security wisdom...

I always use a limited user account (even under Vista, I do not use the "one click away from being admin" account). I use it on purpose, exactly because I know there are vulnerabilities. (I run XP, 2003, Vista, Debian and Solaris, same goes for all of them).

Maybe it's just the interpretation of least privilege that is wrong? Least privilege is one thing, and limiting trusted code is another. I don't know qmail, but even if there weren't a single line of trusted code in it, I would not run it as root. I would apply least privilege to it.

It doesn't cost anything, and it might help protect against tomorrow's attacks.

Unfortunately, not all software works in a least-privilege environment...

Hans • November 16, 2007 8:46 AM

Now what's wrong with having both code executed with the least privileges needed _and_ a reduced amount of trusted code?

krupa • November 16, 2007 8:49 AM

If you think adhering to least privilege is going to make your code secure, you need help. Least privilege prevents certain classes of attacks. E.g., if you run as a limited user on Windows, an attacker can't write to system-wide registry keys, install a service, or read/write other people's data.

Least privilege, like every other security mechanism, is not a panacea. His example in the paper proves that. It's good to prevent code from accessing things it doesn't need, but that's not all you have to worry about. This is why security is hard!

Rr • November 16, 2007 8:57 AM

I'm confused by Dan's argument about least privilege. It seems to me that much of the 'eliminating trusted code' work is itself applying the principle of least privilege. Maybe just not in the 'traditional Unix sense'.

Still a worthwhile read.

Sam Greenfield • November 16, 2007 9:08 AM

William--

You wrote, "Invulnerable software might be nice, but the pursuit of (likely unattainable) invulnerable software shouldn't distract from the value of learning from the past."

Most of Bernstein's paper discussed vulnerabilities in other MTAs by comparing them with the design of qmail. Many points of the paper focused on sendmail, one of the oldest pieces of software on the Internet still in common use today (first shipped in the early 80s).

I'm curious how you drew the conclusion that Bernstein was ignoring the past in the pursuit of bug-free code. If anything, I believe his design for qmail was based on fundamental security flaws with other MTAs.

An open question I wish the paper had addressed: when is it appropriate to dump an entire product because of fundamental design flaws and rewrite it from scratch?

derf • November 16, 2007 9:17 AM

Semantics. Least privilege for code is the same as "untrusted code". For example, there's no valid reason for calc.exe in Windows to have read/write privilege to the network, hard drive, user files, or memory locations of other programs, or to any other location except possibly the clipboard. No user action should be able to grant this program such access, because the program, by design, has no built-in facility to use it. However, the security frameworks of current operating systems have no way to set this. A simple bug in calc.exe could allow someone to completely undermine the entire operating system's security, because the program itself is not under any "least privilege" or "untrusted code" restraints.

Milan • November 16, 2007 9:17 AM

I really wish GMail had the option of telling you when and where the last login happened from. As it exists now, someone else can access your account at the same time you do (provided they got your password somehow) and there is no way to know it.

Hen • November 16, 2007 9:19 AM

"I have become convinced that this "principle of least privilege" is fundamentally wrong. Minimizing privilege might reduce the damage done by some security holes but almost never fixes the holes. Minimizing privilege is not the same as minimizing the amount of trusted code, does not have the same benefits as minimizing the amount of trusted code, and does not move us any closer to a secure computer system."

I'm not sure I understood correctly. I would agree with the opinion that the "principle of least privilege" is not enough, but why fundamentally wrong?

Logical Extremes • November 16, 2007 9:28 AM

William makes a good point. Like Security & Freedom, closing past holes & developing better architectures are not mutually exclusive. Just think how much cleaner the Internet would be in terms of SPAM and Windows malware if all PCs were cleaned and updated.

Brandioch Conner • November 16, 2007 10:06 AM

I agree ... sort of ... but not completely.

First off, systems can only be attacked if there is an avenue of attack available. Since we're talking about an email service, that pretty much means it has to accept connections from unknown machines on the Internet.

Meanwhile, my Ubuntu workstation does not. So it is completely invulnerable to outside attack (unless someone can crack the TCP/IP stack).

Anyway, I agree with him about running the least amount of code. But I disagree that this means running it with the least privilege is "fundamentally wrong".

You cannot know that your code is perfect. So you have to operate on the assumption that it is not.

Which means you have to take steps to limit the access that that service has, in case it is cracked.

That means running it in a jail (if possible) and running it with the minimum rights needed (and looking for ways to further reduce the rights needed).

And there's nothing that says you cannot do all of those concurrently.

My Ubuntu workstation doesn't have any open ports, but I still log in as a regular user with restricted rights.

Anonymous • November 16, 2007 10:15 AM

@Brandioch, not necessarily true.
accepting email != accepting connection

Your machine accepts connections only from trusted machines, but if a malicious email originating elsewhere is forwarded to you by one of those machines you can still be toasted.

If you use defense in depth (as you seem to) it's less likely, but for clarification: it's not the connection, it's the content that matters.

Bryan Feir • November 16, 2007 10:55 AM

@Anonymous

As you said to Brandioch Conner, not necessarily true.

For an email server, most of the security issues don't really involve the contents of the message; that can be considered an opaque block for the most part. The bigger security issues for the server involve the SMTP commands that act as the message envelope. Remember that the destination for the email isn't necessarily in the message itself, it's in the SMTP envelope; the presence of To: and Cc: headers in the message is not required by SMTP.

If your machine only accepts connections from trusted machines, then presumably you trust those machines not to use you as a relay or twist the SMTP commands in an attempt to cause a security breach.

Now, a malicious payload in the email is something else, but that is really more to do with the security of the mail READER application than the mail server.
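Bryan's envelope/header split is easy to demonstrate with Python's standard library: the envelope recipients (RCPT TO in the SMTP dialogue) are passed to the server separately from the message, so the To: header a reader sees need not match where the message actually goes. A minimal sketch (the addresses and hostname are made up; the smtplib call is commented out so nothing connects):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"          # header: what the reader sees
msg["Subject"] = "envelope vs. header"
msg.set_content("Routing is decided by the envelope, not these headers.")

# The envelope recipient is supplied separately and need not appear
# anywhere in the message itself -- this is exactly how Bcc works:
envelope_rcpts = ["carol@example.net"]

# import smtplib
# with smtplib.SMTP("mail.example.net") as s:
#     s.send_message(msg, to_addrs=envelope_rcpts)
```

The server delivers to carol even though her address never appears in the message text.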

Alan • November 16, 2007 10:56 AM

I wish Dan would learn some other lessons involving QMail.

Ever try to set up greylisting or spam filtering or amavisd or any other process to filter mail on it? It is painful. Most of them require a software hack to make them work correctly at all. (And give up if you are using Plesk. They use a version that is just plain useless.)

Today's Dilbert applies to Qmail. It is restricted to the point of uselessness.

And my final comment on Qmail...

Never trust a queue structure designed by a cryptographer.

havvok • November 16, 2007 11:04 AM

The principle of least privilege and reduction of trusted code are not the same thing: least privilege restricts a process to the privileges a specific task requires, while trusted code is a component or section of a program that requires increased privilege to perform its task.

It may seem like splitting hairs, but it is crucial to understand. By understanding which components of an application require increased privilege, it is possible to isolate them; once they are isolated, the developer can separate them into independent modules, and then apply the concept of least privilege to the environment.

Example: an application that accepts incoming requests to process payment data. A service listens for incoming requests and hands them off to a process that parses the data into a uniform structure and performs validation. The parser then supplies the data to a payment processor, which inspects the structure and validates its contents. If everything meets the criteria, it processes the payment and returns a result.

By splitting it across multiple processes you now have the ability to jail each process, or to run each process on different hardware, so that compromise of one component can't affect another. In addition, you can use all manner of firewall rules, operating-system hardening, and such to reduce the access in each environment until everything is locked down.
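A sketch of that pipeline in Python (the stage names, validation rules, and wire format are all invented for illustration): each stage runs in its own process and sees only what the previous stage hands it over a pipe, so each could be jailed or moved to separate hardware independently.

```python
import json
from multiprocessing import Pipe, Process

def parse(raw):
    """Exposed stage: turn untrusted input into a fixed structure."""
    req = json.loads(raw)
    return {"card": str(req["card"]), "amount": int(req["amount"])}

def process_payment(req):
    """Privileged stage: re-validate, then 'charge' the payment."""
    assert req["card"].isdigit() and req["amount"] > 0
    return {"status": "ok", "charged": req["amount"]}

def run_stage(fn, conn):
    """One stage in its own process; its only channel is the pipe."""
    conn.send(fn(conn.recv()))
    conn.close()

def handle(raw):
    data = raw
    for fn in (parse, process_payment):   # a fresh process per stage
        parent_end, child_end = Pipe()
        p = Process(target=run_stage, args=(fn, child_end))
        p.start()
        parent_end.send(data)
        data = parent_end.recv()
        p.join()
    return data
```

The point is the boundary, not the logic: a compromised parser can hand the processor garbage, but the processor re-validates and the parser never holds payment privileges.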

I understand that the above example is ridiculously contrived, but hopefully it illustrates the difference between reducing trusted code and implementing least privilege. Or maybe I should just grab another cup of coffee :P

partdavid • November 16, 2007 11:17 AM

I don't see why anyone who's read the article would be confused about the distinction between the "principle of least privilege" and "minimizing trusted code."

Untrusted code, by definition, cannot violate the security regime. Code running under "least privilege" can violate it "a bit." There are several examples in the paper describing the distinction.

Analogy: Invaders are pests, and we have a screen door. What Bernstein is saying is that opening the screen door allows pests in, even if you're opening it *just enough* to allow your partygoers in and out.

Thanks to whoever gave the screen door analogy; I've forgotten who it was (it was here or on slashdot).

Pat Cahalan • November 16, 2007 11:29 AM

DJB, as always, has something interesting to say, but he's got more than a few fundamental assumptions in this paper that I find disconcerting.

"Furthermore, to the extent that measurements indicated a bottleneck (as they eventually did for the message files on busy sites), I should have addressed that problem at its source, fixing the filesystem rather than complicating every program that uses the filesystem."

Absolutely true. Totally irrelevant to the real world, and completely unscalable. If everybody wrote software like this, we'd have a bunch of completely unusable software; can you imagine if you had to install and use djbfs in order to run qmail, install and use httpfs in order to run apache, etc.? At some point, when writing software, the developer has to assume constraints; they can only use the system that they're developing the software *for*. DJB, on one hand, seems to understand this ("Why write the same code again?") while simultaneously denying it.

The second complaint refers to the entire section 3.3, and is reflected by Alan's comment above -> Qmail is built to be secure (and built pretty well), but at the expense of... well, it's difficult to use with anything else, but DJB won't incorporate anything else into the base code itself. Qmail sitting on version 1.03 since 1998 is not a recommendation. Qmail itself might be as secure as all get-out, but if you want to do anything more complicated than receive mail from the entire world and deliver it directly, you have to bolt large pieces of other software onto qmail, which causes a number of problems.

In short, DJB thinks of security inside the context of his products. Which is great, but he has fundamental problems dealing with both the system and the user, as evidenced by this paper. If the system's service (file service) is too slow to support qmail's dependency on the file service, that's the fault of the file service. Fix the file service. Since parsing is a difficult security problem, just don't do it.

Well, if you build software while ignoring system dependencies and user requirements, it's a lot easier to write secure software.

Rennie • November 16, 2007 11:30 AM

partdavid:

The problem is, how does one implement anything useful as "untrusted code"? If it has any mechanism of passing data back to "trusted code", then it /can/ violate security. This suggests that the whole system must be "untrusted" -- which is essentially the same as the whole system being "trusted", except that you admit that its behaviour can be Byzantine.

Dwayne • November 16, 2007 12:10 PM

Rennie, I suspect you haven't read DJB's paper in depth (his headings can be somewhat confusing). He's not suggesting doing away with all "trusted code", he's suggesting reducing the amount of trusted code through code reuse.

Brandioch Conner • November 16, 2007 12:12 PM

@Bryan Feir
"If your machine only accepts connections from trusted machines, then presumably you trust those machines not to use you as a relay or twist the SMTP commands in an attempt to cause a security breach."

How do you validate that it is a "trusted" machine?

Look up "phishing" and "spoofing".

havvok • November 16, 2007 12:18 PM

@Pat Calahan
Given DJB's track record, I can't fault you for assuming that he would write and ship djbfs, but his argument is sound. Rather than optimizing an FS for qmail, it would be better to develop a patch that optimizes the filesystem in general, to leverage the performance benefits in all software that uses the filesystem.

Leonardo Herrera • November 16, 2007 12:25 PM

It seems that many people think that Daniel is saying that running applications with lesser privileges is wrong. What he actually wrote is that this "principle of least privilege" is fundamentally wrong; that's different in my opinion. Maybe he should elaborate more on this, because it is a fascinating read.

Pat Cahalan • November 16, 2007 12:53 PM

@ havvok

His argument isn't sound. Or rather, it makes complete sense, but it doesn't apply to real-world scenarios.

Yes, obviously, the right way to solve file system optimizations is in the file system. But you cannot assume, when writing software, that you will be able to optimize the file system.

If you're building a car, you can't say, "Well, this suspension design is absolutely perfect for running on flat smooth roads. It's a pinnacle of suspension design, and thus ought not to be changed. The fact that all roads are neither smooth nor flat isn't the issue. Now we need to go out and rebuild all of the roads that this car might run on to be flat smooth roads." Sure, flat smooth roads are great for lots of reasons, and we really ought to have flat smooth roads for all of those reasons, but the agenda of the people who are responsible for maintaining the roads isn't your agenda. Their priorities aren't your priorities.

Yes, in a holistic sense, DJB is correct. Practical life is much more ugly, however.

This is my problem with DJB's design philosophy. He builds his software inside of a system of constraints that are defined by the software. When the software has to interact with the system, or the users, or anything outside of itself, the interface between the software and the outside world is constructed based upon the demands of the software without taking into account what is outside of it. It's very logical, very consistent, and very correct. It also assumes that the rest of the universe will adapt to your constructs' needs.

If you're building an end-to-end system, this is fine, and absolutely the right way to go. You control the user interface, the software, the system, the hardware, and the network. What you build is going to be excellent. If you're not building an end-to-end system (and who is, really?), you're not only making assumptions, you're making assumptions about what you can force the outside world to do, which is bad design.

Rr • November 16, 2007 1:51 PM

@partdavid -

It seems like a semantics game. While I agree that removing unnecessary functionality such as the screen door is a good thing, you don't always have that option - and Dan recognizes that, no doubt. So if I read the paper right (in particular the example in section 5.2), a solution is to make the screen door invisible by mechanisms such as sandboxing - which to me is a least privilege control.

cdmiller • November 16, 2007 2:15 PM

@Alan "Ever try to set up greylisting or spam filtering or amavisd or any other process to filter mail on it? It is painful. Most of them require a software hack to make them work correctly at all."

Yes, been there, done that, worked great. The pain appears to be proportional to the skillset of the sysadmin tackling the task. A mail server with 10-20 lines of configuration versus most others requiring 150+ lines of configuration...

havvok • November 16, 2007 3:00 PM

@Pat Calahan
From the qmail perspective, the requirement is to design a secure mail system. Performance is a feature, not a requirement. In this case the design decision is made to leverage a trust relationship with the underlying platform to provide organization to the application configuration (i.e. hierarchical filesystem controls).

Since the author is leveraging a trust relationship that does not impact the security requirement, it is perfectly reasonable to say that the correct way to optimize this is to correct the platform issue, not the system issue.

From a practical perspective, this is also a reasonable solution, as it allows the user to improve the performance of the system by addressing the bottleneck, which in this case is the filesystem. If the FS is too slow, the user can change the FS within the operating system, use RAID, or offload storage to a SAN that may provide better performance. Like any well-designed modular system, the modules that can be swapped out are not just the end-user components but also the components that host the system. This can be done with qmail to yield improvements without compromising qmail.

To extend your mechanical analogy, DJB is saying the car has an extremely durable suspension assembly, but you are going to be in for a rough ride if you use the wrong springs. Put that way, which is more reasonable: replacing the springs, or the assembly?

Pat Cahalan • November 16, 2007 4:17 PM

@ havvok

> From the qmail perspective, the requirement is to design a secure mail system.

No, it's not.

From the qmail perspective, the requirement is to design a secure MTA. DJB did that, no argument. In the late 90s, that was a laudable goal. However, an MTA != mail system.

Look, I'm not saying that qmail is useless. It's definitely better than the comparative monstrosity that was Sendmail. I'm saying that DJB's design process betrays prejudices, that's all. He has trouble dealing with the boundary conditions of software (where it hits the system, and where it hits the users).

> In this case the design decision is made to leverage a trust relationship with the
> underlying platform to provide organization to the application configuration
> (i.e. hierarchal file system controls).

Yes, and as a result there is a dependency upon this trust relationship. Which is fine, in and of itself, but DJB considers this to be unidirectional -> Any security problem that exists outside of the actual qmail code is by definition not the fault of qmail, but qmail's practical security depends upon outsourced trust.

I'm sorry, but I've always considered this to render his famous guarantee basically worthless. If you outsource the trust of part of your design to something else, you don't get to absolve yourself of the responsibility for ensuring that the outsourced component is secure. Or, more to the point, you can absolve yourself of this responsibility (it is free software, after all), but then claiming that your software is "secure" is hardly justifiable.

Beryllium Sphere LLC • November 16, 2007 4:58 PM

Dan Bernstein's approach of identifying and isolating "trusted code" sounds a lot like OpenBSD's "privilege separation".

It gets confusing because of the example he gives of a JPEG renderer which is sandboxed by forking it into a process with quotas set to prevent file creation and so on. To me, that seems like a least privilege control.
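That fork-with-quotas pattern can be sketched with POSIX primitives (Unix-only; the helper name and the single RLIMIT_NOFILE quota are illustrative, not DJB's actual code). The child keeps the pipe it already holds but loses the ability to open or create anything new:

```python
import os
import resource

def run_sandboxed(worker, payload):
    """Run worker(payload) in a forked child that cannot open new files;
    the result comes back over a pipe opened before the quota drops."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                               # child
        os.close(r)
        # RLIMIT_NOFILE is checked when a descriptor is *allocated*,
        # so the pipe we already hold keeps working, but any open()
        # or creat() from here on fails.
        resource.setrlimit(resource.RLIMIT_NOFILE, (0, 0))
        try:
            os.write(w, worker(payload).encode())
        finally:
            os._exit(0)
    os.close(w)                                # parent
    chunks = []
    while True:
        chunk = os.read(r, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    os.close(r)
    os.waitpid(pid, 0)
    return b"".join(chunks).decode()
```

Whether one calls this "untrusting the code" or "least privilege" is exactly the terminological confusion under discussion.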

Nix • November 16, 2007 5:07 PM

@Pat Cahalan, quite so. I suspect that this reinvent-everything habit of DJB's came first, and that the security justification is just that, a justification made after the fact. After all, qmail and all of DJB's other projects assume a new daemon-starting and monitoring tool to replace inetd et al., even a new non-Unixlike *filesystem layout*, for goodness' sake. I doubt DJB could reasonably claim that putting binaries in /usr/{bin,sbin,libexec} is a security problem: he reinvented them because he just didn't care how the existing system worked. Raging perfectionism can be useful sometimes, but over and over again in the design of qmail it proved extremely annoying and not especially helpful.

Ilya O. Levin • November 17, 2007 5:03 AM

> I have become convinced that this "principle of least privilege" is fundamentally wrong.

I'm looking forward to seeing the majority of security folks understand this. The principle is not only wrong, it is dangerous. Instead of solutions, it introduces other threats. Not to mention its misleading impact.

Steve Parker • November 17, 2007 8:05 PM

On "least privilege" - think about Solaris RBAC. You can say "these users can use printers"; "those users can use tapes", "they can use CD/DVDs", "admins can run shutdown", and so on.

That's all well and good, but what Dan is saying is that you might be pretty careful about who you assign Admin privileges to (they can run shutdown, after all), but you might give anybody access to printer-based commands. If there's a bug in "lpstat", they can all exploit it.

John Simpson • November 18, 2007 1:18 AM

Two points.

(1) For those who don't know, qmail itself is structured as a set of little tiny programs, each handling one specific part of the overall "mail server" job. The only part which runs as root is "qmail-lspawn", which starts local deliveries by setuid()'ing to the recipient's userid and then exec()'ing "qmail-local" to do the delivery. Other than that, qmail does not run as root; in fact it has seven different userids and two group ids, to ensure that different parts of qmail can't interfere with each other's data.
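The qmail-lspawn pattern John describes (fork as root, permanently drop to the recipient's identity, then exec the delivery program) looks roughly like this. The function name and the non-root fallback are mine, not qmail's; the actual privilege drop only happens when run as root:

```python
import os
import pwd

def spawn_local_delivery(user, prog, argv):
    """Sketch of the qmail-lspawn pattern: fork, drop root to the
    recipient's uid/gid in the child, then exec the delivery program."""
    pid = os.fork()
    if pid == 0:                          # child
        if os.getuid() == 0:              # only root may switch identities
            ent = pwd.getpwnam(user)
            os.setgroups([])              # shed supplementary groups first
            os.setgid(ent.pw_gid)         # group before uid, while still root
            os.setuid(ent.pw_uid)         # irreversible privilege drop
        os.execv(prog, argv)              # qmail execs qmail-local here
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

The ordering matters: setgid/setgroups must happen before setuid, because after the uid drop the process no longer has the privilege to change its groups.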

(2) Regarding his remarks about the "principle of least privilege"... I don't think he was saying that it's a bad idea in and of itself, but that many developers rely on it, to the exclusion of writing secure code to begin with. What I got out of it was that instead of spending so much effort trying to figure out how to minimize the damage which can be done by exploiting future bugs, developers should spend that effort to make their code bulletproof to begin with.

Of course, I think it's even better to pursue both goals- writing better code, and structuring it to limit the damage which can be done by future bugs. Nobody's perfect- I know I've found more than my fair share of bugs in my own code over the years...

stephanie • November 18, 2007 1:13 PM

Hm. DJB correctly cites Saltzer & Schroeder 1975 for articulating least privilege, but doesn't credit them for any of their seven other principles. "Economy of mechanism" seems particularly apropos here...

This seems to match my general observations, though - lots of talk about least privilege, to the [near] exclusion of any of the other principles.

cliff • November 18, 2007 10:30 PM

@Pat Calahan
>>From the qmail perspective, the requirement is to design a secure mail system. Performance is a feature not a requirement.

This is not true; performance is a requirement. I have been testing out self-destruct and other types of emails with security features from a new BigString.Com email account. It took one day for their emails to be delivered, and I cannot even open the attachments.

NMONNET • November 19, 2007 8:13 AM

"qmail and all of DJB's other projects assume a new daemon starting and monitoring tool to replace inetd et al,"

Pretty much, no: you can run most of qmail through inetd without changes to the binaries. The interface between tcpserver and its spawnees is the same as between inetd and its own.
What he did, though, is push the recipe further and turn inetd into svscanboot/svscan/supervise/tcpserver, and it works rather nicely IMO.

" even a new non-Unixlike *filesystem layout* for goodness sake. I doubt DJB could reasonably state that putting binaries in /usr/{bin,sbin,libexec} is a security problem: he reinvented them because he just didn't care how the existing system worked."

djb's tools don't need no stinkin' layout :) Seriously, I think he changed the layout *because he could*, i.e. his way of doing things is such that it doesn't require a specific layout, and is therefore less prone to bugs involving one. Indeed, most tools work with I/O limited to:
- cwd
- env vars
- standard input
- standard outputs
Additionally some tools read or write on a few unconventional fds.
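That interface is easy to see in code. A UCSPI-style handler never touches sockets itself: tcpserver (or inetd) has already connected stdin/stdout to the client and described the peer in environment variables such as TCPREMOTEIP. A sketch (the line protocol here is invented):

```python
import os
import sys

def serve(stdin=sys.stdin, stdout=sys.stdout, env=os.environ):
    """UCSPI-style handler: the program's whole world is stdin, stdout,
    and the environment; tcpserver/inetd did the network part already."""
    peer = env.get("TCPREMOTEIP", "unknown")   # set by tcpserver
    stdout.write(f"220 hello {peer}\r\n")
    for line in stdin:
        if line.strip().upper() == "QUIT":
            stdout.write("221 bye\r\n")
            return
        stdout.write("250 ok\r\n")
```

Because the handler is just a stdio filter, it can be tested with in-memory streams and swapped between tcpserver, inetd, or a shell pipe without recompilation.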

"Raging perfectionism can be useful sometimes, but over and over again in the design of qmail it proved extremely annoying and not especially helpful."

The main problem with djb is his unwillingness to release his work under an explicit free software license. He might have an unpleasant personality, but I can, and you should, get over it; overall it doesn't matter, results do.

CGomez • November 19, 2007 8:20 AM

@derf:

"For example, there's no valid reason for calc.exe in Windows to have read/write privilege to the network, hard drive, user files, memory locations of other programs, or any other location except possibly to access the clipboard."

You've pretty much nailed it. However, there is a lot of work to do here, and for now it's being pawned off on the developer. You, as developer, decide what privileges to assert and what to deny yourself. That's a good start, but the problem is most developers are not paid to think about these things. They are paid to ship... ship... ship.

The corporate mindset is _still_ "who wants to attack us? no one is interested in us..." and developers are told there's no time to think about such trivial things as what privileges does our application _really_ need.

Least privilege is what you as a user have to defend yourself from code you want to use, but can't trust. Sorry, but we want to use code we don't completely trust all the time... web sites, emails, games...

And the makers of these things, especially games, know that you want to use them so much more than anything else that they can print right on the box "requires administrator rights" and you will still buy it and play it! If those games were boycotted, then there's one development industry that would have to clean up its act.

Tomi Po • November 30, 2007 5:24 AM

DJB is right. I think he means that minimizing privileges of the running code is fundamentally wrong because it does nothing to enforce the quality of the code.

Minimizing privileges is a reaction to bad code, an attempt to minimize the damage. The same applies to firewalls and other "security software" that try to block malicious code. They don't fix the code they are trying to protect. All security software would be totally unnecessary if the code it is trying to protect were good enough not to accept input that can break it.

I think he is saying that we should concentrate on the code we are writing instead of placing our trust in principles and security frameworks.
Please don't think, "I have used X, so I must be safe." If you can't avoid thinking that way, maybe you shouldn't be a software developer.

He is saying that if we really know what we are doing it will lead to simpler software, with less code and less bugs. It's about improving programmers. That's what code minimization is all about.

Tomi Po • November 30, 2007 6:43 AM

"Problems cannot be solved from the same state of awareness that created them." −Albert Einstein

Someone else said "Problems cannot be solved by the same process that created them".

We can't fix software bugs by writing more software.

So, if there is one bug per N lines and you add XN lines in order to improve security, it's likely that you have just added X more bugs to your code.

