Making an Operating System Virus Free

Commenting on Google’s claim that Chrome was designed to be virus-free, I said:

Bruce Schneier, the chief security technology officer at BT, scoffed at Google’s promise. “It’s an idiotic claim,” Schneier wrote in an e-mail. “It was mathematically proved decades ago that it is impossible—not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible—to create an operating system that is immune to viruses.”

What I was referring to, although I couldn’t think of his name at the time, was Fred Cohen’s 1986 Ph.D. thesis where he proved that it was impossible to create a virus-checking program that was perfect. That is, it is always possible to write a virus that any virus-checking program will not detect.
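Cohen's proof is, at heart, a diagonalization argument in the style of the halting problem: assume a perfect detector exists, then construct a program that asks the detector about itself and does the opposite. A toy sketch of the idea (not from the thesis; the names are purely illustrative):

```python
def contrary(detector):
    """Diagonal program: ask the claimed-perfect detector about
    itself, then do the opposite of whatever it predicts."""
    if detector(contrary):
        return "stay benign"   # flagged as a virus -> false positive
    else:
        return "spread"        # declared clean -> false negative

# Whatever a detector answers about `contrary`, it is wrong:
assert contrary(lambda p: True) == "stay benign"   # said "virus", behaves benignly
assert contrary(lambda p: False) == "spread"       # said "clean", it spreads
```

No matter how the hypothetical detector decides, `contrary` makes it wrong, so no detector can be correct on every program.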

This reaction to my comment is accurate:

That seems to us like he’s picking on the semantics of Google’s statement just a bit. Google says that users “won’t have to deal with viruses,” and Schneier is noting that it’s simply not possible to create an OS that can’t be taken down by malware. While that may be the case, it’s likely that Chrome OS is going to be arguably more secure than the other consumer operating systems currently in use today. In fact, we didn’t take Google’s statement to mean that Chrome OS couldn’t get a virus EVER; we just figured they meant it was a lot harder to get one on their new OS – didn’t you?

When I said that, I had not seen Google’s statement. I was responding to what the reporter was telling me on the phone. So yes, I jumped on the reporter’s claim about Google’s claim. I did try to temper my comment:

Redesigning an operating system from scratch, “[taking] security into account all the way up and down,” could make for a more secure OS than ones that have been developed so far, Schneier said. But that’s different from Google’s promise that users won’t have to deal with viruses or malware, he added.

To summarize, there is a lot that can be done in an OS to reduce the threat of viruses and other malware. If the Chrome team started from scratch and took security seriously all through the design and development process, they have the potential to develop something really secure. But I don’t know if they did.

Posted on July 10, 2009 at 9:44 AM • 112 Comments

Comments

Gweihir July 10, 2009 10:16 AM

I may be off here, but isn’t Chrome just another Linux distribution, this time targeted at the masses?

Steven Scott July 10, 2009 10:16 AM

This is like the claim made at Apple stores that “Macs can’t get viruses”. That OS wasn’t popular until recently. The truth is, Chrome OS won’t have any viruses, not for a little while at least. It’s a silly thing for a company to say, but the end user’s experience will likely reflect it for the time being, so it makes sense from a marketing perspective.
I wouldn’t worry about being overly harsh in your reaction to it; someone has to balance out all the fluff.

Cheers,
Steve

FP July 10, 2009 10:23 AM

It’s perfectly possible to make an operating system virus free. Start with making just about everything read-only. But in the process it would cease to be a general-purpose operating system and become a glorified typewriter, gaming console, or, in the case of Google, a browser.

Jon Solworth July 10, 2009 10:25 AM

There are two interesting questions that come to mind. First, is there a provable statement that some bad thing (i.e., a violation of a safety property) does not occur? Second, is the guaranteed property significant?

Designing a new OS can make progress on these fronts without needing to recognize malware, for example by ensuring that only code from trusted sources executes.

Jon

Nigel Sedgwick July 10, 2009 10:26 AM

Whilst the lack of agreement on the definitions of operating systems and of viruses makes it difficult to apply absolute certainty to this issue, I would suggest the following might be useful.

A computer system that has its operating system and principal support programs installed on a primarily read-only medium: one that can be rendered writeable only by a switch (big, red, with lots of warning messages and perhaps key-operated). In this way, with suitable constraints on what will be run automatically by the operating system at boot time or later, and on when write access is allowed to the O/S storage medium, a lot of the obvious ways of corrupting an operating system are removed. Such a hardware mechanism removes many of the problems with security faults in system software.

In particular, and subject to certain modest constraints on users, rebooting the system would provide a clean virus-free environment, at least until users started doing things.

Next, it would be better to avoid or reduce use of the sorts of plug-ins that run arbitrary code from unsupervised sources. Such things as unsandboxed macros within word processor files (eg MS Word .doc format) should be avoided; .rtf is a much safer format to use for emailing or downloading most word processor documents. Likewise scripting from web pages.

Distributing O/S upgrades and installations of applications as executables gets ‘null points’. At least leave control of upgrading to a piece of software that, on a clean machine, has the ability to check what is being installed before installing it.

All the above make for more secure computers in the real world, in which there are ignorant and careless users, and help to limit the opportunities for security mistakes to be exploited.

Best regards

Calvin July 10, 2009 10:33 AM

Shouldn’t there be a distinction between a virus-proof OS and a checker-proof virus? Wasn’t the latter what the paper specified? I can think of many cases, some not even abstruse, where a computer system would be immune to remote attacks for all practical purposes.

dragonfrog July 10, 2009 10:42 AM

How about this scheme – don’t rely on virus detection (enumerating badness) at all, rely on the user explicitly signing each executable before it’s run (an owner-maintained list of intended uses).

Issue a smart card with each computer that holds the private key (and make it easy enough to get it replaced, or you’ll brick the things fast). At installation, the executables that are part of the OS are pre-signed.

Optionally you could require an SSL-cert-like basic identity validation process, so every executable that comes out must be signed by an issuer.

Assuming you include the issuer-signing part, each time an executable seeks to run, the OS’s choice tree goes:

  • is the executable signed by both the issuer and the computer’s owner? Run it.
  • is the executable signed by an issuer but not the computer’s owner? Ask the user to sign it or reject it. (At that point you could potentially have the issuer’s certificate contain a requested capability list for the app, and the user can choose which of those capabilities to grant – e.g. Yes run this word processor, but don’t let it talk to the Internet like the vendor requested)
  • is the executable not signed by an issuer? Don’t even bother the user, reject it out of hand.

This misses two kinds of viruses, as I see it:

  • viruses written in a scripting language, such that the executable that’s run is a trusted one (a shell or interpreter)

  • honour system viruses, which simply ask the user to download and run a program with an enticing come-on line (see Britney Spears dance with Hamsters at the Super Bowl in a Hurricane)
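For illustration, the three-way launch policy sketched above could look roughly like this. The set-membership tests stand in for real cryptographic signature verification, and all names here are hypothetical:

```python
# Decision tree for the dual-signature scheme: an executable runs
# only if both the issuer and the computer's owner have signed it.

def launch_decision(exe, issuer_signed, owner_signed):
    """Return the OS action for `exe` under the dual-signature policy."""
    if exe in issuer_signed and exe in owner_signed:
        return "run"
    if exe in issuer_signed:
        return "prompt-owner"   # ask the user to sign it or reject it
    return "reject"             # no issuer signature: refuse outright

issuer_signed = {"wordproc", "browser"}
owner_signed = {"browser"}

assert launch_decision("browser", issuer_signed, owner_signed) == "run"
assert launch_decision("wordproc", issuer_signed, owner_signed) == "prompt-owner"
assert launch_decision("malware", issuer_signed, owner_signed) == "reject"
```

As the comment notes, this still says nothing about scripts run by an already-trusted interpreter, which is exactly the first gap listed above.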

Carlo Graziani July 10, 2009 10:59 AM

Focusing on OS security is missing the point. The weak link in the security of consumer computers is the user, not the system. A clueless user browsing the web over OpenBSD is a more vulnerable target than an experienced user using Windows.

What is really required to control the malware epidemic is user education, and possibly liability. Networked computers are like cars — massively useful but potentially dangerous consumer devices. If we treated careless computer users the way we treat careless drivers, most of the malware problem would disappear, in my opinion. In my view, even suggesting that any technical fix is possible absent measures to reform consumer behavior is tantamount to peddling snake oil.

UrbanSage July 10, 2009 10:59 AM

@dragonfrog
“rely on the user explicitly signing each executable before it’s run”

Yeah, because we all know that users don’t click “Yes…” when they should click “No…” and that we live in a world with no botnets because of it. Or did I get it backwards?

Peter July 10, 2009 11:00 AM

It seems to me that there’s a big difference between a perfect virus scanner and a virus-proof OS. For example, if I were to design a trivial OS that didn’t allow the user to run any outside code at all, wouldn’t this be virus proof?

Thunderbird July 10, 2009 11:12 AM

“It seems to me that there’s a big difference between a perfect virus scanner and a virus-proof OS. For example, if I were to design a trivial OS that didn’t allow the user to run any outside code at all, wouldn’t this be virus proof?”

Yes. However, much of what we think of as “useful stuff” we do on computers requires the ability to interpret programs loaded from elsewhere. The first and nastiest example: the typical web page is a gigantic collection of JavaScript. You can argue that web pages shouldn’t work like that, and you might be right, but that’s just the way it’s come to work.

Mark R July 10, 2009 11:23 AM

I think the real question is whether security will trump other concerns like user convenience in the design process. As somebody pointed out, there is already an OS (OpenBSD) that emphasizes security at every stage of the design process. It’s not going to give Windows a run for its money in terms of mass appeal, but it’s there for people who are willing to give up support for all the latest-and-greatest features in exchange for a highly trusted OS.

Likewise, most people will tell you they care about anonymity on the net, but when they see the performance hit incurred by using TOR, they will decide they don’t care that much after all. I’m not saying they’re wrong, either… it’s a question of priorities.

Given that this new OS is pitched for mass market appeal, I can’t see how they will magically eliminate the security vs. usability problems that have plagued other software companies.

dave July 10, 2009 11:25 AM

@dragonfrog:

Configuration/data files must necessarily be modifiable, and must be interpreted by the signed software. Your system is one buffer-overflow away from having a virus…

An interested member of the community July 10, 2009 11:37 AM

Bruce,

Off-topic I know, but given BT’s recent announcement, is there any chance you’re now free to break the Omerta on Phorm?

Just thought that it can’t hurt to ask.

Knox July 10, 2009 11:47 AM

The whole business about educating users is way overblown. There is simply no practical way for any user, no matter how educated, to know whether it is completely safe to point their web browser at any given URL.

Turning off Javascript is no longer an acceptable solution, by the way.

Swashbuckler July 10, 2009 11:49 AM

“In fact, we didn’t take Google’s statement to mean that Chrome OS couldn’t get a virus EVER”

Hmmm… users “won’t have to deal with viruses” sure sounds like Google’s claiming it could never get a virus. The only way a user won’t have to deal with viruses is that the OS could never be infected by one…

Bateau July 10, 2009 11:52 AM

While I agree that it is likely, if not certainly, impossible to develop a system that could detect and then react to a virus (without false positives and false negatives rendering the system terribly problematic for normal use), I believe that is the wrong approach to take should one wish to develop a truly virus-free operating system (and I hope that the developers involved in this project are aware of this, and have instead taken a known good route).

First of all, and most obvious in this field, I’d like to point to the good work done on this subject by the National Computer Security Center at the NSA in the “Orange Book”, DoD 5200.28. In it they developed a formal method to evaluate the security features present within a computer system as a whole, with a focus on high-assurance systems. Of note in this context are the goals to be demonstrated by an A1 system, including, most importantly, the fact that the security kernel of the system is to be formally defined (most often with a finite-state model), and as such, it can be proven that no process can violate the security policy enforcement of the system.

This doesn’t need a “read-only” environment, and it doesn’t work by “detecting” a virus and then reacting. Rather, it takes advantage of the fact that computers are complete finite-state machines (as given, in the earliest examples, in Shannon’s papers from the ’30s and ’40s). As finite-state machines, if we use mandatory access control rather than the more common discretionary kind, we can guarantee that no process, malicious or benign, can modify data that should be protected (such as executable code stored for another program, or data in memory).

The earliest well-worked example of this, though without high enough assurance not to fail, was of course Multics. Multics was developed based on the research from Ware’s 1970 “Security Controls for Computer Systems”, James Anderson’s 1972 “Computer Security Technology Planning Study”, and David Bell and Len LaPadula’s 1973 “Secure Computer Systems: Mathematical Foundations” (which of course led, once Multics was a working example, to their paper “Secure Computer Systems: Unified Exposition and Multics Interpretation”, a major basis for the TCSEC and much of the other content in the “Rainbow Series”).

Following this flurry of research into the field, the proof of concept given by Multics, and the additional research its development and deployment prompted, multiple systems were developed at the A1 level as proofs of concept (though as far as I know most, if not all, of them remain classified to this day beyond the most basic of information, and in the case of Gemini Computers’ GEMSOS system, variations remained virtually unused when they were made available).

In summary, I would say that not only has it been proven mathematically possible to create a computer system that is immune to malicious software (within the context of that software’s ability to spread to other programs on the system and externally, since it cannot access the trusted path within the security kernel to authorize its action), there are several proofs of concept and worked examples of this fact (such as Blacker).
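As a rough illustration of the Bell-LaPadula rules underlying that model, here is a minimal sketch of the two mandatory access control checks: “no read up” (the simple security property) and “no write down” (the *-property). The levels and names are purely illustrative:

```python
# Bell-LaPadula in miniature: clearance levels ordered low to high.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def may_read(subject_level, object_level):
    # Simple security property: a subject may only read objects
    # at or below its own clearance ("no read up").
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    # *-property: a subject may only write to objects at or above
    # its own level, so data can never leak downward ("no write down").
    return LEVELS[subject_level] <= LEVELS[object_level]

assert may_read("secret", "confidential")
assert not may_read("confidential", "secret")      # no read up
assert may_write("confidential", "secret")
assert not may_write("secret", "confidential")     # no write down
```

Because the policy is mandatory, a process cannot opt out of these checks the way it can with discretionary permissions, which is what the comment’s infection-containment argument relies on.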

NM July 10, 2009 12:21 PM

It’s a matter of definition. Linux doesn’t have a virus problem; that doesn’t mean it doesn’t have malware: it does. But viruses are executables that infect other executables. This is different from worms or rootkits in that simple respect.
It’s not just a question of popularity that Windows gets infected by viruses and Linux does not. I believe it’s mostly due to RPM/DEB. You never download executables on Linux; in fact you rarely even download packages. You use apt-get or yum, and the packages are signed.
This is by no means an absolute protection, but as long as the upstream is secure, getting malware in downloaded executables is a million times less likely than suffering from a remote exploit or a weak password.
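That trust model can be sketched roughly as follows: the package bytes themselves are untrusted, but their digest appears in repository metadata whose signature is verified first. An HMAC stands in here for the real GPG signature that apt/yum repositories use, and all names are hypothetical:

```python
# Sketch of signed-repository package verification.
import hashlib
import hmac
import json

REPO_KEY = b"distribution-signing-key"   # hypothetical trusted key

def sign_metadata(metadata):
    """Sign the repository metadata (a name -> digest map)."""
    blob = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(REPO_KEY, blob, hashlib.sha256).hexdigest()

def verify_package(name, package_bytes, metadata, signature):
    """Accept a package only if the metadata signature checks out
    AND the package digest matches the signed metadata."""
    if not hmac.compare_digest(sign_metadata(metadata), signature):
        return False   # metadata itself was tampered with
    digest = hashlib.sha256(package_bytes).hexdigest()
    return metadata.get(name) == digest

meta = {"hello": hashlib.sha256(b"hello-package").hexdigest()}
sig = sign_metadata(meta)
assert verify_package("hello", b"hello-package", meta, sig)
assert not verify_package("hello", b"trojaned!", meta, sig)
```

As the comment says, this only helps as long as the upstream signing key and build process stay honest.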

Swashbuckler July 10, 2009 12:26 PM

@bateau

“if we use a mandatory access control, rather then the more common discretionary, we can guarantee that no process, malicious or benign, can modify data that should be protected (such as executable code stored for another program, or data in memory).”

That doesn’t stop infection from a virus.

Consider CVE-2009-1633 (http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-1633)

Buffer overrun, which could allow for remote code execution in the kernel (very hard in this particular case, but not impossible) – and in CIFS no less, which means it’s delivering network data to other processes, which can be infected before the data is handed to them. Even a system like Mach cannot prevent infection in this scenario.

There are always some subsystems that must be trusted for the OS to work properly. If those subsystems are compromised then malware cannot be stopped.

casey July 10, 2009 12:33 PM

The claim on the google blog was even more fantastic.

“…so that users don’t have to deal with viruses, malware and security updates”

Not only virus-proof, but without the need for security updates of any kind. This is the kind of statement that no one will remember when we see the first security flaw. Chrome (the browser) has already had such problems. Semantically, if the O/S’s only job is to start the browser, then you will never need an update; if the Chrome browser needs patches and it is the only thing running, then you have effectively patched the O/S.

The penultimate paragraph is where the real story is. You will not have to worry about backing up your files and they will be available anywhere. What do you think that means?

Jason July 10, 2009 12:34 PM

Google’s biggest issue here is that Chrome OS is not an operating system, but a window manager for Linux.

If a kernel bug is exposed, Chrome OS will be vulnerable.

To say that users won’t have to worry about viruses doesn’t imply that their system won’t be targeted or infected, just that the end user won’t notice (no performance impact) and won’t have to fix it (Google Update will automagically take care of it).

I imagine they will design a system that can “handle” being infected by malware without causing inconvenience to the end user. Consider how Chrome (the browser) isolates its tabs as separate instances. Taking this approach at the windowing system level would use more resources, but would keep an infected process from stepping on clean processes.

Clive Robinson July 10, 2009 12:53 PM

@ Bruce,

The “knowledge” that it is not possible to protect 100% against malware actually predates digital computers and relates to the meanings of “undecidable”.

Of relevance here are two distinct meanings of “undecidable”: one in mathematics and one in computer science (yeah, I know one is a subset of the other ;).

In mathematics, “undecidable” is used in the proof-theoretic sense, in relation to Kurt Gödel’s first and second incompleteness theorems: that of a statement being neither provable nor refutable in a specified deductive system.

The second meaning of interest relates to computer science and is used in relation to computability theory; it applies not to logical statements but to decision problems (such as halting), independently discovered by both Church and Turing. Put overly simply, the problem is to decide in advance the answer (yes/no) to each of a set of questions (such as “will this program halt”). Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the set.

So asking the question “is this malware” is undecidable in all cases. Further, as your computer is trying to judge within its own limited logic, it bumps into Kurt Gödel’s little theorems.

Then, if that were not enough to nail the lid down firmly, skulking in the wings is Heisenberg’s uncertainty principle. We know from an IBM researcher that a bit of information has a certain minimum energy value, which means (indirectly) that the uncertainty principle applies to information.

And the search for nails continues with entropy, which is really a measure of possibility and is effectively bounded only by an instant of time and the energy/matter in our universe.

All Fred Cohen did (and it actually was very good work for the time) was show that the theories of Gödel, Church, Turing, Heisenberg, Shannon et al applied to a specific type of undesirable information aka “malware”.

peri July 10, 2009 1:09 PM

@Clive Robinson: “is this malware”

Corollary: “is this Skynet” is also undecidable. For us, Skynet and Skynet’s Skynet.

Nora Rom July 10, 2009 1:17 PM

Is saying

“It was mathematically proved decades ago that it is impossible — not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible — to create an operating system that is immune to viruses.”

really the same as saying

“…it [is] impossible to create a virus-checking program that [is] perfect. That is, it is always possible to write a virus that any virus-checking program will not detect.” ?

I think not. I think the latter statement is most assuredly true but the former is neither necessarily true nor does it follow from the truth of the latter.

I think (intuitively) it is completely possible to design and produce an OS that is immune to viruses. I am less certain but still inclined to believe it is possible for an OS to be immune from all logical intrusion, though maybe not from physical intrusion.

Jakub Narębski July 10, 2009 1:19 PM

I think that this proof that virus detection must fail might have the same problem as proofs about encryption systems.

Also, “no” to perfect virus detection does not imply “no” to a virus-free operating system; the separation of user and admin (root) accounts alone would greatly reduce the probability and severity of virus infection.

Nora Rom July 10, 2009 1:22 PM

@Swashbuckler:

“Buffer overrun, which could allow for remote code execution in the kernel (very hard in this particular case, but not impossible) – and in CIFS no less…”

What if the OS was designed so as to be immune to buffer overrun? Seems to me that would make it immune to viruses and many other types of malware.

alt July 10, 2009 1:38 PM

@Peter
“For example, if I were to design a trivial OS that didn’t allow the user to run any outside code at all, wouldn’t this be virus proof?”

Not necessarily. Your trivial OS could have buffer overruns, which cause outside code to run.

It could also fail to check incoming parameters in such a way that an injection attack would cause outside code to run.

http://xkcd.com/327/

Also google the keywords: HP calculator synthetic programming.

Finally, this OS couldn’t possibly host a modern web browser, because then it would have to support JavaScript, and JavaScript on web pages is, by definition, “outside code”. Arguably, HTML or any other interpreted language is also “outside code”. Forbidding all those things would likely make this OS increasingly secure. It would also make it increasingly useless.

Tom July 10, 2009 1:38 PM

What gets missed in all of these discussions about secure operating systems is IBM’s mainframe operating system, z/OS. Part of z/OS security comes from software and part of it from hardware.

The mainframe was designed many years ago with security in mind, but the mind share in public discussion goes to making consumer software more secure, rather than focusing on the fact that there is an operating environment out there that is very secure and that most businesses use for their critical applications.

It would be very helpful to have Bruce study the z/OS operating environment and then give us his analysis of the mainframe’s secure OS.

AppSec July 10, 2009 2:32 PM

“so that users don’t have to deal with viruses”

It isn’t saying it is immune. It isn’t saying they won’t happen. It’s saying that if it DOES happen, things will happen behind the scenes and the user won’t be bothered by it.

What is Google good at? Managing data. Managing lots of data. Indexing lots of data. Manipulation of data. See where this is going?

And didn’t Chrome start sandboxing its browser processes?

So you have a company that is good at managing data and identifying changes to data, building an OS based on a browser which is supposedly good at sandboxing its processes.

Maybe I’m an idealist... but who knows.

Peter July 10, 2009 2:33 PM

@Those tearing down my lovely OS design.

Obviously you can have security vulnerabilities in any software, but that’s beside the point. My point is that it’s fairly easy to imagine an OS that really is immune to viruses. (The OS I proposed isn’t terribly useful, I agree, but that’s similarly irrelevant.)

Captain Oblivious July 10, 2009 2:54 PM

*** “There is simply no practical way for any user, no matter how educated, to know whether it is completely safe to point your web browser at any given URL.”

Agreed. I really hate the “don’t browse untrusted sites” argument – it’s like saying “don’t watch untrusted TV channels, or your TV might explode”… it’s a total cop-out on the part of the browser/OS makers.

A Nonny Bunny July 10, 2009 4:47 PM

@Captain Oblivious

I really hate the “don’t browse untrusted sites” argument – it’s like saying “don’t watch untrusted TV channels, or your TV might explode”… it’s a total cop-out on the part of the browser/OS makers.

I don’t see why. I can just google a site first, and that alone will tell me a number of things about it; in some cases even if it is infected with malware. AVG also scans the links in my google results.
And most urls I get, I get from people/sites I know and (somewhat) trust, and so it is by transitivity a trusted site. If, say, Bruce links to another site, I will trust it not to be infested with malware.
I rarely just type in some random url and wait in suspense where I end up.

fusiom July 10, 2009 6:05 PM

Maybe it wasn’t Cohen’s paper but one mentioned in a discussion on BBV:

In “An Undetectable Computer Virus” David Chess and Steven White of IBM show that you can always create a vote changing program (a virus in their context) that no ‘verification software’ can ever detect. They do this by a very clever argument which you can pursue in that paper, but the important thing to realize is that their results are not in doubt. You also should know that these arguments apply to every computer system that can ever be created. Therefore, if you use a computer anywhere in the vote counting process, you cannot be certain of the result.

http://www.research.ibm.com/antivirus/SciPapers/VB2000DC.htm

sed July 10, 2009 6:37 PM

@Peter
Obviously you can have security vulnerabilities in any software, but that’s beside the point. My point is that it’s fairly easy to imagine an OS that really is immune to viruses. (The OS I proposed isn’t terribly useful, I agree, but that’s similarly irrelevant.)

Well, yes, it is easy to imagine. It’s even easy to do. Right now. With any OS. Without installing any antivirus software, either.

Simply install the OS on an isolated computer. No network connection. No removable media. No physical access. Boom: completely immune to viruses. Guaranteed.

Of course, the existence of that isolated computer is also pretty much irrelevant if you’re planning to do any work with it. In fact, it may as well be turned off, which is another way of making it completely immune to viruses.

gilberto July 10, 2009 8:38 PM

The IBM z/OS operating system, properly configured with RACF or TOPSECRET, can be highly secure and reliable, but it’s not for laptops; rather, it’s for mainframe servers.

Brandioch Conner July 10, 2009 9:03 PM

This comes up all the time. And it is seldom addressed correctly.

All that it takes to make an OS “immune” to viruses is for the infection rate to fall below the removal rate.

It’s computer SCIENCE, not magic.

Now, how do we make it easier to remove viruses? How about focusing on identifying the files that SHOULD be there, so that all other files can be quarantined?
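A minimal sketch of that whitelist idea: record SHA-256 digests of the files that should be present, then flag anything unknown or modified for quarantine. The paths and function names here are hypothetical:

```python
# "Enumerate goodness": scan a directory tree and report any file
# whose digest is not on the known-good list.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def scan_for_suspects(directory, known_good_digests):
    """Return files whose digests are not on the known-good list."""
    return [p for p in Path(directory).rglob("*")
            if p.is_file() and sha256_of(p) not in known_good_digests]
```

This sidesteps the undecidability objection by never asking “is this a virus?”, only “is this file one we expected?”, at the cost of having to maintain the known-good list.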

Harry Johnston July 10, 2009 9:05 PM

I believe it would be possible to design a useful OS that, if bug-free, would be essentially immune to viruses and malware not deliberately introduced by a trusted administrator or by someone physically opening the case. The basic idea is for each document to be opened in a separate process which cannot interact with other processes or documents (except via controlled, user-initiated processes such as copy-and-paste).

It may one day even be possible to design such an OS with a formal proof that it is in fact bug-free, although it isn’t within the state of the art at present and I doubt it will be in the foreseeable future!

It should, however, be possible to significantly reduce the likely number of bugs by adopting safer coding practices, e.g., only allowing read-only pages to be executable, prohibiting function pointers, keeping the call stack in a separate address space, and so on. Multiple layers of protection could also be used to make it likely that it would be necessary for malware to exploit multiple bugs simultaneously to inflict severe damage such as a persistent rootkit.

Of course, this is all a statement of principle. I don’t for a moment believe that Chrome OS is based on these ideas!

robche July 10, 2009 11:49 PM

One way to design a very secure OS (and I am not sure if it would be completely secure) is to follow the path of the iPhone and the App Store concept.

If you don’t jailbreak the iPhone, all the applications installed on it have been checked by Apple. If a malicious application is not caught by the screening process, there is still the option for Apple to mass-remove the application later.

The only two questions that remain in this scenario are how to provision the installation of applications approved by the company you work for, and how to prevent a computer from being infected by visiting websites.

I suppose the latter is easier to prevent if the OS assumes that only applications from the App Store can be installed.

nemryn July 11, 2009 1:18 AM

I assume that the null OS would be truly (albeit trivially) virus-free. Not very useful, though.

henchan July 11, 2009 4:12 AM

“Computer virus” is a term that has never been questioned since it was first coined, and for good reason: the analogy with biological viruses is beyond apt. Now, how many organisms have managed to evolve complete immunity to viruses over our collective 3+ billion years of existence?

Clive Robinson July 11, 2009 5:02 AM

@ Brandioch Conner,

Your statement,

“All that it takes to make an OS “immune” to viruses is for the infection rate to fall below the removal rate.”

Is trivially disprovable. For any given virus to be removable, it has to be detectable, which means one of two things: a perfect scanner whilst the system is working, or, when the system is not operating, a perfect method of detecting changes.

The first has been shown to be impossible (Gödel/Church-Turing), and the latter is only possible if all parts of the system affecting the operating system are effectively immutable (that is, the system will be in a fully known state before being restarted).

The latter constraint can be difficult for people to get their heads around, as it also includes microcode in CPUs and I/O devices, but not ordinary RAM if it is fully initialised on start-up.

But such a system would be of limited use in reality, because effectively it is an embedded system implementing a state machine that does not take action based on the data it is processing. Examples are DSP systems and the like; effectively they are “blind filters” for the data.

Then there is the simpler point of your odd meaning of “immune”. Your statement,

“the infection rate to fall below the removal rate”

Clearly indicates that infection by one or more viruses is not just expected but will happen, and that at some later point in time it will be removed.

This is a little like saying humans are immune to bacterial infections because we have antibiotics that can cure a patient when they are infected (which has been shown to be untrue).

This is very like the definition of a “zero day” attack, which is probably the most devastating kind, in that in any given monoculture of sufficient size the chance of infection is nearly random in appearance.

The only members of the culture that cannot be infected are those in strict isolation from the members of the culture that are infected. From the human perspective it would be like living in perfect isolation, which we know from experience is generally not a productive state to be in.

Your following statement,

“It’s computer SCIENCE, not magic.”

Is as I said not true (halting problem).

Although I have a great deal of agrement with your view point as it is probably the engineering / practical way the issue will be resolved not just now but in the future the way you are saying it is wrong.

Alex July 11, 2009 5:35 AM

It’s surprising that nobody has mentioned the way modern Linux has become PRACTICALLY virus-free: does the word “SELinux” ring a bell for anyone?
All the compiler-level protections (ProPolice, address randomization and so on) just make life for potential viruses really difficult… but SELinux is exactly the technology that makes it PRACTICALLY impossible for a virus to propagate.
Of course, that is if we exclude letters like “please send 10 copies of this idiotic spam to your friends” from the definition of a virus.
Btw, where can I see the above-mentioned Ph.D. thesis? I wonder how “virus scanner” and “virus” were defined… not sure if it was correct.

Mark R July 11, 2009 6:39 AM

Re: “Don’t browse untrusted sites”

Surely you’re all aware of the explosion in legitimate sites (well-established businesses, major news outlets, etc) unwittingly hosting malware? SQL injection vulnerabilities in legitimate websites are used to inject malware and infect the sites’ visitors.

It’s not just warez & porn sites anymore. Lots of people who “didn’t browse untrusted sites” got infected anyway. “Run scripts by exception” is much better advice these days, though admittedly your grandma might not know what you’re talking about.

Vit Fargas July 11, 2009 6:55 AM

BS! Does a programmable VCR have problems with viruses? If you smack it with a hammer then yes, but normally not.

The virus concept is the result of pure bad engineering design: permanent memory, plus the ability of programs to write anywhere in it that they can, plus the ability to run programs from anywhere…

The proof that you can’t get a virus: run everything in a separate sandbox, and the OS CAN’T get any virus, malware, nothing… If applications are so stupid that they mess themselves up because of data they took as input, they are badly written and it’s their fault, but no other application or the OS has to suffer because of that.

To bring up an analogy: applications will always be like organisms; if they are badly written they can mess up. But the OS is more on the level of the laws of physics: no matter what, there is NO virus for the laws of physics. And if it gets a virus, it’s badly designed…
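The sandbox idea in this comment can be sketched with plain OS process isolation. This is only the principle, not a secure implementation: a child process alone is not a real security boundary, and the untrusted snippet below is made up:

```python
# Principle of "run everything in a separate sandbox": execute untrusted
# code in its own process so it cannot corrupt the host process's state.
# NOTE: real sandboxes add syscall filtering (seccomp), namespaces, and
# reduced privileges on top of mere process separation.
import subprocess
import sys

untrusted = "print(sum(range(10)))"  # stand-in for untrusted input

result = subprocess.run(
    [sys.executable, "-c", untrusted],   # separate interpreter process
    capture_output=True, text=True, timeout=5,
)
print(result.stdout.strip())  # the host only sees the child's output
```

If the untrusted code crashes or misbehaves, only the child process dies; the host process's memory and state are untouched, which is exactly the separation the comment is arguing for.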

bf skinner July 11, 2009 7:59 AM

It’s probably a good thing to have a blog to correct the record but how do you link what was reported-said-intended to the original source of mistake-error-malintent?

Eric H July 11, 2009 8:34 AM

I have an OS that will never get a virus. Not “can never” but “will never”. That’s because I have it on a 5-1/4″ floppy disk and refuse to give copies to anyone else. Trivial solution? Sure.

Creoe July 11, 2009 9:16 AM

If the OS really is just a browser that runs web applications, is it possible to get a virus, since nothing really installs or runs on the local machine other than the browser?

a July 11, 2009 9:29 AM

Bruce,

Maybe I’m just really picky (especially since the other 54 comments don’t appear to mention this) but I’m wondering why you would answer questions about a claim without actually reading the claim for yourself. We all know reporters get things amazingly wrong, and we know that reporters are people and some of those people will be pushing agendas. Thus, it would seem appropriate to distrust the claim “repeated” to you by the reporter, and investigate the claim for yourself.

Just wonderin’.

Bruce Schneier July 11, 2009 9:52 AM

“I’m wondering why you would answer questions about a claim without actually reading the claim for yourself.”

It’s a good question. Basically, that’s not the way reporters work. They call me, and if I want to get in a story I have to comment. I try to limit my comments to what I know is going on, and if I don’t know the details of something I try to read up on it and call them back. But what sometimes happens is that we have a general conversation about what’s being said.

And — honestly — reporters aren’t looking for detailed assessments of the details. If there’s another national ID story, for example, they’d rather have me trot out my generic talking points than explain the minutiae of the particular proposal.

In the Google reporter incident, it was easy. The reporter said something about Chrome being virus-free, and I knew that was a mathematical impossibility. So that’s what I commented on. Yes, it would have been better if I could have said, “you know, that’s not really what Google is saying here.” But I didn’t.

Clive Robinson July 11, 2009 10:28 AM

@ Les Stroud,

“then the user doesn’t have to ‘worry’ about viruses/malware because the simple practice of rebooting fixes the problem.”

Ah no, some systems have areas of memory that are not reinitialised on just a reboot but only on a full power cycle.

Also there is the issue of what happens after infection of a system until the owner does reboot / power-cycle it.

There are many systems that do not get rebooted / power-cycled for extended periods of months or even years.

Further,

“If the OS is in firmware and there are not documents stored locally”

Even if the documents are not stored locally, if the computer can access them then so can the virus.

Also, even if an OS is in firmware, this does not stop virus infection if there is semi-permanent memory around that can be changed (hard disk / flash etc).

The most common application run on a PC prior to the web browser was the word processor.

As was shown with MS Word, it is possible to write a virus that will infect a user’s documents or other files. The reason is that Word allowed macros, which are effectively “programmable code”.

So to prevent infection, not only must the OS be virus-proof, so must all the application programs. And this is a problem, because to be useful many classes of programs need to be augmented, either directly or by allowing the data they work with to also be programmable code.
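The “data that is also code” point above can be sketched with a made-up document format (the field names and the `payload()` string are purely illustrative, not any real macro system):

```python
# Why "data that can be code" is a viral vector: a document format that
# carries an executable macro (all names here are invented).
doc = {"text": "Quarterly report", "macro": "payload()"}

def open_document(doc, run_macros=True):
    """Open a document; optionally 'run' any embedded macro."""
    opened = [doc["text"]]
    if run_macros and doc.get("macro"):
        opened.append("EXECUTED: " + doc["macro"])  # data became code
    return opened

# With macros enabled, opening a document runs attacker-chosen code:
assert open_document(doc) == ["Quarterly report", "EXECUTED: payload()"]
# Disabling macros closes the vector, but also removes the very feature
# that made the application useful in the first place:
assert open_document(doc, run_macros=False) == ["Quarterly report"]
```

The trade-off in the last two lines is the one the comment describes: the augmentation that makes the program useful is the same mechanism that makes its data a viral vector.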

What Google has done with the Chrome browser is to recognise that it does not matter how secure your OS is if it trusts a user application which then effectively behaves like an insecure OS (which unfortunately most browsers do).

What it appears Google is trying to do is make the browser more secure in a number of ways.

However, even if they get five nines (99.999%) of the way there, it will not be 100%, which is the point Bruce makes.

And because of this, all systems that allow data to be code will be vulnerable at some point for a period of time. And it is this time window that experience has shown will allow the virus to propagate from system to system. In the case of the Morris worm, simply disconnecting a system from the network and restarting it cleared the worm out of it. Therefore, if all the infected systems had been disconnected and restarted, the worm would have died, but this is logistically difficult to achieve.

BFuniv Rector July 11, 2009 11:24 AM

I think we can agree; as long as there are networks, no data on them will be fully secure. Any statement of infallibility, even third party misquotes, may draw attention like red flags are supposed to excite bulls.

This could get interesting.

Anyfish July 11, 2009 12:21 PM

Hmm… making a secure and safe operating system would start with an object-capability security foundation such as the KeyKOS descendants Coyotos or CapROS (I prefer the latter), and then build up from there, taking security seriously all through the design and development process of all applications.

Please note this is no “silver bullet”, but it is a start.

For more information see:

http://www.capros.org/
http://wiki.erights.org/
#erights on freenode

Brandioch Conner July 11, 2009 1:07 PM

@Clive Robinson
“Is trivialy disprovable.”

Then it should be trivial for you to disprove it.

“For any given virus to be removable it has to be detectable.”

Incorrect. It is far easier to identify the known good code and simply quarantine everything else. But that is remediation and validation.

And then you go on about mathematical proofs and such.

You can argue theory all you want. I’ll stick to what is demonstrably achievable.

Alex July 11, 2009 2:07 PM

@ Clive Robinson:

Thanks for the link, but it’s not the thesis; it’s just an abstract of it.

Anyway, “limited transitivity systems” are mentioned there as “systems with potential for protection from a viral attack”, with no definitions given.

Am I guessing right that an example of such a system is GNU/Linux with SELinux enabled?

Maybe that’s why there are no known real-life viruses for GNU/Linux so far?

I’ve heard childish explanations like “it’s not popular enough”, but that’s obviously idiotic: HPC, embedded systems, servers… GNU/Linux surpassed Windows everywhere except the gamer’s desktop a long time ago.

Clive Robinson July 11, 2009 3:14 PM

@ Brandioch Conner,

“Incorrect. It is far easier to identify the known good code and simply quarantine everything else. But that is remediation and validation.”

Oh dear, oh dear, oh dear.

First off, read what I said carefully:

“For any given virus to be removable it has to be detectable.”

What you are saying is: remove any code (good or bad) that is not part of the recognised code base (by whatever method it is recognised).

What I am saying is that to ONLY remove an instance of bad code you have TO BE ABLE to identify it as bad.

And that is the real issue: IDENTIFICATION, good or bad.

Show how you can scan any item of non-trivial code and provably show it is unquestionably good, not only by itself but in interaction with other code.

You can’t, any more than you can make a scanner to do the opposite.

It is exactly the same problem as the Church-Turing “halting problem”: there is no piece of code that can scan another piece of non-trivial code and definitely answer YES or NO to the question “Is this malware?”. You cannot do it.

Which means that the method you are proposing is not reliable; it has to use some other method such as a CRC, a signature or whatever. None of these methods testifies to the goodness or badness of the code, only to the fact that somebody else believes it to be good and has put their signature to it (remember, the original MS Word macro virus came on an MS-certified CD…).

As I said in my previous post, I don’t have an argument with the method you suggest; it is probably the only engineering or practical solution. What I have difficulty with is the way you express it, which is the same issue between Bruce and Google.

And there is an obvious problem with the method, which is one of time.

There is a time window between a system being infected and the bad code being removed. In that window it can go on to infect other systems, or activate its payload, or both.

This issue was seen with the Morris worm years ago and has been seen ever since.

One of the things Fred Cohen and others have discussed is good-vs-bad virus use, and it is a subject that keeps coming up. As far as I can remember, researchers at AT&T originally suggested using virus/worm propagation methods to ensure patches got deployed to all systems that could be affected by not being patched.

The problem with this is that whatever method you use, you cannot 100% stop somebody putting out bad code that passes the checks as good code.

Clive Robinson July 11, 2009 4:21 PM

@ Alex,

There is a lot more there than “part 1”; the problem is I can’t find the index to it (blame Fred Cohen, it’s his site 😉.

Try incrementing the number; a Google search throws up other bits of it in the same server directory.

Alternatively, drop Fred an e-mail and ask him.

Brandioch Conner July 11, 2009 6:41 PM

@Clive Robinson
“The problem with this is that whatever method you use, you cannot 100% stop somebody putting out bad code that passes the checks as good code.”

Who cares if vendor X puts out “bad code” if there is no way for it to get onto my machine?

HumHo July 11, 2009 6:47 PM

I wouldn’t recommend criticizing Google or their statements. Google is like a baby, or a religion, and if you criticize them you will get a DDoS response in a mass of Google-positive statements from their fan club.

Besides, why do you call the response to your statement “accurate”?

It was not accurate to write something like this:

Google says that users “won’t have to deal with viruses,” and Schneier is noting that it’s simply not possible to create an OS that can’t be taken down by malware.

In fact, we didn’t take Google’s statement to mean that Chrome OS couldn’t get a virus EVER; we just figured they meant it was a lot harder to get one on their new OS – didn’t you?

The reason the above is not accurate is obviously that saying “users won’t have to deal with viruses” is very different from “Chrome OS couldn’t get a virus EVER”.

SantaClaus July 11, 2009 6:56 PM

And why would Bruce’s statement become an issue? Most likely only because he used the term “idiotic” about a statement made by Google.

What is idiotic here is people’s reaction to Bruce’s statement.

Besides, the problem with Google’s OS is not whether viruses can affect the PC (meaning the user’s hardware); it is whether viruses can affect the user’s OS located within the cloud. That OS can be affected just the same as any OS. Of course Google might have, e.g., some sort of automatic backup processes in place, but time will show how effective these are.

Beyond that, there is the issue that having your documents and files on Google’s servers means having your documents and files on someone else’s server. For me my privacy is important, so that is something I would not do.

Clive Robinson July 11, 2009 7:51 PM

@ Brandioch Conner,

You appear to be in an argumentative mood, which is unfortunate; further, you have come across as more than somewhat insulting.

I tactfully suggested that there was something wrong with some of the statements in your original post. Specifically,

“All that it takes to make an OS “immune” to viruses is for the infection rate to fall below the removal rate.

It’s computer SCIENCE, not magic.”

Of the first I said it was trivially disprovable, and instead of going back and looking at what you had written you said,

“Then it should be trivial for you to disprove it.”

So, as you have asked for it, here you are.

I asked for clarification of your meaning of “immune” and you decided not to bother.

From my copy of Collins,

Immune to,

resistant to, free from, protected from, safe from, not open to, spared from, secure against, invulnerable to, insusceptible to, unaffected by, not affected by.

Which of these do you think is most fitting to your meaning of ‘”immune” to’?

However, that is a matter of some triviality compared to,

“the infection rate to fall below the removal rate”

A simple example,

If a person puts three marbles in your hand, how many can I remove?

You claim that the removal rate can be greater than the infection rate…

If you really believe this, then you have lost more than just the marbles you were given.

It is akin to pulling rabbits out of an empty hat, which I’m sure most people would agree is the opposite of what you say:

“It’s computer SCIENCE, not magic.”

However, with regard to your last post, where you said,

“Who cares if vendor X puts out “bad code” if there is no way for it to get onto my machine?”

in reply to my earlier statement,

“The problem with this is that whatever method you use, you cannot 100% stop somebody putting out bad code that passes the checks as good code.”

Unless you are writing your own OS and applications, you are reliant on others.

Which means your “trusted code base” could come from “vendor X”. As I pointed out, the original MS Word macro virus was on a CD from MS, supposedly certified as OK…

What illusory argument would you like to conjure up next?

yt July 12, 2009 1:40 AM

@Alex “Maybe that’s why there are no known real-life viruses for GNU/Linux so far?

I’ve heard childish explanations like ‘it’s not popular enough’, but that’s obviously idiotic – HPC, embedded systems, servers… GNU/Linux surpassed Windows everywhere except the gamer’s desktop a long time ago.”

Perhaps this is a case of correlation being confused with causation. What if the type of person who is likely to use Linux is just also more likely to be aware of security issues, and more likely to take steps to secure their computer/use it safely? Installing and running Linux has a higher learning curve than plugging in a Windows machine out of the box. You have to know Linux even exists before you can install it, and you have to know WHY you would rather use Linux than Windows before you decide to go to the effort of running it. I would guess that the average Linux user is generally more computer literate than the average Windows user.

Clive Robinson July 12, 2009 4:03 AM

@ yt,

“What if the type of person who is likely to use Linux is just also more likely to be aware of security issues, and more likely to take steps to secure their computer/use it safely?”

As a hypothesis it is quite reasonable, but until relatively recently not testable…

However, the advent of the low-end netbook, with Linux as a “handbag computer” (let’s face it, the Acer Aspire One is cute enough to beat Apple at being a fashion accessory), has moved Linux into a select group of “out of the box” users.

It will be interesting to see if viruses are developed for these machines and if they get infected.

Better still, as MS’s nose was put out of joint, we now also have a stripped-down Windows XP on the same hardware to compare against…

Harry Johnston July 12, 2009 6:10 PM

@Alex “Maybe that’s why there are no known real-life viruses for GNU/Linux so far? I’ve heard childish explanations like ‘it’s not popular enough’, but that’s obviously idiotic – HPC, embedded systems, servers… GNU/Linux surpassed Windows everywhere except the gamer’s desktop a long time ago.”

Except aren’t there a lot more gamers’ desktops than servers? And they’re more likely to have interesting data like credit card numbers, too. Nowadays, it seems that when a server is compromised it is as likely as not in order to serve up viruses to desktop machines.

Brandioch Conner July 12, 2009 8:20 PM

@Clive Robinson
“You appear to be in an argumentative mood which is unfortunate, further you have come across as more than somewhat insulting.”

And you follow that with various ramblings.

Try to stay on topic, okay? This isn’t about marbles.

It is a simple fact that if the infection rate falls below the removal rate, that virus will “die” in “the wild”.

It is not magic. It is computer SCIENCE.

The “viruses” in this case are just computer code. There needs to be some means of getting that code onto a computer for that computer to become “infected”.

Preventing that is simple.

If you want to claim otherwise, you may demonstrate your claim by infecting my computer. Go ahead.

Here, I’ll help you out. What you are arguing is NOT immunity to viruses but immunity to any cracking. Viruses are a sub-set of cracking.

For a simple example to demonstrate that for you, being infected by a “worm” is impossible if you do not have open ports. (Exception made for a crack involving the TCP/IP stack.)

Go ahead. I’m running the latest version of Ubuntu. Stock. No open ports. Get a worm or a virus on my machine.
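Brandioch's “no open ports” point can be sketched with a local socket probe (the loopback demo below is illustrative only, and says nothing about other infection vectors such as the TCP/IP-stack exception he notes):

```python
# A worm that spreads by connecting to a listening service needs an
# open port to connect to. Probe a loopback port with and without a
# listener present (the port number is chosen by the OS; demo only).
import socket

def port_open(host, port, timeout=0.5):
    """True if a TCP connection to (host, port) succeeds."""
    with socket.socket() as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # let the OS pick a free port
port = srv.getsockname()[1]
srv.listen(1)
assert port_open("127.0.0.1", port)      # service listening: reachable

srv.close()
assert not port_open("127.0.0.1", port)  # no listener: nothing to attack
```

With no listening socket, a network-scanning worm has no service to exploit, which is the narrow sense in which the “no open ports” claim holds.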

Clive Robinson July 13, 2009 12:02 AM

@ Brandioch Conner,

Are you reading what you are writing?

“It is a simple fact that if the infection rate falls below the removal rate”

It is not possible for the infection rate to fall below the removal rate.

The removal rate can rise to meet the infection rate; it cannot go above it.

You cannot remove what is not there.

If you feel you can, please explain how.

Brandioch Conner July 13, 2009 12:29 AM

@Clive Robinson
“It is not possible for the infection rate to fall below the removal rate.”

Again, this is why the term “virus” is misleading as some people tend to, incorrectly, think in biological terms about computer code.

When more machines are cleaned in a 24 hour period than are infected in that 24 hour period, and that same pattern continues, the virus will “die” in “the wild”.

So part of ending the virus threat is making it easier to remove the virus than it is to become infected.

Or are you now going to argue about the “definition” of “rate”?
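The rate argument being debated here can be made concrete with a toy epidemic model (all parameters are invented). Note that it models Brandioch's claim as stated; it does not settle the disagreement about whether removal can in practice outpace infection:

```python
# Toy model: each day some fraction of infected machines infect others
# and some fraction get cleaned. If cleaning outpaces infection, the
# infected population shrinks toward zero; otherwise it grows.

def simulate(infected, infect_rate, removal_rate, days):
    for _ in range(days):
        new_infections = infected * infect_rate
        cleaned = infected * removal_rate
        infected = max(0.0, infected + new_infections - cleaned)
    return infected

# Removal above infection: the virus "dies" in "the wild".
assert simulate(1000, infect_rate=0.1, removal_rate=0.3, days=60) < 1

# Infection above removal: the outbreak keeps growing.
assert simulate(1000, infect_rate=0.3, removal_rate=0.1, days=60) > 1000
```

In this simple model the only thing that matters is the sign of (infection rate minus removal rate), which is exactly the "dies in the wild" condition stated above.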

Henk July 13, 2009 12:37 AM

Mary Jo Foley has a nice article on ZDNet where she goes into the Chrome OS.
According to her post, if Chrome will not allow the installation of applications (thus not being an OS as we know it), that will make it a lot harder for a virus to get in there. Not impossible, but certainly harder.

Guest July 13, 2009 3:58 AM

“Virus-free” does not mean “virus impossible”. The word “free” has a wide sense in English, especially in advertising. In marketing this is a normal situation.

BF Skinner July 13, 2009 5:09 AM

Virus Impossible was a GREAT movie. Tom Cruise rocked! Can’t wait for VIII

Charles July 13, 2009 9:43 AM

If users won’t have to worry about viruses and updates, does this mean that Google will be doing all the worrying and most likely patching and purging without worrying the users?

As an aside, by being in complete control of the toolchain used to build their OS, Google can, and hopefully will, specify a number of features that have been available for some time but are not used pervasively due to conflicts with third-party software. These include kernel-level memory address randomisation and buffer-overflow detection, the combination making buffer overflows much harder, if not impossible, to initiate, and making it very difficult to consistently exploit an overflow. As very few applications other than Chrome itself will need to run, Google need not worry about breaking compatibility with other applications.

Rob Lewis July 13, 2009 12:17 PM

@Bateau,

You are more correct than wrong with your assertion that the principles of TCSEC (the Orange Book) are valid. Even the B3 level, with manually verifiable reference monitors, would be valid with other protections, i.e. something that would prevent privilege escalation. One way to do that is separation of root from system.

@Nora,

Tell swashbucker to find a way to mark memory pages read only.

Some people get the difference between AV filtering and behavior enforcement. Without enforcement, there is no security. With it, well..?

kangaroo July 13, 2009 1:36 PM

@Peter: For example, if I were to design a trivial OS that didn’t allow the user to run any outside code at all, wouldn’t this be virus-proof?

I don’t think folks are getting that this is a Gödel problem, like halting or completeness.

Yes, you can reduce a system so that it’s not powerful, so that it can only do a limited subset of operations. But if it can do any general calculation, you will have at least a theoretical problem.

So, I disallow you to “download” software. But you can download a document, which is theoretically a possible viral vector, since it could lead you to copy some transform of that document to another computer.

So we disallow you to “download” documents. But you can still enter documents by hand? Well, that could be a viral vector… and so on.

You must reduce the system from a general computer to basically just an old-fashioned calculator that can only perform operations that exclude recursion and general loops. Otherwise, you are guaranteed to have bugs, and therefore malware.

There is no escape from Gödel, short of calculating every possible state of the computer and certifying every one of them as safe (as you can do with small digital circuits).
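The closing remark, that small enough systems can be verified by enumerating every state, can be sketched with a toy state machine (the machine is invented; real hardware verification tools do this symbolically rather than by brute force):

```python
# Exhaustive state enumeration: for a tiny machine, compute every
# reachable state so each one can be audited as "safe" individually.
def step(state, bit):
    return (state * 2 + bit) % 16   # toy 4-bit shift register

reachable = {0}
frontier = {0}
while frontier:
    frontier = {step(s, b) for s in frontier for b in (0, 1)} - reachable
    reachable |= frontier

# All 16 states are reachable and now individually known:
assert reachable == set(range(16))
```

The catch, as the comment says, is that this only scales to systems far smaller than a general-purpose computer, whose state space is astronomically large.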

Erica July 13, 2009 2:58 PM

What is needed is an operating environment that makes viruses economically non-viable.

Either they cost too much to write, perhaps needing too much expertise to pull off successfully.

Or they have such a short half-life (the time to permanently reduce infections to half of the previous peak) that the payback for the malware distributors is too low to be worth pursuing.

Clive Robinson July 13, 2009 3:44 PM

@ Brandioch Conner,

I asked you to read what I had said carefully; it is becoming obvious that you have, at best, been selective in what you have chosen to show you have read.

In my original post to you I said,

“Although I have a great deal of agreement with your viewpoint, as it is probably the engineering/practical way the issue will be resolved, not just now but in the future, the way you are saying it is wrong.”

You had chosen to make a short series of interconnected statements about,

‘All that it takes to make an OS “immune” to viruses’

and that what you were saying was,

“computer SCIENCE, not magic.”

Because I had looked at what you said and noted that you had put double quotes around “immune”, I assumed that you had highlighted it in that manner to indicate a different-from-normal meaning of immune that you had not mentioned.

This is because a number of people make air quotes with their fingers when talking, to indicate either sarcasm or negation, as in “he’s a real winner” actually meaning the opposite; usually the context of the statement or tone of voice clarifies their actual meaning, and if not, a raised eyebrow or puzzled look usually gets clarification. Unfortunately such meanings do not travel well on the Internet, and thus I assumed there might have been more to it than you had communicated.

I even asked you to clarify your actual usage; however, you chose to ignore the request. Even when I presented you with a list of common meanings you still chose to ignore it. Am I therefore to assume you are happy with any and all of them?

And that I can therefore use whichever I choose in my arguments?

This pattern of not answering questions, and moving on to other untouched aspects without addressing those in question, appears indicative of your mode of reasoning on this page. It is also often found in people who use expressions like “50% of infant fatalities are caused by co-sleeping”: it makes a good sound bite, and they often think it gives them either gravitas or authority, but as a statement of fact it usually does not hold up under any real examination.

In fact you do just this with your

“It’s computer SCIENCE, not magic.”

which, when it is pointed out to you that this is not what mathematics and computer science have established, you dismiss with,

“And then you go on about mathematical proofs and such.

You can argue theory all you want. I’ll stick to what is demonstratively achievable.”

Further, if you had bothered to read the intro to this blog page properly, you would find that “what is and is not computer science theory” is exactly what this particular page is about. And what you so casually dismiss in the way you do also dismisses the opinion of our host.

Such casual dismissal and ignoring of questions is, I would say, indicative of your general attitude on this page. I had actually previously pointed out the problems with the system you were holding up as a shining example of your level of “proof”, but you have chosen to ignore them.

Because of your behaviour of ignoring questions and dismissing actual computer science, when responding to your,

“Then it should be trivial for you to disprove it.”

I chose to do it a little bit at a time to limit that behaviour. And, less than amazingly, you have continued with,

“this is why the term “virus” is misleading”

before saying,

‘When more machines are cleaned in a 24 hour period than are infected in that 24 hour period, and that same pattern continues, the virus will “die” in “the wild”.’

You make another sound-bite argument.

Your argument as presented fails, irrespective of the time-limited rates you give, as long as one machine still has the virus code on it.

Because your chosen solution (reset to your “good code” base) will not prevent reinfection.

And a brief look at history tells us that 100% eradication of a virus does not happen; frequently it returns in one form or another. Further, it is known that such code is kept in a number of collections or databases by various people and made available to others for further use. Are you going to argue that these databases are not within your meaning of “in the wild”?

Which means that the infection rate, from the start of the infection, will remain above the removal rate.

If, however, you wish to change your chosen model to allow modification to prevent re-infection, you are saying that the model you originally presented was incomplete, or open to interpretation by others.

If so, it is possibly the reason you said,

“So part of ending the virus threat is making it easier to remove the virus than it is to become infected.”

in your last post.

Now, I can go on disproving the assumptions in your original chosen solution if you wish.

But to be fair to others, particularly this blog’s host, and although I don’t think you are going to change the behaviour you have so far exhibited, I really must ask you to do as you have asked me to do, which is,

“Try to stay on topic, okay?”

and stay within your original arguments, and not indulge in

“various ramblings”

to avoid answering questions about the meanings or failings of your model or argument.

Also, to be honest, as I will be discharged from hospital in the morning I will have a lot of more important things to do, so if you wish to continue I would much rather further discussion were placed on a firm footing than have you constantly “moving the goal posts”.

Brandioch Conner July 13, 2009 3:56 PM

@Clive Robinson
“I asked you to read what I had said carefully, it is becoming obvious that you have at best been selective in what you have chosen to show you have read.”

You are wrong. And you still have been unable to put a virus on my Ubuntu box. Complain all you want, but facts are facts.

Clive Robinson July 14, 2009 1:02 AM

@ Brandioch Conner,

“You are wrong. And you still have been unable to put a virus on my Ubuntu box. Complain all you want, but facts are facts.”

If your only answer is, effectively,

“You are wrong because you have not committed a crime or act of terrorism (1) in response to my crime (2)”

then not only is that very far off topic, it is entirely unethical.

I do not know what jurisdiction you live in, but in the UK we have a number of laws put in place since Robert Schifreen and Stephen Gold were needlessly dragged through the courts.

1) These laws exist to stop misuse of ICT by others (the Computer Misuse Act 1990), with some very severe penalties (five years) even for what might be considered ethical self-protection. Further, such misuse is now effectively treated as terrorism in the UK (Terrorism Act 2000):

http://www.out-law.com/page-1409

Having been involved, through both Schifreen & Gold and others, with what went on at the time with both BT’s Prestel and Gold services, it was only natural caution that stopped me ending up on the wrong end of fraud/impersonation charges as well.

The UK also has other laws to deal with people who decide to,

2) incite others to commit a crime, again with severe penalties; and again in the UK this can be (and has been) used under terrorism legislation for as little as writing a poem.

Also, even without the legal problems of being in the UK, you did not communicate which machine at which IP address, or what you would accept as proof, so your statement is at best another “sound bite” and more than a little silly, especially as you knew (if you had read my post) that I am currently in hospital and it is therefore very unlikely that I would have access to the resources needed to carry out your challenge.

As I said in my previous post, from today I will have better things to do with my time. If you or others wish me to point out the problems with your proposed system then I will do so, but again I ask of you that which you asked of me:

“Try to stay on topic, okay?”

Greg July 14, 2009 5:32 AM

@Brandioch Conner

Facts are facts. It’s impossible for you to prove I haven’t put a virus on your computer…

Brandioch Conner July 14, 2009 10:26 AM

@Clive Robinson
“If your answer only answer is effectivly,

“You are wrong because you have not committed a crime or act of terrorism (1) in response to my crime (2)””

And, once again, I’ll tell you to try to stay on topic.

You are wrong.
Facts are facts.
And trying to change the topic will not alter either of those.

Brandioch Conner July 14, 2009 1:39 PM

@Greg
“Facts are facts. Its impossible for you to prove I haven’t put a virus on your computer…”

No it is not. It is very simple to do. All I have to do is demonstrate that every file on the file system was released by a vendor and intentionally installed.

And that is accomplished with a “Live CD” and a lot of checksums.
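For what it’s worth, the “Live CD and a lot of checksums” approach can be sketched in a few lines. This is only an illustration: the manifest format and function names are invented here, and in practice the checker must boot from read-only media so that it cannot itself be subverted (real-world analogues include debsums, rpm -V, and Tripwire).

```python
# Illustrative sketch of auditing a file tree against a known-good
# manifest of SHA-256 hashes (manifest format and names are invented).
import hashlib
import os

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(root, manifest):
    """Return (modified, unexpected): files whose hash changed, and
    files present on disk but absent from the vendor manifest."""
    modified, unexpected = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            if rel not in manifest:
                unexpected.append(rel)
            elif manifest[rel] != sha256_of(full):
                modified.append(rel)
    return modified, unexpected
```

Anything that hashes differently from the manifest, or is absent from it, gets flagged; note that this only covers data that lives in files.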

Clive Robinson July 14, 2009 7:42 PM

@ Greg,

I really do not know what Brandioch considers a “fact” or why.

He makes such curious comments about things and claims them as “computer science” whilst also disregarding fundamental tenets of computer science with,

“You can argue theory all you want. I’ll stick to what is demonstratively achievable”

He clearly does not understand the difference between 100% and !100%

And he appears to regard putting “Facts are facts.” as some incantation that magically lends credence to his arguments.

For instance in his reply to you he says the following,

“All I have to do is demonstrate that every file on the file system was released by a vendor and intentionally installed.”

How quaint: he only mentions a “file system” and ignores all the other mutable memory on a modern system, such as the flash ROM that holds the BIOS, other flash ROM in various IO devices, and, amazingly, even the flash ROM holding the microcode in the CPU…

Further, he appears to be under the assumption that various Linux coders produce perfect bug-free code with absolutely no coding, design, or specification errors (I’ve yet to see even moderately complex system/kernel software that fulfils that lot). I wonder if it has crossed his mind to check the revisions of his software against various databases of vulnerabilities.

Again, there is no mention of other files, such as the all-important “user data” that most people use a computer for (I guess he just looks at the web with JavaScript turned off).

Oh, and he neglects to factor in what other damage malware on his system will do to other systems in the time window between infection and the point at which his checksums might alert him to the presence of malware.

Also, he appears a little hazy on the difference between a program in memory and information for a program in memory (i.e. none: it’s all bits and bytes, and it’s the metadata that counts).

I’m fairly certain you could add a few more holes to his ‘bootifull’ system design.

But there is a minor problem in that he has made a challenge (albeit one illegal to take up from the UK) but has neglected to provide details of where his ‘bootifull’ system might be found, or, importantly, provided any evidence it exists anywhere other than his fervent imagination.

But hey there it is…

Brandioch Conner July 15, 2009 10:08 AM

@Clive Robinson
“I realy do not know what Brandioch considers a “fact” or why.”

You have made claims about certain actions being “trivial”. I’ve given you criteria that you should be able to meet if you were correct about them being “trivial”.

You have not been able to do so. That is a fact. Argue that all you want, but you have not been able to demonstrate your claims.

Nick July 15, 2009 11:14 AM

To those talking about the Orange Book: the old high-assurance designs have been superseded by the Separation Kernel paradigm (SKPP). Three highly secure/reliable systems emerged: Integrity PC, LynxSecure, and OKL4 Hypercell. INTEGRITY-178B got EAL6+ from NSA, and seL4 (secure L4) is almost formally verified down to code.

The problem with modern OSs is exposure. There’s a ton of code running in the kernel and POLA isn’t enforced at all. The SKPP model puts only separation/communication functions in kernel mode and runs everything else deprivileged. Unlike Mach, IPC on these systems is incredibly fast. The Nizza, Perseus, and Micro-SINA architectures showed how this can be used to build highly secure systems such as VPNs and digital signatures for email (with no keyloggers).

I really don’t care for Chrome OS. Personally, I’d like to see investments in trusted device drivers for desktops, laptops, and servers with long lifespans. Put an evaluated separation kernel on top of that, securely decompose applications into separate partitions, and do all untrusted stuff in VMs (Windows or Linux). This is how Padded Cell works. A minimal trusted computing base, POLA, and isolation of processes should be used in all systems labeled secure. Chrome OS will likely take a Linux-like approach, which is unfortunate.

One can only build a secure system on a secure foundation. INTEGRITY Padded Cell or OK Linux are more like what needs to be done. IOMMUs let us use untrusted drivers. If we can get the bugs (100+) out of the Intel processors, that would be nice. The VAMP design was formally verified, so maybe use it. Add in functionality like VIA’s PadLock. Then we can use all of this, along with an SDL-like process, to build damn-near bulletproof embedded systems. Desktops would benefit also. With SKPP, it’s partially already been done.

Clive Robinson July 15, 2009 3:39 PM

@ Nick

I had a look at SKPP a while ago. Although it is actually a very sensible way of doing things, it has a major Achilles heel: “legacy kernels”.

(For those that don’t know, SKPP can simplistically be seen as a secure framework in which independent entities can run and be audited etc.)

And the legacy kernel issue is going to hold it back in many ways, not simply because it’s moving the problem (in the same way that IE moved the problem with MS kernels) but because legacy kernels are triangular pegs in square holes: they just don’t fit.

However using legacy kernels is seductive to budget holders due to the “reuse as much…” principle.

SKPP really should be used with very lightweight kernels that are designed for a flush fit into SKPP.

But of course you have to “re-invent the wheel” as far as “apps” are concerned.

The whole thing comes back to security-v-efficiency: the more efficient you try to be in any single domain, the less secure you are. SKPP is heavy on (getting cheaper by the day) resources; legacy kernels in general were designed to get maximum “specmanship” or “best bang for the buck”.

Using SKPP with MS OSs, Linux, or many other kernels is kind of like teaching an elephant to ride a unicycle. It’s not helped by the proprietary “blindfold”, but being able to see (the source) is not going to make the elephant an acrobat.

One interesting thing was MILS, in that it shows that competing vendors can (if they wish) get together and produce secure components independently that can be bolted together to make a secure whole (it still has the weakest-link issue though).

I’m fully with you on “secure device drivers”: not only do I want to see them, I also want to see them from a distance.

One problem with all “formal proofs” on OSs is the “elephant in the room” of the effect of system interrupts on the “atomic requirement” of complex activities.

IO drivers need not only to be secure, they need their own segregated area to run in where it is physically impossible for them to have direct access to any kernel/app-space resources. The communication should be via a properly mediated and secure system that cannot break the atomic nature of complex activities.

You can have all the formal proofs in the world, but if IO can do an “end run” and potentially change an action that needs to be atomic, then it is most definitely game over for security before the players even hit the field.

Nick July 15, 2009 7:28 PM

Thanks for the reply, Clive. I always look forward to reading yours. I agree that legacy kernels are a key issue. The only way I see they can work right is using virtualization on a platform like vPro. The drivers can be isolated properly by the IOMMU and INTEGRITY RTOS, in particular. With Intel’s VT the legacy OS’s can make all the use of kernel mode they want, but get no access to the real resources. Those need calls to the IPC of the separation kernel, which would run in hypervisor mode. If the number of partition switches can be minimized, then performance is often acceptable. A trusted path for password/key entry alone makes such a scheme worthwhile to me. Having the best crypto schemes running on a keylogger-bait OS isn’t comforting. 🙁

I agree that legacy can still kill the security benefits even if the above are implemented, the reason being that many apps have to be rewritten to utilize the benefits. I thought of rewriting key libraries or apps, for instance porting GPG so that at least your private key is protected. I figure that any security or crypto functionality should run outside the legacy OS to prevent sabotage. An example is this setup I made: legacy OS; firewall; network stack. A pretend network stack or Ethernet device could be used to transparently insert a firewall, VPN, etc. between the legacy OS and the network card. With a mandatory inter-partition communication policy enforcing this, the security software could not be circumvented w/out exotic methods, and the VPN’s secrets (i.e. keys) could be protected. Those are some example situations where I think separation kernels shine. Many embedded designs could benefit as well if apps were designed for it from the ground up.
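A mandatory inter-partition communication policy of the kind described here can be illustrated with a toy model. Everything below is invented for illustration (the partition names, the route table, the class): a real separation kernel enforces such a policy beneath the OS, not in application code.

```python
# Toy model of a mandatory inter-partition communication policy: the
# legacy OS can reach the NIC only via the firewall partition.
# Partition names and the route table are invented for illustration.
ROUTES = {
    ("legacy_os", "firewall"),
    ("firewall", "legacy_os"),
    ("firewall", "nic"),
    ("nic", "firewall"),
}

class SeparationKernel:
    def __init__(self, routes):
        self.routes = routes
        self.delivered = []

    def send(self, src, dst, msg):
        # Every message is mediated; unauthorized routes fail closed.
        if (src, dst) not in self.routes:
            raise PermissionError(f"route {src} -> {dst} not permitted")
        self.delivered.append((src, dst, msg))
```

With such a route table there is simply no channel by which the legacy OS can bypass the firewall partition, which is the point of the design.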

The formal verification is definitely hit and miss. Even the Common Criteria protection profiles are VERY specific about environment and assumptions. So, to use formal methods properly, one needs to: pick the right method(s) for the job; ensure the real-world environment is modeled; specifically consider every likely type of attack; and model the implementation language and hardware correctly. Most don’t. Most fail. I expect more of the same. It’s still useful in some cases, such as seL4 and Peter Gutmann’s crypto library. I thought Peter’s approach was quite innovative.

Links for State-of-the-Art Separation Kernels (MILS)

Integrity Padded Cell: http://www.ghs.com/products/rtos/integrity_pc.html

LynxSecure Embedded Hypervisor: http://www.lynuxworks.com/virtualization/hypervisor.php

Harry Johnston July 15, 2009 7:30 PM

@Clive:

I suspect that IO is best handled in user mode, with each driver having a separate address space. You’d need a mechanism to pass packets between each IO device and the appropriate driver; for example, this could be done by a dedicated IO core on the CPU.

However, unless you have enough cores to assign one per thread, code is still going to need to be interrupted for context changes. This means it will still be up to the programmer to ensure safety whenever multiple threads have to act on the same data, or interact with each other. This is a hard problem!

In my opinion, threads should be in separate address spaces wherever possible, and only interact in restricted ways, but this can be a serious limitation, particularly as the number of available cores increases.

One possible approach which might help a little would be the development of a standard “tool box” for parallel processing, providing well-tested code to allow a set of threads to share data atomically. This may be of interest:

http://www.infoq.com/news/2008/05/click_non_blocking
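The “toolbox” idea, shared state touched only through well-tested operations, might look like this minimal sketch (the class and names are illustrative, and it uses an ordinary lock rather than the non-blocking techniques in the link):

```python
# Minimal "toolbox" sketch: the shared counter is touched only through
# lock-guarded methods, so concurrent increments are never lost.
import threading

class AtomicCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # only toolbox code touches the shared value
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

def hammer(counter, n):
    # Simulates one application thread doing n increments.
    for _ in range(n):
        counter.increment()
```

The application threads never see the raw shared word, which is exactly the restriction argued for above.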

Ideally, only the toolbox code would have access to the shared memory, but this would require a very efficient way of changing context; perhaps the toolbox code could run in a separate ring from the application code, although I’d like to see a more flexible mechanism.

There are also other hardware design requirements for a reasonably safe system – for example, external devices need to be properly segregated from internal ones, and preferably from each other.

Erika Wirfield July 15, 2009 8:53 PM

I do agree that Chrome OS is going to be arguably more secure than the other consumer operating systems; in fact, it has already been proven.

Erika

Clive Robinson July 16, 2009 12:01 AM

@ Harry Johnston,

“I suspect that IO is best handled in user mode, with each driver having a separate address space.”

From the “least privilege” aspect that is fine for ordinary work etc., as it puts the IO at the same privilege level as the application.

However, it still leaves the problems of non-ordinary (privileged) work and interrupts, unless they are disabled whilst kernel code is running.

I guess I could “hedge my bets” and ask for “the best of all ways” (knowing I’ll never get it 😉

However, I’m somewhat “paranoid” about programmers’ ability to code securely. My view is that very few can do it, and it’s usually the weakest link in the chain that breaks the system (or the low-hanging fruit from the attacker’s perspective).

The usual method of correcting such problems (bolt a bit on) is a method pre-Victorian artificers used; the advent of steam resulted in the birth of engineering solutions due to boiler explosions (i.e. collateral damage liability).

I’m thinking that what is actually needed is better hardware support for security; that is, the capable few build it in nearly “sight unseen” so the rest of us do not have to worry and can concentrate on the tasks in hand (much the same reason for the original “Unix way”).

However, the first of these hardware security solutions, the venerable “ring 0” notion, was OK 30-40 years ago, but we are mostly still stuck with it today (I guess the cost of collateral liability is still too low).

As with cars you would not want 40 year old technology to be your sole security measure on the modern “highway” of today.

And it’s not as though we don’t have the silicon “real estate” available. Modern multi-core CPUs in general use only use one or two cores effectively (yeah, I’m aware of the application-lag/vacuum-to-fill-a-void arguments, but in most cases it’s moot).

Also, secure kernels for the likes of SKPP are more likely to arrive via hard RTOS kernels than general-purpose kernels.

This is because they have to “clock their inputs and clock their outputs”. It is the inability of more general-purpose kernels to do this that is why we have so many side-channel issues currently.

There are of course many other aspects to worry about but as has oft been said “Rome was not built in a day” and “the foundations have to be laid first”.

steve July 16, 2009 5:23 PM

Horrible thing to say to you computer scientists, but the problem is that some of you may be deeply embedded in coding or whatever, some may be salesmen, and others are users.

I don’t know what this forum thinks computer science really means, but in my opinion it also encompasses an analysis phase where you speak a language that all understand (exkuz spellz and gramer) so that all understand the problem. This would probably mean speaking in language that a normal, interested computer user of the sort interested in “security” can follow (as the newsletters from Mr Schneier point out, it is probably inaccurate to call them computer vulnerabilities, as many are the same vulnerabilities encountered going down a dark street, going into a war zone, sailing a ship when the mast has fallen, etc.).

If you are speaking a specialised language, a linked reference may be useful to differentiate between those who need to know the gist of the argument and those who need to know the exact semantics.

tonyb July 17, 2009 5:52 PM

@ Brandioch Conner

  1. “You can not demonstrate you can infect my Ubuntu” is the refuge of scoundrels. One might as well claim to have a dietary regime that renders you immune to the Ebola virus, and then say “you have not yet infected me with Ebola” hence you have not disproven my dietary claim. Such arguments reveal an inability to address the topic intellectually.

  2. Many people act as if a computer hosts either “executables” (stuff that conducts processing) or “data” (inert bytes, like that letter to your grandma, that get manipulated by processes). However, in any general purpose computing device, this distinction is not absolute. To a word processor, that shell script is just a text file (like that letter to grandma) but to a shell interpreter, that same script is a machine capable of directing processing.

  3. Granted, if you could wholly trust your computing base (hard), and limit yourself to loading only foreign “non-executable” stuff (text for instance, and ONLY in a context where that text does not itself become interpreted) then you can be reasonably safe from viruses. But you have also largely reduced your system to something very static and less than a general computer. You cannot load new games, new applications, etc. You cannot even allow the existing vendor’s wares to be patched, should the vendor discover flaws (what if the new vendor-delivered codes are infected?)

  4. Lastly, the definition of “infection” may be subtle. To some, only hostile changes that would remain after reboot are really “infection”. But for many systems that remain “up” for days or weeks, the infection of live memory alone, even only the memory of a single (networked) process, is sufficient to cause your applications to raise havoc with other remote systems – and they will not be impressed that simply killing your application makes the problem go away, so you were not really “infected”.
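Tonyb’s second point, that “data” versus “executable” depends entirely on the interpreter, can be demonstrated in a couple of lines (an illustrative Python sketch; the string is arbitrary):

```python
# The same bytes, two interpretations: inert data to one consumer,
# a program to another. The string below is arbitrary.
payload = "result = 2 + 2"

assert len(payload) == 14      # as data: just fourteen characters

namespace = {}
exec(payload, namespace)       # as code: it directs processing
assert namespace["result"] == 4
```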

Brandioch Conner July 18, 2009 9:26 AM

@tonyb
1. Stick to the subject. Argument by analogy is useless.

  2. Irrelevant – they’re all files. And the files can be checked to see whether they were part of a vendor’s package or not.

  3. You have that backwards. Why prevent yourself from installing new executables? All that is required is that those new executables are KNOWN to be installed and are RESTRICTED in where they can be installed.

  4. Again, you’ve missed the point. Every file (stop trying to differentiate between them) should be able to be identified as to its ORIGIN. Where did you get it so that you could install it on your computer? And to remove it later.

Clive Robinson July 18, 2009 11:58 AM

@ Brandioch Conner,

1, you have made a claim that your machine is uninfected.

Well,

1.A, you have not even shown it exists but you claim “facts are facts”

I think Tonyb is being quite nice to you with his analogy.

1.B, You have not defined how you intend to “prove your machine is not infected”

As Tonyb and I have pointed out quite politely, you have not defined your meanings even when asked repeatedly, yet you challenge any definition you think weakens your argument.

2, You have shown no understanding of what others say,

As TonyB has pointed out you appear to have trouble differentiating between user files and what you call “vendor files”.

I likewise have raised this issue with you and you have ignored it I suspect because it is a huge hole in any argument you try to make.

For simplicity I will say that the “vendor files” are those that are generic and issued on a CD/DVD from your chosen vendor.

Likewise, “configuration files” are those created at installation of those vendor files that are specific to your machine (this includes Knoppix-type files created on boot-up).

Likewise, “user files” are those created by the user at any point in time, either on the system or downloaded to the system, that are not “vendor files” or “configuration files”.

You have not stated how often you will run your checksum program,

Or how the checksums will be generated for files

Nor how you will establish which files are good or bad.

Further, you have not stated how your system will work with “user files” that have been opened for creation or modification (i.e. partial updating, deletion, or appending).

I will stop at this point because you show a limited capacity to read, understand, and answer posts longer than a few words.

Brandioch Conner July 18, 2009 1:10 PM

@Clive Robinson
“1, you have made a claim that your machine is uninfected.

Well,

1.A, you have not even shown it exists but you claim “facts are facts””

That’s right, Clive. I’m posting this by whistling into a modem.

You do understand this “Internet” thing, right?

Right?

No? I didn’t think so. But I do think that’s the core of your inability to understand the situation.

I’m talking actual code running on actual machines. You can claim that they don’t exist, but they do. That’s a fact.

Clive Robinson July 18, 2009 4:42 PM

@ Brandioch Conner,

As I predicted with,

“I will stop at this point because you show a limited capacity to read, understand, and answer posts longer than a few words.”

You have failed to prove in any way that the machine you claim to have is free from infection by any reasonable standard, or that it is configured in a manner even vaguely approaching what you claimed earlier.

Further, you appear to believe that because I do not know where your claimed machine is, you can claim that I have failed to infect it, and that to you proves it is infection-proof.

For something to be established as a fact it has to be objectively assessed and survive the assessment of independent third parties. Failing to do the former and preventing the latter is hardly a way to establish your credibility, is it?

You ask,

“You do understand this “Internet” thing, right?”

Well, the easy answer, in your style, is: probably considerably better than most, but not as well as others.

However, that is not a statement of fact, as it is not objective; so perhaps you might establish a measurand by which an objective assessment can be made before you claim,

“No? I didn’t think so. But I do think that’s the core of your inability to understand the situation.”

You specify no independent methodology by which you arrived at your claim.

Which, I’m sorry to tell you, renders it just a meaningless sound bite, which further detracts from your credibility as an objective observer. Which in turn reduces any credibility of your claims as to what is and is not factual by any recognisable definition of the word.

Until you can learn the difference between an unverifiable statement and a verifiable fact, you will find that nobody with any knowledge of the subject will give you any credibility.

Simply stating,

“I’m talking actual code running on actual machines. You can claim that they don’t exist, but they do. That’s a fact.”

In no way establishes that what you claim has a factual basis. All I know is that I have read the words posted under your name on this blog page, nothing more.

You could be using a Linux box that you have configured in a way that you have failed to specify, or you could be sitting in some Internet cafe using a no name generic PC running some version of a Microsoft Operating system.

There is no legal way for me to tell which of the myriad machines directly or indirectly connected to the Internet you are using, because you have failed to provide anything that can be objectively tested and measured.

I will remind you again that the subject of this post is the difference between 100% and !100% effectiveness at preventing a virus infecting one or more machines running Google’s OS.

I welcome an objective and reasoned debate on that subject, but you have to be prepared to have your assumptions tested and stripped back to find what is verifiable within the remit of this blog, computer science, and mathematics.

You have so far not actually answered any of the questions asked of you, and from this I doubt that you will take up the offer of a reasoned debate supported by recognised and established proofs within those fields of endeavour.

The choice is yours: establish your credibility or be held as lacking it.

Over to you.

Brandioch Conner July 18, 2009 4:56 PM

@Clive Robinson
“You have failed to prove in any way that the machine you claim to have is free from infection by any reasonable standard, or that it is configured in a manner even vaguely approaching what you claimed earlier.”

Incorrect. I have accounted for every file on it.

You can claim that such is impossible, but I have done what you claim is impossible.

Clive Robinson July 19, 2009 9:30 AM

@ Brandioch Conner,

“You can claim that such is impossible, but I have done what you claim is impossible.”

No you have not.

As you say,

“I have accounted for every file on it.”

That is in no way the same as proving a file does not contain malware or cannot behave like malware when subjected to selected data input.

If you can not see the difference you need to think quite a bit further than you have.

Oh and go back and read the other comments on this blog page properly.

So over to you.

Brandioch Conner July 19, 2009 9:39 AM

@Clive Robinson
“That is in no way the same as proving a file does not contain malware or cannot behave like malware when subjected to selected data input.”

I’m sure you can figure out how to search through this forum for the word “malware”.

Then you’ll see that I never made the statement that you are now claiming I have not proven.

Again, try to follow the discussion, okay?

Or have I over estimated your capabilities? Yes?

Clive Robinson July 19, 2009 12:34 PM

@ Brandioch Conner,

“Then you’ll see that I never made the statement that you are now claiming I have not proven.”

Nobody is sure what you are claiming. For instance,

“No it is not. It is very simple to do. All I have to do is demonstrate that every file on the file system was released by a vendor and intentionally installed.”

Which does not stop a virus being on your machine (unless, of course, you have some quaint definition of what a virus, or an infection, is).

But then you probably do,

“this is why the term “virus” is misleading”

And many others as well. You have been asked on more than one occasion to define your usage of terms like “infection”, but you have declined to do so.

The only time you actually make a statement it is completely unsupported by any evidence, and you immediately change the argument.

Now produce your definitions and your argument; otherwise, why bother posting to the blog?

Brandioch Conner July 19, 2009 1:59 PM

@Clive Robinson
“Nobody is sure what you are claiming for instance,”

Well I’m so glad that you speak for everyone.

Maybe you need to reconsider whether you do speak for everyone.

ed July 19, 2009 2:30 PM

@ Brandioch Conner
“No it is not. It is very simple to do. All I have to do is demonstrate that every file on the file system was released by a vendor and intentionally installed.”

This assertion ignores the fact that unused blocks (or portions of blocks) on the hard disk can store data persistently. If one of the intentionally installed programs has an exploitable defect that allows arbitrary code execution, then malware can infect the system and write itself to unused blocks on disk, but be completely undetectable by any amount of checksumming of files. It avoids detection because the detectors are only looking in files, not in other storable locations.
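Ed’s point can be made concrete with a toy block-device model (all names, sizes, and the allocation table are invented for illustration): a file-level checksummer hashes only the blocks that belong to files, so a payload parked in unallocated blocks never perturbs any file hash.

```python
# Toy "disk": data written to unallocated blocks is invisible to a
# file-level checksummer. All names and sizes are invented.
import hashlib

BLOCK = 512
disk = bytearray(BLOCK * 8)              # an 8-block toy disk
allocation = {"grandma.txt": [0, 1]}     # file -> blocks it owns

def write_blocks(blocks, data):
    for i, b in enumerate(blocks):
        chunk = data[i * BLOCK:(i + 1) * BLOCK]
        disk[b * BLOCK:b * BLOCK + len(chunk)] = chunk

def file_hashes():
    # Hashes only the blocks that belong to some file.
    out = {}
    for name, blocks in allocation.items():
        h = hashlib.sha256()
        for b in blocks:
            h.update(disk[b * BLOCK:(b + 1) * BLOCK])
        out[name] = h.hexdigest()
    return out
```

Writing a payload into blocks 5-6 leaves every value returned by `file_hashes()` unchanged, which is exactly the objection being made.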

Brandioch Conner July 19, 2009 4:24 PM

@ed
“This assertion ignores the fact that unused blocks (or portions of blocks) on the hard disk can store data persistently.”

Again, this is about whether there are files on the computer that were not intentionally put there by the user.

Moderator July 19, 2009 10:32 PM

Brandioch and Clive, this exchange of views between you has become repetitive and unilluminating. Please let it go.

ed July 20, 2009 12:38 AM

@ Brandioch Conner
“Again, this is about whether there are files on the computer that were not intentionally put there by the user.”

You may think it is about files, but it is really about data. If you believe the only way to store data on a disk is in files, you are sadly mistaken.

Brandioch Conner July 20, 2009 8:22 PM

@ed
“If you believe the only way to store data on a disk is in files, you are sadly mistaken.”

And exactly why would it sadden me?

Way back in the 1980s there were applications that would “wipe” the “blank space” on a hard drive. And those apps still exist.

Overwriting a sector on a disk is very easy.

And that is the reason why I do not care about the method used to store data.
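On a toy block-device model, the free-space wipe described here amounts to zeroing every block that no file owns (the names and sizes are invented for illustration; a real wipe works through the raw device or a filesystem-aware tool):

```python
# Toy version of a free-space wipe: zero every disk block that no file
# owns, destroying anything parked there. Names/sizes are invented.
BLOCK = 512
disk = bytearray(BLOCK * 8)        # an 8-block toy disk
allocation = {"a.txt": [0, 1]}     # file -> blocks it owns

def wipe_free_space():
    owned = {b for blocks in allocation.values() for b in blocks}
    for b in range(len(disk) // BLOCK):
        if b not in owned:
            disk[b * BLOCK:(b + 1) * BLOCK] = bytes(BLOCK)
```

After the wipe, file-owned blocks are untouched while anything hidden in unallocated space is gone.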

