Comments

greg September 19, 2007 6:10 AM

I like the comments about programming languages. Unfortunately, current languages, although an improvement over C and C-style buffer overflows, are not really what I would call “secure” yet. But an improvement, to be sure. It’s just hard to write an OS in them with such a large existing code base in C.

Of course there are other ways to avoid overflows. But they are almost never used.

Toby September 19, 2007 6:15 AM

I would argue that KeyKOS (http://www.cis.upenn.edu/~KeyKOS/) has at least as much, if not more, to teach us than Multics.

In particular, one can argue that Multics’ identity-based access control scheme inspired the models employed today by Linux and Windows, which have shown themselves to be woeful at preserving the principle of least privilege, which has in turn led to much of the current insecurity.

Ian Miller September 19, 2007 6:25 AM

Some ideas from Multics were included in VMS.

I particularly noted this from the article “(4) people are unwilling or unable to recognize the compelling need to employ much better technical solutions.”

Nostromo September 19, 2007 6:33 AM

More mindless C-bashing, I see. “a C programmer has to work very hard to avoid programming a buffer overflow error.” Using strncat and strncpy instead of strcat and strcpy is “working very hard”?
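For illustration, the bounded calls in use (a minimal sketch; the one caveat is that strncpy does not null-terminate on truncation, so you finish that yourself):

#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[16];
    const char *input = "some possibly-too-long string";

    /* strcpy(buf, input) would overflow for inputs over 15 chars */
    strncpy(buf, input, sizeof buf - 1);   /* bounded copy */
    buf[sizeof buf - 1] = '\0';            /* terminate manually */

    /* bounded append: the size argument is the space REMAINING */
    strncat(buf, "!", sizeof buf - strlen(buf) - 1);

    puts(buf);
    return 0;
}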

Of course life is easier with C++ instead of C, but in the end, it’s the algorithms and the modularization, not the programming language, that determine the security.

Baka Zoona September 19, 2007 7:32 AM

I don’t think so. Anyone who received a software tape from HIS during that time frame knows who this is. My name was on the cover page of every printout that accompanied every software product sent to our customers.

Our developers built a backdoor into every OS developed by HIS to let “us” get onto every system (GCOS, ACOS, Multics, etc.).

Ian Mason September 19, 2007 7:35 AM

@Nostromo “More mindless C-bashing, I see.”

I can speak from the perspective of a man who has programmed in C on many operating systems, and in PL/1 on Multics.

The way PL/1 features were used in Multics allowed you to program to get the effects (and efficiency) of pointer arithmetic without doing any. In that respect the Multics variant of PL/1 was superior to C.

An example is in order.

First off, Multics didn’t have files as such. Instead it had named segments that were mapped directly into memory (‘initiated’ in Multics terminology). Thus accessing a file required you to manipulate the file contents as one large memory buffer. The classic Multics way of doing this was to describe the file as one PL/1 structure (very similar to a C struct).

PL/1 has BASED and REFER as part of its structure definition language. BASED names a pointer variable that points to an instance of the structure.

The following two are equivalent:-

PL/1:
declare fred pointer;
declare 1 a_structure based (fred),
    2 a_number fixed bin (35);

i = a_number;

C:
struct A { int a_number; } a_structure;
struct A* fred;
fred = &a_structure;

i = fred->a_number;

REFER allowed you to set the size of arrays dynamically based on a variable, which could be part of the same structure.

declare max_title_length fixed bin (35) init (256); /* illustrative allocation bound */
declare file_pointer pointer;
declare 1 file_mapping_struct based (file_pointer),
    2 title_size fixed bin (35),
    2 title character (max_title_length refer (title_size)),
    2 some_more_sub_structure ...

call hcs_$initiate (">udd>admin>imason>example" ... file_pointer);

From this point on you can access parts of file_mapping_struct directly – PL/1 takes care of all the pointer arithmetic for you. There isn’t a direct C counterexample because C doesn’t do this – you can achieve the same end result, but it requires the programmer to do a bunch of pointer arithmetic.
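For contrast, a rough C sketch of doing it by hand (the struct and function names are mine, purely illustrative):

#include <string.h>

/* Manual equivalent of BASED/REFER: the programmer, not the
   compiler, keeps the length field and the layout in sync. */
struct file_mapping {
    int  title_size;   /* length prefix, playing the role of REFER */
    char title[];      /* flexible array member (C99) */
};

/* Copy the title out, doing by hand the bounds check that the
   PL/1 runtime performed automatically. */
int get_title(const struct file_mapping *fm, char *out, size_t outlen)
{
    size_t n = (size_t) fm->title_size;
    if (n >= outlen)   /* nothing forces the programmer to write this */
        return -1;
    memcpy(out, fm->title, n);
    out[n] = '\0';
    return 0;
}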

The combination of BASED and REFER leaves the compiler to do the error prone pointer arithmetic while having the same innate efficiency as the clumsy equivalent in C. Add to this that PL/1 (like most contemporary languages) included bounds checking and the result is significantly superior to C.

The issue isn’t merely which string routine you pick from the library but every bit of pointer arithmetic and every array indexing operation being automatically protected in one language and completely unprotected in the other.

(My PL/1 syntax is probably slightly off somewhere, it’s 21 years since I wrote a PL/1 program)

If anyone has any relevant Multics questions I’ll try to answer. I’m quite probably the only person in this conversation who’s ever actually used a Multics machine (as opposed to reading mentions of it in papers and textbooks).

greg September 19, 2007 7:50 AM

@Nostromo

More knee-jerk programming language drivel. If it’s so damn easy, why are there so many buffer overflow bugs?

I have over 10 years of commercial C and C++ experience. I was not bashing, other than pointing out a limitation or weakness. The ability to do unchecked pointer manipulation makes these languages insecure by nature. Damn, C is really just syntactic sugar for an assembler.

Nor did I start trumpeting other languages as the perfect solution for every job.

For the record, I use assembler or C when it’s the right tool for the job. Which, these days, is less and less often. The last time was programming a microcontroller.

Writing secure code is hard. Writing it in C or C++ is harder.

Andy Dingley September 19, 2007 7:51 AM

Segmentation is only as good as the segment access control.

Years ago I worked on (ghastly) ITL minis running Modus. This had one of the most rigidly segmented architectures I’ve ever seen, due to hardware limits rather than security, although this had been “re-engineered by marketing” into a “security” benefit.

Step #2 for any new coder, right after “Hello World”, was to learn the “magic poke” that made your process “trusted” and allowed untrapped cross-segment access. Debugging was simply impossible without it. Segmentation was a nightmare on this machine (no data structures bigger than 16k without hard work), but it offered nothing for security.

supersnail September 19, 2007 7:56 AM

I would seriously question some of the conclusions of this article.

  1. Small sample size.
    Honeywell claimed MULTICS was running at 80 sites. Say 300 servers, running mostly in secure military environments, mostly at a time when hackers and script kiddies were stealing apples at kindergarten.

  2. PL/1 vs. C.
    Most buffer overflows are caused not by C but by careless socket library calls. I see no reason to think PL/1 programmers would be any more careful when processing “receives”. And you can cause buffer overflows with VARCHARs: you just stick a larger value in the length field (easily done with pointers!).

  3. The conclusion that kernels need “certification”.
    There is no evidence of any widespread hacks effected by distribution of corrupt kernels or corrupted applications. And God knows it would be easy to distribute a hacked “Linux” kernel: just stick it on a web site and wait for people to download it.

  4. The assertion that modern OSes do not distribute security fixes.
    My two “home” OSes automatically get security fixes installed (although I retain the option to reject fixes). Since one is the best-selling “Windows XP” and the other the ever-popular “SUSE” Linux, I think this covers most modern OSes.

Andy Dingley September 19, 2007 7:57 AM

I’ve written very little PL/1 (and barely more PL/M) but I’ve certainly run into buffer overflow problems with it.

Just because the language spec allows structure size limits to be specified doesn’t mean that the compiler has implemented such a feature, or that it hasn’t been switched off by an optimisation flag. We were developing for tiny embedded systems, and our PL/1 compiler implemented a painfully small subset of the overall behemoth language – not guarding against buffer overflows being just one of the more painful memories.

greg September 19, 2007 8:02 AM

@supersnail

On 4).

But wouldn’t it be nicer not to need to download fixes as often as once a week (or more, if you really keep up with the updates)?
And how many fixes introduce new weaknesses?

One thing is clear. The current level of security in modern OS’s is keeping a lot of people employed. For the wrong reasons.

Another Kevin September 19, 2007 8:05 AM

@Ian

As a fellow Multician (actually, I worked primarily over on the GCOS3/GCOS8 side), I quite agree that much of Un*x security has been clumsy reimplementation of either the Multics ring-and-segment system or else the capability-descriptor architecture of GCOS8.

@Toby

Multics access control was not solely identity based, although the literature can give that impression. A “userid” was made up of three components: person ID, project ID, and classification level. When I was logged in as “Kenny.G8Arch.u” I had a different set of privileges than I had as “Kenny.SERBoard.u”, etc.

Project IDs in practice were more like role IDs – although most of them were “what project are you working on,” some were functions like “SysAdmin”. Unlike groups on Unix, a session couldn’t have more than one project at a time. There were also nondiscretionary access controls, although the Multicians never figured out how to control the bandwidth of covert channels to the point where the Government was comfortable with multiple compartments on one mainframe. (In Multics’s defense, no other vendor did, either!)

supersnail September 19, 2007 8:15 AM

@greg.

Yes, it would be really nice if my home PCs didn’t need security fixes once a week.
It would be even nicer if I didn’t spend two or three hours of CPU a week on security-related processing (fixes, firewalls, anti-virus, anti-spyware) compared to my meagre minutes of e-mailing and googling.

Windows probably has the best technical security foundations of any modern OS. It was written by ex-DEC/VMS guys who were inspired by the MULTICS system, among others. The NT security model is technically excellent. The problems arise because in the consumer versions security is turned off by default, and almost every application (e-mail, database, web server, web browser) allows arbitrary pieces of data to be executed as scripts.

greg September 19, 2007 8:52 AM

@supersnail

A secure OS would not be sensitive to insecure user tools. And configuring those tools with “security” on is often not even possible. That is not secure, nor designed with security in mind.

The current crop of OSes just wasn’t designed from the ground up with security in mind.

Multiplexed September 19, 2007 9:07 AM

Kudos to Multicians!

(Open)VMS: Still great after all these years, despite the fact that no one wants to use it any more.

Just because Cutler and company did the initial work on Windows NT does not imply that Windows is “secure.” The compromises that were made, mostly due to the people on the business side, are astounding.

So what does that leave us with today? OpenBSD?

Karellen September 19, 2007 9:18 AM

Hmmm…..reading the article:

2.1 The security in Unix-type OSs is one of its main features.

2.2 The security in Unix OSs is built in to all versions.

2.3.1 Sounds like C++ is just as good as PL/I here. It has safe strings, does link-time checking of function arguments thanks to name mangling, provides rich (checked) access to arrays, etc…

2.3.2 No-Execute – depends on hardware. On hardware that supports it, Unix has supported it for ages. (Why does the author appear to think that contemporary OSs only run on x86?)

However, interesting points on segmented pages and stack direction. I don’t know enough to go into more detail about that.

2.4 – Good point. What about the security of non-SE Linux, though? It’s not as capable, but it is less complex. Not sure how Multics security rates against standard Unix permissions.

3.2.1 – Depends on the development process for the Unix system in question. Linux seems fairly resistant to injection of malicious code, as such attempts have been picked up in the past. Can’t speak about other unices.

3.2.2 – Many Linux distros use signed archives/repositories to distribute updates, which makes this attack hard. Other commercial unices tend to ship via post, which has been faked before, but is not common.

3.2.3 – You need to be root to write to boot sectors on Unix systems. If the attacker can already run code as root, you’ve already lost.

3.3 – Unix has always supported remote logging. If the remote logging machine has no ports open apart from one to append logs and a second to read them, which should be trivially auditable, and only allows console logins, then log erasure/modification is not possible.

Given that, I don’t see how the claim holds up that Multics is so much more secure than a Unix system combined with C++ as the programming language – a pretty contemporary stack.

Anyone care to help me out?

W. Craig Trader September 19, 2007 9:41 AM

@Ian Mason

I was a Systems Operator on the Pentagon AFDSC cluster from 1986-1988, but I will grant that you probably had more experience programming in the Multics environment than I did.

If I recall correctly, the Multics mail utility could move data from one security context to a less secure context, as long as you had a target account in the less secure context. I demonstrated that problem in the first week of training, but it wasn’t on Honeywell’s list of things to fix…

joe September 19, 2007 9:46 AM

This was on Slashdot today: internet security moving toward whitelists for desktop software.

http://it.slashdot.org/it/07/09/19/0436203.shtml

This looks a lot like the common authentication problem, applied to software. Someone somewhere has to certify that a software app deserves to be on the whitelist. There will be pressure for that review to be quick and cheap. And bad guys will be submitting their wares for approval alongside the good guys. The net effect might be to make it harder to deploy good software, while having limited benefit in excluding the bad.

Ian Mason September 19, 2007 10:44 AM

Coo, looks like there are either:-

1) Lots more Multicians than I thought, OR
2) They are unreasonably concentrated here.

@ W. Craig Trader

Big cock-up if that’s the case. From memory, mail used message segments, and message segments were subject to non-discretionary access control just like everything else. The basic non-discretionary access rules said ‘you can’t write from a higher privilege to a lower privilege’, e.g. if your process is running at ‘confidential’ privilege you can’t write to an ‘unclassified’ segment, but you can read it (write up, read down; i.e. the Bell-LaPadula model).
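For anyone following along, those two rules reduce to a pair of comparisons (a C sketch, with levels as ordered integers):

#include <stdbool.h>

enum level { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET };

/* Simple security property: no read up. */
bool can_read(enum level subject, enum level object) {
    return subject >= object;   /* read down or at the same level */
}

/* *-property: no write down. */
bool can_write(enum level subject, enum level object) {
    return subject <= object;   /* write up or at the same level */
}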

@Andy Dingley

We’re specifically talking about Multics, and the Multics PL/1 implementation was a very full one. I’ll grant that that is rare and most implementations go nowhere near the full language, especially the embedded-programming subsets that have been around. In fact the latter are more like “PL/1 for C programmers who prefer PL/1 syntax”.

However, the argument is basically that choice of programming language can go a long way to assisting programmers in producing a secure/correct program. That choice must extend to language implementation/feature set where that choice exists.


One thing Multics got right was confining some privileged operations to a dedicated System Security Administrator role so locked down that most of the normal day-to-day commands didn’t work when you were in it. This prevented:-

1) The ‘doing all your work as root’ syndrome,
2) Many trojan attacks, simply because the commands and system calls needed to make them useful weren’t available.

Ilya Levin September 19, 2007 10:57 AM

@Ian Mason

No, you are not the only person here. I was familiar with Multics and have programmed in PL/I on VMS and VM/CMS. I believe there were other such people in the comments already by the time I wrote this post. But you are quite right.

Martin September 19, 2007 11:04 AM

Bringing the Multics folks out of the woodwork: I was a user as a graduate student at MIT. I was impressed by the elegance of PL/I and segmented memory, security rings, etc., visible even from the command line level.

However, compared with CTSS, Multics seemed really slow (for utilities & word processing), and it was pretty expensive compared to a typewriter! I was tempted to walk down to the PDP-8 in the lab or the PDP-1 downstairs.

As I recall those times, security beyond a user password had nearly zero significance to us. (There was no Internet!) What was more a worry was getting your fair share of CPU time and not having to twiddle your thumbs at peak demand times. As if you had hundreds of users on your desktop Linux box today. 😉

Kathryn September 19, 2007 1:09 PM

This is why I usually read and don’t talk here – the company can get pretty rarefied. I have a basic knowledge of C++, but I’m finding this discussion fascinating.

What can we do, as individuals, to encourage software suppliers to create secure products? These are design choices that are made very early on, pretty deep in the development, and not something that will ever come up in consumer marketing surveys.

It’s a great mental workout to discuss this stuff, and wonderful to be able to learn about it, but at the end of the day I need to be able to take action beyond throwing up my hands and trying to pick the best of a bad lot at my meager skill level.

Jim Lippard September 19, 2007 1:14 PM

Toby: As Another Kevin pointed out, Multics access control lists (ACLs) included more than user-id level controls. The third field wasn’t used much, but distinguished interactive users from “absentee” (batch) users and daemons. ACLs could be of arbitrary length, some file types had “extended ACLs” (like mailboxes, which had add, delete, read, own, status, wakeup, and urgent permissions), and there were per-directory IACLs (initial ACLs), which could also be of arbitrary length – much more powerful than umask. There were also mandatory access controls (known as AIM, the access isolation mechanism), and a key contribution of Multics, the ring mechanism.

W. Craig Trader: While Multics mailboxes could hold messages at different security levels, you could not see your Top Secret messages while logged in as Secret (no read up) and you couldn’t delete your Secret messages while logged in as Top Secret (no write down). I re-implemented the Multics message facility in the mid-eighties, and I’m not aware of any problems that allowed a regular user to copy messages read while logged in as Top Secret to Secret–are you referring to a bug in the old “mail” command?

All: There’s a lot more information on Multics at Tom Van Vleck’s Multicians website, and I’ve noticed that a lot of Multics documentation and code has found its way online of late.

Ian Mason September 19, 2007 2:28 PM

@Kathryn “the company can get pretty rarefied.”

Go on, come out and say it. You really mean “old” don’t you. 😉

Valdis Kletnieks September 19, 2007 4:27 PM

@supersnail:

“1. Small sample size. Honeywell claimed MULTICS was running at 80 sites. Say 300 servers running mostly in secure Military environments, mostly when hackers and script kiddies were stealing apples at kindergarten.”

On the other hand, some of the people who were doing security at the time were pretty sharp. For instance, read Ken Thompson’s famous Turing Award lecture “Reflections on Trusting Trust” (http://www.acm.org/classics/oct95/), which references “an unnamed Air Force document” – in fact the original Karger & Schell Multics penetration test report that the “30 Years Later” retrospective revisits.

Ian Mason September 19, 2007 4:58 PM

Also on the “small sample size” issue:

You must also remember Multics started back in the days of mainframes, when computers were rare, expensive beasts. IBM’s System/360 was introduced in the same year work on Multics started. Eighty sites was a fair number for this kind of big iron, and each site supported hundreds or thousands of users. So as a proportion of the installed base of all computers, 80 sites was not insignificant.

Furthermore, I’d bet that citations for “Multics” in peer-reviewed journals outnumber citations for “Microsoft Windows”.

UNTER September 19, 2007 7:37 PM

“To make matters worse, everyone is now expected to be a system administrator of their own desktop system, but without any training! ”

I think they underemphasize that line. In the world of the early ’70s there was a high ratio of “genius” to “user”. Administration was being done by some of the best in the world, with relatively few users, particularly few low-expertise users. They could focus on the technical issues of security.

It is impossible to even begin to deal with security issues when very few system administrators are any good, when even the good ones have relatively little authority within their organizations, and when the user base is composed of anyone who can point with a mouse. A great deal of administrative responsibility is then placed on mouse-clickers.

In that environment, how can you implement security? I don’t trust my own IT department (with good evidence), but my colleagues don’t even know that there is a security and infrastructure problem (the two are often one and the same). And this isn’t even my area of expertise – just a side-line so my work doesn’t get destroyed.

Technical issues are greatly submerged by administrative ones. Not only can we go thirty years without significant kernel/OS reform; I can guarantee that we will! Ubiquitous computing will bury that layer under more pressing issues of simple competence.

Lawrence D'Oliveiro September 19, 2007 9:29 PM

I don’t accept most of their claims that MULTICS is/was somehow inherently more secure than today’s Linux/BSD systems. I don’t think they ever got as far as implementing TCP/IP, let alone more complicated things like SMTP, HTTP, Kerberos and so on – protocols whose implementations did suffer from vulnerabilities. If they had had to implement those in PL/I, they would have had just as many vulnerabilities.

They disparage SELinux because it’s bigger than their original kernel. Yet they don’t say how big their own “Kernel Design Project” would have got if it hadn’t been stopped.

And they never had anything like SSL or SSH, mutually authenticating between secure systems over an insecure network. That’s something we take for granted today.

lod September 20, 2007 1:43 AM

I find it fascinating to read all these computer security discussions based around multi-user systems and protecting different users from each other. This is an issue for big servers, but it’s a fairly small one, and I suspect it will become basically irrelevant as virtual machines become more popular.

The problem I see is one of malicious applications rather than security levels. I really don’t give two hoots if one of my computers gets compromised and the attacker gets root access. I can just pull down the computer and reinstall the lot in a few hours. The far bigger issue for me is my documents and data, and you only need my regular user permissions to attack them.

On a single-user computer I’m concerned about my email program, my web browser and my online game. Anything that has online connectivity can be used to corrupt or delete all my documents. SELinux is promising as an improvement in this area, but it doesn’t look like Multics has anything to offer in this regard.

For now the best protection seems to be regular backups, but it’s becoming harder as hard drive size increases outpace backup media size.

Tom Van Vleck September 20, 2007 10:27 AM

I worked on the security internals of several operating systems, including Multics and Unix-derived systems. Multics was more secure. Not perfect, but better than the systems in common use today. Systems like L4.verified and Aesec seem like an interesting next step.

PL/I is a better language than C or C++ for writing secure code. Again, not the ultimate. Because the compiler and runtime were aware of the declared size of string objects, some kinds of overflows were much less likely.

Security has three parts. First is the user interface, like ACLs and mandatory controls, capabilities, etc. Second is the reference monitor, the underlying machinery that enforces the controls. The Multics reference monitor was small, simple, and efficient, as a result of design, language support, hardware support, and implementation practices. Third is user behavior: the most secure system in the world will fail if a user executes code from strangers… and this is the most common current problem.

I think I will add some of the incorrect remarks here to http://www.multicians.org/myths.html which discusses a lot of Myths About Multics. Facts:
– Multics had one of the first TCP/IP implementations. It ran in a less privileged ring than the kernel.
– My user registration code in Multics would not support “hundreds of thousands of users” on a single site. Maybe 1500 registered users.

Multicians are encouraged to register at and contribute to http://www.multicians.org/

Key Stroker September 20, 2007 3:20 PM

@Kathryn

“This is why I usually read and don’t talk here – the company can get pretty rarefied. I have a basic knowledge of C++, but I’m finding this discussion facinating.”

I’ll second that. When I was taught (a little) programming, security was not on the syllabus.

@Bruce

This is one of the most interesting and thought-provoking articles you’ve posted in a while. It makes me remember my student days and confirms my deep scepticism about modern systems (not just IT systems). More, please!

ranz September 20, 2007 10:16 PM

Jim, Tom, good to hear you again…

I too was a Multics “hardcore” engineer. The hardcore group had responsibility for the bowels of the OS. I also had the privilege of working with Roger Schell while I was in the military and later in the civilian world. Roger, one of the world’s pre-eminent uber-masters of computer and network security, is still at it, at the helm of Gemini Computers in Pacific Grove, CA. They design VPN solutions on the A1-rated GEMSOS kernel. Roger’s work on the Multics kernel project earned him a PhD from MIT in the late 1970s.

Multics was, and is still, more secure than any Unix-based solution. Tom Van Vleck pointed out some of the conceptual differences above. To dig in a little deeper you must look at the Multics marriage between hardware and the OS. Although today’s processors have ring structures available (I remember those discussions when Intel visited CISL), the OSes rarely if ever make use of them. If a System Administrator changed your access to an object, the change occurred almost instantaneously for all users, because a hardware fault was generated that required each process to revalidate its access to the object before any further references. These things (just like ACLs, MAC, privilege rings) were not add-on products as in Unix but were fundamental to the design from day one (actually, MAC was a major OS redesign in the early years of the project). The processor was specifically built for the OS, and the OS was specifically designed and built to use the security features of the hardware. One could not exist or be secure without the other.

The claims that Multics is secure are substantiated by a grueling two-year evaluation project by the Computer Security Center at the NSA. Multics was awarded a B2 security rating in 1985.

I, like most Multicians I’m sure, have often wondered how the OS would have translated to the distributed computing environment we have today if it had continued to be funded. System administration was difficult because you had to understand the security policy being enforced and know how to use the tools at your disposal to effect that enforcement. I don’t think I’ve ever seen a cogent description of these issues for Windows. -ear

Benjamin Random September 21, 2007 11:01 AM

@Kathryn: As individuals, the choices we make about which products to buy are one of our most powerful methods of communication. I believe that increasing heterogeneity will increase general security. So there it is – always go for the underdog!

Scott September 21, 2007 11:11 PM

@lod

You may not give two hoots if your box is rooted.

But I care if your box is rooted, because the botnet it joins is a hassle to me. (Whether or not you are a good admin and restore it promptly is not the point. There are plenty of bad admins out there.) The point is that it is not enough to say “My data is safe.”

Corruption or destruction of data is just one thing which must be defended against.

J. Spencer Love September 22, 2007 5:22 PM

Lawrence D’Oliveiro notes that he didn’t think Multics had TCP/IP, much less “more complicated” things like SMTP. His “don’t accept” of old-timers’ claims about Multics really reads more like “reject”. If so, it’s from a position of ignorance.

In the fall of 1982, just about 25 years ago, I received some prototype code from Michael Greenwald, then a grad student of David Clark’s at MIT, and was told I had to turn this (well, more like implement using that and the RFCs as learning aids) into a secure and adequately-performing service in time for the cutover of the ARPAnet from NCP to TCP/IP, which was scheduled for January 1st, 1983.

MIT-Multics was ready for the cutover on that date, but the VAX implementation (not from DEC, as I recall) was not widely available until October 1983, which is when the cutover became reality. My first TCP/IP ran in ring 3, which was protected from users (in ring 4), but the OS (in rings 0 and 1) was protected from it.

Speaking of “adequately performing,” although most of this was written in PL/1, I wrote a utility in assembler to rapidly convert between 8 9-bit Multics characters in a 72-bit doubleword and 9 8-bit bytes in the same doubleword for binary FTP transfers. The PL/1 compiler’s output for this task did not perform well.
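In modern C the same repacking is a short loop (a sketch; the exact bit ordering is my assumption, and the original was hand assembler for speed):

#include <stdint.h>

/* Repack a 72-bit doubleword: 8 Multics characters of 9 bits each
   into 9 bytes of 8 bits each. */
void repack_9to8(const uint16_t in[8], uint8_t out[9])
{
    uint32_t acc = 0;   /* bit accumulator */
    int bits = 0;       /* number of valid bits in acc */
    int o = 0;

    for (int i = 0; i < 8; i++) {
        acc = (acc << 9) | (in[i] & 0x1FFu);   /* append one 9-bit char */
        bits += 9;
        while (bits >= 8) {                    /* drain full bytes */
            bits -= 8;
            out[o++] = (uint8_t)(acc >> bits);
        }
    }
}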

During the first 9 months of 1983, I was assigned to work with Honeywell to develop a multi-level secure TCP/IP, which ran in ring 1 (where non-discretionary access control was largely implemented, not just in the ring 0 kernel), and the code we produced ran at the Pentagon on a LAN connecting several (four?) Multics systems. I never saw those systems, but my understanding is that Secret and Top-Secret data shared that network. Multics supported multiple compartments (as well as levels), but I don’t know how many it was trusted with on the same network.

SMTP was widely used before TCP/IP and DNS existed. There was no security in that protocol as specified, although optional authentication was added fairly early.

Even the final generation of Multics CPUs was only capable of a few (6-8) MIPS (each, although a system could have 4 or more). Processors we have now are (no exaggeration) hundreds of times faster. Network encryption, if used at all, was handled by dedicated processors in external boxes, typically at the link level. Mail within a Multics system was pretty secure; network mail was a different critter entirely. Secure file transfer involved encrypting a file before launching your FTP client.

Other services like Telnet and FTP also existed long before TCP/IP did. The ARPAnet was a going concern in 1973 when I arrived at MIT as a freshman and discovered Multics. I had already moved on to other projects by the time DNS was deployed. Multics hardware development ceased in the mid 1980s, but software development continued (in some form) into the 1990s. A web browser for Multics seems unlikely, but something of the sort may have been written. (Think lynx? What ever happened to gopher?)

ranz September 24, 2007 8:26 AM

And to think, all this was going on before anybody heard of Al Gore. Unless he was a grad student rooming with Greenwald, that is. -ear

Richard Lamson September 27, 2007 8:22 PM

One of the security aspects of access control not mentioned above is that there were separate controls for read, write and execute on segments. Typically writable segments were not executable; certainly temporary storage segments such as the PL/I stack and allocation areas were not. Executable program segments were pure code and not writable, i.e., no program would or could write into itself. Thus, even if there were an out-of-bounds error in a program, writing into that part of memory would not corrupt executable code, so it would not be possible to use buffer overflow errors as a security back door. This kind of hack is the secret to most of the buffer-overflow security problems.
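A modern POSIX analogue of that pure-code discipline, sketched in C (Multics enforced it per segment, in hardware; the function name is mine):

#include <string.h>
#include <sys/mman.h>

/* Map a region writable to fill it with code, then flip it to
   read+execute so the program cannot write into its own code. */
void *load_code(const void *code, size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    memcpy(p, code, len);
    if (mprotect(p, len, PROT_READ | PROT_EXEC) != 0)
        return NULL;
    return p;
}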

By the way, the person who said IP was not implemented on Multics was dead wrong. Telnet/FTP/SMTP were first implemented over NCP (pre-IP/TCP Arpanet protocol) and reimplemented several times. I personally implemented a Telnet and FTP server for both IP/TCP and Chaosnet; they worked by being data pipes to a pseudotty, so when you logged in, it was just like coming in from a normal terminal, went through the same validation by the Initializer Daemon as a normal terminal; in short, it was not a source of access control problems. I suppose if MIME had been implemented someone could have sent Multics executables which perhaps some foolish person would have been willing to run, but it, too, would have been subject to the access constraints of the user doing so, and would not have caused catastrophic system problems (although the user him/herself might have been pretty upset). But that would be no different from running, say, the ‘cookie’ command that Chris Tavares wrote, which was a little Trojan Horse hack that would set a timer, and every few minutes your process would wake up and demand a cookie.

Jim Wilcoxson September 25, 2008 8:55 PM

@supersnail:

An advantage of PL/I over C when it comes to buffer overflows is that when passing arguments to external procedures, the compiler also passes “dope vectors” – a small data structure that describes each argument.

So, if you pass a char(20) string to an external procedure, the external procedure would declare it as char(*), meaning “I don’t know how long this is”. If you try to write 100 characters into this argument, the PL/I runtime will automatically chop it to 20 characters and prevent a buffer overflow. And if you ask for length(arg), it returns 20, even though you declared it as char(*). It truly was impossible to have a buffer overflow unless the programmer did something like a pointer overlay, i.e.:

dcl
  arg char(*),
  overwrite char(100) based,
  p pointer;

p = addr(arg);
p->overwrite = 'blahblah...'; /* a big string */

Now, you could declare the argument as char(100) in the external procedure instead of char(*), and that probably would cause an overflow. PL/I would allow you to do this for efficiency, since it wouldn’t have to access and calculate string lengths on assignments. But PL/I does have a very elegant mechanism for handling varying-length data that C simply doesn’t have.

This mechanism is completely separate from PL/I varying-length strings – char(20) varying. For these strings, a length field is carried around at the beginning of the string so that the programmer can always access the current length of the string. For example, if ‘abc’ is assigned to a char(20) var string s, then length(s) = 3. If ‘abc’ is assigned to a char(20) string s, then ‘abc’ is copied followed by 17 spaces, and length(s) will always be 20. When char var strings were passed as arguments, the external procedure would declare them as char(*) var, and buffer overflows were still impossible because the PL/I runtime would truncate any assignments that were too big, just as in the char(*) case.

Another nice feature of PL/I is that if subscript checking was enabled, an external procedure with an array declared as:

dcl a(*) fixed bin;

meaning “I don’t know how big the array is”, could still be accurately range-checked. The PL/I runtime would access the dope vector to see how big the array really was to verify whether a subscript was out of bounds.

You can also continue to pass arrays and strings with unknown bounds (declared with *) to more external procedures, and the dope vectors are carried along at each level so that range checking works no matter the call depth.

Sure, it is much harder to implement all of this in the PL/I runtime, which is why C doesn’t do it. But it’s a shame some of the good aspects of PL/I were not carried forward into C.
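A rough C analogue of what the dope vector buys you – a “fat pointer” that carries the declared length (illustrative only, not how PL/I lays it out):

#include <stddef.h>
#include <string.h>

struct dope {
    char   *data;
    size_t  len;        /* declared size, e.g. char(20) */
};

/* Assignment that truncates instead of overflowing, as the PL/I
   runtime did for char(*) parameters. */
void assign(struct dope d, const char *src)
{
    size_t n = strlen(src);
    if (n > d.len)
        n = d.len;                       /* chop to declared size */
    memcpy(d.data, src, n);
    memset(d.data + n, ' ', d.len - n);  /* space-pad, fixed-string style */
}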

Nick P July 15, 2015 1:58 PM

@ Bruce

I like how Karger and Schell pre-empted a lot of future “discoveries” in that paper. Had they read it, those discoveries would’ve happened much sooner and we’d know a lot more. MULTICS was a good design that was a bit too pricey. Like all the best stuff, the cost of hardware back then probably hurt it the most. Lots of systems that allowed excellent security or software engineering had this problem. Less excuse today given an Intel processor can do over 100,000 context switches a second while still being 95+% idle (per Lynx Inc).

There’s actually more opportunity today than ever, given Intel and AMD are both doing semi-custom CPUs. One could do a UNIX or Windows variant that removes some cruft while adding in some critical protections from the academic literature. There’s literally no excuse outside of money or backward compatibility at this point.

@ Toby

I agree that KeyKOS was probably the best of the old OSes. It had a microkernel, enforced POLA on the whole system, used the capability model, and persisted all running data to make for better crash recovery. It met extra security requirements with the KeySAFE addition. A modern variant of it was EROS, and CheriBSD leverages the approach on capability-secure hardware.

Note: I was also wow’d by the Burroughs architecture, which I discovered in recent years. I think it’s superior to the MULTICS architecture in a number of ways. It was especially smart that they tried to deal with the biggest trouble areas in hardware.

@ Nostromo

re C or C++ failure

Empirical studies done by the military and defense contractors in the ’80s and ’90s showed C and C++ programmers wrote software with more defects than users of other languages. The reason, as others showed, is that the language does little to nothing to help developers. The Modula and Oberon languages gave code efficient enough for operating systems, with safety features that knocked out a lot of problems. So it can be done, even on ’80s hardware. C is just sloppy on purpose. On the far end, one system language was designed to knock out as many errors as possible, and those same studies showed it got results. One variant straight-up proves their absence.

So, C is garbage. It always was: a watered-down ALGOL descendant whose weaknesses were compromises to deal with the hardware at the time. That hardware is gone outside the embedded world. Today’s hardware is so powerful that many apps run on heavy runtimes such as Java and C#, with even OS prototypes written in those. Even the mainstream has learned we can do better tradeoffs: Go, Rust, Swift, and especially Julia. So I think moving our low-level efforts up a notch to something more like Modula-3 or Component Pascal, to avoid all C’s safety pitfalls, is more than past due. Such languages typically do have an unsafe setting for the few modules and functions that need it.

Also, pretending C doesn’t encourage problems doesn’t help credibility, given all the alternatives going back to the ’60s that eliminate some of its problems. If the job is robust software, then we might as well use robust languages and tools to build it.

@ supersnail

  1. Sample size doesn’t matter. Security evaluation happens on architecture, implementation, and so on. The paper was about the evaluation and pen testing of MULTICS, along with commentary. Show me a UNIX appliance, and I’ll show you risk all over the place, with proven examples and even actual CVEs from past or present. We’d have less to say about MULTICS during a similar review, and even less about KeyKOS/EROS. That’s better design in action.
  2. I’ve illustrated C’s problems above. The best approach, taken by competitors, was to make things safe by default while also efficient, allowing unsafe constructs in modules where necessary.
  3. Most stuff designed has had poor security. If there’s no accountability, it’s usually even worse. A certification is an independent evaluation of claims against agreed-upon criteria (e.g. Orange Book, Common Criteria). The pentests of IBM’s VM/370, MULTICS, and UNIX showed severe problems along with proposed fixes. That’s already proof of the value of a review process. Once good criteria were developed, systems built to them (esp. A1-class systems) were pentested by the NSA and did far better than what came before, with some going unbroken [with the methods of the time]. Certifications can certainly be done in useless or weak ways, but that’s an argument against bad certifications rather than certification in general. Most lessons-learned papers had positive things to say about the impact of B3/A1/EAL6/EAL7 processes on quality.
  4. Modern OSes do distribute security fixes. Yet the high[er]-assurance OSes of past and present rarely needed security fixes to protect the system’s integrity: they got it right the first time with strong design, isolation, detection, and recovery. The fixes at that point are typically for low- or de-privileged components, to protect their attack surface or availability. Modern OSes, on the other hand, seem to have severe problems that compromise the entire system all the time. Matter of fact, their architecture is so poor that you can compromise an entire system by opening an email. (!?) A sordid state of affairs, to say the least…
  5. Windows has among the worst security foundations: a monolithic kernel, insecure interfaces, overprivileged components, overprivileged drivers, undocumented functions, hard-to-parse formats/protocols, weak protocols, and an unsafe language for all of it. Predictably, it was the hardest hit year after year until they finally got their act together by having Steve Lipner, who did strong security work with Karger, embed better security review into their development processes. They also made some design and tooling changes that helped. It still has unjustifiable vulnerabilities (esp. in old code), lacks a trusted path, and lacks POLA for untrusted apps.

@ Multiplexed

Although I argue for the opposite architecture, OpenVMS was impressive to me in terms of its improvements on monolithic architecture, better security features (esp. SEVMS), and robust implementation. It achieved far more than most monolithic designs in terms of manageability, security, and reliability. The bad news is it was doomed the second HP took it over, since HP had a competing product (NonStop). The good news is HP recently handed it off to another company to port it to x86, and old versions can still run on increasingly cheap Alpha/Itanium boxes on eBay. May not be dead yet. 😉

One reason for the quality was the development process. They worked regular shifts with weekends off. They spent a whole week adding features and tests for them. They ran the tests, including regression tests, over the weekend. They spent the next week fixing any problems, based on priority. Run tests over the weekend. Rinse, repeat… results. A simple method whose resulting quality is still better than many commercial firms’, despite the state of the art in QA having advanced far past this.

Anyway, I always ask Linux or Windows snobs laughing about VMS if they have any system with 17 years of uptime, a high-throughput box with 5 years of uptime, or if they’ve ever forgotten how to reboot their system because they never do it. Common experiences for OpenVMS users. Not so much for the Windows or Linux crowd, despite the hundreds of millions to billions in development effort put into them.

@ Karellen

“2.1 The security in Unix-type OSs is one of its main features.
2.2 The security is Unix OSs is built in to all versions”

UNIX started out very insecure and still has many weaknesses. Data Secure UNIX, Trusted Xenix, UNIX emulation on secure kernels… people worked extremely hard to achieve security with UNIX compatibility. They still had to change system calls, isolate security-critical functionality from the main UNIX codebase, and they achieved medium assurance at best. The strong stuff was all clean-slate and built for purpose. Check out the EROS link above, or especially these systems, to see how differently they’re architected to prevent and isolate problems. I’ll add that there’s not a single case, outside hardware support, of anyone securing a monolithic kernel.

” Depends on development process for Unix system in question. Linux seems fairly resistant to injection of malicious code, as it’s been picked up in the past. Can’t talk about other unices.”

The kernel development team does seem better than average on that. The distros in general, and the various software that runs privileged? Plenty of risk there…

“3.2.2 – Many Linux distros use signed archives/repositories to distribute updates which makes this attack hard. Other commercial unices tend to ship via post, which has been faked before, but is not common.”

They run on machines that nation-states have 0-days for. It’s why the high-assurance systems of the past had to make a security argument from the bottom to the top. Strong crypto doesn’t matter if the kernel or app gets broken. To see what it takes, here is a copy of the framework for security analysis of systems I posted in a conversation with someone who thought it just takes secure coding.

“3.2.3 – You need to be root to write to boot sectors on unix systems. If the attacker can run code as root to do this, you’ve already lost.”

See setuid root. UNIX architects seemed all about getting attackers’ code running as root. TIS’s Trusted Xenix OS cleverly removed the risk by making the kernel clear the bit during a write to an executable, with an admin or update process manually resetting it if the change was legitimate. Couldn’t get UNIX developers to adopt that, though. I haven’t studied it in a long time, so I’m not sure whether setuid issues have been eliminated in modern distros. I just isolate the whole mess.
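The Trusted Xenix idea fits in a few lines (a C sketch; in a real kernel this check lives in the write path, and the function name is mine):

#include <sys/stat.h>

/* On any write to an executable, drop its setuid/setgid bits; an
   administrator must deliberately restore them if the change was
   legitimate. */
void on_write_to(const char *path)
{
    struct stat st;
    if (stat(path, &st) == 0 && (st.st_mode & (S_ISUID | S_ISGID)))
        chmod(path, st.st_mode & ~(S_ISUID | S_ISGID));
}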

“Given that, I don’t see how the claim that Multics is so much more secure than Unix systems, combined with C++ as a programming language, a pretty contemporary system, holds up.”

(Multicians correct me if I’m wrong given I mainly read the papers…)

One point: microkernel. Take the number of flaws in UNIX’s monolithic kernel that give total system access. Take the number in MULTICS’s. The difference is how much more secure MULTICS is due to just that design choice. There’s also the reduced number of easily-exploitable bugs due to the programming language choice. The use of rings and segments could sandbox compromises of certain components/apps to the point that attackers might need more than one flaw (i.e. chained exploits) to hit a specific target. It also had a stack where incoming data didn’t flow directly toward the stack pointer – unlike the ludicrous stack design that inspired all kinds of UNIX workarounds (and stack overflows) that failed to fix the actual problem.

It’s not the most secure design built back in the day. It was an early attempt that made interesting decisions that put it ahead of many competitors, eventually got a positive security evaluation, and informed the design of future systems. Sound architectural and implementation decisions put it way ahead of UNIX in many ways. UNIX’s name even comes from the fact that it was a knockoff of MULTICS for cheaper hardware. The more secure systems, such as GEMSOS or XTS-400, copied a number of MULTICS’s design and implementation tactics. MULTICS itself is probably far from secure if we did a thorough evaluation of it with modern knowledge. However, I believe its main goal was to be as reliable as a utility (e.g. phone service), and I’ll let the Multicians tell us if it achieved that. If it didn’t, Tandem’s NonStop architecture certainly did later. 😉

@ Ian Mason

I believe this blog attracts a more diverse and higher-quality audience than most. Schneier’s articles range from old wisdom to the modern take on things. It’s unsurprising that many Multicians would converge here. I’m not from that era, but I have scoured the literature to collect as much wisdom from past efforts as possible. I regularly dump that info here to try to apply it to modern problems. Forgetting what’s been learned, not applying proven methods, and reinventing the wheel are IT/INFOSEC’s biggest problems.

Although I’ve linked to better systems, I’d still have settled for a modernized MULTICS far more quickly than a modernized UNIX. “Worse is better,” though. (sighs) KeyKOS and System/38 are my favorites of the old systems in sheer terms of what the architectures could accomplish to meet all ends. And they were commercially successful, with one still around to show up mainstream OSes. 🙂

@ Kathryn

“What can we do, as individuals, to encourage software suppliers to create secure products? ”

Buy them, and pay a little more for them. That simple. The tradeoff of secure systems is that they might not support app X, feature Y, or price/performance ratio Z. The market wanted the highest performance, lowest cost, and backward compatibility with inherently insecure stuff. Vendors that made good stuff mostly went out of business (or out of that business), with the AS/400 being the sole survivor outside defense systems. Its security got watered down a bit, too, while its functions were expanded and its name changed to IBM i.

One difficulty in jump-starting this is that the low-volume, niche market makes current offerings a bit expensive. Examples include Sentinel’s HYDRA, Secure64’s DNS on SourceT OS, LynxSecure virtualization, Mikro-SINA VPN, Cryptophone, and so on. Each does quite a bit better than similar products in its security engineering. They’re also going to cost more, with some costing A LOT more. Until the market favors security enough, the combo of high development cost and low sales volume will keep licensing or per-unit prices pretty high. It’s actually an ideal situation for the non-profit or FOSS crowds to take over, but they take similarly insecure approaches as the commercial sector. (sigh)

The best chance is some DARPA-funded academic work being turned into a product. We’ve seen that happen plenty of times. A DARPA-funded secure CPU and OS combo at least lets us build more secure appliances. Sales of those can gradually improve the platform and libraries toward an eventual general-purpose system. Right now, the only one I know making headway on that approach with open source is GenodeOS.

@ UNTER

Good points. Recognizing this problem was the brilliance of the AS/400, Mac OS, and Nitix product lines. They tried to hide as much complexity from users as possible. AS/400 and Nitix were largely self-managing. I still run into AS/400 systems that have been running largely unattended for almost a decade. New OS or security projects must embed security into the architecture such that secure day-to-day use is the default and easy. Combex’s CapDesk was a nice attempt at that.

@ Blair

Singularity has been superseded by VerveOS, whose safety is verified down to the assembler. A project with a similar, probably better, approach to security than Singularity is the JX Operating System. I figure you might like it. Code is available from their web site.

@ lod

You’re seeing the big picture a bit more than some. The actual problem starts with the hardware. I elaborate on that here with specific examples of how to do it better, old work on that, and recent work on it. In a nutshell, the basic constructs with which we build all software shouldn’t be so out of control by default. Make safe or secure the default, and fewer problems follow even with sloppy coders: mistakes become an exception or a crash rather than a code injection.

@ ranz

Schell and especially Karger were great. Yet I think the Burroughs, System/38, and capability designers (esp. KeyKOS) were smarter in the end. Their mechanisms proved to enforce POLA for both business and military needs, with greater efficiency for the better designs. Add easier updates, persistence, and mapping of requirements to model (easy as OOP, really) for a slam-dunk argument. This is despite me spending years using and supporting Schell’s approach. My long-term takeaway from Schell & Karger was their design, implementation, and evaluation approaches (esp. in the Orange Book). The basic foundation they and the rest of the old guard laid down has proven to be something we’re building on to this day. It’s why I tell the young crowd that high-assurance security stands on the shoulders of giants.

Plus, the systems back then were so much more interesting and innovative (see the Flex machine with Ten15). We’re seeing a resurgence of innovation now due to cloud and embedded needs, reinventing much old stuff. It’s getting fun again, but not as secure or reliable. 😉

Note: I’ve strongly thought about seeing if Schell would port GEMSOS to the SAFE or CHERI processors. The result would be a verified kernel, with verified policy enforced at the hardware level, and with over 20 years without a breach. That would make a hell of a marketing pitch, eh? It also wouldn’t have to depend on Intel (and its insecure baggage), given SAFE or CHERI could be put on an FPGA (esp. antifuse) running standalone.

@ Richard Lamson

“One of the security aspects of access control not mentioned above is that there were separate controls for read, write and execute on segments. Typically writable segments were not executable”

The segmented protections of MULTICS and other B3/A1 systems had a strong effect on the resulting security. Recent INFOSEC research has rediscovered that. Secure64’s SourceT leverages similar protection in Itanium’s paging hardware and memory keys. Native Client uses segments, albeit in a weaker way. The most interesting I’ve seen is Code Pointer Integrity, which protects pointers with them.

Even Intel’s literature on Atom processors said segments were 4x more efficient than paging. So: fine-grained protection, high efficiency, and still no adoption by most OS vendors. They could always get rid of the management aspect by building it into their tools and libraries. No real effort, though…

@ Multicians

Thanks for sharing your experiences. They were interesting reads as usual.

Zafer Balkan August 26, 2019 2:42 AM

I have been researching the security of operating systems, starting with “Modern Operating Systems” by Andrew S. Tanenbaum & Herbert Bos and going up to the seL4 microkernel. Your comment made me look at Multics and get in contact with Tom Van Vleck. Thank you indeed.

Regarding the scope of this blog post, may I ask you for a few comments on Multics and the security of OSes today, in 2019?
