The Multics Operating System

Multics was an operating system developed in the 1960s that had better security than many operating systems today. This article from 2002 discusses Multics security and the lessons learned that are still relevant today.

Posted on September 19, 2007 at 5:44 AM • 47 Comments

Comments

greg • September 19, 2007 6:10 AM

I like the comments about programming languages. Unfortunately, current languages, although an improvement over C and C-style buffer overflows, are not really what I would call "secure" yet. But an improvement, to be sure. It's just hard to write an OS in them with such a large existing code base in C.

Of course there are other ways to avoid overflows. But they are almost never used.

Toby • September 19, 2007 6:15 AM

I would argue that KeyKOS (http://www.cis.upenn.edu/~KeyKOS/) has at least as much to teach us as Multics, if not more.

In particular, one can argue that Multics' identity-based access control scheme inspired the models employed today by Linux and Windows, which have shown themselves to be woeful at preserving the principle of least privilege, which has in turn led to much of the current insecurity.


Ian Miller • September 19, 2007 6:25 AM

Some ideas from Multics were included in VMS.

I particularly noted this from the article "(4) people are unwilling or unable to recognize the compelling need to employ much better technical solutions."

Nostromo • September 19, 2007 6:33 AM

More mindless C-bashing, I see. "a C programmer has to work very hard to avoid programming a buffer overflow error." Using strncat and strncpy instead of strcat and strcpy is "working very hard"?

Of course life is easier with C++ instead of C, but in the end, it's the algorithms and the modularization, not the programming language, that determine the security.
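For example, the bounded variants look like this (an untested sketch; note that even these need care - strncpy does not null-terminate on truncation, and strncat's count is the space remaining, not the buffer size):

#include <stdio.h>
#include <string.h>

int main(void) {
    char dst[16];
    const char *src = "a string that is longer than dst";

    /* Bounded copy: strncpy does NOT null-terminate on truncation,
       so terminate explicitly. */
    strncpy(dst, src, sizeof dst - 1);
    dst[sizeof dst - 1] = '\0';

    /* Bounded append: the count is the space remaining,
       not the total buffer size - an easy mistake to make. */
    strncat(dst, "!", sizeof dst - strlen(dst) - 1);

    printf("%s\n", dst); /* safely truncated */
    return 0;
}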

Baka Zoona • September 19, 2007 7:32 AM

I don't think so. Anyone who received a software tape from HIS during that time frame knows who this is. My name was on the cover page of every printout that accompanied every software product sent to our customers.

Our developers built a backdoor into every OS developed by HIS to let "us" get onto every system (GCOS, ACOS, Multics, etc.).

Ian Mason • September 19, 2007 7:35 AM

@Nostromo "More mindless C-bashing, I see."

I can speak from the perspective of a man who has programmed in C on many operating systems, and in PL/1 on Multics.

The way PL/1 features were used in Multics allowed you to get the effects (and efficiency) of pointer arithmetic without doing any. In that respect the Multics variant of PL/1 was superior to C.

An example is in order.

First off, Multics didn't have files as such. Instead it had named segments that were mapped directly into memory ('initiated' in Multics terminology). Thus accessing a file required you to manipulate the file contents as one large memory buffer. The classic Multics way of doing this was to describe the file as one PL/1 structure (very similar to a C struct).

PL/1 has BASED and REFER as part of its structure definition language. BASED names a pointer variable that points to an instance of the structure.

The following two are equivalent:-

PL/1:
declare fred pointer;
declare 1 a_structure based (fred),
          2 a_number fixed bin (35);

i = a_number;

C:
struct A { int a_number; } a_structure;
struct A* fred;
fred = &a_structure;

i = fred->a_number;

REFER allowed you to set the size of arrays dynamically based on a variable, which could be part of the same structure.

declare file_pointer pointer;
declare 1 file_mapping_struct based (file_pointer),
          2 title_size fixed bin(35),
          2 title char (title_size refer (title_size)) varying,
          2 some_more_sub_structure ...

call hcs_$initiate (">udd>admin>imason>example" ... file_pointer);

From this point on you can access parts of file_mapping_struct directly - PL/1 takes care of all the pointer arithmetic for you. There isn't a C counterexample because C doesn't do this - you can achieve the same end result, but it requires the programmer to do a bunch of pointer arithmetic.
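To illustrate, here is a rough C sketch of that by-hand version (hypothetical layout, untested):

#include <stdint.h>

/* Hypothetical file layout: a 32-bit title length, the title bytes,
   then further fields at a computed offset - the by-hand version of
   what BASED/REFER gives you for free. */
struct file_header {
    int32_t title_size;
    /* title bytes follow immediately, then the rest */
};

/* The programmer, not the compiler, does the pointer arithmetic,
   and nothing checks that the offsets stay inside the mapping. */
static char *title_of(char *mapped_file) {
    return mapped_file + sizeof(struct file_header);
}

static char *after_title(char *mapped_file) {
    struct file_header *hdr = (struct file_header *)mapped_file;
    return title_of(mapped_file) + hdr->title_size;
}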

The combination of BASED and REFER leaves the compiler to do the error prone pointer arithmetic while having the same innate efficiency as the clumsy equivalent in C. Add to this that PL/1 (like most contemporary languages) included bounds checking and the result is significantly superior to C.

The issue isn't merely which string routine you pick from the library but every bit of pointer arithmetic and every array indexing operation being automatically protected in one language and completely unprotected in the other.

(My PL/1 syntax is probably slightly off somewhere; it's 21 years since I wrote a PL/1 program.)

If anyone has any relevant Multics questions I'll try to answer. I'm very probably the only person in this conversation who's ever actually used a Multics machine (as opposed to reading mentions in papers and textbooks).

greg • September 19, 2007 7:50 AM

@Nostromo

More knee-jerk programming language drivel. If it's so damn easy, why are there so many buffer overflow bugs?

I have over 10 years of commercial C and C++ experience. I was not bashing, other than pointing out a limitation or weakness. The ability to do arbitrary pointer manipulation makes these languages insecure by nature. Damn, C is really just syntactic sugar for an assembler.

Nor did I start trumpeting other languages as the perfect solution for every job.

For the record, I use assembler or C when it's the right tool for the job, which these days is less and less often. The last time was programming a microcontroller.

Writing secure code is hard. Writing it in C or C++ is harder.

Andy Dingley • September 19, 2007 7:51 AM

Segmentation is only as good as the segment access control.

Years ago I worked on (ghastly) ITL minis running Modus. This had one of the most rigidly segmented architectures I've ever seen, due to hardware limits rather than security, although this had been "re-engineered by marketing" into a "security" benefit.

Step #2 for any new coder, right after "Hello World", was to learn the "magic poke" that made your process "trusted" and allowed untrapped cross-segment access. Debugging was simply impossible without it. Segmentation was a nightmare on this machine (no data structures bigger than 16k without hard work), but it offered nothing for security.

supersnail • September 19, 2007 7:56 AM

I would seriously question some of the conclusions of this article.

1. Small sample size.
Honeywell claimed MULTICS was running at 80 sites. Say 300 servers, running mostly in secure military environments, mostly back when hackers and script kiddies were stealing apples at kindergarten.

2. PL/1 vs. C.
Most buffer overflows are caused not by C but by careless socket library calls. I see no reason to think PL/1 programmers would be any more careful when processing "receives". And you can cause buffer overflows with VARCHARs: you just stick a larger value in the length field (easily done with pointers!) - see the C sketch below.

3. The conclusion that kernels need "certification".
There is no evidence of any widespread hacks carried out via the distribution of corrupted kernels or corrupted applications. And god knows it would be easy to distribute a hacked "Linux" kernel: just stick it on a Web site and wait for people to download it.

4. The assertion that modern OSes do not distribute security fixes.
My two "home" OSes automatically get security fixes installed (although I retain the option to reject fixes). One OS is the best-selling Windows XP and the other is the ever-popular SUSE Linux, so I think this covers most modern OSes.
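To make point 2 concrete, here is the length-field attack in C terms (a hypothetical sketch of a counted string, untested):

#include <stdio.h>
#include <string.h>

/* A counted ("VARCHAR-style") string: any runtime bounds check
   trusts the length field, so corrupting it defeats the check. */
struct varchar {
    int  len;       /* current length */
    char data[20];  /* real capacity */
};

int main(void) {
    struct varchar v;

    /* Honest use: the length field keeps writes inside data[]. */
    v.len = 5;
    memcpy(v.data, "hello", 5);

    /* The attack: scribble a bigger value into the length field
       (trivial with a pointer), and any code that trusts v.len
       will now happily read or write past the real buffer. */
    int *p = &v.len;
    *p = 1000;
    printf("claimed length: %d, real capacity: %zu\n",
           v.len, sizeof v.data);
    return 0;
}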


Andy Dingley • September 19, 2007 7:57 AM

I've written very little PL/1 (and barely more PL/M) but I've certainly run into buffer overflow problems with it.

Just because the language spec allows structure size limits to be specified doesn't mean that the compiler has implemented such a feature, or that it hasn't been switched off by an optimisation flag. We were developing for tiny embedded systems and our PL/1 compiler implemented a painfully small subset of the overall behemoth language - not guarding against buffer overflows being just one of the more painful memories.

greg • September 19, 2007 8:02 AM

@supersnail

On 4).

But wouldn't it be nicer not to need to download fixes as often as once a week (or more, if you really keep up with the updates)?
And how many fixes introduce new weaknesses?

One thing is clear. The current level of security in modern OSes is keeping a lot of people employed. For the wrong reasons.

Another Kevin • September 19, 2007 8:05 AM

@Ian

As a fellow Multician (actually, I worked primarily over on the GCOS3/GCOS8 side), I quite agree that much of Un*x security has been a clumsy reimplementation of either the Multics ring-and-segment system or the capability-descriptor architecture of GCOS8.

@Toby

Multics access control was *not* solely identity based, although the literature can give that impression. A "userid" was made up of three components: person ID, project ID, and classification level. When I was logged in as "Kenny.G8Arch.u" I had a different set of privileges than I had as "Kenny.SERBoard.u", etc.

Project IDs in practice were more like role IDs - although most of them were "what project are you working on," some were functions like "SysAdmin". Unlike what we have with groups on Unix, a session couldn't have more than one project at a time. There were also nondiscretionary access controls, although the Multicians never figured out how to control the bandwidth of covert channels to the point where the Government was comfortable with multiple compartments on one mainframe. (In Multics's defense, no other vendor did either!)

supersnail • September 19, 2007 8:15 AM

@greg.

Yes, it would be really nice if my home PCs didn't need security fixes once a week.
It would be even nicer if I didn't consume two or three hours of CPU a week on security-related processing (fixes, firewalls, anti-virus, anti-spyware) compared to my meagre minutes of e-mailing and googling.

Windows has probably the best technical security foundations of any modern OS. It was written by ex-DEC/VMS guys who were inspired by the Multics system, among others. The NT security model is technically excellent. The problems arise because in the consumer versions security is turned off by default, and almost every application (e-mail, database, web server, web browser) allows arbitrary pieces of data to be executed as scripts.

greg • September 19, 2007 8:52 AM

@supersnail

A secure OS would not be sensitive to insecure user tools. And configuring those tools with "security" on is often not even possible. That is not secure, or designed with security in mind.

The current crop of OSes just weren't designed from the ground up with security in mind.

Multiplexed • September 19, 2007 9:07 AM

Kudos to Multicians!

(Open)VMS: Still great after all these years, despite the fact that no one wants to use it any more.

Just because Cutler and company did the initial work on Windows NT does not imply that Windows is "secure." The compromises made, mostly at the behest of the people on the business side, are astounding.

So what does that leave us with today? OpenBSD?

Karellen • September 19, 2007 9:18 AM

Hmmm.....reading the article:

2.1 The security in Unix-type OSs is one of their main features.

2.2 The security in Unix OSs is built into all versions.

2.3.1 Sounds like C++ is just as good as PL/I here. Has safe strings, does link-time checking of function arguments due to name mangling, provides rich (checked) access to arrays, etc...

2.3.2 No-Execute - Depends on hardware. On hardware that supports it, Unix *has* supported it for ages. (Why does the author appear to think that contemporary OSs only run on x86?)

However, interesting points on segmented pages and stack direction. I don't know enough to go into more detail about that.

2.4 - Good point. What about the security of non-SE Linux, though? It's not as capable, but it is less complex. Not sure how Multics security rates against standard Unix permissions.

3.2.1 - Depends on development process for Unix system in question. Linux seems fairly resistant to injection of malicious code, as it's been picked up in the past. Can't talk about other unices.

3.2.2 - Many Linux distros use signed archives/repositories to distribute updates which makes this attack hard. Other commercial unices tend to ship via post, which has been faked before, but is not common.

3.2.3 - You need to be root to write to boot sectors on unix systems. If the attacker can run code as root to do this, you've already lost.

3.3 - Unix has always supported remote logging. If the remote logging machine has no ports open apart from one to append logs and a second to read, which should be trivially auditable, and only allows console logins, then log erasure/modification is not possible.


Given that, I don't see how the claim holds up that Multics is so much more secure than a Unix system combined with C++ as the programming language - a pretty contemporary setup.


Anyone care to help me out?

W. Craig Trader • September 19, 2007 9:41 AM

@Ian Mason

I was a Systems Operator on the Pentagon AFDSC cluster from 1986-1988, but I will grant that you probably had more experience programming in the Multics environment than I did.

If I recall correctly, the Multics mail utility could move data from one security context to a less secure context, as long as you had a target account in the less secure context. I demonstrated that problem in the first week of training, but it wasn't on Honeywell's list of things to fix...

joe • September 19, 2007 9:46 AM

This is on Slashdot today: Internet security moving toward whitelists for desktop software.

http://it.slashdot.org/it/07/09/19/0436203.shtml

This looks a lot like the common authentication problem, for software. Someone somewhere has to certify that a software app deserves to be on the white list. There will be pressure for that review to be quick and cheap. And bad guys will be submitting their wares for approval alongside the good guys. The net effect might be to make it harder to deploy good software, while having limited benefit in excluding the bad.

Ian Mason • September 19, 2007 10:44 AM

Coo, looks like there are either:-

1) Lots more Multicians than I thought, OR
2) They are unreasonably concentrated here.

@ W. Craig Trader

Big cock-up if that's the case. From memory, mail used message segments, and message segments were subject to non-discretionary access control just like everything else. The basic non-discretionary access rules said 'you can't write from a higher privilege to a lower privilege', e.g. if your process is running at 'confidential' privilege you can't write to an 'unclassified' segment, but you can read it (write-up, read-down, i.e. the Bell-LaPadula model).
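Schematically, in C terms (a sketch of the model, not Multics code):

/* Schematic Bell-LaPadula rules; levels are ordered. */
enum level { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET };

/* Read down: a process may read objects at or below its level. */
int may_read(enum level process, enum level object) {
    return object <= process;
}

/* Write up: a process may write objects at or above its level,
   so information can never flow downward. */
int may_write(enum level process, enum level object) {
    return object >= process;
}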

@Andy Dingley

We're specifically talking about Multics, and the Multics PL/1 implementation was a very full one. I'll grant that that is rare and most implementations go nowhere near the full language, especially the embedded programming subsets that have been around. In fact the latter are more like "PL/1 for C programmers who prefer PL/1 syntax".

However, the argument is basically that choice of programming language can go a long way to assisting programmers in producing a secure/correct program. That choice must extend to language implementation/feature set where that choice exists.

---
One thing Multics got right was restricting some privileged operations to a dedicated System Security Administrator role that was so restricted that most of the normal day-to-day commands didn't work when you were in that role. This prevented:-

1) The 'doing all your work as root' syndrome,
2) Many trojan attacks simply because the commands and system calls to make them useful weren't available.

Ilya Levin • September 19, 2007 10:57 AM

@Ian Mason

No, you are not the only person here. I was familiar with Multics and have programmed in PL/I on VMS and VM/CMS. I believe there are other people in the comments already as I write this post. But otherwise you are quite right.

Martin • September 19, 2007 11:04 AM

Bringing the Multics folks out of the woodwork: I was a user as a graduate student at MIT. I was impressed by the elegance of PL/I and segmented memory, security rings, etc., visible even from the command line level.

However, compared with CTSS, Multics seemed really slow (for utilities & word processing), and it was pretty expensive compared to a typewriter! I was tempted to walk down to the PDP-8 in the lab or the PDP-1 downstairs.

As I recall those times, security beyond a user password had nearly zero significance to us. (There was no Internet!) What was more a worry was getting your fair share of CPU time and not having to twiddle your thumbs at peak demand times. As if you had hundreds of users on your desktop Linux box today. ;-)

Kathryn • September 19, 2007 1:09 PM

This is why I usually read and don't talk here - the company can get pretty rarefied. I have a basic knowledge of C++, but I'm finding this discussion fascinating.

What can we do, as individuals, to encourage software suppliers to create secure products? These are design choices that are made very early on, pretty deep in the development, and not something that will ever come up in marketing consumer surveys.

It's a great mental workout to discuss this stuff, and wonderful to be able to learn about it, but at the end of the day I need to be able to take action beyond throwing up my hands and trying to pick the best of a bad lot for my meager skill level.

Jim Lippard • September 19, 2007 1:14 PM

Toby: As Another Kevin pointed out, Multics access control lists (ACLs) included more than user-id level controls. The third field wasn't used much, but distinguished interactive users from "absentee" (batch) users, and daemons. ACLs could be of arbitrary length, some file types had "extended ACLs" (like mailboxes, which had add, delete, read, own, status, wakeup, and urgent permissions), and there were per-directory IACLs (initial ACLs), which could also be of arbitrary length - much more powerful than umask. There were also mandatory access controls (known as AIM, the access isolation mechanism), and a key contribution of Multics, the ring mechanism.
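For illustration, an ACL entry paired an access mode with a Person.Project.tag pattern, "*" matching any component; schematically (a hypothetical C sketch, not the real implementation):

#include <string.h>

/* One ACL entry: an access mode string plus a Person.Project.tag
   pattern, where "*" matches anything, e.g. "rw" for "*.SysAdmin.*". */
struct acl_entry {
    const char *mode;
    const char *person, *project, *tag;
};

static int component_matches(const char *pat, const char *name) {
    return strcmp(pat, "*") == 0 || strcmp(pat, name) == 0;
}

/* A user ID matches an entry only if all three components match. */
static int acl_matches(const struct acl_entry *e, const char *person,
                       const char *project, const char *tag) {
    return component_matches(e->person, person) &&
           component_matches(e->project, project) &&
           component_matches(e->tag, tag);
}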

W. Craig Trader: While Multics mailboxes could hold messages at different security levels, you could not see your Top Secret messages while logged in as Secret (no read up) and you couldn't delete your Secret messages while logged in as Top Secret (no write down). I re-implemented the Multics message facility in the mid-eighties, and I'm not aware of any problems that allowed a regular user to copy messages read while logged in as Top Secret to Secret--are you referring to a bug in the old "mail" command?

All: There's a lot more information on Multics at Tom Van Vleck's Multicians website, and I've noticed that a lot of Multics documentation and code has found its way online of late.

Ian Mason • September 19, 2007 2:28 PM

@Kathryn "the company can get pretty rarefied."

Go on, come out and say it. You really mean "old", don't you? ;-)

Valdis Kletnieks • September 19, 2007 4:27 PM

@supersnail:

"1. Small sample size. Honeywell claimed MULTICS was running at 80 sites. Say 300 servers running mostly in secure Military environments, mostly when hackers and script kiddies were stealing apples at kindergarten."

On the other hand, some of the people who *were* doing security at the time were pretty sharp. For instance, read Ken Thompson's famous Turing Award lecture "Reflections on Trusting Trust" (http://www.acm.org/classics/oct95/), which references "an unnamed Air Force document" - in fact the original Karger & Schell Multics penetration test report that the "30 Years Later" retrospective refers to....

Ian Mason • September 19, 2007 4:58 PM

Also on the "small sample size" issue:

You must also remember Multics started back in the days of mainframes, when computers were rare, expensive beasts. IBM's System/360 was introduced in the same year that work on Multics started. Eighty sites was a fair number for this kind of big iron. Plus each site supported hundreds or thousands of users. So as a proportion of the installed base of all computers, 80 sites was not insignificant.

Furthermore, I'd bet that citations for "Multics" in peer-reviewed journals outrun citations for "Microsoft Windows".

UNTER • September 19, 2007 7:37 PM

"To make matters worse, everyone is now expected to be a system administrator of their own desktop system, but without any training! "

I think they underemphasize that line. In the world of the early '70s, there was a high ratio of "genius" to "user". Administration was being done by some of the best in the world, with relatively few users, particularly low-expertise users. They could focus on the technical issues of security.

It is impossible to even begin to deal with security issues when very few systems administrators are any good, when even the good ones have relatively little authority within their organizations, and when the user base is composed of anyone who can point with a mouse. A great deal of administrative responsibility is then placed on mouse-clickers.

In that environment, how can you implement security? I don't trust my own IT department (with good evidence), but my colleagues don't even know that there exists a security and infrastructure problem (they are often one and the same). And this isn't even my area of expertise - just a side-line so my work doesn't get destroyed.

Technical issues are greatly submerged by administrative issues. Not only can we go thirty years without significant kernel/OS reform, I can guarantee that we will! Ubiquitous computing will bury that layer under more pressing issues of simple competence.

Lawrence D'Oliveiro • September 19, 2007 9:29 PM

I don't accept most of their claims that MULTICS is/was somehow inherently more secure than today's Linux/BSD systems. I don't think they ever got as far as implementing TCP/IP, let alone more complicated things like SMTP, HTTP, Kerberos and so on--protocols whose implementations did suffer from vulnerabilities. If they had to implement those in PL/I, they would have had just as many vulnerabilities.

They disparage SELinux because it's bigger than their original kernel. Yet they don't say how big their own "Kernel Design Project" would have got if it hadn't been stopped.

And they never had anything like SSL or SSH, mutually authenticating between secure systems over an insecure network. That's something we take for granted today.

lod • September 20, 2007 1:43 AM

I find it fascinating to read all these computer security discussions based around multi-user systems and protecting different users from each other. This is an issue for big servers, but it's a fairly small one, and I suspect it will become basically irrelevant as virtual machines become more popular.

The problem I see is one of malicious applications rather than security levels. I really don't give two hoots if one of my computers gets compromised and the attacker gets root access. I can just pull down the computer and reinstall the lot in a few hours. The far bigger issue for me is my documents and data, and an attacker only needs my regular user permissions to attack them.

On a single-user computer I'm concerned about my email program, my web browser and my online game. Anything with online connectivity can be used to corrupt or delete all my documents. SELinux is promising as an improvement in this area, but it doesn't look like Multics had anything to offer in this regard.

For now the best protection seems to be regular backups, but it's becoming harder as hard drive size increases outpace backup media size.

js • September 20, 2007 5:19 AM

Unrelated to the content, but wow. The squished font in that PDF is *really* hard on the eyes.

Tom Van Vleck • September 20, 2007 10:27 AM

I worked on the security internals of several operating systems, including Multics and Unix-derived systems. Multics was more secure. Not perfect, but better than the systems in common use today. Systems like L4.verified and Aesec seem like an interesting next step.

PL/I is a better language than C or C++ for writing secure code. Again, not the ultimate. Because the compiler and runtime were aware of the declared size of string objects, some kinds of overflows were much less likely.

Security has three parts. First is the user interface, like ACLs and mandatory controls, capabilities, etc. Second is the reference monitor, the underlying machinery that enforces the controls. The Multics reference monitor was small, simple, and efficient, as a result of design, language support, hardware support, and implementation practices. Third is user behavior: the most secure system in the world will fail if a user executes code from strangers... and this is the most common current problem.

I think I will add some of the incorrect remarks here to http://www.multicians.org/myths.html which discusses a lot of Myths About Multics. Facts:
- Multics had one of the first TCP/IP implementations. It ran in a less privileged ring than the kernel.
- My user registration code in Multics would not support "hundreds of thousands of users" on a single site. Maybe 1500 registered users.

Multicians are encouraged to register at and contribute to http://www.multicians.org/

Key Stroker • September 20, 2007 3:20 PM

@Kathryn

"This is why I usually read and don't talk here - the company can get pretty rarefied. I have a basic knowledge of C++, but I'm finding this discussion facinating."

I'll second that. When I was taught (a little) programming, security was not on the syllabus.

@Bruce

This is one of the most interesting and thought-provoking articles you've posted in a while. It makes me remember my student days and confirms my deep scepticism about modern systems (not just IT systems). More, please!

ranz • September 20, 2007 10:16 PM

Jim, Tom, good to hear you again...

I too was a Multics "hardcore" engineer. The hardcore group had responsibility for the bowels of the OS. I also had the privilege of working with Roger Schell while I was in the military and later in the civilian world. Roger, one of the world's pre-eminent uber-masters of computer and network security, is still at it, at the helm of Gemini Computers in Pacific Grove, CA. They design VPN solutions on the A1-rated GEMSOS kernel. Roger's work on the Multics kernel project earned him a PhD from MIT in the late 1970s.

Multics was, and still is, more secure than any Unix-based solution. Tom Van Vleck pointed out some of the conceptual differences above. To dig in a little deeper you must look at the Multics marriage between hardware and the OS. Although today's processors have ring structures available (I remember those discussions when Intel visited CISL), OSes rarely if ever make use of them. If a System Administrator changed your access to an object, the change occurred almost instantaneously for all users, because a hardware fault was generated that required each process to revalidate its access to the object before any further references. These things (just like ACLs, MAC, privilege rings) were not add-on products as in Unix, but were fundamental to the design from day one (actually MAC was a major OS redesign in the early years of the project). The processor was specifically built for the OS and the OS was specifically designed and built to use the security features of the hardware. One could not exist or be secure without the other.

The claims that Multics is secure are substantiated by a grueling two-year evaluation project by the Computer Security Center at the NSA. Multics was awarded a B2 security rating in 1985.

I, like most Multicians I'm sure, have often wondered how, had it continued to be funded, the OS would have translated to the distributed computing environment we have today. System administration was difficult because you had to understand the security policy being enforced and know how to use the tools at your disposal to effect that enforcement. I don't think I've ever seen a cogent description of these issues for Windows. -ear

Benjamin Random • September 21, 2007 11:01 AM

@Kathryn: As individuals, the choices we make about which products to buy are one of our most powerful methods of communication. I believe that increasing heterogeneity will increase general security. So there it is - always go for the underdog!

Scott • September 21, 2007 11:11 PM

@lod

You may not give two hoots if your box is rooted.

But I care if your box is rooted, because the botnet it joins is a hassle to _me_. (Whether or not you are a good admin and restore it promptly is not the point. There are plenty of bad admins out there.) The point is that it is not enough to say "My data is safe."

Corruption or destruction of data is just _one_ thing which must be defended against.

J. Spencer Love • September 22, 2007 5:22 PM

Lawrence D'Oliveiro notes that he didn't think Multics had TCP/IP, much less "more complicated" things like SMTP. His "don't accept" of old-timers' claims about Multics really reads more like "reject". If so, it's from a position of ignorance.

In the fall of 1982, just about 25 years ago, I received some prototype code from Michael Greenwald, then a grad student of David Clark's at MIT, and was told I had to turn this (well, more like implement using that and the RFCs as learning aids) into a secure and adequately-performing service in time for the cutover of the ARPAnet from NCP to TCP/IP, which was scheduled for January 1st, 1983.

MIT-Multics was ready for the cutover on that date, but the VAX implementation (not from DEC, as I recall) was not widely available until October 1983, which is when the cutover became reality. My first TCP/IP ran in ring 3, which was protected from users (in ring 4), but the OS (in rings 0 and 1) was protected from it.

Speaking of "adequately performing," although most of this was written in PL/1, I wrote a utility in assembler to rapidly convert between 8 9-bit Multics characters in a 72-bit doubleword and 9 8-bit bytes in the same doubleword for binary FTP transfers. The PL/1 compiler's output for this task did not perform well.
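In C, the conversion is a straightforward bit-shuffle over the same 72 bits; something like this (an illustrative sketch, obviously nothing like the original assembler):

#include <stdint.h>

/* Repack 8 nine-bit Multics characters (each in the low 9 bits of a
   uint16_t) into 9 eight-bit bytes covering the same 72-bit span. */
void pack_9to8(const uint16_t chars[8], uint8_t bytes[9]) {
    uint32_t acc = 0;       /* bit accumulator */
    int bits = 0, out = 0;  /* valid bits in acc, output index */

    for (int i = 0; i < 8; i++) {
        acc = (acc << 9) | (chars[i] & 0x1FF); /* shift in 9 bits */
        bits += 9;
        while (bits >= 8) {                    /* drain full bytes */
            bits -= 8;
            bytes[out++] = (uint8_t)(acc >> bits);
        }
    }
}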

During the first 9 months of 1983, I was assigned to work with Honeywell to develop a multi-level secure TCP/IP, which ran in ring 1 (where non-discretionary access control was largely implemented, not just in the ring 0 kernel), and the code we produced ran at the Pentagon on a LAN connecting several (four?) Multics systems. I never saw those systems, but my understanding is that Secret and Top-Secret data shared that network. Multics supported multiple compartments (as well as levels), but I don't know how many it was trusted with on the same network.

SMTP was widely used before TCP/IP and DNS existed. There was no security in that protocol as specified, although optional authentication was added fairly early.

Even the final generation of Multics CPUs was only capable of a few (6-8) MIPS (each, although a system could have 4 or more). Processors we have now are (no exaggeration) hundreds of times faster. Network encryption, if used at all, was handled by dedicated processors in external boxes, typically at the link level. Mail within a Multics system was pretty secure; network mail was a different critter entirely. Secure file transfer involved encrypting a file before launching your FTP client.

Other services like Telnet and FTP also existed long before TCP/IP did. The ARPAnet was a going concern in 1973 when I arrived at MIT as a freshman and discovered Multics. I had already moved on to other projects by the time DNS was deployed. Multics hardware development ceased in the mid 1980s, but software development continued (in some form) into the 1990s. A web browser for Multics seems unlikely, but something of the sort may have been written. (Think lynx? Whatever happened to gopher?)

ranz • September 24, 2007 8:26 AM

And to think, all this was going on before anybody heard of Al Gore. Unless he was a grad student rooming with Greenwald, that is. -ear

Richard Lamson • September 27, 2007 8:22 PM

One of the security aspects of access control not mentioned above is that there were separate controls for read, write and execute on segments. Typically writable segments were not executable; certainly temporary storage segments such as the PL/I stack and allocation areas were not. Executable program segments were pure code and not writable, i.e., no program would or could write into itself. Thus, even if there were an out-of-bounds error in a program, writing into that part of memory could not corrupt executable code, so it was not possible to use buffer overflow errors as a security back door. This kind of hack is the secret to most of today's buffer-overflow security problems.
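The same idea survives today as W^X: a page is writable or executable, never both at once. A minimal POSIX-style sketch (using mmap/mprotect as modern stand-ins for segment access bits):

#include <sys/mman.h>
#include <string.h>

/* Map a page writable (but not executable), fill it, then flip it
   to executable (but not writable) - code can never modify itself. */
void *load_code(const unsigned char *code, size_t len) {
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return NULL;
    memcpy(page, code, len);
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)
        return NULL;
    return page;
}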

By the way, the person who said IP was not implemented on Multics was dead wrong. Telnet/FTP/SMTP were first implemented over NCP (the pre-TCP/IP ARPAnet protocol) and reimplemented several times. I personally implemented a Telnet and FTP server for both TCP/IP and Chaosnet; they worked by being data pipes to a pseudo-tty, so when you logged in it was just like coming in from a normal terminal, and went through the same validation by the Initializer Daemon as a normal terminal; in short, it was not a source of access control problems. I suppose if MIME had been implemented someone could have sent Multics executables, which perhaps some foolish person would have been willing to run, but they, too, would have been subject to the access constraints of the user doing so, and would not have caused catastrophic system problems (although the user him/herself might have been pretty upset). But that would be no different from running, say, the 'cookie' command that Chris Tavares wrote, which was a little Trojan horse hack that would set a timer, and every few minutes your process would wake up and demand a cookie.

Jim Wilcoxson • September 25, 2008 8:55 PM

@supersnail:

An advantage of PL/I over C when it comes to buffer overflows is that when passing arguments to external procedures, the compiler also passes "dope vectors" - small data structures that describe each argument.

So, if you pass a char(20) string to an external procedure, the external procedure would declare it as char(*), meaning "I don't know how long this is". If you try to write 100 characters into this argument, the PL/I runtime would *automatically* chop it to 20 characters and prevent a buffer overflow. And if you asked for length(arg), it would return 20, even though you declared it as char(*). It truly was impossible to have a buffer overflow unless the programmer did something like a pointer overlay, i.e.:

dcl arg char(*),
    overwrite char(100) based,
    p pointer;

p = addr(arg);
p->overwrite = 'blahblah...'; /* a big string */


Now, you could declare the argument as char(100) in the external procedure instead of char(*), and that probably would cause an overflow. PL/I would allow you to do this for efficiency, since it wouldn't have to access and calculate string lengths on assignments. But PL/I does have a very elegant mechanism for handling varying-length data that C simply doesn't have.

This mechanism is completely separate from PL/I varying-length strings - char(20) varying. For these strings, a length field is carried around at the beginning of the string so that the programmer can always access the current length of the string. For example, if 'abc' is assigned to a char(20) var string s, then length(s) = 3. If 'abc' is assigned to a char(20) string s, then 'abc' is copied followed by 17 spaces. length(s) will always be 20. When char var strings are passed as arguments, the external procedure would declare them as char(*) var, and buffer overflows were still impossible because the PL/I runtime would truncate any assignments that were too big, just like with the char(*) case.

Another nice feature of PL/I is that if subscript checking was enabled, an external procedure with an array declared as:

dcl a(*) fixed bin;

meaning "I don't know how big the array is", could still be accurately range-checked. The PL/I runtime would access the dope vector to see how big the array really was to verify whether a subscript was out of bounds.

You can also continue to pass arrays and strings with unknown bounds (declared with *) to more external procedures, and the dope vectors are carried along at each level so that range checking works no matter the call depth.

Sure, it is much harder to implement all of this in the PL/I runtime, which is why C doesn't do it. But it's a shame some of the good aspects of PL/I were not carried forward into C.
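What C programmers sometimes do by hand to approximate a dope vector is pass a small descriptor alongside the data; a rough sketch (hypothetical, not PL/I's actual layout):

#include <string.h>

/* A hand-rolled "dope vector": the callee learns the real bounds
   from the descriptor instead of trusting its own declaration. */
struct char_desc {
    char   *data;    /* the storage */
    size_t  max_len; /* declared capacity, like char(20) */
    size_t  cur_len; /* current length, like char(20) varying */
};

/* Assignment that truncates instead of overflowing, much as the
   PL/I runtime does for char(*) arguments. */
void desc_assign(struct char_desc *d, const char *src, size_t n) {
    if (n > d->max_len)
        n = d->max_len; /* silently chop, like PL/I */
    memcpy(d->data, src, n);
    d->cur_len = n;
}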
