A Brief History of Operating Systems

Introduction

Operating Systems (OS) are the layer of software that sits between hardware and higher-level software such as applications and middleware. They help applications and middleware make use of the underlying hardware by managing memory, providing device driver support for hardware components, and providing file systems to store and organize information on disk. While not technically part of the operating system layer, many modern operating systems package some higher-level function such as shells and graphical user interfaces (GUIs), storage compression, firewalls, internet access, and even web browsers. This paper is a brief overview of the key operating systems in the market, their ancestry, their design points, and their differentiating features.

Early computers lacked any form of operating system. The application talked directly to the hardware. As computers advanced and computer manufacturers sought to make them more relevant and easier to program, machines came with relatively simple support code called runtime libraries, which were used to link a user's program to the machine and assist with operations such as input and output. As machines became more powerful, the runtime libraries were assembled into a program that was started before the first customer job, read in the customer job, controlled its execution, cleaned up after it, recorded its usage, and immediately went on to process the next job. These were the first operating systems, and this still describes the basic function of an OS. As noted in the previous paragraph, operating systems evolved, and it became common to bundle generally used utilities and applications with the base OS function. What was originally referred to as an operating system is now generally known as the "kernel", while "operating system" has come to mean the collection of the kernel, GUI, and utilities.

Early operating systems were very diverse … This state of affairs continued until the 1960s, when IBM developed the System/360 series of machines, which all used the same instruction architecture.

Hardware

IBM as we know it today would not exist but for our greatest feat of innovation to date: a true "bet the company" gamble over four decades ago that produced the System/360, a series of machines that provided a common instruction set in various implementations spanning a wide range of performance. It replaced a line of products, each with its own architecture, from the 1401 small business system to the 7090 large scientific computer.

System/360 has evolved over those four decades into today's zSeries server. Many programs running today were written years, or in some cases decades, ago. The architecture has evolved substantially over the years too, with the inclusion of many new instructions, new kinds of I/O support, multiprocessor support, fiber channels, and many other changes.

All this innovation and deployment has taken place while running many of the world’s largest businesses — year after year after year.

IBM has always designed and built the processing units for its mainframe computers; we have to, as no one else has the expertise and resources.

IBM also designs and builds the processing units for the other computers we sell, the ones that don’t use Intel processors.

These processing units are various implementations of the Power architecture, which is based on the work of the legendary "801" project led by John Cocke at IBM Research in the early 1980's. ("801" is the building code for the Yorktown lab.) The 801 project worked out the idea of RISC (Reduced Instruction Set Computing). Previous processing units, including IBM's own System/360, had instruction set architectures in which some operations were quite easy to implement in hardware and others less so. The implementation of these more complex instructions was often done using a special form of processor-level program called microcode.

The key idea behind RISC is to build the fastest possible hardware using very simple instructions, and do all the rest in software.

Chips using the Power architecture and built by IBM are now used in pSeries and iSeries processors, and also in some zSeries attached processors; they are also used in BlueGene, the world's largest supercomputer.

IBM has also developed, jointly with Sony and Toshiba, the Cell architecture. This is a new and revolutionary design that focuses on game and digital multimedia applications; it combines the 64-bit Power Architecture with memory flow control and "synergistic" processors in order to provide the required computational density and power efficiency.

z/OS

IBM's mainframe operating system, z/OS, has a long and successful history. It started as OS/360, supporting the System/360 mainframes in the 1960s. OS/360 evolved in both name and capabilities to become MFT (Multiprogramming with Fixed number of Tasks), MVT (Multiprogramming with Variable number of Tasks), SVS (Single Virtual Storage), MVS (Multiple Virtual Storage), MVS/XA (MVS with Extended Architecture), MVS/ESA (MVS with Enterprise System Architecture), OS/390, and most recently, z/OS.

z/OS is a 64-bit server operating system. It combines the classic functions of MVS (multitasking and virtual memory, allowing different tasks to have different address spaces) with Unix System Services (USS), a UNIX implementation optimized for the mainframe architecture. z/OS has maintained compatibility; for example, programs written in the 1960's can still run under z/OS with no change. USS allows Unix applications from other platforms to run on IBM mainframes, typically with only a recompile being necessary. z/OS supports Java and communicates easily via TCP/IP and the web. A complementary operating system, z/VM (Virtual Machine), improves Linux support on the same system.

z/VM is itself a separate operating system. It was the first true virtual machine system, tracing its roots to VM on the System/360 mainframes. In essence, VM and the mainframe hardware cooperate so that multiple instances of any operating system, each with protected access to the full instruction set, can peacefully and concurrently coexist. VM slices up a single mainframe, dynamically managing workload. Any mainframe operating system can run under VM; thus, one could have hundreds of OS guests running on one physical mainframe. If one guest crashes, it has no impact on any other part of the server. Businesses and governments find VM incredibly useful for software change management and testing. VM has been refined over many decades, and it is unique as a robust, reliable, high-performance, high-end server technology for running enterprise-scale mixed workloads. Fully self-virtualizing processor hardware is essential to VM's capabilities, and that same technology is not found in today's x86 and PowerPC CPUs.

z/OS is IBM's flagship operating system. It supports mission-critical, continuous, high-volume business and government operations with the utmost in security and reliability. Specifically designed for very high availability, security, and mixed workloads, z/OS is the platform that runs most of the world's largest banking, airline, government, and enterprise systems.

In the mid-1960s, IBM designed an alternative to OS/360 for smaller members of the S/360 family. The first ‘VSE’ was Disk Operating System/360 (DOS/360). Over the last four decades, DOS/360 evolved into DOS/VS, DOS/VSE, VSE/SP, VSE/ESA, and most recently, z/VSE.

z/VSE is built on a heritage of ongoing refinement and innovation that spans four decades. It is similar to z/OS, but not as sophisticated or advanced; z/VSE is designed for customers with more basic, less demanding requirements for function and capacity. z/VSE has maintained compatibility; basic VSE programs written in the sixties can often run under z/VSE with little or no change.

Traditionally, a substantial number of VSE customers used VM for additional flexibility and productivity. With the recent availability of Linux on IBM System z, z/VSE customers can use the open, industry-standard capabilities of IBM middleware running on Linux (on the same System z) to build new applications that leverage their existing core VSE applications and data.

i5/OS

i5/OS traces its roots back to the IBM System/3x family of general business computers. The System/3x family was succeeded by the Application System/400 (AS/400), which was based on the IBM System/38 technologies merged with many of the administrative ease-of-use functions of the IBM System/36. The System/38 was the first commercial system to offer automatic data balancing of disk storage, RAID disk protection, an integrated relational database, an object-based operating system, single-level storage, 64-bit addressing (48 in hardware and 16 in microcode), an abstract high-level machine interface, and capability-based addressing. The AS/400 was introduced in 1988 and specifically designed for general business and departmental use. Originally based on a CISC (Complex Instruction Set Computer) architecture with an instruction set similar to that of the System/360 and /370, it was migrated in 1995 to a RISC (Reduced Instruction Set Computer) architecture based on the PowerPC CPU, adding 64-bit hardware addressing support. As part of this evolution it was renamed the iSeries in 2000 and the i5 in 2006, and it is now based on the Power5 processor. It is interesting to note that 128 bits have been reserved for addresses since the System/38, so when the world moves to 96-bit addresses or larger, the i5 is ready to adapt.

i5/OS was designed as a "turnkey" operating system, requiring little or no on-site attention from IT staff during normal operation. Many large companies lock an i5 in a room at each branch office and administer all of those systems remotely from a central site. The OS has a built-in DB2 database that does not require separate installation and maintenance, and system administration has been wizard-driven for years. i5/OS is also well regarded for its tuned Java implementation, including specific hardware optimizations.

The operating system has built-in subsystems that provide some backward compatibility with earlier IBM general business systems, such as the System/3x line. i5/OS can coexist on iSeries hardware with AIX and Linux. With the logical partitioning support available on the i5, it is possible to have 254 separate operating systems all running at the same time on a single i5 system. You can also plug up to 60 xSeries (PC) servers into the system and run and manage those Windows servers from a single user interface. The "i" could really stand for "integrated" in i5.

The iSeries and i5/OS are popular in the banking and retail industries as well as in enterprise departments and small and medium-sized businesses. The i5/OS systems are very popular outside the U.S., as evidenced by the fact that the system is translated into 51 different languages. It has a strong reputation for being easy to administer and for "just working". Many casinos in the U.S. use an AS/400 or i5: they want you to gamble with your money, but they are unwilling to gamble with it once it becomes theirs.

AIX

AIX (Advanced Interactive Executive) was first introduced in 1986 for use with IBM's RT Personal Computer family of workstations, based on the IBM 32-bit RISC microprocessor named ROMP and its corresponding high-function memory management unit. AIX was created in response to the increasing levels of workstation performance and functionality, to provide users with an operating system as sophisticated as those used in mainframe computers.

The core of AIX was based on AT&T UNIX System V. In addition to System V, AIX included many enhancements generally available in the industry, most notably some features of System V.2 and many from BSD (Berkeley Software Distribution) 4.2 and 4.3, variants of UNIX developed and distributed by the University of California at Berkeley. UNIX was chosen because it provided significant power, supported multiuser capabilities, and was both portable and open-ended.

The generality and portability of UNIX were achieved at some cost in optimal use of the underlying hardware. Rather than rewrite the kernel, the AIX team provided a set of software services for the kernel and modified the kernel and other functions to exploit the facilities provided by that layer. The Virtual Resource Manager (VRM) controlled the real hardware and provided a stable, high-level machine interface to the advanced hardware features and devices. AIX provided facilities to overcome several major deficiencies in the then-current version of UNIX: the lack of virtual memory support exploiting the ROMP hardware features, limited support for code sharing, limited real-time facilities, and limited support for dynamic I/O.

IBM has refined and enhanced AIX throughout the almost two decades that have elapsed since the first release.

AIX pioneered numerous operating system enhancements, many of which were later adopted by other Unix-like operating systems. AIX 5L Version 5.3, released in August 2004, is known for its scalability: it supports up to 64 central processing units and two terabytes of random access memory. The JFS2 file system, first introduced by IBM as part of AIX, supports files and partitions up to 16 TB in size.

Unix

The UNIX operating system was originally created in the 1970's to provide a test bed for computer science experimentation. It differed from the conventional operating systems that preceded it in several key ways. Essentially all of the operating system code was written in C, a language first developed as part of UNIX, to ensure easy portability from one processor architecture to another.

Most of the control structures of the operating system, such as configuration tables, are bound as late as possible. Configuration information is kept in editable files to allow easy modification for experimental purposes.

The file system, often called the heart of the UNIX system, is a tree-structured hierarchy consisting of directories and files. Files are represented as linear byte spaces rather than the records and fields used by other operating systems, including IBM's System/360. Directories are structured files describing files and other directories. In keeping with the objective of portability, most I/O is performed through generic devices that are mapped to real I/O devices by user-replaceable routines called device drivers.
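
To make the byte-stream model concrete, here is a minimal sketch in C, the language of Unix itself. The file name is only an illustration; a device node such as /dev/tty could be opened the same way, which is exactly the point.

    #include <fcntl.h>      /* open */
    #include <unistd.h>     /* read, write, close */

    int main(void)
    {
        /* A regular file and a device look the same to the program:
           just a descriptor from which bytes can be read. */
        int fd = open("/etc/motd", O_RDONLY);      /* illustrative path */
        if (fd < 0)
            return 1;

        char buf[512];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);  /* copy bytes to stdout */

        close(fd);
        return 0;
    }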

Any part of the nucleus of the system (called the kernel) can be modified by an appropriately authorized user. A command-processing component (called a shell) performs parameter substitution and calls appropriate command programs. No real distinction is made between command processors supplied with the operating system and those written by the user, so long as they accept the same invocation parameter conventions, and several shells can coexist in a given system.
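
A toy shell illustrates how little machinery this requires. The following C sketch (error handling kept minimal) reads a command line, forks, and executes whatever program the line names, whether supplied with the system or written by the user; the fork/exec/wait calls are the standard Unix interfaces.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* A toy shell loop: read a command line, split it into words,
       then fork and exec it. The shell makes no distinction between
       commands shipped with the system and commands written by the user. */
    int main(void)
    {
        char line[1024];

        for (;;) {
            fputs("$ ", stdout);
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                        /* end of input */

            char *argv[64];
            int argc = 0;
            for (char *tok = strtok(line, " \t\n");
                 tok != NULL && argc < 63;
                 tok = strtok(NULL, " \t\n"))
                argv[argc++] = tok;
            argv[argc] = NULL;
            if (argc == 0)
                continue;

            pid_t pid = fork();
            if (pid == 0) {                   /* child: become the command */
                execvp(argv[0], argv);
                perror(argv[0]);              /* exec failed */
                _exit(127);
            }
            waitpid(pid, NULL, 0);            /* parent: wait for it */
        }
        return 0;
    }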

UNIX was created at Bell Labs at a time when its parent organization, AT&T, had an agreement with the U.S. Dept. of Justice to confine its business to telephony. Since AT&T could not sell software, it licensed UNIX under liberal terms and provided it in source form.

The most significant difference from ordinary operating systems was the accessibility of all elements in the software to user modification. UNIX provided tools for its own redefinition, making it the most popular operating system in academic computer science. Many of the commands and facilities that were originally developed in the course of computer science experiments found their way into production UNIX systems. This greatly enriched UNIX's functional power, while contributing a certain amount of inconsistency.

Ken Thompson and Dennis Ritchie were the principal authors of Unix. Thompson took a sabbatical leave at the University of California at Berkeley in the late 1970's, laying the foundation for an active research group whose work resulted in the variants of Unix known as BSD (Berkeley Software Distribution). This work was also fueled by a grant from DARPA to extend Unix to support networking, work that resulted in much of the basic infrastructure of the internet as we know it today. All this work was eventually made available under a very liberal license known as BSD. Currently three variants of BSD are widely used: FreeBSD, which emphasizes performance and features on widely used platforms; NetBSD, which emphasizes portability to a wide variety of hardware platforms; and OpenBSD, which focuses on security. BSD variants are used to power many of the largest web sites. Apple's operating system OS X is BSD-based.

Following a settlement with the U.S. Dept. of Justice in the early 1980's, AT&T was able to commercialize Unix, resulting in several releases known as "System V".

Hardware vendors produced multiple Unix variants, each adapted for their own architecture: AIX from IBM, Solaris from Sun, HP-UX from HP, Ultrix from DEC, and Irix from SGI. Each tried to provide its own value-add. This resulted in a fragmentation of the market, creating an opportunity that resulted in Linux, a completely new implementation of Unix in the form of Open Source Software.

The market has since become more unified, and many of the innovations first made available in the proprietary variants are now available in the open variants.

Currently the best answer to "What is Unix?" is that of The Open Group, whose definition is known as the Single UNIX Specification. IBM is a member of this group.

Most Unix programs can be easily ported from one Unix variant to another, and many variants support binary compatibility.
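
As a small illustration of that portability, the following C program uses only POSIX interfaces and should compile unchanged on AIX, the BSDs, Solaris, Linux, and other variants; it simply reports which system it is running on.

    #include <stdio.h>
    #include <sys/utsname.h>   /* uname(), a POSIX interface */

    int main(void)
    {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        /* The same source prints "AIX", "FreeBSD", "Linux", ... */
        printf("%s %s on %s\n", u.sysname, u.release, u.machine);
        return 0;
    }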

Linux

In the words of its primary author, Linus Torvalds, "Linux is a Unix-like operating system, but not a version of Unix. This gives Linux a different heritage than, for example, Free BSD. What I mean is this: the creators of Free BSD started with the source code to Berkeley Unix, and their kernel is directly descended from that source code. So Free BSD is a version of UNIX; it's in the Unix family tree. Linux, on the other hand, aims to provide an interface that is compatible with Unix, but the kernel was written from scratch, without reference to Unix source code. So Linux itself is not a port of Unix. It's a new operating system."

Torvalds began work on the project on a part-time basis in 1991, while an undergraduate student, and has continued to lead it to this day; he now works full-time for the Open Source Development Lab (OSDL) in Portland, Oregon (IBM is a member of OSDL and thus helps support his work).

From that modest beginning fifteen years ago Linux has grown to be a complete, modern operating system with a world-wide following. It is widely used on servers and has become the most widely-used implementation of Unix. Linux has also become the standard platform for developing and deploying open source software.

Many factors have contributed to Linux's success.

Torvalds early on showed a willingness to incorporate fixes and enhancements from others; from that has grown the current development community, numbering in the hundreds, if not thousands.

The project began just as the modern internet began to emerge, enabling a global form of collaboration and distribution at a much lower cost than was previously possible.

From early on the project strived to implement existing standards such as POSIX; it avoided change for the sake of change.

Linux began with a focus on the Intel architecture, making it constantly more competitive with the Unix implementations tied to proprietary architectures by companies unable to keep up with Intel. Linux was, however, ported early on to other hardware platforms, providing the basis for the wide range of platforms supported today. It was also designed in a way that supports a variety of architectures, so today Linux can be found in wrist watches, printers, and supercomputers.

Torvalds used the "tool chain" from the Free Software Foundation (FSF), notably the GCC compiler and the associated compiler and language development tools. Linux used the same license, the GPL, as was used for the tools, but also provided an exception that allowed the development of commercial software that could run under Linux.

More importantly, Linux also completed that "tool chain." Previously, although the FSF tools and utilities could be ported to all the Unix variants (some proprietary, others not), there was no clear variant of choice on which to develop them. But Linux "completed" the stack, so that all the code from system start-up to application development was available in source form.

Since every part was now open, any part could be adapted as needed. This provided a development environment constrained only by the programmer's skills and imagination, so today Linux provides a common platform on which one can write a meaningful application that runs on either a computer costing less than a hundred dollars or a computer costing a hundred million dollars, with little or no change required in the code.

For example, IBM today supports Linux on all its hardware platforms: the iSeries, pSeries, xSeries, and zSeries. Linux is the only operating system that runs on all of our platforms.

Linux is used for the input/output nodes on IBM's own BlueGene supercomputer, and is in the process of being ported to our newest architecture, Cell, developed jointly with Sony and Toshiba.

For most of its history Linux was developed using a traditional product cycle: odd-numbered kernel versions were for development, and once stabilized they became the next even version. For example, stable 2.2 was followed by a series of development 2.3 versions, the last of which became the first of the stable 2.4 versions, and so forth.
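
The convention can be stated in a few lines of C; the version strings below are just examples of the old major.minor.patch numbering.

    #include <stdio.h>

    /* Old kernel convention: an odd minor number (2.3, 2.5) marked a
       development series; an even one (2.2, 2.4, 2.6) a stable series. */
    static const char *series(int minor)
    {
        return (minor % 2 == 0) ? "stable" : "development";
    }

    int main(void)
    {
        const char *examples[] = { "2.2.26", "2.3.51", "2.4.0" };
        for (int i = 0; i < 3; i++) {
            int major, minor, patch;
            if (sscanf(examples[i], "%d.%d.%d", &major, &minor, &patch) == 3)
                printf("%s is a %s kernel\n", examples[i], series(minor));
        }
        return 0;
    }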

More recently, a couple of years were needed to prepare the next stable release. The development process has since evolved, however, so that the kernel is now in a continuous state of development and release.

For example, between the release of 2.6.15 on 3 Jan 2006 and 2.6.16 on 20 Mar 2006, 5734 patches, with a median size of about 2500 bytes, were incorporated into the kernel. So the release cycle for Linux has shrunk to months, while that for Windows has grown and grown.

Currently on any day the equivalent of at least a thousand full-time developers are working worldwide to improve the Linux kernel.

While Linux is a key part of the "open source stack", it is by no means the only part. The "kernel" is the collection of projects directly led by Torvalds. Around that are various closely-tied projects, such as testing and reliability support, known as the "extended kernel". Beyond that are thousands of open source packages: for example, the Apache HTTP server, the Firefox browser, the Perl programming language, and the TeX publishing system used to produce most of the world's technical publications.

A business has grown up around the assembly and testing of the various pieces of the open source stack. Such collections are called "distributions" and the folks who put them together are called "distributors"; all are rooted in some version of the Linux kernel. Red Hat and Novell are commercial companies that produce distributions; Debian is a non-profit project that produces its own distribution.

It is possible to create a meaningful production environment with the kernel and a hundred or so other open source packages. Most common distributions include several hundred more packages, and the most comprehensive support several thousand.

Resources

L. K. Loucks and C. H. Sauer, "Advanced Interactive Executive (AIX) operating system overview," IBM Systems Journal, Vol. 26, No. 4, pp. 326-344, 1987; available as PDF: http://www.research.ibm.com/journal/sj/264/ibmsj2604C.pdf

C. F. Webb and J. S. Liptay, "A high-frequency custom CMOS S/390 microprocessor," IBM Journal of Research and Development, Vol. 41, No. 4; http://www.research.ibm.com/journal/rd/414/webb.html

J. A. Kahle et al., "Introduction to the Cell multiprocessor," IBM Journal of Research and Development, Vol. 49, No. 4/5, 2005; http://www.research.ibm.com/journal/rd/494/kahle.html

Open Source Software (special issue), IBM Systems Journal, Vol. 44, No. 2, 2005.

J. Cocke and V. Markstein, "The evolution of RISC technology at IBM," IBM Journal of Research and Development, Vol. 44, No. 1/2, p. 48, 2000; available as PDF: http://www.research.ibm.com/journal/rd/441/cocke.pdf

Blue Gene (special issue), IBM Journal of Research and Development, Vol. 49, No. 2/3, 2005.

F. P. O'Connell and S. P. White, "POWER3: The next generation of PowerPC processors," IBM Journal of Research and Development, Vol. 44, No. 6, p. 873, 2000.

A. E. Eichenberger et al., "Using advanced compiler technology to exploit the performance of the Cell Broadband Engine," IBM Systems Journal, Vol. 45, No. 1, 2006.

Unix History provides a fascinating display of the history of Unix, showing the extraordinary and continuous evolution of what is the most prolific of operating systems.

C. DiBona et al. (eds.), "Open Sources: Voices from the Open Source Revolution," O'Reilly, 1999.

Notes:

The AIX section is based on an article by Larry Loucks and C. H. Sauer. Larry was the lead architect of the RT system; he is now retired. I met Larry only once, at a meeting of the IBM Academy of Technology in early October of 1998. I think it fair to say this was a key meeting that led to IBM getting involved in Linux and Open Source; I hope to write about it in more detail in the future.

Larry had written a paper about Unix and Linux. He mentioned that one of the appeals of Open Source was that it let you change the system. He recalled that during his early days at IBM back in the 1960’s, at a time when IBM’s code was freely available (we sold hardware, not software), he had a problem at a customer site in South Dakota. He was able to resolve it by patching the operating system, something he could do only because the code was available.

The UNIX section is largely based on an overview of Unix given in the Loucks/Sauer article.

Interesting background on Linux courtesy of Tridge. Tridge reports he made an offer to Linus after he finished his undergraduate work, and “in 91/92 he did some consulting work on the side – working on visual basic stuff under windows I think. When he came to visit Canberra in 93 (I think it was 93?) he had a laptop with him with windows and visual basic installed as he was supposed to be finishing some work on a project.”
