The SIGOPS Hall of Fame Award was instituted in 2005 to recognize
the most influential Operating Systems papers that were published at
least ten years in the past. For an in-depth discussion of the SIGOPS
policy for this award, please read Jeff Mogul's proposal
for the SIGOPS Hall of Fame Award.
Note: SIGOPS is in the process of revising the Hall of Fame
Award selection procedure.
SIGOPS members can send in their nominations by e-mail
to HOFnominations (at)
sigops.org. The Hall of Fame Award Committee will choose which
nominated paper wins the award. The decision will be based on a
discussion that considers the impact that the paper (and, more
generally, the research described in it) has had on the field of
operating systems research. The Award committee will prepare a short
statement that describes why the paper was selected.
The award winners will be announced at the SOSP or OSDI conference
by the current program chair. The program chair will read the
statement prepared by the Award committee that describes why the paper
was selected. The authors of the award winning paper will be given a
plaque, naming the paper, the authors, the conference or journal the
paper appeared in, and the conference in which the award was
made. This plaque will be signed by the program chair and the
current chair of SIGOPS. A list of the winners of the award will be
maintained on the SIGOPS website.
The Hall of Fame Award Committee consists of past program chairs from
SOSP, OSDI, EuroSys, past Weiser and Turing Award winners from the
SIGOPS community, and representatives of each of the Hall of Fame
Award papers. Five members of the committee, chosen to be free of
conflicts of interest with the possible award winners, make the final
selection.
R. A. Meyer and L. H. Seawright.
A virtual machine time-sharing system.
IBM Systems Journal 9(3), September 1970, 199-218.
This paper described the second generation of the very first virtual machine system. It was originally built in 1966 for an IBM 360/40 with custom virtual memory hardware and then ported to a 360/67, which had virtual memory built in. In addition to a virtual machine monitor called CP, the system included a single user interactive system called CMS, heavily influenced by MIT's CTSS; to support multiple users the system ran CMS in a separate VM for each user. Because of the clean architecture of the 360, CP could virtualize the
hardware perfectly (except for timing dependencies and self-modifying channel programs) without binary translation, though it did have to translate the channel programs. It could run most of the existing IBM operating systems in virtual machines. CP/67 evolved into VM/370, which became the main time-sharing system for IBM mainframes.
David L. Parnas.
On the criteria to be used in decomposing systems into modules.
Communications of the ACM 15(12), December 1972, 1053-1058.
This paper introduced a technique for decomposing a complex system into modules. Through a simple example it showed that a modularization that emphasizes what is now known as "information hiding" is superior to more obvious module decompositions in terms of the software engineering lifecycle. The paper argues that this beneficial decomposition can be achieved with minimal performance overhead. The "information hiding" approach has influenced software engineering in areas including operating systems, distributed systems, databases, and programming languages.
H. T. Kung and John T. Robinson.
On optimistic methods for concurrency control.
ACM Transactions on Database Systems (TODS) 6(2), June 1981, 213-226.
This paper introduced the notion of optimistic concurrency control, proceeding with a transaction without locking the data items accessed, in the expectation that the transaction's accesses will not conflict with those of other transactions. This idea, originally introduced in the context of conventional databases, has proven very powerful when transactions are applied to general-purpose systems.
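The read/validate/write structure at the heart of the paper can be sketched in a few lines of Python. This is a much-simplified, single-threaded form of the paper's serial validation, with illustrative names (`Store`, `Txn`) that are not from the paper:

```python
# Sketch of optimistic concurrency control: a transaction runs in a
# read phase (buffering its writes privately), a validation phase
# (checking for conflicts with transactions that committed since it
# started), and a write phase (installing its updates if it validated).

class Store:
    def __init__(self):
        self.data = {}
        self.committed = []   # (transaction number, write set) pairs
        self.next_tn = 0      # transaction-number counter

    def begin(self):
        return Txn(self, self.next_tn)

class Txn:
    def __init__(self, store, start_tn):
        self.store, self.start_tn = store, start_tn
        self.reads, self.writes = set(), {}

    def read(self, key):
        self.reads.add(key)
        return self.writes.get(key, self.store.data.get(key))

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Validate: no transaction that committed after we started
        # may have written anything we read.
        for tn, wset in self.store.committed:
            if tn >= self.start_tn and wset & self.reads:
                return False  # conflict: abort; the caller may retry
        # Write phase: install our updates.
        self.store.data.update(self.writes)
        self.store.committed.append((self.store.next_tn, set(self.writes)))
        self.store.next_tn += 1
        return True
```

Two concurrent increments of the same item illustrate the idea: both proceed without locks, the first commit succeeds, and the second fails validation and must retry.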
Marshall K. McKusick, William N. Joy, Samuel J. Leffler, and Robert S. Fabry.
A fast file system for UNIX.
ACM Transactions on Computer Systems (TOCS) 2(3), August 1984, 181-197.
This paper introduced techniques to make the file system "disk aware", thus demonstrating the importance of understanding the interplay between hardware technology and file-system design. The structuring concept of the cylinder group, while simple, is found in some form in many current systems (including the widely-deployed Linux ext* family) and serves as an excellent example of the
importance of locality in storage. The paper also introduced numerous functionality improvements, including symbolic links and atomic rename, which have since become commonplace features in modern file systems.
James J. Kistler and M. Satyanarayanan.
Disconnected operation in the Coda File System.
ACM Transactions on Computer Systems (TOCS) 10(1), February 1992, 3-25.
This paper was the first to describe the use of caching to provide availability, in addition to improved performance, in a distributed setting where clients use files stored at remote file servers, leading to potential loss of service during disconnection. The Coda design provided a thoughtful and elegant approach to supporting continued service during disconnection. Disconnected clients continued to service user requests using locally cached content; however, all potential modifications performed while disconnected were logged locally, and when service was restored the system attempted to reconcile the local modifications with the current server state. The Coda design inspired much follow-on research on distributed file systems, and its techniques were adopted in other systems.
Maurice Herlihy and J. Eliot B. Moss.
Transactional memory: architectural support for lock-free data structures.
In Proceedings of the 20th annual international symposium on computer architecture (ISCA '93), May 1993, 289-300.
This paper introduced transactional memory, an architectural concept intended to make lock-free synchronization as efficient and easy to use as conventional techniques based on mutual exclusion. This concept has found its way into commercial multicore processors, and has generated a large amount of follow-on work in software transactional memory.
Robert Wahbe, Steven Lucco, Thomas E. Anderson, and Susan L. Graham.
Efficient software-based fault isolation.
In Proceedings of the 14th ACM Symposium on Operating Systems Principles (SOSP '93). December 1993, 203-216.
This paper demonstrated that compiler or code-rewriting techniques could isolate untrusted code modules, preventing them from writing or jumping to addresses outside their "fault domain", without the overhead of crossing hardware-enforced address space boundaries, and without much increase in execution time of code within a domain. The paper inspired substantial subsequent research, and the
basic techniques have been implemented in widely-deployed software, such as Web browsers.
D. B. Terry, M. M. Theimer, Karin Petersen, A. J. Demers, M. J. Spreitzer, and C. H. Hauser.
Managing update conflicts in Bayou, a weakly connected replicated storage system.
In Proceedings of the 15th ACM symposium on Operating Systems Principles (SOSP '95), December 1995, 172-182.
Bayou is a replicated storage system that anticipated the world of numerous small mobile devices executing collaborative applications over unreliable networks. The paper describes a client-server storage structure supporting eventual consistency, anti-entropy protocols, disconnected operation, log-based recovery, and an application-centered approach to detecting and resolving update conflicts to arrive at consistent replicas. These concepts were backed up by a prototype implementation, two applications, and a simple performance evaluation. Bayou is still relevant to the problems faced by, and the solutions employed by, a large number of today's modern applications.
Jochen Liedtke.
On micro-kernel construction.
In Proceedings of the 15th ACM symposium on Operating Systems Principles (SOSP '95), December 1995, 237-250.
This paper presented the core design ideas behind the L4 microkernel, especially the minimality principle, which states that functionality must only be implemented inside the kernel if moving it outside would prevent the implementation of required system functionality. This principle was at the heart of L4's design, and supported a ruthless performance focus, which allowed L4 to outperform other microkernels by an order of magnitude. The core ideas of this paper led to a family of L4 microkernels which were commercially deployed on a large scale, and eventually enabled unprecedented assurance through formal verification.
Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, and Hari Balakrishnan.
Chord: A scalable peer-to-peer lookup service for Internet applications.
In Proceedings of the 2001 conference on Applications, technologies, architectures, and protocols for computer communications (SIGCOMM '01), 2001, 149-160.
This paper introduced a novel protocol that enables efficient key lookup in a large-scale and dynamic environment; the paper shows how to utilize consistent hashing to achieve provable correctness and performance properties while maintaining a simplicity and elegance of design. The core ideas within this paper have had a tremendous impact both upon subsequent academic work as well as upon industry, where numerous popular key-value storage systems employ similar techniques. The ability to scale while gracefully handling node addition and deletion remains an essential property required by many systems today.
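The consistent-hashing idea at Chord's core can be sketched as a toy, single-machine ring. Names here (`Ring`, `successor`) are illustrative, not Chord's API, and real Chord resolves lookups in O(log N) hops via finger tables rather than scanning a local list:

```python
# Toy consistent-hashing ring: keys and nodes share one hash space,
# and each key is owned by the first node clockwise from its hash
# (its "successor"). Adding or removing a node only moves the keys
# adjacent to it, which is the graceful-scaling property.
import bisect
import hashlib

def h(s, bits=16):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** bits)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def successor(self, key):
        ids = [p for p, _ in self.points]
        i = bisect.bisect_right(ids, h(key)) % len(self.points)
        return self.points[i][1]

    def add(self, node):
        bisect.insort(self.points, (h(node), node))

    def remove(self, node):
        self.points.remove((h(node), node))
```

A quick check of the node-addition property: after adding a node, the only keys whose owner changes are those now owned by the new node, and removing it restores the original assignment.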
Carl A. Waldspurger.
Memory resource management in VMware ESX server.
In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI '02), December 2002, 181-194.
This paper introduced elegant and effective techniques of hypervisor memory management. Memory ballooning allows the hypervisor to reclaim memory from a virtual machine in accordance with the unmodified guest's operating system policies. Transparent page sharing supports efficient memory use with small overhead. The combination of active memory estimation, idle memory tax, and proportional fair sharing, along with admission-controlled memory reservation, provides the basis for service level agreements and reasoned overcommitment. This paper has been highly influential; many of its techniques have been adopted by widely-used hypervisors.
George W. Dunlap, Samuel T. King, Sukru Cinar, Murtaza A. Basrai, and Peter M. Chen.
ReVirt: Enabling intrusion analysis through virtual-machine logging and replay.
In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI '02), 2002, 211-224.
The paper demonstrated that the execution of an arbitrary program inside a virtual machine can be replayed deterministically and efficiently. Originally intended primarily as a tool for intrusion analysis, record-and-replay has been used subsequently for debugging, fault-tolerance, to audit program executions, and other virtual machine services. The work has directly influenced commercial products and sparked a research area that continues to this day.
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung.
The Google file system.
In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP '03). 2003, 29-43.
This paper presented an effective design for a large-scale distributed file system that provided fault tolerance while running on inexpensive commodity hardware. It provided very large amounts of I/O bandwidth by having all data transfers happen directly between client processes and machines storing the actual data, handled automatic recovery from failed disks and machines, and retained the simplicity of managing file system metadata in a single centralized master. GFS formed the basis for the design for the open-source HDFS system, as
well as the backbone for the evolution of large-scale distributed file systems at Google and elsewhere.
Jeffrey Dean and Sanjay Ghemawat.
MapReduce: simplified data processing on large clusters.
In Proceedings of the 6th Conference on Symposium on Operating Systems Design & Implementation (OSDI '04), 2004, USENIX Association.
The paper proposed a simple yet highly effective approach for processing large data sets in a scalable and fault-tolerant manner. An impressive aspect of the design is its simplicity: it elegantly captures a common pattern that solves two critical problems faced by many developers today (scalability and fault tolerance), while still retaining a clean, easy-to-use interface that supports a
wide range of applications. The impact of MapReduce has been huge. It is widely used in industry, with virtually every large company running MapReduce. In a sign of great system design, developers have adopted MapReduce for many use cases beyond its original goals, and it has inspired many follow-on systems.
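The programming model itself is tiny, which a single-process sketch makes clear. Function names here are illustrative; the real system distributes the phases across machines and re-executes failed tasks for fault tolerance:

```python
# Single-process sketch of the MapReduce pattern: the user supplies
# map() and reduce(); the framework handles grouping (the "shuffle").
from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    # Map phase: each record yields intermediate (key, value) pairs.
    groups = defaultdict(list)
    for record in inputs:
        for k, v in mapper(record):
            groups[k].append(v)   # shuffle: group values by key
    # Reduce phase: combine all values for each key.
    return {k: reducer(k, vs) for k, vs in groups.items()}

# The canonical word-count example from the paper:
def wc_map(line):
    for word in line.split():
        yield word, 1

def wc_reduce(word, counts):
    return sum(counts)
```

For example, `map_reduce(["a b a", "b c"], wc_map, wc_reduce)` counts word occurrences across the input lines.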
- Daniel G. Bobrow, Jerry D. Burchfiel, Daniel L. Murphy, and Raymond S. Tomlinson. Tenex, A Paged Time Sharing System for the PDP-10. Communications of the ACM 15(3), March 1972.
The Tenex system pioneered many ideas that are prominent in modern operating systems. It included one of the first page based memory systems, copy on write sharing, mapping of files into virtual memory, and user/group/other file protection. It also had mnemonic commands with command-line completion and automatic file versioning. As one reviewer said, "Reading it now, I'm pleasantly surprised by how much is familiar --- thanks to its successors."
- Joel Bartlett. A NonStop Kernel. In Proceedings of the Eighth ACM Symposium on Operating Systems Principles (SOSP '81), Pacific Grove, California, December 1981.
Tandem was the first commercial database to achieve fault tolerance. To accomplish this, the Tandem system had to bring together many techniques --- including message-passing, mirroring, fast failure detection, and failover --- into a practical design and implementation.
- K. Mani Chandy and Leslie Lamport.
Distributed Snapshots: Determining Global States of a Distributed System.
ACM Transactions on Computer Systems 3(1), February 1985.
This paper takes the idea of consistency for distributed predicate evaluation, formalizes it, distinguishes between stable and dynamic predicates, and shows precise conditions for correct detection of stable conditions. The fundamental techniques in the paper are the secret sauce in many distributed algorithms for deadlock detection, termination detection, consistent checkpointing for fault tolerance, global predicate detection for debugging and monitoring, and distributed simulation.
- Kenneth P. Birman and Thomas A. Joseph.
Exploiting Virtual Synchrony in Distributed Systems.
In Proceedings of the Eleventh ACM Symposium on Operating Systems Principles (SOSP '87), Austin, Texas, November 1987.
This paper describes a methodology for building distributed applications comprised of multiple components, each realized by a group of replicated servers. It defines a number of group communication primitives and then ties fault notification into the fabric of group services by introducing the virtual synchrony principle, which orders communication and fault notifications consistently among group members and across multiple groups.
- Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti and Frans Kaashoek.
The Click Modular Router.
ACM Transactions on Computer Systems (TOCS), 18(3), August 2000.
Click defines a simple, modular, and efficient framework for constructing network routers with different services and properties. Since this paper's publication, Click has been an essential tool for the networking and systems research communities with dozens and perhaps hundreds of systems and papers built on it, including several commercially successful systems.
- Brian M. Oki and Barbara H. Liskov. Viewstamped
Replication: A New Primary Copy Method to Support Highly-Available
Distributed Systems. Proceedings of the Seventh Annual ACM
Symposium on Principles of Distributed Computing (PODC 1988), Toronto,
ON, Canada, Aug 1988, pp 8--17.
The paper introduces a replication protocol very similar to what is
now known as Paxos. That protocol has become the standard for
consistent, fault-tolerant state-machine replication, and is widely
used in data centers to keep state consistent despite failures.
- Leslie Lamport. The Part-Time Parliament. ACM TOCS 16(2), May 1998, pp 133--169.
The work (originally published in 1989) was independent of and roughly
concurrent with the Viewstamped Replication work also recognized this
year. It describes the protocol in a more general setting, adds a
correctness argument, and forms the basis for modern Paxos.
- Kai Li and Paul Hudak. Memory
Coherence in Shared Virtual Memory Systems. ACM TOCS 7(4), Nov
1989, pp 321--359.
The paper shows how to simulate coherent shared memory on a cluster,
and also introduces directory-based distributed cache coherence. It
spawned an entire research area, and introduced cache-coherence
mechanisms that are widely used in industry.
- Mendel Rosenblum and John
K. Ousterhout. The
Design and Implementation of a Log-Structured File System. ACM TOCS
10(1), Feb 1992, pp 26--52.
The paper introduces log-structured file storage, where data is
written sequentially to a log and continuously de-fragmented. The
underlying ideas have influenced many modern file and storage systems
like NetApp's WAFL file system, Facebook's picture store, aspects of
Google's BigTable, and the Flash translation layers found in SSDs.
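The underlying idea (append every write to the log, remember each key's latest copy, and clean obsolete copies later) can be sketched as follows. Names are illustrative, and the paper's segment cleaning is collapsed to a single pass over one in-memory log:

```python
# Sketch of log-structured storage: all writes append to a log; reads
# consult an in-memory index of each key's latest log position; a
# cleaner copies live records forward and drops obsolete ones.

class Log:
    def __init__(self):
        self.records = []   # append-only list of (key, value)
        self.index = {}     # key -> position of its latest record

    def write(self, key, value):
        self.index[key] = len(self.records)
        self.records.append((key, value))

    def read(self, key):
        return self.records[self.index[key]][1]

    def clean(self):
        # Keep only live records, preserving log order, and rebuild
        # the index over the compacted log.
        live = sorted(self.index.values())
        self.records = [self.records[i] for i in live]
        self.index = {k: i for i, (k, _) in enumerate(self.records)}
```

Overwriting a key leaves the stale copy in the log until `clean()` reclaims it, which is exactly the space the paper's cleaner recovers.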
- Jim Gray. Why Do Computers Stop And What Can Be Done About It? HP Labs Technical Report.
The paper presents the first large-scale quantitative study of
computer failures in practice, in a system built using the best
practices of the time for achieving fault tolerance.
- Ken Thompson. Reflections on Trusting Trust. Communications of the ACM, Volume 27 Issue 8, August 1984.
The paper demonstrated that to have trust in a program, one cannot
just rely on trust in the person who wrote it, or even on verifying
the source code. One must also ensure that the entire tool chain used
to produce and execute binaries is trustworthy.
- Jack B. Dennis and Earl C. Van Horn.
Programming Semantics for Multiprogrammed Computations. Communications of the
ACM, Volume 9 Issue 3, March 1966.
The paper lays out the conceptual foundations for multiprogramming
and protection in computer systems.
- David A. Patterson, Garth Gibson, and Randy
H. Katz. A Case for
Redundant Arrays of Inexpensive Disks (RAID). Proceedings of the
1988 ACM SIGMOD International Conference on Management of Data.
The paper shows how to achieve efficient, fault-tolerant, and highly
available storage using cheap, unreliable components.
- Roger Needham and Michael Schroeder. Using Encryption for Authentication in Large Networks of Computers. Communications of the ACM, December 1978.
- Butler Lampson and Howard Sturgis. Crash Recovery in a Distributed Data Storage System. Technical report, Xerox Palo Alto Research Center, June 1979.
- Jim Gray, Paul McJones, Mike Blasgen, Bruce Lindsay, Raymond Lorie, Tom Price, Franco Putzolu, and Irving Traiger. The Recovery Manager of the System R Database Manager. ACM Computing Surveys, June 1981.
- Cary G. Gray and
David R. Cheriton, Leases: An Efficient Fault-Tolerant Mechanism for Distributed File Cache Consistency, Proceedings of the Twelfth ACM Symposium on Operating Systems Principles (SOSP), December 1989, Litchfield Park, AZ, USA.
The Gray and Cheriton paper introduced and analyzed the lease mechanism, which has become one of the most widely used mechanisms for managing distributed caches. The paper is particularly striking for its careful analysis of the semantics of leases, its detailed experiments, and its thoughtful discussion of fault-tolerance issues.
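The mechanism itself is small: a lease is a time-limited grant of authority over cached data. While it is valid the holder may serve requests locally; once it expires, the server can proceed without contacting the (possibly failed) holder, which is the fault-tolerance property the paper analyzes. A minimal sketch with an injectable clock, using illustrative names not taken from the paper:

```python
# Sketch of a lease: valid until an expiry time, renewable before or
# after expiry. The clock is injectable so behavior can be tested
# deterministically; by default it uses the monotonic system clock.
import time

class Lease:
    def __init__(self, duration, clock=time.monotonic):
        self.clock = clock
        self.expiry = clock() + duration

    def valid(self):
        # The holder may act on cached state only while this is true.
        return self.clock() < self.expiry

    def renew(self, duration):
        self.expiry = self.clock() + duration
```

A server granting short leases trades renewal traffic for a short worst-case wait after a client failure, the central trade-off the paper quantifies.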
- Butler W. Lampson and
David D. Redell,
Experience with processes and monitors in Mesa, Proceedings of the Seventh ACM Symposium on Operating Systems Principles (SOSP), December 1979, Pacific Grove, CA, USA.
When this paper was written, monitors had emerged as the synchronization method of choice in programming languages conferences and operating systems textbooks. This paper was the first to look closely at the practical issues that monitors pose when used in a large production system. These issues remain contemporary, and indeed researchers working on transactional memory mechanisms would do well to reread this wonderful paper.
- Nancy P. Kronenberg,
Henry M. Levy,
and William D. Strecker,
VAXclusters: A Closely-Coupled Distributed System, Proceedings of the Tenth ACM Symposium on Operating Systems Principles (SOSP), December 1985, Orcas Island, WA, USA.
The VAXclusters system was the first modern clustered system, supporting such basic features as a distributed file system and a distributed locking service. The SOSP paper on VAXclusters remains a classic today. VAXclusters was a huge commercial success, and set the stage for today's massive data centers.
- John H. Howard, Michael L. Kazar, Sherri G. Menees, David
A. Nichols, M. Satyanarayanan, Robert N. Sidebotham, and Michael J. West, Scale and
Performance in a Distributed File System, Proceedings of the Eleventh ACM Symposium on Operating
Systems Principles (SOSP), November 1987, Austin TX, USA.
- Richard Rashid, Avadis Tevanian, Michael Young, David Golub, Robert
Baron, David Black, William Bolosky, and Jonathan Chew, Machine-Independent
Virtual Memory Management for Paged Uniprocessor and Multiprocessor
Architectures, Proceedings of the Second International Conference on Architectural Support for Programming Languages and
Operating Systems (ASPLOS), October 1987, Palo Alto CA, USA.
- Andrew D. Birrell, Roy Levin, Roger M. Needham, and Michael D. Schroeder, Grapevine: An Exercise
in Distributed Computing, Proceedings of the Eighth ACM Symposium on Operating Systems Principles
(SOSP), December 1981, Pacific Grove CA, USA.
- Edouard Bugnion, Scott Devine, and Mendel Rosenblum, Disco: Running
Commodity Operating Systems on Scalable Multiprocessors, Proceedings of
the Sixteenth ACM Symposium
on Operating Systems Principles (SOSP), October 1997, Saint Malo, France.
- Andre Bensoussan, Charlie T. Clingen, Robert C. Daley, The Multics Virtual
Memory: Concepts and Design, Communications of the ACM 15(5):308-318, May 1972.
- Leslie Lamport, Time,
Clocks, and the Ordering of Events in a Distributed System, Communications of the
ACM 21(7):558-565, July 1978.
Perhaps the first true "distributed systems" paper, it introduced the concept
of "causal ordering", which turned out to be useful in many settings. The paper
proposed the mechanism it called "logical clocks", but everyone now calls these
Lamport clocks.
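The clock rule is compact enough to sketch (illustrative names): increment on each local event, stamp outgoing messages, and on receipt jump to one past the larger of the local and received timestamps, so causally ordered events get increasing timestamps.

```python
# Sketch of logical (Lamport) clocks for one process in a
# message-passing system.
class Process:
    def __init__(self):
        self.clock = 0

    def event(self):
        # Any local event advances the clock.
        self.clock += 1
        return self.clock

    def send(self):
        # Sending is an event; the timestamp travels with the message.
        return self.event()

    def receive(self, timestamp):
        # Receipt advances past both the local clock and the sender's.
        self.clock = max(self.clock, timestamp) + 1
        return self.clock
```

If `p` sends after two local events and `q` receives the message, `q`'s clock jumps past `p`'s, preserving the causal order of the send before the receive.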
- Andrew D. Birrell and Bruce Jay Nelson, Implementing
Remote Procedure Calls, ACM Transactions on Computer Systems 2(1):39-59, February 1984.
This is the paper on RPC, which has become the standard for remote
communication in distributed systems and the Internet. The paper does an excellent job
laying out the basic model for RPC and the implementation options.
- J. H. Saltzer, D. P. Reed, and D. D. Clark, End-To-End
Arguments in System Design, ACM Transactions on Computer Systems 2(4):277-288, November 1984.
This paper gave system designers, and especially Internet designers, an elegant
framework for making sound decisions. A paper that launched a revolution and, ultimately, a religion.
- Michael Burrows, Martin Abadi, and Roger Needham, A
Logic of Authentication, ACM Transactions on Computer Systems 8(1):18-36, February 1990.
This paper introduced to the systems community a logic-based notation for
authentication protocols to precisely describe certificates, delegations, etc. With
this precise description a designer can easily reason whether a protocol is correct
or not, and avoid the security flaws that have plagued protocols. "Speaks-for"
and "says" are now standard tools for system designers.
- Fred B. Schneider, Implementing
Fault-Tolerant Services Using the State Machine Approach: a tutorial, ACM Computing
Surveys 22(4):299-319, December 1990.
The paper that explained how we should think about replication ... a model that turns
out to underlie Paxos, Virtual Synchrony, Byzantine replication, and even Transactional Memory.
- George C. Necula and Peter Lee, Safe
Kernel Extensions Without Run-Time Checking, Proceedings of the Second
USENIX Symposium on Operating Systems Design and Implementation (OSDI '96), October 1996, Seattle, WA, USA.
This paper introduced the notion of proof carrying code (PCC) and showed how it could
be used for ensuring safe execution by kernel extensions without incurring run-time overhead.
PCC turns out to be a general approach for relocating trust in a system; trust is gained in
a component by trusting a proof checker (and using it to check a proof the component
behaves as expected) rather than trusting the component per se. PCC has become one of the
cornerstones of language-based security.
- Edsger W. Dijkstra, The
Structure of the THE Multiprogramming System, Proceedings of the First
ACM Symposium on Operating Systems Principles, October 1967, Gatlinburg, TN, USA.
The first paper to suggest that an operating system be built in a structured way.
That structure was a series of layers, each a virtual machine that introduced abstractions
built using the functionality of the lower layers. The paper stimulated a great deal of
subsequent work in building operating systems as structured systems.
- Peter J. Denning, The
Working Set Model for Program Behavior, Proceedings of the First
ACM Symposium on Operating Systems Principles, October 1967, Gatlinburg, TN, USA.
This paper introduced the working set model, which has become a key concept in
understanding locality of memory references and in implementing virtual memory. Most
paging algorithms can trace their roots back to this work.
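The model itself is simple to state: the working set W(t, tau) is the set of distinct pages referenced in the last tau references up to time t, and a process's memory allocation should cover it. A sketch with a hypothetical helper name:

```python
# Sketch of Denning's working set W(t, tau): the distinct pages among
# the last tau references in a reference string, ending at position t.
def working_set(refs, t, tau):
    window = refs[max(0, t - tau + 1): t + 1]
    return set(window)
```

For the reference string `[1, 2, 1, 3, 3, 2]`, the working set at t=5 with tau=3 is `{3, 2}`: only pages 3 and 2 appear in the last three references.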
- Dennis M. Ritchie and Ken Thompson, The
UNIX Time-Sharing System, Proceedings of the Fourth
ACM Symposium on Operating Systems Principles, October 1973, Yorktown Heights, NY, USA.
At a time when operating systems were trending towards complexity, UNIX emerged as a
hallmark of elegance and simplicity.
- Butler Lampson, Hints
for Computer System Design, Proceedings of the Ninth
ACM Symposium on Operating Systems Principles, pp. 33-48, October 1983, Bretton Woods, NH, USA.
A classic study of experience building large systems, distilled into a cookbook of
wisdom for the operating systems researcher. As time has passed, the value of these hints
has only grown, and the range of systems to which they apply has enlarged.