The first part of the dissertation presents results obtained through the development of the Sprite file system, which uses large main-memory file caches to achieve high performance. Sprite provides non-write-through file caching on both client and server machines. A simple cache consistency mechanism permits files to be shared by multiple clients without danger of stale data. Benchmark programs indicate that client caches allow diskless Sprite workstations to perform within 0-8% of workstations with disks. In addition, client caching reduces server loading by 50% and network traffic by 75%.
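The consistency guarantee above can be illustrated with a toy version-number check performed when a client opens a file. This is only a sketch of one simple scheme, not necessarily the exact mechanism the dissertation describes; all class and method names (`ServerFile`, `ClientCache`, and so on) are hypothetical.

```python
# Sketch: a client validates its cached copy against the server's
# version number on each open, so it can never read stale data.

class ServerFile:
    def __init__(self):
        self.version = 1
        self.data = b""

    def write(self, data):
        self.data = data
        self.version += 1        # every writer bumps the version number

class ClientCache:
    def __init__(self):
        self.entries = {}        # filename -> (version, data)

    def open(self, name, sfile):
        cached = self.entries.get(name)
        if cached and cached[0] == sfile.version:
            return cached[1]     # cache still valid: serve locally
        # Version mismatch (or cold cache): refetch from the server.
        self.entries[name] = (sfile.version, sfile.data)
        return sfile.data

f = ServerFile()
f.write(b"v1")
c = ClientCache()
assert c.open("log", f) == b"v1"   # first open populates the cache
f.write(b"v2")                     # another client updates the file
assert c.open("log", f) == b"v2"   # version check forces a refetch
```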
In addition to demonstrating the performance advantages of client caching, this dissertation shows the advantage of write policies that delay flushing blocks from client caches to server caches and from server caches to disk. A measurement of 9 different write policies on the client and 4 on the server shows that delayed-write policies provide the best performance in terms of network bytes transferred, disk utilization, server utilization, and elapsed time. More restrictive policies such as write-through can cause benchmarks to execute from 25% to 100% more slowly than delayed-write policies.
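The delayed-write advantage comes from absorbing repeated writes to the same block in the client cache before anything crosses the network. A minimal sketch, with hypothetical names, contrasting write-through against delayed write:

```python
# Sketch: count "network" writes under two client writing policies.

class Server:
    def __init__(self):
        self.writes = 0          # blocks received over the network
        self.blocks = {}

    def write_block(self, num, data):
        self.writes += 1
        self.blocks[num] = data

class WriteThroughCache:
    """Every client write goes to the server immediately."""
    def __init__(self, server):
        self.server = server

    def write(self, num, data):
        self.server.write_block(num, data)

class DelayedWriteCache:
    """Dirty blocks accumulate in the client cache; only the final
    contents go to the server when the delay expires (flush)."""
    def __init__(self, server):
        self.server = server
        self.dirty = {}

    def write(self, num, data):
        self.dirty[num] = data   # overwrite in place, no traffic yet

    def flush(self):             # stands in for the delay expiring
        for num, data in self.dirty.items():
            self.server.write_block(num, data)
        self.dirty.clear()

# A block overwritten 100 times costs 100 network writes under
# write-through but only 1 under delayed write.
s1, s2 = Server(), Server()
wt, dw = WriteThroughCache(s1), DelayedWriteCache(s2)
for i in range(100):
    wt.write(0, i)
    dw.write(0, i)
dw.flush()
print(s1.writes, s2.writes)   # 100 1
```

A further saving, not shown here, is that blocks deleted before the delay expires never reach the server at all.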
The second part of this dissertation examines the interaction between the virtual memory system and the file system. It describes a mechanism, implemented as part of Sprite, that allows the file system cache to vary in size in response to the needs of both the virtual memory system and the file system: the file system on each machine negotiates with the virtual memory system over the use of physical memory. Over a mix of file-intensive and virtual-memory-intensive programs, this variable-size cache provides better performance than a fixed-size file system cache of any size.
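One plausible form of the negotiation is to reclaim a physical page from whichever module, the file cache or virtual memory, currently holds the least recently used page, so that the cache grows when files are hot and shrinks when virtual memory is hot. This sketch assumes that rule; the names and the rule itself are illustrative, not necessarily the dissertation's exact policy.

```python
# Sketch: each module tracks the last-access times of its pages in a
# min-heap; a page request is satisfied from whichever module holds
# the overall least-recently-used page.

import heapq

class Module:
    def __init__(self, name):
        self.name = name
        self.pages = []          # min-heap of last-access times

    def touch(self, t):
        heapq.heappush(self.pages, t)

    def oldest(self):
        return self.pages[0] if self.pages else None

    def evict_oldest(self):
        return heapq.heappop(self.pages)

fs, vm = Module("file cache"), Module("virtual memory")
for t in [1, 4, 9]:
    fs.touch(t)
for t in [2, 3, 8]:
    vm.touch(t)

# Virtual memory needs a page: the file cache's oldest page (t=1) is
# older than virtual memory's oldest (t=2), so the file cache gives
# up one page and shrinks.
victim = fs if fs.oldest() <= vm.oldest() else vm
victim.evict_oldest()
print(victim.name)   # file cache
```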
The last part of this dissertation focuses on copy-on-write mechanisms for efficient process creation. It describes a simple mechanism, implemented as part of Sprite, that combines copy-on-write (COW) with copy-on-reference (COR). The COW-COR mechanism can potentially improve fork performance over copy-on-fork schemes by a factor of 10 to 100 if many page copies are avoided. In normal use, however, more than 70% of the pages must be copied anyway; the overhead of handling the page faults required to copy them results in worse overall performance than copy-on-fork. With a more optimized implementation, forks would be about 20% faster with COW-COR than with copy-on-fork. A pure COW scheme would eliminate 10% to 20% of the page copies required under COW-COR and would provide up to a 20% improvement in fork performance over COW-COR. However, because of extra cache-flushing overhead on machines with virtually-addressed caches, pure COW may perform worse overall than COW-COR on such machines.