To characterize wide-area network traffic, we have analyzed traces from four Internet sites. We identify characteristics common to all conversations of each major type of traffic, and find that these characteristics are stable across time and geographic site. Our results contradict many prevalent beliefs. For example, previous simulation models of wide-area traffic have assumed bulk transfers of 80 kilobytes to 2 megabytes of data. In contrast, we find that up to 90% of all bulk transfers involve 10 kilobytes or less. These and other findings may affect the results of previous studies and should be taken into account in future models of wide-area traffic.
We derive from our traces a new workload model for driving simulations of wide-area internetworks. It generates traffic for individual conversations of each major type of traffic. The model accurately and efficiently reproduces behavior specific to each traffic type by sampling measured probability distributions through the inverse transform method. Our model is valid for network conditions other than those prevalent during the measurements because it samples only network-independent traffic characteristics. We also describe a new wide-area internetwork simulator that includes both our workload model and realistic models of network components.
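The inverse transform method mentioned above can be sketched in a few lines: draw a uniform random number and map it through the measured cumulative distribution. The distribution below is illustrative only (chosen to echo the 90%-under-10-kilobytes finding), not data from our traces, and the function names are hypothetical.

```python
import bisect
import random

def make_sampler(values, cdf):
    """Build an inverse-transform sampler for an empirical distribution:
    draw u ~ Uniform(0,1) and return the smallest value whose cumulative
    probability is >= u."""
    def sample():
        u = random.random()
        i = bisect.bisect_left(cdf, u)
        return values[min(i, len(values) - 1)]
    return sample

# Illustrative (not measured) distribution of bulk-transfer sizes in bytes;
# 90% of transfers fall at or below 10 kilobytes.
sizes = [1_000, 5_000, 10_000, 100_000, 2_000_000]
cdf   = [0.40, 0.70, 0.90, 0.98, 1.00]

draw_size = make_sampler(sizes, cdf)
samples = [draw_size() for _ in range(10_000)]
```

Because the sampler draws from network-independent characteristics such as transfer sizes, rather than from observed packet timings, the generated workload stays meaningful when the simulated network differs from the measured one.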
We then present a simulation study of policies for multiplexing datagrams over virtual circuits at the entrance to wide-area networks. We compare schemes for mapping conversations to virtual circuits and queueing disciplines for scheduling datagrams onto virtual circuits. We find that networks should establish one virtual circuit per type of traffic flowing between two network points of presence, and provide round-robin service to transmission resources shared by virtual circuits. This multiplexing policy exhibits good performance and consumes moderate amounts of resources at the expense of some fairness among traffic sources of the same type. In particular, it holds interactive delay nearly constant and close to the minimum possible, and keeps bulk-transfer throughput near the maximum possible, even as network load increases beyond saturation. Furthermore, it results in bottleneck buffer consumption that rises slowly with offered load. Other multiplexing policies exhibit interactive delay that increases with offered load, and buffer consumption that rises quickly with offered load.
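The recommended policy can be sketched as per-circuit FIFO queues served round-robin at the shared link. This is a minimal illustration, not the simulator used in the study; the class and circuit names are hypothetical.

```python
from collections import deque

class RoundRobinMux:
    """Sketch of round-robin service over per-virtual-circuit queues.
    Each traffic type gets its own virtual circuit between two points
    of presence; the shared link visits active circuits in turn."""

    def __init__(self):
        self.queues = {}      # circuit id -> FIFO queue of datagrams
        self.order = deque()  # round-robin order of known circuits

    def enqueue(self, circuit, datagram):
        if circuit not in self.queues:
            self.queues[circuit] = deque()
            self.order.append(circuit)
        self.queues[circuit].append(datagram)

    def dequeue(self):
        """Return the next datagram to transmit, or None if all empty."""
        for _ in range(len(self.order)):
            circuit = self.order[0]
            self.order.rotate(-1)  # advance the round-robin pointer
            if self.queues[circuit]:
                return self.queues[circuit].popleft()
        return None

# A burst of bulk datagrams cannot starve the interactive circuit:
mux = RoundRobinMux()
for i in range(3):
    mux.enqueue("bulk", f"b{i}")
mux.enqueue("interactive", "i0")
sent = [mux.dequeue() for _ in range(4)]
# Interactive datagram i0 is served after only one bulk datagram.
```

Separating traffic types into circuits and interleaving service is what keeps interactive delay near-constant under load: a backlog on the bulk circuit adds at most one cell of waiting per round to interactive traffic.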
Again using our traffic characterization, we evaluate mechanisms for multiplexing variable-sized datagrams onto small fixed-size cells. Cells offer performance and implementation advantages to networks that service many types of traffic, but they incur bandwidth inefficiencies due to protocol headers and cell fragmentation. We find that cell-based networks using standard protocols are inefficient in carrying wide-area data traffic. For example, ATM-based networks using SMDS and IEEE 802.6 protocols lose more than 40% of their bandwidth to overhead at the network level and below. Furthermore, we find that viable compression techniques can significantly improve efficiency. For example, a combination of three compression techniques can regain more than 20% of the bandwidth previously lost to overhead.
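The source of the inefficiency is simple arithmetic: each datagram carries encapsulation headers and is padded out to a whole number of cells. The sketch below computes bandwidth efficiency under assumed overhead figures (53-byte cells with 44 user bytes each, AAL3/4-style per-cell segmentation overhead, and an assumed 28-byte per-datagram encapsulation); these numbers are illustrative, not the exact SMDS/IEEE 802.6 values.

```python
import math

def cell_efficiency(datagram_len, l3_overhead=28, cell_payload=44,
                    cell_size=53):
    """Fraction of link bandwidth carrying user data when a datagram is
    segmented into fixed-size cells. Parameters are illustrative
    assumptions:
      l3_overhead  - per-datagram encapsulation bytes (header + trailer)
      cell_payload - user bytes per cell (48-byte ATM payload minus
                     assumed 4 bytes of per-cell segmentation overhead)
      cell_size    - bytes per cell on the wire (ATM: 53)
    """
    pdu = datagram_len + l3_overhead
    cells = math.ceil(pdu / cell_payload)  # last cell is padded
    return datagram_len / (cells * cell_size)

# Small datagrams, which dominate wide-area traffic, segment poorly:
for n in (40, 128, 552, 1500):
    print(n, round(cell_efficiency(n), 2))
```

Under these assumptions a 40-byte datagram uses well under half the bandwidth it consumes, which is why header compression and similar techniques recover so much of the loss.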