Description

MapReduce is a popular but still insufficiently understood paradigm for large-scale, distributed, data-intensive computation. The variety of MapReduce applications and deployment environments makes it difficult to model MapReduce performance and to generalize design improvements. In this paper, we present a methodology for understanding performance tradeoffs in MapReduce workloads. Using production workload traces from Facebook and Yahoo, we develop an empirical workload model and use it to generate and replay synthetic workloads. We demonstrate how to use this methodology to answer "what-if" questions pertaining to system size, data intensity, and hardware/software configuration.
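One way to read "empirical workload model" is non-parametric sampling: rather than fitting analytic distributions, synthetic jobs are drawn directly from the observed trace and laid out on a submission timeline for replay. The sketch below illustrates that idea only; the Job fields (inter-arrival gap, input/shuffle/output byte counts) and the synthesize function are hypothetical stand-ins for illustration, not the paper's actual model or tooling.

    # Minimal sketch of empirical synthetic-workload generation.
    # Assumption: the workload model is a list of observed jobs, each with an
    # inter-arrival gap and input/shuffle/output byte counts (illustrative only).
    import random
    from dataclasses import dataclass

    @dataclass
    class Job:
        interarrival_s: float   # gap since the previous job submission
        input_bytes: int        # bytes read by the map phase
        shuffle_bytes: int      # bytes moved from map to reduce
        output_bytes: int       # bytes written by the reduce phase

    def synthesize(trace: list[Job], duration_s: float, seed: int = 0) -> list[tuple[float, Job]]:
        """Sample jobs with replacement from the observed trace until the
        synthetic workload spans duration_s seconds of submission time."""
        rng = random.Random(seed)
        schedule, clock = [], 0.0
        while clock < duration_s:
            job = rng.choice(trace)      # empirical (non-parametric) sampling
            clock += job.interarrival_s
            schedule.append((clock, job))
        return schedule

    # Toy two-job trace standing in for a production workload sample.
    trace = [Job(5.0, 10**9, 10**8, 10**7), Job(60.0, 10**12, 10**11, 10**10)]
    for t, job in synthesize(trace, duration_s=300.0):
        print(f"t={t:7.1f}s  input={job.input_bytes:>16,d} B")

Sampling with replacement preserves the trace's joint distribution over inter-arrival times and data sizes without assuming any parametric form, which is what allows the same model to answer "what-if" questions by rescaling the sampled values (e.g., multiplying byte counts to vary data intensity).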
