
Description

Cloud computing has given rise to a variety of distributed applications that rely on harnessing commodity resources for large-scale computations. The inherent performance variability in these applications' workloads, coupled with the heterogeneity of the underlying systems, renders heuristics-based design decisions ineffective, whether for system configuration, application partitioning and placement, or job scheduling. Furthermore, the cloud operator's objective of maximizing utilization conflicts with application developers' goal of minimizing latency, necessitating systematic approaches to trading off these competing objectives. One important cloud application that highlights these tradeoffs is MapReduce. In this paper, we demonstrate a systematic approach to reasoning about cloud performance tradeoffs using a tool we developed called Statistical Workload Analysis and Replay for MapReduce (SWARM). We use SWARM to generate realistic workloads and examine latency-utilization tradeoffs in MapReduce. SWARM enables us to infer that batched and multi-tenant execution effectively balance latency against cluster utilization, a key insight for cloud operators.
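The report itself describes how SWARM performs workload analysis and replay; as a rough illustration of the kind of statistical workload synthesis mentioned above (not SWARM's actual implementation), the sketch below samples job inter-arrival times and input sizes from assumed distributions to produce a synthetic MapReduce job trace that a replay harness could submit to a cluster. All distribution choices, parameter values, and function names here are hypothetical.

# Illustrative sketch only: synthesizes a MapReduce-style job trace by sampling
# inter-arrival times and input sizes. This is NOT SWARM's implementation;
# the distributions and parameters below are assumptions for illustration.
import random


def synthesize_trace(num_jobs, mean_interarrival_s=30.0, size_mu=22.0, size_sigma=1.5, seed=42):
    """Return a list of (submit_time_s, input_bytes) tuples for a synthetic workload."""
    rng = random.Random(seed)
    trace = []
    t = 0.0
    for _ in range(num_jobs):
        # Exponential inter-arrivals model a Poisson-like submission process (assumption).
        t += rng.expovariate(1.0 / mean_interarrival_s)
        # Log-normal input sizes capture heavy-tailed job size distributions (assumption).
        input_bytes = int(rng.lognormvariate(size_mu, size_sigma))
        trace.append((t, input_bytes))
    return trace


if __name__ == "__main__":
    for submit_time, size in synthesize_trace(5):
        print(f"submit at {submit_time:8.1f}s  input {size:>15,d} bytes")

A replayer would then submit jobs at the generated timestamps to measure latency and cluster utilization under different execution policies (e.g., batched vs. multi-tenant).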
