Model composition in the form of prediction pipelines is an emerging pattern in the design of machine learning applications that offers the opportunity to substantially simplify development, improve accuracy, and reduce cost. However, in low-latency settings spanning multiple machine learning frameworks with varying resource requirements, prediction pipelines are challenging and expensive to provision and execute.
In this paper we address the challenges of allocating resources for, and efficiently and reliably executing, prediction pipelines that span multiple machine learning models and frameworks. We exploit the reproducible performance characteristics of individual models and the monotonic performance scaling of prediction workloads to decompose the resource allocation and performance tuning problem along model boundaries. As a result, we are able to estimate and optimize end-to-end system performance.
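The decomposition idea can be illustrated with a minimal sketch. The profile numbers, model names, and helper functions below are hypothetical, chosen only to show how per-model profiles (latency as a stable function of batch size) compose into an end-to-end latency estimate for a sequential pipeline:

```python
# Hypothetical per-model profiles: latency in ms, measured offline
# at each candidate batch size. Values are illustrative only.
profiles = {
    "preprocess": {1: 2.0, 4: 3.5, 8: 5.0},
    "resnet":     {1: 20.0, 4: 30.0, 8: 45.0},
    "classifier": {1: 1.0, 4: 1.5, 8: 2.5},
}

def pipeline_latency(config):
    """Estimate end-to-end latency of a sequential pipeline by summing
    per-model latencies for a given {model: batch_size} configuration."""
    return sum(profiles[model][batch] for model, batch in config.items())

def replica_throughput(model, batch):
    """Single-replica throughput (queries/s) at a batch size; workloads
    scale monotonically, so replicas add throughput additively."""
    return batch / (profiles[model][batch] / 1000.0)

# Check a candidate configuration against a 50 ms latency objective.
config = {"preprocess": 4, "resnet": 4, "classifier": 4}
assert pipeline_latency(config) <= 50.0
```

Because each model is profiled independently, a planner can search per-model configurations (batch size, replication) and compose the profiles to predict whether an end-to-end latency objective will be met, rather than tuning the whole pipeline as a black box.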
Our proposed system---InferLine---leverages these insights and instantiates a general-purpose framework for serving prediction pipelines. We demonstrate that InferLine is able to configure and execute prediction pipelines across a wide range of throughput and latency goals, achieving over a 6x reduction in cost compared to a hand-tuned and horizontally scaled single-process pipeline.
InferLine: ML Inference Pipeline Composition Framework