
Description

Scientists increasingly rely on Python tools to perform scalable distributed-memory array operations using rich, NumPy-like expressions. Existing solutions achieve suboptimal performance on numerical operations and machine learning model training because they rely on the dynamic scheduling provided by task-based distributed systems. This can lead to performance problems that are difficult to address without in-depth knowledge of the underlying distributed system. Generalized linear models in particular are difficult to scale, given their reliance on element-wise array operations and basic linear algebra.

In this thesis, I frame these problems in terms of scalable linear algebra and automatic parallelization of Python. The solutions presented seamlessly scale the NumPy API and generalized linear models on task-based distributed systems. The overall solution is presented in three primary parts: (1) an approach to parallelizing generalized linear models (GLMs) using blocked matrix operations; (2) the open-source library NumS, an implementation of these ideas for the NumPy API, optimized for the distributed system Ray; and (3) formal syntax and semantics for automatic parallelization of basic Python and linear algebra operations.
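The core idea behind part (1) can be illustrated with a small NumPy sketch: when the data matrix is partitioned into row blocks, the logistic regression gradient decomposes into a sum of independent per-block terms, so each term can be computed on a separate worker and reduced. The function and variable names below are illustrative, not NumS's actual API.

```python
import numpy as np

def logistic_grad_blocked(X_blocks, y_blocks, w):
    """Logistic-loss gradient computed block-wise.

    Each (X_b, y_b) pair is one row block of the data. Per-block
    gradient terms are independent, so a task-based backend could
    compute each term on a different worker and sum the results.
    """
    grad = np.zeros_like(w)
    for X_b, y_b in zip(X_blocks, y_blocks):
        p = 1.0 / (1.0 + np.exp(-X_b @ w))  # per-block predictions
        grad += X_b.T @ (p - y_b)           # per-block gradient term
    return grad

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = (X @ w_true > 0).astype(float)

# Split rows into 4 blocks, as a block-partitioned array would.
X_blocks = np.array_split(X, 4)
y_blocks = np.array_split(y, 4)

w = np.zeros(4)
g_blocked = logistic_grad_blocked(X_blocks, y_blocks, w)

# The blocked result matches the unblocked gradient.
p = 1.0 / (1.0 + np.exp(-X @ w))
g_full = X.T @ (p - y)
assert np.allclose(g_blocked, g_full)
```

Because the reduction is a plain sum, the same decomposition applies to any GLM whose gradient is a sum over rows; only the per-block link function changes.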

Our primary contribution is NumS, a modular Python-based distributed numerical array library optimized for Ray. Load Simulated Hierarchical Scheduling (LSHS), the scheduler developed for NumS, attains communication lower bounds on some common numerical operations. Our empirical study shows that LSHS enhances performance on Ray, reducing network load by 2×, memory use by 4×, and execution time on logistic regression by 10×. Compared to related solutions, LSHS achieves up to a 2× speedup on logistic regression over Dask ML and Spark's MLlib on a terabyte of data.
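The "load simulated" part of LSHS can be conveyed with a toy sketch: maintain a simulated load estimate per node and greedily place each block operation on the node whose simulated load is lowest. This is illustrative only, with hypothetical names, and omits the hierarchical and communication-aware aspects of the actual scheduler.

```python
def place_blocks(block_costs, num_nodes):
    """Greedy load-simulated placement (toy version).

    Tracks an estimated load per node and assigns each block
    operation to the currently least-loaded node, updating the
    simulated load after each assignment.
    """
    loads = [0.0] * num_nodes
    placement = []
    for cost in block_costs:
        node = min(range(num_nodes), key=lambda i: loads[i])
        loads[node] += cost
        placement.append(node)
    return placement, loads

# Seven block operations of varying cost, two nodes.
placement, loads = place_blocks([4, 3, 3, 2, 2, 1, 1], num_nodes=2)
# Simulated loads end up balanced: [8.0, 8.0].
```

In contrast, a purely dynamic scheduler that ignores estimated load can concentrate related block tasks on a few nodes, which is one source of the network and memory overheads reported above.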
