Description

Before the broad availability of cloud computing, algorithms were developed to generate low-latency schedules for processors given an input DAG of computational tasks. These algorithms often relied on simplifying assumptions, including an unlimited number of processors and an unlimited amount of memory per processor. With the recent explosion of cloud computing, the unbounded-processor setting is now practically viable. Our project, the HASTE execution optimizer, takes as input workloads consisting of sets of task DAGs, a style of workload that is common in data-driven applications, especially machine learning. HASTE employs the Lower Bound (LWB) scheduling algorithm, which assumes an unbounded number of processors and schedules each task node in the DAG at its earliest possible start time. HASTE merges common prefixes across DAGs into a single merged DAG, eliminating duplicated execution of equivalent nodes. After passing this merged DAG through LWB to generate a schedule, HASTE determines the memory costs of the schedule and groups scheduled task executions into VMs in order to minimize total compute time aggregated across allocated VMs. This report outlines the ILP that groups these components together to minimize the aggregated compute time, and therefore the financial cost, of HASTE. Using public Azure Function invocation data, this report also shows how the heuristics employed by HASTE deliver the low latency of LWB while staying within a reasonable financial cost relative to standard DAG execution algorithms.
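
To make the LWB idea concrete, the sketch below computes earliest possible start times for a task DAG under the unbounded-processor assumption: each task begins as soon as all of its predecessors have finished. The DAG, task durations, and function name are illustrative assumptions for this sketch, not HASTE's actual interfaces or data structures.

from collections import defaultdict, deque

def earliest_start_schedule(durations, edges):
    # Earliest-start schedule for a task DAG assuming unbounded processors.
    # durations: {task: runtime}; edges: (u, v) pairs meaning u must finish before v starts.
    # Returns {task: start_time}; each task starts once all of its predecessors finish.
    succs = defaultdict(list)
    indeg = {t: 0 for t in durations}
    for u, v in edges:
        succs[u].append(v)
        indeg[v] += 1
    start = {t: 0.0 for t in durations}
    ready = deque(t for t, d in indeg.items() if d == 0)
    while ready:
        u = ready.popleft()
        finish_u = start[u] + durations[u]
        for v in succs[u]:
            start[v] = max(start[v], finish_u)  # v cannot start before u finishes
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return start

# Illustrative DAG with a shared prefix node "a": a -> b, a -> c, b -> d, c -> d
durations = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(earliest_start_schedule(durations, edges))  # {'a': 0.0, 'b': 2.0, 'c': 2.0, 'd': 5.0}

With unbounded processors, b and c run concurrently after a, so the makespan is the length of the critical path (a, b, d) rather than the sum of all task runtimes.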
