Description
To address these challenges, I pursue two lines of inquiry.
On the inference speed side, I consider the task of accelerating semantic segmentation, a classic image recognition task with applications in autonomous perception, on video by leveraging motion information. Specifically, I explore the use of block motion fields from compressed video (e.g., MPEG-4/H.264) to warp deep representations, together with a new two-stream network architecture that corrects warping error. These techniques seek to amortize the cost of extracting image features, the bottleneck in many neural architectures, over multiple video frames, while preserving, and in some cases boosting, model accuracy.
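To make the warping step concrete, the following is a minimal sketch in PyTorch, not the thesis implementation: it displaces a keyframe's feature map along a per-location motion field, here assumed to be compressed-video motion vectors already rescaled to feature resolution. The function name and tensor layout are illustrative assumptions.

```python
# Minimal sketch of feature warping with a block motion field (assumed, illustrative).
# `features`: (N, C, H, W) feature map extracted at a keyframe.
# `motion`:   (N, 2, H, W) per-location (dx, dy) offsets in feature-map pixels,
#             e.g., motion vectors from the compressed bitstream, rescaled.
import torch
import torch.nn.functional as F

def warp_features(features: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
    """Bilinearly warp keyframe features along the motion field to approximate
    the features of a later frame without re-running the feature extractor."""
    n, _, h, w = features.shape
    # Base grid of integer pixel coordinates (row = y, col = x).
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=features.dtype, device=features.device),
        torch.arange(w, dtype=features.dtype, device=features.device),
        indexing="ij",
    )
    # Displace each location by its motion vector.
    x_src = xs.unsqueeze(0) + motion[:, 0]
    y_src = ys.unsqueeze(0) + motion[:, 1]
    # Normalize coordinates to [-1, 1], the convention grid_sample expects.
    grid = torch.stack(
        (2.0 * x_src / (w - 1) - 1.0, 2.0 * y_src / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(features, grid, mode="bilinear", align_corners=True)
```

Because block motion vectors are coarse and their error accumulates across frames, the warped features would then be passed through the second, corrective stream rather than used directly.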
On the compute cost side, I investigate how cross-camera person tracking, a video analytics task with applications in security and retail intelligence, can be executed efficiently on multiple video streams. Here I demonstrate how a profile of cross-camera correlations, built offline on historical video data, can be used as a spatial and temporal filter, ruling out cameras and frames unlikely to contain the target identity at inference time. My experiments show that this filtering, together with a fallback mechanism, can substantially reduce compute cost, and even improve precision, on video analytics workloads.
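A hypothetical sketch of the filtering idea follows: suppose the offline profile records, for each camera pair, the fraction of historical identities that transit between them and the observed travel-time window. At query time, only cameras and frame ranges passing both filters are searched. The names (`TransitStats`, `candidate_windows`) and the probability threshold are assumptions for illustration, not the system's actual interface.

```python
# Hypothetical sketch of spatiotemporal filtering from a cross-camera profile.
from dataclasses import dataclass

@dataclass
class TransitStats:
    prob: float   # fraction of historical identities that made this camera-to-camera hop
    t_min: float  # earliest observed travel time (seconds)
    t_max: float  # latest observed travel time (seconds)

# profile[src_cam][dst_cam] -> TransitStats, built offline from historical video.
Profile = dict[int, dict[int, TransitStats]]

def candidate_windows(profile: Profile, src_cam: int, t_exit: float,
                      min_prob: float = 0.05) -> dict[int, tuple[float, float]]:
    """Spatial filter: drop cameras the target is historically unlikely to reach.
    Temporal filter: search each remaining camera only within the travel-time
    window observed in the profile, offset from the target's exit time."""
    windows = {}
    for dst_cam, stats in profile.get(src_cam, {}).items():
        if stats.prob >= min_prob:  # spatial filter
            windows[dst_cam] = (t_exit + stats.t_min, t_exit + stats.t_max)  # temporal filter
    return windows
```

If re-identification fails within the filtered windows, the fallback mechanism mentioned above would widen the search to the unfiltered set of cameras and frames, trading compute back for recall.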
This body of work is motivated by a simple premise: machine learning systems must meet the performance requirements of the applications they enable. Advances in deep learning applied to vision have unlocked opportunities in robotic navigation, industrial and agricultural monitoring, and retail intelligence, each with its own latency, throughput, and cost constraints. This thesis is a step toward solving this constrained optimization problem.