Description

Machine learning has its roots in the design of algorithms that extract actionable structure from real-world data. For high-dimensional, high-entropy data, machine learning techniques must cope with a fundamental tension arising from the curse of dimensionality: they must be computationally and statistically efficient even though meaningful signal is hidden in exponentially large spaces. This thesis explores and attempts to resolve this tension in deep generative modeling.

The first part of this thesis addresses imitation learning: the problem of reproducing the behavior of experts acting in dynamic environments. Supervised learning of a mapping from states to expert actions is statistically inefficient, because large amounts of expert data are required to prevent action prediction errors from compounding over long behaviors. We propose a resolution in the form of an algorithm that learns a policy by matching the expert's state distribution. During learning, the algorithm continually executes the policy in the task environment and compares the resulting states to the expert's under a gradually learned reward function. Interacting with the environment during training in this manner lets the algorithm learn policies that stay on expert states even when expert data is extremely scarce.

The second part of this thesis addresses modeling and compressing natural images with likelihood-based generative models: models trained with maximum likelihood to explicitly represent the probability distribution of the data. When these models are scaled to high-entropy datasets, they become computationally inefficient to employ for downstream tasks such as image synthesis and compression. We present progress on these problems through developments in flow models, a class of likelihood-based generative models that admit fast sampling and inference. We reduce flow model codelengths to be competitive with those of other likelihood-based generative models, and we develop the first computationally efficient compression algorithms for flow models, making the improved codelengths realizable in practice with fully parallelizable encoding and decoding.
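The state-distribution-matching idea in the first part can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the thesis's algorithm: a tiny chain MDP, a tabular softmax policy, and a per-state discriminator logit that is trained to separate expert states from the learner's states and then used as a reward for a REINFORCE-style policy update, driving the learner's state distribution toward the expert's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP (assumed for illustration): states 0..4; action 0 stays, action 1 moves right.
N_STATES, HORIZON = 5, 8

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def rollout(theta):
    """Sample one trajectory of (state, action) pairs under softmax policy theta."""
    s, traj = 0, []
    for _ in range(HORIZON):
        a = rng.choice(2, p=softmax(theta[s]))
        traj.append((s, a))
        s = min(s + a, N_STATES - 1)
    return traj

# Expert always moves right, so its state distribution concentrates on state 4.
expert_dist = np.zeros(N_STATES)
s = 0
for _ in range(HORIZON):
    expert_dist[s] += 1
    s = min(s + 1, N_STATES - 1)
expert_dist /= expert_dist.sum()

theta = np.zeros((N_STATES, 2))   # learner policy logits
d_logit = np.zeros(N_STATES)      # discriminator: one logit per state

for _ in range(300):
    batch = [rollout(theta) for _ in range(10)]
    learner_dist = np.zeros(N_STATES)
    for traj in batch:
        for s, _ in traj:
            learner_dist[s] += 1
    learner_dist /= learner_dist.sum()

    # Discriminator ascent on the logistic log-likelihood of "expert vs. learner":
    # raise the logit where the expert visits more often, lower it where the learner does.
    p = 1.0 / (1.0 + np.exp(-d_logit))
    d_logit += 0.5 * (expert_dist * (1 - p) - learner_dist * p)

    # Policy step: REINFORCE with the discriminator logit as per-state reward,
    # pushing the learner toward states the discriminator deems expert-like.
    grad = np.zeros_like(theta)
    for traj in batch:
        ret = sum(d_logit[s] for s, _ in traj)
        for s, a in traj:
            g = -softmax(theta[s])
            g[a] += 1.0
            grad[s] += ret * g
    theta += 0.05 * grad / len(batch)
```

After training, the learner prefers moving right, as the expert does; note the environment interaction in the inner loop, which is what lets this family of methods correct compounding errors without additional expert data.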
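The flow models in the second part rest on the change-of-variables formula: an invertible map f with a tractable Jacobian gives log p(x) = log p_z(f(x)) + log |det df/dx|, and the ideal codelength for x under the model is -log2 p(x) bits. A minimal sketch with one affine coupling layer on a 2-D point follows; the "networks" s and t are made-up toy functions, not the thesis's architecture.

```python
import numpy as np

def coupling_forward(x, s, t):
    """One affine coupling layer: x2 is scaled and shifted using x1, x1 passes through."""
    x1, x2 = x
    z = np.array([x1, x2 * np.exp(s(x1)) + t(x1)])
    logdet = s(x1)  # log |det| of this triangular Jacobian
    return z, logdet

def coupling_inverse(z, s, t):
    """Exact inverse of the coupling layer, enabling fast sampling."""
    z1, z2 = z
    return np.array([z1, (z2 - t(z1)) * np.exp(-s(z1))])

# Hypothetical coupling "networks": fixed scalar functions of x1, for illustration only.
s = lambda x1: 0.5 * np.tanh(x1)
t = lambda x1: 0.1 * x1

x = np.array([0.3, -1.2])
z, logdet = coupling_forward(x, s, t)

# Standard-normal base density in 2-D: log p_z(z) = -||z||^2 / 2 - log(2*pi).
log_pz = -0.5 * (z ** 2).sum() - np.log(2 * np.pi)
log_px = log_pz + logdet
codelength_bits = -log_px / np.log(2)  # ideal codelength for x under the model

# Invertibility recovers x exactly; inference and sampling are both one pass.
x_rec = coupling_inverse(z, s, t)
```

Deep flow models stack many such layers; the thesis's compression results turn these ideal codelengths into practical ones with parallelizable encoding and decoding.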
