Description

In this thesis, we study how the maximum entropy framework can provide efficient deep reinforcement learning (deep RL) algorithms that solve tasks consistently and sample-efficiently. This framework has several intriguing properties. First, the optimal policies are stochastic, which improves exploration and prevents convergence to local optima, particularly when the objective is multimodal. Second, the entropy term provides regularization, resulting in more consistent and robust learning compared to deterministic methods. Third, maximum entropy policies are composable: two or more policies can be combined, and the resulting policy can be shown to be approximately optimal for the sum of the constituent task rewards. Fourth, the view of maximum entropy RL as probabilistic inference provides a foundation for building hierarchical policies that can solve complex, sparse-reward tasks. In the first part, we will devise new algorithms based on this framework, starting with soft Q-learning, which learns expressive energy-based policies; continuing with soft actor-critic, which provides the simplicity and convenience of actor-critic methods; and ending with an automatic temperature adjustment scheme that practically eliminates the need for hyperparameter tuning, a crucial feature for real-world applications where tuning hyperparameters can be prohibitively expensive. In the second part, we will discuss extensions enabled by the inherent stochasticity of maximum entropy policies, including compositionality and hierarchical learning. We will demonstrate the effectiveness of the proposed algorithms on both simulated and real-world robotic manipulation and locomotion tasks.
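
For reference, the maximum entropy objective underlying these algorithms augments the expected return with the policy's entropy, weighted by a temperature coefficient (the quantity that the automatic adjustment scheme tunes). The formulation below is a standard sketch of this objective, with notation assumed rather than quoted from the thesis:

\[
J(\pi) \;=\; \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
\Big[ r(s_t, a_t) \;+\; \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big],
\qquad
\mathcal{H}\big(\pi(\cdot \mid s_t)\big) \;=\; -\,\mathbb{E}_{a \sim \pi(\cdot \mid s_t)}\big[\log \pi(a \mid s_t)\big],
\]

where \(\rho_\pi\) denotes the state-action marginals induced by the policy \(\pi\) and \(\alpha\) is the temperature; taking \(\alpha \to 0\) recovers the conventional expected-return objective.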
