Description

A key step in many policy search algorithms is estimating the gradient of the objective function with respect to a given policy's parameters. The gradient is typically estimated from a set of policy trials. Each of these trials can be very expensive and so we prefer to minimize the total number of trials required to achieve a desired level of performance. In this paper we show that by viewing the task of estimating the gradient as a structured probabilistic inference problem, we can improve the learning performance. We argue that in many instances, reasoning about sensory data obtained during policy execution is beneficial. In other words, in addition to an agent knowing how well it performed during each policy run, it is helpful for it to learn "how it feels" to perform a particular task well. This knowledge is especially useful if we are able to incorporate prior knowledge specific to the given control task. Examples of using prior knowledge include setting the conditional independencies between various sensor variables and choosing the types of conditional probability distributions. In addition, by using hierarchical Bayes methods we are able to efficiently reuse old data from trials of other policies. We demonstrate the effectiveness of this approach by showing an improvement in learning performance on a toy cannon problem and a dart throwing task.
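To make the setting concrete, the sketch below shows one common baseline for estimating a policy gradient purely from noisy trial returns, by regressing returns on small random parameter perturbations. This is an illustrative assumption about the baseline setup, not the structured probabilistic inference method the abstract describes; the names run_trial, theta0, n_trials, and sigma are hypothetical placeholders.

    # Illustrative sketch only: a plain least-squares policy-gradient estimate
    # from a set of policy trials, NOT the paper's inference-based method.
    # `run_trial` (noisy scalar return for given policy parameters), `theta0`,
    # `n_trials`, and `sigma` are hypothetical placeholders.
    import numpy as np

    def estimate_gradient(run_trial, theta0, n_trials=20, sigma=0.05, seed=0):
        """Estimate d(return)/d(theta) at theta0 by regressing trial returns
        on random parameter perturbations (a simple baseline estimator)."""
        rng = np.random.default_rng(seed)
        d = len(theta0)
        perturbations = sigma * rng.standard_normal((n_trials, d))
        returns = np.array([run_trial(theta0 + dp) for dp in perturbations])
        # Least-squares fit: returns ~ baseline + perturbations @ gradient
        X = np.hstack([np.ones((n_trials, 1)), perturbations])
        coeffs, *_ = np.linalg.lstsq(X, returns, rcond=None)
        return coeffs[1:]  # gradient estimate (drop the baseline term)

    # Usage with a toy quadratic objective standing in for an expensive trial:
    if __name__ == "__main__":
        target = np.array([1.0, -2.0])
        noisy_trial = lambda th: -np.sum((th - target) ** 2) + 0.01 * np.random.randn()
        print(estimate_gradient(noisy_trial, np.zeros(2), n_trials=50))

Because every call to run_trial corresponds to an expensive policy execution, the number of perturbed trials needed for a usable estimate is exactly the cost the abstract aims to reduce by bringing in sensory data and prior knowledge.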
