Description

Multi-task policies enable a user to adjust the desired objective or task parameters without training a new policy for each new task. To train multi-task policies that generalize to unseen tasks, it is common to train on a large repository of tasks, where each task is typically learned from demonstrations or reward functions. However, collecting human demonstrations or instrumenting reward functions for every new task is expensive and limits the scaling of multi-task policies. How tasks are specified to a multi-task policy is another important dimension, since communicating a task can itself require costly human labor. In this thesis, we explore ways to learn and specify new tasks with minimal human supervision, enabling more scalable multi-task policies.
