Description

The last decade of advances in AI has been dominated by approaches to scaling deep learning across large datasets and compute. These advances have led to significant progress in natural language processing and computer vision, but have not yet translated to the same level of success in sequential decision-making settings.

One explanation for this discrepancy is that we have not yet figured out how to leverage the structure of decision-making problems to exploit these advances. In this thesis, we tackle the question: what is the right way to represent the world for sequential decision-making?

In Part I, I discuss work on how we should represent tasks and goals in a way that lets us leverage the large-scale robotics datasets and pretrained models that have emerged in recent years. Part II then focuses on representations that enable compositional and long-horizon decision-making in more general settings. Part III begins to explore how representations can be structured to compute information-theoretic quantities that enable new intrinsic motivation capabilities.
