Machine learning approaches have traditionally made strong simplifying assumptions: that a benevolent teacher is available to present and classify instances of a single concept to be learned; that no noise or uncertainty is present in the environment; that a complete and correct domain theory is available; or that a useful language is provided by the designer. Additionally, much existing machine learning research has been done in a piecemeal fashion, addressing subproblems without a uniform conceptual approach to designing intelligent systems. The resulting learning techniques are often useful only for narrowly defined problems, and are so dependent on the underlying assumptions that they generalize poorly, if at all, to complex domains.

PAGODA (Probabilistic Autonomous GOal-Directed Agent), the intelligent agent design presented in this thesis, avoids making any of the above assumptions. It incorporates solutions to the problems of deciding what to learn, selecting a learning bias, and learning inductively under uncertainty into an integrated system based on the principles of probabilistic knowledge representation, Bayesian evaluation techniques, and limited rationality as a normative behavioral goal. PAGODA has been implemented and tested in a simulated robot domain, RALPH (Rational Agent with Limited Performance Hardware).

Goal-Directed Learning (GDL) allows the agent to decide what to learn, enabling autonomous learning in complex domains. The value of being able to predict each feature of the environment is computed using the principles of decision theory, and the features with the highest values are adopted as learning goals for building predictive theories.
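The goal-selection idea can be sketched as follows. This is a minimal illustration, not the thesis's actual algorithm: the feature names, outcome probabilities, and utilities below are invented, and the value of a prediction is simplified to the expected utility gain from acting with the prediction versus acting without it.

```python
# Hypothetical sketch of decision-theoretic learning-goal selection.
# All feature names, probabilities, and utilities are illustrative only.

def value_of_prediction(p_outcomes, utility_with, utility_without):
    """Expected utility gain from being able to predict a feature:
    for each outcome, the gain from acting on the prediction versus
    acting without it, weighted by the outcome's probability."""
    return sum(p * (utility_with[o] - utility_without[o])
               for o, p in p_outcomes.items())

# Each feature: (outcome probabilities, utility if predicted, utility if not).
features = {
    "food_nearby": ({"yes": 0.3, "no": 0.7},
                    {"yes": 10, "no": 0},
                    {"yes": 4,  "no": 0}),
    "wall_ahead":  ({"yes": 0.5, "no": 0.5},
                    {"yes": 2, "no": 1},
                    {"yes": 1, "no": 1}),
}

values = {f: value_of_prediction(p, uw, uwo)
          for f, (p, uw, uwo) in features.items()}

# The highest-value features become learning goals for predictive theories.
learning_goals = sorted(values, key=values.get, reverse=True)
```

Under these made-up numbers, predicting `food_nearby` is worth more than predicting `wall_ahead`, so it would be chosen as the first learning goal.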

Probabilistic Bias Evaluation (PBE) selects the learning bias for each learning goal. It uses probabilistic domain knowledge, an expected learning curve, and a time-preference function to compute the expected discounted future accuracy of each proposed bias; the bias with the highest expected accuracy is used for learning.
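The trade-off PBE resolves can be sketched with two candidate biases. This is a hedged illustration under stated assumptions: the exponential learning curves, their parameters, and the geometric discount are stand-ins, not the functions used in the thesis.

```python
# Hedged sketch of expected discounted future accuracy for bias selection.
# Learning-curve shapes and the discount rate are assumptions for illustration.
import math

def expected_discounted_accuracy(learning_curve, discount, horizon):
    """Sum the accuracy the learning curve predicts at each future time
    step, weighted by a time-preference (discount) function."""
    return sum(discount(t) * learning_curve(t) for t in range(horizon))

def curve(asymptote, rate):
    # Negatively accelerated learning curve: accuracy approaches `asymptote`.
    return lambda t: asymptote * (1.0 - math.exp(-rate * t))

biases = {
    # A strong bias learns quickly but caps at lower accuracy;
    # a weak bias learns slowly but can reach higher accuracy.
    "small_hypothesis_space": curve(asymptote=0.80, rate=0.5),
    "large_hypothesis_space": curve(asymptote=0.95, rate=0.1),
}

gamma = 0.9  # time preference: near-term accuracy matters more
scores = {name: expected_discounted_accuracy(c, lambda t: gamma ** t, horizon=50)
          for name, c in biases.items()}
best_bias = max(scores, key=scores.get)
```

With this short time preference, the fast-learning restrictive bias scores higher even though the weaker bias eventually becomes more accurate; a discount closer to 1 would reverse the choice.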

Theories are represented as Uniquely Predictive Theories (UPTs), which consist of restricted sets of conditional probabilities. Probability Combination using Independence (PCI), a probabilistic inference method that relies on minimal independence assumptions, is applied to the theories to make probabilistic predictions for planning and evaluation. A Bayesian evaluation method selects the theory that best explains the observed data.
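The Bayesian evaluation step can be sketched as comparing theories by posterior probability. The priors, the two candidate theories, and the observation sequence below are invented for illustration; the thesis's actual UPT scoring and prior differ in detail.

```python
# Hedged sketch of Bayesian theory evaluation: score each candidate
# theory by log prior plus the log-likelihood it assigns to the data.
# Theories, priors, and observations are illustrative assumptions.
import math

def log_posterior(log_prior, theory, observations):
    """log P(theory | data) up to an additive constant."""
    return log_prior + sum(math.log(theory(obs)) for obs in observations)

# Two candidate theories of P(outcome | move action), as conditional
# probabilities: theory A predicts motion usually succeeds; theory B
# is uninformative.
theory_a = lambda obs: 0.9 if obs == "moved" else 0.1
theory_b = lambda obs: 0.5

data = ["moved"] * 8 + ["stuck"] * 2

scores = {
    "A": log_posterior(math.log(0.5), theory_a, data),
    "B": log_posterior(math.log(0.5), theory_b, data),
}
best_theory = max(scores, key=scores.get)
```

On this data the informative theory wins despite its two failed predictions; with mostly `"stuck"` observations the uninformative theory would score higher.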

Chapter 1 of the thesis defines the problem of building autonomous rational agents, and motivates PAGODA as a solution to this problem. Chapter 2 surveys past approaches to probabilistic learning. Chapter 3 describes PAGODA's performance element, including the RALPH world and PAGODA's probabilistic representation for theories (UPTs), inference method (PCI), and planning mechanism. Chapters 4, 5, and 6 describe Goal-Directed Learning, Probabilistic Bias Evaluation, and probabilistic learning, respectively. The implementation of PAGODA in the RALPH domain and results of empirical tests are described in Chapter 7. Related work in a number of fields is discussed in Chapter 8. Finally, Chapter 9 presents conclusions and outlines open problems for future research.



