Description
This dissertation aims to lay the foundations for formalizing and ensuring safety in human-robot interaction, particularly when robots learn from and about people. It discusses how treating robot learning algorithms as dynamical systems driven by human data enables safe human-robot interaction. We first introduce a Bayesian monitor that infers online whether the robot's learned human model can evolve to explain the observed human data well. We then discuss how a novel, control-theoretic problem formulation enables us to formally quantify what the robot could learn online from human data and how quickly this learning could be achieved. Coupling these ideas with robot motion planning algorithms, we demonstrate how robots can safely and automatically adapt their behavior based on how trustworthy their learned human models are. The dissertation ends by taking a step back to raise the question "What is the 'right' notion of safety when robots interact with people?" and discusses how rethinking our notions of safety can capture more subtle aspects of human-robot interaction.
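To give a rough sense of the flavor of such an online Bayesian monitor, the sketch below maintains a belief over a hypothetical model-confidence parameter and updates it as human actions are observed. This is a minimal illustration, not the dissertation's actual formulation: the Boltzmann-rational action model, the candidate confidence levels in BETAS, the Q-values, and all function names are assumptions introduced here for exposition.

```python
import numpy as np

# Minimal sketch of a Bayesian model-confidence monitor (illustrative only).
# Assumptions: the human is modeled as Boltzmann-rational with inverse
# temperature beta ("model confidence"), actions are discrete, and
# q_values[u] scores how well action u serves the modeled human objective.

BETAS = np.array([0.1, 1.0, 10.0])         # hypothesized confidence levels
belief = np.ones_like(BETAS) / len(BETAS)  # uniform prior over beta

def action_likelihood(q_values: np.ndarray, u_idx: int, beta: float) -> float:
    """P(u | x; beta) under a Boltzmann-rational human action model."""
    logits = beta * q_values
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[u_idx]

def update_belief(belief: np.ndarray, q_values: np.ndarray, u_idx: int) -> np.ndarray:
    """Bayesian update of the belief over beta after observing human action u."""
    likelihoods = np.array([action_likelihood(q_values, u_idx, b) for b in BETAS])
    posterior = likelihoods * belief
    return posterior / posterior.sum()

# Example: the learned model says action 0 is best, but the human keeps choosing
# action 2; posterior mass shifts toward low beta, i.e. "distrust the learned model."
q = np.array([1.0, 0.2, -0.5])
for _ in range(5):
    belief = update_belief(belief, q, u_idx=2)
print(dict(zip(BETAS, np.round(belief, 3))))
```

In a planner, a belief concentrated on low confidence could trigger more conservative robot behavior, while high confidence could license closer, more efficient interaction; the coupling used in the dissertation may differ.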