Description

Every good collaboration is built on solid mutual understanding. Without understanding their machines' behavior, human operators cannot plan around them. Yet increasing automation distances operators from that active understanding. This dissertation applies cognitive science to build automation that strengthens human understanding.

The need for transparency is most urgent in safety-supervision tasks. Humans' environmental awareness and expansive understanding of safety can save robots from unforeseen edge cases, but only if those humans can think through the robot's ongoing activity. Actions can be optimized to evidence safety or to clearly anticipate faults, enabling supervisors to develop appropriate, evidence-based trust. This work explores how observing action allows humans and robots to construct better working models of each other.

Research on assured autonomy focuses on how machines can autonomously guarantee safety. Yet there will always remain a modeling gap that we need human collaborators to help fill: that is why, after decades of autopilot experience and improvement, we still require two human pilots to validate ongoing safe operation. This thesis contends that safe robotics must work to inform these safety collaborators, and that choices do not only function to complete objectives but also serve as evidence that other agents ultimately judge.

Characterizing how agents judge can empower our machines to choose actions that win correct judgements. First, we show how to learn humans' safety concerns from data despite noisy dynamics and demonstrations. After learning humans' concerns, we characterize how they perceive and forecast danger. Building on cognitive science, we present a model of human safety forecasting structured by reachability analysis. This structure makes learning data-efficient on small datasets, so we can capture each supervisor's idiosyncratic ways of thinking and let designers fit their intelligent systems to a user like a glove to a hand.
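To make the reachability structure concrete, the following is a minimal sketch, not the dissertation's implementation: a forward rollout approximates the value V(x) as the worst safety margin reached over a short horizon, and a single per-supervisor alarm threshold is fit on top of it. The function names, the rollout horizon, and the threshold-fitting rule are illustrative assumptions.

    import numpy as np

    def forecast_min_margin(x0, dynamics, margin, horizon=20, dt=0.1):
        # Roll the nominal dynamics forward and report the minimum safety
        # margin reached over the horizon: a crude stand-in for the
        # reachability value V(x) = min over t of margin(x(t)).
        x = np.array(x0, dtype=float)
        worst = margin(x)
        for _ in range(horizon):
            x = x + dt * np.asarray(dynamics(x), dtype=float)
            worst = min(worst, margin(x))
        return worst

    def fit_supervisor_threshold(states, alarms, dynamics, margin):
        # Fit one scalar threshold per supervisor from a small dataset of
        # (state, did-the-human-alarm) pairs: the supervisor is modeled as
        # alarming whenever the forecasted minimum margin drops below theta.
        values = np.array([forecast_min_margin(x, dynamics, margin) for x in states])
        labels = np.asarray(alarms, dtype=bool)
        def accuracy(theta):
            return float(np.mean((values < theta) == labels))
        return max(np.unique(values), key=accuracy)

Because the reachability rollout carries most of the structure, only the single threshold varies across supervisors, which is what makes fitting it from a handful of labeled alarms plausible.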

We build on these models of human safety judgement to support that judgement through machine choices. After learning each supervisor's unique alarm threshold, robot teams can respect that safe set to decrease supervisory false positives. Extending this approach to anticipate safety concerns ahead of the decision point, we optimize motion as evidence to reject the null hypothesis of danger.
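As a hypothetical illustration of that last step, and not the dissertation's method: among candidate trajectories, a planner can prefer the one whose observed safety margins yield the largest log-likelihood ratio for a "safe" hypothesis over the null hypothesis of danger, here under an assumed Gaussian observer model whose parameters are placeholders.

    import numpy as np

    def log_likelihood(margins, mean, std):
        # Log-likelihood of the observed safety margins under a Gaussian
        # observer model with the given mean and standard deviation.
        m = np.asarray(margins, dtype=float)
        return float(np.sum(-0.5 * ((m - mean) / std) ** 2
                            - np.log(std * np.sqrt(2.0 * np.pi))))

    def most_evident_safe_trajectory(candidates, margin,
                                     safe=(1.0, 0.3), danger=(0.1, 0.3)):
        # Return the candidate trajectory whose margins give the largest
        # evidence (log-likelihood ratio) for "safe" over the null
        # hypothesis "danger".
        def evidence(traj):
            margins = [margin(x) for x in traj]
            return log_likelihood(margins, *safe) - log_likelihood(margins, *danger)
        return max(candidates, key=evidence)

Choosing motion this way treats the trajectory itself as the statistical evidence the supervisor sees, rather than only as a means to the task objective.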

The approaches in this dissertation contribute a mathematical lens for further inquiries into human risk-taking, safety negotiation, and technology learning. By employing the formalisms of intelligent safety to sketch human safety behavior, we imbue machines with a "theory of mind" that is essential to fluent collaboration in our societal systems.
