Description

We present several methods for improving the robustness of, and providing assurance for, control stacks that involve learning-enabled components, particularly deep learning algorithms. We leverage uncertainty quantification as a primary tool toward this goal, and we present several algorithms for translating arbitrary measures of uncertainty into practical and usable assurances. In our analysis, we separately address methods where our learning-enabled components can make predictions and receive ground truth information in real time, and methods where ground truth data is not available at runtime. In the first case, we provide exact assurances on the behavior of our control stack; in the second, we empirically demonstrate robustness according to a desired operating specification. We discuss applications of these methods in practical systems, including autonomous vehicles, quadrupedal robots, and autonomous aircraft.
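The abstract does not name the specific algorithms used for this translation. As one illustrative (and assumed, not author-confirmed) instance of converting an arbitrary uncertainty measure into a usable assurance, the Python sketch below applies split conformal prediction: given residuals on a held-out calibration set where ground truth is available, it computes a quantile that yields prediction intervals with a user-specified coverage level. All model outputs and data here are hypothetical placeholders.

    # Illustrative sketch only: split conformal prediction, one common way to
    # translate a heuristic uncertainty score into a coverage assurance.
    import numpy as np

    def calibrate(residuals: np.ndarray, alpha: float) -> float:
        """Return the conformal quantile q such that, for a fresh example,
        P(|y - y_hat| <= q) >= 1 - alpha (marginally, under exchangeability)."""
        n = len(residuals)
        # Finite-sample-corrected quantile level.
        level = np.ceil((n + 1) * (1 - alpha)) / n
        return np.quantile(residuals, min(level, 1.0), method="higher")

    # Hypothetical usage: residuals from a regression model on a calibration
    # split where ground truth y_cal is available.
    rng = np.random.default_rng(0)
    y_cal = rng.normal(size=500)
    y_hat_cal = y_cal + rng.normal(scale=0.3, size=500)  # stand-in predictions
    q = calibrate(np.abs(y_cal - y_hat_cal), alpha=0.1)

    # At runtime, each new prediction y_hat carries the assurance that
    # y lies in [y_hat - q, y_hat + q] with probability >= 90%.
    y_hat_new = 0.42
    print(f"90% prediction interval: [{y_hat_new - q:.3f}, {y_hat_new + q:.3f}]")

This style of exact, distribution-free assurance matches the first regime described above, where ground truth is available to calibrate against; in the second regime, without runtime ground truth, robustness would instead be demonstrated empirically against an operating specification, as the abstract states.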
