Description

Machine learning has shown great potential for enabling more effective human-computer interaction. This includes artificial intelligence-based interfaces that help users accomplish their desired objectives more effectively. In this technical report, we propose two human-in-the-loop deep reinforcement learning (RL) methods that infer a user's intent from only high-dimensional, noisy user inputs, adapting to the user's inputs and feedback over time in order to assist the user in performing their desired objectives. In Chapter 1, we propose a deep RL approach that learns from human feedback for assistive typing interfaces, which we formulate as contextual bandit problems. In Chapter 2, we propose a method that extends this approach to robotics tasks, which require sequential decision making, by leveraging autonomous pre-training with deep RL. We demonstrate the effectiveness of these approaches using simulated user inputs, real user studies in which participants communicate intent through webcam eye gaze, and a pilot study using brain-computer interfaces.
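
To make the contextual bandit framing of the assistive typing setting concrete, the following is a minimal illustrative sketch, not the report's actual method: an epsilon-greedy contextual bandit in which the context stands in for a high-dimensional, noisy user input (e.g., an eye-gaze feature vector), the arms are candidate keys, and the reward is binary human feedback on whether the selected key matched the user's intent. All class names, dimensions, and the simulated-input model are illustrative assumptions.

```python
# Hypothetical sketch of an assistive-typing contextual bandit.
# Context = noisy user input vector; arms = candidate keys;
# reward = binary human feedback on the selected key.
import numpy as np

class EpsilonGreedyTypingBandit:
    def __init__(self, n_keys, input_dim, epsilon=0.1, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.epsilon = epsilon
        self.lr = lr
        # One linear reward model per key: predicts P(correct | user input).
        self.weights = np.zeros((n_keys, input_dim))

    def select_key(self, user_input):
        """Pick a key for the current noisy user input (the context)."""
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.weights)))  # explore
        scores = self.weights @ user_input                     # exploit
        return int(np.argmax(scores))

    def update(self, user_input, key, feedback):
        """Update the chosen key's reward model from binary human feedback."""
        pred = 1.0 / (1.0 + np.exp(-self.weights[key] @ user_input))
        # Logistic-regression gradient step toward the observed feedback.
        self.weights[key] += self.lr * (feedback - pred) * user_input

# Toy usage with simulated user inputs: each intended key produces a noisy
# input vector centered on its one-hot encoding.
n_keys, input_dim = 5, 5
bandit = EpsilonGreedyTypingBandit(n_keys, input_dim)
rng = np.random.default_rng(1)
for step in range(2000):
    intended = rng.integers(n_keys)
    user_input = np.eye(input_dim)[intended] + 0.5 * rng.normal(size=input_dim)
    chosen = bandit.select_key(user_input)
    feedback = 1.0 if chosen == intended else 0.0  # simulated human feedback
    bandit.update(user_input, chosen, feedback)
```

The report's deep RL approach replaces the linear per-key model with a learned neural policy and handles richer feedback signals; this sketch only illustrates the bandit interaction loop of selecting an interface action from a noisy context and updating from user feedback.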
