Description
Modern deep learning techniques are data-hungry, which presents a problem in robotics because real-world robotic data is difficult to collect. Simulated data is cheap and scalable, but crossing the “reality gap” to use simulated data for real-world tasks is challenging. In this thesis, we discuss using synthetic data to learn visual models that allow robots to perform manipulation tasks in the real world. We begin by discussing domain randomization, a technique for bridging the reality gap by massively randomizing the visual properties of the simulator. We demonstrate that, using domain randomization, synthetic data alone can be used to train a deep neural network to localize objects accurately enough for a robot to grasp them in the real world.

The remainder of the thesis discusses extensions of this approach to a broader range of objects and scenes. First, inspired by the success of domain randomization for visual data, we introduce a data generation pipeline that creates millions of unrealistic, procedurally generated random objects, removing the assumption that 3D models of the target objects are available at training time. Second, we reformulate the problem from pose prediction to grasp prediction and introduce a generative model architecture that learns a distribution over grasps, allowing our models to handle pose ambiguity and grasp a wide range of objects with a single neural network. Third, we introduce an attention mechanism for 3D data. We demonstrate that this attention mechanism enables higher-fidelity neural rendering, and that models learned this way can be fine-tuned to perform accurate pose estimation even when the camera intrinsics are unknown at training time.

We conclude by surveying recent applications and extensions of domain randomization in the literature and suggesting several promising directions for research in sim-to-real transfer for robotics.
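To make the idea of domain randomization concrete, the sketch below shows how per-scene visual randomization parameters might be sampled before rendering each synthetic training image. This is an illustrative sketch only, not code from the thesis: the specific properties, ranges, and field names are assumptions, and a real pipeline would hand these parameters to a renderer or physics simulator.

```python
# Illustrative sketch of domain randomization (not from the thesis).
# Every range and field name below is an assumption chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sample_randomization():
    """Draw one set of visual randomization parameters for a rendered scene."""
    return {
        # Random RGB colors/textures for each object and the table surface.
        "object_colors": rng.uniform(0.0, 1.0, size=(10, 3)),
        "table_color": rng.uniform(0.0, 1.0, size=3),
        # Random number, position, and intensity of light sources.
        "light_positions": rng.uniform(-2.0, 2.0, size=(rng.integers(1, 5), 3)),
        "light_intensity": rng.uniform(0.5, 1.5),
        # Small random perturbation of the camera pose and field of view.
        "camera_offset": rng.normal(0.0, 0.05, size=3),
        "camera_fov_deg": rng.uniform(40.0, 60.0),
        # Additive pixel noise applied to the rendered image.
        "pixel_noise_std": rng.uniform(0.0, 0.05),
    }

# Each synthetic training image would be rendered under a fresh sample.
params = sample_randomization()
```

Because every image is rendered under a freshly sampled set of parameters, a network trained on such data is pushed to treat visual appearance as noise and to rely on cues that also hold in real images.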