
Description

We aim to bridge the gap between end-to-end learning and traditional pipeline-based approaches for autonomous vehicles (AVs). In this work, we replace the traditional planning and control algorithms of modular approaches with an end-to-end learned policy, developing a hybrid of the two approaches. Our learned policy takes a bird's-eye-view representation of the world as input and produces control actions such as braking, steering, and acceleration. To support the development of this learned policy, we introduce caRLot, a novel OpenAI gym environment that builds atop the open-source Pylot AV platform, providing configurable abstractions as well as an interface to the CARLA simulator. We use caRLot to learn a model-free reinforcement learning policy that replaces planning and control, and compare its performance and runtime against several state-of-the-art approaches. We find that our hybrid approach achieves a notable runtime improvement over a modular driving system while retaining a significant interpretability advantage over end-to-end systems.
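The abstract does not show caRLot's actual interface. As a rough illustration only, the sketch below shows what a gym environment with a bird's-eye-view observation and steer/throttle/brake actions might look like, using the classic gym.Env API (reset returning an observation, step returning a 4-tuple). Every name, shape, and the reward logic here is a hypothetical assumption, not caRLot's real design.

import gym
import numpy as np
from gym import spaces


class BirdsEyeViewDrivingEnv(gym.Env):
    """Hypothetical sketch: BEV image in, (steer, throttle, brake) out."""

    def __init__(self, bev_size=(84, 84)):
        super().__init__()
        # Bird's-eye-view observation: a single-channel grid rendered
        # around the ego vehicle (assumed encoding, not caRLot's).
        self.observation_space = spaces.Box(
            low=0.0, high=1.0, shape=(*bev_size, 1), dtype=np.float32
        )
        # Continuous controls: steering in [-1, 1], throttle and brake
        # in [0, 1], mirroring typical CARLA vehicle control commands.
        self.action_space = spaces.Box(
            low=np.array([-1.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([1.0, 1.0, 1.0], dtype=np.float32),
        )
        self._bev_size = bev_size

    def reset(self):
        # A real environment would reset the simulator episode here and
        # render the initial BEV from the ego vehicle's pose.
        return np.zeros((*self._bev_size, 1), dtype=np.float32)

    def step(self, action):
        # A real environment would apply (steer, throttle, brake) to the
        # simulated vehicle, advance the simulator one tick, and compute
        # a reward (e.g., progress minus collision penalties).
        obs = np.zeros((*self._bev_size, 1), dtype=np.float32)
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

A model-free RL algorithm can then be trained against this interface in the usual way, sampling actions from the policy given the BEV observation and updating on the returned rewards.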
