We introduce an autonomous navigation framework for ground-based mobile robots that incorporates a known dynamics model into training, allows for planning in unknown, partially observable environments, and solves the full navigation problem of goal-directed, collision-avoidant movement on a robot with complex, non-linear dynamics. We leverage visual semantics through a trained policy that, given a desired goal location and a first-person image of the environment, predicts a low-frequency guiding control, or waypoint. We use the waypoint produced by our policy, together with robust feedback controllers and known dynamics models, to generate high-frequency control outputs. Our approach allows visual semantics to be learned during training while providing a simple methodology for incorporating robust dynamics models. Our experiments demonstrate that our method reasons about statistical regularities of the visual world, enabling effective planning in unknown spaces. Additionally, we demonstrate that our formulation is robust to the particulars of low-level control, achieving over twice the performance of a comparable end-to-end learning method.
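The two-tier scheme described above can be sketched in simplified form: a policy proposes a waypoint at low frequency, and a feedback controller tracks it at high frequency through a known dynamics model. The sketch below is illustrative only; it substitutes a stand-in for the learned policy, a simple unicycle model for the robot's actual dynamics, and a proportional tracking law for the paper's controllers, so all function names and gains are assumptions.

```python
import math

def dummy_policy(image, goal, state):
    """Stand-in for the learned waypoint policy: propose a point
    at most 1 m ahead in the direction of the goal (hypothetical)."""
    x, y, _theta = state
    gx, gy = goal
    dx, dy = gx - x, gy - y
    dist = math.hypot(dx, dy)
    if dist < 1e-6:            # already at the goal; waypoint = goal
        return (gx, gy)
    step = min(1.0, dist)
    return (x + step * dx / dist, y + step * dy / dist)

def track_waypoint(state, waypoint, dt=0.05, k_v=1.0, k_w=2.0):
    """One high-frequency feedback step toward the waypoint using a
    unicycle dynamics model (an assumption, not the paper's model)."""
    x, y, theta = state
    wx, wy = waypoint
    dist = math.hypot(wx - x, wy - y)
    heading_err = math.atan2(wy - y, wx - x) - theta
    # Wrap the heading error into (-pi, pi].
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    v = k_v * dist * math.cos(heading_err)  # linear velocity command
    w = k_w * heading_err                   # angular velocity command
    # Integrate the known dynamics forward one control step.
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

state = (0.0, 0.0, 0.0)   # (x, y, heading)
goal = (3.0, 2.0)
for _ in range(40):                     # low-frequency planning loop
    waypoint = dummy_policy(None, goal, state)
    for _ in range(20):                 # high-frequency control loop
        state = track_waypoint(state, waypoint)

print(math.hypot(goal[0] - state[0], goal[1] - state[1]))
```

The separation of concerns is the point of the sketch: the (here trivial) policy only ever reasons about waypoints, while the controller and dynamics model handle the robot-specific details of producing actuator commands.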