High-quality human avatars are an important part of compelling virtual reality (VR) experiences. Animating an avatar to match the movement of its user, however, is a fundamentally difficult task, as most VR systems only track the user's head and hands, leaving the rest of the body undetermined. In this report, we introduce Temporal IK, a data-driven approach to predicting full-body poses from standard VR headset and controller inputs. We describe a recurrent neural network that, given a sequence of positions and rotations from VR tracked objects, predicts the corresponding full-body poses in a manner that exploits the temporal consistency of human motion. To train and evaluate this model, we recorded several hours of motion capture data of subjects using VR. The model is integrated into an end-to-end solution within Unity, a popular game engine, for ease of use. We find that the model produces natural-looking motion, particularly in the upper body.
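The core idea, a recurrent network that maps per-frame tracker readings to full-body poses while a hidden state carries context across frames, can be sketched as follows. This is a minimal illustrative example, not the report's actual architecture: the Elman-style cell, the feature layout (headset plus two controllers, each a 3-D position and a 4-D quaternion), the joint count, and the random weights standing in for trained parameters are all assumptions made for the sketch.

```python
import numpy as np

# Illustrative dimensions (assumed, not from the report):
# 3 tracked objects (headset + 2 controllers), each contributing a
# 3-D position and a 4-D quaternion rotation -> 21 inputs per frame.
INPUT_DIM = 3 * (3 + 4)
HIDDEN_DIM = 64
OUTPUT_DIM = 20 * 4   # e.g. quaternions for 20 body joints (assumed)

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(0, 0.1, (HIDDEN_DIM, INPUT_DIM))
W_hh = rng.normal(0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
b_h = np.zeros(HIDDEN_DIM)
W_hy = rng.normal(0, 0.1, (OUTPUT_DIM, HIDDEN_DIM))
b_y = np.zeros(OUTPUT_DIM)

def predict_poses(tracker_seq):
    """Run an Elman-style RNN over a (T, INPUT_DIM) sequence of
    tracker readings, emitting one (OUTPUT_DIM,) pose vector per
    frame. The hidden state h carries information between frames,
    which is how a recurrent model can exploit the temporal
    consistency of human motion."""
    h = np.zeros(HIDDEN_DIM)
    poses = []
    for x in tracker_seq:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)   # recurrent update
        poses.append(W_hy @ h + b_y)             # per-frame pose
    return np.stack(poses)

# 90 frames of dummy tracker input (~1 s at a 90 Hz VR refresh rate).
frames = rng.normal(size=(90, INPUT_DIM))
poses = predict_poses(frames)
print(poses.shape)  # (90, 80)
```

In a real system such as the Unity integration described above, the per-frame pose vector would drive the avatar's skeleton each rendered frame, with the hidden state persisted between calls.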