Description
This work presents Goal-Induced Inverse Reinforcement Learning (GIIRL), an IRL framework that learns a transferable reward function and performs competitively with imitation-learning algorithms. By learning rewards within the IRL framework, our algorithm obtains a more generalizable reward function that can solve different tasks by changing only the goal specification. We show that the learned reward function adapts to the task at hand: conditioning on a given goal-instruction switches the reward so that it reflects the true, underlying reward the instruction intends. We also show that the learned reward is shaped, which makes it easier for reinforcement learning agents to learn from. Furthermore, by training the policy and reward models jointly, we efficiently obtain a policy that performs on par with other imitation-learning policies. GIIRL achieves comparable, if not better, results than behavioral cloning.
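
To make the idea of a goal-conditioned reward trained jointly with a policy more concrete, the sketch below shows one plausible way to set this up in PyTorch. It is a minimal illustration under assumptions, not the actual GIIRL implementation: the class names (`GoalConditionedReward`, `GoalConditionedPolicy`, `irl_update`), the network sizes, and the adversarial (AIRL-style) reward objective are all choices made here for clarity; the paper's exact architecture, goal encoding, and RL algorithm may differ.

```python
import torch
import torch.nn as nn

class GoalConditionedReward(nn.Module):
    """Reward model r(s, a, g). Conditioning on a goal embedding lets one
    network represent different task rewards by switching the goal input."""
    def __init__(self, state_dim, action_dim, goal_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, goal):
        return self.net(torch.cat([state, action, goal], dim=-1)).squeeze(-1)

class GoalConditionedPolicy(nn.Module):
    """Gaussian policy pi(a | s, g), conditioned on the same goal embedding."""
    def __init__(self, state_dim, action_dim, goal_dim, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state, goal):
        h = self.backbone(torch.cat([state, goal], dim=-1))
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

def irl_update(reward_fn, policy, expert_batch, policy_batch, r_opt, pi_opt):
    """One joint update (assumed surrogate): the reward model is pushed to
    score expert transitions above policy transitions, and the policy is
    pushed to maximize the current learned reward."""
    s_e, a_e, g_e = expert_batch   # expert states, actions, goal embeddings
    s_p, a_p, g_p = policy_batch   # transitions sampled from the policy

    # Reward step: logistic loss separating expert from policy transitions.
    logits_e = reward_fn(s_e, a_e, g_e)
    logits_p = reward_fn(s_p, a_p, g_p)
    r_loss = (nn.functional.softplus(-logits_e).mean()
              + nn.functional.softplus(logits_p).mean())
    r_opt.zero_grad(); r_loss.backward(); r_opt.step()

    # Policy step: maximize the learned reward on reparameterized actions.
    # (A full RL algorithm such as PPO or SAC could be used here instead.)
    dist = policy(s_p, g_p)
    a_new = dist.rsample()
    pi_loss = -reward_fn(s_p, a_new, g_p).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
    return r_loss.item(), pi_loss.item()
```

The key design point the sketch tries to convey is that the goal embedding is an input to the reward network itself, so transferring to a new task amounts to feeding in a different goal specification rather than retraining the reward from scratch.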