Inverse reinforcement learning (IRL; Ng & Russell, 2000) is the setting in which an agent infers a reward function from expert demonstrations. Meta-learning is the problem in which an agent is trained on a collection of different but related environments or tasks and must learn to adapt quickly to new tasks. Meta inverse reinforcement learning (meta IRL) therefore aims to infer reward functions that generalize across multiple tasks. However, the rewards learned by current meta IRL algorithms appear highly susceptible to overfitting to the training tasks, and during finetuning they are sometimes unable to adapt quickly to the test environment.
In this paper, we contribute a general framework for meta IRL that jointly meta-learns both policies and reward networks. We first show that applying this modification with a gradient-based approach improves upon an existing meta IRL algorithm, Meta-AIRL (Gleave & Habryka, 2018). We also propose an alternative method based on contextual RNN meta-learners. We evaluate our algorithms against a single-task baseline and the original Meta-AIRL algorithm on a collection of continuous control tasks, and we conclude with suggestions for future research.
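To make the gradient-based meta-learning idea concrete, the following is a minimal MAML-style sketch on a toy problem; it is not the paper's method. The one-dimensional parameter `theta`, the quadratic per-task losses, and all function names are illustrative assumptions standing in for the reward-network parameters and task losses.

```python
# Toy gradient-based meta-learning (MAML-style) sketch.
# Assumption: a scalar "reward parameter" theta; each task t is a
# quadratic loss (theta - target_t)^2. All names are illustrative.

def loss(theta, target):
    return (theta - target) ** 2

def grad(theta, target):
    # Analytic gradient of the quadratic task loss.
    return 2.0 * (theta - target)

def maml_train(targets, theta=0.0, alpha=0.1, beta=0.05, steps=500):
    """One inner gradient step per task, then a meta-update that
    differentiates through that inner step."""
    for _ in range(steps):
        meta_grad = 0.0
        for t in targets:
            adapted = theta - alpha * grad(theta, t)  # inner adaptation
            # For this quadratic loss, d(adapted)/d(theta) = 1 - 2*alpha,
            # so the meta-gradient has a closed form.
            meta_grad += grad(adapted, t) * (1.0 - 2.0 * alpha)
        theta -= beta * meta_grad / len(targets)
    return theta

targets = [1.0, 2.0, 3.0]           # training "tasks"
theta_meta = maml_train(targets)    # meta-learned initialization
# theta_meta settles near the task mean (2.0), so a single inner
# gradient step adapts it quickly to any new task from this family.
new_task = 2.5
adapted = theta_meta - 0.1 * grad(theta_meta, new_task)
```

The point of the sketch is the structure, not the toy loss: the meta-objective is the post-adaptation loss, so the initialization is optimized for fast finetuning rather than for average performance on the training tasks.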