Description

Multi-task learning fundamentally involves using multiple tasks to aid generalization. In this report, we first investigate the motivation for research in multi-task learning, showing that when multiple tasks are trained together on the same neural network, performance may benefit if the tasks are related and suffer if they are not. We then study multi-task architectures and evaluate a mixture-of-experts model in depth. We show that in experiments on the CIFAR-100 MTL domain, multi-clustering outperforms prior architectures in both accuracy and computation time. Next, we apply multi-task learning to object detection using the BDD100K dataset. We describe our method of training the object detection task jointly with the self-supervised tasks of angle and distance prediction and colorization, demonstrating performance benefits. Lastly, we present our work in few-shot learning, where we propose a method to train a task for which we have little or no data by exploiting the task's compositionality.
