Description
In this dissertation, I present approaches for learning transferable representations under several scenarios: 1) when the source domain has only limited labels, as few as one label per class; 2) when there are multiple labeled source domains; and 3) when there are multiple unseen, unlabeled target domains. These approaches generalize across data modalities (e.g., vision and language) and can be readily combined to address related domain transfer settings (e.g., adapting from multiple sources with limited labels), enabling models to generalize beyond the source domains. Several of these works transfer knowledge from simulated data to real-world data in order to reduce the need for expensive manual annotation. Finally, I present our pioneering work on building a LiDAR point cloud simulator, which has since enabled a substantial body of work on domain adaptation for LiDAR point cloud segmentation.