We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories unseen at training time. We prove that the resulting model may be kernelized to learn non-linear transformations under a variety of regularizers. In addition to being one of the first studies of domain adaptation for object recognition, this work develops a general theoretical framework for adaptation that could be applied to non-image data. We present a new image database for studying the effects of visual domain shift on object recognition, and demonstrate that our method improves recognition for categories with few or no target-domain labels, under moderate to large changes in imaging conditions, and even under changes in the feature representation.
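To illustrate the idea of learning a supervised cross-domain transformation that transfers to unseen categories, the sketch below uses a deliberately simplified stand-in: a ridge-regression affine map fit on class-matched source/target pairs from one class of synthetic 2-D data, then evaluated on a held-out class. This is not the paper's metric-learning objective or its kernelized form; all data, parameters, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "source" domain: two classes of 2-D features.
n = 100
Xs = np.vstack([rng.normal([0.0, 0.0], 0.3, (n, 2)),
                rng.normal([3.0, 3.0], 0.3, (n, 2))])
ys = np.repeat([0, 1], n)

# "Target" domain: the same data under a global domain shift
# (a rotation plus an offset), mimicking changed imaging conditions.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([-2.0, -4.0])
Xt = Xs @ R.T + b
yt = ys.copy()

# Supervised adaptation from corresponding pairs of ONE class only
# (class 0): fit W ~ argmin ||Xs0 - [Xt0, 1] W||^2 + lam ||W||^2
# by ridge regression. lam is an illustrative regularizer weight.
aug = lambda X: np.hstack([X, np.ones((len(X), 1))])  # append bias column
Xt0, Xs0 = aug(Xt[:n]), Xs[:n]
lam = 1e-3
W = np.linalg.solve(Xt0.T @ Xt0 + lam * np.eye(3), Xt0.T @ Xs0)

def nn_accuracy(Xq, yq, Xref, yref):
    """1-nearest-neighbour classification accuracy of queries vs. reference."""
    d = ((Xq[:, None, :] - Xref[None, :, :]) ** 2).sum(-1)
    return float((yref[d.argmin(axis=1)] == yq).mean())

# Evaluate on the class NOT used to learn the transformation (class 1):
# because the shift is global, the map learned on class 0 transfers.
before = nn_accuracy(Xt[n:], yt[n:], Xs, ys)
after = nn_accuracy(aug(Xt[n:]) @ W, yt[n:], Xs, ys)
print(f"1-NN accuracy on held-out class: before={before:.2f}, after={after:.2f}")
```

The held-out class is badly misclassified before adaptation (the shift moves it near the wrong source class) and recovered after, echoing the abstract's claim that the learned transformation applies to categories unseen at training time.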
Report EECS-2010-106 includes a two-page Appendix.