Description

In machine learning, most models are trained under the assumption that their test data will come from the same distribution as their training data. In the real world, however, this assumption often fails, necessitating a method to detect out-of-distribution (OOD) inputs. To date, prior work has mostly evaluated the case where OOD inputs come from different classes, e.g., an image of a dog passed to a cat-breed classifier. It has not considered OOD inputs of the same class that differ only in style, e.g., a cat under red lighting. In this work, we distinguish these two types as semantic and stylistic OOD data, respectively. We also propose applying a new modality, natural language, to the problem: since both the in-distribution dataset and stylistic OOD shifts can be described in natural language, a model that exploits such descriptions stands to benefit. We use OpenAI's CLIP to encode style-contextual descriptions of our training dataset and, at test time, compare these encodings to that of the input image. Our method, which we call DesCLIPtions, requires no additional training yet outperforms baselines on certain tasks. Overall, we conclude that natural language supervision is a promising direction for OOD detection.
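To make the description-matching mechanism concrete, below is a minimal sketch in Python using the openai/CLIP package. It is an illustration under stated assumptions, not the report's implementation: the example descriptions, the max-similarity score, the image path, and the threshold value are all placeholders.

    import clip
    import torch
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # Hypothetical style-contextual descriptions of the training distribution.
    descriptions = [
        "a photo of a cat indoors in natural lighting",
        "a studio photograph of a cat on a plain background",
    ]
    text_tokens = clip.tokenize(descriptions).to(device)
    image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

    with torch.no_grad():
        image_feat = model.encode_image(image)
        text_feats = model.encode_text(text_tokens)

    # Cosine similarity between the test image and each description.
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    similarities = (image_feat @ text_feats.T).squeeze(0)

    # Flag the input as OOD if it matches no in-distribution description
    # closely enough; the threshold here is an arbitrary placeholder.
    THRESHOLD = 0.25
    is_ood = similarities.max().item() < THRESHOLD
    print(f"max similarity = {similarities.max().item():.3f}, OOD = {is_ood}")

In practice, the set of descriptions and the decision threshold would be chosen to reflect the in-distribution styles being modeled; the sketch only shows that the approach needs no training beyond the pretrained CLIP encoders.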
