Description

A predictive model's utility lies in its ability to generalize to data it has not seen. Unfortunately, this ability is difficult to measure reliably, since doing so requires reasoning about the model's interactions with unknown environments. The generalization of deep learning models has been studied extensively for years, and recent work has increasingly explored generalization metrics that predict a model's generalization performance.

While prior work on generalization metrics has been dominated by computer vision, in this work we conduct one of the first analyses of generalization metrics in natural language processing (NLP). We study 36 generalization metrics spanning a variety of theoretical motivations, with the goal of understanding the degree to which each metric is appropriate for predicting the generalization of models common in NLP. We focus in particular on shape metrics (generalization metrics derived from the shape of the empirical distribution of eigenvalues of weight correlation matrices) and are among the first to consider out-of-distribution generalization when evaluating the effectiveness of generalization metrics.
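To make the notion of a shape metric concrete, the sketch below computes the empirical spectral density (ESD) of a layer's weight correlation matrix W^T W and extracts one representative shape metric: a power-law exponent for the upper tail of the ESD. The Hill-style estimator and the `tail_fraction` parameter here are illustrative assumptions, not necessarily the exact fitting procedure used in the paper.

```python
import numpy as np

def esd(weight: np.ndarray) -> np.ndarray:
    """Eigenvalues of the weight correlation matrix W^T W, sorted descending."""
    corr = weight.T @ weight
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def hill_alpha(eigs: np.ndarray, tail_fraction: float = 0.1) -> float:
    """Power-law exponent of the ESD's upper tail via a Hill-style MLE.

    `tail_fraction` is an illustrative choice of how many of the largest
    eigenvalues to treat as the heavy tail.
    """
    k = max(2, int(len(eigs) * tail_fraction))
    tail = eigs[:k]
    x_min = tail[-1]  # smallest eigenvalue included in the tail
    return 1.0 + k / np.sum(np.log(tail / x_min))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((768, 768)) / np.sqrt(768)  # toy, untrained layer
    eigs = esd(W)
    print(f"largest eigenvalue:  {eigs[0]:.3f}")
    print(f"tail exponent alpha: {hill_alpha(eigs):.3f}")
```

For a randomly initialized layer like the toy one above, the ESD is not actually heavy-tailed; shape metrics of this kind are motivated by heavy-tailed self-regularization theory, under which training reshapes the tail of the ESD, and it is that change in shape the metric is meant to capture.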

We find that shape metrics are a promising category of generalization metrics: among those we consider, they are the best at predicting generalization performance throughout training, and they show characteristics of being "ideal" generalization metrics. Interestingly, many of the generalization metrics we consider exhibit behavior reminiscent of Simpson's paradox when related to generalization performance. Moreover, the generalization metrics we consider are generally robust to changes in data distribution, although there are signs that this robustness is limited.
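The Simpson's-paradox-like behavior can be illustrated with synthetic numbers (not drawn from the paper's experiments): a metric may correlate positively with generalization within each group of models, say models sharing an architecture, while the correlation over the pooled set of models reverses sign.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical groups of models; all values are synthetic and for
# illustration only. Within each group, the metric tracks generalization
# positively; the group offsets reverse the pooled trend.
m_a = rng.uniform(0.0, 1.0, 50)                       # metric, group A
g_a = 0.60 + 0.10 * m_a + rng.normal(0, 0.005, 50)    # generalization, group A
m_b = rng.uniform(2.0, 3.0, 50)                       # metric, group B
g_b = 0.10 + 0.10 * (m_b - 2.0) + rng.normal(0, 0.005, 50)

within_a = np.corrcoef(m_a, g_a)[0, 1]
within_b = np.corrcoef(m_b, g_b)[0, 1]
pooled = np.corrcoef(np.r_[m_a, m_b], np.r_[g_a, g_b])[0, 1]

print(f"within group A: {within_a:+.2f}")  # positive
print(f"within group B: {within_b:+.2f}")  # positive
print(f"pooled:         {pooled:+.2f}")    # negative: Simpson's paradox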
