Description
While prior work on generalization metrics has been dominated by computer vision, in this work we conduct one of the first analyses of generalization metrics in natural language processing (NLP). We study 36 generalization metrics spanning a variety of motivations and theories, with the goal of understanding how well each metric predicts the generalization of models common in NLP. We focus in particular on shape metrics (generalization metrics derived from the shape of the empirical distribution of eigenvalues of weight correlation matrices) and are among the first to consider out-of-distribution generalization when evaluating the effectiveness of generalization metrics.
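As a concrete illustration of the idea behind shape metrics, the sketch below computes the empirical spectral density (ESD) of a weight correlation matrix and a simple tail-exponent statistic. This is a minimal, hypothetical example: the specific estimator (a Hill estimator over the top eigenvalues) and the tail-fraction choice are assumptions for illustration, not the exact metrics studied in the paper.

```python
import numpy as np

def esd_shape_metric(W, k=None):
    """Sketch of a shape metric: eigenvalues of the weight
    correlation matrix W^T W, summarized by a Hill estimate
    of the power-law tail exponent (an illustrative choice)."""
    # Eigenvalues of the correlation matrix (symmetric PSD, so eigvalsh).
    eigs = np.sort(np.linalg.eigvalsh(W.T @ W))[::-1]  # descending
    # Tail size: top 10% of eigenvalues by default (an assumption).
    k = k or max(2, len(eigs) // 10)
    tail = eigs[:k]
    # Hill estimator of the tail index alpha over the top-k eigenvalues.
    alpha = 1.0 + k / np.sum(np.log(tail / tail[-1]))
    return eigs, alpha

# Toy "weight matrix" standing in for a trained layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))
eigs, alpha = esd_shape_metric(W)
```

In the heavy-tailed self-regularization view, a smaller, stable tail exponent on trained weights is the kind of distributional "shape" these metrics summarize; here the random matrix merely shows the mechanics.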
We find that shape metrics are a promising category of generalization metrics: they are the best among those we consider at predicting generalization performance throughout training, and they exhibit characteristics of "ideal" generalization metrics. Interestingly, many of the generalization metrics we consider exhibit behavior reminiscent of Simpson's paradox when related to generalization performance. Moreover, the generalization metrics we consider are generally robust to changes in data distribution, though there are signs that this robustness is limited.