Description

The phenomenon of adversarial examples in neural networks has spurred the development of classification methods designed to resist such attacks. Classifiers built on generative models, including Analysis by Synthesis (ABS) introduced by Schott et al. and its extension E-ABS by Ju et al., have achieved state-of-the-art robust accuracy on benchmark datasets such as MNIST and SVHN. Their inference-time complexity, however, scales linearly with the number of classes, limiting their practicality. We evaluate two approaches to speeding up inference in ABS-style models: first, a hierarchical decision-tree framework that achieves accuracy nearly on par with E-ABS in logarithmic time; and second, a scheme that classifies latent vectors based on a prior distribution in constant time. We also provide an algorithm that searches over decision-tree structures, yielding significant accuracy improvements over naive arrangements.
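To illustrate the complexity argument only (this is not the authors' implementation): arranging the classes as a balanced binary tree of binary deciders means an input is routed through at most ceil(log2 C) node evaluations, versus the C per-class model evaluations of flat ABS-style inference. The `build_tree`/`classify` helpers below are hypothetical, and the per-node decider is stubbed out by checking label membership.

```python
def build_tree(classes):
    """Recursively split a class list into a balanced binary tree.

    Leaves are class labels; each internal node stores the set of
    labels routed to its left subtree. In an ABS-style hierarchy,
    each internal node would hold a binary robust model instead.
    """
    if len(classes) == 1:
        return classes[0]  # leaf: a concrete class label
    mid = len(classes) // 2
    left, right = classes[:mid], classes[mid:]
    return (set(left), build_tree(left), build_tree(right))

def classify(tree, true_label):
    """Route an input down the tree, counting decider evaluations.

    The real per-node decision (a binary generative classifier) is
    stubbed here by membership of `true_label` in the left label set.
    """
    evals = 0
    while isinstance(tree, tuple):
        left_labels, left_sub, right_sub = tree
        evals += 1  # one binary-model evaluation per internal node
        tree = left_sub if true_label in left_labels else right_sub
    return tree, evals

tree = build_tree(list(range(10)))  # e.g. the 10 MNIST/SVHN classes
label, evals = classify(tree, 7)
# at most ceil(log2(10)) = 4 evaluations, versus 10 for flat inference
```

The tree's shape matters for accuracy, which is what motivates searching over structures: different splits pit different groups of classes against each other at each binary node.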
