Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to test-time evasion attacks (i.e., adversarial examples): inputs specifically designed by an adversary to cause a neural network to misclassify them. This vulnerability makes it concerning to apply neural networks in security-critical areas.
In this dissertation, we introduce a general framework for evaluating the robustness of neural networks through optimization-based methods. We apply our framework to two different domains, image recognition and automatic speech recognition, and find that it provides state-of-the-art results for both. To further demonstrate the power of our methods, we apply our attacks to break 14 defenses that have been proposed to alleviate adversarial examples.
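To make the idea of an optimization-based evaluation concrete, the following is a minimal sketch of a gradient-based attack in PyTorch. It illustrates only the general technique of searching for a small perturbation by minimizing a combined misclassification-plus-distortion objective; the variable names (`model`, `x`, `target`, `c`) are hypothetical, and this is not the dissertation's exact loss formulation.

```python
# Minimal sketch of an optimization-based evasion attack (illustrative only).
# Assumes `model` is a differentiable classifier, `x` is a correctly
# classified input in [0, 1], and `target` is the adversary's desired label.
import torch
import torch.nn.functional as F

def optimization_attack(model, x, target, c=1.0, steps=1000, lr=0.01):
    delta = torch.zeros_like(x, requires_grad=True)    # perturbation variable
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(x + delta, 0.0, 1.0)         # stay in a valid input range
        logits = model(adv)
        # Trade off reaching the target class against perturbation size.
        loss = F.cross_entropy(logits, target) + c * delta.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(x + delta, 0.0, 1.0).detach()
```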
We then turn to the problem of designing a secure classifier. Given this apparently fundamental vulnerability of neural networks to adversarial examples, instead of taking an existing classifier and attempting to make it robust, we construct a new classifier that is provably robust by design under a restricted threat model. We consider the domain of malware classification and construct a neural network classifier that cannot be fooled by an insertion adversary, who can only insert new functionality and not change existing functionality.
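One standard way such insertion robustness can be obtained is to make the classifier monotone in its features, so that adding functionality can only raise (never lower) the malware score. The sketch below illustrates that idea with a small network whose weights are constrained to be non-negative; it is a hypothetical construction for illustration, not necessarily the exact design used in the dissertation.

```python
# Sketch of a monotone classifier over binary presence features.
# With non-negative weights and monotone activations, flipping any
# feature from 0 to 1 (i.e., inserting functionality) can only increase
# the output score, so an insertion-only adversary cannot make a
# detected sample appear benign. Illustrative construction only.
import torch
import torch.nn as nn

class MonotoneNet(nn.Module):
    def __init__(self, n_features, hidden=128):
        super().__init__()
        self.w1 = nn.Parameter(torch.rand(n_features, hidden) * 0.01)
        self.w2 = nn.Parameter(torch.rand(hidden, 1) * 0.01)
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, x):                 # x: batch of binary feature vectors
        w1 = torch.relu(self.w1)          # project weights to be non-negative
        w2 = torch.relu(self.w2)
        h = torch.relu(x @ w1 + self.b1)  # monotone hidden layer
        return h @ w2 + self.b2           # higher score = more malicious
```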
We hope this dissertation will provide a useful starting point for both evaluating and constructing neural networks robust in the presence of an adversary.
Title
Evaluation and Design of Robust Neural Network Defenses
Published
2018-08-10
Full Collection Name
Electrical Engineering & Computer Sciences Technical Reports
Other Identifiers
EECS-2018-118
Type
Text
Extent
173 p.
Archive
The Engineering Library
Usage Statement
Researchers may make free and open use of the UC Berkeley Library’s digitized public domain materials. However, some materials in our online collections may be protected by U.S. copyright law (Title 17, U.S.C.). Use or reproduction of materials protected by copyright beyond that allowed by fair use (Title 17, U.S.C. § 107) requires permission from the copyright owners. The use or reproduction of some materials may also be restricted by terms of University of California gift or purchase agreements, privacy and publicity rights, or trademark law. Responsibility for determining rights status and permissibility of any use or reproduction rests exclusively with the researcher. To learn more or make inquiries, please see our permissions policies (https://www.lib.berkeley.edu/about/permissions-policies).