Abstract
Deep learning models are vulnerable to adversarial examples: maliciously perturbed inputs that compel models to make incorrect predictions with high confidence. We present an analysis of adversarial examples in the context of visual decompilers. Using the image-to-LaTeX task as a baseline for structured prediction problems, we show that both targeted and non-targeted adversarial examples can fool the model with only minimal perturbation. Additionally, we apply two detection schemes and discuss their limitations. Finally, we propose, and subsequently break, two prevention strategies, one of which involves a novel attack for quantized adversarial examples.
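As a rough illustration of the non-targeted attacks the abstract mentions, the sketch below shows a fast-gradient-sign style perturbation against a generic PyTorch image-to-LaTeX model. It is a minimal sketch under stated assumptions, not the report's actual attack: the names model, loss_fn, untargeted_attack, and the epsilon value are all illustrative.

    # Minimal sketch of a non-targeted gradient-sign attack, assuming a PyTorch
    # image-to-LaTeX model whose loss can be computed against the correct token
    # sequence. All names (model, loss_fn, epsilon) are illustrative assumptions.
    import torch

    def untargeted_attack(model, loss_fn, image, true_tokens, epsilon=2.0 / 255):
        """Nudge `image` so the model's loss on the correct LaTeX output increases."""
        image = image.clone().detach().requires_grad_(True)
        loss = loss_fn(model(image), true_tokens)  # loss w.r.t. the correct output
        loss.backward()
        # Step in the sign of the gradient (which increases the loss),
        # then clamp so the perturbed image stays a valid pixel array.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

A targeted variant would instead minimize the loss toward an attacker-chosen token sequence, i.e. step in the negative gradient direction of that target loss.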
Title
Adversarial Examples for Visual Decompilers
Published
2017-05-12
Full Collection Name
Electrical Engineering & Computer Sciences Technical Reports
Other Identifiers
EECS-2017-81
Type
Text
Extent
46 p
Archive
The Engineering Library