Description
I analyze the behavior of learning systems in adversarial environments. My thesis is that learning algorithms are vulnerable to attacks that can transform the learner into a liability for the system it is intended to aid, but that by critically analyzing potential security threats, the extent of these threats can be assessed, appropriate learning techniques can be selected to minimize the adversary's impact, and system failures can be averted.
I present a systematic approach for identifying and analyzing threats against a machine learning system. I examine real-world learning systems, assess their vulnerabilities, demonstrate practical attacks against their learning mechanisms, and propose defenses that can successfully mitigate the effectiveness of such attacks. In doing so, I provide machine learning practitioners with a systematic methodology for assessing a learner's vulnerability and developing defenses to strengthen their systems against such threats. I also examine and answer theoretical questions about the limits of adversarial contamination and classifier evasion.