Mathias Lécuyer

Friday, Oct 5, 2018 at 11:30 AM in 380 Soda Hall

Title: Certified Robustness to Adversarial Examples with Differential Privacy

Abstract: Enormous technological progress in fitting accurate, complex models, such as deep neural networks (DNNs), over large amounts of data has driven the widespread adoption of such models across a broad range of applications. Examples include safety- and security-critical applications, such as self-driving cars and face recognition-based authentication systems, where robustness against adversarial behavior is crucial. Unfortunately, it has become clear in recent years that DNNs are extremely vulnerable to a variety of attacks, including adversarial example attacks, in which the adversary finds small perturbations to correctly classified inputs that cause a DNN to produce erroneous predictions. While significant effort has been devoted to defending against these attacks, most existing defenses are best-effort and fail against new generations of the attack. In this talk, I describe PixelDP, the first certifiably robust defense against adversarial examples that scales to large, real-world DNNs and datasets. PixelDP leverages a novel connection between robustness against adversarial examples and differential privacy, a cryptographically inspired formalism from the privacy domain. Using PixelDP, I developed the first version of Google's state-of-the-art Inception DNN for ImageNet that has non-trivial guaranteed accuracy under arbitrary, norm-bounded adversarial example attacks.
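
To give a flavor of the robustness/privacy connection, here is a sketch of the core bound as stated in the PixelDP paper (notation reproduced from memory; A denotes the randomized score vector, epsilon/delta the privacy parameters, and L the perturbation budget). If A is (epsilon, delta)-differentially private with respect to p-norm perturbations of size L on the input pixels, and its scores lie in [0, 1], then expected-output stability gives, for any perturbation alpha with \(\|\alpha\|_p \le L\) and every label \(k\):

\[ \mathbb{E}[A_k(x)] \le e^{\epsilon}\, \mathbb{E}[A_k(x + \alpha)] + \delta. \]

It follows that the label \(k\) predicted at \(x\) remains the argmax under any such perturbation whenever

\[ \mathbb{E}[A_k(x)] > e^{2\epsilon} \max_{i \ne k} \mathbb{E}[A_i(x)] + (1 + e^{\epsilon})\,\delta, \]

which is the certification check PixelDP evaluates at prediction time, with the expectations estimated by Monte Carlo sampling of the injected noise.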

Bio: Mathias Lécuyer is a doctoral candidate in the Computer Science Department at Columbia University. He holds a B.Sc. in Mathematics and Physics from École Polytechnique in France and M.Sc. degrees in Computer Science from École Polytechnique and Columbia University. Mathias' interests span broad areas of computer science research, including software systems, security and privacy, statistics, and machine learning. His dissertation focuses on enabling the promises of big data without imposing undue risks on individuals. To this end, he builds software systems and machine learning methods to address three major vectors of risk emerging in data-driven ecosystems: the aggressive collection and wide access policies often applied to user data; the lack of external transparency into how user data is used and for what purposes; and new classes of attacks unique to machine learning systems, such as adversarial examples.
