Mathias Lecuyer

Monday, October 14, 2019 at 11:00 AM in 400 Cory Hall

Title: Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform

Abstract: Companies increasingly expose machine learning (ML) models trained over sensitive user data to untrusted domains, such as end-user devices and wide-access model stores. This creates a need to control the data’s leakage through these models. In this talk, I will describe Sage, a differentially private (DP) ML platform that bounds the cumulative leakage of training data through models. Sage builds upon the rich literature on DP ML algorithms and contributes pragmatic solutions to two of the most pressing practical challenges of global DP: running out of privacy budget and the privacy-utility tradeoff. To address the former, I will present block composition, a new privacy loss accounting method that leverages the growing-database regime of ML workloads to keep training models endlessly on a data stream while enforcing a global DP guarantee. To address the latter, I will describe privacy-adaptive training, a process that trains a model on growing amounts of data and/or with increasing privacy parameters until, with high probability, the model meets developer-configured quality criteria.
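
To make the privacy-adaptive training idea concrete, below is a minimal Python sketch of such a loop based only on the abstract's description. The function names, signatures, and epsilon schedule are illustrative assumptions, not Sage's actual API, and block composition accounting is omitted.

```python
from typing import Any, Callable, Iterator, List, Optional, Tuple


def privacy_adaptive_training(
    data_stream: Iterator[Any],
    train_dp: Callable[[List[Any], float], Any],    # hypothetical DP trainer: (blocks, epsilon) -> model
    passes_quality: Callable[[Any, float], bool],   # hypothetical DP-validated quality test
    eps_schedule: List[float],                      # increasing per-round privacy parameters
    max_rounds: int = 10,
) -> Tuple[Optional[Any], Optional[float]]:
    """Retrain on growing amounts of data and/or larger privacy parameters
    until the model meets the developer-configured quality criteria."""
    blocks: List[Any] = []
    for r in range(max_rounds):
        blocks.append(next(data_stream))            # grow the training set by one new data block
        eps = eps_schedule[min(r, len(eps_schedule) - 1)]
        model = train_dp(blocks, eps)               # DP training over all accumulated blocks
        if passes_quality(model, eps):              # accept only if the quality target is met
            return model, eps
    return None, None                               # rounds exhausted without meeting the target


# Toy usage with stand-in callables (illustrative only).
toy_stream = iter([list(range(i * 100, (i + 1) * 100)) for i in range(10)])
model, eps = privacy_adaptive_training(
    data_stream=toy_stream,
    train_dp=lambda blocks, eps: {"n": sum(len(b) for b in blocks), "eps": eps},
    passes_quality=lambda m, eps: m["n"] >= 300,    # pretend quality improves with more data
    eps_schedule=[0.5, 1.0, 2.0],
)
```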

Bio: Mathias is a postdoctoral researcher at Microsoft Research, New York. Prior to that, he was a PhD student at Columbia University, working with Roxana Geambasu, Augustin Chaintreau, and Daniel Hsu. He works at the intersection of systems and machine learning, aiming to offer strong semantics and guarantees that system designers can rely on to safely leverage ML in their work. While his work so far has focused primarily on security and privacy guarantees, he is also interested in leveraging ML to understand and optimize systems. At the end of his postdoc, he will join the University of British Columbia in Vancouver as an assistant professor.
