Rahul Sharma

April 12, 2023 at 11:00 AM, on Zoom and in Soda Hall

Can Secure Inference be as Fast as Plaintext?

Abstract: In the problem of secure inference, a model owner holds a proprietary machine learning (ML) model and a data owner holds an input. The goal is for the data owner to get model predictions for the input without revealing anything about the input to the model owner. Secure multiparty computation (MPC) can, in principle, solve this problem, but its overheads can be high. Over the last six years, the security community has been actively working on reducing these overheads. Our recent work shows that secure inference has reached a tipping point: the latency of secure inference can match that of plaintext inference. This talk will explain why this development is important, the main ideas that helped us get here, and what problems remain.
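For readers unfamiliar with how MPC makes this possible, below is a minimal Python sketch of the standard textbook building block: two parties additively secret-share their values and jointly compute a linear layer's dot product using Beaver multiplication triples. This is an illustrative toy, not the protocol from the talk; the trusted dealer that hands out triples, the single-process "parties," and the use of small non-negative integers (skipping fixed-point encoding) are all simplifying assumptions.

```python
# Toy simulation of secure two-party inference for one linear layer,
# using additive secret sharing over a prime field with Beaver triples.
import random

P = 2**61 - 1  # prime modulus for the secret-sharing field

def share(v):
    """Split v into two additive shares mod P; either share alone reveals nothing."""
    s0 = random.randrange(P)
    return s0, (v - s0) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

def beaver_triple():
    """Trusted-dealer triple (a, b, c) with c = a*b. In a real protocol these
    come from oblivious transfer or homomorphic encryption, not a dealer."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share((a * b) % P)

def secure_mul(x_sh, y_sh):
    """Multiply two secret-shared values, consuming one Beaver triple."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    # Each party masks its shares locally; the masked values e, f are opened.
    e = reconstruct((x_sh[0] - a0) % P, (x_sh[1] - a1) % P)  # e = x - a
    f = reconstruct((y_sh[0] - b0) % P, (y_sh[1] - b1) % P)  # f = y - b
    # [xy] = e*f + e*[b] + f*[a] + [c]; the public term e*f is added by party 0.
    z0 = (e * f + e * b0 + f * a0 + c0) % P
    z1 = (e * b1 + f * a1 + c1) % P
    return z0, z1

def secure_dot(w_shs, x_shs):
    """Dot product of two secret-shared vectors (one neuron of a layer)."""
    acc0, acc1 = 0, 0
    for w_sh, x_sh in zip(w_shs, x_shs):
        z0, z1 = secure_mul(w_sh, x_sh)
        acc0, acc1 = (acc0 + z0) % P, (acc1 + z1) % P
    return acc0, acc1

# Model owner's weights and data owner's input.
w = [3, 1, 4]
x = [2, 7, 1]
y0, y1 = secure_dot([share(v) for v in w], [share(v) for v in x])
assert reconstruct(y0, y1) == sum(wi * xi for wi, xi in zip(w, x))  # 17
print("secure dot product:", reconstruct(y0, y1))
```

The communication and cryptographic work visible even in this toy (sharing, opening masked values, consuming a triple per multiplication) is the source of the overheads the abstract refers to; reducing them is what makes matching plaintext latency notable.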

Bio: Rahul Sharma is a principal researcher at Microsoft Research India who likes to make things run as fast as possible. Prior to joining MSR, he obtained his PhD in Computer Science from Stanford University.

Security Lab