Megha Srivastava and Neil Perry (Stanford)

May 3, 2023 at 11:00 AM on Zoom / Soda Hall

Do Users Write More Insecure Code with AI Assistants?

Abstract: We conduct the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., re-phrasing, adjusting temperature) produced code with fewer security vulnerabilities. Finally, to better inform the design of future AI-based code assistants, we provide an in-depth analysis of participants’ language and interaction behavior, and we release our user interface as an instrument for conducting similar studies in the future.

Bio: Megha Srivastava is a PhD student in the Computer Science department at Stanford University, co-advised by Dorsa Sadigh and Dan Boneh. Her research focuses on developing more reliable machine learning models, particularly in the broader context of human-AI interaction. Her work has been recognized with an ICML Best Paper Runner-Up Award (2018), and she is currently supported by an NSF Graduate Research Fellowship.

Neil Perry is a PhD candidate at Stanford University, advised by Dan Boneh, and a fellow at the Hoover Institution. His research broadly applies cryptography to public policy, including protecting the anonymity of protesters and designing systems for nuclear arms control verification.

Security Lab