Burning questions in AI security

ABSTRACT

There are few technology areas more important than the intersection of AI and cybersecurity in 2025. For AI to deliver on its promise of autonomously transforming our economy and national defense, we must ensure it behaves securely. And for AI to improve our cybersecurity, we’ll need to rely on AI-driven forms of automation.

How do we succeed? In this talk, I’ll lay out what I see as the core open questions in the field and give opinionated answers:

  • How do we render non-deterministic, uninterpretable deep neural networks trustworthy enough that we can rely on them as virtual colleagues?
  • Where are the most lucrative areas in security to which we should apply large language models, inference scaling laws, and reinforcement learning?
  • Which artifacts are most important for frontier AI companies to defend as we pursue the international AI race, and what’s the role for the open source and open science culture that’s been part and parcel of American, British, and Canadian leadership in AI?

After giving a perspective on these questions based on my experience leading AI security work at Meta, I’ll discuss the role of the technology, policy, research, and national security communities in answering them.

BIO

Joshua Saxe leads Meta's efforts to integrate security into its large language models (LLMs) and protect them from application-level cyberattacks. Before joining Meta, he served as chief scientist at Sophos, was principal investigator on multiple DARPA programs at Invincea Labs, and led machine learning security research at Applied Minds. Joshua co-authored the book "Malware Data Science" with Hillary Sanders, published by No Starch Press. He has authored dozens of scientific papers and patents on security AI and has presented at numerous conferences, including DEF CON, Black Hat, and RSA.

License: CC-3.0