Language Models for Formal Proof



Language models have shown great promise for making it easier to develop and maintain machine-checkable proofs of program correctness, which demonstrate the absence of costly or dangerous bugs in critical systems. Language-model-based proof automation tools nicely complement symbolic methods: the former are unreliable but flexible and natural, while the latter provide strong guarantees but lack flexibility and naturalness. This talk will describe how these different methods come together to make writing machine-checkable proofs of program correctness easier, and what this means for the future of machine-checkable proof.



Talia Ringer is an Assistant Professor at UIUC, where they build technologies that make it easier to develop and maintain verified software systems. The techniques they use toward this end range from advances in dependent type theory to machine learning and everything in between. Talia was also a visiting faculty researcher at Google in 2022, working on language models for formal proof. Prior to UIUC, they received their PhD from the University of Washington in 2021. Prior to graduate school, they earned their BS in Computer Science and Mathematics from the University of Maryland, then worked as a software engineer at Amazon for three years. They have received the SIGPLAN Distinguished Service Award and the DARPA Young Faculty Award.

License: CC-3.0
Submitted by Regan Williams on