- Series
- School of Mathematics Colloquium
- Time
- Thursday, January 29, 2026 - 11:00am for 1 hour (actually 50 minutes)
- Location
- Skiles 005
- Speaker
- Vijay Ganesh – Georgia Institute of Technology (SCS) – vganesh@gatech.edu – https://vganesh1.github.io/
- Organizer
- Anton Leykin
Abstract: In recent years we have witnessed a symbiotic trend wherein LLMs are being combined with provers, solvers, and computer algebra systems, resulting in dramatic breakthroughs in AI for math. Following this trend, we have developed two lines of work in my research group. The first is the idea that "good" joint embeddings (JE) can dramatically improve the efficacy of LLM-based auto-formalization tools. We say that JEs are good if they respect the following invariant: semantically-equivalent but formally-dissimilar objects (e.g., pairs of semantically-equivalent natural- and formal-language proofs) must be "close by" in the embedding space, and semantically-inequivalent ones "far apart". We use such JE models as part of a successful RAG-based auto-formalization pipeline, demonstrating that such JEs are a critical AI-for-math technology. The second idea is Reinforcement Learning with Symbolic Feedback (RLSF), a class of techniques that addresses the LLM hallucination problem in contexts where we have access to rich symbolic feedback, such as in math, physics, and code, demonstrating that these techniques too are critical to the success of AI for math.
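For readers unfamiliar with the joint-embedding invariant described in the abstract, the sketch below shows one standard way such an invariant can be encoded as a contrastive (InfoNCE-style) training objective, where matched natural/formal proof pairs are pulled together and mismatched pairs pushed apart. This is an illustrative sketch only, not the speaker's actual pipeline; the function names, encoder stand-ins, and hyperparameters are hypothetical.

```python
# Illustrative sketch: a contrastive (InfoNCE-style) objective for a joint
# embedding of natural-language and formal proofs. Names and hyperparameters
# are hypothetical, not the speaker's actual system.
import torch
import torch.nn.functional as F

def joint_embedding_loss(nl_emb: torch.Tensor,
                         fl_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """nl_emb[i] and fl_emb[i] are embeddings of a semantically-equivalent
    (natural-language, formal) proof pair; all other pairings in the batch
    are treated as inequivalent negatives."""
    nl = F.normalize(nl_emb, dim=-1)     # unit-norm embeddings
    fl = F.normalize(fl_emb, dim=-1)
    logits = nl @ fl.T / temperature     # cosine-similarity matrix
    targets = torch.arange(nl.size(0))   # matched pairs sit on the diagonal
    # Symmetric cross-entropy: equivalent pairs are pulled close and
    # inequivalent ones pushed apart, in both retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy usage with random tensors standing in for encoder outputs.
batch, dim = 8, 256
loss = joint_embedding_loss(torch.randn(batch, dim), torch.randn(batch, dim))
print(loss.item())
```

An embedding trained this way could then serve the retrieval step of a RAG-based auto-formalization pipeline, since the formal statements nearest a natural-language query in the shared space are, by construction, its most likely semantic matches.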
Bio: Dr. Vijay Ganesh is a professor of computer science at Georgia Tech and the associate director of the Institute for Data Engineering and Science (IDEaS), also at Georgia Tech. Additionally, he is a co-founder and steering committee member of the Centre for Mathematical AI at the Fields Institute, and an AI Fellow at the BSIA in Waterloo, Canada. Prior to joining Georgia Tech in 2023, Vijay was a professor at the University of Waterloo in Canada from 2012 to 2023, a co-director of the Waterloo AI Institute from 2021 to 2023, and a research scientist at the Massachusetts Institute of Technology from 2007 to 2012. Vijay completed his PhD in computer science from Stanford University in 2007.
Vijay's primary area of research is the theory and practice of SAT/SMT solvers, combinations of machine learning and automated reasoning, and their applications in neurosymbolic AI, software engineering, security, mathematics, and physics. In this context he has led the development of many SAT/SMT solvers, most notably STP, the Z3str family of string solvers, Z3-alpha, MapleSAT, AlphaMapleSAT, and MathCheck. He also leads the development of several neurosymbolic AI tools aimed at mathematics, physics, and software engineering. On the theoretical side, he works on topics in mathematical logic and proof complexity. For his research, Vijay has won over 35 awards, honors, and medals, including an ACM Impact Paper Award at ISSTA 2019, an ACM Test of Time Award at CCS 2016, and a Ten-Year Most Influential Paper citation at DATE 2008.