In-Context Operator Learning on the Space of Probability Measures

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 13, 2026 - 2:00pm (50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Dixi Wang – Purdue University – wang6721@purdue.edu – https://www.math.purdue.edu/people/profile/wang6721.html
Organizer
Wenjing Liao

We introduce in-context operator learning on probability measure spaces for optimal transport (OT). The goal is to learn a single solution operator that maps a pair of distributions to the OT map between them, using only few-shot samples from each distribution as a prompt and without gradient updates at inference time. We parameterize the solution operator and develop scaling-law theory in two regimes. In the nonparametric setting, when tasks concentrate on a low-intrinsic-dimension manifold of source–target pairs, we establish generalization bounds that quantify how in-context accuracy scales with prompt size, intrinsic task dimension, and model capacity. In the parametric setting (e.g., Gaussian families), we give an explicit architecture that recovers the exact OT map in context and provide finite-sample excess-risk bounds. Numerical experiments on synthetic transports and generative modeling benchmarks validate the framework.
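For the Gaussian family mentioned in the parametric setting, the OT (Brenier) map has a well-known closed form, which is what an exact in-context architecture would need to recover. The sketch below (not the speaker's architecture; a standard NumPy/SciPy illustration of the target map) computes that affine map between two Gaussians and checks that it transports the source covariance to the target covariance.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_ot_map(m1, S1, m2, S2):
    """Closed-form OT (Brenier) map between N(m1, S1) and N(m2, S2):
    T(x) = m2 + A (x - m1), where
    A = S1^{-1/2} (S1^{1/2} S2 S1^{1/2})^{1/2} S1^{-1/2}."""
    S1_half = np.real(sqrtm(S1))          # symmetric square root of S1
    S1_half_inv = np.linalg.inv(S1_half)
    A = S1_half_inv @ np.real(sqrtm(S1_half @ S2 @ S1_half)) @ S1_half_inv
    T = lambda x: m2 + (x - m1) @ A.T     # affine transport map
    return T, A

# Illustrative source/target pair (hypothetical parameters).
m1, m2 = np.zeros(2), np.array([1.0, -1.0])
S1 = np.array([[2.0, 0.5], [0.5, 1.0]])
S2 = np.array([[1.0, -0.3], [-0.3, 0.5]])
T, A = gaussian_ot_map(m1, S1, m2, S2)

# Pushforward check: the linear part must satisfy A S1 A^T = S2.
assert np.allclose(A @ S1 @ A.T, S2, atol=1e-8)
```

In the in-context framing, the model sees only samples from the two distributions as a prompt; the parametric result says a suitable architecture can reproduce exactly this map from those samples.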