From centralized to federated learning of neural operators: Accuracy, efficiency, and reliability

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 27, 2025 – 2:00pm–2:50pm
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Lu Lu – Yale University – lu.lu@yale.edu – https://lugroup.yale.edu/people/
Organizer
Wenjing Liao

As an emerging paradigm in scientific machine learning, deep neural operators, pioneered by our group, can learn nonlinear operators of complex dynamical systems via neural networks. In this talk, I will present the deep operator network (DeepONet) for learning operators that represent deterministic and stochastic differential equations. I will also present several extensions of DeepONet, such as DeepM&Mnet for multiphysics problems, DeepONet with proper orthogonal decomposition or Fourier decoder layers, MIONet for multiple-input operators, and multifidelity DeepONet. I will demonstrate the effectiveness of DeepONet and its extensions on diverse multiphysics and multiscale problems, including bubble growth dynamics, high-speed boundary layers, electroconvection, hypersonics, geological carbon sequestration, full waveform inversion, and astrophysics.

Deep learning models are usually limited to interpolation scenarios; I will quantify the complexity of extrapolation and present a complete workflow that addresses this challenge for deep neural operators. Moreover, I will present the first operator-learning method that requires only one PDE solution, i.e., one-shot learning, by introducing the new concept of a local solution operator based on the principle of locality of PDEs. Finally, I will present the first systematic study of federated scientific machine learning (FedSciML) for approximating functions and solving PDEs with data heterogeneity.
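For readers unfamiliar with the architecture, below is a minimal sketch of an (unstacked) DeepONet: a branch net encodes the input function sampled at fixed sensor locations, a trunk net encodes the query coordinate, and the operator output is the dot product of the two embeddings. The PyTorch framing, layer widths, and sensor count are illustrative assumptions, not the speaker's implementation.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal unstacked DeepONet G(u)(y).

    branch: encodes the input function u sampled at m fixed sensors.
    trunk:  encodes the query coordinate y.
    Output: dot product of the two p-dimensional embeddings, plus a bias.
    """
    def __init__(self, m_sensors: int, coord_dim: int, width: int = 64, p: int = 32):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(m_sensors, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        self.trunk = nn.Sequential(
            nn.Linear(coord_dim, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        b = self.branch(u_sensors)  # (batch, p)
        t = self.trunk(y)           # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias  # (batch, 1)

# Toy usage: operator of a function sampled at 100 sensors, scalar coordinate y.
model = DeepONet(m_sensors=100, coord_dim=1)
u = torch.randn(8, 100)
y = torch.rand(8, 1)
print(model(u, y).shape)  # torch.Size([8, 1])
```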
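The abstract does not specify which federated algorithm the FedSciML study uses, so the following is only a generic FedAvg-style training loop, sketched to illustrate the federated setting: clients train local copies of a shared model on their own (possibly heterogeneous) data, and a server averages the resulting weights. All names and the data-loader format are hypothetical.

```python
import copy
import torch

def federated_average(global_model, client_loaders, rounds=10, local_epochs=1, lr=1e-3):
    """Illustrative FedAvg loop (not the speaker's method).

    client_loaders: one DataLoader per client, each yielding ((u, y), target)
    batches compatible with the model's forward signature (hypothetical format).
    """
    loss_fn = torch.nn.MSELoss()
    for _ in range(rounds):
        client_states = []
        for loader in client_loaders:
            # Each client starts from the current global weights.
            local = copy.deepcopy(global_model)
            opt = torch.optim.Adam(local.parameters(), lr=lr)
            for _ in range(local_epochs):
                for (u, y), target in loader:
                    opt.zero_grad()
                    loss = loss_fn(local(u, y), target)
                    loss.backward()
                    opt.step()
            client_states.append(local.state_dict())
        # Server step: parameter-wise (unweighted) average of client models.
        avg_state = {
            k: torch.stack([s[k] for s in client_states]).mean(dim=0)
            for k in client_states[0]
        }
        global_model.load_state_dict(avg_state)
    return global_model
```

In practice the server average is often weighted by each client's sample count; the unweighted mean above is the simplest variant and is enough to show where data heterogeneity across clients enters the training process.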