- Series: Applied and Computational Mathematics Seminar
- Time: Monday, April 14, 2025, 2:00pm (50 minutes)
- Location: Skiles 005 and https://gatech.zoom.us/j/94954654170
- Speaker: Yahong Yang (Penn State)
- Organizer: Wei Zhu
Neural networks have become powerful tools for solving Partial Differential Equations (PDEs), with wide-ranging applications in engineering, physics, and biology. In this talk, we explore the performance of deep neural networks in solving PDEs, focusing on two primary sources of error: approximation error and generalization error. The approximation error captures the gap between the exact PDE solution and the neural network's hypothesis space. Generalization error arises from the challenges of learning from finite samples. We begin by analyzing the approximation capabilities of deep neural networks, particularly under Sobolev norms, and discuss strategies to overcome the curse of dimensionality. We then present generalization error bounds, offering insight into when and why deep networks can outperform shallow ones in solving PDEs.
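To make the two error sources concrete, the following LaTeX sketch writes out the standard decomposition for an empirical-risk minimizer; the notation (population loss $\mathcal{L}$, its $n$-sample empirical version $\widehat{\mathcal{L}}_n$, hypothesis class $\mathcal{F}_{NN}$, exact solution $u^*$) is illustrative and not taken from the talk.

% Sketch of the standard error decomposition (illustrative notation):
% u^* is the exact PDE solution, \mathcal{F}_{NN} the neural-network
% hypothesis class, and \widehat{u}_n the minimizer of the empirical loss.
\[
\underbrace{\mathcal{L}(\widehat{u}_n) - \mathcal{L}(u^*)}_{\text{total error}}
\;\le\;
\underbrace{\inf_{v \in \mathcal{F}_{NN}} \mathcal{L}(v) - \mathcal{L}(u^*)}_{\text{approximation error}}
\;+\;
\underbrace{2\,\sup_{v \in \mathcal{F}_{NN}} \bigl|\mathcal{L}(v) - \widehat{\mathcal{L}}_n(v)\bigr|}_{\text{generalization error}}
\]

In this sketch, the first term depends only on how well the network class can represent the exact solution (e.g., approximation rates in Sobolev norms), while the second depends on the sample size and the complexity of the class, which is where bounds separating deep from shallow networks enter.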