Neural Networks with Inputs Based on Domain of Dependence and A Converging Sequence for Solving Conservation Laws

Applied and Computational Mathematics Seminar
Monday, February 28, 2022 - 2:00pm (50 minutes)
Haoxiang Huang (Georgia Tech)
Molei Tao

Recent research on solving partial differential equations with deep neural networks (DNNs) has demonstrated that spatiotemporal-function approximators defined by auto-differentiation are effective for approximating nonlinear problems. However, it remains a challenge to resolve discontinuities in nonlinear conservation laws using forward methods with DNNs without beginning with part of the solution. In this study, we incorporate first-order numerical schemes into DNNs to set up the loss-function approximator, instead of relying on auto-differentiation from traditional deep learning frameworks such as TensorFlow, thereby improving the effectiveness of capturing discontinuities in Riemann problems. We introduce a novel neural network method in which a local low-cost solution is first used as the input of a neural network to predict the high-fidelity solution at a space-time location. The challenge lies in the fact that a smeared discontinuity cannot be distinguished from a steep smooth solution in the input, resulting in "multiple predictions" of the neural network. To overcome this difficulty, two solutions of the conservation laws from a converging sequence, computed by low-cost numerical schemes on a local domain of dependence of the space-time location, serve as the input. Despite smeared input solutions, the output provides sharp approximations to solutions containing shocks and contact surfaces, and the method is efficient to use once trained. It works not only near discontinuities but also in smooth regions of the solution, suggesting broader applications to other differential equations.
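To make the input construction concrete, the following is a minimal sketch (not the speaker's implementation) of how two members of a converging sequence might be produced and sampled. It uses a first-order Godunov scheme for the inviscid Burgers equation at two grid resolutions, then concatenates stencil values from both solutions near a query point as a stand-in for the local domain-of-dependence input; the function names, stencil width, and Riemann data are illustrative assumptions.

```python
import numpy as np

def godunov_burgers(u0, nx, cfl=0.4, xmax=1.0, tmax=0.1):
    """First-order Godunov scheme for u_t + (u^2/2)_x = 0, periodic in x."""
    dx = xmax / nx
    x = (np.arange(nx) + 0.5) * dx   # cell centers
    u = u0(x)
    t = 0.0
    while t < tmax:
        dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), tmax - t)
        ul, ur = u, np.roll(u, -1)   # left/right states at each interface
        fl, fr = 0.5 * ul**2, 0.5 * ur**2
        # Exact Godunov flux for the convex flux f(u) = u^2/2
        flux = np.where(ul > ur,
                        np.maximum(fl, fr),                   # shock case
                        np.where(ul > 0.0, fl,
                                 np.where(ur < 0.0, fr, 0.0)))  # rarefaction case
        u = u - dt / dx * (flux - np.roll(flux, 1))
        t += dt
    return x, u

def stencil_input(x, u, x_query, half_width=3):
    """Sample u on a local stencil around x_query (a crude local domain of dependence)."""
    i = int(np.argmin(np.abs(x - x_query)))
    idx = np.clip(np.arange(i - half_width, i + half_width + 1), 0, len(x) - 1)
    return u[idx]

# Riemann initial condition: a right-moving shock starting at x = 0.5
u0 = lambda x: np.where(x < 0.5, 1.0, 0.0)

# Two members of a converging sequence: same low-cost scheme, two resolutions
x_c, u_c = godunov_burgers(u0, nx=50)
x_f, u_f = godunov_burgers(u0, nx=100)

# Network input: stencils from both solutions near the query point, concatenated
x_query = 0.55
features = np.concatenate([stencil_input(x_c, u_c, x_query),
                           stencil_input(x_f, u_f, x_query)])
```

A trained network would map `features` to the high-fidelity solution value at the query point; the pair of resolutions is what lets it separate a smeared shock (which sharpens under refinement) from a genuinely steep smooth profile (which does not).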