Computer simulations do a good job of modeling physical systems, from traffic patterns to rocket engines, but they can take a long time to run. New work takes advantage of deep learning to speed them up.
What’s new: Youngkyu Kim and a team at the University of California and Lawrence Livermore National Laboratory developed a technique that uses a neural network to compute the progress of a fluid dynamics simulation much more quickly than traditional methods.
Key insight: Changes in the state of a simulation from one time step to the next can be expressed as a set of differential equations. One of the faster ways to solve differential equations is to calculate many partial solutions and combine them into an approximate solution. A neural network that has been trained to approximate solutions to differential equations can also generate these partial solutions. Not every neuron contributes to a given partial solution, so using only the subnetwork of neurons required to calculate each one makes the process much more efficient.
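Here’s a minimal sketch of that idea in NumPy, using a made-up sparsity pattern rather than the paper’s trained network: when an output layer is sparsely connected, any single output entry can be computed from the small subnetwork of neurons wired to it, and the result matches the full layer’s.

```python
import numpy as np

# Toy sparsely connected output layer: each output entry depends on only
# a few hidden neurons, so each can be computed by a small subnetwork.
rng = np.random.default_rng(0)
n_hidden, n_out = 8, 5
mask = rng.random((n_out, n_hidden)) < 0.4   # hypothetical sparsity pattern
weights = rng.standard_normal((n_out, n_hidden)) * mask

hidden = rng.standard_normal(n_hidden)       # hidden-layer activations

# Trace the subnetwork for output i: keep only the neurons that feed it.
i = 2
active = np.flatnonzero(mask[i])             # hidden neurons wired to output i
full = weights[i] @ hidden                   # output i via the full layer
sub = weights[i, active] @ hidden[active]    # same value via the subnetwork
assert np.isclose(full, sub)
```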
How it works: They used an autoencoder made up of two single-hidden-layer neural networks, an encoder and a decoder. The decoder’s output layer was sparsely connected, so each output neuron received input from only a few neurons in the previous layer. The authors trained the autoencoder to reproduce thousands of states of Burgers’ Equation, which simulates the location and speed of fluids in motion. (A code sketch of the inference loop follows the list below.)
- At inference, the encoder encoded a solution at a given time step and passed it to the decoder.
- The authors divided the autoencoder’s output vector into partial solutions using a sampling algorithm the paper leaves unnamed. Then they traced the neurons involved in computing each one, defining subnetworks.
- For each subnetwork, they calculated partial derivatives with respect to all of its weights and biases, then integrated those partial derivatives to compute the partial solutions at the next time step.
- They combined the partial solutions into a prediction of the simulation’s new state via the recently proposed algorithm SNS, which uses the method of least squares to approximate a solution.
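The following PyTorch sketch shows one inference step under stated assumptions: a random binary mask standing in for the learned sparse connectivity, placeholder targets standing in for the time-stepped values, and a plain Gauss-Newton least-squares update standing in for SNS. Names like `decode` and `sample_idx` are illustrative, not from the paper.

```python
import torch

torch.manual_seed(0)
latent_dim, hidden_dim, state_dim = 4, 32, 200

# Shallow decoder whose output layer is sparsified by a fixed binary mask,
# a stand-in for the paper's sparsely connected decoder.
lin1 = torch.nn.Linear(latent_dim, hidden_dim)
lin2 = torch.nn.Linear(hidden_dim, state_dim)
mask = (torch.rand(state_dim, hidden_dim) < 0.1).float()  # hypothetical pattern

def decode(z):
    h = torch.sigmoid(lin1(z))
    return torch.nn.functional.linear(h, lin2.weight * mask, lin2.bias)

# Hypothetical sampled entries of the state (the "partial solutions") and
# placeholder targets for the values the time stepper would produce.
sample_idx = torch.tensor([3, 57, 101, 150, 180, 199])
target = torch.randn(len(sample_idx))

def residual(z):
    return decode(z)[sample_idx] - target

# One Gauss-Newton least-squares update of the latent state, standing in
# for the SNS-style least-squares combination of partial solutions.
z = torch.zeros(latent_dim)
J = torch.autograd.functional.jacobian(residual, z)  # (samples, latent) Jacobian
r = residual(z)
dz = torch.linalg.lstsq(J, -r.unsqueeze(1)).solution.squeeze(1)
z = z + dz
```

For brevity this evaluates the full decoder and then samples its output; the speedup the authors report comes from evaluating only each traced subnetwork, so a faithful implementation would assemble the sampled rows of the Jacobian subnetwork by subnetwork.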
Results: On the one-dimensional Burgers’ Equation, their method solved the problem 2.7 times faster than the usual approach, with only 1 percent error. On the two-dimensional Burgers’ Equation, it was 12 times faster with less than 1 percent error. Given the jump in speedup from one to two dimensions, the authors suggest that the acceleration may grow with the number of equations a simulation requires.
Why it matters: Our teams have seen a number of problems, such as airfoil design and nuclear power plant optimization, in which an accurate but slow physics simulation can be used to explore options. The design pattern of using a learning algorithm to approximate such simulations more quickly has been gaining traction, and this work takes a further step in that direction.
We’re thinking: In approximating solutions to a Burgers’ Equation, neural networks clearly meat expectations. Other approaches wouldn’t ketchup even if the authors mustard the effort to keep working on them.