Engineers who design aircraft, aqueducts, and other objects that interact with air and water use numerical simulations to test potential shapes, but they rely on trial and error to improve their designs. A neural simulator can optimize the shape itself.
What’s new: Researchers at DeepMind devised Differentiable Learned Simulators, neural networks that learn to simulate physical processes, to help design surfaces that channel fluids in specific ways.
Key insight: A popular way to design an object with certain physical properties is to evolve it using a numerical simulator: sample candidate designs, test their properties, keep the best design, tweak it randomly, and repeat. Here’s a faster, nonrandom alternative: Given parameters that define an object’s shape as a two- or three-dimensional mesh, a differentiable model can compute how the shape should change to perform a task better and use that gradient to adjust the shape directly.
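For concreteness, the sample-and-tweak loop above amounts to random search. Below is a minimal sketch in Python; `score_fn` is a hypothetical stand-in for running a numerical simulator on a candidate shape and returning a scalar reward.

```python
import numpy as np

def evolve_shape(score_fn, init_params, n_iters=200, noise=0.05, seed=0):
    """Evolutionary baseline: sample a candidate, test it, keep the best, tweak, repeat.

    score_fn is a hypothetical stand-in for running a full numerical simulation
    on a candidate shape and returning a scalar reward (higher is better).
    """
    rng = np.random.default_rng(seed)
    best_params = np.asarray(init_params, dtype=float)
    best_score = score_fn(best_params)
    for _ in range(n_iters):
        candidate = best_params + noise * rng.standard_normal(best_params.shape)
        score = score_fn(candidate)        # one full simulation per candidate
        if score > best_score:             # keep only improving tweaks
            best_params, best_score = candidate, score
    return best_params, best_score
```

Each candidate costs a full simulation, and the random tweaks carry no information about which direction improves the design. The gradient-based alternative, sketched after the list below, avoids both problems.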
How it works: Water and air can be modeled as systems of particles. The authors trained MeshGraphNets, a type of graph neural network, to reproduce a prebuilt simulator’s output: Trained on flows around a variety of shapes, each network learned to predict the particles’ next state given the current one. The networks’ nodes represented particles, and their edges connected nearby particles.
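For intuition, here is a minimal sketch of how such a particle graph might be built. The radius-based connectivity and relative-position edge features are illustrative assumptions; the actual construction and architecture come from the MeshGraphNets work, not this sketch.

```python
import numpy as np

def build_particle_graph(positions, radius):
    """One node per particle; edges connect pairs of particles closer than `radius`.

    positions: (n_particles, dim) array of 2D or 3D particle coordinates.
    Returns sender/receiver index arrays plus relative displacements as edge features,
    which a MeshGraphNets-style network would consume to predict each particle's next state.
    """
    diffs = positions[None, :, :] - positions[:, None, :]   # pairwise displacements
    dists = np.linalg.norm(diffs, axis=-1)
    senders, receivers = np.nonzero((dists < radius) & (dists > 0.0))
    edge_features = diffs[senders, receivers]               # receiver position relative to sender
    return senders, receivers, edge_features
```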
- They trained one network to simulate the flow of water in two dimensions and used it to optimize the shapes of containers and ramps. They trained another to simulate water in three dimensions and used it to design surfaces that directed an incoming stream in certain directions. They trained a third on the output of an aerodynamic solver and used it to design an airfoil (a cross-section of a wing) with reduced drag. (A minimal sketch of this supervised training follows this list.)
- Given a shape’s parameters, a trained network rolled out the simulation over a set number of time steps by repeatedly predicting the next state from the current one. The authors then evaluated the shape using a reward function. The reward functions for the 2D and 3D water tasks rewarded the likelihood that particles would pass through a target region of simulated space, while the reward function for the aerodynamic task penalized drag.
- To optimize a shape, the authors repeatedly backpropagated gradients from the reward function through the network (whose weights stayed frozen) to the shape’s parameters and updated them accordingly. (A sketch of this optimization loop also follows this list.)
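The first sketch below shows, under simplifying assumptions, the supervised setup used to train the learned simulator: match the prebuilt simulator's output one step at a time. Here `simulator_net` stands in for a MeshGraphNets-style model, and the plain-SGD update and hyperparameters are placeholders rather than the authors' choices.

```python
import jax
import jax.numpy as jnp

def next_state_loss(net_params, simulator_net, state_t, state_t_plus_1):
    """Mean squared error between the learned simulator's one-step prediction
    and the prebuilt simulator's next state."""
    pred = simulator_net(net_params, state_t)
    return jnp.mean((pred - state_t_plus_1) ** 2)

def train_step(net_params, simulator_net, state_t, state_t_plus_1, lr=1e-4):
    """One gradient step on the network's weights (plain SGD for simplicity)."""
    loss, grads = jax.value_and_grad(next_state_loss)(
        net_params, simulator_net, state_t, state_t_plus_1)
    net_params = jax.tree_util.tree_map(lambda p, g: p - lr * g, net_params, grads)
    return net_params, loss
```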
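The second sketch shows the design loop itself: roll the frozen learned simulator forward from a candidate shape, accumulate a reward, and follow the gradient of that reward with respect to the shape parameters. `init_state_fn` and `reward_fn` are hypothetical stand-ins for building the initial particle state around a shape and scoring each simulated state (for example, the fraction of particles inside a target region, or negative drag).

```python
import jax

def rollout_reward(shape_params, net_params, simulator_net, init_state_fn, reward_fn, n_steps):
    """Roll the learned simulator forward from a shape and sum per-step rewards.
    Because every step is differentiable, the total reward is differentiable
    with respect to the shape parameters."""
    state = init_state_fn(shape_params)            # initial particles/mesh around the shape
    total_reward = 0.0
    for _ in range(n_steps):
        state = simulator_net(net_params, state)   # predict the next state from the current one
        total_reward = total_reward + reward_fn(state)
    return total_reward

def optimize_shape(shape_params, net_params, simulator_net, init_state_fn, reward_fn,
                   n_steps=50, n_updates=100, lr=1e-2):
    """Gradient ascent on the shape parameters only; the network's weights stay frozen."""
    grad_fn = jax.grad(rollout_reward)             # gradient w.r.t. shape_params (argument 0)
    for _ in range(n_updates):
        g = grad_fn(shape_params, net_params, simulator_net, init_state_fn, reward_fn, n_steps)
        shape_params = jax.tree_util.tree_map(lambda p, dg: p + lr * dg, shape_params, g)
    return shape_params
```

Unlike the sample-and-tweak loop earlier, each update moves the shape in a direction the model expects to improve the reward rather than in a random direction.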
Results: Shapes designed using the authors’ approach outperformed those produced by the cross-entropy method (CEM), a technique that samples many designs and evolves them to maximize rewards. In the 2D water tasks, the authors’ shapes achieved rewards 3.9 percent to 37.5 percent higher than those of shapes produced by CEM running on the prebuilt simulator. In the aerodynamic task, they achieved results similar to those of a highly specialized solver, producing drag coefficients between 0.01898 and 0.01919 compared to DAFoam’s 0.01902 (lower is better).
We’re thinking: It’s not uncommon to train a neural network to mimic the output of a computation-intensive physics simulator. Using such a neural simulator not to run simulations but to optimize inputs according to the simulation’s outcome — that’s a fresh idea.