Physics simulations are really cool if you think about it. They replicate the rules of reality so precisely that they can be hard to distinguish from real life, and they let us realistically animate outlandish occurrences.
But how? Traditionally, physics simulations rely on calculations based on the laws of physics to obtain accurate results. There are many ways to build a physics simulation, with many tunable parameters, but at its core a simulation must repeat three steps every frame to keep each object moving accurately:
- Identify the forces acting on an object
- Calculate the net effect of these forces (Newton's second law gives the resulting acceleration)
- Apply the result to update the object's velocity and position
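The three-step loop above can be sketched in a few lines. This is a minimal, hypothetical example (the names and the single gravity force are my own, not from any particular engine) using semi-implicit Euler integration:

```python
# Minimal sketch of the per-frame loop: identify forces, calculate their
# effect, apply the result. A real engine tracks many more quantities.

GRAVITY = -9.81  # m/s^2, acting along the y-axis

def step(pos, vel, mass, dt):
    """Advance one object by one frame using semi-implicit Euler."""
    # 1. Identify the forces acting on the object (here: just gravity).
    force_y = mass * GRAVITY
    # 2. Calculate the resulting acceleration (Newton's second law, F = ma).
    accel_y = force_y / mass
    # 3. Apply it: integrate velocity first, then position.
    vel = (vel[0], vel[1] + accel_y * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# Drop a 1 kg ball from rest for a single 1/60 s frame.
pos, vel = step((0.0, 10.0), (0.0, 0.0), 1.0, 1.0 / 60.0)
```

Run this loop sixty times and you have one second of simulated free fall; every extra force just adds a term to step 1.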
Gravity, torque, friction: all of these quantities, unique to each object in the simulation, must be calculated.
On top of all that, the motion of these objects is restricted by constraints. Constraints are essentially the 'laws' of the 'world' where the objects live. These laws are put in place to ensure that objects don't penetrate each other or defy gravity, for example.
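A toy example of such a 'law' is a floor the objects can't sink through. One simple (hypothetical) way to enforce it is to project any offending object back to a legal state after integration:

```python
# Sketch of a non-penetration constraint: after the integration step, check
# whether the object ended up below the floor, and if so, resolve it.

FLOOR_Y = 0.0

def enforce_floor(pos_y, vel_y, restitution=0.5):
    """If the object penetrated the floor, push it out and bounce it."""
    if pos_y < FLOOR_Y:
        pos_y = FLOOR_Y                # resolve the penetration
        vel_y = -vel_y * restitution   # reflect velocity, losing some energy
    return pos_y, vel_y

corrected = enforce_floor(-0.2, -3.0)  # object ended the frame underground
```

Real engines solve many such constraints simultaneously (and iteratively), which is a big part of where the computational cost comes from.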
Complexity increases the more detailed we make our simulation. So far we've only been talking about simple, rigid, 2D objects; we haven't talked about rotation, differently textured objects, objects with special properties, joints, collisions, particles, or anything of the sort. These computations add up, and eventually become quite costly for the computer to process accurately.
This computational cost is why many commercially used physics engines (in games, for example) approximate a good amount of what you see. These systems prioritize scalability and performance over precision, because the more precise the simulation, the more costly it is to process.
Just how costly can accurate physics simulations be? It took 5 days for the supercomputers at the Department of Energy's National Energy Research Scientific Computing Center to compute an accurate simulation of a foam bubble popping. This is a lot more complicated than it sounds.
Clearly, there is room for improvement. By decreasing computing time, we would have the ability to model at a larger scale with higher resolution, and gain a better understanding of more complicated physics.
So machine learning comes to the rescue!
Instead of using an equation to try and predict how something will move, why don’t we use a neural network trained on data of what we’re trying to simulate?
AI can simulate phenomena based on observational data, and can even apply what it's learned to scenarios outside of the training data! Let's take the car-running-through-a-maze scenario shown earlier. By training a neural network on loads of video data of cars moving and reacting to crashing into walls, we can create a more accurate simulation where cars don't phase through walls.
The neural net can also apply what it's learned from the training data to something slightly different, such as a car traversing a more complex maze, or maybe a different vehicle. The model doesn't even need to be built to specialize in what it's simulating; the ML process can generalize to different materials based on the data it's trained on.
This training data could come from simulations driven by a physics function calculated offline, or from real-life examples. The neural network takes a lot of training, but after that it can run the simulation in a matter of milliseconds.
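Here is a hedged sketch of that idea: generate training pairs from an expensive 'ground truth' physics function, fit a model to predict the next state, then roll the cheap model out instead of re-running the physics. Free fall happens to be a linear system, so even a least-squares fit recovers it exactly; a real surrogate would be a neural network trained on far richer data, and all the names below are illustrative.

```python
import numpy as np

DT, G = 1.0 / 60.0, -9.81

def true_step(state):
    """The expensive offline 'physics function': semi-implicit free fall."""
    pos, vel = state
    vel = vel + G * DT
    return np.array([pos + vel * DT, vel])

# Generate training pairs (state, next_state) from the true simulator.
rng = np.random.default_rng(0)
states = rng.uniform(-5.0, 5.0, size=(200, 2))
targets = np.array([true_step(s) for s in states])

# Fit next_state ~ state @ W (with a bias column) by least squares.
X = np.hstack([states, np.ones((len(states), 1))])
W, *_ = np.linalg.lstsq(X, targets, rcond=None)

def learned_step(state):
    """The cheap learned surrogate: one matrix multiply per frame."""
    return np.append(state, 1.0) @ W

# Roll the surrogate out for one simulated second from rest at height 10.
s = np.array([10.0, 0.0])
for _ in range(60):
    s = learned_step(s)
```

The rollout matches the true simulator because the dynamics here are exactly representable by the model; the promise (and the difficulty) of learned simulators is getting similar fidelity for dynamics that are not.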
To train the neural network toward predictions that match the training data, backpropagation is used. Backpropagation computes how much each weight contributed to the prediction error, and the weights are then adjusted a small step at a time, easing the network toward more accurate predictions. The predictions aren't corrected to the exact truth immediately, as overly large updates would make training unstable.
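The 'ease it in' part comes down to the learning rate. A minimal sketch with a single weight (all names illustrative) shows that one gradient step nudges the weight toward the target rather than jumping straight to it:

```python
# One gradient-descent update for a one-weight model y = w * x under
# squared error. The learning rate `lr` controls how far each step goes.

def sgd_step(w, x, y_true, lr=0.1):
    y_pred = w * x
    grad = 2.0 * (y_pred - y_true) * x   # d/dw of (w*x - y_true)^2
    return w - lr * grad                 # a small step, not a full correction

w = 0.0
first = sgd_step(w, x=1.0, y_true=3.0)   # one step gets partway there
for _ in range(25):                      # repeated small steps converge
    w = sgd_step(w, x=1.0, y_true=3.0)
```

A single step leaves the weight well short of the target value of 3, but repeating small steps converges; in a full network, backpropagation computes this same gradient for millions of weights at once.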
Because this is still an area of research, many researchers experiment with different types of neural nets depending on the desired outcome of the simulation.
Let’s take a look at fluid simulation, for example.
One research group used a graph network simulator to model a water/sand/goop simulation. This simulation focused on scalability, and what made it so scalable is that the particles in the simulation were only affected by local particles within a specified range, so no global information was needed. The model doesn't seem to work very well for rigid objects; particle-based simulations are its strong suit.
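That locality is easy to picture: each particle only gets graph edges to neighbours inside a connectivity radius. A small sketch (my own illustrative code, not the group's implementation) of building those edges:

```python
import numpy as np

def build_edges(positions, radius):
    """Return (sender, receiver) index pairs for particles within `radius`.

    Only nearby particles are connected, so each particle's update depends
    on a bounded neighbourhood, not on global state.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    senders, receivers = np.nonzero((dist < radius) & (dist > 0.0))
    return senders, receivers

pts = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0]])
senders, receivers = build_edges(pts, radius=1.0)
# The two close particles are linked; the distant one gets no edges.
```

Because edge counts grow with local density rather than total particle count, the same learned update rule can be reused on much larger scenes than it was trained on.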
Another research group focused less on accuracy and more on achieving high-resolution, real-time fluid simulation. This was previously too computationally heavy to run in real time, but they developed a way to do so using regression forests to approximate particle behaviour.
Now, machine learning isn't a magical solution to all our problems: it can still take up a significant amount of computing time and memory depending on the complexity of the model. Generating and training the model on data also takes a while, and finding good-quality data is not an easy task.
However machine learning has so much potential. Given some time and experimentation, it could reduce computational costs dramatically. There is no question that machine learning is the future of simulations, both in research and in industry.
- The traditional method of building physics simulations, with calculations based on the laws of physics, is computation-heavy
- Industries that run physics simulations at large scale have to sacrifice accuracy for scalability and faster computation
- This can be addressed with neural networks trained on data from what's being simulated, so they can accurately predict what should happen next with much less computation, allowing for a faster run time