# Differentiable Fluid Simulations with ΦFlow 2

This notebook steps you through setting up fluid simulations and using automatic differentiation (here via TensorFlow) to optimize them.

Execute the cell below to install the ΦFlow Python package from GitHub.

## Setting up a Simulation

ΦFlow is vectorized but object-oriented, i.e. data are represented by Python objects that internally use tensors.

First, we create grids for the quantities we want to simulate. For this example, we require a velocity field and a smoke density field. We sample the smoke field at the cell centers and the velocity in staggered form.
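The difference between centered and staggered sampling can be sketched in plain NumPy, assuming a 32×40 grid with unit cells. This only illustrates the array shapes involved; ΦFlow manages this layout internally through its Grid classes.

```python
import numpy as np

# Cell-centered field: one sample per cell, at the cell center.
smoke = np.zeros((32, 40))

# A staggered velocity stores each component on the cell faces normal
# to it, so the x-component has one extra sample along x and the
# y-component one extra sample along y.
velocity_x = np.zeros((33, 40))  # faces perpendicular to x
velocity_y = np.zeros((32, 41))  # faces perpendicular to y
```

This is why the staggered grid's shape is non-uniform: each velocity component has a different number of samples than there are cells.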

Additionally, we want to add more smoke every time step. We create the INFLOW field from a circle (2D Sphere) which defines where hot smoke is emitted. Furthermore, we are interested in running the simulation for different inflow locations.

ΦFlow supports data-parallel execution via batch dimensions. When a quantity has a batch dimension of size n, operations involving that quantity are performed n times simultaneously, and the result carries the same batch dimension. Here we add the batch dimension inflow_loc.
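As a plain-NumPy analogy for batch dimensions: below, the leading array axis plays the role of inflow_loc, and broadcasting evaluates a radius-3 sphere mask for all four inflow locations at once. ΦFlow additionally tracks dimension names and types, which NumPy does not.

```python
import numpy as np

# Four inflow locations stacked along a leading batch axis.
inflow_loc = np.array([[4., 5.], [8., 5.], [12., 5.], [16., 5.]])  # (4, 2)

# Sample points of a 32x40 grid.
x, y = np.meshgrid(np.arange(32), np.arange(40), indexing='ij')
points = np.stack([x, y], axis=-1).astype(float)             # (32, 40, 2)

# Distance of every grid point to its batch's inflow center, computed
# for all four batch entries simultaneously via broadcasting.
diff = points[None, :, :, :] - inflow_loc[:, None, None, :]  # (4, 32, 40, 2)
dist = np.linalg.norm(diff, axis=-1)                         # (4, 32, 40)
inside = dist <= 3.0  # one radius-3 sphere mask per batch entry
```

The batch dimension is carried through every operation, so `inside` has one mask per inflow location.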

For an overview of the dimension types, see the documentation or watch the introductory tutorial video.

The created grids are instances of the class Grid. Like tensors, grids also have the shape attribute which lists all batch, spatial and channel dimensions. Shapes in ΦFlow store not only the sizes of the dimensions but also their names and types.

The grid values can be accessed using the values property.

Grids have many more properties which are documented here. Also note that the staggered grid has a non-uniform shape because the number of faces is not equal to the number of cells.

## Running the Simulation

Next, let's do some physics! Since the initial velocity is zero, we just add the inflow and the corresponding buoyancy force. For the buoyancy force we use the factor (0, 0.5) to specify strength and direction. Finally, we project the velocity field to make it incompressible.
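The projection step can be illustrated outside of ΦFlow. The following is a minimal NumPy sketch, not ΦFlow's built-in projection (which uses a proper linear solver): it solves a pressure Poisson equation with Jacobi iterations on a staggered grid with closed boundaries, then subtracts the pressure gradient to reduce the divergence.

```python
import numpy as np

def divergence(u, v):
    # u: (nx+1, ny) x-face samples, v: (nx, ny+1) y-face samples, unit cells.
    return (u[1:, :] - u[:-1, :]) + (v[:, 1:] - v[:, :-1])

def project(u, v, iterations=400):
    """Remove the divergent part of a staggered velocity field."""
    div = divergence(u, v)
    p = np.zeros_like(div)
    for _ in range(iterations):
        # Jacobi step for the Poisson equation  lap(p) = div,
        # with Neumann boundaries via edge-replicated padding.
        pp = np.pad(p, 1, mode='edge')
        p = (pp[:-2, 1:-1] + pp[2:, 1:-1]
             + pp[1:-1, :-2] + pp[1:-1, 2:] - div) / 4
    # Subtract the pressure gradient on interior faces.
    u2, v2 = u.copy(), v.copy()
    u2[1:-1, :] -= p[1:, :] - p[:-1, :]
    v2[:, 1:-1] -= p[:, 1:] - p[:, :-1]
    return u2, v2
```

After enough iterations, the remaining divergence is only the unconverged residual of the Poisson solve.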

Note that the @ operator is a shorthand for resampling a field at different points. Since smoke is sampled at cell centers and velocity at face centers, this conversion is necessary.
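For linear interpolation on a staggered layout, each interior face value is simply the mean of the two adjacent cell-center values. A NumPy sketch of the x-face case (ΦFlow's @ operator handles all components and boundaries for you):

```python
import numpy as np

centered = np.arange(12.0).reshape(4, 3)  # (nx, ny) cell-center values

# Interior x-faces sit between consecutive cells along x, so their
# resampled value is the average of the two neighboring centers.
face_x_interior = 0.5 * (centered[:-1, :] + centered[1:, :])  # (nx-1, ny)
```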

Let's run a longer simulation! Now we add the transport or advection operations to the simulation. ΦFlow provides multiple algorithms for advection. Here we use semi-Lagrangian advection for the velocity and MacCormack advection for the smoke distribution.
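To illustrate what semi-Lagrangian advection does, here is a minimal 1D NumPy sketch with periodic boundaries (the function below is ours, not ΦFlow's, which operates on Grid objects): each sample point is traced backwards along the velocity, and the field is linearly interpolated at the departure point. MacCormack advection adds a predictor-corrector pass on top of this to reduce numerical diffusion.

```python
import numpy as np

def semi_lagrangian_1d(field, velocity, dt, dx=1.0):
    """Advect a periodic 1D field by backtracing and interpolating."""
    n = len(field)
    x = np.arange(n, dtype=float)
    departure = (x - velocity * dt / dx) % n  # backtraced positions
    i0 = np.floor(departure).astype(int)
    frac = departure - i0
    i1 = (i0 + 1) % n
    return (1 - frac) * field[i0] + frac * field[i1]

field = np.zeros(8)
field[2] = 1.0
# With velocity 1 and dt 1, the peak moves from index 2 to index 3.
shifted = semi_lagrangian_1d(field, velocity=1.0, dt=1.0)
```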

The simulation we just computed used pure NumPy, so all operations were non-differentiable. To enable differentiability, we need to use either PyTorch, TensorFlow or Jax. This can be achieved by changing the import statement to phi.torch.flow, phi.tf.flow or phi.jax.flow, respectively. Tensors created after this import will be allocated using PyTorch / TensorFlow / Jax, and operations on them will be executed with the corresponding backend. These operations can make use of a GPU through CUDA if your configuration supports it.

We set up the simulation as before.

We can verify that tensors are now backed by TensorFlow / PyTorch / Jax.

Note that tensors created with NumPy will keep using NumPy/SciPy operations unless a TensorFlow tensor is also passed to the same operation.

Let's look at how to get gradients from our simulation. Say we want to optimize the initial velocities so that all simulations arrive at a final state similar to the rightmost simulation, where the inflow is located at (16, 5).

To achieve this, we define the loss function as $L = | D(s - s_r) |^2$ where $s$ denotes the smoke density and the function $D$ diffuses the difference to smooth out the gradients.

At this point, it is important that the initial velocity carries the inflow_loc batch dimension before we record the gradients.

Finally, we use gradient_function() to obtain the gradient with respect to the initial velocity. Since the velocity is the second argument to the simulate() function, we pass wrt=[1].

The argument get_output=False specifies that we are not interested in the actual output of the function. By setting it to True, we would also get the loss value and the final simulation state.

To evaluate the gradient, we simply call the gradient function with the same arguments as we would call the simulation.
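The calling convention can be illustrated with a toy example. Below, `simulate` is a hypothetical stand-in for the simulation-plus-loss function, and the gradient with respect to its second argument (mirroring wrt=[1]) is approximated by central finite differences; ΦFlow's gradient functions use the backend's automatic differentiation instead.

```python
import numpy as np

def simulate(inflow, velocity):
    # Toy stand-in for simulation followed by a scalar loss (hypothetical).
    state = velocity * 2.0 + inflow
    return np.sum(state ** 2)

def gradient_wrt_velocity(inflow, velocity, eps=1e-5):
    """Central finite differences w.r.t. the second argument; the
    result has the same shape as `velocity`, like an autodiff gradient."""
    grad = np.zeros_like(velocity)
    for i in np.ndindex(velocity.shape):
        up, down = velocity.copy(), velocity.copy()
        up[i] += eps
        down[i] -= eps
        grad[i] = (simulate(inflow, up) - simulate(inflow, down)) / (2 * eps)
    return grad
```

Like the gradient function in the text, `gradient_wrt_velocity` is called with the same arguments as `simulate` itself.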

With the gradient, we can easily perform basic gradient descent optimization. For more advanced optimization techniques and neural network training, see the optimization documentation.
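A gradient descent update simply subtracts the gradient scaled by a learning rate. A minimal NumPy sketch on a quadratic loss (all names here are illustrative, standing in for the velocity update in the simulation):

```python
import numpy as np

target = np.array([1.0, -2.0, 0.5])  # stand-in for the reference state
velocity = np.zeros(3)               # variable being optimized
learning_rate = 0.1

for step in range(100):
    grad = 2 * (velocity - target)    # gradient of |velocity - target|^2
    velocity -= learning_rate * grad  # descend along the gradient
```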

This notebook provided an introduction to running fluid simulations in NumPy and TensorFlow. It demonstrated how to obtain simulation gradients which can be used to optimize physical variables or train neural networks.

The full ΦFlow documentation is available at https://tum-pbs.github.io/PhiFlow/.

Visit the playground to run ΦFlow code in an empty notebook.