Writing Fluid Simulations in ΦFlow¶

Google Colab Notebook

There are two main viewpoints for simulating fluids:

  • Eulerian simulations use grids, tracking fluid distribution and fluid velocity at fixed sample points.
  • Lagrangian simulations track particles that move with the fluid.

ΦFlow supports both methods to some extent but mainly focuses on Eulerian simulations.
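
To make the distinction concrete, here is a minimal sketch of both state representations (the PointCloud construction is an illustrative assumption; exact arguments may vary between ΦFlow versions):

grid_state = CenteredGrid(0, 'periodic', x=64, y=96)  # Eulerian: values stored at fixed cells
points = math.random_uniform(instance(particles=100), channel(vector='x,y')) * vec(x=64, y=96)
particle_state = PointCloud(points, bounds=Box(x=64, y=96))  # Lagrangian: values carried by moving particles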

Before we discuss the various operations required for fluid simulations, let's define our variables and initial state. In this case, we create a 64×96 periodic grid and sample the velocity vectors in staggered form; a marker transported by the flow would be sampled at the cell centroids.

In [1]:
from tqdm.notebook import trange
from phi.jax.flow import *  # imports sub-modules + core classes

velocity = StaggeredGrid(Noise(), 'periodic', x=64, y=96)
plot({"velocity": velocity, "vorticity": field.curl(velocity)})

Operator Splitting¶

The Navier-Stokes equations for fluids, $\frac{\partial u}{\partial t} = - (u \cdot \nabla) u + \nu \nabla^2 u - \frac{1}{\rho} \nabla p + g$, comprise multiple terms.

Operator splitting enables fast and stable fluid simulations by evaluating the individual terms sequentially. ΦFlow provides functions to compute each of them (the resulting update scheme is written out after this list):

  • Advection: advect.semi_lagrangian [Stam 1999], advect.mac_cormack [MacCormack 2002]
  • Diffusion: diffuse.explicit, diffuse.implicit
  • Pressure projection: fluid.make_incompressible [Chorin and Temam 1968]
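
Written out, one splitting step with time increment $\Delta t$ chains these updates (matching the code below; $\mathcal{P}$ denotes the pressure projection):

$$u^{*} = \mathrm{advect}(u^n, u^n, \Delta t), \qquad u^{**} = \mathrm{diffuse}(u^{*}, \nu, \Delta t), \qquad u^{n+1} = \mathcal{P}(u^{**})$$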

All of these functions take a state variable and return the new state after a time increment dt has passed. In the following example, the velocity is self-advected, diffused, and made incompressible; the passive advection of a marker is sketched below the cell.

In [2]:
@jit_compile
def operator_split_step(v, p, dt, viscosity=0.1):
    v = advect.semi_lagrangian(v, v, dt)  # velocity self-advection
    v = diffuse.explicit(v, viscosity, dt)  # explicit viscosity (diffusion) step
    v, p = fluid.make_incompressible(v, (), Solve(x0=p, rank_deficiency=0))  # pressure projection via linear solve
    return v, p

velocity0, pressure0 = fluid.make_incompressible(velocity)
velocity1, pressure1 = operator_split_step(velocity0, None, dt=1.)
plot({'initial vorticity': field.curl(velocity0), 'after step': field.curl(velocity1)})
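
The marker mentioned above is transported passively with the flow. A minimal sketch, assuming a noise-initialized scalar marker (not created in the cell above):

marker = CenteredGrid(Noise(), 'periodic', x=64, y=96)  # scalar marker sampled at cell centers
marker = advect.semi_lagrangian(marker, velocity0, dt=1.)  # passively advected by the velocity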

We can use iterate to compute a trajectory by repeatedly calling operator_split_step. All intermediate states are stacked along the specified dimension, which we call time.

In [4]:
velocity_trj, pressure_trj = iterate(operator_split_step, batch(time=100), velocity0, pressure0, dt=1., range=trange)

Alternatively, we could have written a for loop, added all intermediate states to a list, and stacked the results afterward. A minimal sketch of that manual variant (the handling of the initial state may differ slightly from iterate):
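
velocities = [velocity0]
pressures = [pressure0]
for _ in trange(100):
    v, p = operator_split_step(velocities[-1], pressures[-1], dt=1.)
    velocities.append(v)
    pressures.append(p)
velocity_trj = field.stack(velocities, batch('time'))  # stack all frames along a new 'time' dimension

Now, let's plot this trajectory by animating the time dimension.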

In [5]:
plot(field.curl(velocity_trj), animate='time', same_scale=False)

Higher-Order Simulations¶

The operator-splitting approach is not compatible with more accurate numerical schemes. For higher accuracy, we can use higher-order schemes in both space and time. In that case, we define a momentum equation that computes the PDE terms directly, without integrating them in time. The following example computes explicit fourth-order-accurate advection and diffusion.

In [6]:
def momentum_equation(v, viscosity=0.1):
    advection = advect.finite_difference(v, v, order=4, implicit=None)  # 4th-order advection term
    diffusion = diffuse.finite_difference(v, viscosity, order=4, implicit=None)  # 4th-order diffusion term
    return advection + diffusion

Next, we perform time integration with the incompressibility constraint. This is considerably more expensive than the previous approach but yields much more accurate results.

In [7]:
@jit_compile
def rk4_step(v, p, dt):
    return fluid.incompressible_rk4(momentum_equation, v, p, dt, pressure_order=4)

velocity = CenteredGrid(Noise(vector='x,y'), 'periodic', x=64, y=96)
velocity0, pressure0 = fluid.make_incompressible(velocity, order=4)
velocity_trj, pressure_trj = iterate(rk4_step, batch(time=100), velocity0, pressure0, dt=.5, substeps=2, range=trange)  # 2 steps per recorded frame
plot(field.curl(velocity_trj), animate='time', same_scale=False)
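
To inspect an individual frame, we can index the time dimension of the trajectory (a brief usage sketch):

final_vorticity = field.curl(velocity_trj.time[-1])  # select the last recorded frame
plot(final_vorticity)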

Further Reading¶

The Kolmogorov flow notebook shows higher-order fluid flow with forcing.

A comparison of various schemes in terms of accuracy and performance is given here.

Coupling between centered and staggered fields can be seen in the smoke plume notebook and Python script.