ΦFlow Cookbook

This notebook collects useful code snippets for common ΦFlow tasks.

Imports for NumPy, TensorFlow, Jax, PyTorch

In [1]:
from phi.flow import *
from phi.tf.flow import *
from phi.jax.stax.flow import *
from phi.torch.flow import *
2023-08-29 12:50:30.780713: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-08-29 12:50:30.830099: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-08-29 12:50:30.831296: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-08-29 12:50:31.675608: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
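
When several backend modules are imported like this, the most recently imported backend becomes the default (PyTorch here, as the device listing below confirms). To check which backend is active:

backend.default_backend()  # prints the active backend, e.g. torch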

Select GPU or CPU

In [2]:
backend.default_backend().list_devices('GPU')
Out[2]:
[]
In [3]:
backend.default_backend().list_devices('CPU')
Out[3]:
[torch device 'CPU' (CPU 'cpu') | 6932 MB | 2 processors | ]
In [4]:
assert backend.default_backend().set_default_device('CPU')
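
On a machine with a GPU, the same two calls select it. A minimal sketch using only the functions shown above:

if backend.default_backend().list_devices('GPU'):
    backend.default_backend().set_default_device('GPU')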

Use 64-Bit Floating-Point Precision

In [5]:
math.set_global_precision(32)  # single precision is the default
x32 = math.random_normal(batch(b=4))

with math.precision(64):  # operations within this context will use 64 bit floats
    x64 = math.to_float(x32)
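
To verify which precision a tensor uses, its dtype can be inspected. A short sketch, assuming the DType object exposes the bit count as .bits:

assert x32.dtype.bits == 32  # created under the global 32-bit setting
assert x64.dtype.bits == 64  # converted inside the 64-bit context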

Sample Random Values

In [6]:
data = math.random_normal(batch(examples=10)) * .1  # batch of scalar values
data = math.random_uniform(batch(examples=10), channel(vector='x,y'))  # batch of vectors
data
Out[6]:
(examplesᵇ=10, vectorᶜ=x,y) 0.480 ± 0.305 (2e-02...1e+00)

Slice a Tensor

In [7]:
data.examples[0]
Out[7]:
(x=0.191, y=0.595)
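
Dimensions can also be sliced by item name or by range, using standard phiml slicing on the tensor from above:

data.vector['y']              # 'y' component of all examples
data.examples[1:3]            # range slice along the batch dimension
data.examples[0].vector['x']  # single scalar entry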

Print a Tensor

In [8]:
print(data)
print(f"{data:full:shape:dtype:color:.1f}")
(examplesᵇ=10, vectorᶜ=x,y) 0.480 ± 0.305 (2e-02...1e+00)
(examplesᵇ=10, vectorᶜ=x,y)
[[0.2, 0.6],
 [0.1, 0.7],
 [1.0, 0.4],
 [0.3, 0.3],
 [0.1, 0.3],
 [0.2, 0.7],
 [0.9, 0.9],
 [0.6, 0.6],
 [0.5, 0.0],
 [1.0, 0.3]]

Plot a Tensor

In [9]:
data = math.random_uniform(spatial(x=8, y=6))
vis.plot(data)  # or vis.show(data)
Out[9]:
<Figure size 1200x500 with 2 Axes>

Convert a Tensor to NumPy

In [10]:
data.numpy(order='x,y')
Out[10]:
array([[0.45785683, 0.08527088, 0.5096399 , 0.7802357 , 0.06318313,
        0.21213293],
       [0.64608747, 0.63206434, 0.91747266, 0.8138582 , 0.00503629,
        0.26304227],
       [0.8486472 , 0.9666401 , 0.17502236, 0.1325391 , 0.5373124 ,
        0.20725596],
       [0.2963305 , 0.05790192, 0.23139203, 0.08626831, 0.6218854 ,
        0.77004516],
       [0.5287756 , 0.8242924 , 0.69254345, 0.54871184, 0.5790198 ,
        0.91739386],
       [0.74913627, 0.49778354, 0.48441988, 0.45968515, 0.08384019,
        0.687513  ],
       [0.30472183, 0.66141444, 0.26227504, 0.86061823, 0.28071064,
        0.64048195],
       [0.72425216, 0.7381197 , 0.2096594 , 0.65706366, 0.44790828,
        0.80129623]], dtype=float32)
In [11]:
math.reshaped_native(data, ['extra', data.shape], to_numpy=True)
Out[11]:
array([[0.45785683, 0.08527088, 0.5096399 , 0.7802357 , 0.06318313,
        0.21213293, 0.64608747, 0.63206434, 0.91747266, 0.8138582 ,
        0.00503629, 0.26304227, 0.8486472 , 0.9666401 , 0.17502236,
        0.1325391 , 0.5373124 , 0.20725596, 0.2963305 , 0.05790192,
        0.23139203, 0.08626831, 0.6218854 , 0.77004516, 0.5287756 ,
        0.8242924 , 0.69254345, 0.54871184, 0.5790198 , 0.91739386,
        0.74913627, 0.49778354, 0.48441988, 0.45968515, 0.08384019,
        0.687513  , 0.30472183, 0.66141444, 0.26227504, 0.86061823,
        0.28071064, 0.64048195, 0.72425216, 0.7381197 , 0.2096594 ,
        0.65706366, 0.44790828, 0.80129623]], dtype=float32)
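
The reverse direction wraps a NumPy array back into a tensor with named dimensions, the same pattern used for the StaggeredGrid construction below:

restored = math.tensor(data.numpy(order='x,y'), spatial('x,y'))
math.assert_close(restored, data)  # round trip preserves the values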

Compute Pairwise Distances

In [12]:
points = math.tensor([(0, 0), (0, 1), (1, 0)], instance('points'), channel('vector'))
distances = points - math.rename_dims(points, 'points', 'others')
math.print(math.vec_length(distances))
[[0.       , 1.       , 1.       ],
 [1.       , 0.       , 1.4142135],
 [1.       , 1.4142135, 0.       ]]
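
To find each point's nearest neighbor, the zero self-distances can be masked out first. A sketch, assuming math.where and math.min behave as in standard phiml:

dist = math.vec_length(distances)
dist = math.where(dist == 0, float('inf'), dist)  # mask out self-distances
nearest_dist = math.min(dist, 'others')  # per-point distance to the closest other point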

Construct a CenteredGrid

In [13]:
zero_grid = CenteredGrid(0, 0, x=32, y=32, bounds=Box(x=1, y=1))
y_grid = CenteredGrid((0, 1), extrapolation.BOUNDARY, x=32, y=32)
noise_grid = CenteredGrid(Noise(), extrapolation.PERIODIC, x=32, y=32)
sin_curve = CenteredGrid(lambda x: math.sin(x), extrapolation.PERIODIC, x=100, bounds=Box(x=2 * PI))

vis.plot(zero_grid, y_grid, noise_grid, sin_curve, size=(12, 3))
Out[13]:
<Figure size 1200x300 with 6 Axes>

Construct a StaggeredGrid

In [14]:
zero_grid = StaggeredGrid(0, 0, x=32, y=32, bounds=Box(x=1, y=1))
y_grid = StaggeredGrid((0, 1), extrapolation.BOUNDARY, x=32, y=32)
noise_grid = StaggeredGrid(Noise(), extrapolation.PERIODIC, x=32, y=32)
sin_curve = StaggeredGrid(lambda x: math.sin(x), extrapolation.PERIODIC, x=100, bounds=Box(x=2 * PI))

vis.plot(zero_grid, y_grid, noise_grid, sin_curve, size=(12, 3))
Out[14]:
<Figure size 1200x300 with 4 Axes>

Construct a StaggeredGrid from NumPy Arrays

Given matching arrays vx and vy, we can construct a StaggeredGrid. Note that the shapes of the arrays must match the extrapolation: with BOUNDARY extrapolation, each component has one extra sample along its own dimension (33×32 for vx below); with PERIODIC, the sizes equal the resolution (32×32); with extrapolation 0, each component has one sample fewer (31×32 for vx).

In [15]:
vx = math.tensor(np.zeros([33, 32]), spatial('x,y'))
vy = math.tensor(np.zeros([32, 33]), spatial('x,y'))
StaggeredGrid(math.stack([vx, vy], channel('vector')), extrapolation.BOUNDARY)

vx = math.tensor(np.zeros([32, 32]), spatial('x,y'))
vy = math.tensor(np.zeros([32, 32]), spatial('x,y'))
StaggeredGrid(math.stack([vx, vy], channel('vector')), extrapolation.PERIODIC)

vx = math.tensor(np.zeros([31, 32]), spatial('x,y'))
vy = math.tensor(np.zeros([32, 31]), spatial('x,y'))
StaggeredGrid(math.stack([vx, vy], channel('vector')), 0)
Out[15]:
StaggeredGrid[(xˢ=32, yˢ=32, vectorᶜ=2), size=(x=32, y=32) int64, extrapolation=0]

BFGS Optimization

In [16]:
def loss_function(x):
    return math.l2_loss(math.cos(x))

initial_guess = math.tensor([1, -1], math.batch('batch'))
math.minimize(loss_function, Solve('L-BFGS-B', 0, 1e-3, x0=initial_guess))
Out[16]:
(1.574, -1.574) along batchᵇ

Linear Solve

In [17]:
def f(x):
    return 2 * x

y = math.ones(spatial(x=3)) * 84.  # y must be a Tensor, not a Python scalar
solution = math.solve_linear(f, y, Solve('CG', 1e-5, x0=0 * y))  # solves 2x = 84, giving 42 everywhere

Sparse Matrix Construction

In [18]:
from functools import partial

periodic_laplace = partial(math.laplace, padding=extrapolation.PERIODIC)
example_input = math.ones(spatial(x=3))
matrix, bias = math.matrix_from_function(periodic_laplace, example_input)
math.print(matrix)
x=0     -2.   1.   1.  along ~x
x=1      1.  -2.   1.  along ~x
x=2      1.   1.  -2.  along ~x
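
As a sanity check, the extracted matrix and bias should reproduce the traced function. A sketch, assuming @ performs the (sparse) matrix-vector product over the dual dimension:

math.assert_close(matrix @ example_input + bias, periodic_laplace(example_input))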

Sampling a Function

In [19]:
def f(x):
    return math.l2_loss(math.sin(x))

f_grid = CenteredGrid(f, x=100, y=100, bounds=Box(x=2*PI, y=2*PI))
vis.plot(f_grid)
Out[19]:
<Figure size 1200x500 with 2 Axes>

Plot Optimization Trajectories

In [20]:
def minimize(x0):
    with math.SolveTape(record_trajectories=True) as solves:
        math.minimize(f, Solve('BFGS', 0, 1e-5, x0=x0))
    return solves[0].x  # shape (trajectory, x, y, vector)

trajectories = CenteredGrid(minimize, x=8, y=8, bounds=Box(x=2*PI, y=2*PI)).values
segments = []
for start, end in zip(trajectories.trajectory[:-1].trajectory, trajectories.trajectory[1:].trajectory):
    segments.append(PointCloud(start, end - start, bounds=Box(x=2*PI, y=2*PI)))
anim_segments = field.stack(segments, batch('time'))
vis.plot(f_grid, anim_segments, overlay='args', animate='time', color='#FFFFFF', frame_time=500)
/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/phiml/math/_ops.py:446: RuntimeWarning: pack_dims() default implementation is slow on large dimensions ((_cᶜ=4096)). Please implement __unpack_dim__() for Layout as defined in phiml.math.magic
  return unpack_dim(stack(result, channel('_c')) if isinstance(result, Shapable) else wrap(result, channel('_c')), '_c', shape)
Out[20]:
[Animation: optimization trajectories plotted as white segments over f_grid]

Neural Network Training

In [21]:
net = dense_net(1, 1, layers=[8, 8], activation='ReLU')  # Implemented for PyTorch, TensorFlow, Jax-Stax
optimizer = adam(net, 1e-3)
BATCH = batch(batch=100)

def loss_function(data: Tensor):
    prediction = math.native_call(net, data)
    label = math.sin(data)
    return math.l2_loss(prediction - label), data, label

print(f"Initial loss: {loss_function(math.random_normal(BATCH))[0]}")
for i in range(100):
    loss, _data, _label = update_weights(net, optimizer, loss_function, data=math.random_normal(BATCH))
print(f"Final loss: {loss}")
Initial loss: (batchᵇ=100) 0.195 ± 0.172 (2e-05...6e-01)
Final loss: (batchᵇ=100) 0.093 ± 0.120 (5e-07...4e-01)
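
After training, the network can be evaluated on new inputs the same way it is called inside the loss function:

test_x = math.random_normal(batch(batch=32))
prediction = math.native_call(net, test_x)  # should now approximate sin(test_x)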

Parameter Count

In [22]:
parameter_count(net)
Out[22]:
97