Like NumPy, TensorFlow, PyTorch and JAX, ΦFlow provides a Tensor class.
There are, however, crucial differences: dimensions are referenced by name rather than by index, and every dimension is assigned a type.
from phi.flow import *
# from phi.torch.flow import *
# from phi.tf.flow import *
# from phi.jax.flow import *
Every dimension is assigned one of four types: channel, spatial, instance or batch (abbreviated as c, s, i, b).
Channel dimensions enumerate properties of a single object, be it the velocity components of a particle or grid cell, or the RGB components of a pixel.
Let's create a vector using math.vec. This creates a Tensor with a channel dimension called vector.
point = vec(x=1, y=0)
print(f"{point:full:shape}")
(vectorᶜ=x,y)
1, 0
Here, point is a Tensor with one channel (c) dimension named vector that lists two components, x and y.
Without the above formatting options, the vector will be printed in a more concise format.
point
(x=1, y=0) int64
We can use the built-in visualization tools to plot our point.
plot(point)
Alternatively, we can wrap an existing native tensor. Then we have to specify the full shape.
wrap([1, 0], channel(vector='x,y'))
(x=1, y=0) int64
Next, let's create a collection of points at random locations. We can use an instance dimension to list the points.
points = math.random_normal(instance(points=3), channel(vector='x,y'))
points
(x=2.525, y=-0.804); (x=-1.636, y=1.658); (x=-0.293, y=-2.985) (pointsⁱ=3, vectorᶜ=x,y)
plot(points)
ΦFlow provides a concise syntax for accessing elements or slices of a tensor: value.dimension[slice]. This syntax can also be used on all ΦFlow objects, not just tensors.
points.points[0]
(x=2.525, y=-0.804)
Since we have assigned item names to the vector dimension, we can access slices by name as well.
points.vector['x']
(2.525, -1.636, -0.293) along pointsⁱ
There is an even shorter notation specifically for channel dimensions.
points['x']
(2.525, -1.636, -0.293) along pointsⁱ
To slice multiple dimensions, repeat the above syntax or pass a dictionary.
points.points[0].vector['x']
2.5249567
points[{'points': 0, 'vector': 'x'}]
2.5249567
Spatial dimensions represent data sampled at regular intervals. Tensors with spatial dimensions are interpreted as grids, and the higher-level grid classes make use of them. The grid resolution is equal to the sizes of the spatial dimensions.
grid = math.random_uniform(spatial(x=10))
grid
(xˢ=10) 0.396 ± 0.281 (7e-03...9e-01)
plot(grid)
The number of spatial dimensions equals the dimensionality of the physical space.
plot(math.random_uniform(spatial(x=10, y=8)))
Tensors need not have identical shapes to be operated on jointly. ΦFlow automatically adds missing dimensions to tensors, i.e. tensors behave as if they were constant along every dimension that is not listed in their respective shapes.
plot(math.random_uniform(spatial(x=10)) * math.random_uniform(spatial(y=8)))
Finally, batch dimensions are the primary method of running computations in parallel.
Slices along batch dimensions do not interact at all.
Batch dimensions replace functions like vmap that exist in other frameworks.
plot(math.random_uniform(batch(examples=4), spatial(x=10, y=8)), show_color_bar=False)
Tensors can be created from NumPy arrays as well as PyTorch, TensorFlow and Jax arrays. The dimension types and names need to be specified.
t = tensor(np.zeros((4, 32, 32)), batch('b'), spatial('x,y'))
t
(bᵇ=4, xˢ=32, yˢ=32) float64 const 0.0
While tensor() automatically converts the data to the default backend (specified by the phi.*.flow import), wrap() keeps the data as-is or wraps it in a NumPy array.
wrap(np.zeros((4, 32, 32)), batch('b'), spatial('x,y'))
(bᵇ=4, xˢ=32, yˢ=32) float64 const 0.0
To retrieve the native version of a tensor, use .native(order), passing in the desired dimension order.
t.native('b,y,x')
array([[[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]]])
The NumPy representation can be retrieved the same way.
t.numpy('b,y,x')
array([[[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]]])