Introduction to Tensors in ΦFlow¶

▶️ Introduction Video

Like NumPy, TensorFlow, PyTorch and JAX, ΦFlow provides a Tensor class. There are, however, two crucial differences:

  1. ΦFlow tensors are backed by tensors of one of the other libraries, referred to as native tensors. All ΦFlow functions get translated into basic native operations.
  2. Dimensions of ΦFlow tensors have names and types. They are always referred to by name, never by index; reshaping takes place under the hood. Elements along a dimension can also be named.
In [1]:
from phi.flow import *
# from phi.torch.flow import *
# from phi.tf.flow import *
# from phi.jax.flow import *

Every dimension is assigned one of four types: channel, spatial, instance or batch (abbreviated c, s, i, b).
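
To get a feel for the four types, here is a minimal sketch that merges one dimension of each type into a single shape using & (the names examples, x, y, points and vector are illustrative):

s = batch(examples=2) & spatial(x=4, y=3) & instance(points=5) & channel(vector='x,y')
print(s)  # each dimension's type shows up as a superscript: b, s, i, c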

Channel dimensions enumerate properties of a single object, be it the velocity components of a particle or grid cell, or the RGB components of a pixel.

Let's create a vector using math.vec. This creates a Tensor with a channel dimension called vector.

In [2]:
point = vec(x=1, y=0)
print(f"{point:full:shape}")
(vectorᶜ=x,y)
 1, 0

Here, point is a Tensor with one channel (c) dimension named vector that lists two components, x and y. Without the above formatting options, the vector will be printed in a more concise format.

In [3]:
point
Out[3]:
(x=1, y=0) int64

We can use the built-in visualization tools to plot our point.

In [4]:
plot(point)
Out[4]:

Alternatively, we can wrap an existing native tensor. Then we have to specify the full shape.

In [5]:
wrap([1, 0], channel(vector='x,y'))
Out[5]:
(x=1, y=0) int64

Next, let's create a collection of points at random locations. We can use an instance dimension to list the points.

In [6]:
points = math.random_normal(instance(points=3), channel(vector='x,y'))
points
Out[6]:
(x=-0.290, y=0.468); (x=0.466, y=0.235); (x=0.312, y=0.843) (pointsⁱ=3, vectorᶜ=x,y)
In [7]:
plot(points)
Out[7]:

ΦFlow provides a concise syntax for accessing elements or slices of the tensor: value.dimension[slice]. This syntax can also be used on all ΦFlow objects, not just tensors.

In [8]:
points.points[0]
Out[8]:
(x=-0.290, y=0.468)

Since we have assigned item names to the vector dimension, we can access slices by name as well.

In [9]:
points.vector['x']
Out[9]:
(-0.290, 0.466, 0.312) along pointsⁱ

There is an even shorter notation specifically for channel dimensions.

In [10]:
points['x']
Out[10]:
(-0.290, 0.466, 0.312) along pointsⁱ

To slice multiple dimensions, repeat the above syntax or pass a dictionary.

In [11]:
points.points[0].vector['x']
Out[11]:
-0.2895581
In [12]:
points[{'points': 0, 'vector': 'x'}]
Out[12]:
-0.2895581
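
Regular Python slices work as well. As a quick sketch, reusing points from above:

first_two = points.points[0:2]  # slice along the instance dimension
print(first_two.shape)  # (pointsⁱ=2, vectorᶜ=x,y); the vector dimension is kept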

Spatial dimensions represent data sampled at regular intervals, such as the values on a grid. Tensors with spatial dimensions are interpreted as grids, and the higher-level grid classes make use of them (see the sketch below). The grid resolution is given by the sizes of the spatial dimensions.

In [13]:
grid = math.random_uniform(spatial(x=10))
grid
Out[13]:
(xˢ=10) 0.418 ± 0.267 (7e-02...8e-01)
In [14]:
plot(grid)
Out[14]:
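
As a sketch of how the grid classes build on spatial tensors, the random values above could back a CenteredGrid; the extrapolation and bounds below are illustrative assumptions:

field = CenteredGrid(grid, extrapolation.ZERO, bounds=Box(x=1))  # resolution inferred from the spatial dims of grid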

The number of spatial dimensions equals the dimensionality of the physical space.

In [15]:
plot(math.random_uniform(spatial(x=10, y=8)))
Out[15]:

Tensors need not have identical shapes to be combined. ΦFlow automatically adds missing dimensions, i.e. a tensor behaves as if it were constant along every dimension not listed in its shape.

In [16]:
plot(math.random_uniform(spatial(x=10)) * math.random_uniform(spatial(y=8)))
Out[16]:
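
To make the implicit broadcasting explicit, we can inspect the shape of such a product (a quick sketch):

a = math.random_uniform(spatial(x=10))
b = math.random_uniform(spatial(y=8))
print((a * b).shape)  # (xˢ=10, yˢ=8); the missing dimensions were added automatically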

Finally, batch dimensions are the primary method of running computations in parallel. Slices along batch dimensions do not interact at all. Batch dimensions replace functions like vmap that exist in other frameworks.

In [17]:
plot(math.random_uniform(batch(examples=4), spatial(x=10, y=8)), show_color_bar=False)
Out[17]:
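
As a sketch of this independence, reducing over the spatial dimensions yields one value per batch entry, without any interaction across examples:

data = math.random_uniform(batch(examples=4), spatial(x=10, y=8))
print(math.sum(data, 'x,y'))  # four independent sums, shape (examplesᵇ=4)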

Tensors can be created from NumPy arrays as well as PyTorch, TensorFlow and JAX arrays. The dimension types and names need to be specified.

In [18]:
t = tensor(np.zeros((4, 32, 32)), batch('b'), spatial('x,y'))
t
Out[18]:
(bᵇ=4, xˢ=32, yˢ=32) float64 const 0.0

While tensor() converts the data to the default backend (selected by the phi.<backend>.flow import), wrap() keeps the data as-is, only wrapping Python lists and scalars in NumPy arrays.

In [19]:
wrap(np.zeros((4, 32, 32)), batch('b'), spatial('x,y'))
Out[19]:
(bᵇ=4, xˢ=32, yˢ=32) float64 const 0.0
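
As a quick sanity check (a sketch): with a non-NumPy backend such as phi.torch.flow, tensor() would have converted the array, whereas wrap() keeps the original NumPy data:

w = wrap(np.zeros((4, 32, 32)), batch('b'), spatial('x,y'))
print(type(w.native('b,x,y')))  # numpy.ndarray; wrap() kept the array as-is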

To retrieve the native version of a tensor, use .native(order), passing in the desired dimension order.

In [20]:
t.native('b,y,x')
Out[20]:
array([[[0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.]],

       ...,

       [[0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.]]])

The NumPy representation can be retrieved the same way.

In [21]:
t.numpy('b,y,x')
Out[21]:
array([[[0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.]],

       ...,

       [[0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.]]])

Further Reading¶

  • Φ.math Overview
  • Optimization and Training: Automatic differentiation, neural network training
  • Performance: GPU, JIT compilation, profiler