Why Use ΦML's Tensors¶

Colab   •   🌐 ΦML   •   📖 Documentation   •   🔗 API   •   ▶ Videos   •   Examples

While you can call many ΦML functions directly with native tensors, such as JAX tensors or NumPy arrays, we recommend wrapping them in ΦML tensors. ΦML tensors provide several benefits over the native ones and let you write code that is easier to read, more concise, more explicit, and less error-prone.

For an introduction to tensors and dimensions, check out the introduction notebook.

This notebook is a work in progress. It will explain

  • comparisons to native libraries, and
  • how ΦML code ends up easier to read, more concise, more explicit, and less error-prone.
In [1]:
%%capture
!pip install phiml

Named Dimensions¶

All tensor dimensions in ΦML are required to have a name and type. These properties are part of the tensor shape. When creating a ΦML tensor, you need to specify the names and types of all dimensions.

In [2]:
from phiml import math
from phiml.math import channel, batch, spatial, instance, dual  # dimension types

math.wrap([0, 1, 2], channel('integers'))
Out[2]:
(0, 1, 2) along integersᶜ int64
In [3]:
data = math.random_uniform(batch(examples=2), spatial(x=4, y=3))
data
Out[3]:
(examplesᵇ=2, xˢ=4, yˢ=3) 0.651 ± 0.253 (2e-01...1e+00)

Printing Options¶

As you can see, ΦML summarizes tensors by default and color-codes the output. Python's format specifiers let you customize how a tensor is printed, with multiple options separated by colons. Here are some examples:

In [4]:
print(f"{data:summary:color:shape:dtype:.5e}")
(examplesᵇ=2, xˢ=4, yˢ=3) float32 6.51401e-01 ± 2.52943e-01 (1.57866e-01...9.89369e-01)
In [5]:
print(f"{data:full:color:shape:dtype:.3f}")
examples=0  
 0.718, 0.831, 0.758, 0.983,
 0.545, 0.612, 0.190, 0.443,
 0.709, 0.339, 0.835, 0.989  along (xˢ=4, yˢ=3)
examples=1  
 0.596, 0.158, 0.454, 0.887,
 0.950, 0.887, 0.919, 0.868,
 0.675, 0.193, 0.441, 0.653  along (xˢ=4, yˢ=3)
In [6]:
print(f"{data:numpy:no-color:no-shape:no-dtype:.2f}")
[[[0.71 0.54 0.72]
  [0.34 0.61 0.83]
  [0.84 0.19 0.76]
  [0.99 0.44 0.98]]

 [[0.68 0.95 0.60]
  [0.19 0.89 0.16]
  [0.44 0.92 0.45]
  [0.65 0.87 0.89]]]

The order of the formatting arguments is not important. Supported options are:

Layout: The layout determines what is printed and where. The following options are available:

  • summary Summarizes the values by mean, standard deviation, minimum and maximum value.
  • row Prints the tensor as a single-line vector.
  • full Prints all values in the tensors as a multi-line string.
  • numpy Uses the formatting of NumPy.

Number format: You can additionally specify a format string for floating-point numbers like .3f or .2e.

Color: Use the keywords color or no-color. Currently, color uses ANSI color codes, which are supported by most terminals and IDEs, as well as Jupyter notebooks.

Additional tensor information: The keywords shape, no-shape, dtype and no-dtype can be used to show or hide additional properties of the tensor.

Wrapping and Unwrapping¶

You can wrap existing tensors in ΦML tensors using wrap() or tensor(). While tensor() will convert the data to the default backend, wrap() will keep the data as-is. In either case, you need to specify the dimension names and types when wrapping a native tensor.

In [7]:
math.use('torch')
math.tensor([0, 1, 2], batch('examples'))
Out[7]:
(0, 1, 2) along examplesᵇ int64

To unwrap a tensor, use tensor.native(), or math.reshaped_native() for more control over the result shape. In both cases, the desired dimension order must be specified.

In [8]:
data.native('examples,x,y')
Out[8]:
array([[[0.7093237 , 0.54471785, 0.7179703 ],
        [0.3387153 , 0.6119882 , 0.8313931 ],
        [0.8354173 , 0.1900304 , 0.75763   ],
        [0.9893687 , 0.44290337, 0.9826485 ]],

       [[0.6751512 , 0.95048845, 0.59555596],
        [0.1932187 , 0.88710886, 0.15786637],
        [0.44071805, 0.9188371 , 0.4544151 ],
        [0.6534557 , 0.8680303 , 0.8866782 ]]], dtype=float32)

Similarly, you can get the NumPy representation:

In [9]:
data.numpy('examples,x,y')
Out[9]:
array([[[0.7093237 , 0.54471785, 0.7179703 ],
        [0.3387153 , 0.6119882 , 0.8313931 ],
        [0.8354173 , 0.1900304 , 0.75763   ],
        [0.9893687 , 0.44290337, 0.9826485 ]],

       [[0.6751512 , 0.95048845, 0.59555596],
        [0.1932187 , 0.88710886, 0.15786637],
        [0.44071805, 0.9188371 , 0.4544151 ],
        [0.6534557 , 0.8680303 , 0.8866782 ]]], dtype=float32)

Further Reading¶

Check out the examples to see how using ΦML's tensors is different from the other libraries.

Learn more about the dimension types and their advantages.

ΦML also unifies data types and lets you set the floating-point precision globally or by context.

While the dimensionality of neural networks must be specified during network creation, math functions automatically adapt to the number of spatial dimensions of the data they are given.

🌐 ΦML   •   📖 Documentation   •   🔗 API   •   ▶ Videos   •   Examples