🌐 ΦML • 📖 Documentation • 🔗 API • ▶ Videos
Examples
While you can call many ΦML functions directly with native tensors, such as Jax tensors or NumPy arrays, we recommend wrapping them in ΦML tensors. These provide several benefits over the native tensors and allow you to write more readable, concise, explicit, and less error-prone code.
For an introduction to tensors and dimensions, check out the introduction notebook.
%%capture
!pip install phiml
All tensor dimensions in ΦML have a name and type which are part of the tensor shape. When creating a ΦML tensor, you specify the names and types of all dimensions.
from phiml import math, wrap, tensor
from phiml.math import channel, batch, spatial, instance, dual # dimension types
wrap([0, 1, 2], channel('integers'))
(0, 1, 2) along integersᶜ int64
data = math.random_uniform(batch(examples=2), spatial(x=4, y=3))
data
(examplesᵇ=2, xˢ=4, yˢ=3) 0.457 ± 0.305 (1e-02...1e+00)
With all dims named, ΦML can automatically match dims without requiring you to expand, squeeze or transpose.
Let's add steps to our data tensor along x.
steps = wrap([0, 1, 2, 3], 'x:s')
steps
(0, 1, 2, 3) along xˢ int64
math.print(data + steps)
examples=0
0.2915136 , 1.058228  , 2.9162776 , 3.8889802 ,
0.01447839, 1.8106928 , 2.8537886 , 3.2617612 ,
0.40987378, 1.3197136 , 2.2405822 , 3.9765415  along (xˢ=4, yˢ=3)
examples=1
0.37904504, 1.6151046 , 2.1647518 , 3.9279037 ,
0.15631448, 1.316977  , 2.4150724 , 3.0895126 ,
0.6713833 , 1.7011193 , 2.27419   , 3.2037597  along (xˢ=4, yˢ=3)
As you can see, the values in the nth column now lie in [n-1, n].
ΦML automatically expanded steps to the shape (1, steps, 1) before adding it to data.
This feature works with all ΦML functions. You can think of it in the following way:
Tensors implicitly have all possible dimensions, but their values are constant along those not listed in the shape.
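To make this concrete, here is a minimal sketch (the variable names and values are ours, purely for illustration): adding two tensors with disjoint dims broadcasts over both, as if each tensor were constant along the other's dims.
a = wrap([0, 10], channel('features'))  # shape (featuresᶜ=2)
b = wrap([1, 2, 3], spatial('x'))       # shape (xˢ=3)
a + b  # shape (featuresᶜ=2, xˢ=3): a acts constant along x, b constant along features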
A consequence of the automatic reshaping is that the dim order of ΦML tensors is irrelevant. In fact, you should never count on a specific order or single out a dimension by its index.
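For example (a small sketch; the arrays are illustrative), two tensors wrapped from transposed native arrays are considered equal as long as their named dims match:
import numpy as np
a = wrap(np.ones((2, 3)), spatial('x,y'))  # native shape (2, 3)
b = wrap(np.ones((3, 2)), spatial('y,x'))  # transposed native shape (3, 2)
math.close(a, b)  # True: only the dim names matter, not their order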
Let's look at another example, creating a meshgrid from scratch.
from phiml import stack, arange
stack({
'x': arange(spatial(x=4)),
'y': arange(spatial(y=3))
}, channel('vector'), expand_values=True)
(yˢ=3, xˢ=4, vectorᶜ=x,y) 1.250 ± 1.010 (0e+00...3e+00)
The preferred way of referencing dims, e.g. for slicing tensors, is by type or name.
data.examples[1].x[1:3].y[0]
(0.701, 0.274) along xˢ
data[{batch: 1, 'x': slice(1, 3), 'y': 0}]
(0.701, 0.274) along xˢ
Many functions will operate on dims of specific types by default, so you don't need to specify the axis.
math.fft(data) # operates on x and y because they are spatial
(examplesᵇ=2, xˢ=4, yˢ=3) complex64 |...| < 6.042431354522705
Batch dimensions are ignored by all functions unless explicitly specified. This makes it trivial to vectorize code without using something like vmap: simply pass batched input. An example of this can be seen here.
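As a minimal sketch (the function and values are ours, not part of the API), any function written for a single example automatically vectorizes when given input with a batch dim:
def polynomial(x):  # written without any batch handling
    return 2 * x ** 2 + x

polynomial(wrap([1., 2., 3.], batch('examples')))  # evaluated for all examples at once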
Functions that only require you to specify the name and type, but not the size, of dims allow you to write <name>:<type_letter> instead of passing a Shape object. Single-letter names additionally default to type spatial, and item names can be specified in parentheses.
wrap([1, 2, 3], 'x')
(1, 2, 3) along xˢ int64
tensor([1, 2, 3], 'vector:c')
(1, 2, 3) int64
stack([1, 2, 3], 'example:b')
(1, 2, 3) along exampleᵇ int64
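The examples above cover the name:type shorthand. For item names, here is a hedged sketch using the channel() constructor instead of the string form; passing a string value assigns item names that can later be used for slicing (the names x, y are illustrative):
pos = wrap([1.0, 2.0], channel(vector='x,y'))  # channel dim 'vector' with item names x, y
pos.vector['y']  # select a component by its item name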
As you can see, ΦML summarizes tensors by default and color-codes the result text. The Python formatting options let you customize how a tensor is printed, with options separated by colons. Here are some examples:
print(f"{data:summary:color:shape:dtype:.5e}")
(examplesᵇ=2, xˢ=4, yˢ=3) float32 4.56565e-01 ± 3.04577e-01 (1.44784e-02...9.76542e-01)
print(f"{data:full:color:shape:dtype:.3f}")
examples=0
0.292, 0.058, 0.916, 0.889,
0.014, 0.811, 0.854, 0.262,
0.410, 0.320, 0.241, 0.977  along (xˢ=4, yˢ=3)
examples=1
0.379, 0.615, 0.165, 0.928,
0.156, 0.317, 0.415, 0.090,
0.671, 0.701, 0.274, 0.204  along (xˢ=4, yˢ=3)
print(f"{data:numpy:no-color:no-shape:no-dtype:.2f}")
[[[0.41 0.01 0.29]
  [0.32 0.81 0.06]
  [0.24 0.85 0.92]
  [0.98 0.26 0.89]]

 [[0.67 0.16 0.38]
  [0.70 0.32 0.62]
  [0.27 0.42 0.16]
  [0.20 0.09 0.93]]]
The order of the formatting arguments is not important. Supported options are:
• Layout: The layout determines what is printed and where. The following options are available:
  • summary: Summarizes the values by mean, standard deviation, minimum and maximum value.
  • row: Prints the tensor as a single-line vector.
  • full: Prints all values in the tensor as a multi-line string.
  • numpy: Uses the formatting of NumPy.
• Number format: You can additionally specify a format string for floating-point numbers, like .3f or .2e.
• Color: Use the keywords color or no-color. Currently, color will use ANSI color codes, which are supported by most terminals and IDEs as well as Jupyter notebooks.
• Additional tensor information: The keywords shape, no-shape, dtype and no-dtype can be used to show or hide additional properties of the tensor.
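As a sketch of the one layout not shown above (output omitted here), the row layout prints all values on a single line:
print(f"{steps:row:no-color}")  # prints the values of steps as a single-line vector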
You can wrap existing tensors in ΦML tensors using wrap() or tensor(). While tensor() will convert the data to the default backend, wrap() will keep the data as-is. In either case, you need to specify the dimension names and types when wrapping a native tensor.
math.use('torch')
math.tensor([0, 1, 2], batch('examples'))
(0, 1, 2) along examplesᵇ int64
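To contrast the two (a minimal sketch; the variable names are ours), wrap() keeps the NumPy array while tensor() converts it to the current default backend:
import numpy as np
np_data = np.array([0, 1, 2])
wrapped = wrap(np_data, batch('examples'))      # keeps the NumPy array as-is
converted = tensor(np_data, batch('examples'))  # converted to the default backend, here PyTorch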
To unwrap a tensor, you can use tensor.native(), or math.reshaped_native() for more control over the result shape. In both cases, the requested dimension order needs to be specified.
data.native('examples,x,y')
array([[[0.40987378, 0.01447839, 0.2915136 ],
        [0.31971365, 0.81069285, 0.05822796],
        [0.24058233, 0.85378855, 0.9162776 ],
        [0.9765415 , 0.26176107, 0.8889802 ]],

       [[0.6713833 , 0.15631448, 0.37904504],
        [0.7011193 , 0.31697702, 0.61510456],
        [0.27419004, 0.41507253, 0.16475175],
        [0.20375979, 0.08951265, 0.92790353]]], dtype=float32)
Similarly, you can get the NumPy representation:
data.numpy('examples,x,y')
array([[[0.40987378, 0.01447839, 0.2915136 ],
        [0.31971365, 0.81069285, 0.05822796],
        [0.24058233, 0.85378855, 0.9162776 ],
        [0.9765415 , 0.26176107, 0.8889802 ]],

       [[0.6713833 , 0.15631448, 0.37904504],
        [0.7011193 , 0.31697702, 0.61510456],
        [0.27419004, 0.41507253, 0.16475175],
        [0.20375979, 0.08951265, 0.92790353]]], dtype=float32)
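As a hedged sketch of the extra control reshaped_native() offers (we assume here that groups may be given as Shape objects, each of which is packed into a single native axis):
math.reshaped_native(data, [data.shape['examples'], data.shape.spatial])  # assumed to pack x and y into one axis, giving native shape (2, 12)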
Check out the examples to see how using ΦML's tensors differs from other libraries.
Learn more about the dimension types and their advantages.
ΦML unifies data types as well and lets you set the floating point precision globally or by context.
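For instance (a sketch based on the math API; we assume set_global_precision and the precision context manager as the entry points), you can switch between 32- and 64-bit floats globally or per block:
math.set_global_precision(64)  # new float tensors default to float64
with math.precision(32):       # temporarily use float32 within this block
    x = math.random_uniform(spatial(x=4))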
While the dimensionality of neural networks must be specified during network creation, this is not the case for math functions. These automatically adapt to the number of spatial dimensions of the data that is passed in.
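A small sketch of this: the same call works for 1D and 2D data, because fft, like other spatial functions, acts on whatever spatial dims are present.
math.fft(math.random_uniform(spatial(x=8)))       # 1D FFT along x
math.fft(math.random_uniform(spatial(x=8, y=8)))  # 2D FFT along x and y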