Data Types in ΦML¶

Colab   •   🌐 ΦML   •   📖 Documentation   •   🔗 API   •   ▶ Videos   •   Examples

Need to differentiate but your input is an int tensor? Need an int64 tensor but got int32? Need a tensor but got an ndarray? Want an ndarray but your tensor is bound in a computational graph on the GPU? Worry no longer, for ΦML has you covered!

In [1]:
%%capture
!pip install phiml

Floating Point Precision¶

A major difference between ΦML and its backends is the handling of floating-point (FP) precision. NumPy automatically promotes arrays to the highest precision involved in an operation, while other ML libraries raise errors when data types do not match. Instead, ΦML lets you set the FP precision globally using set_global_precision(64) or locally via a context manager, and automatically casts tensors to that precision when needed. The default is FP32 (single precision). Let's set the global precision to FP64 (double precision)!

In [2]:
from phiml import math

math.set_global_precision(64)  # double precision

From now on, all created float tensors will be of type float64.

In [3]:
math.zeros().dtype
Out[3]:
float64

We can run parts of our code with a different precision by executing them within a precision block:

In [4]:
with math.precision(32):
    print(math.zeros().dtype)
float32

ΦML automatically casts tensors to the current precision level during operations. Say we have a float64 tensor but want to run 32-bit operations.

In [5]:
tensor64 = math.ones()
with math.precision(32):
    print(math.sin(tensor64).dtype)
float32

Here, the tensor was cast to float32 before the sin function was applied. To explicitly cast a tensor to the current precision, use math.to_float().

This system precludes precision conflicts, so you will never accidentally execute code at the wrong precision!

Specifying Data Types¶

ΦML provides a unified data type class, DType. However, you only need to specify the data type when creating a new Tensor from scratch. When wrapping an existing tensor, the data type is kept as-is.

In [6]:
from phiml.math import DType

math.zeros(dtype=DType(float, 16))
Out[6]:
float16 0.0

Note that there are no global variables for data types. Simply specify the kind and bit length in the DType constructor. In fact, the explicit DType() constructor call is not necessary; you can also pass the kind and bit length as a tuple.

In [7]:
math.zeros(dtype=(float, 16))
Out[7]:
float16 0.0

In most cases, you want the bit-length to match the current floating point precision. Then, just specify the kind of data type (bool, int, float, or complex).

In [8]:
math.zeros(dtype=int)
Out[8]:
0
In [9]:
math.zeros(dtype=complex)
Out[9]:
complex128 0j
In [10]:
math.zeros(dtype=bool)
Out[10]:
False

Further Reading¶

Advantages of Precision Management with examples.
