%%capture
!pip install phiml
from phiml import math, backend
ΦML abstracts compute devices, such as CPUs, GPUs and TPUs, through the ComputeDevice class.
You can obtain a list of available devices using Backend.list_devices().
BACKEND = math.use('torch')
BACKEND.list_devices('CPU')
[torch device 'CPU' (CPU 'cpu') | 15981 MB | 4 processors | ]
Compute devices are bound to the backend that created them, i.e. the CPU device returned by PyTorch is not the same object as the CPU device returned by NumPy, even though both refer to the same physical processor.
from phiml.backend import NUMPY
NUMPY.list_devices('CPU')
[numpy device 'CPU' (CPU 'CPU') | 15981 MB | 4 processors | ]
We can set the default device per backend, either by passing a ComputeDevice reference or a device type such as 'CPU' or 'GPU'.
BACKEND.set_default_device(BACKEND.list_devices()[0])
True
BACKEND.set_default_device('GPU')
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/phiml/backend/_backend.py:249: RuntimeWarning: torch: Cannot select 'GPU' because no device of this type is available.
  warnings.warn(f"{self.name}: Cannot select '{device}' because no device of this type is available.", RuntimeWarning)
False
Tensors created by that backend will be allocated on that device from now on. Already allocated tensors are left untouched.
Combining tensors allocated on different devices may lead to errors!
You can move any tensor to a different compute device using math.to_device().
tensor = math.zeros()
tensor.device
torch device 'CPU' (CPU 'cpu') | 15981 MB | 4 processors |
math.to_device(tensor, 'CPU')
0.0
math.to_device() also supports pytrees and data classes that contain tensors.
ΦML also supports moving tensors to different backend libraries without copying them.