Module phi.math

Vectorized operations, tensors with named dimensions.

This package provides a common interface for tensor operations. It internally uses NumPy, TensorFlow or PyTorch.

Main classes: Tensor, Shape, DType, Extrapolation.

The provided operations are not implemented directly. Instead, they delegate the actual computation to either NumPy, TensorFlow or PyTorch, depending on the configuration. This allows the user to write simulation code once and have it run with various computation backends.

See the documentation at https://tum-pbs.github.io/PhiFlow/Math.html
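
A minimal usage sketch (dimension names here are arbitrary):

from phi import math
from phi.math import batch, spatial

v = math.random_uniform(batch(examples=2), spatial(x=8, y=8))  # created with the default backend
lap = math.laplace(v)         # computation is delegated to the active backend
total = math.sum(lap, 'x,y')  # reduce the named spatial dimensions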

Expand source code
"""
Vectorized operations, tensors with named dimensions.

This package provides a common interface for tensor operations.
It internally uses NumPy, TensorFlow or PyTorch.

Main classes: `Tensor`, `Shape`, `DType`, `Extrapolation`.

The provided operations are not implemented directly.
Instead, they delegate the actual computation to either NumPy, TensorFlow or PyTorch, depending on the configuration.
This allows the user to write simulation code once and have it run with various computation backends.

See the documentation at https://tum-pbs.github.io/PhiFlow/Math.html
"""

from .backend._dtype import DType
from .backend import NUMPY, precision, set_global_precision, get_precision

from ._config import GLOBAL_AXIS_ORDER
from ._shape import Shape, EMPTY_SHAPE, spatial, channel, batch, instance, merge_shapes, concat_shapes
from ._tensors import wrap, tensor, Tensor, TensorDim, TensorLike, Dict
from .extrapolation import Extrapolation
from ._ops import (
    choose_backend_t as choose_backend, all_available, convert, seed,
    native, numpy, reshaped_native, reshaped_tensor, copy, native_call,
    print_ as print,
    map_ as map,
    zeros, ones, fftfreq, random_normal, random_uniform, meshgrid, linspace, arange as range, range_tensor,  # creation operators (use default backend)
    zeros_like, ones_like,
    stack, unstack, concat,
    pad,
    pack_dims, unpack_dims, rename_dims, flatten, expand, transpose,  # reshape operations
    divide_no_nan,
    where, nonzero,
    sum_ as sum, mean, std, prod, max_ as max, min_ as min, any_ as any, all_ as all, quantile, median,  # reduce
    dot,
    abs_ as abs, sign,
    round_ as round, ceil, floor,
    maximum, minimum, clip,
    sqrt, exp, sin, cos, tan, log, log2, log10,
    to_float, to_int32, to_int64, to_complex, imag, real, conjugate,
    boolean_mask,
    isfinite,
    closest_grid_values, grid_sample, scatter, gather,
    fft, ifft, convolve, cumulative_sum,
    dtype, cast,
    close, assert_close,
    record_gradients, gradients, stop_gradient
)
from ._nd import (
    shift,
    vec_abs, vec_squared, vec_normalize, cross_product,
    normalize_to,
    l1_loss, l2_loss, frequency_loss,
    spatial_gradient, laplace,
    fourier_laplace, fourier_poisson, abs_square,
    downsample2x, upsample2x, sample_subgrid,
    extrapolate_valid_values,
)
from ._functional import (
    LinearFunction, jit_compile_linear, jit_compile,
    functional_gradient, custom_gradient, print_gradient,
    solve_linear, solve_nonlinear, minimize, Solve, SolveInfo, ConvergenceException, NotConverged, Diverged, SolveTape,
)


PI = 3.14159265358979323846
"""Value of π to double precision """
pi = PI

NUMPY = NUMPY  # to show up in pdoc
"""Default backend for NumPy arrays and SciPy objects."""

__all__ = [key for key in globals().keys() if not key.startswith('_')]

__pdoc__ = {
    'Extrapolation.__init__': False,
    'Shape.__init__': False,
    'SolveInfo.__init__': False,
    'TensorDim.__init__': False,
    'ConvergenceException.__init__': False,
    'Diverged.__init__': False,
    'NotConverged.__init__': False,
    'LinearFunction.__init__': False,
    'TensorLike.__variable_attrs__': True,
    'TensorLike.__value_attrs__': True,
    'TensorLike.__with_attrs__': True,
}

Sub-modules

phi.math.backend

Low-level library wrappers for delegating vector operations.

phi.math.extrapolation

Extrapolations are used for padding tensors and sampling coordinates lying outside the tensor bounds. Standard extrapolations are listed as global …

Global variables

var NUMPY

Default backend for NumPy arrays and SciPy objects.

var PI

Value of π to double precision

Functions

def abs(x) ‑> phi.math._tensors.Tensor

Computes ||x||₁. Complex x results in float values of matching precision.

Note: The gradient of this operation is undefined for x=0. TensorFlow and PyTorch return 0 while Jax returns 1.

Args

x
Tensor or TensorLike

Returns

Absolute value of x of same type as x.

Expand source code
def abs_(x) -> Tensor:
    """
    Computes *||x||<sub>1</sub>*.
    Complex `x` results in float values of matching precision.

    *Note*: The gradient of this operation is undefined for *x=0*.
    TensorFlow and PyTorch return 0 while Jax returns 1.

    Args:
        x: `Tensor` or `TensorLike`

    Returns:
        Absolute value of `x` of same type as `x`.
    """
    return _backend_op1(x, Backend.abs)
def abs_square(complex_values: phi.math._tensors.Tensor) ‑> phi.math._tensors.Tensor

Squared magnitude of complex values.

Args

complex_values
complex Tensor

Returns

Tensor
real valued magnitude squared
Expand source code
def abs_square(complex_values: Tensor) -> Tensor:
    """
    Squared magnitude of complex values.

    Args:
      complex_values: complex `Tensor`

    Returns:
        Tensor: real valued magnitude squared

    """
    return math.imag(complex_values) ** 2 + math.real(complex_values) ** 2
def all(boolean_tensor: phi.math._tensors.Tensor, dim: str = None) ‑> phi.math._tensors.Tensor

Tests whether all entries of boolean_tensor are True along the specified dimensions.

Args

boolean_tensor
Tensor or list / tuple of Tensors.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to reduce the sequence of Tensors

Returns

Tensor without the reduced dimensions.

Expand source code
def all_(boolean_tensor: Tensor or list or tuple, dim: str or int or tuple or list or None or Shape = None) -> Tensor:
    """
    Tests whether all entries of `boolean_tensor` are `True` along the specified dimensions.

    Args:
        boolean_tensor: `Tensor` or `list` / `tuple` of Tensors.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to reduce the sequence of Tensors

    Returns:
        `Tensor` without the reduced dimensions.
    """
    return _reduce(boolean_tensor, dim, native_function=lambda backend, native, dim: backend.all(native, dim))
def all_available(*values: phi.math._tensors.Tensor) ‑> bool

Tests if the values of all given tensors are known and can be read at this point. Tracing placeholders are considered not available, even when they hold example values.

Tensors are not available during jit_compile(), jit_compile_linear() or while using TensorFlow's legacy graph mode.

Tensors are typically available when the backend operates in eager mode and is not currently tracing a function.

This can be used instead of the native checks

  • PyTorch: torch._C._get_tracing_state()
  • TensorFlow: tf.executing_eagerly()
  • Jax: isinstance(x, jax.core.Tracer)

Args

values
Tensors to check.

Returns

True if no value is a placeholder or being traced, False otherwise.

Expand source code
def all_available(*values: Tensor) -> bool:
    """
    Tests if the values of all given tensors are known and can be read at this point.
    Tracing placeholders are considered not available, even when they hold example values.

    Tensors are not available during `jit_compile()`, `jit_compile_linear()` or while using TensorFlow's legacy graph mode.
    
    Tensors are typically available when the backend operates in eager mode and is not currently tracing a function.

    This can be used instead of the native checks

    * PyTorch: `torch._C._get_tracing_state()`
    * TensorFlow: `tf.executing_eagerly()`
    * Jax: `isinstance(x, jax.core.Tracer)`

    Args:
      values: Tensors to check.

    Returns:
        `True` if no value is a placeholder or being traced, `False` otherwise.
    """
    from phi.math._functional import is_tracer
    for value in values:
        if is_tracer(value):
            return False
        natives = value._natives()
        natives_available = [choose_backend(native).is_available(native) for native in natives]
        if not all(natives_available):
            return False
    return True
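A hedged usage sketch: all_available() lets the same code run both eagerly and while being traced.

from phi import math

def step(x):
    if math.all_available(x):  # False while x is being traced, e.g. inside jit_compile()
        print(x)               # concrete values can safely be read here
    return x * 2

step_jit = math.jit_compile(step)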
def any(boolean_tensor: phi.math._tensors.Tensor, dim: str = None) ‑> phi.math._tensors.Tensor

Tests whether any entry of boolean_tensor is True along the specified dimensions.

Args

boolean_tensor
Tensor or list / tuple of Tensors.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to reduce the sequence of Tensors

Returns

Tensor without the reduced dimensions.

Expand source code
def any_(boolean_tensor: Tensor or list or tuple, dim: str or int or tuple or list or None or Shape = None) -> Tensor:
    """
    Tests whether any entry of `boolean_tensor` is `True` along the specified dimensions.

    Args:
        boolean_tensor: `Tensor` or `list` / `tuple` of Tensors.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to reduce the sequence of Tensors

    Returns:
        `Tensor` without the reduced dimensions.
    """
    return _reduce(boolean_tensor, dim, native_function=lambda backend, native, dim: backend.any(native, dim))
def assert_close(*values, rel_tolerance: float = 1e-05, abs_tolerance: float = 0, msg: str = '', verbose: bool = True)

Checks that all given tensors have equal values within the specified tolerance. Raises an AssertionError if any tensor differs from the first beyond the tolerance.

Does not check that the shapes match as long as they can be broadcast to a common shape.

Args

values
Tensors or native tensors or numbers or sequences of numbers.
rel_tolerance
Relative tolerance.
abs_tolerance
Absolute tolerance.
msg
Optional error message.
verbose
Whether to print conflicting values.
Expand source code
def assert_close(*values,
                 rel_tolerance: float = 1e-5,
                 abs_tolerance: float = 0,
                 msg: str = "",
                 verbose: bool = True):
    """
    Checks that all given tensors have equal values within the specified tolerance.
    Raises an AssertionError if any tensor differs from the first beyond the tolerance.
    
    Does not check that the shapes match as long as they can be broadcast to a common shape.

    Args:
      values: Tensors or native tensors or numbers or sequences of numbers.
      rel_tolerance: Relative tolerance.
      abs_tolerance: Absolute tolerance.
      msg: Optional error message.
      verbose: Whether to print conflicting values.
    """
    if not values:
        return
    phi_tensors = [t for t in values if isinstance(t, Tensor)]
    if phi_tensors:
        values = [compatible_tensor(t, phi_tensors[0].shape)._simplify() for t in values]  # use Tensor to infer dimensions
        for other in values[1:]:
            _assert_close(values[0], other, rel_tolerance, abs_tolerance, msg, verbose)
    elif all(isinstance(v, TensorLike) for v in values):
        tree0, tensors0 = disassemble_tree(values[0])
        for value in values[1:]:
            tree, tensors_ = disassemble_tree(value)
            assert tree0 == tree, f"Tree structures do not match: {tree0} and {tree}"
            for t0, t in zip(tensors0, tensors_):
                _assert_close(t0, t, rel_tolerance, abs_tolerance, msg, verbose)
    else:
        np_values = [choose_backend(t).numpy(t) for t in values]
        for other in np_values[1:]:
            np.testing.assert_allclose(np_values[0], other, rel_tolerance, abs_tolerance, err_msg=msg, verbose=verbose)
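Illustrative usage; Tensors, native arrays and plain numbers may be mixed:

from phi import math
from phi.math import spatial

math.assert_close(math.wrap([1.0, 2.0], spatial(x=2)), [1.0, 2.0], abs_tolerance=1e-6)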
def batch(*args, **dims: int)

Returns the batch dimensions of an existing Shape or creates a new Shape with only batch dimensions.

Usage for filtering batch dimensions:

batch_dims = batch(shape)
batch_dims = batch(tensor)

Usage for creating a Shape with only batch dimensions:

batch_shape = batch('undef', batch=2)
# Out: (batch=2, undef=None)

Here, the dimension undef is created with an undefined size of None. Undefined sizes are automatically filled in by tensor(), wrap(), stack() and concat().

To create a shape with multiple types, use merge_shapes(), concat_shapes() or the syntax shape1 & shape2.

See Also: channel(), spatial(), instance()

Args

*args

Either

  • Shape or Tensor to filter or
  • Names of dimensions with undefined sizes as str.
**dims
Dimension sizes and names. Must be empty when used as a filter operation.

Returns

Shape containing only dimensions of type batch.

Expand source code
def batch(*args, **dims: int):
    """
    Returns the batch dimensions of an existing `Shape` or creates a new `Shape` with only batch dimensions.

    Usage for filtering batch dimensions:
    ```python
    batch_dims = batch(shape)
    batch_dims = batch(tensor)
    ```

    Usage for creating a `Shape` with only batch dimensions:
    ```python
    batch_shape = batch('undef', batch=2)
    # Out: (batch=2, undef=None)
    ```
    Here, the dimension `undef` is created with an undefined size of `None`.
    Undefined sizes are automatically filled in by `tensor`, `wrap`, `stack` and `concat`.

    To create a shape with multiple types, use `merge_shapes()`, `concat_shapes()` or the syntax `shape1 & shape2`.

    See Also:
        `channel`, `spatial`, `instance`

    Args:
        *args: Either

            * `Shape` or `Tensor` to filter or
            * Names of dimensions with undefined sizes as `str`.

        **dims: Dimension sizes and names. Must be empty when used as a filter operation.

    Returns:
        `Shape` containing only dimensions of type batch.
    """
    from phi.math import Tensor
    if all(isinstance(arg, str) for arg in args) or dims:
        for arg in args:
            parts = [s.strip() for s in arg.split(',')]
            for dim in parts:
                if dim not in dims:
                    dims[dim] = None
        return math.Shape(dims.values(), dims.keys(), [BATCH_DIM] * len(dims))
    elif len(args) == 1 and isinstance(args[0], Shape):
        return args[0].batch
    elif len(args) == 1 and isinstance(args[0], Tensor):
        return args[0].shape.batch
    else:
        raise AssertionError(f"batch() must be called either as a selector batch(Shape) or batch(Tensor) or as a constructor batch(*names, **dims). Got *args={args}, **dims={dims}")
def boolean_mask(x: phi.math._tensors.Tensor, dim: str, mask: phi.math._tensors.Tensor)

Discards values x.dim[i] where mask.dim[i]=False. All dimensions of mask that are not dim are treated as batch dimensions.

Alternative syntax: x.dim[mask].

Implementations:

  • NumPy: Slicing
  • PyTorch: masked_select
  • TensorFlow: tf.boolean_mask
  • Jax: Slicing

Args

x
Tensor of values.
dim
Dimension of x along which to discard slices.
mask
Boolean Tensor marking which values to keep. Must have the dimension dim matching x.

Returns

Selected values of x as Tensor with dimensions from x and mask.

Expand source code
def boolean_mask(x: Tensor, dim: str, mask: Tensor):
    """
    Discards values `x.dim[i]` where `mask.dim[i]=False`.
    All dimensions of `mask` that are not `dim` are treated as batch dimensions.

    Alternative syntax: `x.dim[mask]`.

    Implementations:

    * NumPy: Slicing
    * PyTorch: [`masked_select`](https://pytorch.org/docs/stable/generated/torch.masked_select.html)
    * TensorFlow: [`tf.boolean_mask`](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)
    * Jax: Slicing

    Args:
        x: `Tensor` of values.
        dim: Dimension of `x` along which to discard slices.
        mask: Boolean `Tensor` marking which values to keep. Must have the dimension `dim` matching `x`.

    Returns:
        Selected values of `x` as `Tensor` with dimensions from `x` and `mask`.
    """
    def uniform_boolean_mask(x: Tensor, mask_1d: Tensor):
        if dim in x.shape:
            x_native = x.native(x.shape.names)  # order does not matter
            mask_native = mask_1d.native()  # only has 1 dim
            backend = choose_backend(x_native, mask_native)
            result_native = backend.boolean_mask(x_native, mask_native, axis=x.shape.index(dim))
            new_shape = x.shape.with_sizes(backend.staticshape(result_native))
            return NativeTensor(result_native, new_shape)
        else:
            total = int(sum_(to_int64(mask_1d), mask_1d.shape))
            new_shape = mask_1d.shape.with_sizes([total])
            return expand(x, new_shape)

    return broadcast_op(uniform_boolean_mask, [x, mask], iter_dims=mask.shape.without(dim))
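A minimal sketch, assuming an instance dimension named 'points':

from phi import math
from phi.math import instance

x = math.wrap([1, 2, 3, 4], instance(points=4))
mask = math.wrap([True, False, True, False], instance(points=4))
kept = math.boolean_mask(x, 'points', mask)  # keeps the values 1 and 3
same = x.points[mask]                        # equivalent slicing syntax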
def cast(x: phi.math._tensors.Tensor, dtype: phi.math.backend._dtype.DType) ‑> phi.math._tensors.Tensor

Casts x to a different data type.

Implementations:

  • NumPy: x.astype()
  • PyTorch: x.to()
  • TensorFlow: tf.cast
  • Jax: jax.numpy.array

See Also: to_float(), to_int32(), to_int64(), to_complex().

Args

x
Tensor
dtype
New data type as DType, e.g. DType(int, 16).

Returns

Tensor with data type dtype

Expand source code
def cast(x: Tensor, dtype: DType) -> Tensor:
    """
    Casts `x` to a different data type.

    Implementations:

    * NumPy: [`x.astype()`](numpy.ndarray.astype)
    * PyTorch: [`x.to()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.to)
    * TensorFlow: [`tf.cast`](https://www.tensorflow.org/api_docs/python/tf/cast)
    * Jax: [`jax.numpy.array`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.array.html)

    See Also:
        `to_float`, `to_int32`, `to_int64`, `to_complex`.

    Args:
        x: `Tensor`
        dtype: New data type as `phi.math.DType`, e.g. `DType(int, 16)`.

    Returns:
        `Tensor` with data type `dtype`
    """
    return x._op1(lambda native: choose_backend(native).cast(native, dtype=dtype))
def ceil(x) ‑> phi.math._tensors.Tensor

Computes ⌈x⌉ of the Tensor or TensorLike x.

Expand source code
def ceil(x) -> Tensor:
    """ Computes *⌈x⌉* of the `Tensor` or `TensorLike` `x`. """
    return _backend_op1(x, Backend.ceil)
def channel(*args, **dims: int)

Returns the channel dimensions of an existing Shape or creates a new Shape with only channel dimensions.

Usage for filtering channel dimensions:

channel_dims = channel(shape)
channel_dims = channel(tensor)

Usage for creating a Shape with only channel dimensions:

channel_shape = channel('undef', vector=2)
# Out: (vector=2, undef=None)

Here, the dimension undef is created with an undefined size of None. Undefined sizes are automatically filled in by tensor(), wrap(), stack() and concat().

To create a shape with multiple types, use merge_shapes(), concat_shapes() or the syntax shape1 & shape2.

See Also: spatial(), batch(), instance()

Args

*args

Either

  • Shape or Tensor to filter or
  • Names of dimensions with undefined sizes as str.
**dims
Dimension sizes and names. Must be empty when used as a filter operation.

Returns

Shape containing only dimensions of type channel.

Expand source code
def channel(*args, **dims: int):
    """
    Returns the channel dimensions of an existing `Shape` or creates a new `Shape` with only channel dimensions.

    Usage for filtering channel dimensions:
    ```python
    channel_dims = channel(shape)
    channel_dims = channel(tensor)
    ```

    Usage for creating a `Shape` with only channel dimensions:
    ```python
    channel_shape = channel('undef', vector=2)
    # Out: (vector=2, undef=None)
    ```
    Here, the dimension `undef` is created with an undefined size of `None`.
    Undefined sizes are automatically filled in by `tensor`, `wrap`, `stack` and `concat`.

    To create a shape with multiple types, use `merge_shapes()`, `concat_shapes()` or the syntax `shape1 & shape2`.

    See Also:
        `spatial`, `batch`, `instance`

    Args:
        *args: Either

            * `Shape` or `Tensor` to filter or
            * Names of dimensions with undefined sizes as `str`.

        **dims: Dimension sizes and names. Must be empty when used as a filter operation.

    Returns:
        `Shape` containing only dimensions of type channel.
    """
    from ._tensors import Tensor
    if all(isinstance(arg, str) for arg in args) or dims:
        for arg in args:
            parts = [s.strip() for s in arg.split(',')]
            for dim in parts:
                if dim not in dims:
                    dims[dim] = None
        return math.Shape(dims.values(), dims.keys(), [CHANNEL_DIM] * len(dims))
    elif len(args) == 1 and isinstance(args[0], Shape):
        return args[0].channel
    elif len(args) == 1 and isinstance(args[0], Tensor):
        return args[0].shape.channel
    else:
        raise AssertionError(f"channel() must be called either as a selector channel(Shape) or channel(Tensor) or as a constructor channel(*names, **dims). Got *args={args}, **dims={dims}")
def choose_backend(*values, prefer_default=False) ‑> phi.math.backend._backend.Backend

Choose backend for given Tensor or native tensor values. Backends need to be registered to be available, e.g. via the global import phi.<backend> or detect_backends().

Args

*values
Sequence of Tensors, native tensors or constants.
prefer_default
Whether to always select the default backend if it can work with values, see default_backend().

Returns

The selected Backend

Expand source code
def choose_backend_t(*values, prefer_default=False) -> Backend:
    """
    Choose backend for given `Tensor` or native tensor values.
    Backends need to be registered to be available, e.g. via the global import `phi.<backend>` or `phi.detect_backends()`.

    Args:
        *values: Sequence of `Tensor`s, native tensors or constants.
        prefer_default: Whether to always select the default backend if it can work with `values`, see `default_backend()`.

    Returns:
        The selected `phi.math.backend.Backend`
    """
    natives = sum([v._natives() if isinstance(v, Tensor) else (v,) for v in values], ())
    return choose_backend(*natives, prefer_default=prefer_default)
def clip(x: phi.math._tensors.Tensor, lower_limit: float, upper_limit: float)

Limits the values of the Tensor x to lie between lower_limit and upper_limit (inclusive).

Expand source code
def clip(x: Tensor, lower_limit: float or Tensor, upper_limit: float or Tensor):
    """ Limits the values of the `Tensor` `x` to lie between `lower_limit` and `upper_limit` (inclusive). """
    if isinstance(lower_limit, Number) and isinstance(upper_limit, Number):

        def clip_(x):
            return x._op1(lambda native: choose_backend(native).clip(native, lower_limit, upper_limit))

        return broadcast_op(clip_, [x])
    else:
        return maximum(lower_limit, minimum(x, upper_limit))
def close(*tensors, rel_tolerance=1e-05, abs_tolerance=0) ‑> bool

Checks whether all tensors have equal values within the specified tolerance.

Does not check that the shapes exactly match. Tensors with different shapes are reshaped before comparing.

Args

*tensors
Tensor or tensor-like (constant) each
rel_tolerance
relative tolerance (Default value = 1e-5)
abs_tolerance
absolute tolerance (Default value = 0)

Returns

Whether all given tensors are equal to the first tensor within the specified tolerance.

Expand source code
def close(*tensors, rel_tolerance=1e-5, abs_tolerance=0) -> bool:
    """
    Checks whether all tensors have equal values within the specified tolerance.
    
    Does not check that the shapes exactly match.
    Tensors with different shapes are reshaped before comparing.

    Args:
        *tensors: `Tensor` or tensor-like (constant) each
        rel_tolerance: relative tolerance (Default value = 1e-5)
        abs_tolerance: absolute tolerance (Default value = 0)

    Returns:
        Whether all given tensors are equal to the first tensor within the specified tolerance.
    """
    tensors = [wrap(t) for t in tensors]
    for other in tensors[1:]:
        if not _close(tensors[0], other, rel_tolerance=rel_tolerance, abs_tolerance=abs_tolerance):
            return False
    return True
def closest_grid_values(grid: phi.math._tensors.Tensor, coordinates: phi.math._tensors.Tensor, extrap: e_.Extrapolation, stack_dim_prefix='closest_')

Finds the neighboring grid points in all spatial directions and returns their values. The result will have 2^d values for each vector in coordinates in d dimensions.

Args

grid
grid data. The grid is spanned by the spatial dimensions of the tensor
coordinates
tensor with 1 channel dimension holding vectors pointing to locations in grid index space
extrap
grid extrapolation
stack_dim_prefix
For each spatial dimension dim, stacks lower and upper closest values along dimension stack_dim_prefix+dim.

Returns

Tensor of shape (batch, coord_spatial, grid_spatial=(2, 2,…), grid_channel)

Expand source code
def closest_grid_values(grid: Tensor,
                        coordinates: Tensor,
                        extrap: 'e_.Extrapolation',
                        stack_dim_prefix='closest_'):
    """
    Finds the neighboring grid points in all spatial directions and returns their values.
    The result will have 2^d values for each vector in coordinates in d dimensions.

    Args:
      grid: grid data. The grid is spanned by the spatial dimensions of the tensor
      coordinates: tensor with 1 channel dimension holding vectors pointing to locations in grid index space
      extrap: grid extrapolation
      stack_dim_prefix: For each spatial dimension `dim`, stacks lower and upper closest values along dimension `stack_dim_prefix+dim`.

    Returns:
      Tensor of shape (batch, coord_spatial, grid_spatial=(2, 2,...), grid_channel)

    """
    return broadcast_op(functools.partial(_closest_grid_values, extrap=extrap, stack_dim_prefix=stack_dim_prefix), [grid, coordinates])
def concat(values: tuple, dim: phi.math._shape.Shape) ‑> phi.math._tensors.Tensor

Concatenates a sequence of tensors along one dimension. The shapes of all values must be equal, except for the size of the concat dimension.

Args

values
Tensors to concatenate
dim
Concatenation dimension, must be present in all values. The size along dim is determined from values and can be set to undefined (None).

Returns

Concatenated Tensor

Expand source code
def concat(values: tuple or list, dim: Shape) -> Tensor:
    """
    Concatenates a sequence of tensors along one dimension.
    The shapes of all values must be equal, except for the size of the concat dimension.

    Args:
        values: Tensors to concatenate
        dim: Concatenation dimension, must be present in all `values`.
            The size along `dim` is determined from `values` and can be set to undefined (`None`).

    Returns:
        Concatenated `Tensor`
    """
    assert len(values) > 0, "concat() got empty sequence"
    assert isinstance(dim, Shape) and dim.rank == 1, f"dim must be a single-dimension Shape but got '{dim}' of type {type(dim)}"
    broadcast_shape = values[0].shape
    natives = [v.native(order=broadcast_shape.names) for v in values]
    backend = choose_backend(*natives)
    concatenated = backend.concat(natives, broadcast_shape.index(dim))
    return NativeTensor(concatenated, broadcast_shape.with_sizes(backend.staticshape(concatenated)))
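A minimal sketch; only the concat dimension may differ in size:

from phi import math
from phi.math import instance

a = math.wrap([1, 2], instance(points=2))
b = math.wrap([3, 4, 5], instance(points=3))
c = math.concat([a, b], instance('points'))  # instance dimension of size 5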
def concat_shapes(*shapes: phi.math._shape.Shape)

Creates a Shape listing the dimensions of all shapes in the given order.

See Also: merge_shapes().

Args

*shapes
Shapes to concatenate. No two shapes must contain a dimension with the same name.

Returns

Combined Shape.

Expand source code
def concat_shapes(*shapes: Shape):
    """
    Creates a `Shape` listing the dimensions of all `shapes` in the given order.

    See Also:
        `merge_shapes()`.

    Args:
        *shapes: Shapes to concatenate. No two shapes must contain a dimension with the same name.

    Returns:
        Combined `Shape`.
    """
    names = sum([s.names for s in shapes], ())
    if len(set(names)) != len(names):
        raise IncompatibleShapes(f"Cannot concatenate shapes {list(shapes)}. Duplicate dimension names are not allowed.")
    sizes = sum([s.sizes for s in shapes], ())
    types = sum([s.types for s in shapes], ())
    return Shape(sizes, names, types)
def conjugate(x) ‑> phi.math._tensors.Tensor

See Also: imag(), real().

Args

x
Real or complex Tensor or TensorLike or native tensor.

Returns

Complex conjugate of x if x is complex, else x.

Expand source code
def conjugate(x) -> Tensor:
    """
    See Also:
        `imag()`, `real()`.

    Args:
        x: Real or complex `Tensor` or `TensorLike` or native tensor.

    Returns:
        Complex conjugate of `x` if `x` is complex, else `x`.
    """
    return _backend_op1(x, Backend.conj)
def convert(x, backend: phi.math.backend._backend.Backend = None, use_dlpack=True)

Convert the native representation of a Tensor or TensorLike to the native format of backend.

Warning: This operation breaks the automatic differentiation chain.

See Also: phi.math.backend.convert().

Args

x
Tensor to convert. If x is a TensorLike, its variable attributes are converted.
backend
Target backend. If None, uses the current default backend, see default_backend().

Returns

Tensor with native representation belonging to backend.

Expand source code
def convert(x, backend: Backend = None, use_dlpack=True):
    """
    Convert the native representation of a `Tensor` or `TensorLike` to the native format of `backend`.

    *Warning*: This operation breaks the automatic differentiation chain.

    See Also:
        `phi.math.backend.convert()`.

    Args:
        x: `Tensor` to convert. If `x` is a `TensorLike`, its variable attributes are converted.
        backend: Target backend. If `None`, uses the current default backend, see `phi.math.backend.default_backend()`.

    Returns:
        `Tensor` with native representation belonging to `backend`.
    """
    if isinstance(x, Tensor):
        return x._op1(lambda native: b_convert(native, backend, use_dlpack=use_dlpack))
    elif isinstance(x, TensorLike):
        return copy_with(x, **{a: convert(getattr(x, a), backend, use_dlpack=use_dlpack) for a in variable_attributes(x)})
    else:
        return choose_backend(x).as_tensor(x)
def convolve(value: phi.math._tensors.Tensor, kernel: phi.math._tensors.Tensor, extrapolation: e_.Extrapolation = None) ‑> phi.math._tensors.Tensor

Computes the convolution of value and kernel along the spatial axes of kernel.

The channel dimensions of value are reduced against the equally named dimensions of kernel. The result will have the non-reduced channel dimensions of kernel.

Args

value
Tensor whose shape includes all spatial dimensions of kernel.
kernel
Tensor used as convolutional filter.
extrapolation
If not None, pads value so that the result has the same shape as value.

Returns

Tensor

Expand source code
def convolve(value: Tensor,
             kernel: Tensor,
             extrapolation: 'e_.Extrapolation' = None) -> Tensor:
    """
    Computes the convolution of `value` and `kernel` along the spatial axes of `kernel`.

    The channel dimensions of `value` are reduced against the equally named dimensions of `kernel`.
    The result will have the non-reduced channel dimensions of `kernel`.

    Args:
        value: `Tensor` whose shape includes all spatial dimensions of `kernel`.
        kernel: `Tensor` used as convolutional filter.
        extrapolation: If not None, pads `value` so that the result has the same shape as `value`.

    Returns:
        `Tensor`
    """
    assert all(dim in value.shape for dim in kernel.shape.spatial.names), f"Value must have all spatial dimensions of kernel but got value {value} kernel {kernel}"
    conv_shape = kernel.shape.spatial
    in_channels = value.shape.channel
    out_channels = kernel.shape.channel.without(in_channels)
    batch = value.shape.batch & kernel.shape.batch
    if extrapolation is not None and extrapolation != e_.ZERO:
        value = pad(value, {dim: (kernel.shape.get_size(dim) // 2, (kernel.shape.get_size(dim) - 1) // 2)
                            for dim in conv_shape.name}, extrapolation)
    native_kernel = reshaped_native(kernel, (batch, out_channels, in_channels, *conv_shape.names), force_expand=in_channels)
    native_value = reshaped_native(value, (batch, in_channels, *conv_shape.names), force_expand=batch)
    backend = choose_backend(native_value, native_kernel)
    native_result = backend.conv(native_value, native_kernel, zero_padding=extrapolation == e_.ZERO)
    result = reshaped_tensor(native_result, (batch, out_channels, *conv_shape))
    return result
def copy(value: phi.math._tensors.Tensor)

Copies the data buffer and encapsulating Tensor object.

Args

value
Tensor to be copied.

Returns

Copy of value.

Expand source code
def copy(value: Tensor):
    """
    Copies the data buffer and encapsulating `Tensor` object.

    Args:
        value: `Tensor` to be copied.

    Returns:
        Copy of `value`.
    """
    if value._is_tracer:
        warnings.warn("Tracing tensors cannot be copied.")
        return value
    return value._op1(lambda native: choose_backend(native).copy(native))
def cos(x) ‑> phi.math._tensors.Tensor

Computes cos(x) of the Tensor or TensorLike x.

Expand source code
def cos(x) -> Tensor:
    """ Computes *cos(x)* of the `Tensor` or `TensorLike` `x`. """
    return _backend_op1(x, Backend.cos)
def cross_product(vec1: phi.math._tensors.Tensor, vec2: phi.math._tensors.Tensor) ‑> phi.math._tensors.Tensor

Computes the cross product of two vectors in 2D.

Args

vec1
Tensor with a single channel dimension called 'vector'
vec2
Tensor with a single channel dimension called 'vector'

Returns

Tensor

Expand source code
def cross_product(vec1: Tensor, vec2: Tensor) -> Tensor:
    """
    Computes the cross product of two vectors in 2D.

    Args:
        vec1: `Tensor` with a single channel dimension called `'vector'`
        vec2: `Tensor` with a single channel dimension called `'vector'`

    Returns:
        `Tensor`
    """
    vec1 = math.tensor(vec1)
    vec2 = math.tensor(vec2)
    spatial_rank = vec1.vector.size if 'vector' in vec1.shape else vec2.vector.size
    if spatial_rank == 2:  # Curl in 2D
        assert vec2.vector.exists
        if vec1.vector.exists:
            v1_x, v1_y = vec1.vector.unstack()
            v2_x, v2_y = vec2.vector.unstack()
            if GLOBAL_AXIS_ORDER.is_x_first:
                return v1_x * v2_y - v1_y * v2_x
            else:
                return - v1_x * v2_y + v1_y * v2_x
        else:
            v2_x, v2_y = vec2.vector.unstack()
            if GLOBAL_AXIS_ORDER.is_x_first:
                return vec1 * math.stack([-v2_y, v2_x], channel('vector'))
            else:
                return vec1 * math.stack([v2_y, -v2_x], channel('vector'))
    elif spatial_rank == 3:  # Curl in 3D
        raise NotImplementedError(f'spatial_rank={spatial_rank} not yet implemented')
    else:
        raise AssertionError(f'dims = {spatial_rank}. Vector product not available in > 3 dimensions')
def cumulative_sum(x: phi.math._tensors.Tensor, dim: str)

Performs a cumulative sum of x along dim.

Implementations:

  • NumPy: cumsum
  • PyTorch: cumsum
  • TensorFlow: cumsum
  • Jax: cumsum

Args

x
Tensor
dim
Dimension along which to sum, as str or Shape.

Returns

Tensor with the same shape as x.

Expand source code
def cumulative_sum(x: Tensor, dim: str or Shape):
    """
    Performs a cumulative sum of `x` along `dim`.

    Implementations:

    * NumPy: [`cumsum`](https://numpy.org/doc/stable/reference/generated/numpy.cumsum.html)
    * PyTorch: [`cumsum`](https://pytorch.org/docs/stable/generated/torch.cumsum.html)
    * TensorFlow: [`cumsum`](https://www.tensorflow.org/api_docs/python/tf/math/cumsum)
    * Jax: [`cumsum`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.cumsum.html)

    Args:
        x: `Tensor`
        dim: Dimension along which to sum, as `str` or `Shape`.

    Returns:
        `Tensor` with the same shape as `x`.
    """
    dim = parse_dim_order(dim)
    assert len(dim) == 1, f"dim must be a single dimension but got {dim}"
    native_x = x.native(x.shape)
    native_result = choose_backend(native_x).cumsum(native_x, x.shape.index(dim[0]))
    return NativeTensor(native_result, x.shape)
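Illustrative usage along a spatial dimension:

from phi import math
from phi.math import spatial

x = math.wrap([1, 2, 3], spatial(x=3))
math.cumulative_sum(x, 'x')  # (1, 3, 6) along x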
def custom_gradient(f: Callable, gradient: Callable)

Creates a function based on f that uses a custom gradient for the backpropagation pass.

Warning This method can lead to memory leaks if the gradient function is not called. Make sure to pass tensors without gradients if the gradient is not required, see stop_gradient().

Args

f
Forward function mapping Tensor arguments x to a single Tensor output or sequence of tensors y.
gradient
Function to compute the vector-Jacobian product for backpropagation. Will be called as gradient(*x, *y, *dy) -> *dx.

Returns

Function with similar signature and return values as f. However, the returned function does not support keyword arguments.

Expand source code
def custom_gradient(f: Callable, gradient: Callable):
    """
    Creates a function based on `f` that uses a custom gradient for the backpropagation pass.

    *Warning* This method can lead to memory leaks if the gradient function is not called.
    Make sure to pass tensors without gradients if the gradient is not required, see `stop_gradient()`.

    Args:
        f: Forward function mapping `Tensor` arguments `x` to a single `Tensor` output or sequence of tensors `y`.
        gradient: Function to compute the vector-Jacobian product for backpropagation. Will be called as `gradient(*x, *y, *dy) -> *dx`.

    Returns:
        Function with similar signature and return values as `f`. However, the returned function does not support keyword arguments.
    """
    return CustomGradientFunction(f, gradient)
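A hedged sketch of the gradient(*x, *y, *dy) -> *dx contract, substituting a clipped gradient for f(x) = x²:

from phi import math

def f(x):
    return x ** 2

def clipped_gradient(x, y, dy):             # receives inputs, outputs and output-gradients
    return (math.clip(2 * x, -1, 1) * dy,)  # one gradient per input, returned as a tuple

f_clipped = math.custom_gradient(f, clipped_gradient)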
def divide_no_nan(x: phi.math._tensors.Tensor, y: phi.math._tensors.Tensor)

Computes x/y with the Tensors x and y but returns 0 where y=0.

Expand source code
def divide_no_nan(x: Tensor, y: Tensor):
    """ Computes *x/y* with the `Tensor`s `x` and `y` but returns 0 where *y=0*. """
    return custom_op2(x, y,
                      l_operator=divide_no_nan,
                      l_native_function=lambda x_, y_: choose_backend(x_, y_).divide_no_nan(x_, y_),
                      r_operator=lambda y_, x_: divide_no_nan(x_, y_),
                      r_native_function=lambda y_, x_: choose_backend(x_, y_).divide_no_nan(x_, y_))
def dot(x: phi.math._tensors.Tensor, x_dims: str, y: phi.math._tensors.Tensor, y_dims: str) ‑> phi.math._tensors.Tensor

Computes the dot product along the specified dimensions. Contracts x_dims with y_dims by first multiplying the elements and then summing them up.

For one dimension, this is equal to matrix-matrix or matrix-vector multiplication.

The function replaces the traditional dot() / tensordot / matmul / einsum functions.

  • NumPy: numpy.tensordot, numpy.einsum
  • PyTorch: torch.tensordot, torch.einsum
  • TensorFlow: tf.tensordot, tf.einsum
  • Jax: jax.numpy.tensordot, jax.numpy.einsum

Args

x
First Tensor
x_dims
Dimensions of x to reduce against y
y
Second Tensor
y_dims
Dimensions of y to reduce against x.

Returns

Dot product as Tensor.

Expand source code
def dot(x: Tensor,
        x_dims: str or tuple or list or Shape or Callable or None,
        y: Tensor,
        y_dims: str or tuple or list or Shape or Callable or None) -> Tensor:
    """
    Computes the dot product along the specified dimensions.
    Contracts `x_dims` with `y_dims` by first multiplying the elements and then summing them up.

    For one dimension, this is equal to matrix-matrix or matrix-vector multiplication.

    The function replaces the traditional `dot` / `tensordot` / `matmul` / `einsum` functions.

    * NumPy: [`numpy.tensordot`](https://numpy.org/doc/stable/reference/generated/numpy.tensordot.html), [`numpy.einsum`](https://numpy.org/doc/stable/reference/generated/numpy.einsum.html)
    * PyTorch: [`torch.tensordot`](https://pytorch.org/docs/stable/generated/torch.tensordot.html#torch.tensordot), [`torch.einsum`](https://pytorch.org/docs/stable/generated/torch.einsum.html)
    * TensorFlow: [`tf.tensordot`](https://www.tensorflow.org/api_docs/python/tf/tensordot), [`tf.einsum`](https://www.tensorflow.org/api_docs/python/tf/einsum)
    * Jax: [`jax.numpy.tensordot`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.tensordot.html), [`jax.numpy.einsum`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.einsum.html)

    Args:
        x: First `Tensor`
        x_dims: Dimensions of `x` to reduce against `y`
        y: Second `Tensor`
        y_dims: Dimensions of `y` to reduce against `x`.

    Returns:
        Dot product as `Tensor`.
    """
    x_dims = _resolve_dims(x_dims, x.shape)
    y_dims = _resolve_dims(y_dims, y.shape)
    x_native = x.native(x.shape)
    y_native = y.native(y.shape)
    backend = choose_backend(x_native, y_native)
    remaining_shape_x = x.shape.without(x_dims)
    remaining_shape_y = y.shape.without(y_dims)
    if remaining_shape_y.only(remaining_shape_x).is_empty:  # no shared batch dimensions -> tensordot
        result_native = backend.tensordot(x_native, x.shape.indices(x_dims), y_native, y.shape.indices(y_dims))
    else:  # shared batch dimensions -> einsum
        REDUCE_LETTERS = list('ijklmn')
        KEEP_LETTERS = list('abcdefgh')
        x_letters = [(REDUCE_LETTERS if dim in x_dims else KEEP_LETTERS).pop(0) for dim in x.shape.names]
        x_letter_map = {dim: letter for dim, letter in zip(x.shape.names, x_letters)}
        REDUCE_LETTERS = list('ijklmn')
        y_letters = []
        for dim in y.shape.names:
            if dim in y_dims:
                y_letters.append(REDUCE_LETTERS.pop(0))
            else:
                if dim in x_letter_map:
                    y_letters.append(x_letter_map[dim])
                else:
                    y_letters.append(KEEP_LETTERS.pop(0))
        keep_letters = list('abcdefgh')[:-len(KEEP_LETTERS)]
        subscripts = f'{"".join(x_letters)},{"".join(y_letters)}->{"".join(keep_letters)}'
        result_native = backend.einsum(subscripts, x_native, y_native)
    result_shape = merge_shapes(x.shape.without(x_dims), y.shape.without(y_dims))  # don't check group match
    return NativeTensor(result_native, result_shape)
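An illustrative sketch: matrix-vector multiplication with named dimensions (names are arbitrary):

from phi import math
from phi.math import channel

matrix = math.wrap([[0., 1.], [1., 0.]], channel(rows=2, cols=2))
vector = math.wrap([2., 3.], channel(cols=2))
math.dot(matrix, 'cols', vector, 'cols')  # contracts 'cols', keeping 'rows'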
def downsample2x(grid: phi.math._tensors.Tensor, padding: Extrapolation = boundary, dims: tuple = None) ‑> phi.math._tensors.Tensor

Resamples a regular grid to half the number of spatial sample points per dimension. The grid values at the new points are determined via mean (linear interpolation).

Args

grid
full size grid
padding
grid extrapolation. Used to insert an additional value for odd spatial dims. (Default value = extrapolation.BOUNDARY)
dims
dims along which down-sampling is applied. If None, down-sample along all spatial dims. (Default value = None)

Returns

half-size grid

Expand source code
def downsample2x(grid: Tensor,
                 padding: Extrapolation = extrapolation.BOUNDARY,
                 dims: tuple or None = None) -> Tensor:
    """
    Resamples a regular grid to half the number of spatial sample points per dimension.
    The grid values at the new points are determined via mean (linear interpolation).

    Args:
      grid: full size grid
      padding: grid extrapolation. Used to insert an additional value for odd spatial dims.  (Default value = extrapolation.BOUNDARY)
      dims: dims along which down-sampling is applied. If None, down-sample along all spatial dims.  (Default value = None)

    Returns:
      half-size grid

    """
    dims = grid.shape.spatial.only(dims).names
    odd_dimensions = [dim for dim in dims if grid.shape.get_size(dim) % 2 != 0]
    grid = math.pad(grid, {dim: (0, 1) for dim in odd_dimensions}, padding)
    for dim in dims:
        grid = (grid[{dim: slice(1, None, 2)}] + grid[{dim: slice(0, None, 2)}]) / 2
    return grid
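A minimal sketch:

from phi import math
from phi.math import spatial

grid = math.random_uniform(spatial(x=8, y=8))
half = math.downsample2x(grid)  # spatial shape (x=4, y=4)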
def dtype(x) ‑> phi.math.backend._dtype.DType

Returns the data type of x.

Args

x
Tensor or native tensor.

Returns

DType

Expand source code
def dtype(x) -> DType:
    """
    Returns the data type of `x`.

    Args:
        x: `Tensor` or native tensor.

    Returns:
        `DType`
    """
    if isinstance(x, Tensor):
        return x.dtype
    else:
        return choose_backend(x).dtype(x)
def exp(x) ‑> phi.math._tensors.Tensor

Computes exp(x) of the Tensor or TensorLike x.

Expand source code
def exp(x) -> Tensor:
    """ Computes *exp(x)* of the `Tensor` or `TensorLike` `x`. """
    return _backend_op1(x, Backend.exp)
def expand(value: phi.math._tensors.Tensor, dims: phi.math._shape.Shape)

Adds dimensions to a Tensor by implicitly repeating the tensor values along the new dimensions. If value already contains some of the new dimensions, a size and type check is performed instead.

This function replaces the usual tile / repeat functions of NumPy, PyTorch, TensorFlow and Jax.

Additionally, it replaces the traditional unsqueeze / expand_dims functions.

Args

value
Tensor
dims
Dimensions to be added as Shape

Returns

Expanded Tensor.

Expand source code
def expand(value: Tensor, dims: Shape):
    """
    Adds dimensions to a `Tensor` by implicitly repeating the tensor values along the new dimensions.
    If `value` already contains some of the new dimensions, a size and type check is performed instead.

    This function replaces the usual `tile` / `repeat` functions of
    [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.tile.html),
    [PyTorch](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.repeat),
    [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/tile) and
    [Jax](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.tile.html).

    Additionally, it replaces the traditional `unsqueeze` / `expand_dims` functions.

    Args:
        value: `Tensor`
        dims: Dimensions to be added as `Shape`

    Returns:
        Expanded `Tensor`.
    """
    value = wrap(value)
    shape = value.shape
    for dim in reversed(dims):
        if dim in value.shape:
            assert dim.size is None or shape.get_size(dim.name) == dim.size, f"Cannot expand tensor with shape {shape} by dimension {dim}"
            assert shape.get_type(dim) == dim.type, f"Cannot expand tensor with shape {shape} by dimension {dim} of type '{dim.type}' because the dimension types do not match. Original type of '{dim.name}' was {shape.get_type(dim.name)}."
        else:
            if dim.size is None:
                dim = dim.with_sizes([1])
            shape = concat_shapes(dim, shape)
    return CollapsedTensor(value, shape)
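A minimal sketch: adding a batch dimension without copying data.

from phi import math
from phi.math import batch, spatial

x = math.zeros(spatial(x=4))
xb = math.expand(x, batch(b=10))  # shape (b=10, x=4); values are repeated implicitly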
def extrapolate_valid_values(values: phi.math._tensors.Tensor, valid: phi.math._tensors.Tensor, distance_cells: int = 1) ‑> Tuple[phi.math._tensors.Tensor, phi.math._tensors.Tensor]

Extrapolates the values of values which are marked by the nonzero values of valid for distance_cells steps in all spatial directions. Overlapping extrapolated values get averaged. Extrapolation also includes diagonals.

Examples (1-step extrapolation), x marks the values for extrapolation:

    200   000    111        004   00x    044        102   000    144
    010 + 0x0 => 111        000 + 000 => 234        004 + 00x => 234
    040   000    111        200   x00    220        200   x00    234

Args

values
Tensor which holds the values for extrapolation
valid
Tensor with same size as values, marking the values for extrapolation with nonzero entries
distance_cells
Number of extrapolation steps

Returns

values
Extrapolation result
valid
mask marking all valid values after extrapolation
Expand source code
def extrapolate_valid_values(values: Tensor, valid: Tensor, distance_cells: int = 1) -> Tuple[Tensor, Tensor]:
    """
    Extrapolates the values of `values` which are marked by the nonzero values of `valid` for `distance_cells` steps in all spatial directions.
    Overlapping extrapolated values get averaged. Extrapolation also includes diagonals.

    Examples (1-step extrapolation), x marks the values for extrapolation:
        200   000    111        004   00x    044        102   000    144
        010 + 0x0 => 111        000 + 000 => 234        004 + 00x => 234
        040   000    111        200   x00    220        200   x00    234

    Args:
        values: Tensor which holds the values for extrapolation
        valid: Tensor with same size as `values`, marking the values for extrapolation with nonzero entries
        distance_cells: Number of extrapolation steps

    Returns:
        values: Extrapolation result
        valid: mask marking all valid values after extrapolation
    """

    def binarize(x):
        return math.divide_no_nan(x, x)

    distance_cells = min(distance_cells, max(values.shape.sizes))
    for _ in range(distance_cells):
        valid = binarize(valid)
        valid_values = valid * values
        overlap = valid
        for dim in values.shape.spatial.names:
            values_l, values_r = shift(valid_values, (-1, 1), dims=dim, padding=extrapolation.ZERO)
            valid_values = math.sum_(values_l + values_r + valid_values, dim='shift')
            mask_l, mask_r = shift(overlap, (-1, 1), dims=dim, padding=extrapolation.ZERO)
            overlap = math.sum_(mask_l + mask_r + overlap, dim='shift')
        extp = math.divide_no_nan(valid_values, overlap)  # take mean where extrapolated values overlap
        values = math.where(valid, values, math.where(binarize(overlap), extp, values))
        valid = overlap
    return values, binarize(valid)
def fft(x: phi.math._tensors.Tensor, dims: str = None) ‑> phi.math._tensors.Tensor

Performs a fast Fourier transform (FFT) on all spatial dimensions of x.

The inverse operation is ifft().

Implementations:

  • NumPy: np.fft.fft, np.fft.fft2, np.fft.fftn
  • PyTorch: torch.fft.fft
  • TensorFlow: tf.signal.fft, tf.signal.fft2d, tf.signal.fft3d
  • Jax: jax.numpy.fft.fft, jax.numpy.fft.fft2, jax.numpy.fft.fftn

Args

x
Uniform complex or float Tensor with at least one spatial dimension.
dims
Dimensions along which to perform the FFT. If None, performs the FFT along all spatial dimensions of x.

Returns

Ƒ(x) as complex Tensor

Expand source code
def fft(x: Tensor, dims: str or tuple or list or Shape = None) -> Tensor:
    """
    Performs a fast Fourier transform (FFT) on all spatial dimensions of x.
    
    The inverse operation is `ifft()`.

    Implementations:

    * NumPy: [`np.fft.fft`](https://numpy.org/doc/stable/reference/generated/numpy.fft.fft.html),
      [`numpy.fft.fft2`](https://numpy.org/doc/stable/reference/generated/numpy.fft.fft2.html),
      [`numpy.fft.fftn`](https://numpy.org/doc/stable/reference/generated/numpy.fft.fftn.html)
    * PyTorch: [`torch.fft.fft`](https://pytorch.org/docs/stable/fft.html)
    * TensorFlow: [`tf.signal.fft`](https://www.tensorflow.org/api_docs/python/tf/signal/fft),
      [`tf.signal.fft2d`](https://www.tensorflow.org/api_docs/python/tf/signal/fft2d),
      [`tf.signal.fft3d`](https://www.tensorflow.org/api_docs/python/tf/signal/fft3d)
    * Jax: [`jax.numpy.fft.fft`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.fft.fft.html),
      [`jax.numpy.fft.fft2`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.fft.fft2.html),
      [`jax.numpy.fft.fftn`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.fft.fftn.html)

    Args:
        x: Uniform complex or float `Tensor` with at least one spatial dimension.
        dims: Dimensions along which to perform the FFT.
            If `None`, performs the FFT along all spatial dimensions of `x`.

    Returns:
        *Ƒ(x)* as complex `Tensor`
    """
    dims = parse_dim_order(dims) if dims is not None else x.shape.spatial.names
    x_native = x.native(x.shape)
    result_native = choose_backend(x_native).fft(x_native, x.shape.indices(dims))
    return NativeTensor(result_native, x.shape)
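An illustrative round trip, assuming a real-valued 1D grid:

from phi import math
from phi.math import spatial

x = math.random_uniform(spatial(x=16))
k = math.fft(x)                       # complex Tensor of the same shape
x_restored = math.real(math.ifft(k))  # recovers x up to floating-point error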
def fftfreq(resolution: phi.math._shape.Shape, dx: phi.math._tensors.Tensor = 1, dtype: phi.math.backend._dtype.DType = None)

Returns the discrete Fourier transform sample frequencies. These are the frequencies corresponding to the components of the result of math.fft on a tensor of shape resolution.

Args

resolution
Grid resolution measured in cells
dx
Distance between sampling points in real space.
dtype
Data type of the returned tensor (Default value = None)

Returns

Tensor holding the frequencies of the corresponding values computed by math.fft

Expand source code
def fftfreq(resolution: Shape, dx: Tensor or float = 1, dtype: DType = None):
    """
    Returns the discrete Fourier transform sample frequencies.
    These are the frequencies corresponding to the components of the result of `math.fft` on a tensor of shape `resolution`.

    Args:
        resolution: Grid resolution measured in cells
        dx: Distance between sampling points in real space.
        dtype: Data type of the returned tensor (Default value = None)

    Returns:
        `Tensor` holding the frequencies of the corresponding values computed by math.fft
    """
    k = meshgrid(**{dim: np.fft.fftfreq(int(n)) for dim, n in resolution.spatial._named_sizes})
    k /= dx
    return to_float(k) if dtype is None else cast(k, dtype)
def flatten(value: phi.math._tensors.Tensor, flat_dim: phi.math._shape.Shape = (flatⁱ=None)) ‑> phi.math._tensors.Tensor

Returns a Tensor with the same values as value but only a single dimension flat_dim. The order of the values in memory is not changed.

Args

value
Tensor
flat_dim
Dimension name and type as Shape object. The size is ignored.

Returns

Tensor

Expand source code
def flatten(value: Tensor, flat_dim: Shape = instance('flat')) -> Tensor:
    """
    Returns a `Tensor` with the same values as `value` but only a single dimension `flat_dim`.
    The order of the values in memory is not changed.

    Args:
        value: `Tensor`
        flat_dim: Dimension name and type as `Shape` object. The size is ignored.

    Returns:
        `Tensor`
    """
    assert isinstance(flat_dim, Shape) and flat_dim.rank == 1, flat_dim
    return pack_dims(value, value.shape, flat_dim)
def floor(x) ‑> phi.math._tensors.Tensor

Computes ⌊x⌋ of the Tensor or TensorLike x.

Expand source code
def floor(x) -> Tensor:
    """ Computes *⌊x⌋* of the `Tensor` or `TensorLike` `x`. """
    return _backend_op1(x, Backend.floor)
def fourier_laplace(grid: phi.math._tensors.Tensor, dx: phi.math._tensors.Tensor, times: int = 1)

Applies the spatial laplace operator to the given tensor with periodic boundary conditions.

Note: The results of fourier_laplace() and laplace() are close but not identical.

This implementation computes the laplace operator in Fourier space. The result for periodic fields is exact, i.e. no numerical instabilities can occur, even for higher-order derivatives.

Args

grid
tensor, assumed to have periodic boundary conditions
dx
distance between grid points, tensor-like, scalar or vector
times
number of times the laplace operator is applied. The computational cost is independent of this parameter. (Default value = 1)

Returns

Tensor of same shape as grid

Expand source code
def fourier_laplace(grid: Tensor,
                    dx: Tensor or Shape or float or list or tuple,
                    times: int = 1):
    """
    Applies the spatial laplace operator to the given tensor with periodic boundary conditions.
    
    *Note:* The results of `fourier_laplace` and `laplace` are close but not identical.
    
    This implementation computes the laplace operator in Fourier space.
    The result for periodic fields is exact, i.e. no numerical instabilities can occur, even for higher-order derivatives.

    Args:
      grid: tensor, assumed to have periodic boundary conditions
      dx: distance between grid points, tensor-like, scalar or vector
      times: number of times the laplace operator is applied. The computational cost is independent of this parameter.  (Default value = 1)

    Returns:
      `Tensor` of same shape as `grid`

    """
    frequencies = math.fft(math.to_complex(grid))
    k_squared = math.sum_(math.fftfreq(grid.shape) ** 2, 'vector')
    fft_laplace = -(2 * np.pi) ** 2 * k_squared
    result = math.real(math.ifft(frequencies * fft_laplace ** times))
    return math.cast(result / wrap(dx) ** 2, grid.dtype)
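
Example — a short sketch comparing the spectral Laplace with the finite-difference laplace() on a periodic field (field and resolution are assumptions; as noted above, the two results are close but not identical):

from phi import math
from phi.math import spatial, extrapolation

grid = math.random_normal(spatial(x=64, y=64))  # treated as periodic
spectral = math.fourier_laplace(grid, dx=1.)
finite_diff = math.laplace(grid, padding=extrapolation.PERIODIC)
# spectral and finite_diff agree closely for smooth periodic fields
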
def fourier_poisson(grid: phi.math._tensors.Tensor, dx: phi.math._tensors.Tensor, times: int = 1)

Inverse operation to fourier_laplace().

Args

grid
Tensor, assumed to have periodic boundary conditions.
dx
Distance between grid points, tensor-like, scalar or vector.
times
Number of times the inverse operator is applied. (Default value = 1)

Returns

Tensor of same shape as grid

Expand source code
def fourier_poisson(grid: Tensor,
                    dx: Tensor or Shape or float or list or tuple,
                    times: int = 1):
    """
    Inverse operation to `fourier_laplace`.

    Args:
      grid: `Tensor`, assumed to have periodic boundary conditions.
      dx: Distance between grid points, tensor-like, scalar or vector.
      times: Number of times the inverse operator is applied. (Default value = 1)

    Returns:
        `Tensor` of same shape as `grid`
    """
    frequencies = math.fft(math.to_complex(grid))
    k_squared = math.sum_(math.fftfreq(grid.shape) ** 2, 'vector')
    fft_laplace = -(2 * np.pi) ** 2 * k_squared
    # fft_laplace.tensor[(0,) * math.ndims(k_squared)] = math.inf  # assume NumPy array to edit
    result = math.real(math.ifft(math.divide_no_nan(frequencies, math.to_complex(fft_laplace ** times))))
    return math.cast(result * wrap(dx) ** 2, grid.dtype)
def frequency_loss(x, frequency_falloff: float = 100, threshold=1e-05, ignore_mean=False) ‑> phi.math._tensors.Tensor

Penalizes the squared values in frequency (Fourier) space. Lower frequencies are weighted more strongly than higher frequencies, depending on frequency_falloff.

Args

x
Tensor or TensorLike Values to penalize, typically actual - target.
frequency_falloff
Large values put more emphasis on lower frequencies, 1.0 weights all frequencies equally. Note: The total loss is not normalized. Varying the value will result in losses of different magnitudes.
threshold
Frequency amplitudes below this value are ignored. Setting this to zero may cause infinities or NaN values during backpropagation.
ignore_mean
If True, does not penalize the mean value (frequency=0 component).

Returns

Scalar loss value

Expand source code
def frequency_loss(x,
                   frequency_falloff: float = 100,
                   threshold=1e-5,
                   ignore_mean=False) -> Tensor:
    """
    Penalizes the squared values of `x` in frequency (Fourier) space.
    Lower frequencies are weighted more strongly than higher frequencies, depending on `frequency_falloff`.

    Args:
        x: `Tensor` or `TensorLike` Values to penalize, typically `actual - target`.
        frequency_falloff: Large values put more emphasis on lower frequencies, 1.0 weights all frequencies equally.
            *Note*: The total loss is not normalized. Varying the value will result in losses of different magnitudes.
        threshold: Frequency amplitudes below this value are ignored.
            Setting this to zero may cause infinities or NaN values during backpropagation.
        ignore_mean: If `True`, does not penalize the mean value (frequency=0 component).

    Returns:
      Scalar loss value
    """
    if isinstance(x, Tensor):
        if ignore_mean:
            x -= math.mean(x, x.shape.non_batch)
        k_squared = vec_squared(math.fftfreq(x.shape.spatial))
        weights = math.exp(-0.5 * k_squared * frequency_falloff ** 2)
        diff_fft = abs_square(math.fft(x) * weights)
        diff_fft = math.sqrt(math.maximum(diff_fft, threshold))
        return l2_loss(diff_fft)
    elif isinstance(x, TensorLike):
        return sum([frequency_loss(getattr(x, a), frequency_falloff, threshold, ignore_mean) for a in variable_values(x)])
    else:
        raise ValueError(x)
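
Example — a minimal sketch penalizing the deviation between a prediction and a target in frequency space (tensor shapes are assumptions):

from phi import math
from phi.math import spatial

actual = math.random_normal(spatial(x=32, y=32))
target = math.random_normal(spatial(x=32, y=32))
loss = math.frequency_loss(actual - target, frequency_falloff=100)  # scalar Tensor
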
def functional_gradient(f: Callable, wrt: tuple = (0,), get_output=True) ‑> Callable

Creates a function which computes the gradient of f.

Example:

def loss_function(x, y):
    prediction = f(x)
    loss = math.l2_loss(prediction - y)
    return loss, prediction

dx, = functional_gradient(loss_function, get_output=False)(x, y)

(loss, prediction), (dx, dy) = functional_gradient(loss_function,
                                    wrt=(0, 1), get_output=True)(x, y)

Functional gradients are implemented for the following backends:

  • PyTorch: torch.autograd.grad / torch.autograd.backward
  • TensorFlow: tf.GradientTape
  • Jax: jax.grad

When the gradient function is invoked, f is called with tensors that track the gradient. For PyTorch, arg.requires_grad = True for all positional arguments of f.

Args

f
Function to be differentiated. f must return a floating point Tensor with rank zero. It can return additional tensors which are treated as auxiliary data and will be returned by the gradient function if get_output=True. All arguments for which the gradient is computed must be of dtype float or complex.
get_output
Whether the gradient function should also return the return values of f.
wrt
Arguments of f with respect to which the gradient should be computed. Example: wrt=(0,) computes the gradient with respect to the first argument of f.

Returns

Function with the same arguments as f that returns the value of f, auxiliary data and gradient of f if get_output=True, else just the gradient of f.

Expand source code
def functional_gradient(f: Callable, wrt: tuple or list = (0,), get_output=True) -> Callable:
    """
    Creates a function which computes the gradient of `f`.

    Example:
    ```python
    def loss_function(x, y):
        prediction = f(x)
        loss = math.l2_loss(prediction - y)
        return loss, prediction

    dx, = functional_gradient(loss_function, get_output=False)(x, y)

    (loss, prediction), (dx, dy) = functional_gradient(loss_function,
                                        wrt=(0, 1), get_output=True)(x, y)
    ```

    Functional gradients are implemented for the following backends:

    * PyTorch: [`torch.autograd.grad`](https://pytorch.org/docs/stable/autograd.html#torch.autograd.grad) / [`torch.autograd.backward`](https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward)
    * TensorFlow: [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape)
    * Jax: [`jax.grad`](https://jax.readthedocs.io/en/latest/jax.html#jax.grad)

    When the gradient function is invoked, `f` is called with tensors that track the gradient.
    For PyTorch, `arg.requires_grad = True` for all positional arguments of `f`.

    Args:
        f: Function to be differentiated.
            `f` must return a floating point `Tensor` with rank zero.
            It can return additional tensors which are treated as auxiliary data and will be returned by the gradient function if `get_output=True`.
            All arguments for which the gradient is computed must be of dtype float or complex.
        get_output: Whether the gradient function should also return the return values of `f`.
        wrt: Arguments of `f` with respect to which the gradient should be computed.
            Example: `wrt=(0,)` computes the gradient with respect to the first argument of `f`.

    Returns:
        Function with the same arguments as `f` that returns the value of `f`, auxiliary data and gradient of `f` if `get_output=True`, else just the gradient of `f`.
    """
    return GradientFunction(f, wrt, get_output)
def gather(values: phi.math._tensors.Tensor, indices: phi.math._tensors.Tensor, dims: str = None)

Gathers the entries of values at positions described by indices.

See Also: scatter().

Args

values
Tensor containing values to gather.
indices
int Tensor. Multi-dimensional position references in values. Must contain a single channel dimension for the index vector matching the number of dims.
dims
Dimensions indexed by indices. If None, will default to all spatial dimensions or all instance dimensions, depending on which ones are present (but not both).

Returns

Tensor with combined batch dimensions, channel dimensions of values and spatial/instance dimensions of indices.

Expand source code
def gather(values: Tensor, indices: Tensor, dims: str or Shape or tuple or list = None):
    """
    Gathers the entries of `values` at positions described by `indices`.

    See Also:
        `scatter()`.

    Args:
        values: `Tensor` containing values to gather.
        indices: `int` `Tensor`. Multi-dimensional position references in `values`.
            Must contain a single channel dimension for the index vector matching the number of `dims`.
        dims: Dimensions indexed by `indices`.
            If `None`, will default to all spatial dimensions or all instance dimensions, depending on which ones are present (but not both).

    Returns:
        `Tensor` with combined batch dimensions, channel dimensions of `values` and spatial/instance dimensions of `indices`.
    """
    if dims is None:
        assert values.shape.instance.is_empty or values.shape.spatial.is_empty, f"Specify gather dimensions for values with both instance and spatial dimensions. Got {values.shape}"
        dims = values.shape.instance if values.shape.spatial.is_empty else values.shape.spatial
    dims = parse_dim_order(dims)
    batch = (values.shape.batch & indices.shape.batch).without(dims)
    channel = values.shape.channel.without(dims)
    native_values = reshaped_native(values, [batch, *dims, channel])
    native_indices = reshaped_native(indices, [batch, *indices.shape.non_batch.non_channel, indices.shape.channel])
    backend = choose_backend(native_values, native_indices)
    native_result = backend.batched_gather_nd(native_values, native_indices)
    result = reshaped_tensor(native_result, [batch, *indices.shape.non_channel.non_batch, values.shape.channel])
    return result
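
Example — a small sketch gathering two grid values by their (x, y) indices (the dimension name points and the index values are assumptions):

from phi import math
from phi.math import spatial, instance, channel

values = math.random_uniform(spatial(x=4, y=3))
indices = math.tensor([(0, 0), (3, 2)], instance('points'), channel('vector'))
picked = math.gather(values, indices)  # one value per entry of 'points'
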
def get_precision() ‑> int

Gets the current target floating point precision in bits. The precision can be set globally using set_global_precision() or locally using with precision(p):.

Any Backend method may convert floating point values to this precision, even if the input had a different precision.

Returns

16 for half, 32 for single, 64 for double

Expand source code
def get_precision() -> int:
    """
    Gets the current target floating point precision in bits.
    The precision can be set globally using `set_global_precision()` or locally using `with precision(p):`.

    Any Backend method may convert floating point values to this precision, even if the input had a different precision.

    Returns:
        16 for half, 32 for single, 64 for double
    """
    return _PRECISION[-1]
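
Example — querying and locally overriding the precision (default global precision assumed):

from phi import math

print(math.get_precision())  # 32 unless changed globally
with math.precision(64):
    print(math.get_precision())  # 64 inside this block
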
def gradients(y: phi.math._tensors.Tensor, *x: phi.math._tensors.Tensor, grad_y: phi.math._tensors.Tensor = None)

Deprecated. Use functional_gradient() instead.

Computes the gradients dy/dx for each x. The parameters x must be marked prior to all operations for which gradients should be recorded using record_gradients().

Args

y
Scalar Tensor computed from x, typically loss.
*x
(Optional) Parameter tensors which were previously marked in record_gradients(). If empty, computes the gradients w.r.t. all marked tensors.
grad_y
(optional) Gradient at y, defaults to 1.

Returns

Single Tensor if one x was passed, else sequence of tensors.

Expand source code
def gradients(y: Tensor,
              *x: Tensor,
              grad_y: Tensor or None = None):
    """
    *Deprecated. Use `functional_gradient()` instead.*

    Computes the gradients dy/dx for each `x`.
    The parameters `x` must be marked prior to all operations for which gradients should be recorded using `record_gradients()`.

    Args:
        y: Scalar `Tensor` computed from `x`, typically loss.
        *x: (Optional) Parameter tensors which were previously marked in `record_gradients()`.
            If empty, computes the gradients w.r.t. all marked tensors.
        grad_y: (optional) Gradient at `y`, defaults to 1.

    Returns:
        Single `Tensor` if one `x` was passed, else sequence of tensors.
    """
    warnings.warn("math.gradients() is deprecated. Use functional_gradient() instead.", DeprecationWarning)
    assert isinstance(y, NativeTensor), f"{type(y)}"
    if len(x) == 0:
        x = _PARAM_STACK[-1]
    backend = choose_backend_t(y, *x)
    x_natives = sum([x_._natives() for x_ in x], ())
    native_gradients = list(backend.gradients(y.native(), x_natives, grad_y.native() if grad_y is not None else None))
    for i, grad in enumerate(native_gradients):
        assert grad is not None, f"Missing gradient for source with shape {x_natives[i].shape}"
    grads = [x_._op1(lambda native: native_gradients.pop(0)) for x_ in x]
    return grads[0] if len(x) == 1 else grads
def grid_sample(grid: phi.math._tensors.Tensor, coordinates: phi.math._tensors.Tensor, extrap: e_.Extrapolation)

Samples values of grid at the locations referenced by coordinates. Values lying in between sample points are determined via linear interpolation. For values outside the valid bounds of grid (coord < 0 or coord > grid.shape - 1), extrap is used to determine the neighboring grid values.

Args

grid
Grid with at least one spatial dimension and no instance dimensions.
coordinates
Coordinates with a single channel dimension called 'vector'. The size of the vector dimension must match the number of spatial dimensions of grid.
extrap
Extrapolation used to determine the values of grid outside its valid bounds.

Returns

Tensor with channel dimensions of grid, spatial and instance dimensions of coordinates and combined batch dimensions.

Expand source code
def grid_sample(grid: Tensor, coordinates: Tensor, extrap: 'e_.Extrapolation'):
    """
    Samples values of `grid` at the locations referenced by `coordinates`.
    Values lying in between sample points are determined via linear interpolation.
    For values outside the valid bounds of `grid` (`coord < 0 or coord > grid.shape - 1`), `extrap` is used to determine the neighboring grid values.

    Args:
        grid: Grid with at least one spatial dimension and no instance dimensions.
        coordinates: Coordinates with a single channel dimension called `'vector'`.
            The size of the `vector` dimension must match the number of spatial dimensions of `grid`.
        extrap: Extrapolation used to determine the values of `grid` outside its valid bounds.

    Returns:
        `Tensor` with channel dimensions of `grid`, spatial and instance dimensions of `coordinates` and combined batch dimensions.
    """
    result = broadcast_op(functools.partial(_grid_sample, extrap=extrap), [grid, coordinates])
    return result
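
Example — a minimal interpolation sketch (the extrapolation choice and sample coordinates are assumptions):

from phi import math
from phi.math import spatial, instance, channel, extrapolation

grid = math.random_uniform(spatial(x=8, y=8))
coords = math.tensor([(0.5, 0.5), (2.2, 6.7)], instance('points'), channel('vector'))
sampled = math.grid_sample(grid, coords, extrapolation.ZERO)  # linear interpolation
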
def ifft(k: phi.math._tensors.Tensor, dims: str = None)

Inverse of fft().

Args

k
Complex or float Tensor with at least one spatial dimension.
dims
Dimensions along which to perform the inverse FFT. If None, performs the inverse FFT along all spatial dimensions of k.

Returns

Ƒ⁻¹(k) as complex Tensor

Expand source code
def ifft(k: Tensor, dims: str or tuple or list or Shape = None):
    """
    Inverse of `fft()`.

    Args:
        k: Complex or float `Tensor` with at least one spatial dimension.
        dims: Dimensions along which to perform the inverse FFT.
            If `None`, performs the inverse FFT along all spatial dimensions of `k`.

    Returns:
        *Ƒ<sup>-1</sup>(k)* as complex `Tensor`
    """
    dims = parse_dim_order(dims) if dims is not None else k.shape.spatial.names
    k_native = k.native(k.shape)
    result_native = choose_backend(k_native).ifft(k_native, k.shape.indices(dims))
    return NativeTensor(result_native, k.shape)
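
Example — round-trip sketch: transforming to frequency space and back reproduces the input up to floating-point error (the tolerance is an assumption):

from phi import math
from phi.math import spatial

x = math.random_normal(spatial(x=32))
k = math.fft(math.to_complex(x))
x_back = math.real(math.ifft(k))
math.assert_close(x, x_back, abs_tolerance=1e-5)
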
def imag(x) ‑> phi.math._tensors.Tensor

See Also: real(), conjugate().

Args

x
Tensor or TensorLike or native tensor.

Returns

Imaginary component of x if x is complex, zeros otherwise.

Expand source code
def imag(x) -> Tensor:
    """
    See Also:
        `real()`, `conjugate()`.

    Args:
        x: `Tensor` or `TensorLike` or native tensor.

    Returns:
        Imaginary component of `x` if `x` is complex, zeros otherwise.
    """
    return _backend_op1(x, Backend.imag)
def instance(*args, **dims: int)

Returns the instance dimensions of an existing Shape or creates a new Shape with only instance dimensions.

Usage for filtering instance dimensions:

instance_dims = instance(shape)
instance_dims = instance(tensor)

Usage for creating a Shape with only instance dimensions:

instance_shape = instance('undef', points=2)
# Out: (points=2, undef=None)

Here, the dimension undef is created with an undefined size of None. Undefined sizes are automatically filled in by tensor(), wrap(), stack() and concat().

To create a shape with multiple types, use merge_shapes(), concat_shapes() or the syntax shape1 & shape2.

See Also: channel(), batch(), spatial()

Args

*args

Either

  • Shape or Tensor to filter or
  • Names of dimensions with undefined sizes as str.
**dims
Dimension sizes and names. Must be empty when used as a filter operation.

Returns

Shape containing only dimensions of type instance.

Expand source code
def instance(*args, **dims: int):
    """
    Returns the instance dimensions of an existing `Shape` or creates a new `Shape` with only instance dimensions.

    Usage for filtering instance dimensions:
    ```python
    instance_dims = instance(shape)
    instance_dims = instance(tensor)
    ```

    Usage for creating a `Shape` with only instance dimensions:
    ```python
    instance_shape = instance('undef', points=2)
    # Out: (points=2, undef=None)
    ```
    Here, the dimension `undef` is created with an undefined size of `None`.
    Undefined sizes are automatically filled in by `tensor`, `wrap`, `stack` and `concat`.

    To create a shape with multiple types, use `merge_shapes()`, `concat_shapes()` or the syntax `shape1 & shape2`.

    See Also:
        `channel`, `batch`, `spatial`

    Args:
        *args: Either

            * `Shape` or `Tensor` to filter or
            * Names of dimensions with undefined sizes as `str`.

        **dims: Dimension sizes and names. Must be empty when used as a filter operation.

    Returns:
        `Shape` containing only dimensions of type instance.
    """
    from phi.math import Tensor
    if all(isinstance(arg, str) for arg in args) or dims:
        for arg in args:
            parts = [s.strip() for s in arg.split(',')]
            for dim in parts:
                if dim not in dims:
                    dims[dim] = None
        return math.Shape(dims.values(), dims.keys(), [INSTANCE_DIM] * len(dims))
    elif len(args) == 1 and isinstance(args[0], Shape):
        return args[0].instance
    elif len(args) == 1 and isinstance(args[0], Tensor):
        return args[0].shape.instance
    else:
        raise AssertionError(f"instance() must be called either as a selector instance(Shape) or instance(Tensor) or as a constructor instance(*names, **dims). Got *args={args}, **dims={dims}")
def isfinite(x) ‑> phi.math._tensors.Tensor

Returns a Tensor or TensorLike matching x with values True where x has a finite value and False otherwise.

Expand source code
def isfinite(x) -> Tensor:
    """ Returns a `Tensor` or `TensorLike` matching `x` with values `True` where `x` has a finite value and `False` otherwise. """
    return _backend_op1(x, Backend.isfinite)
def jit_compile(f: Callable) ‑> Callable

Compiles a graph based on the function f. The graph compilation is performed just-in-time (jit), i.e. when the returned function is called for the first time.

The traced function will compute the same result as f but may run much faster. Some checks may be disabled in the compiled function.

Can be used as a decorator:

@math.jit_compile
def my_function(x: math.Tensor) -> math.Tensor:

Invoking the returned function may invoke re-tracing / re-compiling f after the first call if either

  • it is called with a different number of arguments,
  • the keyword arguments differ from previous invocations,
  • the positional tensor arguments have different dimension names or types (the dimension order also counts),
  • any positional Tensor arguments require a different backend than previous invocations,
  • TensorLike positional arguments do not match in non-variable properties.

Compilation is implemented for the following backends:

  • PyTorch: torch.jit.trace
  • TensorFlow: tf.function
  • Jax: jax.jit

Jit-compilations cannot be nested, i.e. you cannot call jit_compile() while another function is being compiled. An exception to this is jit_compile_linear() which can be called from within a jit-compiled function.

See Also: jit_compile_linear()

Args

f
Function to be traced. All positional arguments must be of type Tensor or TensorLike returning a single Tensor or TensorLike.

Returns

Function with similar signature and return values as f.

Expand source code
def jit_compile(f: Callable) -> Callable:
    """
    Compiles a graph based on the function `f`.
    The graph compilation is performed just-in-time (jit), i.e. when the returned function is called for the first time.

    The traced function will compute the same result as `f` but may run much faster.
    Some checks may be disabled in the compiled function.

    Can be used as a decorator:
    ```python
    @math.jit_compile
    def my_function(x: math.Tensor) -> math.Tensor:
    ```

    Invoking the returned function may invoke re-tracing / re-compiling `f` after the first call if either

    * it is called with a different number of arguments,
    * the keyword arguments differ from previous invocations,
    * the positional tensor arguments have different dimension names or types (the dimension order also counts),
    * any positional `Tensor` arguments require a different backend than previous invocations,
    * `TensorLike` positional arguments do not match in non-variable properties.

    Compilation is implemented for the following backends:

    * PyTorch: [`torch.jit.trace`](https://pytorch.org/docs/stable/jit.html)
    * TensorFlow: [`tf.function`](https://www.tensorflow.org/guide/function)
    * Jax: [`jax.jit`](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html#using-jit-to-speed-up-functions)

    Jit-compilations cannot be nested, i.e. you cannot call `jit_compile()` while another function is being compiled.
    An exception to this is `jit_compile_linear()` which can be called from within a jit-compiled function.

    See Also:
        `jit_compile_linear()`

    Args:
        f: Function to be traced.
            All positional arguments must be of type `Tensor` or `TensorLike` returning a single `Tensor` or `TensorLike`.

    Returns:
        Function with similar signature and return values as `f`.
    """
    return f if isinstance(f, (JitFunction, LinearFunction)) else JitFunction(f)
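
Example — a complete decorator sketch (the diffusion-like step and its constants are assumptions, not part of the API):

from phi import math
from phi.math import spatial, extrapolation

@math.jit_compile
def step(x: math.Tensor) -> math.Tensor:
    return x + 0.1 * math.laplace(x, padding=extrapolation.PERIODIC)

x = math.random_normal(spatial(x=32, y=32))
x = step(x)  # first call traces and compiles, later calls reuse the graph
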
def jit_compile_linear(f: Callable[[~X], ~Y]) ‑> phi.math._functional.LinearFunction[~X, ~Y]

Compile an optimized representation of the linear function f. For backends that support sparse tensors, a sparse matrix will be constructed for f.

Can be used as a decorator:

@math.jit_compile_linear
def my_linear_function(x: math.Tensor) -> math.Tensor:

Unlike jit_compile(), jit_compile_linear() can be called during a regular jit compilation.

See Also: jit_compile()

Args

f
Function that is linear in its positional arguments. All positional arguments must be of type Tensor and f must return a Tensor. f may be conditioned on keyword arguments. However, passing different values for these will cause f to be re-traced unless the conditioning arguments are also being traced.

Returns

LinearFunction with similar signature and return values as f.

Expand source code
def jit_compile_linear(f: Callable[[X], Y]) -> 'LinearFunction[X, Y]':  # TODO add cache control method, e.g. max_traces
    """
    Compile an optimized representation of the linear function `f`.
    For backends that support sparse tensors, a sparse matrix will be constructed for `f`.

    Can be used as a decorator:
    ```python
    @math.jit_compile_linear
    def my_linear_function(x: math.Tensor) -> math.Tensor:
    ```

    Unlike `jit_compile()`, `jit_compile_linear()` can be called during a regular jit compilation.

    See Also:
        `jit_compile()`

    Args:
        f: Function that is linear in its positional arguments.
            All positional arguments must be of type `Tensor` and `f` must return a `Tensor`.
            `f` may be conditioned on keyword arguments.
            However, passing different values for these will cause `f` to be re-traced unless the conditioning arguments are also being traced.

    Returns:
        `LinearFunction` with similar signature and return values as `f`.
    """
    if isinstance(f, JitFunction):
        f = f.f  # cannot trace linear function from jitted version
    return f if isinstance(f, LinearFunction) else LinearFunction(f)
def l1_loss(x) ‑> phi.math._tensors.Tensor

Computes ∑ᵢ ||xᵢ||₁, summing over all non-batch dimensions.

Args

x
Tensor or TensorLike. For TensorLike objects, only the sum over all value attributes is computed.

Returns

loss
Tensor
Expand source code
def l1_loss(x) -> Tensor:
    """
    Computes *∑<sub>i</sub> ||x<sub>i</sub>||<sub>1</sub>*, summing over all non-batch dimensions.

    Args:
        x: `Tensor` or `TensorLike`.
            For `TensorLike` objects, only the sum over all value attributes is computed.

    Returns:
        loss: `Tensor`
    """
    if isinstance(x, Tensor):
        return math.sum_(abs(x), x.shape.non_batch)
    elif isinstance(x, TensorLike):
        return sum([l1_loss(getattr(x, a)) for a in variable_values(x)])
    else:
        raise ValueError(x)
def l2_loss(x) ‑> phi.math._tensors.Tensor

Computes ∑ᵢ ||xᵢ||₂² / 2, summing over all non-batch dimensions.

Args

x
Tensor or TensorLike. For TensorLike objects, only the sum over all value attributes is computed.

Returns

loss
Tensor
Expand source code
def l2_loss(x) -> Tensor:
    """
    Computes *∑<sub>i</sub> ||x<sub>i</sub>||<sub>2</sub><sup>2</sup> / 2*, summing over all non-batch dimensions.

    Args:
        x: `Tensor` or `TensorLike`.
            For `TensorLike` objects, only the sum over all value attributes is computed.

    Returns:
        loss: `Tensor`
    """
    if isinstance(x, Tensor):
        if x.dtype.kind == complex:
            x = abs(x)
        return math.sum_(x ** 2, x.shape.non_batch) * 0.5
    elif isinstance(x, TensorLike):
        return sum([l2_loss(getattr(x, a)) for a in variable_values(x)])
    else:
        raise ValueError(x)
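
Example — a worked example for both loss functions (the values are assumptions):

from phi import math
from phi.math import channel

diff = math.wrap([3., -4.], channel('vector'))
print(math.l1_loss(diff))  # |3| + |-4| = 7
print(math.l2_loss(diff))  # (9 + 16) / 2 = 12.5
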
def laplace(x: phi.math._tensors.Tensor, dx: phi.math._tensors.Tensor = 1, padding: Extrapolation = boundary, dims: tuple = None)

Spatial Laplace operator as defined for scalar fields. If a vector field is passed, the laplace is computed component-wise.

Args

x
n-dimensional field of shape (batch, spatial dimensions…, components)
dx
scalar or 1d tensor
padding
extrapolation
dims
The second derivative along these dimensions is summed over

Returns

Tensor of same shape as x

Expand source code
def laplace(x: Tensor,
            dx: Tensor or float = 1,
            padding: Extrapolation = extrapolation.BOUNDARY,
            dims: tuple or None = None):
    """
    Spatial Laplace operator as defined for scalar fields.
    If a vector field is passed, the laplace is computed component-wise.

    Args:
        x: n-dimensional field of shape (batch, spatial dimensions..., components)
        dx: scalar or 1d tensor
        padding: extrapolation
        dims: The second derivative along these dimensions is summed over

    Returns:
        `phi.math.Tensor` of same shape as `x`

    """
    if isinstance(dx, (tuple, list)):
        dx = wrap(dx, batch('_laplace'))
    elif isinstance(dx, Tensor) and dx.vector.exists:
        dx = math.rename_dims(dx, 'vector', batch('_laplace'))
    if isinstance(x, Extrapolation):
        return x.spatial_gradient()
    left, center, right = shift(wrap(x), (-1, 0, 1), dims, padding, stack_dim=batch('_laplace'))
    result = (left + right - 2 * center) / dx
    result = math.sum_(result, '_laplace')
    return result
def linspace(start, stop, number: int, dim: phi.math._shape.Shape = (linspaceᶜ=None)) ‑> phi.math._tensors.Tensor

Returns number evenly spaced numbers between start and stop.

See Also: arange(), meshgrid().

Args

start
First value.
stop
Last value.
number
How many numbers to return, int.
dim
Dimension name and type as Shape object. The size of dim is ignored.

Returns

Tensor

Expand source code
def linspace(start, stop, number: int, dim: Shape = channel('linspace')) -> Tensor:
    """
    Returns `number` evenly spaced numbers between `start` and `stop`.

    See Also:
        `arange()`, `meshgrid()`.

    Args:
        start: First value.
        stop: Last value.
        number: How many numbers to return, `int`.
        dim: Dimension name and type as `Shape` object. The `size` of `dim` is ignored.

    Returns:
        `Tensor`
    """
    assert dim.rank == 1
    native = choose_backend(start, stop, number, prefer_default=True).linspace(start, stop, number)
    return NativeTensor(native, dim.with_sizes([number]))
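
Example — usage sketch (the spatial dimension name is an assumption):

from phi import math
from phi.math import spatial

t = math.linspace(0, 1, 5)  # (0.0, 0.25, 0.5, 0.75, 1.0) along channel dim 'linspace'
x = math.linspace(0, 10, 3, dim=spatial('x'))  # (0.0, 5.0, 10.0) along spatial dim 'x'
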
def log(x) ‑> phi.math._tensors.Tensor

Computes the natural logarithm of the Tensor or TensorLike x.

Expand source code
def log(x) -> Tensor:
    """ Computes the natural logarithm of the `Tensor` or `TensorLike` `x`. """
    return _backend_op1(x, Backend.log)
def log10(x) ‑> phi.math._tensors.Tensor

Computes log(x) of the Tensor or TensorLike x with base 10.

Expand source code
def log10(x) -> Tensor:
    """ Computes *log(x)* of the `Tensor` or `TensorLike` `x` with base 10. """
    return _backend_op1(x, Backend.log10)
def log2(x) ‑> phi.math._tensors.Tensor

Computes log(x) of the Tensor or TensorLike x with base 2.

Expand source code
def log2(x) -> Tensor:
    """ Computes *log(x)* of the `Tensor` or `TensorLike` `x` with base 2. """
    return _backend_op1(x, Backend.log2)
def map(function, *values: phi.math._tensors.Tensor) ‑> phi.math._tensors.Tensor

Calls function on all elements of values.

Args

function
Function to be called on single elements contained in values. Must return a value that can be stored in tensors.
values
Tensors to iterate over. Number of tensors must match function signature.

Returns

Tensor of same shape as values.

Expand source code
def map_(function, *values: Tensor) -> Tensor:
    """
    Calls `function` on all elements of `values`.

    Args:
        function: Function to be called on single elements contained in `values`. Must return a value that can be stored in tensors.
        values: Tensors to iterate over. Number of tensors must match `function` signature.

    Returns:
        `Tensor` of same shape as `values`.
    """
    shape = merge_shapes(*[v.shape for v in values])
    values_reshaped = [CollapsedTensor(v, shape) for v in values]
    flat = [flatten(v) for v in values_reshaped]
    result = []
    for items in zip(*flat):
        result.append(function(*items))
    if None in result:
        assert all(r is None for r in result), f"map function returned None for some elements, {result}"
        return
    return wrap(result).vector.split(shape)
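
Example — a minimal element-wise sketch; the function receives one dimensionless element at a time (the square-root example is an assumption):

import math as pymath
from phi import math
from phi.math import spatial

data = math.wrap([0., 1., 4.], spatial('x'))
roots = math.map(lambda v: pymath.sqrt(float(v)), data)  # (0.0, 1.0, 2.0)
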
def max(value: phi.math._tensors.Tensor, dim: str = None) ‑> phi.math._tensors.Tensor

Determines the maximum value of values along the specified dimensions.

Args

value
Tensor or list / tuple of Tensors.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to reduce the sequence of Tensors

Returns

Tensor without the reduced dimensions.

Expand source code
def max_(value: Tensor or list or tuple, dim: str or int or tuple or list or None or Shape = None) -> Tensor:
    """
    Determines the maximum value of `values` along the specified dimensions.

    Args:
        value: `Tensor` or `list` / `tuple` of Tensors.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to reduce the sequence of Tensors

    Returns:
        `Tensor` without the reduced dimensions.
    """
    return _reduce(value, dim, native_function=lambda backend, native, dim: backend.max(native, dim))
def maximum(x: phi.math._tensors.Tensor, y: phi.math._tensors.Tensor)

Computes the element-wise maximum of x and y.

Expand source code
def maximum(x: Tensor or float, y: Tensor or float):
    """ Computes the element-wise maximum of `x` and `y`. """
    return custom_op2(x, y, maximum, lambda x_, y_: choose_backend(x_, y_).maximum(x_, y_))
def mean(value: phi.math._tensors.Tensor, dim: str = None) ‑> phi.math._tensors.Tensor

Computes the mean over values along the specified dimensions.

Args

value
Tensor or list / tuple of Tensors.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to reduce the sequence of Tensors

Returns

Tensor without the reduced dimensions.

Expand source code
def mean(value: Tensor or list or tuple, dim: str or int or tuple or list or None or Shape = None) -> Tensor:
    """
    Computes the mean over `values` along the specified dimensions.

    Args:
        value: `Tensor` or `list` / `tuple` of Tensors.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to reduce the sequence of Tensors

    Returns:
        `Tensor` without the reduced dimensions.
    """
    return _reduce(value, dim, native_function=lambda backend, native, dim: backend.mean(native, dim))
def median(value, dim: str = None)

Reduces dim of value by picking the median value. For even dimension sizes (ambiguous choice), the linear average of the two central values is computed.

Currently implemented via quantile().

Args

value
Tensor
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to reduce the sequence of Tensors

Returns

Tensor

Expand source code
def median(value, dim: str or int or tuple or list or None or Shape or Callable = None):
    """
    Reduces `dim` of `value` by picking the median value.
    For even dimension sizes (ambiguous choice), the linear average of the two central values is computed.

    Currently implemented via `quantile()`.

    Args:
        value: `Tensor`
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to reduce the sequence of Tensors

    Returns:
        `Tensor`
    """
    return quantile(value, 0.5, dim)
def merge_shapes(*shapes: phi.math._shape.Shape, check_exact: tuple = (), order=(<function batch>, <function instance>, <function spatial>, <function channel>))

Combines shapes into a single Shape, grouping dimensions by type. If dimensions with equal names are present in multiple shapes, their types and sizes must match.

The shorthand shape1 & shape2 merges shapes with check_exact=[spatial].

See Also: concat_shapes().

Args

*shapes
Shape objects to combine.
check_exact
Sequence of type filters, such as channel(), batch(), spatial() or instance(). These types are checked for exact match, i.e. shapes must either contain all dimensions of that type or none. The order of the dimensions does not matter. For example, when checking spatial(), the shapes spatial(x=5) and spatial(y=4) cannot be combined. However, spatial(x=5, y=4) can be combined with spatial(y=4, x=5) and channel('vector').
order
Dimension type order as tuple of type filters (channel(), batch(), spatial() or instance()). Dimensions are grouped by type while merging.

Returns

Merged Shape

Expand source code
def merge_shapes(*shapes: Shape, check_exact: tuple or list = (), order=(batch, instance, spatial, channel)):
    """
    Combines `shapes` into a single `Shape`, grouping dimensions by type.
    If dimensions with equal names are present in multiple shapes, their types and sizes must match.

    The shorthand `shape1 & shape2` merges shapes with `check_exact=[spatial]`.

    See Also:
        `concat_shapes()`.

    Args:
        *shapes: `Shape` objects to combine.
        check_exact: Sequence of type filters, such as `channel`, `batch`, `spatial` or `instance`.
            These types are checked for exact match, i.e. shapes must either contain all dimensions of that type or none.
            The order of the dimensions does not matter.
            For example, when checking `spatial`, the shapes `spatial(x=5)` and `spatial(y=4)` cannot be combined.
            However, `spatial(x=5, y=4)` can be combined with `spatial(y=4, x=5)` and `channel('vector')`.
        order: Dimension type order as `tuple` of type filters (`channel`, `batch`, `spatial` or `instance`). Dimensions are grouped by type while merging.

    Returns:
        Merged `Shape`
    """
    if not shapes:
        return EMPTY_SHAPE
    merged = []
    for dim_type in order:
        check_type_exact = dim_type in check_exact
        group = dim_type(shapes[0])
        for shape in shapes[1:]:
            shape = dim_type(shape)
            if check_type_exact:
                if group.rank == 0:
                    group = shape
                elif shape.rank > 0:  # check exact match
                    if shape.rank != group.rank:
                        raise IncompatibleShapes(f"Failed to combine {shapes} because a different number of {dim_type.__name__} dimensions are present but exact checks are enabled for dimensions of type {dim_type.__name__}. Try declaring all spatial dimensions in one call. Types are {[s.types for s in shapes]}", *shapes)
                    elif set(shape.names) != set(group.names):
                        raise IncompatibleShapes(f"Failed to combine {shapes} because {dim_type.__name__} dimensions do not match but exact checks were enabled for dimensions of type {dim_type.__name__}. Try declaring all spatial dimensions in one call. Types are {[s.types for s in shapes]}", *shapes)
                    elif shape._reorder(group) != group:
                        raise IncompatibleShapes(f"Failed to combine {shapes} because {dim_type.__name__} dimensions do not match but exact checks were enabled for dimensions of type {dim_type.__name__}. Try declaring all spatial dimensions in one call. Types are {[s.types for s in shapes]}", *shapes)
            else:
                for dim in shape:
                    if dim not in group:
                        group = group._expand(dim, pos=-1)
                    else:  # check size match
                        if not _size_equal(dim.size, group.get_size(dim.name)):
                            raise IncompatibleShapes(f"Cannot merge shapes {shapes} because dimension '{dim.name}' exists with different sizes.", *shapes)
        merged.append(group)
    return concat_shapes(*merged)
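
Example — merging and the exact-check behaviour of & described above (the shapes are assumptions):

from phi.math import batch, spatial, merge_shapes

s = merge_shapes(spatial(x=4), batch(b=10), spatial(y=3))
# dims are grouped by type: batch 'b' first, then spatial 'x', 'y'
# spatial(x=4) & spatial(y=3) would raise an error: '&' checks spatial dims exactly
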
def meshgrid(dim_type=<function spatial>, stack_dim=(vectorᶜ=None), **dimensions: int) ‑> phi.math._tensors.Tensor

Generate a mesh-grid Tensor from keyword dimensions.

Args

**dimensions
Mesh-grid dimensions, mapping names to values. Values may be int, 1D Tensor or 1D native tensor.
dim_type
Dimension type of mesh-grid dimensions, one of spatial(), channel(), batch(), instance().
stack_dim
Vector dimension along which grids are stacked.

Returns

Mesh-grid Tensor

Expand source code
def meshgrid(dim_type=spatial, stack_dim=channel('vector'), **dimensions: int or Tensor) -> Tensor:
    """
    Generate a mesh-grid `Tensor` from keyword dimensions.

    Args:
        **dimensions: Mesh-grid dimensions, mapping names to values.
            Values may be `int`, 1D `Tensor` or 1D native tensor.
        dim_type: Dimension type of mesh-grid dimensions, one of `spatial`, `channel`, `batch`, `instance`.
        stack_dim: Vector dimension along which grids are stacked.

    Returns:
        Mesh-grid `Tensor`
    """
    assert 'vector' not in dimensions
    dim_values = []
    dim_sizes = []
    for dim, spec in dimensions.items():
        if isinstance(spec, int):
            dim_values.append(tuple(range(spec)))
            dim_sizes.append(spec)
        elif isinstance(spec, Tensor):
            assert spec.rank == 1, f"Only 1D sequences allowed, got {spec} for dimension '{dim}'."
            dim_values.append(spec.native())
            dim_sizes.append(spec.shape.volume)
        else:
            backend = choose_backend(spec)
            shape = backend.staticshape(spec)
            assert len(shape) == 1, f"Only 1D sequences allowed, got {spec} for dimension '{dim}'."
            dim_values.append(spec)
            dim_sizes.append(shape[0])
    backend = choose_backend(*dim_values, prefer_default=True)
    indices_list = backend.meshgrid(*dim_values)
    grid_shape = dim_type(**{dim: size for dim, size in zip(dimensions.keys(), dim_sizes)})
    channels = [NativeTensor(t, grid_shape) for t in indices_list]
    return stack(channels, stack_dim)
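
Example — usage sketch (the dimension sizes are assumptions):

from phi import math

idx = math.meshgrid(x=3, y=2)  # spatial dims x=3, y=2 plus channel dim 'vector' of size 2
x_coords = idx.vector['x']  # x index at every grid point
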
def min(value: phi.math._tensors.Tensor, dim: str = None) ‑> phi.math._tensors.Tensor

Determines the minimum value of values along the specified dimensions.

Args

value
Tensor or list / tuple of Tensors.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to reduce the sequence of Tensors

Returns

Tensor without the reduced dimensions.

Expand source code
def min_(value: Tensor or list or tuple, dim: str or int or tuple or list or None or Shape = None) -> Tensor:
    """
    Determines the minimum value of `values` along the specified dimensions.

    Args:
        value: `Tensor` or `list` / `tuple` of Tensors.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to reduce the sequence of Tensors

    Returns:
        `Tensor` without the reduced dimensions.
    """
    return _reduce(value, dim, native_function=lambda backend, native, dim: backend.min(native, dim))
def minimize(f: Callable[[~X], ~Y], solve: phi.math._functional.Solve[~X, ~Y]) ‑> ~X

Finds a minimum of the scalar function f(x). The method argument of solve determines which method is used. All methods supported by scipy.optimize.minimize are supported, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html .

This method is limited to backends that support functional_gradient(), currently PyTorch, TensorFlow and Jax.

To obtain additional information about the performed solve, use a SolveTape.

See Also: solve_nonlinear().

Args

f
Function whose output is subject to minimization. All positional arguments of f are optimized and must be Tensor or TensorLike. The first return value of f must be a scalar float Tensor or TensorLike.
solve
Solve object to specify method type, parameters and initial guess for x.

Returns

x
solution, the minimum point x.

Raises

NotConverged
If the desired accuracy could not be reached within the maximum number of iterations.
Diverged
If the optimization failed prematurely.
Expand source code
def minimize(f: Callable[[X], Y], solve: Solve[X, Y]) -> X:
    """
    Finds a minimum of the scalar function *f(x)*.
    The `method` argument of `solve` determines which method is used.
    All methods supported by `scipy.optimize.minimize` are supported,
    see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html .

    This method is limited to backends that support `functional_gradient()`, currently PyTorch, TensorFlow and Jax.

    To obtain additional information about the performed solve, use a `SolveTape`.

    See Also:
        `solve_nonlinear()`.

    Args:
        f: Function whose output is subject to minimization.
            All positional arguments of `f` are optimized and must be `Tensor` or `TensorLike`.
            The first return value of `f` must be a scalar float `Tensor` or `TensorLike`.
        solve: `Solve` object to specify method type, parameters and initial guess for `x`.

    Returns:
        x: solution, the minimum point `x`.

    Raises:
        NotConverged: If the desired accuracy could not be reached within the maximum number of iterations.
        Diverged: If the optimization failed prematurely.
    """
    assert (solve.relative_tolerance == 0).all, f"relative_tolerance must be zero for minimize() but got {solve.relative_tolerance}"
    x0_nest, x0_tensors = disassemble_tree(solve.x0)
    x0_tensors = [to_float(t) for t in x0_tensors]
    backend = choose_backend_t(*x0_tensors, prefer_default=True)
    batch_dims = merge_shapes(*[t.shape for t in x0_tensors]).batch
    x0_natives = []
    for t in x0_tensors:
        t._expand()
        assert t.shape.is_uniform
        x0_natives.append(reshaped_native(t, [batch_dims, t.shape.non_batch], force_expand=True))
    x0_flat = backend.concat(x0_natives, -1)

    def unflatten_assemble(x_flat, additional_dims: Shape = EMPTY_SHAPE):
        i = 0
        x_tensors = []
        for x0_native, x0_tensor in zip(x0_natives, x0_tensors):
            vol = backend.shape(x0_native)[-1]
            flat_native = x_flat[..., i:i + vol]
            x_tensors.append(reshaped_tensor(flat_native, [*additional_dims, batch_dims, x0_tensor.shape.non_batch]))
            i += vol
        x = assemble_tree(x0_nest, x_tensors)
        return x

    def native_function(x_flat):
        x = unflatten_assemble(x_flat)
        if isinstance(x, (tuple, list)):
            y = f(*x)
        else:
            y = f(x)
        _, y_tensors = disassemble_tree(y)
        return y_tensors[0].sum, reshaped_native(y_tensors[0], [batch_dims])

    atol = backend.to_float(reshaped_native(solve.absolute_tolerance, [batch_dims], force_expand=True))
    maxi = backend.to_int32(reshaped_native(solve.max_iterations, [batch_dims], force_expand=True))
    trj = _SOLVE_TAPES and any(t.record_trajectories for t in _SOLVE_TAPES)
    t = time.perf_counter()
    ret = backend.minimize(solve.method, native_function, x0_flat, atol, maxi, trj)
    t = time.perf_counter() - t
    if not trj:
        assert isinstance(ret, SolveResult)
        converged = reshaped_tensor(ret.converged, [batch_dims])
        diverged = reshaped_tensor(ret.diverged, [batch_dims])
        x = unflatten_assemble(ret.x)
        iterations = reshaped_tensor(ret.iterations, [batch_dims])
        function_evaluations = reshaped_tensor(ret.function_evaluations, [batch_dims])
        residual = reshaped_tensor(ret.residual, [batch_dims])
        result = SolveInfo(solve, x, residual, iterations, function_evaluations, converged, diverged, ret.method, ret.message, t)
    else:  # trajectory
        assert isinstance(ret, (tuple, list)) and all(isinstance(r, SolveResult) for r in ret)
        converged = reshaped_tensor(ret[-1].converged, [batch_dims])
        diverged = reshaped_tensor(ret[-1].diverged, [batch_dims])
        x = unflatten_assemble(ret[-1].x)
        x_ = unflatten_assemble(backend.stack([r.x for r in ret]), additional_dims=batch('trajectory'))
        residual = stack([reshaped_tensor(r.residual, [batch_dims]) for r in ret], batch('trajectory'))
        iterations = reshaped_tensor(ret[-1].iterations, [batch_dims])
        function_evaluations = stack([reshaped_tensor(r.function_evaluations, [batch_dims]) for r in ret], batch('trajectory'))
        result = SolveInfo(solve, x_, residual, iterations, function_evaluations, converged, diverged, ret[-1].method, ret[-1].message, t)
    for tape in _SOLVE_TAPES:
        tape._add(solve, trj, result)
    result.convergence_check(False)  # raises ConvergenceException
    return x
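
Example — a minimal sketch under the stated assumptions: a backend with functional_gradient() support (e.g. PyTorch), an arbitrary quadratic objective, and method / tolerance choices of our own; note that relative_tolerance must be 0, as asserted above:

from phi import math
from phi.math import spatial

def loss(x) -> math.Tensor:
    return math.l2_loss(x - 1)

solve = math.Solve('L-BFGS-B', 0, 1e-5, x0=math.zeros(spatial(x=4)))
x = math.minimize(loss, solve)  # converges towards all ones
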
def minimum(x: phi.math._tensors.Tensor, y: phi.math._tensors.Tensor)

Computes the element-wise minimum of x and y.

Expand source code
def minimum(x: Tensor or float, y: Tensor or float):
    """ Computes the element-wise minimum of `x` and `y`. """
    return custom_op2(x, y, minimum, lambda x_, y_: choose_backend(x_, y_).minimum(x_, y_))
def native(value: phi.math._tensors.Tensor)

Returns the native tensor representation of value. If value is a Tensor, this is equal to calling Tensor.native(). Otherwise, checks that value is a valid tensor object and returns it.

Args

value
Tensor or native tensor or tensor-like.

Returns

Native tensor representation

Raises

ValueError if value is not a valid tensor object

Expand source code
def native(value: Tensor or Number or tuple or list or Any):
    """
    Returns the native tensor representation of `value`.
    If `value` is a `phi.math.Tensor`, this is equal to calling `phi.math.Tensor.native()`.
    Otherwise, checks that `value` is a valid tensor object and returns it.

    Args:
        value: `Tensor` or native tensor or tensor-like.

    Returns:
        Native tensor representation

    Raises:
        ValueError if `value` is not a valid tensor object
    """
    if isinstance(value, Tensor):
        return value.native()
    else:
        choose_backend(value)  # check that value is a native tensor
        return value
def native_call(f: Callable, *inputs: phi.math._tensors.Tensor, channels_last=None, channel_dim='vector')

Calls f with the native representations of the inputs tensors in standard layout and returns the result as a Tensor.

All inputs are converted to native tensors depending on channels_last:

  • channels_last=True: Dimension layout (total_batch_size, spatial_dims…, total_channel_size)
  • channels_last=False: Dimension layout (total_batch_size, total_channel_size, spatial_dims…)

All batch dimensions are compressed into a single dimension with total_batch_size = input.shape.batch.volume. The same is done for all channel dimensions.

Additionally, missing batch and spatial dimensions are added so that all inputs have the same batch and spatial shape.

Args

f
Function to be called on native tensors of inputs. The function output must have the same dimension layout as the inputs and the batch size must be identical.
*inputs
Uniform Tensor arguments
channels_last
(Optional) Whether to put channels as the last dimension of the native representation. If None, the channels are put in the default position associated with the current backend, see Backend.prefers_channels_last().
channel_dim
Name of the channel dimension of the result.

Returns

Tensor with batch and spatial dimensions of inputs and single channel dimension channel_dim.

Expand source code
def native_call(f: Callable, *inputs: Tensor, channels_last=None, channel_dim='vector'):
    """
    Calls `f` with the native representations of the `inputs` tensors in standard layout and returns the result as a `Tensor`.

    All inputs are converted to native tensors depending on `channels_last`:

    * `channels_last=True`: Dimension layout `(total_batch_size, spatial_dims..., total_channel_size)`
    * `channels_last=False`: Dimension layout `(total_batch_size, total_channel_size, spatial_dims...)`

    All batch dimensions are compressed into a single dimension with `total_batch_size = input.shape.batch.volume`.
    The same is done for all channel dimensions.

    Additionally, missing batch and spatial dimensions are added so that all `inputs` have the same batch and spatial shape.

    Args:
        f: Function to be called on native tensors of `inputs`.
            The function output must have the same dimension layout as the inputs and the batch size must be identical.
        *inputs: Uniform `Tensor` arguments
        channels_last: (Optional) Whether to put channels as the last dimension of the native representation.
            If `None`, the channels are put in the default position associated with the current backend,
            see `phi.math.backend.Backend.prefers_channels_last()`.
        channel_dim: Name of the channel dimension of the result.

    Returns:
        `Tensor` with batch and spatial dimensions of `inputs` and single channel dimension `channel_dim`.
    """
    if channels_last is None:
        backend = choose_backend_t(*inputs, prefer_default=True)
        channels_last = backend.prefers_channels_last()
    batch = merge_shapes(*[i.shape.batch for i in inputs])
    spatial = merge_shapes(*[i.shape.spatial for i in inputs])
    natives = []
    for i in inputs:
        groups = (batch, *i.shape.spatial.names, i.shape.channel) if channels_last else (batch, i.shape.channel, *i.shape.spatial.names)
        natives.append(reshaped_native(i, groups))
    output = f(*natives)
    if isinstance(output, (tuple, list)):
        raise NotImplementedError()
    else:
        groups = (batch, *spatial, channel(channel_dim)) if channels_last else (batch, channel(channel_dim), *spatial)
        result = reshaped_tensor(output, groups)
        if result.shape.get_size(channel_dim) == 1:
            result = result.dimension(channel_dim)[0]  # remove vector dim if not required
        return result
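
Example — a minimal sketch; the function simply scales the packed native tensor (shapes and the factor are assumptions):

from phi import math
from phi.math import batch, spatial, channel

def scale(native):
    return native * 2  # operates on the packed backend tensor

x = math.random_uniform(batch(examples=2), spatial(x=8, y=8), channel(vector=3))
y = math.native_call(scale, x)  # dimension layout is restored afterwards
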
def nonzero(value: phi.math._tensors.Tensor, list_dim: phi.math._shape.Shape = (nonzeroⁱ=None), index_dim: phi.math._shape.Shape = (vectorᶜ=None))

Get spatial indices of non-zero / True values.

Batch dimensions are preserved by this operation. If channel dimensions are present, this method returns the indices where any component is nonzero.

Implementations:

  • NumPy: numpy.argwhere
  • PyTorch: torch.nonzero
  • TensorFlow: tf.where(tf.not_equal(values, 0))
  • Jax: jax.numpy.nonzero

Args

value
spatial tensor to find non-zero / True values in.
list_dim
Dimension listing non-zero values.
index_dim
Index dimension.

Returns

Tensor of shape (batch dims…, list_dim=#non-zero, index_dim=value.shape.spatial_rank)

Expand source code
def nonzero(value: Tensor, list_dim: Shape = instance('nonzero'), index_dim: Shape = channel('vector')):
    """
    Get spatial indices of non-zero / True values.
    
    Batch dimensions are preserved by this operation.
    If channel dimensions are present, this method returns the indices where any component is nonzero.

    Implementations:

    * NumPy: [`numpy.argwhere`](https://numpy.org/doc/stable/reference/generated/numpy.argwhere.html)
    * PyTorch: [`torch.nonzero`](https://pytorch.org/docs/stable/generated/torch.nonzero.html)
    * TensorFlow: [`tf.where(tf.not_equal(values, 0))`](https://www.tensorflow.org/api_docs/python/tf/where)
    * Jax: [`jax.numpy.nonzero`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.nonzero.html)

    Args:
        value: spatial tensor to find non-zero / True values in.
        list_dim: Dimension listing non-zero values.
        index_dim: Index dimension.

    Returns:
        `Tensor` of shape (batch dims..., `list_dim`=#non-zero, `index_dim`=value.shape.spatial_rank)

    """
    if value.shape.channel_rank > 0:
        value = sum_(abs(value), value.shape.channel)

    def unbatched_nonzero(value: Tensor):
        native = reshaped_native(value, [*value.shape.spatial])
        backend = choose_backend(native)
        indices = backend.nonzero(native)
        indices_shape = Shape(backend.staticshape(indices), (list_dim.name, index_dim.name), (list_dim.type, index_dim.type))
        return NativeTensor(indices, indices_shape)

    return broadcast_op(unbatched_nonzero, [value], iter_dims=value.shape.batch.names)
def normalize_to(target: phi.math._tensors.Tensor, source: float, epsilon=1e-05)

Multiplies the target so that its sum matches the source.

Args

target
Tensor
source
Tensor or constant
epsilon
Small number to prevent division by zero.

Returns

Normalized tensor of the same shape as target

Expand source code
def normalize_to(target: Tensor, source: float or Tensor, epsilon=1e-5):
    """
    Multiplies the target so that its sum matches the source.

    Args:
        target: `Tensor`
        source: `Tensor` or constant
        epsilon: Small number to prevent division by zero.

    Returns:
        Normalized tensor of the same shape as target
    """
    target_total = math.sum_(target)
    denominator = math.maximum(target_total, epsilon) if epsilon is not None else target_total
    source_total = math.sum_(source)
    return target * (source_total / denominator)
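Example (a minimal sketch of the rescaling; names are illustrative):

from phi import math
from phi.math import spatial

target = math.random_uniform(spatial(x=4))
normalized = math.normalize_to(target, 1.)
math.assert_close(math.sum(normalized), 1.)  # the sum now matches the source value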
def numpy(value: phi.math._tensors.Tensor)

Converts value to a numpy.ndarray where value must be a Tensor, backend tensor or tensor-like. If value is a Tensor, this is equal to calling Tensor.numpy().

Note: Using this function breaks the autograd chain. The returned tensor is not differentiable. To get a differentiable tensor, use Tensor.native() instead.

Transposes the underlying tensor to match the name order and adds singleton dimensions for new dimension names. If a dimension of the tensor is not listed in order, a ValueError is raised.

If value is a NumPy array, it may be returned directly.

Returns

NumPy representation of value

Raises

ValueError if the tensor cannot be transposed to match target_shape

Expand source code
def numpy(value: Tensor or Number or tuple or list or Any):
    """
    Converts `value` to a `numpy.ndarray` where value must be a `Tensor`, backend tensor or tensor-like.
    If `value` is a `phi.math.Tensor`, this is equal to calling `phi.math.Tensor.numpy()`.

    *Note*: Using this function breaks the autograd chain. The returned tensor is not differentiable.
    To get a differentiable tensor, use `Tensor.native()` instead.

    Transposes the underlying tensor to match the name order and adds singleton dimensions for new dimension names.
    If a dimension of the tensor is not listed in `order`, a `ValueError` is raised.

    If `value` is a NumPy array, it may be returned directly.

    Returns:
        NumPy representation of `value`

    Raises:
        ValueError if the tensor cannot be transposed to match target_shape
    """
    if isinstance(value, Tensor):
        return value.numpy()
    else:
        backend = choose_backend(value)
        return backend.numpy(value)
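Example (a minimal sketch):

from phi import math
from phi.math import spatial

t = math.ones(spatial(x=3))
a = math.numpy(t)  # numpy.ndarray of shape (3,); not differentiable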
def ones(*shape: phi.math._shape.Shape, dtype: phi.math.backend._dtype.DType = None) ‑> phi.math._tensors.Tensor

Define a tensor with specified shape with value 1.0 / 1 / True everywhere.

This method may not immediately allocate the memory to store the values.

See Also: ones_like(), zeros().

Args

*shape
This (possibly empty) sequence of Shapes is concatenated, preserving the order.
dtype
Data type as DType object. Defaults to float matching the current precision setting.

Returns

Tensor

Expand source code
def ones(*shape: Shape, dtype: DType = None) -> Tensor:
    """
    Define a tensor with specified shape with value `1.0` / `1` / `True` everywhere.
    
    This method may not immediately allocate the memory to store the values.

    See Also:
        `ones_like()`, `zeros()`.

    Args:
        *shape: This (possibly empty) sequence of `Shape`s is concatenated, preserving the order.
        dtype: Data type as `DType` object. Defaults to `float` matching the current precision setting.

    Returns:
        `Tensor`
    """
    return _initialize(lambda shape, dtype: CollapsedTensor(NativeTensor(default_backend().ones((), dtype=dtype), EMPTY_SHAPE), shape), shape, dtype)
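Example (a minimal sketch; the dimension names are illustrative):

from phi import math
from phi.math import batch, spatial

t = math.ones(batch(b=2), spatial(x=4, y=3))  # shape (bᵇ=2, xˢ=4, yˢ=3), filled with 1.0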
def ones_like(tensor: phi.math._tensors.Tensor) ‑> phi.math._tensors.Tensor

Create a Tensor containing only 1.0 / 1 / True with the same shape and dtype as tensor.

Expand source code
def ones_like(tensor: Tensor) -> Tensor:
    """ Create a `Tensor` containing only `1.0` / `1` / `True` with the same shape and dtype as `obj`. """
    return zeros(tensor.shape, dtype=tensor.dtype) + 1
def pack_dims(value: phi.math._tensors.Tensor, dims: phi.math._shape.Shape, packed_dim: phi.math._shape.Shape, pos: int = None)

Compresses multiple dimensions into a single dimension by concatenating the elements. Elements along the new dimensions are laid out according to the order of dims. If the order of dims differs from the current dimension order, the tensor is transposed accordingly. This function replaces the traditional reshape for these cases.

The type of the new dimension will be equal to the types of dims. If dims have varying types, the new dimension will be a batch dimension.

See Also: unpack_dims()

Args

value
Tensor containing the dimensions dims.
dims
Dimensions to be compressed in the specified order.
packed_dim
Name and type of the new dimension.
pos
Index of new dimension. None for automatic, -1 for last, 0 for first.

Returns

Tensor with compressed shape.

Expand source code
def pack_dims(value: Tensor,
              dims: Shape or tuple or list,
              packed_dim: Shape,
              pos: int or None = None):
    """
    Compresses multiple dimensions into a single dimension by concatenating the elements.
    Elements along the new dimensions are laid out according to the order of `dims`.
    If the order of `dims` differs from the current dimension order, the tensor is transposed accordingly.
    This function replaces the traditional `reshape` for these cases.

    The type of the new dimension will be equal to the types of `dims`.
    If `dims` have varying types, the new dimension will be a batch dimension.

    See Also:
        `unpack_dims()`

    Args:
        value: Tensor containing the dimensions `dims`.
        dims: Dimensions to be compressed in the specified order.
        packed_dim: Name and type of the new dimension.
        pos: Index of new dimension. `None` for automatic, `-1` for last, `0` for first.

    Returns:
        `Tensor` with compressed shape.
    """
    dims = dims.names if isinstance(dims, Shape) else dims
    if len(dims) == 0 or all(dim not in value.shape for dim in dims):
        return CollapsedTensor(value, value.shape._expand(packed_dim.with_sizes([1]), pos))
    if len(dims) == 1:
        return rename_dims(value, dims, packed_dim)
    order = value.shape._order_group(dims)
    native = value.native(order)
    if pos is None:
        pos = min(value.shape.indices(dims))
    new_shape = value.shape.without(dims)._expand(packed_dim.with_sizes([value.shape.only(dims).volume]), pos)
    native = choose_backend(native).reshape(native, new_shape.sizes)
    return NativeTensor(native, new_shape)
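Example (a minimal sketch; the names points, x and y are illustrative):

from phi import math
from phi.math import spatial, instance

t = math.zeros(spatial(x=4, y=3))
flat = math.pack_dims(t, spatial(t), instance('points'))  # shape (pointsⁱ=12)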
def pad(value: phi.math._tensors.Tensor, widths: dict, mode: e_.Extrapolation) ‑> phi.math._tensors.Tensor

Pads a tensor along the specified dimensions, determining the added values using the given extrapolation. Unlike Extrapolation.pad(), this function can handle negative widths which slice off outer values.

Args

value
Tensor to be padded
widths
dict mapping dimension name (str) to (lower, upper) where lower and upper are int that can be positive (pad), negative (slice) or zero (pass).
mode
Extrapolation used to determine values added from positive widths.

Returns

Padded Tensor

Expand source code
def pad(value: Tensor, widths: dict, mode: 'e_.Extrapolation') -> Tensor:
    """
    Pads a tensor along the specified dimensions, determining the added values using the given extrapolation.
    Unlike `Extrapolation.pad()`, this function can handle negative widths which slice off outer values.

    Args:
        value: `Tensor` to be padded
        widths: `dict` mapping dimension name (`str`) to `(lower, upper)`
            where `lower` and `upper` are `int` that can be positive (pad), negative (slice) or zero (pass).
        mode: `Extrapolation` used to determine values added from positive `widths`.

    Returns:
        Padded `Tensor`
    """
    has_negative_widths = any(w[0] < 0 or w[1] < 0 for w in widths.values())
    slices = None
    if has_negative_widths:
        slices = {dim: slice(max(0, -w[0]), min(0, w[1]) or None) for dim, w in widths.items()}
        widths = {dim: (max(0, w[0]), max(0, w[1])) for dim, w in widths.items()}
    result = mode.pad(value, widths)
    return result[slices] if has_negative_widths else result
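Example (a minimal sketch showing positive and negative widths):

from phi import math
from phi.math import spatial, extrapolation

t = math.ones(spatial(x=4))
grown = math.pad(t, {'x': (1, 2)}, extrapolation.ZERO)    # size 7: zeros added on both sides
shrunk = math.pad(t, {'x': (-1, 0)}, extrapolation.ZERO)  # size 3: first value sliced off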
def precision(floating_point_bits: int)

Sets the floating point precision for the local context.

Usage: with precision(p):

This overrides the global setting, see set_global_precision().

Args

floating_point_bits
16 for half, 32 for single, 64 for double
Expand source code
@contextmanager
def precision(floating_point_bits: int):
    """
    Sets the floating point precision for the local context.

    Usage: `with precision(p):`

    This overrides the global setting, see `set_global_precision()`.

    Args:
        floating_point_bits: 16 for half, 32 for single, 64 for double
    """
    _PRECISION.append(floating_point_bits)
    try:
        yield None
    finally:
        _PRECISION.pop(-1)
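Example (a minimal sketch):

from phi import math
from phi.math import spatial

with math.precision(64):
    t = math.ones(spatial(x=3))  # created as float64; outside the block, the previous precision applies again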
def print(obj: phi.math._tensors.Tensor = None, name: str = '')

Print a tensor with no more than two spatial dimensions, slicing it along all batch and channel dimensions.

Unlike NumPy's array printing, the dimensions are sorted. Elements along the alphabetically first dimension are printed to the right, the second dimension upward. Typically, this means x right, y up.

Args

obj
tensor-like
name
name of the tensor

Returns:

Expand source code
def print_(obj: Tensor or TensorLike or Number or tuple or list or None = None, name: str = ""):
    """
    Print a tensor with no more than two spatial dimensions, slicing it along all batch and channel dimensions.
    
    Unlike NumPy's array printing, the dimensions are sorted.
    Elements along the alphabetically first dimension are printed to the right, the second dimension upward.
    Typically, this means x right, y up.

    Args:
        obj: tensor-like
        name: name of the tensor

    Returns:

    """
    def variables(obj) -> dict:
        if hasattr(obj, '__variable_attrs__') or hasattr(obj, '__value_attrs__'):
            return {f".{a}": getattr(obj, a) for a in variable_attributes(obj)}
        elif isinstance(obj, (tuple, list)):
            return {f"[{i}]": item for i, item in enumerate(obj)}
        elif isinstance(obj, dict):
            return obj
        else:
            raise ValueError(f"Not TensorLike: {type(obj)}")

    if obj is None:
        print()
    elif isinstance(obj, Tensor):
        _print_tensor(obj, name)
    elif isinstance(obj, TensorLike):
        for n, val in variables(obj).items():
            print_(val, name + n)
    else:
        value = wrap(obj)
        _print_tensor(value, name)
def print_gradient(value: phi.math._tensors.Tensor, name='', detailed=False) ‑> phi.math._tensors.Tensor

Prints the gradient vector of value when computed. The gradient at value is the vector-Jacobian product of all operations between the output of this function and the loss value.

The gradient is not printed in jit mode, see jit_compile().

Example

def f(x):
    x = math.print_gradient(x, 'dx')
    return math.l1_loss(x)

math.functional_gradient(f)(math.ones(x=6))

Args

value
Tensor for which the gradient may be computed later.
name
(Optional) Name to print along with the gradient values
detailed
If False, prints a short summary of the gradient tensor.

Returns

identity(value) which when differentiated, prints the gradient vector.

Expand source code
def print_gradient(value: Tensor, name="", detailed=False) -> Tensor:
    """
    Prints the gradient vector of `value` when computed.
    The gradient at `value` is the vector-Jacobian product of all operations between the output of this function and the loss value.

    The gradient is not printed in jit mode, see `jit_compile()`.

    Example:
        ```python
        def f(x):
            x = math.print_gradient(x, 'dx')
            return math.l1_loss(x)

        math.functional_gradient(f)(math.ones(x=6))
        ```

    Args:
        value: `Tensor` for which the gradient may be computed later.
        name: (Optional) Name to print along with the gradient values
        detailed: If `False`, prints a short summary of the gradient tensor.

    Returns:
        `identity(value)` which when differentiated, prints the gradient vector.
    """
    def print_grad(_x, _y, dx):
        if all_available(_x, dx):
            if detailed:
                print_(dx, name=name)
            else:
                print(f"{name}:  \t{dx}")
        return dx,
    identity = custom_gradient(lambda x: x, print_grad)
    return identity(value)
def prod(value: phi.math._tensors.Tensor, dim: str = None) ‑> phi.math._tensors.Tensor

Multiplies values along the specified dimensions.

Args

value
Tensor or list / tuple of Tensors.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to multiply the sequence of Tensors

Returns

Tensor without the reduced dimensions.

Expand source code
def prod(value: Tensor or list or tuple, dim: str or int or tuple or list or None or Shape = None) -> Tensor:
    """
    Multiplies `values` along the specified dimensions.

    Args:
        value: `Tensor` or `list` / `tuple` of Tensors.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to multiply the sequence of Tensors

    Returns:
        `Tensor` without the reduced dimensions.
    """
    return _reduce(value, dim,
                   native_function=lambda backend, native, dim: backend.prod(native, dim),
                   collapsed_function=lambda inner, red_shape: inner ** red_shape.volume)
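Example (a minimal sketch):

from phi import math
from phi.math import spatial

t = math.wrap([1, 2, 3], spatial('x'))
math.prod(t)       # 6, reduces all non-batch dimensions
math.prod(t, 'x')  # 6, reduces only x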
def quantile(value: phi.math._tensors.Tensor, quantiles: float, dim: str = None)

Compute the q-th quantile of value along dim for each q in quantiles.

Implementations:

  • NumPy: numpy.quantile
  • PyTorch: torch.quantile
  • TensorFlow: tfp.stats.percentile
  • Jax: jax.numpy.quantile

Args

value
Tensor
quantiles
Single quantile or tensor of quantiles to compute. Must be of type float, tuple, list or Tensor.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to reduce the sequence of Tensors

Returns

Tensor with dimensions of quantiles and non-reduced dimensions of value.

Expand source code
def quantile(value: Tensor,
             quantiles: float or tuple or list or Tensor,
             dim: str or int or tuple or list or None or Shape or Callable = None):
    """
    Compute the q-th quantile of `value` along `dim` for each q in `quantiles`.

    Implementations:

    * NumPy: [`quantile`](https://numpy.org/doc/stable/reference/generated/numpy.quantile.html)
    * PyTorch: [`quantile`](https://pytorch.org/docs/stable/generated/torch.quantile.html#torch.quantile)
    * TensorFlow: [`tfp.stats.percentile`](https://www.tensorflow.org/probability/api_docs/python/tfp/stats/percentile)
    * Jax: [`quantile`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.quantile.html)

    Args:
        value: `Tensor`
        quantiles: Single quantile or tensor of quantiles to compute.
            Must be of type `float`, `tuple`, `list` or `Tensor`.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to reduce the sequence of Tensors

    Returns:
        `Tensor` with dimensions of `quantiles` and non-reduced dimensions of `value`.
    """
    dims = _resolve_dims(dim, value.shape)
    native_values = reshaped_native(value, [*value.shape.without(dims), value.shape.only(dims)])
    backend = choose_backend(native_values)
    q = tensor(quantiles, default_list_dim=instance('quantiles'))
    native_quantiles = reshaped_native(q, [q.shape])
    native_result = backend.quantile(native_values, native_quantiles)
    return reshaped_tensor(native_result, [q.shape, *value.shape.without(dims)])
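Example (a minimal sketch; assumes the active backend implements quantile):

from phi import math
from phi.math import spatial

t = math.random_uniform(spatial(x=100))
median = math.quantile(t, 0.5)
quartiles = math.quantile(t, (0.25, 0.75))  # adds an instance dimension listing the quantiles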
def random_normal(*shape: phi.math._shape.Shape, dtype: phi.math.backend._dtype.DType = None) ‑> phi.math._tensors.Tensor

Creates a Tensor with the specified shape, filled with random values sampled from a normal / Gaussian distribution.

Implementations:

  • NumPy: numpy.random.standard_normal
  • PyTorch: torch.randn
  • TensorFlow: tf.random.normal
  • Jax: jax.random.normal

Args

*shape
This (possibly empty) sequence of Shapes is concatenated, preserving the order.
dtype
(optional) floating point DType. If None, a float tensor with the current default precision is created, see get_precision().

Returns

Tensor

Expand source code
def random_normal(*shape: Shape, dtype: DType = None) -> Tensor:
    """
    Creates a `Tensor` with the specified shape, filled with random values sampled from a normal / Gaussian distribution.

    Implementations:

    * NumPy: [`numpy.random.standard_normal`](https://numpy.org/doc/stable/reference/random/generated/numpy.random.standard_normal.html)
    * PyTorch: [`torch.randn`](https://pytorch.org/docs/stable/generated/torch.randn.html)
    * TensorFlow: [`tf.random.normal`](https://www.tensorflow.org/api_docs/python/tf/random/normal)
    * Jax: [`jax.random.normal`](https://jax.readthedocs.io/en/latest/_autosummary/jax.random.normal.html)

    Args:
        *shape: This (possibly empty) sequence of `Shape`s is concatenated, preserving the order.
        dtype: (optional) floating point `DType`. If `None`, a float tensor with the current default precision is created, see `get_precision()`.

    Returns:
        `Tensor`
    """

    def uniform_random_normal(shape, dtype):
        native = choose_backend(*shape.sizes, prefer_default=True).random_normal(shape.sizes)
        native = native if dtype is None else native.astype(dtype)
        return NativeTensor(native, shape)

    return _initialize(uniform_random_normal, shape, dtype)
def random_uniform(*shape: phi.math._shape.Shape, dtype: phi.math.backend._dtype.DType = None) ‑> phi.math._tensors.Tensor

Creates a Tensor with the specified shape, filled with random values sampled from a uniform distribution.

Args

*shape
This (possibly empty) sequence of Shapes is concatenated, preserving the order.
dtype
(optional) floating point DType. If None, a float tensor with the current default precision is created, see get_precision().

Returns

Tensor

Expand source code
def random_uniform(*shape: Shape, dtype: DType = None) -> Tensor:
    """
    Creates a `Tensor` with the specified shape, filled with random values sampled from a uniform distribution.

    Args:
        *shape: This (possibly empty) sequence of `Shape`s is concatenated, preserving the order.
        dtype: (optional) floating point `DType`. If `None`, a float tensor with the current default precision is created, see `get_precision()`.

    Returns:
        `Tensor`
    """

    def uniform_random_uniform(shape, dtype):
        native = choose_backend(*shape.sizes, prefer_default=True).random_uniform(shape.sizes)
        native = native if dtype is None else native.astype(dtype)
        return NativeTensor(native, shape)

    return _initialize(uniform_random_uniform, shape, dtype)
def range(dim: phi.math._shape.Shape, start_or_stop: int, stop: int = None, step=1)

Returns evenly spaced values between start and stop. If only one limit is given, 0 is used for the start.

See Also: range_tensor(), linspace(), meshgrid().

Args

dim
Dimension name and type as Shape object. The size of dim is ignored.
start_or_stop
Start if two limits are given, stop otherwise. int
stop
(Optional) stop
step
Distance between values.

Returns

Tensor

Expand source code
def arange(dim: Shape, start_or_stop: int, stop: int or None = None, step=1):
    """
    Returns evenly spaced values between `start` and `stop`.
    If only one limit is given, `0` is used for the start.

    See Also:
        `range_tensor()`, `linspace()`, `meshgrid()`.

    Args:
        dim: Dimension name and type as `Shape` object. The `size` of `dim` is ignored.
        start_or_stop: Start if two limits are given, stop otherwise. `int`
        stop: (Optional) `stop`
        step: Distance between values.

    Returns:
        `Tensor`
    """
    if stop is None:
        start, stop = 0, start_or_stop
    else:
        start = start_or_stop
    native = choose_backend(start, stop, prefer_default=True).range(start, stop, step, DType(int, 32))
    return NativeTensor(native, dim.with_sizes([stop - start]))
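Example (a minimal sketch):

from phi import math
from phi.math import spatial

math.range(spatial('x'), 5)     # 0, 1, 2, 3, 4
math.range(spatial('x'), 2, 7)  # 2, 3, 4, 5, 6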
def range_tensor(shape: phi.math._shape.Shape)

Returns a Tensor with given shape containing the linear indices of each element. For 1D tensors, this is equivalent to arange() with step=1.

See Also: arange(), meshgrid().

Args

shape
Tensor shape.

Returns

Tensor

Expand source code
def range_tensor(shape: Shape):
    """
    Returns a `Tensor` with given `shape` containing the linear indices of each element.
    For 1D tensors, this is equivalent to `arange()` with `step=1`.

    See Also:
        `arange()`, `meshgrid()`.

    Args:
        shape: Tensor shape.

    Returns:
        `Tensor`
    """
    data = arange(spatial('range'), 0, shape.volume)
    return unpack_dims(data, 'range', shape)
def real(x) ‑> phi.math._tensors.Tensor

See Also: imag(), conjugate().

Args

x
Tensor or TensorLike or native tensor.

Returns

Real component of x.

Expand source code
def real(x) -> Tensor:
    """
    See Also:
        `imag()`, `conjugate()`.

    Args:
        x: `Tensor` or `TensorLike` or native tensor.

    Returns:
        Real component of `x`.
    """
    return _backend_op1(x, Backend.real)
def record_gradients(*x: phi.math._tensors.Tensor, persistent=False)

Deprecated. Use functional_gradient() instead.

Context expression to record gradients for operations within that directly or indirectly depend on x.

The function gradients() may be called within the context to evaluate the gradients of a Tensor derived from x w.r.t. x.

Args

*x
Parameters for which gradients of the form dL/dx may be computed
persistent
if False, gradients() may only be called once within the context
Expand source code
@contextmanager
def record_gradients(*x: Tensor, persistent=False):
    """
    *Deprecated. Use `functional_gradient()` instead.*

    Context expression to record gradients for operations within that directly or indirectly depend on `x`.

    The function `gradients()` may be called within the context to evaluate the gradients of a Tensor derived from `x` w.r.t. `x`.

    Args:
        *x: Parameters for which gradients of the form dL/dx may be computed
        persistent: if `False`, `gradients()` may only be called once within the context
    """
    warnings.warn("math.record_gradients() is deprecated. Use functional_gradient() instead.", DeprecationWarning)
    for x_ in x:
        x_._expand()
    natives = sum([x_._natives() for x_ in x], ())
    backend = choose_backend(*natives)
    ctx = backend.record_gradients(natives, persistent=persistent)
    _PARAM_STACK.append(x)
    ctx.__enter__()
    try:
        yield None
    finally:
        ctx.__exit__(None, None, None)
        _PARAM_STACK.pop(0)
def rename_dims(value: phi.math._tensors.Tensor, dims: str, names: str)

Change the name and optionally the type of some dimensions of value.

Args

value
Shape or Tensor.
dims
Existing dimensions of value.
names

Either

  • Sequence of names matching dims as tuple, list or str. This replaces only the dimension names but leaves the types untouched.
  • Shape matching dims to replace names and types.

Returns

Same type as value.

Expand source code
def rename_dims(value: Tensor or Shape, dims: str or tuple or list or Shape, names: str or tuple or list or Shape):
    """
    Change the name and optionally the type of some dimensions of `value`.

    Args:
        value: `Shape` or `Tensor`.
        dims: Existing dimensions of `value`.
        names: Either

            * Sequence of names matching `dims` as `tuple`, `list` or `str`. This replaces only the dimension names but leaves the types untouched.
            * `Shape` matching `dims` to replace names and types.

    Returns:
        Same type as `value`.
    """
    if isinstance(value, Shape):
        return value._replace_names_and_types(dims, names)
    else:
        assert isinstance(value, Tensor), "value must be a Shape or Tensor."
        return value._with_shape_replaced(value.shape._replace_names_and_types(dims, names))
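Example (a minimal sketch; names are illustrative):

from phi import math
from phi.math import spatial

t = math.zeros(spatial(x=4, y=3))
t2 = math.rename_dims(t, ('x', 'y'), ('u', 'v'))  # names change, the spatial type is kept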
def reshaped_native(value: phi.math._tensors.Tensor, groups: tuple, force_expand: Any = False, to_numpy=False)

Returns a native representation of value where dimensions are laid out according to groups.

See Also: native(), pack_dims(), reshaped_tensor().

Args

value
Tensor
groups
Sequence of dimension names as str or groups of dimensions to be packed_dim as Shape.
force_expand
bool or sequence of dimensions. If True, repeats the tensor along missing dimensions. If False, puts singleton dimensions where possible. If a sequence of dimensions is provided, only forces the expansion for groups containing those dimensions.
to_numpy
If True, converts the native tensor to a numpy.ndarray.

Returns

Native tensor with dimensions matching groups.

Expand source code
def reshaped_native(value: Tensor,
                    groups: tuple or list,
                    force_expand: Any = False,
                    to_numpy=False):
    """
    Returns a native representation of `value` where dimensions are laid out according to `groups`.

    See Also:
        `native()`, `pack_dims()`, `reshaped_tensor()`.

    Args:
        value: `Tensor`
        groups: Sequence of dimension names as `str` or groups of dimensions to be packed_dim as `Shape`.
        force_expand: `bool` or sequence of dimensions.
            If `True`, repeats the tensor along missing dimensions.
            If `False`, puts singleton dimensions where possible.
            If a sequence of dimensions is provided, only forces the expansion for groups containing those dimensions.
        to_numpy: If True, converts the native tensor to a `numpy.ndarray`.

    Returns:
        Native tensor with dimensions matching `groups`.
    """
    assert isinstance(value, Tensor), f"value must be a Tensor but got {type(value)}"
    order = []
    for i, group in enumerate(groups):
        if isinstance(group, Shape):
            present = value.shape.only(group)
            if force_expand is True or present.volume > 1 or (force_expand is not False and group.only(force_expand).volume > 1):
                value = expand(value, group)
            value = pack_dims(value, group, batch(f"group{i}"))
            order.append(f"group{i}")
        else:
            assert isinstance(group, str), f"Groups must be either str or Shape but got {group}"
            order.append(group)
    return value.numpy(order) if to_numpy else value.native(order)
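Example (a minimal sketch; each Shape group is packed into one native dimension):

from phi import math
from phi.math import batch, spatial, channel

t = math.ones(batch(b=2), spatial(x=4, y=3), channel(vector=2))
native = math.reshaped_native(t, [batch(t), spatial(t), channel(t)])  # native shape (2, 12, 2)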
def reshaped_tensor(value: Any, groups: tuple, check_sizes=False, convert=True)

Creates a Tensor from a native tensor or tensor-like whereby the dimensions of value are split according to groups.

See Also: tensor(), reshaped_native(), unpack_dims().

Args

value
Native tensor or tensor-like.
groups
Sequence of dimension groups to be packed_dim as tuple[Shape] or list[Shape].
check_sizes
If True, group sizes must match the sizes of value exactly. Otherwise, allows singleton dimensions.
convert
If True, converts the data to the native format of the current default backend. If False, wraps the data in a Tensor but keeps the given data reference if possible.

Returns

Tensor with all dimensions from groups

Expand source code
def reshaped_tensor(value: Any,
                    groups: tuple or list,
                    check_sizes=False,
                    convert=True):
    """
    Creates a `Tensor` from a native tensor or tensor-like whereby the dimensions of `value` are split according to `groups`.

    See Also:
        `phi.math.tensor()`, `reshaped_native()`, `unpack_dims()`.

    Args:
        value: Native tensor or tensor-like.
        groups: Sequence of dimension groups to be packed_dim as `tuple[Shape]` or `list[Shape]`.
        check_sizes: If True, group sizes must match the sizes of `value` exactly. Otherwise, allows singleton dimensions.
        convert: If True, converts the data to the native format of the current default backend.
            If False, wraps the data in a `Tensor` but keeps the given data reference if possible.

    Returns:
        `Tensor` with all dimensions from `groups`
    """
    assert all(isinstance(g, Shape) for g in groups), "groups must be a sequence of Shapes"
    dims = [batch(f'group{i}') for i, group in enumerate(groups)]
    value = tensor(value, *dims, convert=convert)
    for i, group in enumerate(groups):
        if value.shape.get_size(f'group{i}') == group.volume:
            value = unpack_dims(value, f'group{i}', group)
        elif check_sizes:
            raise AssertionError(f"Group {group} does not match dimension {i} of value {value.shape}")
        else:
            value = unpack_dims(value, f'group{i}', group)
    return value
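Example (a minimal sketch, the inverse of the reshaped_native() example above):

import numpy as np
from phi import math
from phi.math import batch, spatial

native = np.zeros((2, 12))
t = math.reshaped_tensor(native, [batch(b=2), spatial(x=4, y=3)])  # shape (bᵇ=2, xˢ=4, yˢ=3)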
def round(x) ‑> phi.math._tensors.Tensor

Rounds the Tensor or TensorLike x to the closest integer.

Expand source code
def round_(x) -> Tensor:
    """ Rounds the `Tensor` or `TensorLike` `x` to the closest integer. """
    return _backend_op1(x, Backend.round)
def sample_subgrid(grid: phi.math._tensors.Tensor, start: phi.math._tensors.Tensor, size: phi.math._shape.Shape) ‑> phi.math._tensors.Tensor

Samples a sub-grid from grid with equal distance between sampling points. The values at the new sample points are determined via linear interpolation.

Args

grid
Tensor to be resampled. Values are assumed to be sampled at cell centers.
start
Origin point of sub-grid within grid, measured in number of cells. Must have a single dimension called vector. Example: start=(1, 0.5) would slice off the first grid point in dim 1 and take the mean of neighbouring points in dim 2. The order of dims must be equal to size and grid.shape.spatial.
size
Resolution of the sub-grid. Must not be larger than the resolution of grid. The order of dims must be equal to start and grid.shape.spatial.

Returns

Sub-grid as Tensor

Expand source code
def sample_subgrid(grid: Tensor, start: Tensor, size: Shape) -> Tensor:
    """
    Samples a sub-grid from `grid` with equal distance between sampling points.
    The values at the new sample points are determined via linear interpolation.

    Args:
        grid: `Tensor` to be resampled. Values are assumed to be sampled at cell centers.
        start: Origin point of sub-grid within `grid`, measured in number of cells.
            Must have a single dimension called `vector`.
            Example: `start=(1, 0.5)` would slice off the first grid point in dim 1 and take the mean of neighbouring points in dim 2.
            The order of dims must be equal to `size` and `grid.shape.spatial`.
        size: Resolution of the sub-grid. Must not be larger than the resolution of `grid`.
            The order of dims must be equal to `start` and `grid.shape.spatial`.

    Returns:
      Sub-grid as `Tensor`
    """
    assert start.shape.names == ('vector',)
    assert grid.shape.spatial.names == size.names
    assert math.all_available(start), "Cannot perform sample_subgrid() during tracing, 'start' must be known."
    discard = {}
    for dim, d_start, d_size in zip(grid.shape.spatial.names, start, size.sizes):
        discard[dim] = slice(int(d_start), int(d_start) + d_size + (1 if d_start != 0 else 0))
    grid = grid[discard]
    upper_weight = start % 1
    lower_weight = 1 - upper_weight
    for i, dim in enumerate(grid.shape.spatial.names):
        if upper_weight[i].native() not in (0, 1):
            lower, upper = shift(grid, (0, 1), [dim], padding=None, stack_dim=None)
            grid = upper * upper_weight[i] + lower * lower_weight[i]
    return grid
def scatter(base_grid: phi.math._tensors.Tensor, indices: phi.math._tensors.Tensor, values: phi.math._tensors.Tensor, mode: str = 'update', outside_handling: str = 'discard', indices_gradient=False)

Scatters values into base_grid at indices. instance dimensions of indices and/or values are reduced during scattering. Depending on mode, this method has one of the following effects:

  • mode='update': Replaces the values of base_grid at indices by values. The result is undefined if indices contains duplicates.
  • mode='add': Adds values to base_grid at indices. The values corresponding to duplicate indices are accumulated.
  • mode='mean': Replaces the values of base_grid at indices by the mean of all values with the same index.

Implementations:

  • NumPy: Slice assignment / numpy.add.at
  • PyTorch: torch.scatter, torch.scatter_add
  • TensorFlow: tf.tensor_scatter_nd_add, tf.tensor_scatter_nd_update
  • Jax: jax.lax.scatter_add, jax.lax.scatter

See Also: gather().

Args

base_grid
Tensor into which values are scattered.
indices
Tensor of n-dimensional indices at which to place values. Must have a single channel dimension with size matching the number of spatial dimensions of base_grid. This dimension is optional if the spatial rank is 1. Must also contain all scatter_dims.
values
Tensor of values to scatter at indices.
mode
Scatter mode as str. One of ('add', 'mean', 'update')
outside_handling

Defines how indices lying outside the bounds of base_grid are handled.

  • 'discard': outside indices are ignored.
  • 'clamp': outside indices are projected onto the closest point inside the grid.
  • 'undefined': All points are expected to lie inside the grid. Otherwise an error may be thrown or an undefined tensor may be returned.
indices_gradient
Whether to allow the gradient of this operation to be backpropagated through indices.

Returns

Copy of base_grid with updated values at indices.

Expand source code
def scatter(base_grid: Tensor or Shape,
            indices: Tensor,
            values: Tensor,
            mode: str = 'update',
            outside_handling: str = 'discard',
            indices_gradient=False):
    """
    Scatters `values` into `base_grid` at `indices`.
    instance dimensions of `indices` and/or `values` are reduced during scattering.
    Depending on `mode`, this method has one of the following effects:

    * `mode='update'`: Replaces the values of `base_grid` at `indices` by `values`. The result is undefined if `indices` contains duplicates.
    * `mode='add'`: Adds `values` to `base_grid` at `indices`. The values corresponding to duplicate indices are accumulated.
    * `mode='mean'`: Replaces the values of `base_grid` at `indices` by the mean of all `values` with the same index.

    Implementations:

    * NumPy: Slice assignment / `numpy.add.at`
    * PyTorch: [`torch.scatter`](https://pytorch.org/docs/stable/generated/torch.scatter.html), [`torch.scatter_add`](https://pytorch.org/docs/stable/generated/torch.scatter_add.html)
    * TensorFlow: [`tf.tensor_scatter_nd_add`](https://www.tensorflow.org/api_docs/python/tf/tensor_scatter_nd_add), [`tf.tensor_scatter_nd_update`](https://www.tensorflow.org/api_docs/python/tf/tensor_scatter_nd_update)
    * Jax: [`jax.lax.scatter_add`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scatter_add.html), [`jax.lax.scatter`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scatter.html)

    See Also:
        `gather()`.

    Args:
        base_grid: `Tensor` into which `values` are scattered.
        indices: `Tensor` of n-dimensional indices at which to place `values`.
            Must have a single channel dimension with size matching the number of spatial dimensions of `base_grid`.
            This dimension is optional if the spatial rank is 1.
            Must also contain all `scatter_dims`.
        values: `Tensor` of values to scatter at `indices`.
        mode: Scatter mode as `str`. One of ('add', 'mean', 'update')
        outside_handling: Defines how indices lying outside the bounds of `base_grid` are handled.

            * `'discard'`: outside indices are ignored.
            * `'clamp'`: outside indices are projected onto the closest point inside the grid.
            * `'undefined'`: All points are expected to lie inside the grid. Otherwise an error may be thrown or an undefined tensor may be returned.
        indices_gradient: Whether to allow the gradient of this operation to be backpropagated through `indices`.

    Returns:
        Copy of `base_grid` with updated values at `indices`.
    """
    assert mode in ('update', 'add', 'mean')
    assert outside_handling in ('discard', 'clamp', 'undefined')
    assert isinstance(indices_gradient, bool)
    grid_shape = base_grid if isinstance(base_grid, Shape) else base_grid.shape
    assert indices.shape.channel.names == ('vector',) or (grid_shape.spatial_rank + grid_shape.instance_rank == 1 and indices.shape.channel_rank == 0)
    batches = values.shape.non_channel.non_instance & indices.shape.non_channel.non_instance
    channels = grid_shape.channel & values.shape.channel
    # --- Set up grid ---
    if isinstance(base_grid, Shape):
        with choose_backend_t(indices, values):
            base_grid = zeros(base_grid & batches & values.shape.channel)
        if mode != 'add':
            base_grid += math.nan
    # --- Handle outside indices ---
    if outside_handling == 'clamp':
        indices = clip(indices, 0, tensor(grid_shape.spatial, channel('vector')) - 1)
    elif outside_handling == 'discard':
        indices_inside = min_((round_(indices) >= 0) & (round_(indices) < tensor(grid_shape.spatial, channel('vector'))), 'vector')
        indices = boolean_mask(indices, indices.shape.instance.name, indices_inside)
        if instance(values).rank > 0:
            values = boolean_mask(values, values.shape.instance.name, indices_inside)
        if indices.shape.is_non_uniform:
            raise NotImplementedError()
    lists = indices.shape.instance & values.shape.instance

    def scatter_forward(base_grid, indices, values):
        indices = to_int32(round_(indices))
        native_grid = reshaped_native(base_grid, [batches, *base_grid.shape.instance, *base_grid.shape.spatial, channels], force_expand=True)
        native_values = reshaped_native(values, [batches, lists, channels], force_expand=True)
        native_indices = reshaped_native(indices, [batches, lists, 'vector'], force_expand=True)
        backend = choose_backend(native_indices, native_values, native_grid)
        if mode in ('add', 'update'):
            native_result = backend.scatter(native_grid, native_indices, native_values, mode=mode)
        else:  # mean
            zero_grid = backend.zeros_like(native_grid)
            summed = backend.scatter(zero_grid, native_indices, native_values, mode='add')
            count = backend.scatter(zero_grid, native_indices, backend.ones_like(native_values), mode='add')
            native_result = summed / backend.maximum(count, 1)
            native_result = backend.where(count == 0, native_grid, native_result)
        return reshaped_tensor(native_result, [batches, *instance(base_grid), *spatial(base_grid), channels], check_sizes=True)

    def scatter_backward(shaped_base_grid_, shaped_indices_, shaped_values_, output, d_output):
        from ._nd import spatial_gradient
        values_grad = gather(d_output, shaped_indices_)
        spatial_gradient_indices = gather(spatial_gradient(d_output), shaped_indices_)
        indices_grad = mean(spatial_gradient_indices * shaped_values_, 'vector_')
        return None, indices_grad, values_grad

    scatter_function = scatter_forward
    if indices_gradient:
        from phi.math import custom_gradient
        scatter_function = custom_gradient(scatter_forward, scatter_backward)

    result = scatter_function(base_grid, indices, values)
    return result
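Example (a minimal sketch; the dimension names are illustrative):

from phi import math
from phi.math import spatial, instance, channel

base = math.zeros(spatial(x=5))
indices = math.wrap([[0], [2], [2]], instance('points'), channel('vector'))
values = math.wrap([1., 1., 1.], instance('points'))
result = math.scatter(base, indices, values, mode='add')  # 1, 0, 2, 0, 0: duplicate index 2 accumulates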
def seed(seed: int)

Sets the current seed of all backends and the built-in random package.

Calling this function with a fixed value at the start of an application yields reproducible results as long as the same backend is used.

Args

seed
Seed to use.
Expand source code
def seed(seed: int):
    """
    Sets the current seed of all backends and the built-in `random` package.

    Calling this function with a fixed value at the start of an application yields reproducible results
    as long as the same backend is used.

    Args:
        seed: Seed to use.
    """
    for backend in BACKENDS:
        backend.seed(seed)
    import random
    random.seed(seed)
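Example (a minimal sketch):

from phi import math
from phi.math import spatial

math.seed(0)
a = math.random_uniform(spatial(x=3))
math.seed(0)
b = math.random_uniform(spatial(x=3))
math.assert_close(a, b)  # same seed, same backend: identical values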
def set_global_precision(floating_point_bits: int)

Sets the floating point precision of DYNAMIC_BACKEND which affects all registered backends.

If floating_point_bits is an integer, all floating point tensors created henceforth will be of the corresponding data type, float16, float32 or float64. Operations may also convert floating point values to this precision, even if the input had a different precision.

If floating_point_bits is None, new tensors will default to float32 unless specified otherwise. The output of math operations has the same precision as its inputs.

Args

floating_point_bits
one of (16, 32, 64, None)
Expand source code
def set_global_precision(floating_point_bits: int):
    """
    Sets the floating point precision of DYNAMIC_BACKEND which affects all registered backends.

    If `floating_point_bits` is an integer, all floating point tensors created henceforth will be of the corresponding data type, float16, float32 or float64.
    Operations may also convert floating point values to this precision, even if the input had a different precision.

    If `floating_point_bits` is None, new tensors will default to float32 unless specified otherwise.
    The output of math operations has the same precision as its inputs.

    Args:
      floating_point_bits: one of (16, 32, 64, None)
    """
    _PRECISION[0] = floating_point_bits
def shift(x: phi.math._tensors.Tensor, offsets: tuple, dims: tuple = None, padding: Extrapolation = boundary, stack_dim: phi.math._shape.Shape = (shiftᶜ=None)) ‑> list

Shift a Tensor by a fixed offset, using the given extrapolation for boundary values.

Args

x
Input data
offsets
Shift size
dims
Dimensions along which to shift; defaults to all spatial dimensions of x.
padding
padding to be performed at the boundary, defaults to extrapolation.BOUNDARY
stack_dim
dimensions to be stacked, defaults to 'shift'

Returns

List of shifted tensors, one for each entry of offsets.
Expand source code
def shift(x: Tensor,
          offsets: tuple,
          dims: tuple or None = None,
          padding: Extrapolation or None = extrapolation.BOUNDARY,
          stack_dim: Shape or None = channel('shift')) -> list:
    """
    Shift a Tensor by a fixed offset, using the given extrapolation for boundary values.

    Args:
        x: Input data
        offsets: Shift size
        dims: Dimensions along which to shift; defaults to all spatial dimensions of `x`.
        padding: padding to be performed at the boundary, defaults to extrapolation.BOUNDARY
        stack_dim: dimensions to be stacked, defaults to 'shift'

    Returns:
        list: Shifted tensors, one for each entry of `offsets`.

    """
    if stack_dim is None:
        assert len(dims) == 1
    x = wrap(x)
    dims = dims if dims is not None else x.shape.spatial.names
    pad_lower = max(0, -min(offsets))
    pad_upper = max(0, max(offsets))
    if padding:
        x = math.pad(x, {axis: (pad_lower, pad_upper) for axis in dims}, mode=padding)
    offset_tensors = []
    for offset in offsets:
        components = []
        for dimension in dims:
            if padding:
                slices = {dim: slice(pad_lower + offset, (-pad_upper + offset) or None) if dim == dimension else slice(pad_lower, -pad_upper or None) for dim in dims}
            else:
                slices = {dim: slice(pad_lower + offset, (-pad_upper + offset) or None) if dim == dimension else slice(None, None) for dim in dims}
            components.append(x[slices])
        offset_tensors.append(stack(components, stack_dim) if stack_dim is not None else components[0])
    return offset_tensors
def sign(x)

Returns 1 for positive numbers and -1 for negative numbers. The sign of 0 is undefined.

Args

x
Tensor or TensorLike

Returns

Tensor or TensorLike matching x.

Expand source code
def sign(x):
    """
    Returns 1 for positive numbers and -1 for negative numbers.
    The sign of 0 is undefined.

    Args:
        x: `Tensor` or `TensorLike`

    Returns:
        `Tensor` or `TensorLike` matching `x`.
    """
    return _backend_op1(x, Backend.sign)
def sin(x) ‑> phi.math._tensors.Tensor

Computes sin(x) of the Tensor or TensorLike x.

Expand source code
def sin(x) -> Tensor:
    """ Computes *sin(x)* of the `Tensor` or `TensorLike` `x`. """
    return _backend_op1(x, Backend.sin)
def solve_linear(f: Callable[[~X], ~Y], y: ~Y, solve: phi.math._functional.Solve[~X, ~Y], f_args: tuple = (), f_kwargs: dict = None) ‑> ~X

Solves the system of linear equations f(x) = y and returns x. For maximum performance, compile f using jit_compile_linear() beforehand. Then, an optimized representation of f (such as a sparse matrix) will be used to solve the linear system.

To obtain additional information about the performed solve, use a SolveTape.

The gradient of this operation will perform another linear solve with the parameters specified by Solve.gradient_solve.

See Also: solve_nonlinear(), jit_compile_linear().

Args

f
Linear function with Tensor or TensorLike first parameter and return value. f can have additional arguments.
y
Desired output of f(x) as Tensor or TensorLike.
solve
Solve object specifying optimization method, parameters and initial guess for x.
f_args
Additional Tensor or TensorLike arguments to be passed to f. f need not be linear in these arguments. Use this instead of a lambda function, since a lambda will not be recognized as calling a jit-compiled function.
f_kwargs
Additional keyword arguments to be passed to f. These arguments can be of any type.

Returns

x
solution of the linear system of equations f(x) = y as Tensor or TensorLike.

Raises

NotConverged
If the desired accuracy could not be reached within the maximum number of iterations.
Diverged
If the solve failed prematurely.
Expand source code
def solve_linear(f: Callable[[X], Y],
                 y: Y, solve: Solve[X, Y],
                 f_args: tuple or list = (),
                 f_kwargs: dict = None) -> X:
    """
    Solves the system of linear equations *f(x) = y* and returns *x*.
    For maximum performance, compile `f` using `jit_compile_linear()` beforehand.
    Then, an optimized representation of `f` (such as a sparse matrix) will be used to solve the linear system.

    To obtain additional information about the performed solve, use a `SolveTape`.

    The gradient of this operation will perform another linear solve with the parameters specified by `Solve.gradient_solve`.

    See Also:
        `solve_nonlinear()`, `jit_compile_linear()`.

    Args:
        f: Linear function with `Tensor` or `TensorLike` first parameter and return value.
            `f` can have additional arguments.
        y: Desired output of `f(x)` as `Tensor` or `TensorLike`.
        solve: `Solve` object specifying optimization method, parameters and initial guess for `x`.
        f_args: Additional `Tensor` or `TensorLike` arguments to be passed to `f`.
            `f` need not be linear in these arguments.
            Use this instead of a lambda function, since a lambda will not be recognized as calling a jit-compiled function.
        f_kwargs: Additional keyword arguments to be passed to `f`.
            These arguments can be of any type.

    Returns:
        x: solution of the linear system of equations `f(x) = y` as `Tensor` or `TensorLike`.

    Raises:
        NotConverged: If the desired accuracy could not be reached within the maximum number of iterations.
        Diverged: If the solve failed prematurely.
    """
    y_tree, y_tensors = disassemble_tree(y)
    x0_tree, x0_tensors = disassemble_tree(solve.x0)
    assert len(x0_tensors) == len(y_tensors) == 1, "Only single-tensor linear solves are currently supported"
    backend = choose_backend_t(*y_tensors, *x0_tensors)

    if not all_available(*y_tensors, *x0_tensors):  # jit mode
        f = jit_compile_linear(f) if backend.supports(Backend.sparse_coo_tensor) else jit_compile(f)

    if isinstance(f, LinearFunction) and (backend.supports(Backend.sparse_coo_tensor) or backend.supports(Backend.csr_matrix)):
        matrix, bias = f.sparse_matrix_and_bias(solve.x0, *f_args, **(f_kwargs or {}))
        return _matrix_solve(y - bias, solve, matrix, backend=backend)  # custom_gradient
    else:
        # arg_tree, arg_tensors = disassemble_tree(f_args)
        # arg_tensors = cached(arg_tensors)
        # f_args = assemble_tree(arg_tree, arg_tensors)
        f_args = cached(f_args)
        # x0_tensors = cached(x0_tensors)
        # solve = copy_with(solve, x0=assemble_tree(x0_tree, x0_tensors))
        solve = cached(solve)
        return _function_solve(y, solve, f_args, f_kwargs=f_kwargs or {}, f=f, backend=backend)  # custom_gradient
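Example (a minimal sketch; it assumes the Solve constructor accepts (method, relative_tolerance, absolute_tolerance, x0=...) as documented for Solve elsewhere in this module):

from phi import math
from phi.math import spatial

@math.jit_compile_linear
def f(x):
    return 2 * x  # linear in x

y = math.ones(spatial(x=4))
solve = math.Solve('CG', 1e-5, 1e-5, x0=math.zeros(spatial(x=4)))
x = math.solve_linear(f, y, solve)  # x ≈ 0.5 everywhere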
def solve_nonlinear(f: Callable, y, solve: phi.math._functional.Solve) ‑> phi.math._tensors.Tensor

Solves the non-linear equation f(x) = y by minimizing the norm of the residual.

This method is limited to backends that support functional_gradient(), currently PyTorch, TensorFlow and Jax.

To obtain additional information about the performed solve, use a SolveTape.

See Also: minimize(), solve_linear().

Args

f
Function whose output is optimized to match y. All positional arguments of f are optimized and must be Tensor or TensorLike. The output of f must match y.
y
Desired output of f(x) as Tensor or TensorLike.
solve
Solve object specifying optimization method, parameters and initial guess for x.

Returns

x
Solution fulfilling f(x) = y within specified tolerance as Tensor or TensorLike.

Raises

NotConverged
If the desired accuracy could not be reached within the maximum number of iterations.
Diverged
If the solve failed prematurely.
Expand source code
def solve_nonlinear(f: Callable, y, solve: Solve) -> Tensor:
    """
    Solves the non-linear equation *f(x) = y* by minimizing the norm of the residual.

    This method is limited to backends that support `functional_gradient()`, currently PyTorch, TensorFlow and Jax.

    To obtain additional information about the performed solve, use a `SolveTape`.

    See Also:
        `minimize()`, `solve_linear()`.

    Args:
        f: Function whose output is optimized to match `y`.
            All positional arguments of `f` are optimized and must be `Tensor` or `TensorLike`.
            The output of `f` must match `y`.
        y: Desired output of `f(x)` as `Tensor` or `TensorLike`.
        solve: `Solve` object specifying optimization method, parameters and initial guess for `x`.

    Returns:
        x: Solution fulfilling `f(x) = y` within specified tolerance as `Tensor` or `TensorLike`.

    Raises:
        NotConverged: If the desired accuracy could not be reached within the maximum number of iterations.
        Diverged: If the solve failed prematurely.
    """
    from ._nd import l2_loss

    def min_func(x):
        diff = f(x) - y
        l2 = l2_loss(diff)
        return l2

    rel_tol_to_abs = solve.relative_tolerance * l2_loss(y, batch_norm=True)
    solve.absolute_tolerance = rel_tol_to_abs
    solve.relative_tolerance = 0
    return minimize(min_func, solve)
def spatial(*args, **dims: int)

Returns the spatial dimensions of an existing Shape or creates a new Shape with only spatial dimensions.

Usage for filtering spatial dimensions:

spatial_dims = spatial(shape)
spatial_dims = spatial(tensor)

Usage for creating a Shape with only spatial dimensions:

spatial_shape = spatial('undef', x=2, y=3)
# Out: (x=2, y=3, undef=None)

Here, the dimension undef is created with an undefined size of None. Undefined sizes are automatically filled in by tensor(), wrap(), stack() and concat().

To create a shape with multiple types, use merge_shapes(), concat_shapes() or the syntax shape1 & shape2.

See Also: channel(), batch(), instance()

Args

*args

Either

  • Shape or Tensor to filter or
  • Names of dimensions with undefined sizes as str.
**dims
Dimension sizes and names. Must be empty when used as a filter operation.

Returns

Shape containing only dimensions of type spatial.

Expand source code
def spatial(*args, **dims: int):
    """
    Returns the spatial dimensions of an existing `Shape` or creates a new `Shape` with only spatial dimensions.

    Usage for filtering spatial dimensions:
    ```python
    spatial_dims = spatial(shape)
    spatial_dims = spatial(tensor)
    ```

    Usage for creating a `Shape` with only spatial dimensions:
    ```python
    spatial_shape = spatial('undef', x=2, y=3)
    # Out: (x=2, y=3, undef=None)
    ```
    Here, the dimension `undef` is created with an undefined size of `None`.
    Undefined sizes are automatically filled in by `tensor`, `wrap`, `stack` and `concat`.

    To create a shape with multiple types, use `merge_shapes()`, `concat_shapes()` or the syntax `shape1 & shape2`.

    See Also:
        `channel`, `batch`, `instance`

    Args:
        *args: Either

            * `Shape` or `Tensor` to filter or
            * Names of dimensions with undefined sizes as `str`.

        **dims: Dimension sizes and names. Must be empty when used as a filter operation.

    Returns:
        `Shape` containing only dimensions of type spatial.
    """
    from phi.math import Tensor
    if all(isinstance(arg, str) for arg in args) or dims:
        for arg in args:
            parts = [s.strip() for s in arg.split(',')]
            for dim in parts:
                if dim not in dims:
                    dims[dim] = None
        return math.Shape(dims.values(), dims.keys(), [SPATIAL_DIM] * len(dims))
    elif len(args) == 1 and isinstance(args[0], Shape):
        return args[0].spatial
    elif len(args) == 1 and isinstance(args[0], Tensor):
        return args[0].shape.spatial
    else:
        raise AssertionError(f"spatial() must be called either as a selector spatial(Shape) or spatial(Tensor) or as a constructor spatial(*names, **dims). Got *args={args}, **dims={dims}")
def spatial_gradient(grid: phi.math._tensors.Tensor, dx: float = 1, difference: str = 'central', padding: Extrapolation = boundary, dims: tuple = None, stack_dim: phi.math._shape.Shape = (gradientᶜ=None))

Calculates the spatial_gradient of a scalar channel from finite differences. The spatial_gradient vectors are in reverse order, lowest dimension first.

Args

grid
grid values
dims
(optional) sequence of dimension names
dx
physical distance between grid points (default 1)
difference
type of difference, one of ('forward', 'backward', 'central') (default 'central')
padding
tensor padding mode
stack_dim
name of the new vector dimension listing the spatial_gradient w.r.t. the various axes

Returns

tensor of shape (batch_size, spatial_dimensions…, spatial rank)

Expand source code
def spatial_gradient(grid: Tensor,
                     dx: float or int = 1,
                     difference: str = 'central',
                     padding: Extrapolation or None = extrapolation.BOUNDARY,
                     dims: tuple or None = None,
                     stack_dim: Shape = channel('gradient')):
    """
    Calculates the spatial_gradient of a scalar channel from finite differences.
    The spatial_gradient vectors are in reverse order, lowest dimension first.

    Args:
      grid: grid values
      dims: (optional) sequence of dimension names
      dx: physical distance between grid points (default 1)
      difference: type of difference, one of ('forward', 'backward', 'central') (default 'central')
      padding: tensor padding mode
      stack_dim: name of the new vector dimension listing the spatial_gradient w.r.t. the various axes

    Returns:
      tensor of shape (batch_size, spatial_dimensions..., spatial rank)

    """
    grid = wrap(grid)
    if difference.lower() == 'central':
        left, right = shift(grid, (-1, 1), dims, padding, stack_dim=stack_dim)
        return (right - left) / (dx * 2)
    elif difference.lower() == 'forward':
        left, right = shift(grid, (0, 1), dims, padding, stack_dim=stack_dim)
        return (right - left) / dx
    elif difference.lower() == 'backward':
        left, right = shift(grid, (-1, 0), dims, padding, stack_dim=stack_dim)
        return (right - left) / dx
    else:
        raise ValueError("Invalid difference type: {}. Can be 'central', 'forward' or 'backward'.".format(difference))
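
Example of central differences on a 1D grid (a minimal sketch; the values are arbitrary):

```python
from phi import math
from phi.math import spatial

grid = math.tensor([0., 1., 4., 9.], spatial('x'))
grad = math.spatial_gradient(grid, dx=1., difference='central')
# grad gains a channel dimension 'gradient' with one component per spatial dim,
# here shape (x=4, gradient=1)
```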
def sqrt(x) ‑> phi.math._tensors.Tensor

Computes sqrt(x) of the Tensor or TensorLike x.

Expand source code
def sqrt(x) -> Tensor:
    """ Computes *sqrt(x)* of the `Tensor` or `TensorLike` `x`. """
    return _backend_op1(x, Backend.sqrt)
def stack(values: tuple, dim: phi.math._shape.Shape)

Lazy stack. Stacks values along the new dimension dim.

Args

values
Sequence of Tensor objects to be stacked.
dim
Single-dimension Shape. This dimension must not be present in any of the values. The size along dim is determined from len(values) and can be set to undefined (None).

Returns

Tensor containing values stacked along dim.

Expand source code
def stack(values: tuple or list, dim: Shape):
    """
    Lazy stack.
    Stacks `values` along the new dimension `dim`.

    Args:
        values: Sequence of `Tensor` objects to be stacked.
        dim: Single-dimension `Shape`. This dimension must not be present in any of the `values`.
            The size along `dim` is determined from `len(values)` and can be set to undefined (`None`).

    Returns:
        `Tensor` containing `values` stacked along `dim`.
    """
    values = cast_same(*values)

    def inner_stack(*values):
        return TensorStack(values, dim)

    result = broadcast_op(inner_stack, values)
    return result
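
Example of stacking two tensors along a new channel dimension whose size is inferred from len(values) (a minimal sketch):

```python
from phi import math
from phi.math import spatial, channel

u = math.wrap([1., 2., 3.], spatial('x'))
v = math.wrap([4., 5., 6.], spatial('x'))
uv = math.stack([u, v], channel('vector'))  # shape (x=3, vector=2)
```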
def std(value: phi.math._tensors.Tensor, dim: str = None) ‑> phi.math._tensors.Tensor

Computes the standard deviation of value along the specified dimensions.

Args

value
Tensor or list / tuple of Tensors.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to reduce the sequence of Tensors

Returns

Tensor without the reduced dimensions.

Expand source code
def std(value: Tensor or list or tuple, dim: str or int or tuple or list or None or Shape = None) -> Tensor:
    """
    Computes the standard deviation of `value` along the specified dimensions.

    Args:
        value: `Tensor` or `list` / `tuple` of Tensors.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to reduce the sequence of Tensors

    Returns:
        `Tensor` without the reduced dimensions.
    """
    return _reduce(value, dim,
                   native_function=lambda backend, native, dim: backend.std(native, dim),
                   collapsed_function=lambda inner, red_shape: inner,
                   unaffected_function=lambda value: value * 0)
def stop_gradient(x)

Disables gradients for the given tensor. This may switch off the gradients for x itself or create a copy of x with disabled gradients.

Implementations:

  • PyTorch: x.detach()
  • TensorFlow: tf.stop_gradient
  • Jax: jax.lax.stop_gradient

Args

x
Tensor or TensorLike for which gradients should be disabled.

Returns

Copy of x.

Expand source code
def stop_gradient(x):
    """
    Disables gradients for the given tensor.
    This may switch off the gradients for `x` itself or create a copy of `x` with disabled gradients.

    Implementations:

    * PyTorch: [`x.detach()`](https://pytorch.org/docs/stable/autograd.html#torch.Tensor.detach)
    * TensorFlow: [`tf.stop_gradient`](https://www.tensorflow.org/api_docs/python/tf/stop_gradient)
    * Jax: [`jax.lax.stop_gradient`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.stop_gradient.html)

    Args:
        x: `Tensor` or `TensorLike` for which gradients should be disabled.

    Returns:
        Copy of `x`.
    """
    if isinstance(x, Tensor):
        return x._op1(lambda native: choose_backend(native).stop_gradient(native))
    elif isinstance(x, TensorLike):
        nest, values = disassemble_tree(x)
        new_values = [stop_gradient(v) for v in values]
        return assemble_tree(nest, new_values)
    else:
        return wrap(choose_backend(x).stop_gradient(x))
def sum(value: phi.math._tensors.Tensor, dim: str = None) ‑> phi.math._tensors.Tensor

Sums value along the specified dimensions.

Args

value
Tensor or list / tuple of Tensors.
dim

Dimension or dimensions to be reduced. One of

  • None to reduce all non-batch dimensions
  • str containing single dimension or comma-separated list of dimensions
  • Tuple[str] or List[str]
  • Shape
  • batch(), instance(), spatial(), channel() to select dimensions by type
  • '0' when isinstance(value, (tuple, list)) to add up the sequence of Tensors

Returns

Tensor without the reduced dimensions.

Expand source code
def sum_(value: Tensor or list or tuple,
         dim: str or int or tuple or list or None or Shape = None) -> Tensor:
    """
    Sums `value` along the specified dimensions.

    Args:
        value: `Tensor` or `list` / `tuple` of Tensors.
        dim: Dimension or dimensions to be reduced. One of

            * `None` to reduce all non-batch dimensions
            * `str` containing single dimension or comma-separated list of dimensions
            * `Tuple[str]` or `List[str]`
            * `Shape`
            * `batch`, `instance`, `spatial`, `channel` to select dimensions by type
            * `'0'` when `isinstance(value, (tuple, list))` to add up the sequence of Tensors

    Returns:
        `Tensor` without the reduced dimensions.
    """
    return _reduce(value, dim,
                   native_function=lambda backend, native, dim: backend.sum(native, dim),
                   collapsed_function=lambda inner, red_shape: inner * red_shape.volume)
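
Example illustrating the dim options (a minimal sketch):

```python
from phi import math
from phi.math import spatial

t = math.wrap([[1, 2, 3], [4, 5, 6]], spatial('y'), spatial('x'))
math.sum(t)           # all non-batch dims reduced -> 21
math.sum(t, 'x')      # reduce only x -> Tensor with shape (y=2)
math.sum(t, spatial)  # select dims to reduce by type -> 21
```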
def tan(x) ‑> phi.math._tensors.Tensor

Computes tan(x) of the Tensor or TensorLike x.

Expand source code
def tan(x) -> Tensor:
    """ Computes *tan(x)* of the `Tensor` or `TensorLike` `x`. """
    return _backend_op1(x, Backend.tan)
def tensor(data: phi.math._tensors.Tensor, *shape: phi.math._shape.Shape, convert: bool = True, default_list_dim=(vectorᶜ=None)) ‑> phi.math._tensors.Tensor

Create a Tensor from the specified data. If convert=True, converts data to the preferred format of the default backend.

data must be one of the following:

  • Number: returns a dimensionless Tensor.
  • Native tensor such as NumPy array, TensorFlow tensor or PyTorch tensor.
  • tuple or list of numbers: backs the Tensor with native tensor.
  • tuple or list of non-numbers: creates tensors for the items and stacks them.
  • Tensor: renames dimensions and dimension types if a shape is specified. Converts all internal native values of the tensor if convert=True.
  • Shape: creates a 1D tensor listing the dimension sizes.

While specifying dimension names is optional in some cases, it is recommended to always specify them.

Dimension types are always inferred from the dimension names if specified.

Implementations:

  • NumPy: numpy.array
  • PyTorch: torch.tensor, torch.from_numpy
  • TensorFlow: tf.convert_to_tensor
  • Jax: jax.numpy.array

See Also: wrap() which uses convert=False.

Args

data
native tensor, scalar, sequence, Shape or Tensor
shape
Ordered dimensions and types. If sizes are defined, they will be checked against data.
convert
If True, converts the data to the native format of the current default backend. If False, wraps the data in a Tensor but keeps the given data reference if possible.

Raises

AssertionError
if dimension names are not provided and cannot automatically be inferred
ValueError
if data is not tensor-like

Returns

Tensor containing the same values as data

Expand source code
def tensor(data: Tensor or Shape or tuple or list or numbers.Number,
           *shape: Shape,
           convert: bool = True,
           default_list_dim=channel('vector')) -> Tensor:  # TODO assume convert_unsupported, add convert_external=False for constants
    """
    Create a Tensor from the specified `data`.
    If `convert=True`, converts `data` to the preferred format of the default backend.

    `data` must be one of the following:
    
    * Number: returns a dimensionless Tensor.
    * Native tensor such as NumPy array, TensorFlow tensor or PyTorch tensor.
    * `tuple` or `list` of numbers: backs the Tensor with native tensor.
    * `tuple` or `list` of non-numbers: creates tensors for the items and stacks them.
    * Tensor: renames dimensions and dimension types if a `shape` is specified. Converts all internal native values of the tensor if `convert=True`.
    * Shape: creates a 1D tensor listing the dimension sizes.
    
    While specifying dimension names is optional in some cases, it is recommended to always specify them.
    
    Dimension types are always inferred from the dimension names if specified.

    Implementations:

    * NumPy: [`numpy.array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html)
    * PyTorch: [`torch.tensor`](https://pytorch.org/docs/stable/generated/torch.tensor.html), [`torch.from_numpy`](https://pytorch.org/docs/stable/generated/torch.from_numpy.html)
    * TensorFlow: [`tf.convert_to_tensor`](https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor)
    * Jax: [`jax.numpy.array`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.array.html)

    See Also:
        `phi.math.wrap()` which uses `convert=False`.

    Args:
      data: native tensor, scalar, sequence, Shape or Tensor
      shape: Ordered dimensions and types. If sizes are defined, they will be checked against `data`.
      convert: If True, converts the data to the native format of the current default backend.
        If False, wraps the data in a `Tensor` but keeps the given data reference if possible.

    Raises:
      AssertionError: if dimension names are not provided and cannot automatically be inferred
      ValueError: if `data` is not tensor-like

    Returns:
      Tensor containing the same values as `data`
    """
    assert all(isinstance(s, Shape) for s in shape), f"Cannot create tensor because shape needs to be one or multiple Shape instances but got {shape}"
    shape = None if len(shape) == 0 else concat_shapes(*shape)
    if isinstance(data, Tensor):
        if convert:
            backend = data.default_backend
            if backend != default_backend():
                data = data._op1(lambda n: convert_(n, use_dlpack=False))
        if shape is None:
            return data
        else:
            if None in shape.sizes:
                shape = shape.with_sizes(data.shape.sizes)
            return data._with_shape_replaced(shape)
    elif isinstance(data, Shape):
        assert shape is not None
        data = data.sizes
    elif isinstance(data, (numbers.Number, bool, str)):
        assert not shape, f"Trying to create a zero-dimensional Tensor from value '{data}' but shape={shape}"
        if convert:
            data = default_backend().as_tensor(data, convert_external=True)
        return NativeTensor(data, EMPTY_SHAPE)
    if isinstance(data, (tuple, list)):
        array = np.array(data)
        if array.dtype != object:
            data = array
        else:
            inner_shape = [] if shape is None else [shape[1:]]
            elements = [tensor(d, *inner_shape, convert=convert) for d in data]
            common_shape = merge_shapes(*[e.shape for e in elements])
            stack_dim = default_list_dim if shape is None else shape[0].with_sizes([len(elements)])
            assert all(stack_dim not in t.shape for t in elements), f"Cannot stack tensors with dimension '{stack_dim}' because a tensor already has that dimension."
            elements = [CollapsedTensor(e, common_shape) if e.shape.rank < common_shape.rank else e for e in elements]
            from ._ops import cast_same
            elements = cast_same(*elements)
            return TensorStack(elements, stack_dim)
    try:
        backend = choose_backend(data)
        if shape is None:
            assert backend.ndims(data) <= 1, "Specify dimension names for tensors with more than 1 dimension"
            shape = default_list_dim if backend.ndims(data) == 1 else EMPTY_SHAPE
            shape = shape.with_sizes(backend.staticshape(data))
        else:
            # fill in sizes or check them
            sizes = backend.staticshape(data)
            assert len(sizes) == len(shape), f"Rank of given shape {shape} does not match data with sizes {sizes}"
            for size, s in zip(sizes, shape.sizes):
                if s is not None:
                    assert s == size, f"Given shape {shape} does not match data with sizes {sizes}. Consider leaving the sizes undefined."
            shape = shape.with_sizes(sizes)
        if convert:
            data = convert_(data, use_dlpack=False)
        return NativeTensor(data, shape)
    except NoBackendFound:
        raise ValueError(f"{type(data)} is not supported. Only (Tensor, tuple, list, np.ndarray, native tensors) are allowed.\nCurrent backends: {BACKENDS}")
def to_complex(x) ‑> phi.math._tensors.Tensor

Converts the given tensor to complex floating point format with the currently specified precision.

The precision can be set globally using math.set_global_precision() and locally using with math.precision().

See the phi.math module documentation at https://tum-pbs.github.io/PhiFlow/Math.html

See Also: cast().

Args

x
values to convert

Returns

Tensor of same shape as x

Expand source code
def to_complex(x) -> Tensor:
    """
    Converts the given tensor to complex floating point format with the currently specified precision.

    The precision can be set globally using `math.set_global_precision()` and locally using `with math.precision()`.

    See the `phi.math` module documentation at https://tum-pbs.github.io/PhiFlow/Math.html

    See Also:
        `cast()`.

    Args:
        x: values to convert

    Returns:
        `Tensor` of same shape as `x`
    """
    return _backend_op1(x, Backend.to_complex)
def to_float(x) ‑> phi.math._tensors.Tensor

Converts the given tensor to floating point format with the currently specified precision.

The precision can be set globally using math.set_global_precision() and locally using with math.precision().

See the phi.math module documentation at https://tum-pbs.github.io/PhiFlow/Math.html

See Also: cast().

Args

x
Tensor or TensorLike to convert

Returns

Tensor or TensorLike matching x.

Expand source code
def to_float(x) -> Tensor:
    """
    Converts the given tensor to floating point format with the currently specified precision.
    
    The precision can be set globally using `math.set_global_precision()` and locally using `with math.precision()`.
    
    See the `phi.math` module documentation at https://tum-pbs.github.io/PhiFlow/Math.html

    See Also:
        `cast()`.

    Args:
        x: `Tensor` or `TensorLike` to convert

    Returns:
        `Tensor` or `TensorLike` matching `x`.
    """
    return _backend_op1(x, Backend.to_float)
def to_int32(x)

Converts the Tensor or TensorLike x to 32-bit integer.

Expand source code
def to_int32(x):
    """ Converts the `Tensor` or `TensorLike` `x` to 32-bit integer. """
    return _backend_op1(x, Backend.to_int32)
def to_int64(x) ‑> phi.math._tensors.Tensor

Converts the Tensor or TensorLike x to 64-bit integer.

Expand source code
def to_int64(x) -> Tensor:
    """ Converts the `Tensor` or `TensorLike` `x` to 64-bit integer. """
    return _backend_op1(x, Backend.to_int64)
def transpose(x, axes)

Swap the dimension order of x. This is done implicitly if x is a Tensor.

Implementations:

  • NumPy: numpy.transpose
  • PyTorch: x.permute
  • TensorFlow: tf.transpose
  • Jax: jax.numpy.transpose

Args

x
Tensor or native tensor.
axes
tuple or list

Returns

Tensor or native tensor, depending on x.

Expand source code
def transpose(x, axes):
    """
    Swap the dimension order of `x`.
    This is done implicitly if `x` is a `Tensor`.

    Implementations:

    * NumPy: [`numpy.transpose`](https://numpy.org/doc/stable/reference/generated/numpy.transpose.html)
    * PyTorch: [`x.permute`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.permute)
    * TensorFlow: [`tf.transpose`](https://www.tensorflow.org/api_docs/python/tf/transpose)
    * Jax: [`jax.numpy.transpose`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.transpose.html)

    Args:
        x: `Tensor` or native tensor.
        axes: `tuple` or `list`

    Returns:
        `Tensor` or native tensor, depending on `x`.
    """
    if isinstance(x, Tensor):
        return CollapsedTensor(x, x.shape[axes])  # TODO avoid nesting
    else:
        return choose_backend(x).transpose(x, axes)
def unpack_dims(value: phi.math._tensors.Tensor, dim: str, unpacked_dims: phi.math._shape.Shape)

Decompresses a tensor dimension by unstacking the elements along it. This function replaces the traditional reshape for these cases. The compressed dimension dim is assumed to contain elements laid out according to the order of unpacked_dims.

See Also: pack_dims()

Args

value
Tensor for which one dimension should be split.
dim
Compressed dimension to be decompressed.
unpacked_dims
Ordered new dimensions to replace dim as Shape.

Returns

Tensor with decompressed shape

Expand source code
def unpack_dims(value: Tensor, dim: str, unpacked_dims: Shape):
    """
    Decompresses a tensor dimension by unstacking the elements along it.
    This function replaces the traditional `reshape` for these cases.
    The compressed dimension `dim` is assumed to contain elements laid out according to the order of `unpacked_dims`.

    See Also:
        `pack_dims()`

    Args:
        value: `Tensor` for which one dimension should be split.
        dim: Compressed dimension to be decompressed.
        unpacked_dims: Ordered new dimensions to replace `dim` as `Shape`.

    Returns:
        `Tensor` with decompressed shape
    """
    if unpacked_dims.rank == 0:
        return value.dimension(dim)[0]  # remove dim
    if unpacked_dims.rank == 1:
        return rename_dims(value, dim, unpacked_dims)
    else:
        native = value.native(value.shape.names)
        new_shape = value.shape.without(dim)
        i = value.shape.index(dim)
        for d in unpacked_dims:
            new_shape = new_shape._expand(d, pos=i)
            i += 1
        native_reshaped = choose_backend(native).reshape(native, new_shape.sizes)
        return NativeTensor(native_reshaped, new_shape)
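
Example round-trip with pack_dims() (a minimal sketch, assuming the documented pack_dims(value, dims, packed_dim) signature):

```python
from phi import math
from phi.math import spatial, instance

t = math.random_uniform(spatial(x=4, y=3))
flat = math.pack_dims(t, ('x', 'y'), instance('points'))        # shape (points=12)
restored = math.unpack_dims(flat, 'points', spatial(x=4, y=3))  # back to (x=4, y=3)
```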
def unstack(value: phi.math._tensors.Tensor, dim: str)

Alias for Tensor.unstack()

Expand source code
def unstack(value: Tensor, dim: str):
    """ Alias for `Tensor.unstack()` """
    return value.unstack(dim)
def upsample2x(grid: phi.math._tensors.Tensor, padding: Extrapolation = boundary, dims: tuple = None) ‑> phi.math._tensors.Tensor

Resamples a regular grid to double the number of spatial sample points per dimension. The grid values at the new points are determined via linear interpolation.

Args

grid
half-size grid
padding
grid extrapolation (default extrapolation.BOUNDARY)
dims
dims along which up-sampling is applied. If None, up-sample along all spatial dims.

Returns

double-size grid

Expand source code
def upsample2x(grid: Tensor,
               padding: Extrapolation = extrapolation.BOUNDARY,
               dims: tuple or None = None) -> Tensor:
    """
    Resamples a regular grid to double the number of spatial sample points per dimension.
    The grid values at the new points are determined via linear interpolation.

    Args:
      grid: half-size grid
      padding: grid extrapolation (default `extrapolation.BOUNDARY`)
      dims: dims along which up-sampling is applied. If None, up-sample along all spatial dims.

    Returns:
      double-size grid

    """
    for i, dim in enumerate(grid.shape.spatial.only(dims)):
        left, center, right = shift(grid, (-1, 0, 1), dim.names, padding, None)
        interp_left = 0.25 * left + 0.75 * center
        interp_right = 0.75 * center + 0.25 * right
        stacked = math.stack([interp_left, interp_right], spatial('_interleave'))
        grid = math.pack_dims(stacked, (dim.name, '_interleave'), dim)
    return grid
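
Example pairing upsample2x() with its counterpart downsample2x() (a minimal sketch):

```python
from phi import math
from phi.math import spatial

coarse = math.random_uniform(spatial(x=4, y=4))
fine = math.upsample2x(coarse)          # shape (x=8, y=8)
coarse_again = math.downsample2x(fine)  # back to shape (x=4, y=4)
```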
def vec_abs(vec: phi.math._tensors.Tensor, vec_dim: str = None)

Computes the vector length of vec. If vec_dim is None, the combined channel dimensions of vec are interpreted as a vector.

Expand source code
def vec_abs(vec: Tensor, vec_dim: str or tuple or list or Shape = None):
    """ Computes the vector length of `vec`. If `vec_dim` is None, the combined channel dimensions of `vec` are interpreted as a vector. """
    return math.sqrt(math.sum_(vec ** 2, dim=vec.shape.channel if vec_dim is None else vec_dim))
def vec_normalize(vec: phi.math._tensors.Tensor, vec_dim: str = None)

Normalizes the vectors in vec. If vec_dim is None, the combined channel dimensions of vec are interpreted as a vector.

Expand source code
def vec_normalize(vec: Tensor, vec_dim: str or tuple or list or Shape = None):
    """ Normalizes the vectors in `vec`. If `vec_dim` is None, the combined channel dimensions of `vec` are interpreted as a vector. """
    return vec / vec_abs(vec, vec_dim=vec_dim)
def vec_squared(vec: phi.math._tensors.Tensor, vec_dim: str = None)

Computes the squared length of vec. If vec_dim is None, the combined channel dimensions of vec are interpreted as a vector.

Expand source code
def vec_squared(vec: Tensor, vec_dim: str or tuple or list or Shape = None):
    """ Computes the squared length of `vec`. If `vec_dim` is None, the combined channel dimensions of `vec` are interpreted as a vector. """
    return math.sum_(vec ** 2, dim=vec.shape.channel if vec_dim is None else vec_dim)
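
Example covering the three vector helpers above (a minimal sketch):

```python
from phi import math
from phi.math import channel

v = math.wrap([3., 4.], channel('vector'))
math.vec_abs(v)        # 5.0
math.vec_squared(v)    # 25.0
math.vec_normalize(v)  # (0.6, 0.8) along 'vector'
```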
def where(condition: phi.math._tensors.Tensor, value_true: phi.math._tensors.Tensor, value_false: phi.math._tensors.Tensor)

Builds a tensor by choosing either values from value_true or value_false depending on condition. If condition is not of type boolean, non-zero values are interpreted as True.

This function requires non-None values for value_true and value_false. To get the indices of True / non-zero values, use nonzero().

Args

condition
determines where to choose values from value_true or from value_false
value_true
Values to pick where condition != 0 / True
value_false
Values to pick where condition == 0 / False

Returns

Tensor containing dimensions of all inputs.

Expand source code
def where(condition: Tensor or float or int, value_true: Tensor or float or int, value_false: Tensor or float or int):
    """
    Builds a tensor by choosing either values from `value_true` or `value_false` depending on `condition`.
    If `condition` is not of type boolean, non-zero values are interpreted as True.
    
    This function requires non-None values for `value_true` and `value_false`.
    To get the indices of True / non-zero values, use `nonzero()`.

    Args:
      condition: determines where to choose values from value_true or from value_false
      value_true: Values to pick where `condition != 0 / True`
      value_false: Values to pick where `condition == 0 / False`

    Returns:
        `Tensor` containing dimensions of all inputs.
    """
    condition = tensor(condition)
    value_true = tensor(value_true)
    value_false = tensor(value_false)
    shape, (c, vt, vf) = broadcastable_native_tensors(condition, value_true, value_false)
    result = choose_backend(c, vt, vf).where(c, vt, vf)
    return NativeTensor(result, shape)
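
Example (a minimal sketch; scalars broadcast against the condition):

```python
from phi import math
from phi.math import spatial

t = math.wrap([-1., 2., -3.], spatial('x'))
math.where(t > 0, t, 0)  # (0, 2, 0) along x
```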
def wrap(data: phi.math._tensors.Tensor, *shape: phi.math._shape.Shape) ‑> phi.math._tensors.Tensor

Short for tensor() with convert=False.

Expand source code
def wrap(data: Tensor or Shape or tuple or list or numbers.Number,
         *shape: Shape) -> Tensor:
    """ Short for `phi.math.tensor()` with `convert=False`. """
    return tensor(data, *shape, convert=False)  # TODO inline, simplify
def zeros(*shape: phi.math._shape.Shape, dtype: phi.math.backend._dtype.DType = None) ‑> phi.math._tensors.Tensor

Define a tensor with specified shape with value 0.0 / 0 / False everywhere.

This method may not immediately allocate the memory to store the values.

See Also: zeros_like(), ones().

Args

*shape
This (possibly empty) sequence of Shapes is concatenated, preserving the order.
dtype
Data type as DType object. Defaults to float matching the current precision setting.

Returns

Tensor

Expand source code
def zeros(*shape: Shape, dtype: DType = None) -> Tensor:
    """
    Define a tensor with specified shape with value `0.0` / `0` / `False` everywhere.
    
    This method may not immediately allocate the memory to store the values.

    See Also:
        `zeros_like()`, `ones()`.

    Args:
        *shape: This (possibly empty) sequence of `Shape`s is concatenated, preserving the order.
        dtype: Data type as `DType` object. Defaults to `float` matching the current precision setting.

    Returns:
        `Tensor`
    """
    return _initialize(lambda shape, dtype: CollapsedTensor(NativeTensor(default_backend().zeros((), dtype=dtype), EMPTY_SHAPE), shape), shape, dtype)
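
Example (a minimal sketch; the shape arguments are concatenated in the given order):

```python
from phi import math
from phi.math import batch, spatial, DType

z = math.zeros(batch(examples=2), spatial(x=4, y=3))  # float zeros, shape (examples=2, x=4, y=3)
n = math.zeros(spatial(x=4), dtype=DType(int, 32))    # explicit integer dtype
```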
def zeros_like(obj) ‑> phi.math._tensors.Tensor

Create a Tensor containing only 0.0 / 0 / False with the same shape and dtype as obj.

Expand source code
def zeros_like(obj) -> Tensor:
    """ Create a `Tensor` containing only `0.0` / `0` / `False` with the same shape and dtype as `obj`. """
    nest, values = disassemble_tree(obj)
    values0 = [zeros(t.shape, dtype=t.dtype) for t in values]
    return assemble_tree(nest, values0)

Classes

class ConvergenceException

Base class for exceptions raised when a solve does not converge.

See Also: Diverged, NotConverged.

Expand source code
class ConvergenceException(RuntimeError):
    """
    Base class for exceptions raised when a solve does not converge.

    See Also:
        `Diverged`, `NotConverged`.
    """

    def __init__(self, result: SolveInfo):
        RuntimeError.__init__(self, result.msg)
        self.result: SolveInfo = result
        """ `SolveInfo` holding information about the solve. """

Ancestors

  • builtins.RuntimeError
  • builtins.Exception
  • builtins.BaseException

Subclasses

  • phi.math._functional.Diverged
  • phi.math._functional.NotConverged

Instance variables

var result

SolveInfo holding information about the solve.

class DType (kind: type, bits: int = 8)

Instances of DType represent the kind and size of data elements. The data type of a Tensor can be obtained via Tensor.dtype.

The following kinds of data types are supported:

  • float with 32 / 64 bits
  • complex with 64 / 128 bits
  • int with 8 / 16 / 32 / 64 bits
  • bool with 8 bits
  • str with 8n bits

Unlike in many computing libraries, there are no global variables corresponding to the available types. Instead, data types can simply be instantiated as needed.

Args

kind
Python type, one of (bool, int, float, complex, str)
bits
number of bits per element, a multiple of 8.
Expand source code
class DType:
    """
    Instances of `DType` represent the kind and size of data elements.
    The data type of a `Tensor` can be obtained via `phi.math.Tensor.dtype`.

    The following kinds of data types are supported:

    * `float` with 32 / 64 bits
    * `complex` with 64 / 128 bits
    * `int` with 8 / 16 / 32 / 64 bits
    * `bool` with 8 bits
    * `str` with 8*n* bits

    Unlike in many computing libraries, there are no global variables corresponding to the available types.
    Instead, data types can simply be instantiated as needed.
    """

    def __init__(self, kind: type, bits: int = 8):
        """
        Args:
            kind: Python type, one of `(bool, int, float, complex, str)`
            bits: number of bits per element, a multiple of 8.
        """
        assert kind in (bool, int, float, complex, str)
        if kind is bool:
            assert bits == 8
        else:
            assert isinstance(bits, int)
        self.kind = kind
        """ Python class corresponding to the type of data, ignoring precision. One of (bool, int, float, complex) """
        self.bits = bits
        """ Number of bits used to store a single value of this type. See `DType.itemsize`. """

    @property
    def precision(self):
        """ Floating point precision. Only defined if `kind in (float, complex)`. For complex values, returns half of `DType.bits`. """
        if self.kind == float:
            return self.bits
        if self.kind == complex:
            return self.bits // 2
        else:
            return None

    @property
    def itemsize(self):
        """ Number of bytes used to storea single value of this type. See `DType.bits`. """
        assert self.bits % 8 == 0
        return self.bits // 8

    def __eq__(self, other):
        return isinstance(other, DType) and self.kind == other.kind and self.bits == other.bits

    def __ne__(self, other):
        return not self == other

    def __hash__(self):
        return hash(self.kind) + hash(self.bits)

    def __repr__(self):
        return f"{self.kind.__name__}{self.bits}"

Instance variables

var bits

Number of bits used to store a single value of this type. See DType.itemsize.

var itemsize

Number of bytes used to store a single value of this type. See DType.bits.

Expand source code
@property
def itemsize(self):
    """ Number of bytes used to storea single value of this type. See `DType.bits`. """
    assert self.bits % 8 == 0
    return self.bits // 8
var kind

Python class corresponding to the type of data, ignoring precision. One of (bool, int, float, complex)

var precision

Floating point precision. Only defined if kind in (float, complex). For complex values, returns half of DType.bits.

Expand source code
@property
def precision(self):
    """ Floating point precision. Only defined if `kind in (float, complex)`. For complex values, returns half of `DType.bits`. """
    if self.kind == float:
        return self.bits
    if self.kind == complex:
        return self.bits // 2
    else:
        return None
class Dict (*args, **kwargs)

Dictionary of Tensor or TensorLike values. In addition to dictionary functions, supports mathematical operators with other Dicts and lookup via .key syntax. Dict implements TensorLike so instances can be passed to math operations like sin().

Expand source code
class Dict(dict):
    """
    Dictionary of `Tensor` or `TensorLike` values.
    In addition to dictionary functions, supports mathematical operators with other `Dict`s and lookup via `.key` syntax.
    `Dict` implements `TensorLike` so instances can be passed to math operations like `sin`.
    """

    def __value_attrs__(self):
        return tuple(self.keys())
    
    # --- Dict[key] ---

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as k:
            raise AttributeError(k)

    def __setattr__(self, key, value):
        self[key] = value

    def __delattr__(self, key):
        try:
            del self[key]
        except KeyError as k:
            raise AttributeError(k)
        
    # --- operators ---
    
    def __neg__(self):
        return Dict({k: -v for k, v in self.items()})
    
    def __invert__(self):
        return Dict({k: ~v for k, v in self.items()})
    
    def __abs__(self):
        return Dict({k: abs(v) for k, v in self.items()})
    
    def __round__(self, n=None):
        return Dict({k: round(v) for k, v in self.items()})

    def __add__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val + other[key] for key, val in self.items()})
        else:
            return Dict({key: val + other for key, val in self.items()})

    def __radd__(self, other):
        if isinstance(other, Dict):
            return Dict({key: other[key] + val for key, val in self.items()})
        else:
            return Dict({key: other + val for key, val in self.items()})

    def __sub__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val - other[key] for key, val in self.items()})
        else:
            return Dict({key: val - other for key, val in self.items()})

    def __rsub__(self, other):
        if isinstance(other, Dict):
            return Dict({key: other[key] - val for key, val in self.items()})
        else:
            return Dict({key: other - val for key, val in self.items()})

    def __mul__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val * other[key] for key, val in self.items()})
        else:
            return Dict({key: val * other for key, val in self.items()})

    def __rmul__(self, other):
        if isinstance(other, Dict):
            return Dict({key: other[key] * val for key, val in self.items()})
        else:
            return Dict({key: other * val for key, val in self.items()})

    def __truediv__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val / other[key] for key, val in self.items()})
        else:
            return Dict({key: val / other for key, val in self.items()})

    def __rtruediv__(self, other):
        if isinstance(other, Dict):
            return Dict({key: other[key] / val for key, val in self.items()})
        else:
            return Dict({key: other / val for key, val in self.items()})

    def __floordiv__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val // other[key] for key, val in self.items()})
        else:
            return Dict({key: val // other for key, val in self.items()})

    def __rfloordiv__(self, other):
        if isinstance(other, Dict):
            return Dict({key: other[key] // val for key, val in self.items()})
        else:
            return Dict({key: other // val for key, val in self.items()})

    def __pow__(self, power, modulo=None):
        assert modulo is None
        if isinstance(power, Dict):
            return Dict({key: val ** power[key] for key, val in self.items()})
        else:
            return Dict({key: val ** power for key, val in self.items()})

    def __rpow__(self, other):
        if isinstance(other, Dict):
            return Dict({key: other[key] ** val for key, val in self.items()})
        else:
            return Dict({key: other ** val for key, val in self.items()})

    def __mod__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val % other[key] for key, val in self.items()})
        else:
            return Dict({key: val % other for key, val in self.items()})

    def __rmod__(self, other):
        if isinstance(other, Dict):
            return Dict({key: other[key] % val for key, val in self.items()})
        else:
            return Dict({key: other % val for key, val in self.items()})

    def __eq__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val == other[key] for key, val in self.items()})
        else:
            return Dict({key: val == other for key, val in self.items()})

    def __ne__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val != other[key] for key, val in self.items()})
        else:
            return Dict({key: val != other for key, val in self.items()})

    def __lt__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val < other[key] for key, val in self.items()})
        else:
            return Dict({key: val < other for key, val in self.items()})

    def __le__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val <= other[key] for key, val in self.items()})
        else:
            return Dict({key: val <= other for key, val in self.items()})

    def __gt__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val > other[key] for key, val in self.items()})
        else:
            return Dict({key: val > other for key, val in self.items()})

    def __ge__(self, other):
        if isinstance(other, Dict):
            return Dict({key: val >= other[key] for key, val in self.items()})
        else:
            return Dict({key: val >= other for key, val in self.items()})

    # --- overridden methods ---

    def copy(self):
        return Dict(self)

Ancestors

  • builtins.dict

Methods

def copy(self)

D.copy() -> a shallow copy of D

Expand source code
def copy(self):
    return Dict(self)
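
Example (a minimal sketch):

```python
from phi import math

state = math.Dict(x=math.wrap(1.), y=math.wrap(2.))
doubled = state * 2 + 1  # operators apply per entry
doubled.x                # attribute-style lookup
math.sin(state)          # allowed because Dict implements TensorLike
```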
class Diverged

Raised if the optimization was stopped prematurely and cannot continue. This may indicate that no solution exists.

The values of the last estimate x may or may not be finite.

This exception inherits from ConvergenceException.

See Also: NotConverged.

Expand source code
class Diverged(ConvergenceException):
    """
    Raised if the optimization was stopped prematurely and cannot continue.
    This may indicate that no solution exists.

    The values of the last estimate `x` may or may not be finite.

    This exception inherits from `ConvergenceException`.

    See Also:
        `NotConverged`.
    """

    def __init__(self, result: SolveInfo):
        ConvergenceException.__init__(self, result)

Ancestors

  • phi.math._functional.ConvergenceException
  • builtins.RuntimeError
  • builtins.Exception
  • builtins.BaseException
class Extrapolation

Extrapolations are used to determine values of grids or other structures outside the sampled bounds. They play a vital role in padding and sampling.

Args

pad_rank
low-ranking extrapolations are handled first during mixed-extrapolation padding. The typical order is periodic=1, boundary=2, symmetric=3, reflect=4, constant=5.
Expand source code
class Extrapolation:
    """
    Extrapolations are used to determine values of grids or other structures outside the sampled bounds.
    They play a vital role in padding and sampling.
    """

    def __init__(self, pad_rank):
        """
        Args:
            pad_rank: low-ranking extrapolations are handled first during mixed-extrapolation padding.
                The typical order is periodic=1, boundary=2, symmetric=3, reflect=4, constant=5.
        """
        self.pad_rank = pad_rank

    def to_dict(self) -> dict:
        """
        Serialize this extrapolation to a dictionary that is serializable (JSON-writable).
        
        Use `from_dict()` to restore the Extrapolation object.
        """
        raise NotImplementedError()

    def spatial_gradient(self) -> 'Extrapolation':
        """Returns the extrapolation for the spatial spatial_gradient of a tensor/field with this extrapolation."""
        raise NotImplementedError()

    def valid_outer_faces(self, dim):
        """ `(lower: bool, upper: bool)` indicating whether the values sampled at the outer-most faces of a staggered grid with this extrapolation are valid, i.e. need to be stored and are not redundant. """
        raise NotImplementedError()

    def pad(self, value: Tensor, widths: dict) -> Tensor:
        """
        Pads a tensor using values from self.pad_values()

        Args:
          value: tensor to be padded
          widths: `dict` mapping dimension name (`str`) to `(lower: int, upper: int)`

        Returns:
          Padded `Tensor`.
        """
        for dim in widths:
            assert all(w >= 0 for w in widths[dim]), "Negative widths not allowed in Extrapolation.pad(). Use math.pad() instead."
            values = []
            if widths[dim][False] > 0:
                values.append(self.pad_values(value, widths[dim][False], dim, False))
            values.append(value)
            if widths[dim][True] > 0:
                values.append(self.pad_values(value, widths[dim][True], dim, True))
            value = math.concat(values, value.shape[dim])
        return value

    def pad_values(self, value: Tensor, width: int, dimension: str, upper_edge: bool) -> Tensor:
        """
        Determines the values with which the given tensor would be padded at the specified edge using this extrapolation.

        Args:
          value: tensor to be padded
          width: number of cells to pad perpendicular to the face. Must be larger than zero.
          dimension: axis in which to pad
          upper_edge: True for upper edge, False for lower edge

        Returns:
          tensor that can be concatenated to value for padding

        """
        raise NotImplementedError()

    def transform_coordinates(self, coordinates: Tensor, shape: Shape) -> Tensor:
        """
        If is_copy_pad, transforms out-of-bounds coordinates to point to the index from which the value should be copied.
        
        Otherwise, the grid tensor is assumed to hold the correct boundary values for this extrapolation at the edge.
        Coordinates are then snapped to the valid index range.
        This is the default implementation.

        Args:
          coordinates: integer coordinates in index space
          shape: tensor shape

        Returns:
          transformed coordinates

        """
        return math.clip(coordinates, 0, math.wrap(shape.spatial - 1, channel('vector')))

    @property
    def is_copy_pad(self):
        """:return: True if all pad values are copies of existing values in the tensor to be padded"""
        return False

    @property
    def native_grid_sample_mode(self) -> Union[str, None]:
        return None

    def __getitem__(self, item):
        return self

Subclasses

  • ConstantExtrapolation
  • phi.math.extrapolation._CopyExtrapolation
  • phi.math.extrapolation._MixedExtrapolation
  • phi.math.extrapolation._NoExtrapolation

Instance variables

var is_copy_pad

Returns True if all pad values are copies of existing values in the tensor to be padded.

Expand source code
@property
def is_copy_pad(self):
    """:return: True if all pad values are copies of existing values in the tensor to be padded"""
    return False
var native_grid_sample_mode : Optional[str]
Expand source code
@property
def native_grid_sample_mode(self) -> Union[str, None]:
    return None

Methods

def pad(self, value: phi.math._tensors.Tensor, widths: dict) ‑> phi.math._tensors.Tensor

Pads a tensor using values from self.pad_values()

Args

value
tensor to be padded
widths
dict mapping dimension name (str) to (lower: int, upper: int)

Returns

Padded Tensor.

Expand source code
def pad(self, value: Tensor, widths: dict) -> Tensor:
    """
    Pads a tensor using values from self.pad_values()

    Args:
      value: tensor to be padded
      widths: `dict` mapping dimension name (`str`) to `(lower: int, upper: int)`

    Returns:
      Padded `Tensor`.
    """
    for dim in widths:
        assert all(w >= 0 for w in widths[dim]), "Negative widths not allowed in Extrapolation.pad(). Use math.pad() instead."
        values = []
        if widths[dim][False] > 0:
            values.append(self.pad_values(value, widths[dim][False], dim, False))
        values.append(value)
        if widths[dim][True] > 0:
            values.append(self.pad_values(value, widths[dim][True], dim, True))
        value = math.concat(values, value.shape[dim])
    return value
def pad_values(self, value: phi.math._tensors.Tensor, width: int, dimension: str, upper_edge: bool) ‑> phi.math._tensors.Tensor

Determines the values with which the given tensor would be padded at the specified edge using this extrapolation.

Args

value
tensor to be padded
width
number of cells to pad perpendicular to the face. Must be larger than zero.
dimension
axis in which to pad
upper_edge
True for upper edge, False for lower edge

Returns

tensor that can be concatenated to value for padding

Expand source code
def pad_values(self, value: Tensor, width: int, dimension: str, upper_edge: bool) -> Tensor:
    """
    Determines the values with which the given tensor would be padded at the specified edge using this extrapolation.

    Args:
      value: tensor to be padded
      width: number of cells to pad perpendicular to the face. Must be larger than zero.
      dimension: axis in which to pad
      upper_edge: True for upper edge, False for lower edge

    Returns:
      tensor that can be concatenated to value for padding

    """
    raise NotImplementedError()
def spatial_gradient(self) ‑> Extrapolation

Returns the extrapolation for the spatial gradient of a tensor/field with this extrapolation.

Expand source code
def spatial_gradient(self) -> 'Extrapolation':
    """Returns the extrapolation for the spatial spatial_gradient of a tensor/field with this extrapolation."""
    raise NotImplementedError()
def to_dict(self) ‑> dict

Serialize this extrapolation to a dictionary that is serializable (JSON-writable).

Use from_dict() to restore the Extrapolation object.

Expand source code
def to_dict(self) -> dict:
    """
    Serialize this extrapolation to a dictionary that is serializable (JSON-writable).
    
    Use `from_dict()` to restore the Extrapolation object.
    """
    raise NotImplementedError()
def transform_coordinates(self, coordinates: phi.math._tensors.Tensor, shape: phi.math._shape.Shape) ‑> phi.math._tensors.Tensor

If is_copy_pad, transforms out-of-bounds coordinates to point to the index from which the value should be copied.

Otherwise, the grid tensor is assumed to hold the correct boundary values for this extrapolation at the edge. Coordinates are then snapped to the valid index range. This is the default implementation.

Args

coordinates
integer coordinates in index space
shape
tensor shape

Returns

transformed coordinates

Expand source code
def transform_coordinates(self, coordinates: Tensor, shape: Shape) -> Tensor:
    """
    If is_copy_pad, transforms out-of-bounds coordinates to point to the index from which the value should be copied.
    
    Otherwise, the grid tensor is assumed to hold the correct boundary values for this extrapolation at the edge.
    Coordinates are then snapped to the valid index range.
    This is the default implementation.

    Args:
      coordinates: integer coordinates in index space
      shape: tensor shape

    Returns:
      transformed coordinates

    """
    return math.clip(coordinates, 0, math.wrap(shape.spatial - 1, channel('vector')))
def valid_outer_faces(self, dim)

(lower: bool, upper: bool) indicating whether the values sampled at the outer-most faces of a staggered grid with this extrapolation are valid, i.e. need to be stored and are not redundant.

Expand source code
def valid_outer_faces(self, dim):
    """ `(lower: bool, upper: bool)` indicating whether the values sampled at the outer-most faces of a staggered grid with this extrapolation are valid, i.e. need to be stored and are not redundant. """
    raise NotImplementedError()
class LinearFunction

Just-in-time compiled linear function of Tensor arguments and return values.

Use jit_compile_linear() to create a linear function representation.

Expand source code
class LinearFunction(Generic[X, Y], Callable[[X], Y]):
    """
    Just-in-time compiled linear function of `Tensor` arguments and return values.

    Use `jit_compile_linear()` to create a linear function representation.
    """

    def __init__(self, f):
        self.f = f
        self.tracers: Dict[SignatureKey, ShiftLinTracer] = {}
        self.nl_jit = JitFunction(f)  # for backends that do not support sparse matrices

    def _trace(self, in_key: SignatureKey) -> 'ShiftLinTracer':
        assert in_key.shapes[0].is_uniform, f"math.jit_compile_linear() only supports uniform tensors for function input and output but input shape was {in_key.shapes[0]}"
        with in_key.backend:
            x = math.ones(in_key.shapes[0])
            tracer = ShiftLinTracer(x, {EMPTY_SHAPE: math.ones()}, x.shape, math.zeros(x.shape))
        f_input = assemble_tree(in_key.tree, [tracer])
        assert isinstance(f_input, tuple)
        condition_args = [in_key.kwargs[f'_condition_arg[{i}]'] for i in range(in_key.kwargs['n_condition_args'])]
        kwargs = {k: v for k, v in in_key.kwargs.items() if not (k.startswith('_condition_arg[') or k == 'n_condition_args')}
        result = self.f(*f_input, *condition_args, **kwargs)
        _, result_tensors = disassemble_tree(result)
        assert len(result_tensors) == 1, f"Linear function must return a single Tensor or tensor-like but got {result}"
        result_tensor = result_tensors[0]
        assert isinstance(result_tensor, ShiftLinTracer), f"Tracing linear function '{self.f.__name__}' failed. Make sure only linear operations are used."
        return result_tensor

    def _get_or_trace(self, key: SignatureKey):
        if not key.tracing and key in self.tracers:
            return self.tracers[key]
        else:
            tracer = self._trace(key)
            if not key.tracing:
                self.tracers[key] = tracer
                if len(self.tracers) >= 4:
                    warnings.warn(f"Φ-lin: The compiled linear function '{self.f.__name__}' was traced {len(self.tracers)} times. Performing many traces may be slow and cause memory leaks. A trace is performed when the function is called with different keyword arguments. Multiple linear traces can be avoided by jit-compiling the code that calls jit_compile_linear().")
            return tracer

    def __call__(self, *args: X, **kwargs) -> Y:
        nest, tensors = disassemble_tree(args)
        assert tensors, "Linear function requires at least one argument"
        if any(isinstance(t, ShiftLinTracer) for t in tensors):
            # TODO: if t is identity, use cached ShiftLinTracer, otherwise multiply two ShiftLinTracers
            return self.f(*args, **kwargs)
        backend = math.choose_backend_t(*tensors)
        if not backend.supports(Backend.sparse_coo_tensor):
            # warnings.warn(f"Sparse matrices are not supported by {backend}. Falling back to regular jit compilation.")
            if not all_available(*tensors):  # avoid nested tracing, Typical case jax.scipy.sparse.cg(LinearFunction). Nested traces cannot be reused which results in lots of traces per cg.
                logging.debug(f"Φ-lin: Running '{self.f.__name__}' as-is with {backend} because it is being traced.")
                return self.f(*args, **kwargs)
            else:
                return self.nl_jit(*args, **kwargs)
        x, *condition_args = args
        key = self._condition_key(x, condition_args, kwargs)
        tracer = self._get_or_trace(key)
        return tracer.apply(tensors[0])

    def sparse_matrix(self, x, *condition_args, format: str = None, **kwargs):
        key = self._condition_key(x, condition_args, kwargs)
        tracer = self._get_or_trace(key)
        assert math.close(tracer.bias, 0), "This is an affine function and cannot be represented by a single matrix. Use sparse_matrix_and_bias() instead."
        return tracer.get_sparse_matrix(format)

    def sparse_matrix_and_bias(self, x, *condition_args, format: str = None, **kwargs):
        key = self._condition_key(x, condition_args, kwargs)
        tracer = self._get_or_trace(key)
        return tracer.get_sparse_matrix(format), tracer.bias

    def _condition_key(self, x, condition_args, kwargs):
        kwargs['n_condition_args'] = len(condition_args)
        for i, c_arg in enumerate(condition_args):
            kwargs[f'_condition_arg[{i}]'] = c_arg
        key, _ = key_from_args(x, cache=False, **kwargs)
        assert key.backend.supports(Backend.sparse_coo_tensor)
        return key

    def stencil_inspector(self, *args, **kwargs):
        key, _ = key_from_args(*args, cache=True, **kwargs)
        tracer = self._get_or_trace(key)

        def print_stencil(**indices):
            pos = spatial(**indices)
            print(f"{self.f.__name__}: {pos} = {' + '.join(f'{val[indices]} * {vector_add(pos, offset)}' for offset, val in tracer.val.items() if (val[indices] != 0).all)}")

        return print_stencil

Ancestors

  • collections.abc.Callable
  • typing.Generic

Methods

def sparse_matrix(self, x, *condition_args, format: str = None, **kwargs)
Expand source code
def sparse_matrix(self, x, *condition_args, format: str = None, **kwargs):
    key = self._condition_key(x, condition_args, kwargs)
    tracer = self._get_or_trace(key)
    assert math.close(tracer.bias, 0), "This is an affine function and cannot be represented by a single matrix. Use sparse_matrix_and_bias() instead."
    return tracer.get_sparse_matrix(format)
def sparse_matrix_and_bias(self, x, *condition_args, format: str = None, **kwargs)
Expand source code
def sparse_matrix_and_bias(self, x, *condition_args, format: str = None, **kwargs):
    key = self._condition_key(x, condition_args, kwargs)
    tracer = self._get_or_trace(key)
    return tracer.get_sparse_matrix(format), tracer.bias
def stencil_inspector(self, *args, **kwargs)
Expand source code
def stencil_inspector(self, *args, **kwargs):
    key, _ = key_from_args(*args, cache=True, **kwargs)
    tracer = self._get_or_trace(key)

    def print_stencil(**indices):
        pos = spatial(**indices)
        print(f"{self.f.__name__}: {pos} = {' + '.join(f'{val[indices]} * {vector_add(pos, offset)}' for offset, val in tracer.val.items() if (val[indices] != 0).all)}")

    return print_stencil
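
Example (a minimal sketch; assumes a backend with sparse-matrix support, such as the default NumPy backend with SciPy):

```python
from phi import math
from phi.math import spatial, extrapolation

@math.jit_compile_linear
def explicit_diffusion(x):
    return x + 0.1 * math.laplace(x, padding=extrapolation.PERIODIC)

x0 = math.random_uniform(spatial(x=8))
y = explicit_diffusion(x0)                # traced on first call, reused afterwards
A = explicit_diffusion.sparse_matrix(x0)  # export the traced function as a sparse matrix
```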
class NotConverged

Raised during optimization if the desired accuracy was not reached within the maximum number of iterations.

This exception inherits from ConvergenceException.

See Also: Diverged.

Expand source code
class NotConverged(ConvergenceException):
    """
    Raised during optimization if the desired accuracy was not reached within the maximum number of iterations.

    This exception inherits from `ConvergenceException`.

    See Also:
        `Diverged`.
    """

    def __init__(self, result: SolveInfo):
        ConvergenceException.__init__(self, result)

Ancestors

  • phi.math._functional.ConvergenceException
  • builtins.RuntimeError
  • builtins.Exception
  • builtins.BaseException
class Shape

Shapes enumerate dimensions, each consisting of a name, size and type.

There are four types of dimensions: batch(), spatial(), channel(), and instance().

To construct a Shape, use batch(), spatial(), channel() or instance(), depending on the desired dimension type. To create a shape with multiple types, use merge_shapes(), concat_shapes() or the syntax shape1 & shape2.

The __init__ constructor is for internal use only.

Expand source code
class Shape:
    """
    Shapes enumerate dimensions, each consisting of a name, size and type.

    There are four types of dimensions: `batch`, `spatial`, `channel`, and `instance`.
    """

    def __init__(self, sizes: tuple or list, names: tuple or list, types: tuple or list):
        """
        To construct a `Shape`, use `batch`, `spatial`, `channel` or `instance`, depending on the desired dimension type.
        To create a shape with multiple types, use `merge_shapes()`, `concat_shapes()` or the syntax `shape1 & shape2`.

        The `__init__` constructor is for internal use only.
        """
        assert len(sizes) == len(names) == len(types), f"sizes={sizes} ({len(sizes)}), names={names} ({len(names)}), types={types} ({len(types)})"
        if len(sizes) > 0:
            from ._tensors import Tensor
            sizes = tuple([s if isinstance(s, Tensor) or s is None else int(s) for s in sizes])
        else:
            sizes = ()
        self.sizes: tuple = sizes
        """
        Ordered dimension sizes as `tuple`.
        The size of a dimension can be an `int` or a `Tensor` for [non-uniform shapes](https://tum-pbs.github.io/PhiFlow/Math.html#non-uniform-tensors).
        
        See Also:
            `Shape.get_size()`, `Shape.size`, `Shape.shape`.
        """
        self.names: Tuple[str] = tuple(names)
        """
        Ordered dimension names as `tuple[str]`.
        
        See Also:
            `Shape.name`.
        """
        assert all(isinstance(n, str) for n in names), f"All names must be of type string but got {names}"
        self.types: Tuple[str] = tuple(types)  # undocumented, may be private

    @property
    def _named_sizes(self):
        return zip(self.names, self.sizes)

    @property
    def _dimensions(self):
        return zip(self.sizes, self.names, self.types)

    def __len__(self):
        return len(self.sizes)

    def __contains__(self, item):
        if isinstance(item, str):
            return item in self.names
        elif isinstance(item, Shape):
            return all([d in self.names for d in item.names])
        else:
            raise ValueError(item)

    def __iter__(self):
        return iter(self[i] for i in range(self.rank))

    def index(self, dim: str or 'Shape' or None) -> int:
        """
        Finds the index of the dimension within this `Shape`.

        See Also:
            `Shape.indices()`.

        Args:
            dim: Dimension name or single-dimension `Shape`.

        Returns:
            Index as `int`.
        """
        if dim is None:
            return None
        elif isinstance(dim, str):
            return self.names.index(dim)
        elif isinstance(dim, Shape):
            assert dim.rank == 1, f"index() requires a single dimension as input but got {dim}. Use indices() for multiple dimensions."
            return self.names.index(dim.name)
        else:
            raise ValueError(f"index() requires a single dimension as input but got {dim}")

    def indices(self, dims: tuple or list or 'Shape') -> Tuple[int]:
        """
        Finds the indices of the given dimensions within this `Shape`.

        See Also:
            `Shape.index()`.

        Args:
            dims: Sequence of dimensions as `tuple`, `list` or `Shape`.

        Returns:
            Indices as `tuple[int]`.
        """
        if isinstance(dims, (list, tuple)):
            return tuple(self.index(n) for n in dims)
        elif isinstance(dims, Shape):
            return tuple(self.index(n) for n in dims.names)
        else:
            raise ValueError(f"indices() requires a sequence of dimensions but got {dims}")

    def get_size(self, dim: str or 'Shape'):
        """
        See Also:
            `Shape.get_sizes()`, `Shape.size`

        Args:
            dim: Dimension, either as name `str` or single-dimension `Shape`.

        Returns:
            Size associated with `dim` as `int` or `Tensor`.
        """
        if isinstance(dim, str):
            return self.sizes[self.names.index(dim)]
        elif isinstance(dim, Shape):
            assert dim.rank == 1, f"get_size() requires a single dimension but got {dim}. Use indices() to get multiple sizes."
            return self.sizes[self.names.index(dim.name)]
        else:
            raise ValueError(f"get_size() requires a single dimension but got {dim}. Use indices() to get multiple sizes.")

    def get_sizes(self, dims: tuple or list or 'Shape') -> tuple:
        """
        See Also:
            `Shape.get_size()`

        Args:
            dims: Dimensions as `tuple`, `list` or `Shape`.

        Returns:
            `tuple`
        """
        assert isinstance(dims, (tuple, list, Shape)), f"get_sizes() requires a sequence of dimensions but got {dims}"
        return tuple([self.get_size(dim) for dim in dims])

    def get_type(self, dim: str or 'Shape') -> str:
        """
        See Also:
            `Shape.get_types()`, `Shape.type`.

        Args:
            dim: Dimension, either as name `str` or single-dimension `Shape`.

        Returns:
            Dimension type as `str`.
        """
        if isinstance(dim, str):
            return self.types[self.names.index(dim)]
        elif isinstance(dim, Shape):
            assert dim.rank == 1, f"Shape.get_type() only accepts single-dimension Shapes but got {dim}"
            return self.types[self.names.index(dim.name)]
        else:
            raise ValueError(dim)

    def get_types(self, dims: tuple or list or 'Shape') -> tuple:
        """
        See Also:
            `Shape.get_type()`

        Args:
            dims: Dimensions as `tuple`, `list` or `Shape`.

        Returns:
            `tuple`
        """
        if isinstance(dims, (tuple, list)):
            return tuple(self.get_type(n) for n in dims)
        elif isinstance(dims, Shape):
            return tuple(self.get_type(n) for n in dims.names)
        else:
            raise ValueError(dims)

    def __getitem__(self, selection):
        if isinstance(selection, int):
            return Shape([self.sizes[selection]], [self.names[selection]], [self.types[selection]])
        elif isinstance(selection, slice):
            return Shape(self.sizes[selection], self.names[selection], self.types[selection])
        elif isinstance(selection, str):
            index = self.index(selection)
            return Shape([self.sizes[index]], [self.names[index]], [self.types[index]])
        elif isinstance(selection, (tuple, list)):
            return Shape([self.sizes[i] for i in selection], [self.names[i] for i in selection], [self.types[i] for i in selection])
        raise AssertionError(f"Can only access shape elements as shape[int], shape[str], shape[slice], shape[tuple] or shape[list] but got {selection}")

    @property
    def batch(self) -> 'Shape':
        """
        Filters this shape, returning only the batch dimensions as a new `Shape` object.

        See also:
            `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

        Returns:
            New `Shape` object
        """
        return self[[i for i, t in enumerate(self.types) if t == BATCH_DIM]]

    @property
    def non_batch(self) -> 'Shape':
        """
        Filters this shape, returning only the non-batch dimensions as a new `Shape` object.

        See also:
            `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

        Returns:
            New `Shape` object
        """
        return self[[i for i, t in enumerate(self.types) if t != BATCH_DIM]]

    @property
    def spatial(self) -> 'Shape':
        """
        Filters this shape, returning only the spatial dimensions as a new `Shape` object.

        See also:
            `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

        Returns:
            New `Shape` object
        """
        return self[[i for i, t in enumerate(self.types) if t == SPATIAL_DIM]]

    @property
    def non_spatial(self) -> 'Shape':
        """
        Filters this shape, returning only the non-spatial dimensions as a new `Shape` object.

        See also:
            `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

        Returns:
            New `Shape` object
        """
        return self[[i for i, t in enumerate(self.types) if t != SPATIAL_DIM]]

    @property
    def instance(self) -> 'Shape':
        """
        Filters this shape, returning only the instance dimensions as a new `Shape` object.

        See also:
            `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

        Returns:
            New `Shape` object
        """
        return self[[i for i, t in enumerate(self.types) if t == INSTANCE_DIM]]

    @property
    def non_instance(self) -> 'Shape':
        """
        Filters this shape, returning only the non-instance dimensions as a new `Shape` object.

        See also:
            `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

        Returns:
            New `Shape` object
        """
        return self[[i for i, t in enumerate(self.types) if t != INSTANCE_DIM]]

    @property
    def channel(self) -> 'Shape':
        """
        Filters this shape, returning only the channel dimensions as a new `Shape` object.

        See also:
            `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

        Returns:
            New `Shape` object
        """
        return self[[i for i, t in enumerate(self.types) if t == CHANNEL_DIM]]

    @property
    def non_channel(self) -> 'Shape':
        """
        Filters this shape, returning only the non-channel dimensions as a new `Shape` object.

        See also:
            `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

        Returns:
            New `Shape` object
        """
        return self[[i for i, t in enumerate(self.types) if t != CHANNEL_DIM]]

    def unstack(self, dim='dims') -> Tuple['Shape']:
        """
        Slices this `Shape` along a dimension.
        The dimension listing the sizes of the shape is referred to as `'dims'`.

        Non-uniform tensor shapes may be unstacked along other dimensions as well, see
        https://tum-pbs.github.io/PhiFlow/Math.html#non-uniform-tensors

        Args:
            dim: dimension to unstack

        Returns:
            slices of this shape
        """
        if dim == 'dims':
            return tuple(Shape([self.sizes[i]], [self.names[i]], [self.types[i]]) for i in range(self.rank))
        if dim not in self:
            return tuple([self])
        else:
            from ._tensors import Tensor
            inner = self.without(dim)
            sizes = []
            dim_size = self.get_size(dim)
            for size in inner.sizes:
                if isinstance(size, Tensor) and dim in size.shape:
                    sizes.append(size.unstack(dim))
                    dim_size = size.shape.get_size(dim)
                else:
                    sizes.append(size)
            assert isinstance(dim_size, int)
            shapes = tuple(Shape([int(size[i]) if isinstance(size, tuple) else size for size in sizes], inner.names, inner.types) for i in range(dim_size))
            return shapes

    @property
    def name(self) -> str:
        """
        Only for Shapes containing exactly one single dimension.
        Returns the name of the dimension.

        See Also:
            `Shape.names`.
        """
        assert self.rank == 1, "Shape.name is only defined for shapes of rank 1."
        return self.names[0]

    @property
    def size(self) -> int:
        """
        Only for Shapes containing exactly one single dimension.
        Returns the size of the dimension.

        See Also:
            `Shape.sizes`, `Shape.get_size()`.
        """
        assert self.rank == 1, "Shape.size is only defined for shapes of rank 1."
        return self.sizes[0]

    @property
    def type(self) -> int:
        """
        Only for Shapes containing exactly one single dimension.
        Returns the type of the dimension.

        See Also:
            `Shape.get_type()`.
        """
        assert self.rank == 1, "Shape.type is only defined for shapes of rank 1."
        return self.types[0]

    def __int__(self):
        assert self.rank == 1, "int(Shape) is only defined for shapes of rank 1."
        return self.sizes[0]

    def mask(self, names: tuple or list or set or 'Shape'):
        """
        Returns a binary sequence corresponding to the names of this Shape.
        A value of 1 means that a dimension of this Shape is contained in `names`.

        Args:
          names: Dimension names as `str`, `tuple`, `list`, `set` or `Shape`.

        Returns:
          binary sequence

        """
        if isinstance(names, str):
            names = [names]
        elif isinstance(names, Shape):
            names = names.names
        mask = [1 if name in names else 0 for name in self.names]
        return tuple(mask)

    def __repr__(self):
        strings = [f"{name}{TYPE_ABBR.get(dim_type, '?')}={size}" for size, name, dim_type in self._dimensions]
        return '(' + ', '.join(strings) + ')'

    def __eq__(self, other):
        if not isinstance(other, Shape):
            return False
        if self.names != other.names or self.types != other.types:
            return False
        for size1, size2 in zip(self.sizes, other.sizes):
            equal = size1 == size2
            assert isinstance(equal, (bool, math.Tensor))
            if isinstance(equal, math.Tensor):
                equal = equal.all
            if not equal:
                return False
        return True

    def __ne__(self, other):
        return not self == other

    def __bool__(self):
        return self.rank > 0

    def _reorder(self, names: tuple or list or 'Shape') -> 'Shape':
        assert len(names) == self.rank
        if isinstance(names, Shape):
            names = names.names
        order = [self.index(n) for n in names]
        return self[order]

    def _order_group(self, names: tuple or list or 'Shape') -> list:
        """ Reorders the dimensions of this `Shape` so that `names` are clustered together and occur in the specified order. """
        if isinstance(names, Shape):
            names = names.names
        result = []
        for dim in self.names:
            if dim not in result:
                if dim in names:
                    result.extend(names)
                else:
                    result.append(dim)
        return result

    def __and__(self, other):
        return merge_shapes(self, other, check_exact=[spatial])

    def _expand(self, dim: 'Shape', pos=None) -> 'Shape':
        """**Deprecated.** Use `phi.math.merge_shapes()` or `phi.math.concat_shapes()` instead. """
        warnings.warn("Shape.expand() is deprecated. Use merge_shapes() or concat_shapes() instead.", DeprecationWarning)
        if not dim:
            return self
        assert dim.name not in self, f"Cannot expand shape {self} by {dim} because dimension already exists."
        assert isinstance(dim, Shape) and dim.rank == 1, f"Shape.expand() requires a single dimension as a Shape but got {dim}"
        if pos is None:
            same_type_dims = self[[i for i, t in enumerate(self.types) if t == dim.type]]
            if len(same_type_dims) > 0:
                pos = self.index(same_type_dims.names[0])
            else:
                pos = {BATCH_DIM: 0, INSTANCE_DIM: self.batch_rank, SPATIAL_DIM: self.batch.rank + self.instance_rank, CHANNEL_DIM: self.rank + 1}[dim.type]
        elif pos < 0:
            pos += self.rank + 1
        sizes = list(self.sizes)
        names = list(self.names)
        types = list(self.types)
        sizes.insert(pos, dim.size)
        names.insert(pos, dim.name)
        types.insert(pos, dim.type)
        return Shape(sizes, names, types)

    def without(self, dims: str or tuple or list or 'Shape') -> 'Shape':
        """
        Builds a new shape from this one that is missing all given dimensions.
        Dimensions in `dims` that are not part of this Shape are ignored.
        
        The complementary operation is `Shape.only()`.

        Args:
          dims: Dimensions to exclude as `str`, `tuple`, `list` or `Shape`. Dimensions that are not part of this shape are ignored.

        Returns:
          Shape without specified dimensions
        """
        if isinstance(dims, str):
            return self[[i for i in range(self.rank) if self.names[i] != dims]]
        if isinstance(dims, (tuple, list)):
            return self[[i for i in range(self.rank) if self.names[i] not in dims]]
        elif isinstance(dims, Shape):
            return self[[i for i in range(self.rank) if self.names[i] not in dims.names]]
        # elif dims is None:  # subtract all
        #     return EMPTY_SHAPE
        else:
            raise ValueError(dims)

    def only(self, dims: str or tuple or list or 'Shape'):
        """
        Builds a new shape from this one that only contains the given dimensions.
        Dimensions in `dims` that are not part of this Shape are ignored.
        
        The complementary operation is `Shape.without()`.

        Args:
          dims: Dimensions to keep as `str`, `tuple`, `list` or `Shape`.

        Returns:
          Shape containing only specified dimensions

        """
        if isinstance(dims, str):
            dims = parse_dim_order(dims)
        if isinstance(dims, (tuple, list)):
            return self[[i for i in range(self.rank) if self.names[i] in dims]]
        elif isinstance(dims, Shape):
            return self[[i for i in range(self.rank) if self.names[i] in dims.names]]
        elif dims is None:  # keep all
            return self
        else:
            raise ValueError(dims)

    @property
    def rank(self) -> int:
        """
        Returns the number of dimensions.
        Equal to `len(shape)`.

        See `Shape.is_empty`, `Shape.batch_rank`, `Shape.spatial_rank`, `Shape.channel_rank`.
        """
        return len(self.sizes)

    @property
    def batch_rank(self) -> int:
        """ Number of batch dimensions """
        r = 0
        for ty in self.types:
            if ty == BATCH_DIM:
                r += 1
        return r

    @property
    def instance_rank(self) -> int:
        """ Number of instance dimensions """
        r = 0
        for ty in self.types:
            if ty == INSTANCE_DIM:
                r += 1
        return r

    @property
    def spatial_rank(self) -> int:
        """ Number of spatial dimensions """
        r = 0
        for ty in self.types:
            if ty == SPATIAL_DIM:
                r += 1
        return r

    @property
    def channel_rank(self) -> int:
        """ Number of channel dimensions """
        r = 0
        for ty in self.types:
            if ty == CHANNEL_DIM:
                r += 1
        return r

    @property
    def well_defined(self):
        """
        Returns `True` if no dimension size is `None`.

        Shapes with undefined sizes may be used in `phi.math.tensor()`, `phi.math.wrap()`, `phi.math.stack()` or `phi.math.concat()`.

        To create an undefined size, call a constructor function (`batch()`, `spatial()`, `channel()`, `instance()`)
        with positional `str` arguments, e.g. `spatial('x')`.
        """
        for size in self.sizes:
            if size is None:
                return False
        return True

    @property
    def shape(self) -> 'Shape':
        """
        Higher-order `Shape`.
        The returned shape will always contain the channel dimension `dims` with a size equal to the `Shape.rank` of this shape.

        For uniform shapes, `Shape.shape` will only contain the dimension `dims` but the shapes of [non-uniform shapes](https://tum-pbs.github.io/PhiFlow/Math.html#non-uniform-tensors)
        may contain additional dimensions.

        See Also:
            `Shape.is_uniform`.

        Returns:
            `Shape`.
        """
        from phi.math import Tensor
        shape = Shape([self.rank], ['dims'], [CHANNEL_DIM])
        for size in self.sizes:
            if isinstance(size, Tensor):
                shape = shape & size.shape
        return shape

    @property
    def is_uniform(self) -> bool:
        """
        A shape is uniform if all of its sizes have a single integer value.

        See Also:
            `Shape.is_non_uniform`, `Shape.shape`.
        """
        return not self.is_non_uniform

    @property
    def is_non_uniform(self) -> bool:
        """
        A shape is non-uniform if the size of any dimension varies along another dimension.

        See Also:
            `Shape.is_uniform`, `Shape.shape`.
        """
        from phi.math import Tensor
        for size in self.sizes:
            if isinstance(size, Tensor) and size.rank > 0:
                return True
        return False

    def with_size(self, size: int):
        """
        Only for single-dimension shapes.
        Returns a `Shape` representing this dimension but with a different size.

        See Also:
            `Shape.with_sizes()`.

        Args:
            size: Replacement size for this dimension.

        Returns:
            `Shape`
        """
        assert self.rank == 1, "Shape.with_size() is only defined for shapes of rank 1."
        return self.with_sizes([size])

    def with_sizes(self, sizes: tuple or list or 'Shape'):
        """
        Returns a new `Shape` matching the dimension names and types of `self` but with different sizes.

        See Also:
            `Shape.with_size()`.

        Args:
            sizes: One of

                * `tuple` / `list` of same length as `self` containing replacement sizes.
                * `Shape` of any rank. Replaces sizes for dimensions shared by `sizes` and `self`.

        Returns:
            `Shape` with same names and types as `self`.
        """
        if isinstance(sizes, Shape):
            sizes = [sizes.get_size(dim) if dim in sizes else self.sizes[i] for i, dim in enumerate(self.names)]
            return Shape(sizes, self.names, self.types)
        else:
            assert len(sizes) == len(self.sizes), f"Cannot create shape from {self} with sizes {sizes}"
            return Shape(sizes, self.names, self.types)

    def _replace_single_size(self, dim: str, size: int):
        new_sizes = list(self.sizes)
        new_sizes[self.index(dim)] = size
        return self.with_sizes(new_sizes)

    def _with_names(self, names: str or tuple or list):
        if isinstance(names, str):
            names = parse_dim_names(names, self.rank)
            names = [n if n is not None else o for n, o in zip(names, self.names)]
        return Shape(self.sizes, names, self.types)

    def _replace_names_and_types(self,
                                 dims: 'Shape' or str or tuple or list,
                                 new: 'Shape' or str or tuple or list) -> 'Shape':
        dims = parse_dim_order(dims)
        sizes = [math.rename_dims(s, dims, new) if isinstance(s, math.Tensor) else s for s in self.sizes]
        if isinstance(new, Shape):  # replace names and types
            names = list(self.names)
            types = list(self.types)
            for old_name, new_dim in zip(dims, new):
                names[self.index(old_name)] = new_dim.name
                types[self.index(old_name)] = new_dim.type
            return Shape(sizes, names, types)
        else:  # replace only names
            new = parse_dim_order(new)
            names = list(self.names)
            for old_name, new_name in zip(dims, new):
                names[self.index(old_name)] = new_name
            return Shape(sizes, names, self.types)

    def _with_types(self, types: 'Shape'):
        return Shape(self.sizes, self.names, [types.get_type(name) if name in types else self_type for name, self_type in zip(self.names, self.types)])

    def _perm(self, names: Tuple[str]):
        assert len(set(names)) == len(names), f"No duplicates allowed but got {names}"
        assert len(names) >= len(self.names), f"Cannot find permutation for {self} given {names} because names {set(self.names) - set(names)} are missing"
        assert len(names) <= len(self.names), f"Cannot find permutation for {self} given {names} because too many names were passed: {names}"
        perm = [self.names.index(name) for name in names]
        return perm

    @property
    def volume(self) -> int or None:
        """
        Returns the total number of values contained in a tensor of this shape.
        This is the product of all dimension sizes.

        Returns:
            volume as `int` or `Tensor` or `None` if the shape is not `Shape.well_defined`
        """
        from phi.math import Tensor
        for dim, size in self._named_sizes:
            if isinstance(size, Tensor) and size.rank > 0:
                non_uniform_dim = size.shape.names[0]
                shapes = self.unstack(non_uniform_dim)
                return sum(s.volume for s in shapes)
        result = 1
        for size in self.sizes:
            if size is None:
                return None
            result *= size
        return int(result)

    @property
    def is_empty(self) -> bool:
        """ True if this shape has no dimensions. Equivalent to `Shape.rank` `== 0`. """
        return len(self.sizes) == 0

    def after_pad(self, widths: dict) -> 'Shape':
        sizes = list(self.sizes)
        for dim, (lo, up) in widths.items():
            sizes[self.index(dim)] += lo + up
        return Shape(sizes, self.names, self.types)

    def after_gather(self, selection: dict) -> 'Shape':
        result = self
        for name, selection in selection.items():
            if name not in self.names:
                continue
            if isinstance(selection, int):
                if result.is_uniform:
                    result = result.without(name)
                else:
                    from phi.math import Tensor
                    gathered_sizes = [(s[{name: selection}] if isinstance(s, Tensor) else s) for s in result.sizes]
                    result = result.with_sizes(gathered_sizes).without(name)
            elif isinstance(selection, slice):
                start = selection.start or 0
                stop = selection.stop or self.get_size(name)
                step = selection.step or 1
                if stop < 0:
                    stop += self.get_size(name)
                    assert stop >= 0
                new_size = math.to_int64(math.ceil(math.wrap((stop - start) / step)))
                if new_size.rank == 0:
                    new_size = int(new_size)  # NumPy array not allowed because not hashable
                result = result._replace_single_size(name, new_size)
            else:
                raise NotImplementedError(f"{type(selection)} not supported. Only (int, slice) allowed.")
        return result

    def meshgrid(self):
        """Builds a sequence containing all multi-indices within a tensor of this shape."""
        indices = [0] * self.rank
        while True:
            yield {name: index for name, index in zip(self.names, indices)}
            for i in range(self.rank-1, -1, -1):
                indices[i] = (indices[i] + 1) % self.sizes[i]
                if indices[i] != 0:
                    break
            else:
                return

    def __add__(self, other):
        return self._op2(other, lambda s, o: s + o)

    def __radd__(self, other):
        return self._op2(other, lambda s, o: o + s)

    def __sub__(self, other):
        return self._op2(other, lambda s, o: s - o)

    def __rsub__(self, other):
        return self._op2(other, lambda s, o: o - s)

    def __mul__(self, other):
        return self._op2(other, lambda s, o: s * o)

    def __rmul__(self, other):
        return self._op2(other, lambda s, o: o * s)

    def _op2(self, other, fun):
        if isinstance(other, int):
            return Shape([fun(s, other) for s in self.sizes], self.names, self.types)
        elif isinstance(other, Shape):
            assert self.names == other.names, f"{self.names, other.names}"
            return Shape([fun(s, o) for s, o in zip(self.sizes, other.sizes)], self.names, self.types)
        else:
            return NotImplemented

    def __hash__(self):
        return hash(self.names)

Instance variables

var batch : phi.math._shape.Shape

Filters this shape, returning only the batch dimensions as a new Shape object.

See also: Shape.batch, Shape.spatial, Shape.instance, Shape.channel, Shape.non_batch, Shape.non_spatial, Shape.non_instance, Shape.non_channel.

Returns

New Shape object

Expand source code
@property
def batch(self) -> 'Shape':
    """
    Filters this shape, returning only the batch dimensions as a new `Shape` object.

    See also:
        `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

    Returns:
        New `Shape` object
    """
    return self[[i for i, t in enumerate(self.types) if t == BATCH_DIM]]
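A short sketch of the type filters (dimension names are illustrative):

from phi.math import batch, spatial, channel

s = batch(b=8) & spatial(x=28, y=28) & channel(vector=2)
s.batch      # shape containing only b
s.non_batch  # shape containing x, y and vector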
var batch_rank : int

Number of batch dimensions

Expand source code
@property
def batch_rank(self) -> int:
    """ Number of batch dimensions """
    r = 0
    for ty in self.types:
        if ty == BATCH_DIM:
            r += 1
    return r
var channel : phi.math._shape.Shape

Filters this shape, returning only the channel dimensions as a new Shape object.

See also: Shape.batch, Shape.spatial, Shape.instance, Shape.channel, Shape.non_batch, Shape.non_spatial, Shape.non_instance, Shape.non_channel.

Returns

New Shape object

Expand source code
@property
def channel(self) -> 'Shape':
    """
    Filters this shape, returning only the channel dimensions as a new `Shape` object.

    See also:
        `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

    Returns:
        New `Shape` object
    """
    return self[[i for i, t in enumerate(self.types) if t == CHANNEL_DIM]]
var channel_rank : int

Number of channel dimensions

Expand source code
@property
def channel_rank(self) -> int:
    """ Number of channel dimensions """
    r = 0
    for ty in self.types:
        if ty == CHANNEL_DIM:
            r += 1
    return r
var instance : phi.math._shape.Shape

Filters this shape, returning only the instance dimensions as a new Shape object.

See also: Shape.batch, Shape.spatial, Shape.instance, Shape.channel, Shape.non_batch, Shape.non_spatial, Shape.non_instance, Shape.non_channel.

Returns

New Shape object

Expand source code
@property
def instance(self) -> 'Shape':
    """
    Filters this shape, returning only the instance dimensions as a new `Shape` object.

    See also:
        `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

    Returns:
        New `Shape` object
    """
    return self[[i for i, t in enumerate(self.types) if t == INSTANCE_DIM]]
var instance_rank : int

Number of instance dimensions

Expand source code
@property
def instance_rank(self) -> int:
    """ Number of instance dimensions """
    r = 0
    for ty in self.types:
        if ty == INSTANCE_DIM:
            r += 1
    return r
var is_empty : bool

True if this shape has no dimensions. Equivalent to Shape.rank == 0.

Expand source code
@property
def is_empty(self) -> bool:
    """ True if this shape has no dimensions. Equivalent to `Shape.rank` `== 0`. """
    return len(self.sizes) == 0
var is_non_uniform : bool

A shape is non-uniform if the size of any dimension varies along another dimension.

See Also: Shape.is_uniform, Shape.shape.

Expand source code
@property
def is_non_uniform(self) -> bool:
    """
    A shape is non-uniform if the size of any dimension varies along another dimension.

    See Also:
        `Shape.is_uniform`, `Shape.shape`.
    """
    from phi.math import Tensor
    for size in self.sizes:
        if isinstance(size, Tensor) and size.rank > 0:
            return True
    return False
var is_uniform : bool

A shape is uniform if all of its sizes have a single integer value.

See Also: Shape.is_non_uniform, Shape.shape.

Expand source code
@property
def is_uniform(self) -> bool:
    """
    A shape is uniform if all of its sizes have a single integer value.

    See Also:
        `Shape.is_non_uniform`, `Shape.shape`.
    """
    return not self.is_non_uniform
var name : str

Only for Shapes containing exactly one single dimension. Returns the name of the dimension.

See Also: Shape.names.

Expand source code
@property
def name(self) -> str:
    """
    Only for Shapes containing exactly one single dimension.
    Returns the name of the dimension.

    See Also:
        `Shape.names`.
    """
    assert self.rank == 1, "Shape.name is only defined for shapes of rank 1."
    return self.names[0]
var names

Ordered dimension names as tuple[str].

See Also: Shape.name.

var non_batch : phi.math._shape.Shape

Filters this shape, returning only the non-batch dimensions as a new Shape object.

See also: Shape.batch, Shape.spatial, Shape.instance, Shape.channel, Shape.non_batch, Shape.non_spatial, Shape.non_instance, Shape.non_channel.

Returns

New Shape object

Expand source code
@property
def non_batch(self) -> 'Shape':
    """
    Filters this shape, returning only the non-batch dimensions as a new `Shape` object.

    See also:
        `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

    Returns:
        New `Shape` object
    """
    return self[[i for i, t in enumerate(self.types) if t != BATCH_DIM]]
var non_channel : phi.math._shape.Shape

Filters this shape, returning only the non-channel dimensions as a new Shape object.

See also: Shape.batch, Shape.spatial, Shape.instance, Shape.channel, Shape.non_batch, Shape.non_spatial, Shape.non_instance, Shape.non_channel.

Returns

New Shape object

Expand source code
@property
def non_channel(self) -> 'Shape':
    """
    Filters this shape, returning only the non-channel dimensions as a new `Shape` object.

    See also:
        `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

    Returns:
        New `Shape` object
    """
    return self[[i for i, t in enumerate(self.types) if t != CHANNEL_DIM]]
var non_instance : phi.math._shape.Shape

Filters this shape, returning only the non-instance dimensions as a new Shape object.

See also: Shape.batch, Shape.spatial, Shape.instance, Shape.channel, Shape.non_batch, Shape.non_spatial, Shape.non_instance, Shape.non_channel.

Returns

New Shape object

Expand source code
@property
def non_instance(self) -> 'Shape':
    """
    Filters this shape, returning only the non-instance dimensions as a new `Shape` object.

    See also:
        `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

    Returns:
        New `Shape` object
    """
    return self[[i for i, t in enumerate(self.types) if t != INSTANCE_DIM]]
var non_spatial : phi.math._shape.Shape

Filters this shape, returning only the non-spatial dimensions as a new Shape object.

See also: Shape.batch, Shape.spatial, Shape.instance, Shape.channel, Shape.non_batch, Shape.non_spatial, Shape.non_instance, Shape.non_channel.

Returns

New Shape object

Expand source code
@property
def non_spatial(self) -> 'Shape':
    """
    Filters this shape, returning only the non-spatial dimensions as a new `Shape` object.

    See also:
        `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

    Returns:
        New `Shape` object
    """
    return self[[i for i, t in enumerate(self.types) if t != SPATIAL_DIM]]
var rank : int

Returns the number of dimensions. Equal to len(shape).

See Shape.is_empty, Shape.batch_rank, Shape.spatial_rank, Shape.channel_rank.

Expand source code
@property
def rank(self) -> int:
    """
    Returns the number of dimensions.
    Equal to `len(shape)`.

    See `Shape.is_empty`, `Shape.batch_rank`, `Shape.spatial_rank`, `Shape.channel_rank`.
    """
    return len(self.sizes)
var shape : phi.math._shape.Shape

Higher-order Shape. The returned shape will always contain the channel dimension dims with a size equal to the Shape.rank of this shape.

For uniform shapes, Shape.shape will only contain the dimension dims but the shapes of non-uniform shapes may contain additional dimensions.

See Also: Shape.is_uniform.

Returns

Shape.

Expand source code
@property
def shape(self) -> 'Shape':
    """
    Higher-order `Shape`.
    The returned shape will always contain the channel dimension `dims` with a size equal to the `Shape.rank` of this shape.

    For uniform shapes, `Shape.shape` will only contain the dimension `dims` but the shapes of [non-uniform shapes](https://tum-pbs.github.io/PhiFlow/Math.html#non-uniform-tensors)
    may contain additional dimensions.

    See Also:
        `Shape.is_uniform`.

    Returns:
        `Shape`.
    """
    from phi.math import Tensor
    shape = Shape([self.rank], ['dims'], [CHANNEL_DIM])
    for size in self.sizes:
        if isinstance(size, Tensor):
            shape = shape & size.shape
    return shape
var size : int

Only for Shapes containing exactly one single dimension. Returns the size of the dimension.

See Also: Shape.sizes, Shape.get_size().

Expand source code
@property
def size(self) -> int:
    """
    Only for Shapes containing exactly one single dimension.
    Returns the size of the dimension.

    See Also:
        `Shape.sizes`, `Shape.get_size()`.
    """
    assert self.rank == 1, "Shape.size is only defined for shapes of rank 1."
    return self.sizes[0]
var sizes

Ordered dimension sizes as tuple. The size of a dimension can be an int or a Tensor for non-uniform shapes.

See Also: Shape.get_size(), Shape.size, Shape.shape.

var spatial : phi.math._shape.Shape

Filters this shape, returning only the spatial dimensions as a new Shape object.

See also: Shape.batch, Shape.spatial, Shape.instance, Shape.channel, Shape.non_batch, Shape.non_spatial, Shape.non_instance, Shape.non_channel.

Returns

New Shape object

Expand source code
@property
def spatial(self) -> 'Shape':
    """
    Filters this shape, returning only the spatial dimensions as a new `Shape` object.

    See also:
        `Shape.batch`, `Shape.spatial`, `Shape.instance`, `Shape.channel`, `Shape.non_batch`, `Shape.non_spatial`, `Shape.non_instance`, `Shape.non_channel`.

    Returns:
        New `Shape` object
    """
    return self[[i for i, t in enumerate(self.types) if t == SPATIAL_DIM]]
var spatial_rank : int

Number of spatial dimensions

Expand source code
@property
def spatial_rank(self) -> int:
    """ Number of spatial dimensions """
    r = 0
    for ty in self.types:
        if ty == SPATIAL_DIM:
            r += 1
    return r
var type : int

Only for Shapes containing exactly one single dimension. Returns the type of the dimension.

See Also: Shape.get_type().

Expand source code
@property
def type(self) -> int:
    """
    Only for Shapes containing exactly one single dimension.
    Returns the type of the dimension.

    See Also:
        `Shape.get_type()`.
    """
    assert self.rank == 1, "Shape.type is only defined for shapes of rank 1."
    return self.types[0]
var volume : int

Returns the total number of values contained in a tensor of this shape. This is the product of all dimension sizes.

Returns

volume as int or Tensor or None if the shape is not Shape.well_defined

Expand source code
@property
def volume(self) -> int or None:
    """
    Returns the total number of values contained in a tensor of this shape.
    This is the product of all dimension sizes.

    Returns:
        volume as `int` or `Tensor` or `None` if the shape is not `Shape.well_defined`
    """
    from phi.math import Tensor
    for dim, size in self._named_sizes:
        if isinstance(size, Tensor) and size.rank > 0:
            non_uniform_dim = size.shape.names[0]
            shapes = self.unstack(non_uniform_dim)
            return sum(s.volume for s in shapes)
    result = 1
    for size in self.sizes:
        if size is None:
            return None
        result *= size
    return int(result)
var well_defined

Returns True if no dimension size is None.

Shapes with undefined sizes may be used in tensor(), wrap(), stack() or concat().

To create an undefined size, call a constructor function (batch(), spatial(), channel(), instance()) with positional str arguments, e.g. spatial('x').

Expand source code
@property
def well_defined(self):
    """
    Returns `True` if no dimension size is `None`.

    Shapes with undefined sizes may be used in `phi.math.tensor()`, `phi.math.wrap()`, `phi.math.stack()` or `phi.math.concat()`.

    To create an undefined size, call a constructor function (`batch()`, `spatial()`, `channel()`, `instance()`)
    with positional `str` arguments, e.g. `spatial('x')`.
    """
    for size in self.sizes:
        if size is None:
            return False
    return True
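For example, sizes left undefined by the constructor functions make a shape not well-defined:

from phi.math import spatial

spatial(x=64).well_defined      # True
spatial('x', 'y').well_defined  # False, both sizes are None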

Methods

def after_gather(self, selection: dict) ‑> phi.math._shape.Shape
Expand source code
def after_gather(self, selection: dict) -> 'Shape':
    result = self
    for name, selection in selection.items():
        if name not in self.names:
            continue
        if isinstance(selection, int):
            if result.is_uniform:
                result = result.without(name)
            else:
                from phi.math import Tensor
                gathered_sizes = [(s[{name: selection}] if isinstance(s, Tensor) else s) for s in result.sizes]
                result = result.with_sizes(gathered_sizes).without(name)
        elif isinstance(selection, slice):
            start = selection.start or 0
            stop = selection.stop or self.get_size(name)
            step = selection.step or 1
            if stop < 0:
                stop += self.get_size(name)
                assert stop >= 0
            new_size = math.to_int64(math.ceil(math.wrap((stop - start) / step)))
            if new_size.rank == 0:
                new_size = int(new_size)  # NumPy array not allowed because not hashable
            result = result._replace_single_size(name, new_size)
        else:
            raise NotImplementedError(f"{type(selection)} not supported. Only (int, slice) allowed.")
    return result
def after_pad(self, widths: dict) ‑> phi.math._shape.Shape
Expand source code
def after_pad(self, widths: dict) -> 'Shape':
    sizes = list(self.sizes)
    for dim, (lo, up) in widths.items():
        sizes[self.index(dim)] += lo + up
    return Shape(sizes, self.names, self.types)
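after_pad expects a dict mapping dimension names to (lower, upper) padding widths, e.g.:

from phi.math import spatial

spatial(x=10).after_pad({'x': (2, 2)})  # size of x grows to 14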
def get_size(self, dim: str)

See Also: Shape.get_sizes(), Shape.size

Args

dim
Dimension, either as name str or single-dimension Shape.

Returns

Size associated with dim as int or Tensor.

Expand source code
def get_size(self, dim: str or 'Shape'):
    """
    See Also:
        `Shape.get_sizes()`, `Shape.size`

    Args:
        dim: Dimension, either as name `str` or single-dimension `Shape`.

    Returns:
        Size associated with `dim` as `int` or `Tensor`.
    """
    if isinstance(dim, str):
        return self.sizes[self.names.index(dim)]
    elif isinstance(dim, Shape):
        assert dim.rank == 1, f"get_size() requires a single dimension but got {dim}. Use indices() to get multiple sizes."
        return self.sizes[self.names.index(dim.name)]
    else:
        raise ValueError(f"get_size() requires a single dimension but got {dim}. Use indices() to get multiple sizes.")
def get_sizes(self, dims: tuple) ‑> tuple

See Also: Shape.get_size()

Args

dims
Dimensions as tuple, list or Shape.

Returns

tuple

Expand source code
def get_sizes(self, dims: tuple or list or 'Shape') -> tuple:
    """
    See Also:
        `Shape.get_size()`

    Args:
        dims: Dimensions as `tuple`, `list` or `Shape`.

    Returns:
        `tuple`
    """
    assert isinstance(dims, (tuple, list, Shape)), f"get_sizes() requires a sequence of dimensions but got {dims}"
    return tuple([self.get_size(dim) for dim in dims])
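For example:

from phi.math import spatial

s = spatial(x=64, y=32)
s.get_size('x')          # 64
s.get_sizes(('x', 'y'))  # (64, 32)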
def get_type(self, dim: str) ‑> str

See Also: Shape.get_types(), Shape.type.

Args

dim
Dimension, either as name str or single-dimension Shape.

Returns

Dimension type as str.

Expand source code
def get_type(self, dim: str or 'Shape') -> str:
    """
    See Also:
        `Shape.get_types()`, `Shape.type`.

    Args:
        dim: Dimension, either as name `str` or single-dimension `Shape`.

    Returns:
        Dimension type as `str`.
    """
    if isinstance(dim, str):
        return self.types[self.names.index(dim)]
    elif isinstance(dim, Shape):
        assert dim.rank == 1, f"Shape.get_type() only accepts single-dimension Shapes but got {dim}"
        return self.types[self.names.index(dim.name)]
    else:
        raise ValueError(dim)
def get_types(self, dims: tuple) ‑> tuple

See Also: Shape.get_type()

Args

dims
Dimensions as tuple, list or Shape.

Returns

tuple

Expand source code
def get_types(self, dims: tuple or list or 'Shape') -> tuple:
    """
    See Also:
        `Shape.get_type()`

    Args:
        dims: Dimensions as `tuple`, `list` or `Shape`.

    Returns:
        `tuple`
    """
    if isinstance(dims, (tuple, list)):
        return tuple(self.get_type(n) for n in dims)
    elif isinstance(dims, Shape):
        return tuple(self.get_type(n) for n in dims.names)
    else:
        raise ValueError(dims)
def index(self, dim: str) ‑> int

Finds the index of the dimension within this Shape.

See Also: Shape.indices().

Args

dim
Dimension name or single-dimension Shape.

Returns

Index as int.

Expand source code
def index(self, dim: str or 'Shape' or None) -> int:
    """
    Finds the index of the dimension within this `Shape`.

    See Also:
        `Shape.indices()`.

    Args:
        dim: Dimension name or single-dimension `Shape`.

    Returns:
        Index as `int`.
    """
    if dim is None:
        return None
    elif isinstance(dim, str):
        return self.names.index(dim)
    elif isinstance(dim, Shape):
        assert dim.rank == 1, f"index() requires a single dimension as input but got {dim}. Use indices() for multiple dimensions."
        return self.names.index(dim.name)
    else:
        raise ValueError(f"index() requires a single dimension as input but got {dim}")
def indices(self, dims: tuple) ‑> Tuple[int]

Finds the indices of the given dimensions within this Shape.

See Also: Shape.index().

Args

dims
Sequence of dimensions as tuple, list or Shape.

Returns

Indices as tuple[int].

Expand source code
def indices(self, dims: tuple or list or 'Shape') -> Tuple[int]:
    """
    Finds the indices of the given dimensions within this `Shape`.

    See Also:
        `Shape.index()`.

    Args:
        dims: Sequence of dimensions as `tuple`, `list` or `Shape`.

    Returns:
        Indices as `tuple[int]`.
    """
    if isinstance(dims, (list, tuple)):
        return tuple(self.index(n) for n in dims)
    elif isinstance(dims, Shape):
        return tuple(self.index(n) for n in dims.names)
    else:
        raise ValueError(f"indices() requires a sequence of dimensions but got {dims}")
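For example (the merged shape orders batch dimensions before spatial ones):

from phi.math import batch, spatial

s = batch(b=8) & spatial(x=28, y=28)
s.index('x')           # 1
s.indices(('b', 'y'))  # (0, 2)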
def mask(self, names: tuple)

Returns a binary sequence corresponding to the names of this Shape. A value of 1 means that a dimension of this Shape is contained in names.

Args

names
Dimension names as str, tuple, list, set or Shape.

Returns

binary sequence

Expand source code
def mask(self, names: tuple or list or set or 'Shape'):
    """
    Returns a binary sequence corresponding to the names of this Shape.
    A value of 1 means that a dimension of this Shape is contained in `names`.

    Args:
      names: Dimension names as `str`, `tuple`, `list`, `set` or `Shape`.

    Returns:
      binary sequence

    """
    if isinstance(names, str):
        names = [names]
    elif isinstance(names, Shape):
        names = names.names
    mask = [1 if name in names else 0 for name in self.names]
    return tuple(mask)
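For example:

from phi.math import batch, spatial

s = batch(b=8) & spatial(x=28, y=28)
s.mask('x')         # (0, 1, 0)
s.mask(('b', 'x'))  # (1, 1, 0)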
def meshgrid(self)

Builds a sequence containing all multi-indices within a tensor of this shape.

Expand source code
def meshgrid(self):
    """Builds a sequence containing all multi-indices within a tensor of this shape."""
    indices = [0] * self.rank
    while True:
        yield {name: index for name, index in zip(self.names, indices)}
        for i in range(self.rank-1, -1, -1):
            indices[i] = (indices[i] + 1) % self.sizes[i]
            if indices[i] != 0:
                break
        else:
            return
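For example, indices are enumerated with the last dimension varying fastest:

from phi.math import spatial

for idx in spatial(x=2, y=2).meshgrid():
    print(idx)  # {'x': 0, 'y': 0}, {'x': 0, 'y': 1}, {'x': 1, 'y': 0}, {'x': 1, 'y': 1}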
def only(self, dims: str)

Builds a new shape from this one that only contains the given dimensions. Dimensions in dims that are not part of this Shape are ignored.

The complementary operation is Shape.without().

Args

dims
Dimensions to keep as str, tuple, list or Shape.

Returns

Shape containing only specified dimensions

Expand source code
def only(self, dims: str or tuple or list or 'Shape'):
    """
    Builds a new shape from this one that only contains the given dimensions.
    Dimensions in `dims` that are not part of this Shape are ignored.
    
    The complementary operation is `Shape.without()`.

    Args:
      dims: Dimensions to keep as `str`, `tuple`, `list` or `Shape`.

    Returns:
      Shape containing only specified dimensions

    """
    if isinstance(dims, str):
        dims = parse_dim_order(dims)
    if isinstance(dims, (tuple, list)):
        return self[[i for i in range(self.rank) if self.names[i] in dims]]
    elif isinstance(dims, Shape):
        return self[[i for i in range(self.rank) if self.names[i] in dims.names]]
    elif dims is None:  # keep all
        return self
    else:
        raise ValueError(dims)
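For example:

from phi.math import batch, spatial, channel

s = batch(b=8) & spatial(x=28, y=28) & channel(vector=2)
s.only('x,y')      # shape containing x and y
s.only(s.spatial)  # same result, selected via a Shape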
def unstack(self, dim='dims') ‑> Tuple[phi.math._shape.Shape]

Slices this Shape along a dimension. The dimension listing the sizes of the shape is referred to as 'dims'.

Non-uniform tensor shapes may be unstacked along other dimensions as well, see https://tum-pbs.github.io/PhiFlow/Math.html#non-uniform-tensors

Args

dim
dimension to unstack

Returns

slices of this shape

Expand source code
def unstack(self, dim='dims') -> Tuple['Shape']:
    """
    Slices this `Shape` along a dimension.
    The dimension listing the sizes of the shape is referred to as `'dims'`.

    Non-uniform tensor shapes may be unstacked along other dimensions as well, see
    https://tum-pbs.github.io/PhiFlow/Math.html#non-uniform-tensors

    Args:
        dim: dimension to unstack

    Returns:
        slices of this shape
    """
    if dim == 'dims':
        return tuple(Shape([self.sizes[i]], [self.names[i]], [self.types[i]]) for i in range(self.rank))
    if dim not in self:
        return tuple([self])
    else:
        from ._tensors import Tensor
        inner = self.without(dim)
        sizes = []
        dim_size = self.get_size(dim)
        for size in inner.sizes:
            if isinstance(size, Tensor) and dim in size.shape:
                sizes.append(size.unstack(dim))
                dim_size = size.shape.get_size(dim)
            else:
                sizes.append(size)
        assert isinstance(dim_size, int)
        shapes = tuple(Shape([int(size[i]) if isinstance(size, tuple) else size for size in sizes], inner.names, inner.types) for i in range(dim_size))
        return shapes
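For example:

from phi.math import spatial

s = spatial(x=64, y=32)
s.unstack()  # tuple of the rank-1 shapes for x and y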
def with_size(self, size: int)

Only for single-dimension shapes. Returns a Shape representing this dimension but with a different size.

See Also: Shape.with_sizes().

Args

size
Replacement size for this dimension.

Returns

Shape

Expand source code
def with_size(self, size: int):
    """
    Only for single-dimension shapes.
    Returns a `Shape` representing this dimension but with a different size.

    See Also:
        `Shape.with_sizes()`.

    Args:
        size: Replacement size for this dimension.

    Returns:
        `Shape`
    """
    assert self.rank == 1, "Shape.with_size() is only defined for shapes of rank 1."
    return self.with_sizes([size])
def with_sizes(self, sizes: tuple)

Returns a new Shape matching the dimension names and types of self but with different sizes.

See Also: Shape.with_size().

Args

sizes

One of

  • tuple / list of same length as self containing replacement sizes.
  • Shape of any rank. Replaces sizes for dimensions shared by sizes and self.

Returns

Shape with same names and types as self.

Expand source code
def with_sizes(self, sizes: tuple or list or 'Shape'):
    """
    Returns a new `Shape` matching the dimension names and types of `self` but with different sizes.

    See Also:
        `Shape.with_size()`.

    Args:
        sizes: One of

            * `tuple` / `list` of same length as `self` containing replacement sizes.
            * `Shape` of any rank. Replaces sizes for dimensions shared by `sizes` and `self`.

    Returns:
        `Shape` with same names and types as `self`.
    """
    if isinstance(sizes, Shape):
        sizes = [sizes.get_size(dim) if dim in sizes else self.sizes[i] for i, dim in enumerate(self.names)]
        return Shape(sizes, self.names, self.types)
    else:
        assert len(sizes) == len(self.sizes), f"Cannot create shape from {self} with sizes {sizes}"
        return Shape(sizes, self.names, self.types)
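For example:

from phi.math import spatial

spatial(x=64).with_size(128)                  # x resized to 128
spatial(x=64, y=32).with_sizes((10, 10))      # both sizes replaced
spatial(x=64, y=32).with_sizes(spatial(y=8))  # only y replaced, x keeps 64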
def without(self, dims: str) ‑> phi.math._shape.Shape

Builds a new shape from this one that is missing all given dimensions. Dimensions in dims that are not part of this Shape are ignored.

The complementary operation is Shape.only().

Args

dims
Dimensions to exclude as str, tuple, list or Shape. Dimensions that are not part of this shape are ignored.

Returns

Shape without specified dimensions

Expand source code
def without(self, dims: str or tuple or list or 'Shape') -> 'Shape':
    """
    Builds a new shape from this one that is missing all given dimensions.
    Dimensions in `dims` that are not part of this Shape are ignored.
    
    The complementary operation is `Shape.only()`.

    Args:
      dims: Dimensions to exclude as `str`, `tuple`, `list` or `Shape`. Dimensions that are not part of this shape are ignored.

    Returns:
      Shape without specified dimensions
    """
    if isinstance(dims, str):
        return self[[i for i in range(self.rank) if self.names[i] != dims]]
    if isinstance(dims, (tuple, list)):
        return self[[i for i in range(self.rank) if self.names[i] not in dims]]
    elif isinstance(dims, Shape):
        return self[[i for i in range(self.rank) if self.names[i] not in dims.names]]
    # elif dims is None:  # subtract all
    #     return EMPTY_SHAPE
    else:
        raise ValueError(dims)
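For example:

from phi.math import batch, spatial

s = batch(b=8) & spatial(x=28, y=28)
s.without('x')         # shape containing b and y
s.without(('x', 'y'))  # shape containing only b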
class Solve (method: str, relative_tolerance: float, absolute_tolerance: float, max_iterations: int = 1000, x0: ~X = None, suppress: tuple = (), gradient_solve: Solve[Y, X] = None)

Specifies parameters and stopping criteria for solving a minimization problem or system of equations.

Expand source code
class Solve(Generic[X, Y]):  # TODO move to phi.math._functional, put Tensors there
    """
    Specifies parameters and stopping criteria for solving a minimization problem or system of equations.
    """

    def __init__(self,
                 method: str,
                 relative_tolerance: float or Tensor,
                 absolute_tolerance: float or Tensor,
                 max_iterations: int or Tensor = 1000,
                 x0: X or Any = None,
                 suppress: tuple or list = (),
                 gradient_solve: 'Solve[Y, X]' or None = None):
        assert isinstance(method, str)
        self.method: str = method
        """ Optimization method to use. Available solvers depend on the solve function that is used to perform the solve. """
        self.relative_tolerance: Tensor = math.to_float(wrap(relative_tolerance))
        """ Relative tolerance for linear solves only. This must be `0` for minimization problems.
        For systems of equations *f(x)=y*, the final tolerance is `max(relative_tolerance * norm(y), absolute_tolerance)`. """
        self.absolute_tolerance: Tensor = math.to_float(wrap(absolute_tolerance))
        """ Absolut tolerance for optimization problems and linear solves.
        For systems of equations *f(x)=y*, the final tolerance is `max(relative_tolerance * norm(y), absolute_tolerance)`. """
        self.max_iterations: Tensor = math.to_int32(wrap(max_iterations))
        """ Maximum number of iterations to perform before raising a `NotConverged` error is raised. """
        self.x0 = x0
        """ Initial guess for the method, of same type and dimensionality as the solve result.
         This property must be set to a value compatible with the solution `x` before running a method. """
        assert all(issubclass(err, ConvergenceException) for err in suppress)
        self.suppress: tuple = tuple(suppress)
        """ Error types to suppress; `tuple` of `ConvergenceException` types. For these errors, the solve function will instead return the partial result without raising the error. """
        self._gradient_solve: Solve[Y, X] = gradient_solve
        self.id = str(uuid.uuid4())

    @property
    def gradient_solve(self) -> 'Solve[Y, X]':
        """
        Parameters to use for the gradient pass when an implicit gradient is computed.
        If `None`, a duplicate of this `Solve` is created for the gradient solve.

        In any case, the gradient solve information will be stored in `gradient_solve.result`.
        """
        if self._gradient_solve is None:
            self._gradient_solve = Solve(self.method, self.relative_tolerance, self.absolute_tolerance, self.max_iterations, None, self.suppress)
        return self._gradient_solve

    def __repr__(self):
        return f"{self.method} with tolerance {self.relative_tolerance} (rel), {self.absolute_tolerance} (abs), max_iterations={self.max_iterations}"

    def __eq__(self, other):
        if not isinstance(other, Solve):
            return False
        if self.method != other.method \
                or (self.absolute_tolerance != other.absolute_tolerance).any \
                or (self.relative_tolerance != other.relative_tolerance).any \
                or (self.max_iterations != other.max_iterations).any \
                or self.suppress != other.suppress:
            return False
        return self.x0 == other.x0

    def __variable_attrs__(self) -> Tuple[str]:
        return 'x0',

Ancestors

  • typing.Generic

Instance variables

var absolute_tolerance

Absolute tolerance for optimization problems and linear solves. For systems of equations f(x)=y, the final tolerance is max(relative_tolerance * norm(y), absolute_tolerance).

var gradient_solve : phi.math._functional.Solve[~Y, ~X]

Parameters to use for the gradient pass when an implicit gradient is computed. If None, a duplicate of this Solve is created for the gradient solve.

In any case, the gradient solve information will be stored in gradient_solve.result.

Expand source code
@property
def gradient_solve(self) -> 'Solve[Y, X]':
    """
    Parameters to use for the gradient pass when an implicit gradient is computed.
    If `None`, a duplicate of this `Solve` is created for the gradient solve.

    In any case, the gradient solve information will be stored in `gradient_solve.result`.
    """
    if self._gradient_solve is None:
        self._gradient_solve = Solve(self.method, self.relative_tolerance, self.absolute_tolerance, self.max_iterations, None, self.suppress)
    return self._gradient_solve
var max_iterations

Maximum number of iterations to perform before a NotConverged error is raised.

var method

Optimization method to use. Available solvers depend on the solve function that is used to perform the solve.

var relative_tolerance

Relative tolerance for linear solves only. This must be 0 for minimization problems. For systems of equations f(x)=y, the final tolerance is max(relative_tolerance * norm(y), absolute_tolerance).

var suppress

Error types to suppress; tuple of ConvergenceException types. For these errors, the solve function will instead return the partial result without raising the error.

var x0

Initial guess for the method, of same type and dimensionality as the solve result. This property must be set to a value compatible with the solution x before running a method.
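
As a hedged sketch, passing convergence error types via suppress makes the solve return the partial result instead of raising; f and y are placeholders:

solve = math.Solve('CG', 1e-5, 0, suppress=(math.NotConverged,))
x = math.solve_linear(f, y, solve)  # partial result is returned even if not converged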

class SolveInfo

Stores information about the solution or trajectory of a solve.

When representing the full optimization trajectory, all tracked quantities will have an additional trajectory batch dimension.
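
SolveInfo objects are typically obtained via a SolveTape (see below). A short inspection sketch, with f, y and solve as placeholders:

with math.SolveTape() as solves:
    x = math.solve_linear(f, y, solve)
info = solves[solve]
print(info.iterations, info.converged, info.msg)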

Expand source code
class SolveInfo(Generic[X, Y]):
    """
    Stores information about the solution or trajectory of a solve.

    When representing the full optimization trajectory, all tracked quantities will have an additional `trajectory` batch dimension.
    """

    def __init__(self,
                 solve: Solve,
                 x: X,
                 residual: Y or None,
                 iterations: Tensor or None,
                 function_evaluations: Tensor or None,
                 converged: Tensor,
                 diverged: Tensor,
                 method: str,
                 msg: str,
                 solve_time: float):
        # tuple.__new__(SolveInfo, (x, residual, iterations, function_evaluations, converged, diverged))
        self.solve: Solve[X, Y] = solve
        """ `Solve`, Parameters specified for the solve. """
        self.x: X = x
        """ `Tensor` or `TensorLike`, solution estimate. """
        self.residual: Y = residual
        """ `Tensor` or `TensorLike`, residual vector for systems of equations or function value for minimization problems. """
        self.iterations: Tensor = iterations
        """ `Tensor`, number of performed iterations to reach this state. """
        self.function_evaluations: Tensor = function_evaluations
        """ `Tensor`, how often the function (or its gradient function) was called. """
        self.converged: Tensor = converged
        """ `Tensor`, whether the residual is within the specified tolerance. """
        self.diverged: Tensor = diverged
        """ `Tensor`, whether the solve has diverged at this point. """
        self.method = method
        """ `str`, which method and implementation that was used. """
        if not msg and all_available(diverged, converged):
            if self.diverged.any:
                msg = f"Solve diverged within {iterations if iterations is not None else '?'} iterations using {method}."
            elif not self.converged.trajectory[-1].all:
                msg = f"Solve did not converge to rel={solve.relative_tolerance}, abs={solve.absolute_tolerance} within {solve.max_iterations} iterations using {method}. Max residual: {[math.max_(t.trajectory[-1]) for t in disassemble_tree(self.residual)[1]]}"
            else:
                msg = f"Converged within {iterations if iterations is not None else '?'} iterations."
        self.msg = msg
        """ `str`, termination message """
        self.solve_time = solve_time
        """ Time spent in Backend solve function (in seconds) """

    def __repr__(self):
        return self.msg

    def snapshot(self, index):
        return SolveInfo(self.solve, self.x.trajectory[index], self.residual.trajectory[index], self.iterations.trajectory[index], self.function_evaluations.trajectory[index], self.converged.trajectory[index], self.diverged.trajectory[index], self.method, self.msg, self.solve_time)

    def convergence_check(self, only_warn: bool):
        if not all_available(self.diverged, self.converged):
            return
        if self.diverged.any:
            if Diverged not in self.solve.suppress:
                if only_warn:
                    warnings.warn(self.msg)
                else:
                    raise Diverged(self)
        if not self.converged.trajectory[-1].all:
            if NotConverged not in self.solve.suppress:
                if only_warn:
                    warnings.warn(self.msg)
                else:
                    raise NotConverged(self)

Ancestors

  • typing.Generic

Instance variables

var converged

Tensor, whether the residual is within the specified tolerance.

var diverged

Tensor, whether the solve has diverged at this point.

var function_evaluations

Tensor, how often the function (or its gradient function) was called.

var iterations

Tensor, number of iterations performed to reach this state.

var method

str, which method and implementation was used.

var msg

str, termination message

var residual

Tensor or TensorLike, residual vector for systems of equations or function value for minimization problems.

var solve

Solve, parameters specified for the solve.

var solve_time

Time spent in Backend solve function (in seconds)

var x

Tensor or TensorLike, solution estimate.

Methods

def convergence_check(self, only_warn: bool)
Expand source code
def convergence_check(self, only_warn: bool):
    if not all_available(self.diverged, self.converged):
        return
    if self.diverged.any:
        if Diverged not in self.solve.suppress:
            if only_warn:
                warnings.warn(self.msg)
            else:
                raise Diverged(self)
    if not self.converged.trajectory[-1].all:
        if NotConverged not in self.solve.suppress:
            if only_warn:
                warnings.warn(self.msg)
            else:
                raise NotConverged(self)
def snapshot(self, index)
Expand source code
def snapshot(self, index):
    return SolveInfo(self.solve, self.x.trajectory[index], self.residual.trajectory[index], self.iterations.trajectory[index], self.function_evaluations.trajectory[index], self.converged.trajectory[index], self.diverged.trajectory[index], self.method, self.msg, self.solve_time)
class SolveTape (record_trajectories=False)

Used to record additional information about solves invoked via solve_linear(), solve_nonlinear() or minimize(). While a SolveTape is active, certain performance optimizations and algorithm implementations may be disabled.

To access a SolveInfo of a recorded solve, use

solve = Solve(method, ...)
with SolveTape() as solves:
    x = math.solve_linear(f, y, solve)
result: SolveInfo = solves[solve]  # get by Solve
result: SolveInfo = solves[0]  # get by index

Args

record_trajectories
When enabled, the entries of SolveInfo will contain an additional batch dimension named trajectory.
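
A sketch of recording the full trajectory (f, y, solve as placeholders):

with math.SolveTape(record_trajectories=True) as solves:
    x = math.solve_linear(f, y, solve)
estimates = solves[solve].x  # carries an additional batch dimension named 'trajectory'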
Expand source code
class SolveTape:
    """
    Used to record additional information about solves invoked via `solve_linear()`, `solve_nonlinear()` or `minimize()`.
    While a `SolveTape` is active, certain performance optimizations and algorithm implementations may be disabled.

    To access a `SolveInfo` of a recorded solve, use
    ```python
    solve = Solve(method, ...)
    with SolveTape() as solves:
        x = math.solve_linear(f, y, solve)
    result: SolveInfo = solves[solve]  # get by Solve
    result: SolveInfo = solves[0]  # get by index
    ```
    """

    def __init__(self, record_trajectories=False):
        """
        Args:
            record_trajectories: When enabled, the entries of `SolveInfo` will contain an additional batch dimension named `trajectory`.
        """
        self.record_trajectories = record_trajectories
        self.solves: List[SolveInfo] = []
        self.solve_ids: List[str] = []

    def __enter__(self):
        _SOLVE_TAPES.append(self)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        _SOLVE_TAPES.remove(self)

    def _add(self, solve: Solve, trj: bool, result: SolveInfo):
        if any(s.solve.id == solve.id for s in self.solves):
            warnings.warn("SolveTape contains two results for the same solve settings. SolveTape[solve] will return the first solve result.")
        if self.record_trajectories:
            assert trj, "Solve did not record a trajectory."
            self.solves.append(result)
        elif trj:
            self.solves.append(result.snapshot(-1))
        else:
            self.solves.append(result)
        self.solve_ids.append(solve.id)

    def __getitem__(self, item) -> SolveInfo:
        if isinstance(item, int):
            return self.solves[item]
        else:
            assert isinstance(item, Solve)
            solves = [s for s in self.solves if s.solve.id == item.id]
            if len(solves) == 0:
                raise KeyError(f"No solve recorded with key '{item}'.")
            assert len(solves) == 1
            return solves[0]

    def __iter__(self):
        return iter(self.solves)

    def __len__(self):
        return len(self.solves)
class Tensor

Abstract base class to represent structured data of one data type. This class replaces the native tensor classes numpy.ndarray, torch.Tensor, tensorflow.Tensor or jax.numpy.ndarray as the main data container in ΦFlow.

Tensor instances are different from native tensors in two important ways:

  • The dimensions of Tensors have names and types.
  • Tensors can have non-uniform shapes, meaning that the size of dimensions can vary along other dimensions.

To check whether a value is a tensor, use isinstance(value, Tensor).

To construct a Tensor, use tensor(), wrap() or one of the basic tensor creation functions, see https://tum-pbs.github.io/PhiFlow/Math.html#tensor-creation .

Tensors are not editable. When backed by an editable native tensor, e.g. a numpy.ndarray, do not edit the underlying data structure.
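
A short creation sketch with illustrative dimension names and values:

from phi import math
from phi.math import spatial

t = math.tensor([[0., 1., 2.], [3., 4., 5.]], spatial('y', 'x'))
print(t.shape)                 # two spatial dimensions 'y' and 'x', sizes inferred from the data
assert isinstance(t, math.Tensor)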

Expand source code
class Tensor:
    """
    Abstract base class to represent structured data of one data type.
    This class replaces the native tensor classes `numpy.ndarray`, `torch.Tensor`, `tensorflow.Tensor` or `jax.numpy.ndarray` as the main data container in Φ<sub>Flow</sub>.

    `Tensor` instances are different from native tensors in two important ways:

    * The dimensions of Tensors have *names* and *types*.
    * Tensors can have non-uniform shapes, meaning that the size of dimensions can vary along other dimensions.

    To check whether a value is a tensor, use `isinstance(value, Tensor)`.

    To construct a Tensor, use `phi.math.tensor()`, `phi.math.wrap()` or one of the basic tensor creation functions,
    see https://tum-pbs.github.io/PhiFlow/Math.html#tensor-creation .

    Tensors are not editable.
    When backed by an editable native tensor, e.g. a `numpy.ndarray`, do not edit the underlying data structure.
    """

    def native(self, order: str or tuple or list or Shape = None):
        """
        Returns a native tensor object with the dimensions ordered according to `order`.
        
        Transposes the underlying tensor to match the name order and adds singleton dimensions for new dimension names.
        If a dimension of the tensor is not listed in `order`, a `ValueError` is raised.

        Args:
            order: (Optional) list of dimension names. If not given, the current dimension order is kept.

        Returns:
            Native tensor representation

        Raises:
            ValueError if the tensor cannot be transposed to match `order`
        """
        raise NotImplementedError()

    def numpy(self, order: str or tuple or list = None) -> np.ndarray:
        """
        Converts this tensor to a `numpy.ndarray` with dimensions ordered according to `order`.
        
        *Note*: Using this function breaks the autograd chain. The returned tensor is not differentiable.
        To get a differentiable tensor, use `Tensor.native()` instead.
        
        Transposes the underlying tensor to match the name order and adds singleton dimensions for new dimension names.
        If a dimension of the tensor is not listed in `order`, a `ValueError` is raised.

        If this `Tensor` is backed by a NumPy array, a reference to this array may be returned.

        See Also:
            `phi.math.numpy()`

        Args:
            order: (Optional) list of dimension names. If not given, the current dimension order is kept.

        Returns:
            NumPy representation

        Raises:
            ValueError if the tensor cannot be transposed to match `order`
        """
        native = self.native(order=order)
        return choose_backend(native).numpy(native)

    @property
    def dtype(self) -> DType:
        """ Data type of the elements of this `Tensor`. """
        raise NotImplementedError()

    @property
    def shape(self) -> Shape:
        """ The `Shape` lists the dimensions with their sizes, names and types. """
        raise NotImplementedError()

    @property
    def default_backend(self):
        from ._ops import choose_backend_t
        return choose_backend_t(self)

    def _with_shape_replaced(self, new_shape: Shape):
        raise NotImplementedError()

    def _with_natives_replaced(self, natives: list):
        """ Replaces all n _natives() of this Tensor with the first n elements of the list and removes them from the list. """
        raise NotImplementedError()

    @property
    def rank(self) -> int:
        """
        Number of explicit dimensions of this `Tensor`. Equal to `tensor.shape.rank`.
        This replaces [`numpy.ndarray.ndim`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.ndim.html) /
        [`torch.Tensor.dim`](https://pytorch.org/docs/master/generated/torch.Tensor.dim.html) /
        [`tf.rank()`](https://www.tensorflow.org/api_docs/python/tf/rank) /
        [`jax.numpy.ndim()`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndim.html).
        """
        return self.shape.rank

    @property
    def _is_tracer(self) -> bool:
        """
        Tracers store additional internal information.
        They should not be converted to `native()` in intermediate operations.
        
        TensorStack prevents performing the actual stack operation if one of its component tensors is special.
        """
        raise NotImplementedError()

    def __len__(self):
        return self.shape.volume if self.rank == 1 else NotImplemented

    def __bool__(self):
        assert self.rank == 0, f"Cannot convert tensor with non-empty shape {self.shape} to bool. Use tensor.any or tensor.all instead."
        from ._ops import all_
        if not self.default_backend.supports(Backend.jit_compile):  # NumPy
            return bool(self.native()) if self.rank == 0 else bool(all_(self).native())
        else:
            # __bool__ does not work with TensorFlow tracing.
            # TensorFlow needs to see a tf.Tensor in loop conditions but won't allow bool() invocations.
            # However, this function must always return a Python bool.
            raise AssertionError("To evaluate the boolean value of a Tensor, use 'Tensor.all'.")

    @property
    def all(self):
        """ Whether all values of this `Tensor` are `True` as a native bool. """
        from ._ops import all_, cast
        if self.rank == 0:
            return cast(self, DType(bool)).native()
        else:
            return all_(self, dim=self.shape).native()

    @property
    def any(self):
        """ Whether this `Tensor` contains a `True` value as a native bool. """
        from ._ops import any_, cast
        if self.rank == 0:
            return cast(self, DType(bool)).native()
        else:
            return any_(self, dim=self.shape).native()

    @property
    def mean(self):
        """ Mean value of this `Tensor` as a native scalar. """
        from ._ops import mean
        return mean(self, dim=self.shape).native()

    @property
    def sum(self):
        """ Sum of all values of this `Tensor` as a native scalar. """
        from ._ops import sum_
        return sum_(self, dim=self.shape).native()

    @property
    def min(self):
        """ Minimum value of this `Tensor` as a native scalar. """
        from ._ops import min_
        return min_(self, dim=self.shape).native()

    @property
    def max(self):
        """ Maximum value of this `Tensor` as a native scalar. """
        from ._ops import max_
        return max_(self, dim=self.shape).native()

    def __int__(self):
        return int(self.native()) if self.shape.volume == 1 else NotImplemented

    def __float__(self):
        return float(self.native()) if self.shape.volume == 1 else NotImplemented

    def __complex__(self):
        return complex(self.native()) if self.shape.volume == 1 else NotImplemented

    def __index__(self):
        assert self.shape.volume == 1, f"Only scalar tensors can be converted to index but has shape {self.shape}"
        assert self.dtype.kind == int, f"Only int tensors can be converted to index but dtype is {self.dtype}"
        return int(self.native())

    def _summary_str(self) -> str:
        try:
            from ._ops import all_available
            if all_available(self):
                if self.rank == 0:
                    return str(self.numpy())
                elif self.shape.volume is not None and self.shape.volume <= 6:
                    content = list(np.reshape(self.numpy(self.shape.names), [-1]))
                    content = ', '.join([repr(number) for number in content])
                    if self.shape.rank == 1 and (self.dtype.kind in (bool, int) or self.dtype.precision == get_precision()):
                        if self.shape.name == 'vector' and self.shape.type == CHANNEL_DIM:
                            return f"({content})"
                        return f"({content}) along {self.shape.name}{TYPE_ABBR[self.shape.type]}"
                    return f"{self.shape} {self.dtype}  {content}"
                else:
                    if self.dtype.kind in (float, int):
                        min_val, max_val = self.min, self.max
                        return f"{self.shape} {self.dtype}  {min_val} < ... < {max_val}"
                    elif self.dtype.kind == complex:
                        max_val = abs(self).max
                        return f"{self.shape} {self.dtype} |...| < {max_val}"
                    elif self.dtype.kind == bool:
                        return f"{self.shape} {self.sum} / {self.shape.volume} True"
                    else:
                        return f"{self.shape} {self.dtype}"
            else:
                if self.rank == 0:
                    return f"{self.default_backend} scalar {self.dtype}"
                else:
                    return f"{self.default_backend} {self.shape} {self.dtype}"
        except BaseException as err:
            return f"{self.shape}, failed to fetch values: {err}"

    def __repr__(self):
        return self._summary_str()

    def __format__(self, format_spec):
        from ._ops import all_available
        if not all_available(self):
            return self._summary_str()
        if self.shape.volume > 1:
            return self._summary_str()
        val = self.numpy()
        return format(val, format_spec)

    def __getitem__(self, item):
        if isinstance(item, Tensor):
            from ._ops import gather
            return gather(self, item)
        if isinstance(item, (int, slice)):
            assert self.rank == 1
            item = {self.shape.names[0]: item}
        if isinstance(item, (tuple, list)):
            if item[0] == Ellipsis:
                assert len(item) - 1 == self.shape.channel.rank
                item = {name: selection for name, selection in zip(self.shape.channel.names, item[1:])}
            elif len(item) == self.shape.channel.rank:
                item = {name: selection for name, selection in zip(self.shape.channel.names, item)}
            elif len(item) == self.shape.rank:  # legacy indexing
                warnings.warn("Slicing with sequence should only be used for channel dimensions.")
                item = {name: selection for name, selection in zip(self.shape.names, item)}
        assert isinstance(item, dict)  # dict mapping name -> slice/int
        return self._getitem(item)

    def _getitem(self, selection: dict) -> 'Tensor':
        """
        Slice the tensor along specified dimensions.

        Args:
          selection: dim_name: str -> int or slice
          selection: dict: 

        Returns:

        """
        raise NotImplementedError()

    def flip(self, *dims: str) -> 'Tensor':
        """
        Reverses the order of elements along one or multiple dimensions.

        Args:
            *dims: dimensions to flip

        Returns:
            `Tensor` of the same `Shape`
        """
        raise NotImplementedError()

    # def __setitem__(self, key, value):
    #     """
    #     All tensors are editable.
    #
    #     :param key: list/tuple of slices / indices
    #     :param value:
    #     :return:
    #     """
    #     raise NotImplementedError()

    def unstack(self, dimension: str):
        """
        Splits this tensor along the specified dimension.
        The returned tensors have the same dimensions as this tensor save the unstacked dimension.

        Raises an error if the dimension is not part of the `Shape` of this `Tensor`.

        See Also:
            `TensorDim.unstack()`

        Args:
          dimension(str or int or TensorDim): name of the dimension to unstack along

        Returns:
          tuple of tensors

        """
        raise NotImplementedError()

    def dimension(self, name: str or Shape) -> 'TensorDim':
        """
        Returns a reference to a specific dimension of this tensor.
        This is equivalent to the syntax `tensor.<name>`.

        The dimension need not be part of the `Tensor.shape`, in which case its size is 1.

        Args:
            name: dimension name

        Returns:
            `TensorDim` corresponding to a dimension of this tensor
        """
        if isinstance(name, str):
            return TensorDim(self, name)
        elif isinstance(name, Shape):
            return TensorDim(self, name.name)
        else:
            raise ValueError(name)

    def __getattr__(self, name):
        if name.startswith('_'):
            raise AttributeError(f"'{type(self)}' object has no attribute '{name}'")
        if name == 'is_tensor_like':  # TensorFlow replaces abs() while tracing and checks for this attribute
            raise AttributeError(f"'{type(self)}' object has no attribute '{name}'")
        assert name not in ('shape', '_shape', 'tensor'), name
        return TensorDim(self, name)

    def __add__(self, other):
        return self._op2(other, lambda x, y: x + y, lambda x, y: choose_backend(x, y).add(x, y))

    def __radd__(self, other):
        return self._op2(other, lambda x, y: y + x, lambda x, y: choose_backend(x, y).add(y, x))

    def __sub__(self, other):
        return self._op2(other, lambda x, y: x - y, lambda x, y: choose_backend(x, y).sub(x, y))

    def __rsub__(self, other):
        return self._op2(other, lambda x, y: y - x, lambda x, y: choose_backend(x, y).sub(y, x))

    def __and__(self, other):
        return self._op2(other, lambda x, y: x & y, lambda x, y: choose_backend(x, y).and_(x, y))

    def __or__(self, other):
        return self._op2(other, lambda x, y: x | y, lambda x, y: choose_backend(x, y).or_(x, y))

    def __xor__(self, other):
        return self._op2(other, lambda x, y: x ^ y, lambda x, y: choose_backend(x, y).xor(x, y))

    def __mul__(self, other):
        return self._op2(other, lambda x, y: x * y, lambda x, y: choose_backend(x, y).mul(x, y))

    def __rmul__(self, other):
        return self._op2(other, lambda x, y: y * x, lambda x, y: choose_backend(x, y).mul(y, x))

    def __truediv__(self, other):
        return self._op2(other, lambda x, y: x / y, lambda x, y: choose_backend(x, y).div(x, y))

    def __rtruediv__(self, other):
        return self._op2(other, lambda x, y: y / x, lambda x, y: choose_backend(x, y).div(y, x))

    def __divmod__(self, other):
        return self._op2(other, lambda x, y: divmod(x, y), lambda x, y: divmod(x, y))

    def __rdivmod__(self, other):
        return self._op2(other, lambda x, y: divmod(y, x), lambda x, y: divmod(y, x))

    def __floordiv__(self, other):
        return self._op2(other, lambda x, y: x // y, lambda x, y: choose_backend(x, y).floordiv(x, y))

    def __rfloordiv__(self, other):
        return self._op2(other, lambda x, y: y // x, lambda x, y: choose_backend(x, y).floordiv(y, x))

    def __pow__(self, power, modulo=None):
        assert modulo is None
        return self._op2(power, lambda x, y: x ** y, lambda x, y: choose_backend(x, y).pow(x, y))

    def __rpow__(self, other):
        return self._op2(other, lambda x, y: y ** x, lambda x, y: choose_backend(x, y).pow(y, x))

    def __mod__(self, other):
        return self._op2(other, lambda x, y: x % y, lambda x, y: choose_backend(x, y).mod(x, y))

    def __rmod__(self, other):
        return self._op2(other, lambda x, y: y % x, lambda x, y: choose_backend(x, y).mod(y, x))

    def __eq__(self, other):
        return self._op2(other, lambda x, y: x == y, lambda x, y: choose_backend(x, y).equal(x, y))

    def __ne__(self, other):
        return self._op2(other, lambda x, y: x != y, lambda x, y: choose_backend(x, y).not_equal(x, y))

    def __lt__(self, other):
        return self._op2(other, lambda x, y: x < y, lambda x, y: choose_backend(x, y).greater_than(y, x))

    def __le__(self, other):
        return self._op2(other, lambda x, y: x <= y, lambda x, y: choose_backend(x, y).greater_or_equal(y, x))

    def __gt__(self, other):
        return self._op2(other, lambda x, y: x > y, lambda x, y: choose_backend(x, y).greater_than(x, y))

    def __ge__(self, other):
        return self._op2(other, lambda x, y: x >= y, lambda x, y: choose_backend(x, y).greater_or_equal(x, y))

    def __abs__(self):
        return self._op1(lambda t: choose_backend(t).abs(t))

    def __round__(self, n=None):
        return self._op1(lambda t: choose_backend(t).round(t))

    def __copy__(self):
        return self._op1(lambda t: choose_backend(t).copy(t, only_mutable=True))

    def __deepcopy__(self, memodict={}):
        return self._op1(lambda t: choose_backend(t).copy(t, only_mutable=False))

    def __neg__(self):
        return self._op1(lambda t: -t)

    def __invert__(self):
        return self._op1(lambda t: ~t)

    def __reversed__(self):
        assert self.shape.channel.rank == 1
        return self[::-1]

    def __iter__(self):
        assert self.rank == 1, f"Can only iterate over 1D tensors but got {self.shape}"
        return iter(self.native())

    def _tensor(self, other):
        if isinstance(other, Tensor):
            return other
        return compatible_tensor(other, compat_shape=self.shape, compat_natives=self._natives(), convert=False)

    def _op1(self, native_function):
        """
        Transform the values of this tensor given a function that can be applied to any native tensor.

        Args:
          native_function:

        Returns:

        """
        raise NotImplementedError(self.__class__)

    def _op2(self, other: 'Tensor', operator: Callable, native_function: Callable) -> 'Tensor':
        """
        Apply a broadcast operation on two tensors.

        Args:
          other: second argument
          operator: function (Tensor, Tensor) -> Tensor, used to propagate the operation to children tensors to have Python choose the callee
          native_function: function (native tensor, native tensor) -> native tensor
          other: 'Tensor': 
          operator: Callable:
          native_function: Callable:

        Returns:

        """
        raise NotImplementedError()

    def _natives(self) -> tuple:
        raise NotImplementedError(self.__class__)

    def _expand(self):
        """ Expands all compressed tensors to their defined size as if they were being used in `Tensor.native()`. """
        warnings.warn("Tensor._expand() is deprecated, use cached(Tensor) instead.", DeprecationWarning)
        raise NotImplementedError(self.__class__)

    def _tensor_reduce(self,
                       dims: Tuple[str],
                       native_function: Callable,
                       collapsed_function: Callable = lambda inner_reduced, collapsed_dims_to_reduce: inner_reduced,
                       unaffected_function: Callable = lambda value: value):
        raise NotImplementedError(self.__class__)

    def _simplify(self):
        return self

Subclasses

  • phi.math._functional.ShiftLinTracer
  • phi.math._tensors.CollapsedTensor
  • phi.math._tensors.NativeTensor
  • phi.math._tensors.TensorStack

Instance variables

var all

Whether all values of this Tensor are True as a native bool.

Expand source code
@property
def all(self):
    """ Whether all values of this `Tensor` are `True` as a native bool. """
    from ._ops import all_, cast
    if self.rank == 0:
        return cast(self, DType(bool)).native()
    else:
        return all_(self, dim=self.shape).native()
var any

Whether this Tensor contains a True value as a native bool.

Expand source code
@property
def any(self):
    """ Whether this `Tensor` contains a `True` value as a native bool. """
    from ._ops import any_, cast
    if self.rank == 0:
        return cast(self, DType(bool)).native()
    else:
        return any_(self, dim=self.shape).native()
var default_backend
Expand source code
@property
def default_backend(self):
    from ._ops import choose_backend_t
    return choose_backend_t(self)
var dtype : phi.math.backend._dtype.DType

Data type of the elements of this Tensor.

Expand source code
@property
def dtype(self) -> DType:
    """ Data type of the elements of this `Tensor`. """
    raise NotImplementedError()
var max

Maximum value of this Tensor as a native scalar.

Expand source code
@property
def max(self):
    """ Maximum value of this `Tensor` as a native scalar. """
    from ._ops import max_
    return max_(self, dim=self.shape).native()
var mean

Mean value of this Tensor as a native scalar.

Expand source code
@property
def mean(self):
    """ Mean value of this `Tensor` as a native scalar. """
    from ._ops import mean
    return mean(self, dim=self.shape).native()
var min

Minimum value of this Tensor as a native scalar.

Expand source code
@property
def min(self):
    """ Minimum value of this `Tensor` as a native scalar. """
    from ._ops import min_
    return min_(self, dim=self.shape).native()
var rank : int

Number of explicit dimensions of this Tensor. Equal to tensor.shape.rank. This replaces numpy.ndarray.ndim / torch.Tensor.dim / tf.rank() / jax.numpy.ndim().

Expand source code
@property
def rank(self) -> int:
    """
    Number of explicit dimensions of this `Tensor`. Equal to `tensor.shape.rank`.
    This replaces [`numpy.ndarray.ndim`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.ndim.html) /
    [`torch.Tensor.dim`](https://pytorch.org/docs/master/generated/torch.Tensor.dim.html) /
    [`tf.rank()`](https://www.tensorflow.org/api_docs/python/tf/rank) /
    [`jax.numpy.ndim()`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndim.html).
    """
    return self.shape.rank
var shape : phi.math._shape.Shape

The Shape lists the dimensions with their sizes, names and types.

Expand source code
@property
def shape(self) -> Shape:
    """ The `Shape` lists the dimensions with their sizes, names and types. """
    raise NotImplementedError()
var sum

Sum of all values of this Tensor as a native scalar.

Expand source code
@property
def sum(self):
    """ Sum of all values of this `Tensor` as a native scalar. """
    from ._ops import sum_
    return sum_(self, dim=self.shape).native()

Methods

def dimension(self, name: str) ‑> phi.math._tensors.TensorDim

Returns a reference to a specific dimension of this tensor. This is equivalent to the syntax tensor.<name>.

The dimension need not be part of the Tensor.shape, in which case its size is 1.

Args

name
dimension name

Returns

TensorDim corresponding to a dimension of this tensor
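
As a sketch, both lines below refer to the same dimension, assuming t has a dimension named 'x':

dim = t.dimension('x')
dim = t.x  # equivalent attribute syntax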

Expand source code
def dimension(self, name: str or Shape) -> 'TensorDim':
    """
    Returns a reference to a specific dimension of this tensor.
    This is equivalent to the syntax `tensor.<name>`.

    The dimension need not be part of the `Tensor.shape`, in which case its size is 1.

    Args:
        name: dimension name

    Returns:
        `TensorDim` corresponding to a dimension of this tensor
    """
    if isinstance(name, str):
        return TensorDim(self, name)
    elif isinstance(name, Shape):
        return TensorDim(self, name.name)
    else:
        raise ValueError(name)
def flip(self, *dims: str) ‑> phi.math._tensors.Tensor

Reverses the order of elements along one or multiple dimensions.

Args

*dims
dimensions to flip

Returns

Tensor of the same Shape

Expand source code
def flip(self, *dims: str) -> 'Tensor':
    """
    Reverses the order of elements along one or multiple dimensions.

    Args:
        *dims: dimensions to flip

    Returns:
        `Tensor` of the same `Shape`
    """
    raise NotImplementedError()
def native(self, order: str = None)

Returns a native tensor object with the dimensions ordered according to order.

Transposes the underlying tensor to match the name order and adds singleton dimensions for new dimension names. If a dimension of the tensor is not listed in order, a ValueError is raised.

Args

order
(Optional) list of dimension names. If not given, the current dimension order is kept.

Returns

Native tensor representation

Raises

ValueError if the tensor cannot be transposed to match order
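
A usage sketch, assuming t has dimensions 'y' and 'x'; the new name 'batch' becomes a singleton axis:

arr = t.native(order=['batch', 'y', 'x'])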

Expand source code
def native(self, order: str or tuple or list or Shape = None):
    """
    Returns a native tensor object with the dimensions ordered according to `order`.
    
    Transposes the underlying tensor to match the name order and adds singleton dimensions for new dimension names.
    If a dimension of the tensor is not listed in `order`, a `ValueError` is raised.

    Args:
        order: (Optional) list of dimension names. If not given, the current dimension order is kept.

    Returns:
        Native tensor representation

    Raises:
        ValueError if the tensor cannot be transposed to match `order`
    """
    raise NotImplementedError()
def numpy(self, order: str = None) ‑> numpy.ndarray

Converts this tensor to a numpy.ndarray with dimensions ordered according to order.

Note: Using this function breaks the autograd chain. The returned tensor is not differentiable. To get a differentiable tensor, use Tensor.native() instead.

Transposes the underlying tensor to match the name order and adds singleton dimensions for new dimension names. If a dimension of the tensor is not listed in order, a ValueError is raised.

If this Tensor is backed by a NumPy array, a reference to this array may be returned.

See Also: numpy()

Args

order
(Optional) list of dimension names. If not given, the current dimension order is kept.

Returns

NumPy representation

Raises

ValueError if the tensor cannot be transposed to match order
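
A usage sketch, assuming t has dimensions 'y' and 'x'; note that the result is detached from autograd:

arr = t.numpy(order=['y', 'x'])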

Expand source code
def numpy(self, order: str or tuple or list = None) -> np.ndarray:
    """
    Converts this tensor to a `numpy.ndarray` with dimensions ordered according to `order`.
    
    *Note*: Using this function breaks the autograd chain. The returned tensor is not differentiable.
    To get a differentiable tensor, use `Tensor.native()` instead.
    
    Transposes the underlying tensor to match the name order and adds singleton dimensions for new dimension names.
    If a dimension of the tensor is not listed in `order`, a `ValueError` is raised.

    If this `Tensor` is backed by a NumPy array, a reference to this array may be returned.

    See Also:
        `phi.math.numpy()`

    Args:
        order: (Optional) list of dimension names. If not given, the current dimension order is kept.

    Returns:
        NumPy representation

    Raises:
        ValueError if the tensor cannot be transposed to match `order`
    """
    native = self.native(order=order)
    return choose_backend(native).numpy(native)
def unstack(self, dimension: str)

Splits this tensor along the specified dimension. The returned tensors have the same dimensions as this tensor save the unstacked dimension.

Raises an error if the dimension is not part of the Shape of this Tensor.

See Also: TensorDim.unstack()

Args

dimension(str or int or TensorDim): name of the dimension to unstack along

Returns

tuple of tensors

Expand source code
def unstack(self, dimension: str):
    """
    Splits this tensor along the specified dimension.
    The returned tensors have the same dimensions as this tensor save the unstacked dimension.

    Raises an error if the dimension is not part of the `Shape` of this `Tensor`.

    See Also:
        `TensorDim.unstack()`

    Args:
      dimension(str or int or TensorDim): name of the dimension to unstack along

    Returns:
      tuple of tensors

    """
    raise NotImplementedError()
class TensorDim

Reference to a specific dimension of a Tensor.

To obtain a TensorDim, use Tensor.dimension() or the syntax tensor.<dim>.

Indexing a TensorDim as tdim[start:stop:step] returns a sliced Tensor.

See the documentation at https://tum-pbs.github.io/PhiFlow/Math.html#indexing-slicing-unstacking .
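
A slicing sketch, assuming t has a spatial dimension 'x' and a channel dimension 'vector':

first = t.x[0]               # select the first entry along 'x'
rev = t.x[::-1]              # reverse the order along 'x'
x_component = t.vector['x']  # select the component matching spatial dimension 'x'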

Expand source code
class TensorDim:
    """
    Reference to a specific dimension of a `Tensor`.

    To obtain a `TensorDim`, use `Tensor.dimension()` or the syntax `tensor.<dim>`.

    Indexing a `TensorDim` as `tdim[start:stop:step]` returns a sliced `Tensor`.

    See the documentation at https://tum-pbs.github.io/PhiFlow/Math.html#indexing-slicing-unstacking .
    """

    def __init__(self, tensor: Tensor, name: str):
        self.tensor = tensor
        self.name = name

    @property
    def exists(self):
        """ Whether the dimension is listed in the `Shape` of the `Tensor`. """
        return self.name in self.tensor.shape

    def __str__(self):
        """ Dimension name. """
        return self.name

    def __repr__(self):
        return f"Dimension '{self.name}' of {self.tensor.shape}"

    def unstack(self, size: int or None = None) -> tuple:
        """
        See `unstack_spatial()`.

        Args:
            size: (optional)
                None: unstack along this dimension, error if dimension does not exist
                int: repeating unstack if dimension does not exist

        Returns:
            sliced tensors
        """
        if size is None:
            result = self.tensor.unstack(self.name)
        else:
            if self.exists:
                unstacked = self.tensor.unstack(self.name)
                assert len(unstacked) == size, f"Size of dimension {self.name} does not match {size}."
                result = unstacked
            else:
                result = (self.tensor,) * size
        return result

    def optional_unstack(self):
        """
        Unstacks the `Tensor` along this dimension if the dimension is listed in the `Shape`.
        Otherwise returns the original `Tensor`.

        Returns:
            `tuple` of sliced tensors or original `Tensor`
        """
        if self.exists:
            return self.unstack()
        else:
            return self.tensor

    def unstack_spatial(self, components: str or tuple or list) -> tuple:
        """
        Slices the tensor along this dimension, returning only the selected components in the specified order.

        Args:
            components: Spatial dimension names as comma-separated `str` or sequence of `str`.

        Returns:
            selected components
        """
        if isinstance(components, str):
            components = parse_dim_order(components)
        if self.exists:
            spatial = self.tensor.shape.spatial
            result = []
            if spatial.is_empty:
                spatial = [GLOBAL_AXIS_ORDER.axis_name(i, len(components)) for i in range(len(components))]
            for dim in components:
                component_index = spatial.index(dim)
                result.append(self.tensor[{self.name: component_index}])
        else:
            result = [self.tensor] * len(components)
        return tuple(result)

    @property
    def index(self):
        """ The index of this dimension in the `Shape` of the `Tensor`. """
        return self.tensor.shape.index(self.name)

    def __int__(self):
        return self.index

    def __len__(self):
        warnings.warn("Use Tensor.dim.size instead of len(Tensor.dim). len() only supports with integer sizes.")
        return self.size

    @property
    def size(self):
        """ Length of this tensor dimension as listed in the `Shape`, otherwise `1`. """
        assert self.exists, f"Dimension {self.name} does not exist for tensor {self.tensor.shape}"
        return self.tensor.shape.get_size(self.name)

    def as_batch(self, name: str or None = None):
        """ Returns a shallow copy of the `Tensor` where the type of this dimension is *batch*. """
        return self._as(BATCH_DIM, name)

    def as_spatial(self, name: str or None = None):
        """ Returns a shallow copy of the `Tensor` where the type of this dimension is *spatial*. """
        return self._as(SPATIAL_DIM, name)

    def as_channel(self, name: str or None = None):
        """ Returns a shallow copy of the `Tensor` where the type of this dimension is *channel*. """
        return self._as(CHANNEL_DIM, name)

    def _as(self, dim_type: int, name: str or None):
        shape = self.tensor.shape
        new_types = list(shape.types)
        new_types[self.index] = dim_type
        new_names = shape.names
        if name is not None:
            new_names = list(new_names)
            new_names[self.index] = name
        new_shape = Shape(shape.sizes, new_names, new_types)
        return self.tensor._with_shape_replaced(new_shape)

    @property
    def _dim_type(self):
        return self.tensor.shape.get_type(self.name)

    @property
    def is_spatial(self):
        """ Whether the type of this dimension as listed in the `Shape` is *spatial*. Only defined for existing dimensions. """
        return self._dim_type == SPATIAL_DIM

    @property
    def is_batch(self):
        """ Whether the type of this dimension as listed in the `Shape` is *batch*. Only defined for existing dimensions. """
        return self._dim_type == BATCH_DIM

    @property
    def is_channel(self):
        """ Whether the type of this dimension as listed in the `Shape` is *channel*. Only defined for existing dimensions. """
        return self._dim_type == CHANNEL_DIM

    def __getitem__(self, item):
        if isinstance(item, str):
            item = self.tensor.shape.spatial.index(item)
        elif isinstance(item, Tensor) and item.dtype == DType(bool):
            from ._ops import boolean_mask
            return boolean_mask(self.tensor, self.name, item)
        return self.tensor[{self.name: item}]

    def flip(self):
        """ Flips the element order along this dimension and returns the result as a `Tensor`. """
        return self.tensor.flip(self.name)

    def split(self, split_dimensions: Shape):
        """ See `phi.math.unpack_dims()` """
        from ._ops import unpack_dims
        return unpack_dims(self.tensor, self.name, split_dimensions)

    def __mul__(self, other):
        if isinstance(other, TensorDim):
            from ._ops import dot
            return dot(self.tensor, (self.name,), other.tensor, (other.name,))
        else:
            return NotImplemented

    def __call__(self, *args, **kwargs):
        raise TypeError(f"Method Tensor.{self.name}() does not exist.")

Instance variables

var exists

Whether the dimension is listed in the Shape of the Tensor.

Expand source code
@property
def exists(self):
    """ Whether the dimension is listed in the `Shape` of the `Tensor`. """
    return self.name in self.tensor.shape
var index

The index of this dimension in the Shape of the Tensor.

Expand source code
@property
def index(self):
    """ The index of this dimension in the `Shape` of the `Tensor`. """
    return self.tensor.shape.index(self.name)
var is_batch

Whether the type of this dimension as listed in the Shape is batch. Only defined for existing dimensions.

Expand source code
@property
def is_batch(self):
    """ Whether the type of this dimension as listed in the `Shape` is *batch*. Only defined for existing dimensions. """
    return self._dim_type == BATCH_DIM
var is_channel

Whether the type of this dimension as listed in the Shape is channel. Only defined for existing dimensions.

Expand source code
@property
def is_channel(self):
    """ Whether the type of this dimension as listed in the `Shape` is *channel*. Only defined for existing dimensions. """
    return self._dim_type == CHANNEL_DIM
var is_spatial

Whether the type of this dimension as listed in the Shape is spatial. Only defined for existing dimensions.

Expand source code
@property
def is_spatial(self):
    """ Whether the type of this dimension as listed in the `Shape` is *spatial*. Only defined for existing dimensions. """
    return self._dim_type == SPATIAL_DIM
var size

Length of this tensor dimension as listed in the Shape, otherwise 1.

Expand source code
@property
def size(self):
    """ Length of this tensor dimension as listed in the `Shape`, otherwise `1`. """
    assert self.exists, f"Dimension {self.name} does not exist for tensor {self.tensor.shape}"
    return self.tensor.shape.get_size(self.name)

Methods

def as_batch(self, name: str = None)

Returns a shallow copy of the Tensor where the type of this dimension is batch.

Expand source code
def as_batch(self, name: str or None = None):
    """ Returns a shallow copy of the `Tensor` where the type of this dimension is *batch*. """
    return self._as(BATCH_DIM, name)
def as_channel(self, name: str = None)

Returns a shallow copy of the Tensor where the type of this dimension is channel.

Expand source code
def as_channel(self, name: str or None = None):
    """ Returns a shallow copy of the `Tensor` where the type of this dimension is *channel*. """
    return self._as(CHANNEL_DIM, name)
def as_spatial(self, name: str = None)

Returns a shallow copy of the Tensor where the type of this dimension is spatial.

Expand source code
def as_spatial(self, name: str or None = None):
    """ Returns a shallow copy of the `Tensor` where the type of this dimension is *spatial*. """
    return self._as(SPATIAL_DIM, name)
def flip(self)

Flips the element order along this dimension and returns the result as a Tensor.

Expand source code
def flip(self):
    """ Flips the element order along this dimension and returns the result as a `Tensor`. """
    return self.tensor.flip(self.name)
def optional_unstack(self)

Unstacks the Tensor along this dimension if the dimension is listed in the Shape. Otherwise returns the original Tensor.

Returns

tuple of sliced tensors or original Tensor

Expand source code
def optional_unstack(self):
    """
    Unstacks the `Tensor` along this dimension if the dimension is listed in the `Shape`.
    Otherwise returns the original `Tensor`.

    Returns:
        `tuple` of sliced tensors or original `Tensor`
    """
    if self.exists:
        return self.unstack()
    else:
        return self.tensor
def split(self, split_dimensions: phi.math._shape.Shape)
Expand source code
def split(self, split_dimensions: Shape):
    """ See `phi.math.unpack_dims()` """
    from ._ops import unpack_dims
    return unpack_dims(self.tensor, self.name, split_dimensions)
def unstack(self, size: int = None) ‑> tuple

See unstack_spatial().

Args

size
(optional)
None: unstack along this dimension, error if dimension does not exist.
int: repeating unstack if dimension does not exist.

Returns

sliced tensors

Expand source code
def unstack(self, size: int or None = None) -> tuple:
    """
    See `unstack_spatial()`.

    Args:
        size: (optional)
            None: unstack along this dimension, error if dimension does not exist
            int: repeating unstack if dimension does not exist

    Returns:
        sliced tensors
    """
    if size is None:
        result = self.tensor.unstack(self.name)
    else:
        if self.exists:
            unstacked = self.tensor.unstack(self.name)
            assert len(unstacked) == size, f"Size of dimension {self.name} does not match {size}."
            result = unstacked
        else:
            result = (self.tensor,) * size
    return result
def unstack_spatial(self, components: str) ‑> tuple

Slices the tensor along this dimension, returning only the selected components in the specified order.

Args

components
Spatial dimension names as comma-separated str or sequence of str.

Returns

selected components
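
A sketch, assuming vec holds x and y components along a channel dimension named 'vector':

u, v = vec.vector.unstack_spatial('x,y')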

Expand source code
def unstack_spatial(self, components: str or tuple or list) -> tuple:
    """
    Slices the tensor along this dimension, returning only the selected components in the specified order.

    Args:
        components: Spatial dimension names as comma-separated `str` or sequence of `str`.

    Returns:
        selected components
    """
    if isinstance(components, str):
        components = parse_dim_order(components)
    if self.exists:
        spatial = self.tensor.shape.spatial
        result = []
        if spatial.is_empty:
            spatial = [GLOBAL_AXIS_ORDER.axis_name(i, len(components)) for i in range(len(components))]
        for dim in components:
            component_index = spatial.index(dim)
            result.append(self.tensor[{self.name: component_index}])
    else:
        result = [self.tensor] * len(components)
    return tuple(result)
class TensorLike

Tensor-like objects can interoperate with some phi.math functions, depending on what methods they implement. Objects are considered TensorLike if they implement TensorLike.__variable_attrs__() or TensorLike.__value_attrs__(). This is reflected in isinstance checks.

TensorLike objects may be used as keys, for example in jit_compile(). In key mode, all variable attributes are set to None. When used as keys, TensorLike should also implement __eq__() to compare any non-variable properties that can affect a function.

Do not declare this class as a superclass.
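
A minimal sketch of a custom TensorLike; the class and attribute names are illustrative only:

from phi import math

class Ball:
    def __init__(self, center, radius):
        self.center = center  # Tensor, variable attribute
        self.radius = radius  # Tensor, treated as constant here

    def __value_attrs__(self):
        return 'center', 'radius'  # transformed by single-operand math operations

    def __variable_attrs__(self):
        return 'center',  # only 'center' varies between calls / records gradients

    def __eq__(self, other):  # compare non-variable properties for key mode
        return isinstance(other, Ball) and math.close(self.radius, other.radius)

assert isinstance(Ball(math.zeros(), math.wrap(1.)), math.TensorLike)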

Expand source code
class TensorLike(metaclass=_TensorLikeType):
    """
    Tensor-like objects can interoperate with some `phi.math` functions, depending on what methods they implement.
    Objects are considered `TensorLike` if they implement `TensorLike.__variable_attrs__()` or `TensorLike.__value_attrs__()`.
    This is reflected in `isinstance` checks.

    `TensorLike` objects may be used as keys, for example in `jit_compile()`.
    In key mode, all variable attributes are set to `None`.
    When used as keys, `TensorLike` should also implement `__eq__()` to compare any non-variable properties that can affect a function.

    Do not declare this class as a superclass.
    """

    def __value_attrs__(self) -> Tuple[str]:
        """
        Returns all `Tensor` or `TensorLike` attribute names of `self` that should be transformed by single-operand math operations,
        such as `sin()`, `exp()`.

        Returns:
            `tuple` of `str` attributes.
                Calling `getattr(self, attr)` must return a `Tensor` or `TensorLike` for all returned attributes.
        """
        raise NotImplementedError()

    def __variable_attrs__(self) -> Tuple[str]:
        """
        Returns all `Tensor` or `TensorLike` attribute names of `self` whose values are variable.
        Variables denote values that can change from one function call to the next or for which gradients can be recorded.
        If this method is not implemented, all attributes returned by `__value_attrs__()` are considered variable.

        The returned properties are used by the following functions:

        - `jit_compile()`
        - `jit_compile_linear()`
        - `stop_gradient()`
        - `functional_gradient()`
        - `custom_gradient()`

        Returns:
            `tuple` of `str` attributes.
                Calling `getattr(self, attr)` must return a `Tensor` or `TensorLike` for all returned attributes.
        """
        raise NotImplementedError()

    def __with_attrs__(self, **attrs):
        """
        Creates a copy of this object which has the `Tensor` or `TensorLike` attributes contained in `attrs` replaced.
        If this method is not implemented, tensor attributes are replaced using `setattr()`.

        Args:
            **attrs: `dict` mapping `str` attribute names to `Tensor` or `TensorLike`.

        Returns:
            Altered copy of `self`
        """
        raise NotImplementedError()

Methods

def __value_attrs__(self) ‑> Tuple[str]

Returns all Tensor or TensorLike attribute names of self that should be transformed by single-operand math operations, such as sin(), exp().

Returns

tuple of str attributes. Calling getattr(self, attr) must return a Tensor or TensorLike for all returned attributes.

Expand source code
def __value_attrs__(self) -> Tuple[str]:
    """
    Returns all `Tensor` or `TensorLike` attribute names of `self` that should be transformed by single-operand math operations,
    such as `sin()`, `exp()`.

    Returns:
        `tuple` of `str` attributes.
            Calling `getattr(self, attr)` must return a `Tensor` or `TensorLike` for all returned attributes.
    """
    raise NotImplementedError()
def __variable_attrs__(self) ‑> Tuple[str]

Returns all Tensor or TensorLike attribute names of self whose values are variable. Variables denote values that can change from one function call to the next or for which gradients can be recorded. If this method is not implemented, all attributes returned by __value_attrs__() are considered variable.

The returned properties are used by the following functions:

  • jit_compile()
  • jit_compile_linear()
  • stop_gradient()
  • functional_gradient()
  • custom_gradient()

Returns

tuple of str attributes. Calling getattr(self, attr) must return a Tensor or TensorLike for all returned attributes.

Expand source code
def __variable_attrs__(self) -> Tuple[str]:
    """
    Returns all `Tensor` or `TensorLike` attribute names of `self` whose values are variable.
    Variables denote values that can change from one function call to the next or for which gradients can be recorded.
    If this method is not implemented, all attributes returned by `__value_attrs__()` are considered variable.

    The returned properties are used by the following functions:

    - `jit_compile()`
    - `jit_compile_linear()`
    - `stop_gradient()`
    - `functional_gradient()`
    - `custom_gradient()`

    Returns:
        `tuple` of `str` attributes.
            Calling `getattr(self, attr)` must return a `Tensor` or `TensorLike` for all returned attributes.
    """
    raise NotImplementedError()
def __with_attrs__(self, **attrs)

Creates a copy of this object which has the Tensor or TensorLike attributes contained in attrs replaced. If this method is not implemented, tensor attributes are replaced using setattr().

Args

**attrs
dict mapping str attribute names to Tensor or TensorLike.

Returns

Altered copy of self

Expand source code
def __with_attrs__(self, **attrs):
    """
    Creates a copy of this object which has the `Tensor` or `TensorLike` attributes contained in `attrs` replaced.
    If this method is not implemented, tensor attributes are replaced using `setattr()`.

    Args:
        **attrs: `dict` mapping `str` attribute names to `Tensor` or `TensorLike`.

    Returns:
        Altered copy of `self`
    """
    raise NotImplementedError()