Linear solves are a vital part of many simulations and machine learning applications. ΦML provides an easy-to-use interface for performing linear solves that supports backpropagation via implicit gradients. Dense, sparse, and matrix-free linear systems can all be solved this way.
%%capture
!pip install phiml
from phiml import math
from phiml.math import wrap, channel, dual, spatial, Solve, tensor
Linear solves and sparse matrices are supported on all backends.
Feel free to change the line below to use jax, tensorflow, or numpy instead.
math.use('torch')
torch
We can perform a linear solve by passing a matrix A, a right-hand-side vector b, and an initial guess x0 to solve_linear().
We recommend passing ΦML tensors. Then, the dual dimensions of the matrix must match the initial guess and the primal dimensions must match the right-hand-side.
Alternatively, solve_linear() can be called with native tensors (see below).
A = tensor([[0, 1], [1, 0]], channel('b_vec'), dual('x_vec'))  # dual dim matches x0, primal dim matches b
b = tensor([2, 3], channel('b_vec'))
x0 = tensor([0, 0], channel('x_vec'))
math.solve_linear(A, b, Solve(x0=x0))
(3.000, 2.000) along b_vecᶜ
ΦML implements multiple algorithms to solve linear systems, such as the conjugate gradient method (CG) and the stabilized bi-conjugate gradient method (biCG-stab).
All SciPy solvers are also available. For a full list, see the API documentation.
math.solve_linear(A, b, Solve('CG', x0=x0))
(3.000, 2.000) along b_vecᶜ
math.solve_linear(A, b, Solve('biCG-stab', x0=x0))
(3.000, 2.000) along b_vecᶜ
math.solve_linear(A, b, Solve('scipy-GMres', x0=x0))
(3, 2) along b_vecᶜ int64
Instead of passing a matrix, you can also specify a linear Python function that computes the matrix-vector product. This will typically be slower unless the function is compiled to a matrix.
def linear_function(x):
    return x * (2, 1)
math.solve_linear(linear_function, b, Solve(x0=x0))
(1.000, 3.000) along b_vecᶜ
ΦML can also build an explicit matrix representation of a provided Python function. You can do this either by obtaining the matrix explicitly via matrix_from_function (sketched below) or by annotating the linear function with jit_compile_linear. If the function adds a constant offset to the output, this offset is automatically subtracted from the right-hand-side vector.
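Here is a minimal sketch of the matrix_from_function route, assuming it returns the traced matrix together with the constant bias of the function:
from phiml.math import matrix_from_function
# Trace linear_function once, using x0 as an example input, to build its matrix representation.
matrix, bias = matrix_from_function(linear_function, x0)
# Solve with the explicit matrix; here the constant offset must be subtracted from the right-hand side manually.
math.solve_linear(matrix, b - bias, Solve(x0=x0))
Annotating the function with jit_compile_linear achieves the same implicitly: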
from phiml.math import jit_compile_linear
math.solve_linear(jit_compile_linear(linear_function), b, Solve(x0=x0))
(1.000, 2.000, 1.500, 3.000) (b_vecᶜ=2, x_vecᶜ=2)
ΦML includes an ILU preconditioner and an experimental cluster preconditioner. To use a preconditioner, simply specify preconditioner='ilu' when creating the Solve object.
math.solve_linear(jit_compile_linear(linear_function), b, Solve('scipy-CG', x0=x0, preconditioner='ilu'))
(1, 2, 1, 3) (b_vecᶜ=2, x_vecᶜ=2) int64
The ILU preconditioner always runs on the CPU and should be paired with a SciPy linear solver for optimal efficiency. Available SciPy solvers include 'scipy-direct', 'scipy-CG', 'scipy-GMres', 'scipy-biCG', 'scipy-biCG-stab', 'scipy-CGS', 'scipy-QMR', and 'scipy-GCrotMK' (see the API).
If the matrix or linear function is constant, i.e. only depends on NumPy arrays, the preconditioner computation can be performed during JIT compilation.
@math.jit_compile
def jit_perform_solve(b):
    return math.solve_linear(jit_compile_linear(linear_function), b, Solve('scipy-CG', x0=x0, preconditioner='ilu'))
jit_perform_solve(b)
RuntimeWarning: Preconditioners are not supported for sparse scipy-CG in torch JIT mode. Disabling preconditioner. Use Jax or TensorFlow to enable preconditioners in JIT mode.
(1, 2, 1, 3) (b_vecᶜ=2, x_vecᶜ=2) int64
Here, the ILU preconditioner is computed at JIT-compile time since the linear function does not depend on b. Note that with the PyTorch backend, the preconditioner is disabled in JIT mode (see the warning above); use Jax or TensorFlow to enable preconditioners in JIT mode.
ΦML enables backpropagation through linear solves. Instead of backpropagating through the unrolled solver loop (which can yield inaccurate results and cause high memory consumption), ΦML runs an adjoint linear solve for the pullback operation.
def loss_function(b):
    x = math.solve_linear(jit_compile_linear(linear_function), b, Solve(x0=x0))
    return math.l2_loss(x)
gradient_function = math.gradient(loss_function, 'b', get_output=False)
gradient_function(b)
(2.500, 3.750) along b_vecᶜ
ΦML can also compute gradients for the (sparse) matrix used in a linear solve, which allows differentiating w.r.t. parameters that influence the matrix values via backpropagation. To enable this, pass grad_for_f=True to the solve_linear() call.
@math.jit_compile_linear
def conditioned_linear_function(x, conditioning):
    return x * conditioning

def loss_function(conditioning):
    b = math.ones_like(conditioning)
    x = math.solve_linear(conditioned_linear_function, b, Solve(x0=x0), conditioning=conditioning, grad_for_f=True)
    return math.l2_loss(x)
gradient_function = math.gradient(loss_function, 'conditioning', get_output=False)
gradient_function(tensor([1., 2.], channel('x_vec')))
(-1.000, -0.125) along x_vecᶜ
When a linear solve (or minimize call) does not find a solution, a subclass of ConvergenceException is thrown, depending on the reason: if the desired accuracy is not reached within the iteration limit, NotConverged is thrown; if the solve diverges, Diverged is thrown. These exceptions can also be thrown during backpropagation if the adjoint solve fails (except for TensorFlow). You can deal with failed solves using Python's try-except clause.
try:
    solution = math.solve_linear(lambda x: 0 * x, wrap([1, 2, 3], spatial('x')), solve=math.Solve(x0=math.zeros(spatial(x=3))))
    print("converged", solution)
except math.ConvergenceException as exc:
    print(exc)
    print(f"Last estimate: {exc.result.x}")
Φ-ML CG-adaptive (torch) did not converge to rel_tol=1e-05, abs_tol=1e-05 within 1000 iterations. Max residual: 3.0
Last estimate: (0.000, 0.000, 0.000) along xˢ
If you want the regular execution flow to continue despite non-convergence, you can pass suppress=[math.Diverged, math.NotConverged] to the Solve constructor.
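For example, the failing solve from above then returns its final estimate instead of raising an exception (a minimal sketch; the suppressed solve is assumed to yield the last estimate):
suppressed = Solve(x0=math.zeros(spatial(x=3)), suppress=[math.Diverged, math.NotConverged])
# Does not raise, even though the all-zero system cannot be solved.
solution = math.solve_linear(lambda x: 0 * x, wrap([1, 2, 3], spatial('x')), solve=suppressed)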
All solves are logged internally and can be shown by setting phiml.set_logging_level('debug'). Additional solve properties can be recorded using a SolveTape. Recording the full optimization trajectory requires setting record_trajectories=True.
import phiml
phiml.set_logging_level('debug')
with math.SolveTape() as solves:
    math.solve_linear(jit_compile_linear(linear_function), b, Solve('scipy-CG', x0=0 * b, preconditioner='ilu'))
factor_ilu: auto-selecting iterations=1 (eager mode) for matrix (2.000, 0.000); (0.000, 1.000) (b_vecᶜ=2, ~b_vecᵈ=2) (DEBUG), 2024-04-15 12:16:48,296
TorchScript -> run compiled forward '_matrix_solve_forward' with args [((2,), False)] (DEBUG), 2024-04-15 12:16:48,310
Running forward pass of custom op forward '_matrix_solve_forward' given args ('y',) containing 1 native tensors (DEBUG), 2024-04-15 12:16:48,310
Performing linear solve scipy-CG with tolerance float64 1e-05 (rel), float64 1e-05 (abs), max_iterations=1000 with backend torch (DEBUG), 2024-04-15 12:16:48,314
Information about the performed solves can then be obtained by indexing the SolveTape, either by index or by Solve object.
print(solves[0].solve)
print("Solution", solves[0].x)
print("Residual", solves[0].residual)
print("Fun.evals", solves[0].function_evaluations)
print("Iterations", solves[0].iterations)
print("Diverged", solves[0].diverged)
print("Converged", solves[0].converged)
scipy-CG with tolerance float64 1e-05 (rel), float64 1e-05 (abs), max_iterations=1000
Solution (1, 3) along b_vecᶜ int64
Residual (0, 0) along b_vecᶜ int64
Fun.evals 2
Iterations 1
Diverged False
Converged True
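To also inspect the intermediate estimates of a solve, the full trajectory can be recorded. This is a minimal sketch; we assume the per-iteration estimates are then stacked along an additional dimension of solves[0].x:
with math.SolveTape(record_trajectories=True) as solves:
    math.solve_linear(jit_compile_linear(linear_function), b, Solve('scipy-CG', x0=0 * b))
print(solves[0].x)  # solution estimates from every iteration, stacked along an extra dimension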
When performing a linear solve without ΦML tensors, the matrix must have shape (..., N, N) and x0 and b must have shape (..., N), where ... denotes the batch dimensions. This matches the signatures of the native solve functions, such as torch.linalg.solve or jax.numpy.linalg.solve.
import torch
A = torch.tensor([[0., 1], [1, 0]])
b = torch.tensor([2., 3])
x0 = torch.tensor([0., 0])
math.solve_linear(A, b, Solve(x0=x0))
TorchScript -> run compiled forward '_matrix_solve_forward' with args [((2,), False), ((2, 2), False)] (DEBUG), 2024-04-15 12:16:48,337
Running forward pass of custom op forward '_matrix_solve_forward' given args ('y', 'matrix') containing 2 native tensors (DEBUG), 2024-04-15 12:16:48,337
Performing linear solve auto with tolerance float64 1e-05 (rel), float64 1e-05 (abs), max_iterations=1000 with backend torch (DEBUG), 2024-04-15 12:16:48,340
tensor([3., 2.])