
INC: An Indirect Neural Corrector for Auto-Regressive Hybrid PDE Solvers

Technical University of Munich

Indirect Neural Corrector (INC) is a hybrid physics-learning paradigm that embeds a learned correction within the governing equations of a differentiable solver. It has been applied to a range of PDE systems to correct for unresolved physics and discretization errors while provably enhancing long-term stability in autoregressive simulations.


Abstract

When simulating partial differential equations, hybrid solvers combine coarse numerical solvers with learned correctors. They promise accelerated simulations while adhering to physical constraints. However, as shown in our theoretical framework, directly applying learned corrections to solver outputs leads to significant autoregressive errors, which originate from amplified perturbations that accumulate during long-term rollouts, especially in chaotic regimes.

To overcome this, we propose the Indirect Neural Corrector (\(\mathrm{INC}\)), which integrates learned corrections into the governing equations rather than applying direct state updates. Our key insight is that \(\mathrm{INC}\) reduces error amplification by a factor on the order of \(\Delta t^{-1} + L\), where \(\Delta t\) is the timestep and \(L\) is the Lipschitz constant. At the same time, our framework imposes no architectural requirements and integrates seamlessly with arbitrary neural networks and solvers. We test \(\mathrm{INC}\) in extensive benchmarks, covering numerous differentiable solvers, neural backbones, and test cases ranging from a 1D chaotic system to 3D turbulence.

INC improves the long-term trajectory performance (\(R^2\)) by up to \(158.7\%\), stabilizes blowups under aggressive coarsening, and for complex 3D turbulence cases yields speed-ups of several orders of magnitude. INC thus enables stable, efficient PDE emulation with formal error reduction, paving the way for faster scientific and engineering simulations with reliable physics guarantees. Our source code is available on GitHub.

Motivation

Instability of direct correction

Standard methods apply a neural-network correction directly to the solver's output state, an approach we prove to be unstable. To visualize this instability, we inject a small amount of noise into the state at each step, mimicking the errors introduced by a neural network.

Kármán vortex street: Direct correction causes the simulation to diverge quickly.


Backward-facing step: The simulation quickly becomes unstable and fails.


Enhanced stability of INC

Our indirect method (INC) instead injects the correction into the governing equations. Under the same level of noise, the simulation remains significantly more robust and stable against perturbations.

Kármán vortex street: The INC simulation remains stable and physically plausible.

Backward-facing step: The simulation is robust and completes successfully.

Experiments on 1D example

To further investigate the sensitivity of the different correction methods to perturbations, we ran similar experiments on two 1D systems: the Burgers' equation and the Kuramoto–Sivashinsky equation. The results clearly demonstrate that the existing direct method is significantly more sensitive to perturbations than our INC.


Numerical study of 1D Kuramoto–Sivashinsky equation


Numerical study of 1D Burgers' equation

Methods

Why Indirect Corrections Are Better

We analyze how errors propagate through hybrid neural PDE solvers during autoregressive rollouts. Our key finding is that indirect corrections reduce error accumulation by a factor of \(R_k\) compared to direct corrections, where \(R_k\) depends on the simulation time step and the system's underlying dynamics.

Error Propagation Framework

Consider a general PDE of the form \(\partial_t u = \mathcal{N}(u)\), discretized with a time step \(\Delta t\). A neural network corrector introduces small perturbations at each step in one of two ways:

  • Direct Corrections: The network perturbs the state \(u\) directly. We denote this perturbation as \(\epsilon_u\).
  • Indirect Corrections: The network perturbs the right-hand side (RHS) of the equation, which governs the dynamics. We denote this as \(\epsilon_s\).

Local Error (One-Step Propagation)

After a single time step, a perturbation introduced at step \(n\) creates an error \(\delta u^{n+1}\) at the next step. This error is a combination of both effects: $$ \delta u^{n+1} = \underbrace{(I + \Delta t J) \epsilon_u}_{\text{direct error}} + \underbrace{\Delta t \epsilon_s}_{\text{indirect error}} $$ Here, \(J\) is the Jacobian of the operator \(\mathcal{N}\), which captures how the system's dynamics amplify perturbations.
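As a quick numerical illustration, the minimal sketch below (a hypothetical 2×2 Jacobian and perturbation sizes chosen purely for demonstration, not values from the paper) compares the one-step error produced by a state perturbation \(\epsilon_u\) against an RHS perturbation \(\epsilon_s\) of the same magnitude:

```python
import numpy as np

dt = 0.01
J = np.array([[0.0, 10.0],
              [-10.0, 5.0]])   # hypothetical linearized dynamics (Jacobian of N)
eps = 1e-3                     # perturbation magnitude

eps_u = eps * np.ones(2)       # direct: perturbation applied to the state u
eps_s = eps * np.ones(2)       # indirect: perturbation applied to the RHS

I = np.eye(2)
err_direct = (I + dt * J) @ eps_u   # propagated through (I + dt J)
err_indirect = dt * eps_s           # pre-scaled by dt

print(np.linalg.norm(err_direct))    # ~1.5e-3: on the order of eps itself
print(np.linalg.norm(err_indirect))  # ~1.4e-5: suppressed by the factor dt
```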

Cumulative Error (Multi-Step Rollout)

Over many steps, these local errors accumulate. The crucial difference lies in how they grow:

  • Errors from direct corrections (\(\epsilon_u\)) are repeatedly multiplied by the full propagation operator \((I + \Delta t J)\), causing them to be amplified at each step.
  • Errors from indirect corrections (\(\epsilon_s\)) are consistently scaled down by the small time step \(\Delta t\) before they propagate.
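Extending the same toy linear system over many steps (again a sketch; the step count and noise model are illustrative assumptions, not the paper's setup) makes the accumulation gap visible:

```python
import numpy as np

dt, steps, eps = 0.01, 500, 1e-3
J = np.array([[0.0, 10.0], [-10.0, 5.0]])  # same hypothetical Jacobian
A = np.eye(2) + dt * J                     # one-step propagation operator

err_direct = np.zeros(2)
err_indirect = np.zeros(2)
rng = np.random.default_rng(0)
for _ in range(steps):
    e = eps * rng.standard_normal(2)          # fresh perturbation at this step
    err_direct = A @ (err_direct + e)         # state error: re-amplified by A
    err_indirect = A @ err_indirect + dt * e  # RHS error: injected pre-scaled by dt

print(np.linalg.norm(err_direct) / np.linalg.norm(err_indirect))
# The ratio stays on the order of 1/dt (~100), matching R_k ~ 1/dt + L.
```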

The Error Dominance Ratio (\(R_k\))

To quantify this difference, we define the Error Dominance Ratio, \(R_k\), which compares the growth of direct errors to indirect errors. Our analysis simplifies this ratio to the following relationship: $$ R_k \sim \frac{1}{\Delta t} + L $$ where \(L\) is the Lipschitz constant of the PDE operator, which bounds its maximum rate of change.

Implications for Chaotic Systems

In typical simulations, the time step is very small (e.g., \(\Delta t = 0.01\)). This means the term \(\frac{1}{\Delta t}\) is large (e.g., 100). Consequently, direct corrections amplify errors at least 100 times more than indirect corrections, making them far more prone to instability.

Key Insight:

The advantage of indirect corrections is even more pronounced in chaotic systems, which are defined by a positive maximum Lyapunov exponent (\(\lambda_{\text{max}} > 0\)) that governs exponential error growth. In our framework, the Lipschitz constant \(L\) provides an upper bound for this exponent (\(L \ge \lambda_{\text{max}}\)).

  • In chaotic systems, the large value of \(L\) further increases \(R_k\), widening the stability gap between the two methods.
  • The rapid, amplified error growth from direct corrections can push the solver into unstable states, causing the simulation to fail or "blow up."
  • Indirect corrections, by keeping error growth in check via the \(\Delta t\) factor, maintain stability even in these challenging regimes.

Indirect Neural Corrections

Our method, the Indirect Neural Corrector (INC), reframes how neural networks are integrated with numerical PDE solvers. We contrast it with the standard "solver-in-the-loop" approach, which we term a direct correction.

Direct vs. Indirect Correction Schemes

A direct correction uses an operator-splitting approach. First, a numerical solver computes a coarse prediction, and then a neural network directly adjusts this predicted state. The process can be written as:

Direct Correction:

$$ \underbrace{u^{*} = \mathcal{T}\big(u^n, \mathcal{N}(u^n)\big)}_{\text{1. Coarse Solver Step}} \quad \rightarrow \quad \underbrace{u^{n+1} = \mathcal{G}_\theta(u^*)}_{\text{2. NN Correction on State}} $$

In contrast, our Indirect Neural Correction (INC) integrates the neural network's output as a source term inside the governing PDE. The correction is applied to the equation's right-hand side (RHS) before the time integration step:

Indirect Correction (INC):

$$ u^{n+1} = \mathcal{T}\big(u^n, \underbrace{\mathcal{N}(u^n) + \mathcal{G}_\theta(u^n)}_{\text{Corrected Dynamics}}\big) $$

In these equations, \(\mathcal{T}\) is the temporal integration, \(\mathcal{N}\) represents the physical dynamics, and \(\mathcal{G}_\theta\) is the neural network. As shown in our theory, moving \(\mathcal{G}_\theta\) inside the solver has a profound stabilizing effect on long-term simulations. A key requirement for INC is that the solver \(\mathcal{T}\) must be differentiable to allow for end-to-end training.
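A minimal sketch of the two update rules, assuming a toy diffusion RHS, an explicit Euler integrator, and a placeholder zero corrector (the names `rhs`, `integrate`, and `correction_net` are ours, not the paper's API; the direct correction is shown in its common additive form rather than as the state map \(\mathcal{G}_\theta(u^*)\)):

```python
import numpy as np

def rhs(u):
    """Physical dynamics N(u): here a toy periodic 1D diffusion stencil."""
    return np.roll(u, -1) - 2.0 * u + np.roll(u, 1)

def integrate(u, dudt, dt):
    """Temporal integration T: a single explicit Euler step for simplicity."""
    return u + dt * dudt

def correction_net(u):
    """Placeholder for the neural corrector G_theta (here: a no-op)."""
    return np.zeros_like(u)

def step_direct(u, dt):
    # 1. Coarse solver step on the state.
    u_star = integrate(u, rhs(u), dt)
    # 2. NN correction applied to the predicted state.
    return u_star + correction_net(u_star)

def step_indirect(u, dt):
    # INC: the NN output is added to the RHS before time integration.
    return integrate(u, rhs(u) + correction_net(u), dt)

u0 = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
print(np.allclose(step_direct(u0, 0.01), step_indirect(u0, 0.01)))
# True here only because the placeholder corrector is zero; with a trained
# network the two schemes propagate its output very differently.
```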

Unrolled Training Objective

We train the network \(\mathcal{G}_\theta\) in a supervised manner using high-resolution reference data. To promote long-term stability and accuracy, we use a multi-step unrolled optimization strategy. The model is trained to minimize the \(\mathcal{L}_2\) loss between its predicted trajectory and the ground truth over a sequence of \(m\) steps:

$$ \theta^* = \operatorname{arg\,min}_{\theta} \left[ \sum_{n} \sum_{s=1}^{m} \mathcal{L}_2 \left( \tilde{u}^{n+s}, (\mathcal{S}_{\theta})^s(\tilde{u}^n) \right) + \lambda \|\theta\| \right] $$

Here, \( \tilde{u} \) is the high-fidelity ground truth, \( (\mathcal{S}_{\theta})^s \) represents applying the hybrid solver (either direct or indirect) autoregressively for \(s\) steps, and \(\lambda\) weights a regularization term on the parameters. This objective forces the network to learn corrections that are not just accurate for a single step but also lead to stable, physically plausible rollouts over time.
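The objective can be sketched as follows (forward pass only; `hybrid_step`, the parameter vector, and the reference trajectory are placeholders, and in practice the differentiable solver lets an autodiff framework backpropagate through the unrolled steps):

```python
import numpy as np

def unrolled_loss(theta, u_ref, m, hybrid_step, lam=1e-4):
    """Sum of L2 losses over m-step unrolled windows, plus regularization.

    theta:       flat parameter vector of G_theta (placeholder)
    u_ref:       array (T, ...) of high-fidelity reference states
    hybrid_step: one step of the hybrid solver S_theta (direct or INC)
    """
    loss = 0.0
    for n in range(len(u_ref) - m):
        u = u_ref[n]                   # restart each window from the reference
        for s in range(1, m + 1):
            u = hybrid_step(u, theta)  # autoregressive rollout of s steps
            loss += np.mean((u - u_ref[n + s]) ** 2)
    return loss + lam * np.linalg.norm(theta)
```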

Results

Accuracy in Long Autoregressive Rollouts

We tested INC's long-term performance on challenging chaotic and shock-forming systems over thousands of steps. Compared to standard direct correction methods, INC demonstrates superior accuracy by dramatically reducing error accumulation. It achieves up to a 158.7% improvement in R² correlation for chaotic systems and reduces error by up to 99% when capturing sharp shock waves.

Long-term accuracy on a chaotic system.

INC maintains accuracy far longer in a chaotic simulation.

Accuracy in simulating shock waves.

INC's prediction (bottom) closely matches the reference (top).

Improving Numerical Stability

We then pushed the numerical solver into unstable regimes where it would normally fail (or "blow up") due to coarse temporal resolution. While direct correction methods struggle, INC not only prevents the simulation from failing but also maintains high accuracy, significantly outperforming other approaches that manage to remain stable.

INC stabilizing an otherwise unstable simulation.

INC remains stable and accurate even when the baseline solver fails.

Quantitative accuracy in unstable regimes.

Quantitative results show INC achieves the highest correlation in unstable setups.

Acceleration and Accuracy for Complex Cases

Finally, we applied INC to complex, engineering-relevant turbulent flows. By enabling larger time steps and coarser grids without losing accuracy, INC delivers massive computational speedups. It achieves a 7x speedup for a 2D flow simulation and a staggering 330x speedup for a large-scale 3D turbulent flow, all while matching the statistical accuracy of a high-resolution numerical solver.

Accurate vortex structures in a 2D BFS simulation.

7x faster: INC accurately captures complex 2D flow structures over a 1200-step rollout.

Performance and accuracy for a 3D turbulent flow.

330x faster: INC accelerates complex 3D turbulent flow simulations while maintaining high accuracy.

BibTeX

@inproceedings{INC2025,
  title={{INC}: An Indirect Neural Corrector for Auto-Regressive Hybrid {PDE} Solvers},
  author={Hao Wei and Aleksandra Franz and Bj{\"o}rn Malte List and Nils Thuerey},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
}