However, this conventional wisdom overlooks crucial
details. We demonstrate that a neural emulator,
trained purely on low-fidelity data, can produce
results that are more accurate than the data it was
trained on. As
the video shows, our emulator learns to produce a
sharper, more physically realistic simulation than
the coarse solver it was trained on. How is this
possible?
- Inductive Biases: Neural network
architectures aren't blank slates. A
Convolutional Neural Network (ConvNet), for
example, has a natural preference for local,
translation-equivariant patterns. This "bias" acts
as a regularizer: error structures of the coarse
solver that clash with the architecture's
preferences are hard for the network to reproduce,
so they are implicitly filtered out rather than
learned.
- Different Goals: Training and
evaluation are different. We train the emulator
to predict only the next single step, but we
evaluate it over a long, multi-step "rollout" in
which it feeds on its own predictions. The
emulator can therefore learn dynamics whose
errors accumulate more slowly, making it more
accurate over long horizons than the coarse
solver (see the sketch after this list).
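To make the second point concrete, here is a minimal sketch contrasting the one-step training objective with the multi-step rollout used at evaluation time. The model, data shapes, and function names are illustrative stand-ins, not the actual emulator or dataset:

```python
import torch

# Stand-ins: `emulator` is any one-step prediction model,
# `reference` holds ground-truth states of shape (timesteps, state_dim).
emulator = torch.nn.Sequential(torch.nn.Linear(64, 64))  # placeholder for a ConvNet
reference = torch.randn(100, 64)                          # placeholder for ground truth

# Training objective: one-step-ahead prediction error only.
def one_step_loss(model, states):
    pred = model(states[:-1])                 # predict state t+1 from state t
    return torch.mean((pred - states[1:]) ** 2)

# Evaluation: feed the model its own outputs for many steps (a "rollout"),
# so errors can compound -- or, if the learned dynamics are well-behaved,
# stay small over long horizons.
@torch.no_grad()
def rollout_error(model, states, horizon):
    state = states[0]
    errs = []
    for t in range(1, horizon):
        state = model(state)                  # autoregressive step on own output
        errs.append(torch.mean((state - states[t]) ** 2))
    return torch.stack(errs).mean()

print(one_step_loss(emulator, reference).item())
print(rollout_error(emulator, reference, horizon=50).item())
```

In a real training loop the one-step loss would be minimized by gradient descent; the point here is only that the quantity being optimized is not the rollout error used to judge the emulator.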
We define the Superiority Ratio (ΞΎ) to measure this
effect. When ΞΎ < 1, the emulator is superior to its
training data.
$$\xi = \frac{\text{Error(Emulator vs. Ground Truth)}}{\text{Error(Training Data vs. Ground Truth)}}$$
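As a sketch of how ΞΎ might be computed, one could compare the emulator's rollout and the coarse training data against the same ground-truth trajectory. The trajectory arrays and function name are illustrative, and mean-squared error is an assumed choice of metric:

```python
import numpy as np

def superiority_ratio(emulator_traj, coarse_traj, ground_truth_traj):
    """Superiority Ratio xi: emulator rollout error divided by the error
    of the coarse training data, both measured against ground truth."""
    emulator_err = np.mean((emulator_traj - ground_truth_traj) ** 2)
    coarse_err = np.mean((coarse_traj - ground_truth_traj) ** 2)
    return emulator_err / coarse_err

# Toy example with random trajectories of shape (timesteps, state_dim).
truth = np.random.randn(100, 64)
coarse = truth + 0.2 * np.random.randn(100, 64)    # noisy "training data"
emulated = truth + 0.1 * np.random.randn(100, 64)  # emulator closer to truth
print(superiority_ratio(emulated, coarse, truth))  # < 1 here: emulator superior
```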