Interactive Demo
See how our self-test method learns particle interactions from unlabeled data. All simulations run in your browser — no server needed.
Demo 1
Watch 10 particles evolve under confinement V and interaction Φ. Each colored trail traces one particle's path. The interplay of attraction, repulsion, and noise creates rich dynamics.
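The dynamics behind this demo can be sketched as a single Euler–Maruyama step in plain NumPy. This is a hypothetical toy, not the repo's simulator, and it assumes the standard mean-field form dX_i = [−∇V(X_i) − (1/N) Σ_j ∇Φ(X_i − X_j)] dt + σ dW_i; the names `em_step`, `grad_V`, and `grad_Phi` are illustrative:

```python
import numpy as np

def em_step(X, grad_V, grad_Phi, sigma, dt, rng):
    """One Euler-Maruyama step for N interacting particles. X: (N, d) positions."""
    N = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]          # pairwise X_i - X_j, shape (N, N, d)
    interaction = grad_Phi(diff).sum(axis=1) / N  # mean-field interaction force on each particle
    drift = -grad_V(X) - interaction
    return X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(X.shape)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 2))                  # 10 particles in 2-D
grad_V = lambda x: 2.0 * x                        # harmonic confinement, k = 2
# Toy Gaussian interaction gradient: grad of exp(-|r|^2 / 2) is -r exp(-|r|^2 / 2)
grad_Phi = lambda r: -r * np.exp(-(r**2).sum(-1, keepdims=True) / 2)
X = em_step(X, grad_V, grad_Phi, sigma=1.0, dt=1e-4, rng=rng)
assert X.shape == (10, 2)
```

Iterating this step produces exactly the kind of colored trails shown in the demo.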
This is the data we observe: snapshots of particle positions. But in real experiments, we don't know which particle is which across snapshots — the labels are lost.
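A minimal NumPy sketch of what "the labels are lost" means (toy example, not repo code; `traj` and `snapshots` are illustrative names): each snapshot is an unordered set of positions, so per-snapshot distributions survive while particle identity across time does not.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, d = 5, 10, 2                              # snapshots, particles, dimension
traj = rng.standard_normal((L, N, d))           # labeled trajectories (unavailable in practice)

# What we actually observe: each snapshot independently permuted,
# so particle identity cannot be tracked across time.
snapshots = np.stack([traj[t][rng.permutation(N)] for t in range(L)])

assert snapshots.shape == (L, N, d)
# The per-snapshot empirical distribution is unchanged by the permutation:
assert np.allclose(np.sort(snapshots[0], axis=0), np.sort(traj[0], axis=0))
```

Any method that needs matched particle labels must first solve this matching problem; the self-test method never does.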
Demo 2
Our self-test method recovers the gradient of both potentials. Solid line = true gradient; dashed = estimated. The error percentages are from our actual experiments (Tables 2 & 4 in the paper).
Try switching to Model B (Conditioning) — notice how Φ estimation degrades due to the ill-conditioned normal matrix (κ = 146K). This is the most difficult benchmark here for basis methods; the neural network variant partially recovers it.
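The conditioning effect can be illustrated with a generic least-squares toy in plain NumPy (unrelated to the repo's basis or solver): when the normal matrix is ill-conditioned, tiny noise in the data is amplified into large errors in the recovered basis coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
A = np.vander(x, 10, increasing=True)        # monomial basis 1, x, ..., x^9: ill-conditioned
coef_true = rng.standard_normal(10)
y = A @ coef_true + 1e-6 * rng.standard_normal(200)   # near-perfect data

G = A.T @ A                                  # normal matrix
kappa = np.linalg.cond(G)
coef_hat = np.linalg.solve(G, A.T @ y)       # naive normal-equation solve

rel_err = np.linalg.norm(coef_hat - coef_true) / np.linalg.norm(coef_true)
print(f"cond(G) = {kappa:.2e}, relative coefficient error = {rel_err:.2e}")
```

This is why `reg='auto'` style regularization (or a better-conditioned basis) matters for Model B.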
Demo 3
A more stringent qualitative test: use the learned potentials to forward-simulate trajectories. Left: true dynamics. Right: predicted dynamics using estimated V̂ and Φ̂. Both start from identical initial conditions with the same noise realization, so trajectory divergence mainly reflects estimation error in the learned potentials.
For the Reference model (E), predicted trajectories stay close to truth. Switch to Model B or C to see how estimation errors compound — small gradient errors can cause significant trajectory divergence over time.
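The shared-noise comparison can be sketched in one dimension (hypothetical toy, not the demo's code): simulate with the true drift and a slightly mis-estimated drift using the same Brownian increments, so any gap between trajectories is attributable to the drift error alone.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps, sigma = 1e-3, 1000, 0.5
drift_true = lambda x: -2.0 * x              # -dV/dx for V(x) = x^2
drift_hat  = lambda x: -2.1 * x              # estimated gradient, 5% off

x_true = x_hat = 1.0                         # identical initial condition
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal() # SAME noise increment for both
    x_true += drift_true(x_true) * dt + sigma * dW
    x_hat  += drift_hat(x_hat)  * dt + sigma * dW

print(f"trajectory gap after T={steps * dt:g}: {abs(x_true - x_hat):.4f}")
```

Even a 5% gradient error produces a visible gap that grows with the horizon, which is why forward simulation is such a stringent test.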
Reading Guide
This is the key regime story behind the demos. The advantage is not "magic denoising"; it is that self-test works directly on unlabeled snapshots, avoids label matching, and avoids finite-difference velocity estimation.
If labels are known and Δt is tiny, trajectory-based MLE can use nearly unbiased velocity information. Self-test is not claiming to dominate that ideal regime.
As Δt grows, label recovery worsens and finite-difference velocity estimation becomes biased. This is where MLE and Sinkhorn-style pipelines degrade quickly.
Self-test still has quadrature/discretization error, but it avoids the extra failure mode coming from velocity estimation. That is why it is usually more stable in coarse-observation, unlabeled settings.
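The Δt-dependent bias of finite-difference velocities can be checked in closed form on an Ornstein–Uhlenbeck toy (illustrative, not from the paper): for dX = −kX dt + σ dW, the conditional mean of (X_{t+Δt} − X_t)/Δt is x(e^{−kΔt} − 1)/Δt, which drifts away from the true value −kx as Δt grows.

```python
import numpy as np

k, x = 2.0, 1.0
biases = []
for dt in (1e-3, 1e-1, 0.5):
    # E[(X_{t+dt} - X_t)/dt | X_t = x] = x (e^{-k dt} - 1) / dt for an OU process
    fd_drift = x * (np.exp(-k * dt) - 1.0) / dt
    biases.append(abs(fd_drift - (-k * x)))
    print(f"Δt={dt:g}: finite-diff drift estimate {fd_drift:+.4f} vs true {-k * x:+.4f}")
assert biases[0] < biases[1] < biases[2]   # bias grows with Δt
```

At Δt = 10⁻³ the bias is negligible; at Δt = 0.5 the estimate is off by more than a third, and no amount of data fixes it.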
Interpretation guide for the homepage energy panel: it is a diagnostic visualization of a free-energy balance identity, not the objective that Self-Test LSE/NN directly optimize. The optimized target is the quadratic self-test loss.
For Developers
Get started in 5 minutes. The codebase supports both basis regression (fast closed-form linear solve within a chosen basis) and neural network learning (flexible, without a hand-specified oracle basis).
```shell
git clone https://github.com/ViskaWei/lips_unlabeled_data
cd lips_unlabeled_data
pip install -e .
```

Simulate a dataset:

```python
from core.potentials import HarmonicPotential, GaussianInteraction
from core.sde_simulator import SDESimulator

V = HarmonicPotential(k=2.0)                 # confinement potential
Phi = GaussianInteraction(A=1.0, sigma=0.8)  # pairwise interaction
sim = SDESimulator(V=V, Phi=Phi, sigma=1.0, dt=0.0001)  # simulation step δt = 10⁻⁴
data, t_obs = sim.simulate(N=10, d=2, T=1.0, L=100, M=2000)
```

Fit both potentials with the self-test solver:

```python
from lib.basis import get_basis
from lib.solvers import solve_selftest

build_V, build_Phi, K_V, K_Phi, _ = get_basis('oracle', 'model_e')
alpha, beta, info = solve_selftest(
    data, t_obs, sigma=1.0,
    build_V_fn=build_V, build_Phi_fn=build_Phi,
    K_V=K_V, K_Phi=K_Phi, reg='auto'
)
```

Evaluate against the ground truth:

```python
from lib.eval import evaluate_kde

v_err, phi_err = evaluate_kde('model_e', d=2, alpha=alpha, beta=beta,
                              build_V_fn=build_V, build_Phi_fn=build_Phi)
print("V error: %.2f%%" % (100 * v_err))
print("Phi error: %.2f%%" % (100 * phi_err))
# Expected: V ~0.67%, Phi ~0.74% for the Reference model at Δt = 10⁻³
```