Q1 2026 · Live

Observatory

See the model thinking, feature by feature, token by token.

The narrative layer Neuronpedia lacks. Scrub a prompt through the residual stream, watch features ignite, click into circuits, and compare reasoning across models.


Backing SAEs

Every trace, circuit, and atlas entry is grounded in a publicly trained sparse autoencoder. All artifacts are on HuggingFace; all training recipes are reproducible.

Qwen/Qwen3.6-27B
Dense reasoning-tuned · 3 layers in parallel (L11/L31/L55) · d_sae 65,536 · var_exp 0.843
HF
Qwen/Qwen3.5-4B
Hybrid Gated DeltaNet · d_sae 40,960 · var_exp 0.866
HF
Google/Gemma-4-E4B
Ensemble MoE · d_sae 32,768 · var_exp 0.939
HF
Qwen/Qwen3.6-35B-A3B
Triple-hybrid (MoE + GDN + Gated Attention) · d_sae 32,768 · var_exp 0.835
HF
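The var_exp figures above measure how much of the residual-stream variance each SAE's reconstruction recovers. As a minimal sketch of the idea, here is a toy ReLU sparse autoencoder with random weights and illustrative sizes (the real checkpoints use d_sae up to 65,536); the function names and shapes are assumptions for illustration, not the released code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 512  # toy sizes; released SAEs are far larger

# Random toy weights -- NOT the published checkpoints.
W_enc = rng.standard_normal((d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.standard_normal((d_sae, d_model)) * 0.1
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU encoder: one sparse, non-negative feature vector per token.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(f):
    # Linear decoder: reconstruct the residual-stream activation.
    return f @ W_dec + b_dec

def variance_explained(x, x_hat):
    # Standard definition: 1 - residual variance / total variance.
    return 1.0 - ((x - x_hat) ** 2).sum() / ((x - x.mean(0)) ** 2).sum()

x = rng.standard_normal((32, d_model))  # 32 tokens of residual stream
f = encode(x)                           # feature activations to inspect
x_hat = decode(f)                       # reconstruction
print(f.shape, round(variance_explained(x, x_hat), 3))
```

With trained weights, `variance_explained` is what the var_exp column reports; with these random weights the number is meaningless and serves only to show the computation.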