Hybrid-architecture SAEs
First public TopK residual-stream SAEs on Gated DeltaNet, ensemble MoE, and triple-hybrid MoE+GDN+Gated-Attn. No one else has released these.
The first open stack for training sparse autoencoders on hybrid architectures and using their features as per-token reward signals in reinforcement learning.
pip install mechrewardalpha

Every card below corresponds to a public artifact: a trained SAE, a validated feature pack, a protocol, or an ablation result. No vaporware.
Stage Gate 1 correlation ρ=0.52–0.54 on held-out GSM8K / SuperGPQA. Features predict answer correctness across architectures.
Per-token SAE feature activations as dense reward inside GRPO. Qwen3.5-4B → +19 pp on GSM8K in 168 effective training steps.
Same protocol, same contrastive reward formula, runs on 4B dense-GDN, 9B ensemble-MoE, and 35B-A3B triple-hybrid. Thesis transfers.
Our G2 ablation (R1 SAE-sparse vs R2 raw-direction) shows an 11 pp gap on GSM8K. Sparse decomposition is causal, not cosmetic.
Every SAE, every reward pack, every evaluation result is public. No black boxes. Stage Gates are reproducible step by step.
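The per-token reward described in these cards can be sketched in a few lines. This is an illustrative contrastive form, not the shipped formula: each token earns the summed activation of correctness-predictive SAE features minus the summed activation of anti-correlated ones, scaled by a coefficient and added to the sparse outcome reward. The feature index sets, the coefficient `lam`, and the pooling are all assumptions; the validated packs supply the real ones.

```python
import numpy as np

def per_token_mech_reward(feat_acts, pos_ids, neg_ids, lam=0.05):
    """Contrastive per-token reward from SAE feature activations.

    feat_acts : (seq_len, d_sae) TopK-sparse activations for one completion.
    pos_ids / neg_ids : hypothetical index sets of features that predict
    correct / incorrect answers (a validated pack supplies the real ones).
    """
    pos = feat_acts[:, pos_ids].sum(axis=1)   # correctness-aligned signal
    neg = feat_acts[:, neg_ids].sum(axis=1)   # anti-correlated signal
    return lam * (pos - neg)                  # (seq_len,) dense reward

def grpo_token_rewards(feat_acts, outcome_correct, pos_ids, neg_ids, lam=0.05):
    """Dense reward stream: sparse outcome reward on the final token plus
    the mechanistic shaping term on every token."""
    r = per_token_mech_reward(feat_acts, pos_ids, neg_ids, lam)
    r[-1] += 1.0 if outcome_correct else 0.0
    return r
```

In GRPO the per-token stream is then aggregated into each sampled completion's return before group-relative advantage normalization.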
First TopK residual-stream SAEs on previously unreachable architectures.
Hybrid Gated DeltaNet · Residual post-L18 · 16× expansion · 200M training tokens
First TopK residual-stream SAE for hybrid GDN
Ensemble MoE · Residual post-L21 · 16× expansion · 1B training tokens
First public SAE for Gemma-4 ensemble-MoE
Triple-hybrid (MoE + GDN + Gated Attention) · Residual post-L23 · 16× expansion · 92M training tokens (WIP)
First public SAE on triple-hybrid MoE+GDN+Gated-Attention. No precedent in literature.
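All three SAEs above follow the same TopK recipe: a residual-stream activation is encoded into a 16×-wider latent space, only the k largest pre-activations are kept, and the rest are zeroed before decoding. A minimal numpy sketch of that forward pass (shapes, k, and the random weights are illustrative; the trained weights are the released artifacts):

```python
import numpy as np

def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k=32):
    """TopK SAE forward pass: encode, keep the k largest latents, decode.

    x     : (d_model,) residual-stream activation
    W_enc : (d_model, d_sae), W_dec : (d_sae, d_model), d_sae = 16 * d_model
    """
    pre = (x - b_dec) @ W_enc + b_enc        # pre-activations, (d_sae,)
    acts = np.maximum(pre, 0.0)              # ReLU
    idx = np.argsort(acts)[-k:]              # indices of the k largest latents
    z = np.zeros_like(acts)
    z[idx] = acts[idx]                       # sparse feature vector, <= k nonzero
    x_hat = z @ W_dec + b_dec                # reconstruction
    return z, x_hat

# Toy usage with random weights (illustrative only; not the released SAEs).
rng = np.random.default_rng(0)
d_model, d_sae = 64, 16 * 64
W_enc = rng.standard_normal((d_model, d_sae)) / np.sqrt(d_model)
W_dec = rng.standard_normal((d_sae, d_model)) / np.sqrt(d_sae)
b_enc, b_dec = np.zeros(d_sae), np.zeros(d_model)
z, x_hat = topk_sae_forward(rng.standard_normal(d_model),
                            W_enc, b_enc, W_dec, b_dec, k=32)
```

The sparse vector z is what the reward packs consume: per-token feature activations, not the reconstruction.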
Honest comparison against the four closest prior works. All numbers are from the published papers; we don't soften or spin.
Linear probes on activations → online RL reward
Their result: 58% hallucination reduction on Gemma-3-12B-IT
How we differ: We use sparse TopK SAE features instead of raw probes; the 11 pp R1-vs-R2 gap in our G2 is the empirical argument for why decomposition matters.
PPO with SAE features as action space (select which feature to amplify)
Their result: +1.03 pp on GSM8K with Gemma-2-2B
How we differ: We use SAE features as the reward signal itself, not as an action space. +19 pp on Qwen3.5-4B GSM8K. Methods are complementary (different axes of using SAE features in RL).
SAE features → linear head → frozen reward model for offline RLHF
Their result: Preference-model quality improvements
How we differ: We are online, per-token, and target reasoning on hybrid architectures — not preference modeling on dense transformers.
SAE feature amplification at inference (contrastive around reasoning vocabulary)
Their result: +13.4% AIME-2024 on DeepSeek-R1-Distill-Llama-8B
How we differ: Inference-time intervention vs training-time reward. We ported ReasonScore into our library for completeness and ran it on Qwen3.5-4B; it surfaces rhetoric features, not correctness features.
Don't spend GPU hours on RL until you've verified the signal predicts the outcome. Every validated pack in the catalog has passed all three gates.
Gate 1: Verify that features predict the outcome on held-out data before spending GPU hours on RL.
Gate 2: Compare outcome-only (R0) vs outcome + SAE-sparse (R1) vs outcome + raw-direction (R2) rewards.
Gate 3: Scale up with the per-token mech-reward, an MMLU preservation check, and an adversarial canary suite.
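Gate 1 can be sketched as a rank-correlation screen: pool each feature's activation over a held-out problem, correlate it with answer correctness, and keep only features clearing the ρ ≥ 0.30 ship threshold. A minimal tie-aware Spearman implementation (the per-problem pooling choice is an assumption):

```python
import numpy as np

def average_ranks(v):
    """0-based ranks with ties assigned their average rank."""
    order = np.argsort(v, kind="stable")
    sv = v[order]
    ranks = np.empty(len(v))
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and sv[j + 1] == sv[i]:
            j += 1                            # extend the tie group
        ranks[order[i:j + 1]] = 0.5 * (i + j)  # average rank for the group
        i = j + 1
    return ranks

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of average ranks."""
    return float(np.corrcoef(average_ranks(a), average_ranks(b))[0, 1])

def gate1_pass(feature_acts, correct, threshold=0.30):
    """feature_acts : (n_problems,) pooled activation of one feature per
    held-out problem; correct : (n_problems,) 0/1 answer outcomes."""
    return spearman_rho(feature_acts, np.asarray(correct, float)) >= threshold
```

Only features that pass this screen on held-out GSM8K / SuperGPQA make it into a reward pack.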
Per-token SAE-feature reward lifts Qwen3.5-4B from 64% to 83% on GSM8K in 168 effective training steps, +7 pp above the same-SAE trajectory-level G2 R1 ceiling of 76%. No MMLU regression; hack rate within the baseline 95% CI.
First cross-architecture validation. An SAE trained on 92M tokens (46% of the Qwen3.5-4B budget) already matches the Qwen3.5-4B correlation level (ρ=0.540). The signal transfers to the triple-hybrid MoE.
mechreward drops into TRL, OpenRLHF, and verl with a single import. Every feature pack is validated at ρ ≥ 0.30 on held-out data before it ships.
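For TRL specifically, the integration point is the GRPOTrainer `reward_funcs` contract: a callable that receives the sampled completions and returns one float per completion. The sketch below is self-contained and only illustrates that shape; the stand-in scorer is a placeholder, and the real pack loading (SAE weights plus Gate-1-validated feature indices) is hypothetical here, not the actual mechreward API.

```python
from typing import List

def load_feature_scorer():
    """Stand-in for a validated feature pack. Hypothetical: the real pack
    scores text via SAE feature activations, not this toy proxy."""
    def score(text: str) -> float:
        # Placeholder signal: density of digits and '=' characters.
        return sum(ch.isdigit() or ch == "=" for ch in text) / max(len(text), 1)
    return score

scorer = load_feature_scorer()

def mech_reward_func(completions: List[str], **kwargs) -> List[float]:
    """Callable shaped like a TRL GRPOTrainer `reward_funcs` entry:
    one scalar reward per sampled completion."""
    return [scorer(c) for c in completions]

# Hypothetical wiring (TRL side only):
# trainer = GRPOTrainer(model=..., reward_funcs=[mech_reward_func], ...)
```

OpenRLHF and verl expose analogous reward-function hooks, so the same callable shape carries over.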