RewardHackGuard PoC
Detect emergent reward-hacking generalization · Anthropic Nov 2025 framing
SAE-feature-combination probe on the residual stream of Qwen3.6-27B. Trained to detect activation patterns associated with reward-hacking generalization (Anthropic 2511.18397). Key for any team running GRPO / DPO / RLHF post-training.
Quickstart
Three lines via the openinterp.probebench SDK. Returns the probe's P(positive_class) for a tensor of activations captured at the layer/position below.
from openinterp import probebench
probe = probebench.load("openinterp/rewardhackguard-qwen35-4b-l18")
score = probe.score(activations)  # → P(positive_class)

ProbeScore — 8 weighted axes

Composite metric in [0, 1]. No single axis can dominate; we think this discourages single-metric optimization. Numbers update as new evaluations land.
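To make the composite concrete, here is a minimal sketch of a weighted 8-axis score. The axis names and weights are illustrative placeholders, not the actual ProbeBench schema:

```python
# Illustrative sketch only: axis names and weights are hypothetical,
# not the real ProbeBench weighting. Weights must sum to 1.0 so the
# composite stays in [0, 1] when every axis is in [0, 1].
AXES = {
    "auroc": 0.20, "calibration": 0.15, "eval_aware": 0.15, "dist_shift": 0.15,
    "transfer": 0.10, "latency": 0.10, "artifact_size": 0.075, "reproducibility": 0.075,
}
assert abs(sum(AXES.values()) - 1.0) < 1e-9

def probescore(axis_scores: dict) -> float:
    """Weighted mean of per-axis scores, each already normalized to [0, 1]."""
    return sum(AXES[name] * axis_scores[name] for name in AXES)
```

Because each weight is strictly below 0.5, no single axis can push the composite past its own contribution, which is the property the text describes.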
ProbeScore — 8-axis breakdown
ProbeBench v0.0.2 · 8 axes weighted to sum 1.0

Visual breakdown of where this probe scores high vs. low across all 8 axes. Polygon area grows with the total score; the shape reveals which axes carry it (and which it sacrifices).
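The radar polygon's area can be computed from the per-axis radii; a small sketch, assuming axes at equal angular spacing (note the area is quadratic in the radii, so the shape matters as well as the sum):

```python
import math

def radar_area(radii):
    """Area of a radar-chart polygon whose vertices sit at distance radii[i]
    along n equally spaced axes: 0.5 * sin(2*pi/n) * sum(r_i * r_{i+1})."""
    n = len(radii)
    return 0.5 * math.sin(2 * math.pi / n) * sum(
        radii[i] * radii[(i + 1) % n] for i in range(n)
    )
```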
Per-task evaluation
Headline AUROC alongside the eval-awareness-corrected and distribution-shift versions for each task. Reproducers run from a single notebook in one Colab session.
| Task | n | AUROC [95% CI] | Eval-aware corrected | Dist-shift | ECE | FPR@99TPR | Latency | Reproducer |
|---|---|---|---|---|---|---|---|---|
| HaluEval-QA Hallucination · hallucinated | 200 | 0.650 [0.56, 0.74] | 0.590 | 0.520 | 0.150 | 0.320 | 1.8 ms | — |
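For readers re-deriving the table's headline columns, here is a pure-Python sketch of AUROC and FPR@99TPR from paired (label, score) data. This is a stand-in for a sklearn-style implementation, not ProbeBench's evaluation code:

```python
import math

def auroc(labels, scores):
    """Probability a random positive outscores a random negative (ties = 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fpr_at_tpr(labels, scores, tpr=0.99):
    """False-positive rate at the loosest threshold recovering `tpr` positives."""
    pos = sorted((s for l, s in zip(labels, scores) if l == 1), reverse=True)
    thresh = pos[max(0, math.ceil(tpr * len(pos)) - 1)]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    return sum(s >= thresh for s in neg) / len(neg)
```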
Cross-model transfer (Pearson_CE)
Pearson correlation of paired ablation effects across models. ≥ 0.7 suggests the probe's direction is shared; 0.4–0.7 suggests partial transfer; < 0.4 suggests retraining per architecture.
Architecture & artifact
Base model

- Family: Qwen
- Params: 27B
- Architecture: Hybrid GDN + Gated-Attn (dense, reasoning)
- Layers: 64
- d_model: 5120
- Weights license: Apache-2.0
Artifact

- Params: 480,000
- Size: 1.8 MB
- License: Apache-2.0
- Released: 2026-05-XX
- sha256: rh8d4c0e6c5b…c0b9a8f7
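The `openinterp probebench verify` CLI handles checksum verification; as a local stand-in, here is a sketch that streams a file through sha256 and compares against a digest prefix you supply (the file path and prefix are caller-provided, not baked in):

```python
import hashlib

def sha256_prefix_matches(path: str, expected_prefix: str) -> bool:
    """Stream the artifact file and compare its sha256 hex digest against a
    (possibly truncated) expected prefix. Local sketch only; the official
    `openinterp probebench verify` command is the supported path."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest().startswith(expected_prefix.lower())
```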
openinterp probebench verify openinterp/rewardhackguard-qwen35-4b-l18

Reproduce
Three entry points. Pick one. Each path lands at the same numbers.
git clone https://github.com/OpenInterpretability/notebooks.git
pip install openinterp
openinterp probebench reproduce openinterp/rewardhackguard-qwen35-4b-l18
Honest scope
Limits derived from the evaluation data above. We think these are the honest constraints; revisions will land as more reproducers run.
- Trained / evaluated on: HaluEval-QA. Performance outside these tasks is unmeasured here.
- Eval-aware AUROC drop: −0.060 AUROC averaged across tasks (uncorrected − corrected for eval-awareness confound, arXiv:2509.13333).
- Distribution-shift drop: −0.130 AUROC averaged across tasks (in-distribution − long-context / OOD).
- Cross-model fit (mean Pearson_CE): 0.500 — values below 0.4 suggest retraining per architecture; values above 0.7 suggest the direction is shared.
- Probe attaches at: L31 · token_avg of Qwen3.6-27B. Other layers / positions are out of scope unless re-trained.
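The two averaged drops quoted above can be re-derived directly from the per-task table (a single task here, HaluEval-QA):

```python
# Re-derive the Honest-scope deltas from the per-task evaluation table.
rows = [
    {"task": "HaluEval-QA", "auroc": 0.650, "eval_aware": 0.590, "dist_shift": 0.520},
]

eval_aware_drop = sum(r["auroc"] - r["eval_aware"] for r in rows) / len(rows)
dist_shift_drop = sum(r["auroc"] - r["dist_shift"] for r in rows) / len(rows)
# → eval_aware_drop ≈ 0.060, dist_shift_drop ≈ 0.130, matching the bullets above.
```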
ProbeBench v0.0.1 · 5 probes registered · 11 evaluations · 7 transfer measurements. Schema and weights subject to revision.