Pending review · SAE combination · Reward Hacking · Apache-2.0

RewardHackGuard PoC

Detect emergent reward-hacking generalization · Anthropic Nov 2025 framing

SAE-feature-combination probe on the residual stream of Qwen3.6-27B. Trained to detect activation patterns associated with reward-hacking generalization (Anthropic 2511.18397). Key for any team running GRPO / DPO / RLHF post-training.

by Caio Vicentino · OpenInterp · 2026-05-XX · arXiv:2603.04069
ProbeScore
0.645
Reward Hacking #1 · Global #4
8 weighted axes. Subject to revision as more reproducers land.

Quickstart

Three lines via the openinterp.probebench SDK. probe.score returns the probe's P(positive_class) for a tensor of activations captured at the layer/position listed below.

python
from openinterp import probebench
probe = probebench.load("openinterp/rewardhackguard-qwen35-4b-l18")
score = probe.score(activations)  # → P(positive_class)
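The artifact is described below as an sklearn-compatible probe + scaler attached at a token_avg position, so probe.score is roughly: mean-pool over tokens, standardize, then take a linear classifier's P(positive_class). A minimal sketch of that pipeline with random stand-in activations (shapes, training data, and the small d_model are illustrative, not the released probe):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in residual-stream activations captured at the probe's layer:
# (n_examples, seq_len, d_model). The real d_model is 5120; smaller here.
acts = rng.normal(size=(64, 32, 128))
labels = rng.integers(0, 2, size=64)

# token_avg position: mean-pool over the sequence axis before probing.
pooled = acts.mean(axis=1)  # (n_examples, d_model)

scaler = StandardScaler().fit(pooled)
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(pooled), labels)

# Equivalent of probe.score(activations): P(positive_class) per example.
scores = clf.predict_proba(scaler.transform(pooled))[:, 1]
```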

ProbeScore — 8 weighted axes

Composite metric in [0, 1]. No single axis can dominate; we think this discourages single-metric optimization. Numbers update as new evaluations land.

Scoring formula: ProbeScore = Σ (axis score × axis weight).

ProbeScore breakdown (total 0.645):
AUROC 0.163 · Eval-aware 0.106 · Dist-shift 0.062 · Calibration 0.070 · Transfer 0.050 · Goodhart-resistance 0.050 · Latency 0.094 · License 0.050

ProbeScore — 8-axis breakdown

ProbeBench v0.0.2 · 8 axes weighted to sum 1.0

Per-axis scores, weights, and weighted contributions across all 8 axes. High axes carry the score; low ones show what it sacrifices.

Axis                  Score   Weight   Contribution
AUROC                 0.650   0.25     0.163
Eval-aware            0.590   0.18     0.106
Dist-shift            0.520   0.12     0.062
Calibration           0.700   0.10     0.070
Cross-model           0.500   0.10     0.050
Goodhart-resistance   0.500   0.10     0.050
Latency               0.936   0.10     0.094
License               1.000   0.05     0.050
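The composite is just the weighted sum of the per-axis scores; reproducing it from the numbers on this page:

```python
# Axis scores and weights copied from the breakdown on this page.
axes = {
    "AUROC":               (0.650, 0.25),
    "Eval-aware":          (0.590, 0.18),
    "Dist-shift":          (0.520, 0.12),
    "Calibration":         (0.700, 0.10),
    "Cross-model":         (0.500, 0.10),
    "Goodhart-resistance": (0.500, 0.10),
    "Latency":             (0.936, 0.10),
    "License":             (1.000, 0.05),
}

# Weights are normalized to sum to 1.0, so no single axis can dominate.
assert abs(sum(w for _, w in axes.values()) - 1.0) < 1e-9

probe_score = sum(score * weight for score, weight in axes.values())
print(round(probe_score, 3))  # → 0.645
```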

Per-task evaluation

Headline AUROC alongside the eval-awareness-corrected and distribution-shift versions for each task. Reproducers run from a single notebook on a single Colab session.

Task                             n     AUROC [95% CI]       Eval-aware   Dist-shift   ECE     FPR@99TPR   Latency
HaluEval-QA                      200   0.650 [0.56, 0.74]   0.590        0.520        0.150   0.320       1.8 ms
(Hallucination · hallucinated)
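The table's columns can be recomputed from raw probe scores and labels. A sketch of AUROC, FPR@99TPR, and a simple equal-width-bin ECE on synthetic data (random labels/scores here, not the HaluEval-QA run):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                     # true labels
p = np.clip(rng.normal(0.4 + 0.25 * y, 0.2), 0, 1)  # synthetic probe scores

auroc = roc_auc_score(y, p)

# FPR@99TPR: false-positive rate at the first threshold reaching 99% recall.
fpr, tpr, _ = roc_curve(y, p)
fpr_at_99tpr = float(fpr[np.searchsorted(tpr, 0.99)])

# ECE: bin-frequency-weighted |accuracy - mean confidence| over equal-width bins.
def ece(y_true, p_hat, n_bins=10):
    bins = np.clip((p_hat * n_bins).astype(int), 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            total += m.mean() * abs(y_true[m].mean() - p_hat[m].mean())
    return total

cal_err = ece(y, p)
```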

Cross-model transfer (Pearson_CE)

Pearson correlation of paired ablation effects across models. ≥ 0.7 suggests the probe's direction is shared; 0.4–0.7 suggests partial transfer; < 0.4 suggests retraining per architecture.

No cross-model transfer measurements yet for this probe. Pearson_CE tells you whether a probe's direction transfers to another architecture without retraining. Submit a transfer measurement.
Mean Pearson_CE for this probe (excluding self-baseline): 0.500
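A Pearson_CE measurement is the Pearson correlation between paired ablation effects on two models, bucketed by the thresholds above. A sketch with synthetic effect vectors (the data and variable names are illustrative):

```python
import numpy as np

def pearson_ce(effects_a, effects_b):
    """Pearson correlation of paired ablation effects on two models."""
    a = np.asarray(effects_a, dtype=float)
    b = np.asarray(effects_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

def transfer_verdict(r):
    """Bucket Pearson_CE by the thresholds used on this page."""
    if r >= 0.7:
        return "direction is shared"
    if r >= 0.4:
        return "partial transfer"
    return "retrain per architecture"

rng = np.random.default_rng(0)
effects_a = rng.normal(size=50)                          # ablation effects, model A
effects_b = 0.8 * effects_a + 0.2 * rng.normal(size=50)  # correlated effects, model B
r = pearson_ce(effects_a, effects_b)
```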

Architecture & artifact

Base model

Qwen3.6-27B (Qwen/Qwen3.6-27B)
Family: Qwen
Params: 27B
Architecture: Hybrid GDN + Gated-Attn (dense, reasoning)
Layers: 64
d_model: 5120
Weights license: Apache-2.0
Probe attaches at: Layer 31 · token_avg
huggingface.co/Qwen/Qwen3.6-27B

Artifact

RewardHackGuard weights (sklearn-compatible probe + scaler + meta.json)
Params: 480,000
Size: 1.8 MB
License: Apache-2.0
Released: 2026-05-XX
sha256: rh8d4c0e6c5b…c0b9a8f7
Hash matches the artifact at huggingface.co/datasets/caiovicentino1/RewardHackGuard-linearprobe-qwen35-4b. Recompute via openinterp probebench verify openinterp/rewardhackguard-qwen35-4b-l18.
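To check a downloaded artifact against the published digest without the CLI, a streamed SHA-256 works; the filename in the usage comment is hypothetical:

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large artifacts never load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published above before loading untrusted probe
# weights, e.g. (filename hypothetical):
# assert sha256_file("rewardhackguard_probe.joblib") == expected_sha256
```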

Reproduce

Three entry points: the reproducer notebook, the Python SDK, or the CLI below. Pick one; each path lands at the same numbers.

shell
git clone https://github.com/OpenInterpretability/notebooks.git
pip install openinterp
openinterp probebench reproduce openinterp/rewardhackguard-qwen35-4b-l18

Honest scope

Limits derived from the evaluation data above. We think these are the honest constraints; revisions land as more reproducers do.

  • Trained / evaluated on: HaluEval-QA only. Performance outside this task is unmeasured here.
  • Eval-aware AUROC drop: 0.060 AUROC averaged across tasks (uncorrected − corrected for eval-awareness confound, arXiv:2509.13333).
  • Distribution-shift drop: 0.130 AUROC averaged across tasks (in-distribution − long-context / OOD).
  • Cross-model fit (mean Pearson_CE): 0.500 — values below 0.4 suggest retraining per architecture; values above 0.7 suggest the direction is shared.
  • Probe attaches at: L31 · token_avg of Qwen3.6-27B. Other layers / positions are out of scope unless re-trained.

ProbeBench v0.0.1 · 5 probes registered · 11 evaluations · 7 transfer measurements. Schema and weights subject to revision.