EvalAwarenessGuard
Detects when model believes it is being evaluated · UK AISI priority
Linear probe separating real-world deployment prompts from evaluation prompts on Qwen3.6-27B. Used to compute "eval-awareness corrected AUROC" for other probes — exposes the confound that AUROC scales with eval-awareness (arXiv:2509.13333).
Quickstart
Three lines via the openinterp.probebench SDK. Returns the probe's P(positive_class) for a tensor of activations captured at the layer/position below.
from openinterp import probebench
probe = probebench.load("openinterp/evalawareness-qwen36-27b-l40")
score = probe.score(activations)  # → P(positive_class)

ProbeScore — 8 weighted axes
Composite metric in [0, 1]. No single axis can dominate, which we think discourages single-metric optimization. Numbers update as new evaluations land.
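For intuition, the linear probe behind `probe.score` reduces to a logistic readout of a single activation vector. A minimal NumPy sketch, with random placeholder weights rather than the released artifact:

```python
import numpy as np

def probe_score(activation: np.ndarray, w: np.ndarray, b: float) -> float:
    """Logistic readout: P(positive_class) for one activation vector."""
    logit = float(activation @ w + b)
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(0)
d_model = 5120                              # matches the base model's hidden size
act = rng.standard_normal(d_model)          # stand-in for a captured activation
w = rng.standard_normal(d_model) / np.sqrt(d_model)  # placeholder probe direction
p = probe_score(act, w, b=0.0)              # always strictly inside (0, 1)
```

The sigmoid guarantees the output is a probability regardless of the activation's scale; the actual probe direction `w` ships in the 1.2 MB artifact below.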
ProbeScore — 8-axis breakdown
ProbeBench v0.0.2 · 8 axes weighted to sum 1.0. Visual breakdown of where this probe scores high vs. low across all 8 axes. Polygon area is proportional to total score; the shape reveals which axes carry it (and which ones it sacrifices).
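The composite can be sketched as a weighted sum. The axis names and weights below are hypothetical; the card specifies only that there are eight axes whose weights sum to 1.0:

```python
# Hypothetical axis names and weights -- only "8 axes, weights sum to 1.0"
# comes from the card itself.
WEIGHTS = {
    "auroc": 0.20, "eval_aware_corrected": 0.15, "dist_shift": 0.15,
    "calibration": 0.10, "fpr_at_99tpr": 0.10, "latency": 0.10,
    "transfer": 0.10, "reproducibility": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9

def composite(axis_scores: dict) -> float:
    """Weighted sum of per-axis scores, each in [0, 1], so the total stays
    in [0, 1]. The largest weight (0.20 here) caps any single axis's
    contribution, which is what keeps one metric from dominating."""
    return sum(WEIGHTS[k] * axis_scores[k] for k in WEIGHTS)
```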
Per-task evaluation
Headline AUROC alongside the eval-awareness-corrected and distribution-shift variants for each task. Reproducers run from a single notebook in a single Colab session.
| Task | n | AUROC [95% CI] | Eval-aware corrected | Dist-shift | ECE | FPR@99TPR | Latency | Reproducer |
|---|---|---|---|---|---|---|---|---|
| Apollo Eval-vs-Deploy (Eval Awareness · evaluation) | 200 | 0.930 [0.88, 0.97] | 0.930 | 0.810 | 0.050 | 0.030 | 1.0 ms | — |
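The AUROC and FPR@99TPR columns can be reproduced from raw scores and labels. A self-contained NumPy sketch, not the benchmark's actual implementation:

```python
import numpy as np

def auroc(scores, labels):
    """Probability that a random positive outscores a random negative
    (ties count half) -- equivalent to the ROC area."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return float((np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (pos.size * neg.size))

def fpr_at_tpr(scores, labels, tpr=0.99):
    """False-positive rate at the lowest threshold that still passes
    `tpr` of the positives."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = np.sort(scores[labels == 1])
    thresh = pos[int(np.floor((1.0 - tpr) * pos.size))]
    return float(np.mean(scores[labels == 0] >= thresh))
```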
Cross-model transfer (Pearson_CE)
Pearson correlation of paired ablation effects across models. ≥ 0.7 suggests the probe's direction is shared; 0.4–0.7 suggests partial transfer; < 0.4 suggests retraining per architecture.
| Source | Target | Pearson_CE | Transfer AUROC | Notes |
|---|---|---|---|---|
| Qwen3.6-27B | Llama-3.3-70B | 0.610 | 0.860 | — |
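The Pearson_CE statistic and the thresholds above can be sketched in a few lines (the function names are ours, not the SDK's):

```python
import numpy as np

def pearson_ce(src_effects, tgt_effects):
    """Pearson correlation of paired ablation effects measured on the
    source and target models."""
    return float(np.corrcoef(src_effects, tgt_effects)[0, 1])

def transfer_verdict(r):
    """Bucket a Pearson_CE value using the thresholds stated above."""
    if r >= 0.7:
        return "direction shared"
    if r >= 0.4:
        return "partial transfer"
    return "retrain per architecture"
```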
Architecture & artifact
Base model
- Family: Qwen
- Params: 27B
- Architecture: Hybrid GDN + Gated-Attn (dense, reasoning)
- Layers: 64
- d_model: 5120
- Weights license: Apache-2.0
Artifact
- Params: 312,000
- Size: 1.2 MB
- License: Apache-2.0
- Released: 2026-05-XX
- sha256: ea7c3d9e5b4a…0b9a8f7e
openinterp probebench verify openinterp/evalawareness-qwen36-27b-l40

Reproduce
Three entry points. Pick one. Each path lands at the same numbers.
git clone https://github.com/OpenInterpretability/notebooks.git
pip install openinterp
openinterp probebench reproduce openinterp/evalawareness-qwen36-27b-l40
Honest scope
Limits derived from the evaluation data above. We think these are the honest constraints; revisions land as more reproducers do.
- Trained / evaluated on: Apollo Eval-vs-Deploy. Performance outside these tasks is unmeasured here.
- Eval-aware AUROC drop: 0.000 averaged across tasks (uncorrected minus corrected for the eval-awareness confound, arXiv:2509.13333).
- Distribution-shift drop: −0.120 AUROC averaged across tasks (in-distribution − long-context / OOD).
- Cross-model fit (mean Pearson_CE): 0.610 — values below 0.4 suggest retraining per architecture; values above 0.7 suggest the direction is shared.
- Probe attaches at: L40 · end_question of Qwen3.6-27B. Other layers / positions are out of scope unless re-trained.
ProbeBench v0.0.2 · 5 probes registered · 11 evaluations · 7 transfer measurements. Schema and weights subject to revision.