Benchmark page
GPT-5.5 scored 29.3% strict accuracy on ChessBench, solving 44 of 150 fixed Lichess tactics when the answer had to match the expected UCI move line exactly.
Strict accuracy: 29.3%
Correct puzzles: 44 / 150
Prompt tokens: 114,397
Total cost: $15.44
Benchmark updated: Apr 30, 2026, 8:35 PM
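Strict accuracy means an exact match against the expected UCI move line, not merely a winning move: an alternate mate or a transposed move order scores zero. A minimal sketch of that scoring rule, assuming each puzzle carries a space-separated expected UCI line (the helper names are illustrative, not ChessBench's actual code):

```python
def strict_match(expected: str, predicted: str) -> bool:
    """Exact UCI-line comparison after whitespace normalization.

    Every move must match the expected line, in order; any
    deviation, even a different mate, counts as incorrect.
    """
    return expected.split() == predicted.split()


def strict_accuracy(results) -> float:
    """results: iterable of (expected_line, predicted_line) pairs."""
    results = list(results)
    correct = sum(strict_match(e, p) for e, p in results)
    return correct / len(results)
```

Under this rule, 44 exact matches out of 150 puzzles gives 44 / 150, which rounds to the 29.3% shown above.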
The benchmark's puzzle categories:
Find the tactical line that creates a fork.
Find the line that wins an undefended piece.
Solve a forced mate in one move.
Solve a forced mate in two moves.
Find the tactical line that exploits a pin.
This page gives Google a stable, canonical URL for the model and describes the benchmark result in plain HTML instead of only inside the interactive explorer.
It also links back to the benchmark methodology and exposes model-specific copy, numbers, and sample puzzle outcomes that are relevant to searches like "GPT-5.5 chess benchmark" and "GPT-5.5 tactical reasoning results".
For the full dataset definition and scoring details, visit the ChessBench dataset and methodology page.
These examples add readable, model-specific context beyond a single leaderboard number.
Mate in 1
Expected line: d6h6
Model line: No parseable line returned
Parse status: missing
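A parse status of "missing" means no UCI-shaped move could be extracted from the model's reply at all, as in the sample above. A hedged sketch of how such a parser might classify replies, assuming a simple regex for UCI moves (from-square, to-square, optional promotion piece); the statuses and pattern are illustrative, not the benchmark's exact implementation:

```python
import re

# UCI move syntax: from-square, to-square, optional promotion (q/r/b/n).
UCI_MOVE = re.compile(r"\b[a-h][1-8][a-h][1-8][qrbn]?\b")

def parse_status(reply: str, expected: str) -> str:
    """Classify a free-form model reply against the expected UCI line."""
    moves = UCI_MOVE.findall(reply)
    if not moves:
        return "missing"    # no parseable line returned
    wanted = expected.split()
    if moves[: len(wanted)] == wanted:
        return "match"
    return "mismatch"
```

Only "match" counts toward strict accuracy; both "missing" and "mismatch" score zero for the puzzle.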