Benchmark page
GLM-5 scored 12.0% strict accuracy on ChessBench, solving 18 of a fixed set of 150 Lichess tactics puzzles; an answer counts only if it exactly matches the expected UCI move line.
Strict accuracy
12.0%
Correct puzzles
18 / 150
Prompt tokens
126,345
Total cost
$1.87
Benchmark updated
Mar 26, 2026, 7:53 AM
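Strict accuracy is a plain pass/fail ratio over the fixed puzzle set. A minimal sketch of that arithmetic (the function name is illustrative, not part of the benchmark's actual harness):

```python
def strict_accuracy(results):
    """Percentage of puzzles whose model line exactly matched
    the expected UCI line (results is a list of pass/fail booleans)."""
    return 100.0 * sum(results) / len(results)

# 18 exact matches out of 150 fixed puzzles -> 12.0%
outcomes = [True] * 18 + [False] * 132
print(f"{strict_accuracy(outcomes):.1f}%")  # prints "12.0%"
```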
Find the tactical line that creates a fork.
Find the line that wins an undefended piece.
Solve a forced mate in one move.
Solve a forced mate in two moves.
Find the tactical line that exploits a pin.
This page gives Google a stable, canonical URL for the model and describes the benchmark result in plain HTML instead of only inside the interactive explorer.
It also links back to the benchmark methodology and exposes model-specific copy, numbers, and sample puzzle outcomes that are relevant to searches like "GLM-5 chess benchmark" and "GLM-5 tactical reasoning results".
For the full dataset definition and scoring details, visit the ChessBench dataset and methodology page.
These examples add readable, model-specific context beyond a single leaderboard number.
Mate in 1
Expected line: b5e8
Model line: No parseable line returned
Parse status: missing
Mate in 1
Expected line: d5f7
Model line: No parseable line returned
Parse status: missing
Mate in 1
Expected line: f6f7
Model line: No parseable line returned
Parse status: missing
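A "Parse status: missing" outcome arises when no UCI-shaped move can be extracted from the model's reply at all. A minimal sketch of how such a classifier might work; the regex and function names are assumptions for illustration, not the benchmark's actual parser:

```python
import re

# UCI moves look like 'b5e8' or 'e7e8q' (optional promotion suffix).
UCI_RE = re.compile(r"\b([a-h][1-8][a-h][1-8][qrbn]?)\b")

def extract_uci_line(text):
    """Join all UCI-shaped tokens found in the reply; None if none."""
    moves = UCI_RE.findall(text.lower())
    return " ".join(moves) if moves else None

def parse_status(expected, reply):
    """Classify a sample as 'missing', 'correct', or 'wrong'."""
    line = extract_uci_line(reply)
    if line is None:
        return "missing"  # no parseable line returned
    return "correct" if line == expected else "wrong"

print(parse_status("b5e8", "I would castle here."))  # missing
print(parse_status("d5f7", "The mate is d5f7."))     # correct
```

Under this scheme the three samples above would all classify as "missing", since none of the replies contained a parseable move.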