
Google has developed a state-of-the-art AI system for geometry called AlphaGeometry, which can correctly solve 25 of 30 International Mathematical Olympiad (IMO) level geometry problems. That result far exceeds the previous state of the art for automated systems and nears human gold-medalist performance.
Olympiad geometry problems have emerged as a testing ground for advanced AI systems. The previous best system managed to solve only 10 of the 30 problems. For comparison, bronze medalists solve 19.3 of these problems on average, silver medalists 22.9, and gold medalists 25.9.
AlphaGeometry solved 25 within the standard Olympiad time limit, almost equaling the best young mathematicians, according to a paper published in Nature.
The benchmark set was compiled from Olympiad geometry problems spanning 2000–2022.
AlphaGeometry works by combining two systems, thinking “fast and slow.” One is a neural language model that provides fast, “intuitive” ideas. Large language models excel at identifying general patterns and relationships in data and at predicting potentially useful constructs, but they lack the ability to reason rigorously or to explain their decisions.
That’s where a symbolic deduction engine comes in, offering more deliberate and rational decision-making. Symbolic deduction engines are based on formal logic and use clear rules to arrive at conclusions.
Google calls the approach “neuro-symbolic.”

“AI systems often struggle with complex problems in geometry and mathematics due to a lack of reasoning skills and training data. AlphaGeometry’s system combines the predictive power of a neural language model with a rule-bound deduction engine, which work in tandem to find solutions,” Google researchers explained.
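To make that division of labor concrete, here is a minimal, runnable toy of the loop in Python. Everything in it is invented for illustration: the rule set, the fact names, and the propose() function standing in for the language model are hypothetical, not AlphaGeometry’s actual rule base or API.

```python
import random

# Toy neuro-symbolic loop: a rule-based engine deduces all it can ("slow"),
# and a stand-in "language model" proposes an auxiliary construction when
# deduction stalls ("fast"). Rules and fact names are invented examples.

RULES = [
    # (antecedents, consequent): if every antecedent is known, deduce the consequent.
    ({"AB=AC"}, "angle_B=angle_C"),              # isosceles base angles
    ({"AB=AC", "M_midpoint_BC"}, "AM_perp_BC"),  # needs the auxiliary point M
]

def deduce_closure(facts):
    """Apply rules until no new fact appears (the symbolic engine)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

def propose(facts, candidates, rng=random):
    """Stand-in for the language model: suggest one unused construction."""
    remaining = [c for c in candidates if c not in facts]
    return rng.choice(remaining) if remaining else None

def solve(premises, goal, candidates, max_constructions=5):
    facts = set(premises)
    for _ in range(max_constructions):
        facts = deduce_closure(facts)
        if goal in facts:
            return facts  # goal reached; the real engine emits a checkable proof
        construction = propose(facts, candidates)
        if construction is None:
            break
        facts.add(construction)  # adopt the suggested auxiliary point/line
    return None

# Pure deduction from {"AB=AC"} cannot reach AM_perp_BC; the midpoint M
# suggested by the "language model" unlocks the second rule.
print(solve({"AB=AC"}, "AM_perp_BC", candidates=["M_midpoint_BC"]))
```

The point the toy captures is the alternation: deduction is exhaustive and verifiable, while the language model only fills the creative gap of choosing what to construct next.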
They trained the AlphaGeometry model without human demonstrations, using a vast pool of synthetic training data comprising 100 million unique examples.
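The Nature paper describes the data generation roughly as: sample random geometric premises, run the symbolic engine forward to see what follows, and trace each conclusion back to the smallest premise set that still proves it. The sketch below mimics that shape with invented rules and hypothetical fact names; it is an illustration of the idea, not the actual pipeline.

```python
import itertools
import random

# Toy synthetic-data generator: sample premises, deduce a consequence by
# forward chaining, then shrink the premises to a minimal proving subset.
# Rules and fact names are invented for illustration.

RULES = [
    ({"AB=AC"}, "angle_B=angle_C"),
    ({"angle_B=angle_C", "M_midpoint_BC"}, "AM_perp_BC"),
]

def closure(facts):
    facts = set(facts)
    while True:
        new = {c for ants, c in RULES if ants <= facts} - facts
        if not new:
            return facts
        facts |= new

def make_example(universe, rng=random):
    premises = set(rng.sample(sorted(universe), k=2))
    provable = closure(premises) - premises
    if not provable:
        return None  # this random setup proved nothing new; resample
    conclusion = rng.choice(sorted(provable))
    # Traceback: the smallest premise subset that still yields the conclusion
    # becomes the training example (premises -> conclusion).
    for r in range(len(premises) + 1):
        for subset in itertools.combinations(sorted(premises), r):
            if conclusion in closure(set(subset)):
                return set(subset), conclusion

example = None
while example is None:  # resample until a setup yields a derivable fact
    example = make_example({"AB=AC", "AD=AE", "M_midpoint_BC"})
print(example)
```

As the paper describes it, the same traceback also reveals which constructed objects a proof needs but the problem statement doesn’t mention; those become the auxiliary constructions the language model is trained to propose.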
Researchers hope that reaching the milestone of solving Olympiad-level geometry problems will help to create even more advanced and general AI systems.
“With AlphaGeometry, we demonstrate AI’s growing ability to reason logically and to discover and verify new knowledge,” the blog post reads.
Evan Chen, a math coach and former Olympiad gold medalist, was impressed by AlphaGeometry’s verifiable and clean output.
“Past AI solutions to proof-based competition problems have sometimes been hit-or-miss,” Chen said. “One could have imagined a computer program that solved geometry problems by brute-force coordinate systems: think pages and pages of tedious algebra calculation. AlphaGeometry is not that. It uses classical geometry rules with angles and similar triangles just as students do.”
AlphaGeometry wouldn’t win the Olympiad outright, since the competition typically features six problems and usually only two of them focus on geometry. Nevertheless, its geometry capability alone makes it the first AI model that would have cleared the IMO’s bronze-medal threshold in the 2000 and 2015 competitions.
The AlphaGeometry code and model have been made open source as a contribution to further possibilities across mathematics, science, and AI.