AI Reaches Gold-Medal Performance at the International Mathematical Olympiad
AI Breaks New Ground in Mathematics
Artificial intelligence has just made its mark in one of the world’s toughest academic arenas: the International Mathematical Olympiad (IMO). The ultimate stage for pre-university math prodigies under 20, the IMO challenges even the brightest students with problems that twist the mind in knots. This year, things took a futuristic turn: an advanced version of Google DeepMind’s Gemini model scored in the gold-medal range, fully solving five of the six problems on the official 2025 IMO set.
AI Earns Gold Among Math’s Elite
Not only did DeepMind’s model earn 35 of a possible 42 points, a tally usually reserved for the crème de la crème of Olympiad contestants, but it did so under real competition conditions and within the official 4.5-hour time limit. What’s more, professional Olympiad graders found its solutions to be “clear, precise and easy to follow.” That’s no small feat given the creativity, logic, and ingenuity these problems demand.
To put this into perspective, fewer than 10% of IMO competitors typically earn gold, and only a handful notch a perfect score. While the AI didn’t reach the flawless 42 points achieved by five human prodigies this year, matching the gold standard is a clear leap beyond last year, when AI systems fell just short of gold and landed in the silver tier.
What This Means for the Future
AI’s ability to reason through Olympiad-level mathematics hints at even bigger things brewing on the horizon. With advances like this, AI could become an indispensable tool in education, research, and maybe even in solving problems that have stumped mathematicians for generations. The possibilities—from personalized math mentors to behind-the-scenes assistants for theoretical science—are just starting to unfold.
Curious about the details and what DeepMind’s team has to say about the achievement? The full announcement is available on Google DeepMind’s blog.