Data Science Collective

Advice, insights, and ideas from the Medium data science community

Reproducing Google DeepMind IMO 2025 Results Using Gemini Pro: A Prompting Strategy Analysis

What if we could reproduce the IMO 2025 results using Gemini Pro with just prompting strategies: no web tools, no sophisticated RL methods?

8 min read · Jul 26, 2025


Image from Google DeepMind

I think we’ve all heard the news about how Google DeepMind conquered the 66th International Mathematical Olympiad (IMO) in Australia using its Gemini Deep Think AI model. The system solved 5 out of 6 problems, earning 35 out of 45 points and securing a gold medal. This sent a massive shockwave through the worlds of mathematics and AI. The event featured six notoriously difficult problems in algebra, combinatorics, geometry, and number theory.

They used a souped-up version of Gemini 2.5 Pro operating in “Deep Think” mode, which adds enhanced reasoning capabilities such as parallel thinking, multi-step reinforcement learning, and exposure to a curated corpus of high-quality mathematical proofs.

But… what if we could reproduce these results using the standard Gemini Pro, without the fancy multi-step reinforcement learning, by relying solely on prompting strategies? We’ll explore Chain-of-Thought (CoT), Role-Based, and a Hybrid approach that combines both CoT and Role-Based prompting (a rough sketch of all three follows below). For the sake of simplicity, we’ll…
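To make the three strategies concrete, here is a minimal sketch of how the prompt templates might be wired up against the Gemini API. It assumes the google-generativeai Python SDK and that a model named gemini-2.5-pro is available to your API key; the prompt wording is purely illustrative, not the exact templates evaluated in the experiments.

```python
# Minimal sketch of the three prompting strategies, assuming the
# google-generativeai SDK; prompt wording is illustrative only.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes you have an API key
model = genai.GenerativeModel("gemini-2.5-pro")  # assumed model name

ROLE = (
    "You are an IMO gold medalist. Write rigorous, fully justified proofs, "
    "stating every lemma you rely on."
)
COT = (
    "Think step by step. Lay out all intermediate reasoning before "
    "presenting the final proof."
)

def build_prompt(problem: str, strategy: str) -> str:
    """Assemble the prompt for one of the three strategies."""
    if strategy == "cot":
        return f"{COT}\n\nProblem:\n{problem}"
    if strategy == "role":
        return f"{ROLE}\n\nProblem:\n{problem}"
    if strategy == "hybrid":  # Role-Based persona + CoT instruction
        return f"{ROLE}\n\n{COT}\n\nProblem:\n{problem}"
    raise ValueError(f"unknown strategy: {strategy}")

problem = "Prove that ... (IMO 2025 problem statement goes here)"
response = model.generate_content(build_prompt(problem, "hybrid"))
print(response.text)
```

Swapping the strategy argument then lets you run all three variants over the same problem set and compare the resulting proofs side by side.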



Written by Marc Lopez

Researcher, loves to write, open to collaborative work
