
🥇 AI Takes Gold at the International Mathematical Olympiad

An experimental model from Google DeepMind solved five out of six problems at the IMO—the world's most prestigious high school math competition. The AI scored 35 out of 42 points, earning it a gold medal ranking—the top 8–10% of participants.

✍️ Same Rules

Gemini Deep Think, the AI "contestant," explores multiple lines of reasoning in parallel—similar to OpenAI "pro" models.

There were no shortcuts: it received the same problem statements the students did, with no reformatting or simplification. Just like the human competitors, it had 4.5 hours to work and no internet access.

IMO judges graded the work. "Their solutions were astonishing in many respects," said IMO president Gregor Dolinar.

Notably, this was the general-purpose Gemini model, which Google plans to make available to users soon. In 2024, Google tested AlphaProof and AlphaGeometry 2 on the IMO problems. Engineers had to rewrite the problems for those models in a formal programming language. After three days of processing, they solved four problems—a silver-level performance.

🏆 Other Competitors

A new model from Harmonic AI also attempted this year's IMO problems, but the company is keeping quiet until July 28. Organizers asked all AI teams to hold announcements for a week after the closing ceremony to keep the spotlight on the students.

One outlier: OpenAI. Their new "reasoning" model reportedly matched Gemini's performance—but OpenAI went public with their results on July 19.

IMO officials called the move "rude and inappropriate," adding that OpenAI's "gold" status is still in question, since independent judges haven't reviewed its solutions.

#Gemini #OpenAI @hiaimediaen
