Experts say it is a big step towards machines that can reason more like humans. Geometry, and maths in general, have long been a headache for AI boffins. Compared with text-based AI models, there is far less training data for maths, because it is written in symbols and governed by strict rules rather than ordinary language.
Thang Luong, one of the clever clogs behind the research, which is published in Nature today, said that solving maths problems requires logical reasoning, something most current AI models are rubbish at. That, says Luong, is why maths is a good yardstick for measuring how smart an AI really is.
DeepMind's program, called AlphaGeometry, pairs a language model with a symbolic engine, a type of AI that reasons with explicit symbols and rules. Language models are good at spotting patterns and predicting what comes next, but on their own they are not rigorous enough for maths. The symbolic engine, built on strict logic, keeps the language model's guesses grounded. Working together, the creative half and the logical half can crack tough geometry problems, much as humans do, combining what they already know with fresh ideas.
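To make the division of labour concrete, here is a minimal sketch in Python of how such a neuro-symbolic loop could work. Everything in it is an illustrative assumption: the function names `symbolic_deduce` and `language_model_suggest` are hypothetical placeholders, not DeepMind's actual code or API.

```python
# Minimal illustrative sketch of a neuro-symbolic loop in the spirit of
# AlphaGeometry. All names below are hypothetical placeholders, not
# DeepMind's actual code.

def solve(premises, goal, max_rounds=10):
    """Alternate strict symbolic deduction with LM-proposed constructions."""
    facts = set(premises)
    for _ in range(max_rounds):
        # Logical half: derive everything that follows from fixed rules.
        facts |= symbolic_deduce(facts)
        if goal in facts:
            return facts  # proof closed by pure deduction
        # Creative half: when deduction stalls, ask the language model to
        # propose an auxiliary construction (e.g. "add the midpoint of AB").
        suggestion = language_model_suggest(facts, goal)
        if suggestion is None:
            return None  # the model is out of ideas
        facts.add(suggestion)
    return None  # no proof found within the round budget


def symbolic_deduce(facts):
    """Placeholder rule engine: e.g. from 'AB == BC' and 'BC == CD',
    derive 'AB == CD' by transitivity. Returns newly derived facts."""
    return set()


def language_model_suggest(facts, goal):
    """Placeholder for a trained model that proposes one plausible
    auxiliary point or line, or None if it has nothing to offer."""
    return None
```

The key design point is that the language model never writes the proof itself; every step that ends up in the final proof has been checked by the rule engine.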
DeepMind says it tested AlphaGeometry on 30 geometry problems at the same level of difficulty as those set at the International Mathematical Olympiad, a contest for the world's best young mathematicians. It solved 25 within the time limit. The previous best system, based on a method devised by the Chinese mathematician Wen-Tsün Wu in 1978, managed only 10.
Floris van Doorn, a maths professor at the University of Bonn who was not involved in the research, said: "This is amazing... I thought this would take much longer." DeepMind says the system demonstrates AI's ability to reason and to discover new mathematical knowledge.