An illustration provided by Google.

On Thursday, Google DeepMind announced that AI systems called AlphaProof and AlphaGeometry 2 reportedly solved four out of six problems from this year's International Mathematical Olympiad (IMO), achieving a score equivalent to a silver medal. The tech giant claims this marks the first time an AI has reached this level of performance in the prestigious math competition, but as usual in AI, the claims aren't as clear-cut as they seem.

Further Reading: DeepMind AI rivals the world's smartest high schoolers at geometry

Google says AlphaProof uses reinforcement learning to prove mathematical statements in the formal language called Lean. The system trains itself by generating and verifying millions of proofs, progressively tackling more difficult problems. Meanwhile, AlphaGeometry 2 is described as an upgraded […]
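To make "proving statements in the formal language Lean" concrete, here is a minimal, illustrative Lean 4 sketch, not taken from DeepMind's systems or training data: a simple theorem is written as a formal statement, and Lean's kernel mechanically checks the supplied proof. IMO-level problems are stated the same way; the hard part is finding the proof, which is the search problem AlphaProof's reinforcement learning is aimed at.

    -- Illustrative only: a formally stated, machine-checked theorem in Lean 4.
    -- Commutativity of natural-number addition, proved via a core library lemma.
    theorem add_commutes (m n : Nat) : m + n = n + m :=
      Nat.add_comm m n

    -- The kernel also checks concrete computations stated as propositions.
    example : 2 + 2 = 4 := rfl

Because every proof must pass the kernel's check, a system that generates millions of candidate proofs can verify each one automatically, which is what makes this kind of self-training loop possible.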
Original web page at arstechnica.com