Google's data centers have gained 0.7% more operational capacity thanks to AlphaEvolve, an AI agent that iteratively refines code for optimal performance. This same system also advanced a long-standing geometry challenge by finding a new way to pack 11 identical hexagons. Those are not the only advances. AlphaEvolve also discovered superior algorithms for matrix multiplication and improved bounds on several decades-old mathematical conjectures. While some of these gains are numerically small, they preview a future where machine-tuned algorithms compound into major scientific and commercial payoffs.
The hexagon packing achievement, as illustrated above, is an example of how AlphaEvolve surpassed dedicated human efforts. For the challenge of packing 11 unit regular hexagons into a larger bounding hexagon, the previous best known construction, attributed to independent geometer Maurizio Morandi in 2015, required a bounding side length of approximately 3.943 units for its aligned honeycomb structure. AlphaEvolve refined this to a side length of 3.931 units, a reduction of about 0.3% in edge length. That translates to a roughly 0.6% decrease in the bounding area. The AlphaEvolve method accomplished this by tilting each inner hexagon at varying angles instead of a uniform, flush alignment. That off-grid twist demonstrates the agent's capacity for novel design beyond simply remixing training data, underscoring why even fractional geometric gains matter for applications like wafer layouts, battery anodes, and other real-world packing problems.
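The two percentages follow directly from the side lengths quoted above, since area scales with the square of the side. A quick sketch (using the rounded figures from the text) confirms the arithmetic and also gives the resulting packing fraction:

```python
import math

prev_side, new_side = 3.943, 3.931  # bounding hexagon side lengths from the text

edge_reduction = 1 - new_side / prev_side          # ~0.30% shorter edge
area_reduction = 1 - (new_side / prev_side) ** 2   # area scales with side^2: ~0.61%

# Packing fraction: 11 unit hexagons inside the bounding hexagon, where a
# regular hexagon of side s has area (3 * sqrt(3) / 2) * s^2.
hex_area = lambda s: 1.5 * math.sqrt(3) * s * s
fraction = 11 * hex_area(1) / hex_area(new_side)   # ~71% of the bound is filled

print(f"edge: -{edge_reduction:.2%}, area: -{area_reduction:.2%}, fill: {fraction:.1%}")
```

Because the hexagon area formula cancels in the ratio, the packing fraction reduces to 11 / side², which is why even a third-of-a-percent trim to the side length is a measurable win.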
"AlphaEvolve discovered novel, provably correct algorithms that surpass state-of-the-art solutions on a spectrum of problems in mathematics and computer science..."
AlphaEvolve's knack for fresh ideas isn't limited to creative hexagon packing. The white paper itself notes that the agent "discovered novel, provably correct algorithms that surpass state-of-the-art solutions on a spectrum of problems in mathematics and computer science," and the evidence backs that up. Take the Erdős minimum overlap problem, a number-theory puzzle first posed in 1955. The upper-bound record hadn't budged since 2016, yet AlphaEvolve shaved it from roughly 0.380927 to 0.380924 with a new step-function construction. That fourth-decimal-place nudge may look trivial, but in a domain where progress can be glacial, it's meaningful.
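For intuition about what that constant measures, here is a brute-force sketch of the finite version of the problem for tiny n, using the standard definition. (AlphaEvolve attacked the asymptotic constant with a step-function construction, not brute force; this is only to make the quantity concrete.)

```python
from itertools import combinations
from collections import Counter

def min_overlap(n):
    """M(n): split {1, ..., 2n} into halves A and B of size n, count the
    pairs (a, b) with a - b = k for each shift k, and minimize the
    worst-case count over all splits. The Erdős minimum overlap constant
    is the limiting value of M(n) / n as n grows."""
    universe = set(range(1, 2 * n + 1))
    best = float("inf")
    for A in combinations(sorted(universe), n):
        B = universe - set(A)
        overlap = max(Counter(a - b for a in A for b in B).values())
        best = min(best, overlap)
    return best

for n in range(1, 6):
    print(n, min_overlap(n), min_overlap(n) / n)
```

The ratio M(n)/n converges only slowly toward the ~0.38 regime, which is part of why the asymptotic bound has to be attacked analytically rather than by enumeration.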
"Notably, AlphaEvolve developed a search algorithm that found a procedure to multiply two 4 × 4 complex-valued matrices using 48 scalar multiplications; offering the first improvement, after 56 years, over Strassen's algorithm in this setting."
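For context on that 56-year benchmark: Strassen's 1969 scheme multiplies two 2 × 2 matrices with 7 scalar multiplications instead of the naive 8, and applying it recursively to 4 × 4 matrices (treated as 2 × 2 blocks) gives 7 × 7 = 49. AlphaEvolve's 48-multiplication procedure edges under that in the complex-valued case. A minimal sketch of the classical 2 × 2 scheme (the standard formulas, not AlphaEvolve's new construction):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with Strassen's 7 products (naive method uses 8)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The entries a through h can themselves be matrix blocks, which is what makes the 7-multiplication count compound recursively and drives the asymptotic speedup.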
And this capacity for fresh insight isn't just for making calculations faster. AlphaEvolve also made headway on a highly abstract mathematical puzzle known as an uncertainty inequality.
Imagine you have a signal, like a sound wave. An uncertainty principle in mathematics (similar in spirit to the famous one in physics) states that you can't know both how precisely located the signal is in time and how precisely its frequency components are defined, beyond a certain limit. There's a fundamental trade-off.
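That trade-off is easy to see numerically. The toy sketch below, a generic illustration rather than the specific inequality AlphaEvolve worked on, measures the RMS width of a Gaussian pulse and of its power spectrum: stretching the pulse in time squeezes it in frequency, and their product stays roughly fixed.

```python
import numpy as np

def widths(sigma, n=512):
    """RMS width of a Gaussian pulse in time and of its power spectrum in frequency."""
    t = np.arange(n) - n // 2
    pulse = np.exp(-t**2 / (2 * sigma**2))

    def rms(weights, axis):
        p = weights / weights.sum()
        mean = (axis * p).sum()
        return np.sqrt(((axis - mean) ** 2 * p).sum())

    spectrum = np.abs(np.fft.fftshift(np.fft.fft(pulse))) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(n))
    return rms(pulse**2, t), rms(spectrum, freqs)

for sigma in (4.0, 16.0):
    dt, df = widths(sigma)
    print(f"sigma={sigma:5.1f}  time width={dt:6.2f}  freq width={df:.4f}")
```

Quadrupling the pulse's duration cuts its frequency spread by roughly the same factor; no choice of signal can make both widths small at once, which is exactly the limit the uncertainty constant quantifies.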
The image below shows the prior state-of-the-art (SOTA) in red: the previous best-known mathematical function used to set a limit on this uncertainty. AlphaEvolve, however, discovered a new, slightly better function, shown as the green curve.
This new function allowed mathematicians to slightly tighten the known bounds for this uncertainty constant. The result pushed the upper limit from approximately 0.3523 down to 0.3521. While a change of 0.0002 might seem minuscule, in theoretical mathematics, refining the boundaries of what's possible, even by tiny increments, is still significant.
"AlphaEvolve orchestrates an autonomous pipeline of LLMs, whose task is to improve an algorithm by making direct changes to the code... potentially leading to new scientific and practical discoveries."