The sums were related to quadratic Gauss sums, named after the famous mathematician Carl Friedrich Gauss. Gauss would pick some prime number p and then add up numbers of the form $latex e^{2\pi i n^2/p}$. Since their inception, quadratic Gauss sums have proven invaluable for tasks such as counting solutions to certain types of equations. “It turns out that Gauss sums are magic, that they just do wonderful things for God knows what reason,” said Jeffrey Hoffstein, a mathematician at Brown University.

In the mid-19th century, the German mathematician Ernst Eduard Kummer played with a close relative of these quadratic Gauss sums, in which the $latex n^2$ in the exponent is replaced by an $latex n^3$. Kummer noticed that these cubic Gauss sums tended to cluster around certain values to a surprising degree — a stark observation that would lead to more than a century of research in number theory.

Unless cubic Gauss sums can be reworked into a simpler formula, their values are difficult to deduce. Lacking such a formula, Kummer began to calculate cubic Gauss sums — and calculate, and calculate. “It was very common for them to do these kinds of heroic calculations by hand back then,” said Matthew Young, a mathematician at Texas A&M University. After plowing through 45 sums, corresponding to the first 45 non-trivial primes (those that leave a remainder of 1 when divided by 3), Kummer finally gave up.

Examining his results, Kummer noticed something interesting. In theory, the sums could be anything between −1 and 1 (after being “normalized” — divided by an appropriate constant). But when he did the calculations, he discovered that they were distributed in a strange way. Half of the results fell between ½ and 1, and only one-sixth of them between −1 and −½. They seemed to cluster around 1.

Kummer presented his observations, along with a conjecture: If you somehow managed to plot all of the infinitely many cubic Gauss sums, you would see most of them between ½ and 1, fewer between −½ and ½, and fewer still between −1 and −½.
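Kummer’s tally is easy to replicate on a modern computer. The sketch below — a hypothetical illustration, not anyone’s actual code from the story — computes the cubic Gauss sum $latex \sum_{n=0}^{p-1} e^{2\pi i n^3/p}$ for each non-trivial prime p (the sum is real for these primes), normalizes it by dividing by $latex 2\sqrt{p}$ so it lands in [−1, 1], and counts how many values fall into each of Kummer’s three ranges:

```python
import math

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def normalized_cubic_gauss_sum(p):
    """Sum of cos(2*pi*n^3/p) over n = 0..p-1, divided by 2*sqrt(p).

    For primes p with remainder 1 mod 3, the full complex sum is real
    and lies in [-2*sqrt(p), 2*sqrt(p)], so this value lies in [-1, 1].
    """
    s = sum(math.cos(2 * math.pi * pow(n, 3, p) / p) for n in range(p))
    return s / (2 * math.sqrt(p))

# "Non-trivial" primes: those leaving remainder 1 when divided by 3.
primes = [p for p in range(2, 1000) if is_prime(p) and p % 3 == 1]

buckets = {"[-1, -1/2)": 0, "[-1/2, 1/2]": 0, "(1/2, 1]": 0}
for p in primes:
    x = normalized_cubic_gauss_sum(p)
    if x < -0.5:
        buckets["[-1, -1/2)"] += 1
    elif x <= 0.5:
        buckets["[-1/2, 1/2]"] += 1
    else:
        buckets["(1/2, 1]"] += 1

print(buckets)
```

Run over the primes below 1,000, the counts tilt toward the top bucket, echoing the clustering near 1 that Kummer observed in his hand calculations.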
Selberg, von Neumann and Goldstine set out to test this on their early computer. Selberg programmed it to compute the cubic Gauss sums for all non-trivial primes less than 10,000 — about 600 sums in all. (Goldstine and von Neumann went on to write the paper; her contributions ended up in an acknowledgment line at the end.) They found that as the primes got larger, the normalized sums became less inclined to cluster near 1.

With convincing evidence that Kummer’s conjecture was wrong, mathematicians began trying to understand cubic Gauss sums in a deeper way, one that went beyond mere calculation. That process has now been completed. In 1978, the mathematician Samuel Patterson ventured a solution to Kummer’s mathematical mystery, but he was unable to prove it. Then, last fall, two mathematicians from the California Institute of Technology proved Patterson’s conjecture, finally providing closure to Kummer’s musings from 1846.

Patterson first became attached to the problem as a graduate student at the University of Cambridge in the 1970s. His conjecture was motivated by what happens when numbers are placed at random anywhere between −1 and 1. If you add up N of these random numbers, the typical size of the sum is $latex \sqrt{N}$ (it could be positive or negative). Likewise, if the cubic Gauss sums were spread uniformly from −1 to 1, you would expect N of them to add up to about $latex \sqrt{N}$.

With this in mind, Patterson added up N cubic Gauss sums, ignoring (for the moment) the requirement to stick to primes. He found that the sum was about $latex N^{5/6}$ — bigger than $latex \sqrt{N}$ (which can be written as $latex N^{1/2}$), but smaller than N. This value implied that the sums behave like random numbers, but with a weak force pushing them toward positive values, known as a bias. As N got larger and larger, the randomness would start to overwhelm the bias, and so if you could somehow look at all the infinitely many cubic Gauss sums at once, they would appear uniformly distributed.
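The $latex \sqrt{N}$ heuristic behind Patterson’s reasoning is easy to check numerically. This sketch — an illustration of the general heuristic, not of Patterson’s actual computation — estimates the typical (root-mean-square) size of a sum of N numbers drawn uniformly from [−1, 1]; for such draws the RMS is exactly $latex \sqrt{N/3}$, which grows like $latex \sqrt{N}$:

```python
import math
import random

random.seed(1)  # fixed seed so the experiment is reproducible

def rms_sum(N, trials=500):
    """Root-mean-square size of the sum of N uniform(-1, 1) draws."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.uniform(-1.0, 1.0) for _ in range(N))
        total += s * s
    return math.sqrt(total / trials)

for N in (100, 400, 1600):
    # Quadrupling N roughly doubles the typical sum (sqrt(N) growth),
    # far below the linear growth that a systematic bias would produce.
    print(N, round(rms_sum(N), 1), round(math.sqrt(N / 3), 1))
```

A sum that instead grew like $latex N^{5/6}$, as Patterson found for cubic Gauss sums, sits strictly between this square-root growth and the linear growth of N numbers all pushed the same way — exactly the signature of randomness plus a weak bias.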
This seemingly explained everything: Kummer’s calculations, which showed a bias, as well as the IAS calculations, which showed the bias fading. But Patterson wasn’t able to carry out the same calculations for primes, so in 1978 he formally wrote it up as a conjecture: If you add up the cubic Gauss sums for primes, you should get the same $latex N^{5/6}$ behavior.

Shortly after Patterson gave a talk on his work on the Kummer problem, a graduate student named Roger Heath-Brown suggested incorporating techniques from prime number theory. The two collaborated and soon published an advance on the problem, but they still could not show that Patterson’s predicted $latex N^{5/6}$ bias was accurate for primes.

In the decades that followed, there was little progress. Finally, at the turn of the millennium, Heath-Brown made another breakthrough, in which a tool he had developed, called the cubic large sieve, played an essential role. To use the cubic large sieve, Heath-Brown employed a series of calculations to relate the sum of cubic Gauss sums to a different sum. With this tool, Heath-Brown was able to show that if you add up the cubic Gauss sums for primes smaller than N, the result cannot be much bigger than $latex N^{5/6}$.

But he thought he could do better — that the sieve itself could be improved. If it could, it would lower the bound exactly to $latex N^{5/6}$, thus proving Patterson’s conjecture. In a short line of text, he wrote down what he thought was the best possible formula for the sieve. Yet even with this new tool in hand, mathematicians could get no further.

Then, two decades later, a chance encounter between the Caltech postdoc Alexander Dunn and his supervisor Maksym Radziwiłł marked the beginning of the end. Before Dunn started his position in September 2020, Radziwiłł had suggested they work together on Patterson’s conjecture. But with the Covid-19 pandemic still raging, research and teaching continued remotely.
Finally, in January 2021, luck — or fate — intervened when the two mathematicians unexpectedly bumped into each other in a Pasadena parking lot. “We had a heart-to-heart and agreed that we should start meeting and talking math,” Dunn wrote in an email. By March, they were hard at work on a proof of Patterson’s conjecture.

“It was exciting to work on, but extremely high risk,” Dunn said. “I mean, I remember coming into my office at 5 a.m. every morning straight for four or five months.”

Dunn and Radziwiłł, like Heath-Brown before them, found the cubic large sieve essential to their proof. But as they used the formula Heath-Brown had written down in his 2000 paper — what he believed to be the best possible sieve, a belief the number theory community had come to share — they realized something was wrong. “We managed to prove that 1 = 2, after very, very complicated work,” said Radziwiłł.

At that point, Radziwiłł was sure the mistake was theirs. “I was kind of convinced that we basically have an error in our proof.” Dunn convinced him otherwise. The cubic large sieve, contrary to expectations, could not be improved.

Armed with the correct understanding of the cubic large sieve, Dunn and Radziwiłł recalibrated their approach to Patterson’s conjecture. This time, they succeeded. “I think the main reason why nobody did this is because that [Heath-Brown] conjecture was misleading everyone,” Radziwiłł said. “I think if I told Heath-Brown that his conjecture is wrong, then he would probably have figured out how to do it.”

Dunn and Radziwiłł posted their paper on September 15, 2021. In the end, their proof relied on the generalized Riemann hypothesis, a famous unproven conjecture in mathematics. But other mathematicians consider this only a minor drawback. “We would like to get rid of that [assumption]. But we’re happy to have a result that’s conditional anyway,” said Heath-Brown, who is now an emeritus professor at the University of Oxford.
For Heath-Brown, Dunn and Radziwiłł’s work is more than just a proof of Patterson’s conjecture. With its unexpected insight into the cubic large sieve, the paper brought a surprise ending to a story he had been involved in for decades. “I’m glad I didn’t actually write in my paper, ‘I’m sure one can get rid of this,’” he said, referring to the part of the sieve that Dunn and Radziwiłł discovered was necessary. “I just said, ‘It would be nice if someone could get rid of it. It seems likely that you can.’ And I was wrong — not for the first time.”