'A-team' of math proves a critical link between addition and sets

原始链接: https://www.quantamagazine.org/a-team-of-math-proves-a-critical-link-between-addition-and-sets-20231206/

On November 9, 2023, four prominent mathematicians, Timothy Gowers, Ben Green, Freddy Manners, and Terence Tao, posted a proof of Marton's conjecture for binary strings, also known as the polynomial Freiman-Ruzsa conjecture, a major breakthrough in additive combinatorics. The conjecture posits a relationship between set addition and subgroup structure, with relevance to fields such as cryptography and computing. Roughly speaking, a set that fails the conjecture's conditions exhibits a large degree of quasi-randomness, so the result ties together seemingly unrelated notions of structure and randomness. To facilitate verification of the proof, Terence Tao then led an initiative to convert it into a machine-readable format; the successful completion of that effort is a milestone in the history of mathematics research, offering insights and opening further exploration in a rapidly evolving technological landscape.

However, regarding the NP versus NEXP debate, let me clarify a few points. First, I am not sure whether you are joking, but there is an established concept in computational complexity theory, "amplification after selection," that deals specifically with situations where, because of NP-completeness, only a subset of the inputs can be handled efficiently; it involves adding an acceptance-testing stage to a conventional decision algorithm to obtain an amplifier. Such techniques allow previously intractable subsets of candidate inputs to be processed efficiently. Still, given the problems surrounding the selection criteria (combinatorial explosion, for example), I would not recommend relying on promising results for such subsets alone to solve NP-complete problems in practice, unless this is backed by rigorous theoretical analysis. Regarding NEXP, it should be noted that there is a major difference between "deciding" a property using NEXP time resources and being able to search over all possible candidates and verify the results. The former refers only to a binary yes/no judgment on a given input, while the latter requires producing a complete list of all satisfying inputs, which can be an entirely different challenge. Finally, it is true that the assumption that NP searches run longer than polynomial-size proofs may be challenged by the practical limits inherent in computing devices; however, even if a proof's length fell below the polynomial bound, it is unlikely that anyone could comprehend and memorize a symbol chain of any considerable length, especially since the human mind can hold only a limited amount of information at once. Thus, while we cannot yet conclude that no short proofs are out there, the likelihood of discovering such proofs, or even proofs substantially shorter than what current computing technology permits, through an exhaustive search mechanism in the foreseeable future remains infinitesimally low. "Finding short proofs through exhaustive search": algorithmic reasoning satisfies humanity's boundless and insatiable curiosity.

Original text

The lists have fixed lengths, and each bit can be either 0 or 1. You add them together by adding each entry to its counterpart in another list, with the rule that 1 + 1 = 0. So (0, 1, 1, 1, 0) + (1, 1, 1, 1, 1) = (1, 0, 0, 0, 1). PFR is an attempt to figure out what a set can look like if it isn’t quite a subgroup but has some grouplike features.
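A minimal sketch of this addition rule (my illustration, not from the article): because 1 + 1 = 0 entrywise, adding two binary lists is exactly bitwise XOR.

```python
# Entrywise addition of fixed-length binary lists with the rule 1 + 1 = 0,
# which is the same as XOR-ing corresponding entries.
def add_lists(a, b):
    assert len(a) == len(b), "lists must have the same fixed length"
    return tuple(x ^ y for x, y in zip(a, b))

# The example from the text:
print(add_lists((0, 1, 1, 1, 0), (1, 1, 1, 1, 1)))  # -> (1, 0, 0, 0, 1)
```

One consequence worth noting: every list is its own inverse, since adding any list to itself gives the all-zero list.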

To make PFR precise, imagine you have a set of binary lists called A. Now take every pair of elements from A and add them up. The resulting sums make up the sumset of A, called A + A. If the elements of A are chosen randomly, then most of the sums are different from each other. If there are k elements in A, that means there will be around k²/2 elements in the sumset. When k is large — say, 1,000 — k²/2 is a lot bigger than k. But if A is a subgroup, every element of A + A is in A, meaning that A + A is the same size as A itself.
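These size claims are easy to check numerically. In the sketch below (the parameters n and k are illustrative choices of mine), each binary list of length n is packed into an n-bit integer, so the entrywise addition becomes a single XOR:

```python
import random

def sumset(A):
    """All pairwise sums a + b for a, b in A (addition = bitwise XOR)."""
    return {a ^ b for a in A for b in A}

random.seed(0)
n, k = 40, 1000

# A random set: almost all pairwise sums are distinct, so |A + A| ~ k^2 / 2.
A_rand = set()
while len(A_rand) < k:
    A_rand.add(random.getrandbits(n))
S_rand = sumset(A_rand)
print(len(S_rand))                 # roughly k*k/2 = 500,000

# A subgroup: all integers using only the lowest 10 bits. It contains 0 and
# is closed under XOR, so A + A is exactly A.
A_sub = set(range(2 ** 10))
print(sumset(A_sub) == A_sub)      # True
```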

PFR considers sets that are not random, but are not subgroups either. In these sets, the number of elements in A + A is somewhat small — say, 10k, or 100k. “It’s really useful when your notion of structure is much more rich than just being an exact algebraic structure,” said Shachar Lovett, a computer scientist at the University of California, San Diego.
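A hypothetical example of such an in-between set (an illustration of mine, not a construction from the paper): the union of a few shifted copies, or cosets, of a subgroup. Such a set is typically not a subgroup, yet its sumset is only a small constant multiple of its size:

```python
import random

random.seed(1)
H = set(range(2 ** 8))                     # subgroup: the 8-bit integers
shifts = [random.getrandbits(40) for _ in range(3)]
A = {s ^ h for s in shifts for h in H}     # three random cosets of H

# A + A consists of the cosets (s_i ^ s_j) + H: the diagonal i = j gives H
# itself, and the three cross terms give three more cosets, so the sumset
# is about four cosets while A is three.
AA = {a ^ b for a in A for b in A}
print(len(AA) / len(A))                    # small doubling, about 4/3
```

Since the shifts are random 40-bit numbers, A almost certainly does not contain 0 and so is not a subgroup, despite this grouplike smallness of A + A.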

All the sets mathematicians knew of that obeyed this property “are pretty close to actual subgroups,” Tao said. “That was the intuition, that there weren’t any other sort of fake groups lying around.” Freiman had proved a version of this statement in his original work. In 1999, Ruzsa extended Freiman’s theorem from the integers to the setting of binary lists. He proved that when the number of elements in A + A is a constant multiple of the size of A, A is contained within a subgroup.

But Ruzsa’s theorem required the subgroup to be enormous. Marton’s insight was to posit that rather than being contained in one giant subgroup, A could be contained in a polynomial number of cosets of a subgroup that is no bigger than the original set A.

Around the turn of the millennium, Gowers came across Ruzsa’s proofs of Freiman’s theorem while studying a different problem about sets containing strings of evenly spaced numbers. “I needed something like this, kind of getting structural information from much looser information about a certain set,” Gowers said. “I was very fortunate that not that long before, Ruzsa produced this absolutely gorgeous proof.”

Gowers began to work out a potential proof of the polynomial version of the conjecture. His idea was to start with a set A whose sumset was relatively small, then gradually manipulate A into a subgroup. If he could prove that the resulting subgroup was similar to the original set A, he could easily conclude the conjecture was true. Gowers shared his ideas with colleagues, but no one could mold them into a full proof. Though Gowers’ strategy was successful in some cases, in others the manipulations took A further away from the desired conclusion of the polynomial Freiman-Ruzsa conjecture.

Eventually, the field moved on. In 2012, Sanders almost proved PFR. But the number of shifted subgroups he needed was above the polynomial level, though only by a little bit. “Once he did that, it meant that it became a less urgent thing, but still a really nice problem for which I have a great fondness,” Gowers said.

But Gowers’ ideas lived on in his colleagues’ memories and hard drives. “There’s a real idea there,” said Green, who had also been a student of Gowers. “I know a real idea when I see a real idea.” This summer, Green, Manners and Tao finally liberated Gowers’ ideas from their purgatory.

Green, Tao and Manners were 37 pages deep in collaboration before they thought to return to Gowers’ 20-year-old ideas. In a June 23 paper, they had successfully used a concept from probability theory called random variables to probe the structure of sets with small sumsets. By making this switch, the group could manipulate their sets with more finesse. “Dealing with random variables is somehow much less rigid than dealing with sets,” Manners said. With a random variable, “I can tweak one of the probabilities by a small amount, and that might give me a better random variable.”

Using this probabilistic perspective, Green, Manners and Tao could move from working with the number of elements in a set to a measurement of the information contained in a random variable, a quantity called entropy. Entropy was not new to additive combinatorics. In fact, Tao had attempted to popularize the concept in the late 2000s. But nobody had yet tried to use it on the polynomial Freiman-Ruzsa conjecture. Green, Manners and Tao discovered it was powerful. But they still couldn’t prove the conjecture.
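For intuition (an illustrative sketch, not the authors' actual machinery): the Shannon entropy of a random variable that is uniform on a set A equals log2 of the number of elements of A, so entropy generalizes "size of a set" while allowing the continuous tweaks Manners describes:

```python
import math

# Shannon entropy of a distribution given as a list of probabilities:
# H = -sum(p_i * log2(p_i)). For a uniform distribution on k outcomes
# this equals log2(k), recovering the "size" of the underlying set.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

k = 1024
print(entropy([1 / k] * k))        # 10.0, i.e. log2(1024)

# Unlike a set's element count, the entropy can be nudged by a small amount
# by shifting a little probability mass around.
tweaked = [1 / k + 1e-4] * (k // 2) + [1 / k - 1e-4] * (k // 2)
print(entropy(tweaked))            # slightly below 10.0
```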

As the group chewed over their new results, they realized that they’d finally built an environment in which Gowers’ dormant ideas could flourish. If they measured the size of a set using its entropy, rather than its number of elements, the technical details might work out much better. “At some point we realized that these old ideas from Tim from 20 years ago were actually more likely to work than the ones we were trying,” Tao said. “And so we brought Tim back into the project. And then all the pieces fit together surprisingly nicely.”

Still, there were many details to figure out before a proof came together. “It was sort of foolish that all four of us were incredibly busy with other things,” Manners said. “You want to publish this great result and tell the world, but you do actually still have to write your midterms.” Eventually, the group pushed through, and on November 9, they posted their paper. They proved that if A + A is no bigger than k times the size of A, then A can be covered by no more than about k¹² shifts of a subgroup that is no bigger than A. This is a potentially enormous number of shifts. But it is a polynomial, so it doesn’t grow exponentially faster as k gets bigger, as it would if k were in the exponent.
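A quick arithmetic check of why the polynomial bound, however large, is qualitatively tamer than an exponential one:

```python
# A bound like k**12 is gigantic, but doubling k multiplies it by the fixed
# factor 2**12 = 4096. An exponential bound like 2**k is instead *squared*
# every time k doubles, so its growth factor itself explodes.
for k in (10, 20, 40):
    print(f"k={k:2d}  k^12={k**12:.2e}  2^k={2**k:.2e}")

print(20 ** 12 // 10 ** 12)   # 4096: the polynomial's fixed growth factor
print(2 ** 40 // 2 ** 20)     # 1048576: the exponential's factor keeps growing
```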

A few days later, Tao began to formalize the proof. He ran the formalization project collaboratively, using the version-control package GitHub to coordinate contributions from 25 volunteers around the world. They used a tool called Blueprint developed by Patrick Massot, a mathematician at Paris-Saclay University, to organize the efforts to translate from what Tao called “Mathematical English” into computer code. Blueprint can, among other things, create a chart depicting the various logical steps involved in the proof. Once all the bubbles were covered in what Tao called a “lovely shade of green,” the team was done. They discovered a few very minor typos in the paper — in an online message, Tao noted that “the most mathematically interesting portions of the project were relatively straightforward to formalize, but it was the technical ‘obvious’ steps that took the longest.”

Marton died just a few years before her famous conjecture was proved, but the proof echoes her life’s work on entropy and information theory. “Everything works much better when you do it in this entropy framework than in the framework I was trying to do it,” Gowers said. “To me, it still seems somewhat magical.”

