Using calculus to do number theory

Original link: https://hidden-phenomena.com/articles/hensels

## Calculus and number theory: an overview

Kurt Hensel uncovered a surprising connection between calculus and number theory, two fields that appear quite different. He showed that approximation techniques rooted in calculus can be applied powerfully to problems about integer solutions of polynomial equations, particularly in modular arithmetic. The worked example centers on finding all integers *x* satisfying *x³ - 17x² + 12x + 16 ≡ 0 (mod 3000)*. By the Chinese remainder theorem, this reduces to separate congruences modulo 8, 3, and 125. Solving modulo 8 and modulo 3 is straightforward, but modulo 125 is harder. Hensel's key insight, inspired by Newton's method, is to iteratively refine approximate solutions: if a value *x* almost satisfies a congruence, a small adjustment moves it closer to a genuine solution. The adjustment is a derivative-like correction, mirroring the approximation techniques of calculus. This procedure, known as Hensel's lemma, reduces solving modulo *p^e* to solving modulo *p*. Though seemingly narrow, the technique has far-reaching consequences, connecting to the ambitious Langlands program, a modern research area exploring the relationship between the Galois groups of polynomials and the existence of solutions in modular arithmetic.

## Solving number theory problems with calculus-inspired techniques: a summary

A recent Hacker News discussion centered on a blog post exploring how techniques resembling calculus (specifically, formal differentiation) can solve number theory problems. While not calculus in the strict sense (more precisely, it uses differentiation), the post shows how to apply the "formal derivative", essentially a rewriting rule on polynomials. The core idea relates to Hensel's lemma, a calculus-inspired algebraic technique, in particular Newton's method in the p-adic numbers. Commenters debated whether this belongs to analytic or algebraic number theory, with some highlighting connections to complex analysis and the Riemann zeta function. The post illustrates how to find solutions to modular equations, and explains why, thanks to congruence properties and the binomial theorem, it suffices to check the integers 0 through n-1. Many commenters offered alternative explanations and pointed out a small error in the original post. The discussion also touched on the broader context of p-adic numbers and the Langlands program.

Original article
Using calculus to do number theory

February 2, 2026

Calculus is the mathematics of approximations of continuous quantities; number theory asks exact problems about discrete quantities. These seem, therefore, to be unrelated studies.

Yet Kurt Hensel made a remarkable discovery: using ideas from calculus, one can better understand number theory problems. We're going to explain Hensel's wonderful discovery, by trying to solve polynomial equations in modular arithmetic. More particularly, we are going to try and find all integers \(x\) such that \[x^3 - 17x^2 + 12x + 16 \equiv 0 \pmod{3000}.\]

While this sort of problem -- solving a polynomial equation in modular arithmetic -- might seem a bit strange, it actually leads directly to the Langlands program in modern number theory, as we'll briefly discuss at the end.

A first reduction

The prime factorization of 3000 is \[3000 = 2^3 \times 3 \times 5^3.\]

The Chinese remainder theorem tells us that solving \[x^3 -17x^2 + 12x + 16 \equiv 0 \pmod{3000}\] is, therefore, equivalent to solving the three different congruences \[x^3 - 17x^2 + 12x + 16 \equiv 0\pmod{2^3},\] \[x^3 - 17x^2 + 12x + 16 \equiv 0\pmod{3},\] \[x^3 - 17x^2 + 12x + 16 \equiv 0\pmod{125}.\]

It might seem like we've made our problem harder, not easier: we used to have one equation, but now we have three! However, there is a sense in which our life has been made simpler: to solve \[x^3 - 17x^2 + 12x + 16 \equiv 0 \pmod{8},\] for example, we only need to plug in \(x = 0, 1, 2, 3, 4, 5, 6,\) and \(7.\) This is only eight values to try. If we simplify our polynomial to \[x^3 -17x^2 + 12x + 16 \equiv x^3 - x^2 + 4x \pmod{8},\] and plug in each of the possible values, we quickly find that the only solutions to our equation modulo 8 are \[x \equiv 0, 4, 5 \pmod{8}.\]
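This exhaustive check is easy to carry out by machine. Here is a minimal Python sketch (the helper name `f` is ours for the cubic, not something from the article):

```python
# The cubic from the article.
def f(x):
    return x**3 - 17*x**2 + 12*x + 16

# Only eight residues to try modulo 8.
solutions_mod_8 = [x for x in range(8) if f(x) % 8 == 0]
print(solutions_mod_8)  # [0, 4, 5]
```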

Similarly, we can plug in \(x= 0, 1, 2\) to find that the only solution to \[x^3 - 17x^2 + 12x + 16 \equiv 0\pmod{3}\] is \(x \equiv 1 \pmod{3}.\)

Just by plugging in a few values, we managed to solve 2 out of 3 of our equations. Unfortunately, plugging in every value from \(x = 0\) to \(x = 124\) to solve \[x^3 - 17x^2 + 12x + 16 \equiv 0\pmod{125},\] seems like it would take much too long. Are there any quicker ways to proceed?

Working modulo 5

Recall \(125 = 5^3.\) So, if \(x\) is an integer making \[x^3 - 17x^2 + 12x + 16 \equiv 0 \pmod{125},\] then we also have \[ x^3 - 17x^2 + 12x + 16 \equiv 0 \pmod{5}.\]

This equation is a little easier to solve: we can plug in \(x = 0, 1, 2, 3,\) and 4 to find that the only solution is \[x \equiv 2 \pmod{5}.\]
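The same brute-force search handles the moduli 3 and 5 (again with `f` as a hypothetical helper for the cubic):

```python
def f(x):
    return x**3 - 17*x**2 + 12*x + 16

# Three residues to try modulo 3, five modulo 5.
sols_mod_3 = [x for x in range(3) if f(x) % 3 == 0]
sols_mod_5 = [x for x in range(5) if f(x) % 5 == 0]
print(sols_mod_3, sols_mod_5)  # [1] [2]
```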

When we plug in \(x = 2\) to our polynomial, we get \[2^3 - 17 \cdot 2^2 + 12 \cdot 2 + 16 = -20.\]

This number is indeed divisible by 5... but it is not a multiple of \(125.\) However, Hensel thought that \(-20\) is close to being zero modulo 125, because \(125 = 5^3,\) and \(-20\) is at least divisible by 5 once, even if it isn't divisible by 5 three times.

On its own, this thought would probably not be so helpful. What made this idea so remarkable was what Hensel did next. Before we can explain that, though, we need to take a brief digression to recall an insight from calculus.

Using derivatives to solve polynomials

Newton showed us how to use calculus to take an approximate solution to an equation, and generate a slightly better approximation to a solution. It's not surprising that calculus can do this: calculus is the mathematics of approximation.

Let's recall his idea for classical functions. Suppose you have a function \(g(x),\) and you know that \[g(4) = 0.1.\] The graph of our \(g(x)\) looks like this.

[Figure: graph of \(g(x)\) near \(x = 4\)]

\(0.1\) is pretty small, so we might guess that some solution to \(g(x) = 0\) lies pretty close to \(x = 4.\) Let's try looking for a solution to \(g(4 + \epsilon) = 0,\) where \(\epsilon\) is some small number. Newton told us that \[g(4 + \epsilon) \approx g(4) + g'(4) \cdot \epsilon.\] Here, \(g'(4) = 2,\) so we can approximate \[g(4 + \epsilon) \approx 0.1 + 2\epsilon,\] and so \(g(4+\epsilon) = 0\) has an approximate solution of \(\epsilon = -0.05.\) This number \(4 - 0.05 = 3.95\) is a slightly better approximation to the actual solution of \(g(x) = 0\) than \(x = 4\) was; if we wanted an even better solution, we could start at \(x = 3.95\) and repeat this same first derivative approximation trick. In this way, Newton was able to find very good approximations to solutions to all sorts of crazy equations.
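Newton's iteration is short to write down. Since the article's \(g\) is only sketched, the example below uses a made-up stand-in, \(g(x) = x^2 - 2,\) whose positive root is \(\sqrt{2}\):

```python
# One Newton step replaces x by x - g(x)/g'(x);
# iterating rapidly improves the approximation.
def newton(g, dg, x0, steps=6):
    x = x0
    for _ in range(steps):
        x = x - g(x) / dg(x)
    return x

# Stand-in example (not from the article): a root of x^2 - 2.
root = newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0)
print(root)  # ~1.4142135623730951
```

Six steps from \(x_0 = 1\) already agree with \(\sqrt{2}\) to machine precision, which is the same "each step gets you closer" phenomenon Hensel exploits below.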

Back to modular arithmetic

Let's now return to Hensel. We saw that \(x \equiv 2 \pmod{5}\) is a solution to \[x^3 - 17x^2 + 12x + 16 \equiv 0\pmod{5}.\] Set \[f(x) = x^3 - 17x^2 + 12x + 16.\] Then \(f(2) = -20\) is divisible by 5, but it is not divisible by 125. However, Hensel thought that being divisible by 5 is `close' to being divisible by 125. So, he decided to do something which might seem a little strange: apply Newton's method to try and improve 2, to a number which is closer to being a solution to \(f(x) \equiv 0 \pmod{125}.\)

We know that any integer \(x\) obeying \(f(x) \equiv 0 \pmod{125}\) must obey \(x \equiv 2 \pmod{5}.\) So, we can write \[x = 2 + 5 \cdot n,\] for some \(n.\)

Hensel noticed a surprising identity: \[f(2 + 5 \cdot n) \equiv f(2) + f'(2) \cdot 5n \pmod{25}.\] Note how similar this is to the approximation used in calculus!

But why is this approximation true? To get some idea as to why, note that \[(x + 5n)^3 = \binom{3}{0}x^3 + \binom{3}{1} \cdot x^2 \cdot (5n) + \binom{3}{2} \cdot x \cdot (5n)^2 + \binom{3}{3} \cdot (5n)^3.\] Note, modulo 25, that most of the terms in this binomial expansion become 0, so that \[(x+5n)^3 \equiv x^3 + 3x^2 \cdot (5n) \pmod{25}.\] As \(\frac{d}{dx} x^3 = 3x^2,\) this approximation looks a lot like the approximation \[f(2 + 5n) \equiv f(2) + f'(2) \cdot 5n \pmod{25},\] and indeed the same binomial theorem trick shows that this approximation will hold.
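One can also confirm the linearization numerically. The sketch below (with hypothetical helper names `f` and `df` for the cubic and its formal derivative) checks it for every residue \(n\):

```python
def f(x):
    return x**3 - 17*x**2 + 12*x + 16

def df(x):  # formal derivative of f
    return 3*x**2 - 34*x + 12

# f(2 + 5n) ≡ f(2) + f'(2)·5n (mod 25): every dropped term
# carries a factor of (5n)^2 or (5n)^3, hence is 0 mod 25.
ok = all((f(2 + 5*n) - f(2) - df(2)*5*n) % 25 == 0 for n in range(25))
print(ok)  # True
```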

Now, using Newton's idea, let's try solving \[f(2) + f'(2) \cdot 5n \equiv 0 \pmod{25}.\] We know \(f(2) = -20,\) and we can compute \[f'(x) = 3x^2 - 34x + 12,\] so that \[f'(2) = 12 - 68 + 12 \equiv 6 \pmod{25}.\] Thus \[f(2 + 5n) \equiv f(2) + f'(2) \cdot 5n \equiv -20 + 30n \pmod{25},\] or \[f(2+5n) \equiv 5n + 5 \pmod{25}.\] Thus \[f(x) \equiv 0\pmod{25}\] can be solved by finding solutions to \(5n + 5 \equiv 0\pmod{25}.\) This rearranges to \(5n \equiv 20 \pmod{25},\) that is, \(n \equiv 4 \pmod{5},\) so that \[x \equiv 2 + 5n \equiv 22 \pmod{25}\] is the solution to \(f(x) \equiv 0\pmod{25}.\)
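This lifting step can be written as a short computation. A sketch, with our hypothetical helpers `f` and `df`; `pow(.., -1, 5)` computes a modular inverse (Python 3.8+):

```python
def f(x):
    return x**3 - 17*x**2 + 12*x + 16

def df(x):
    return 3*x**2 - 34*x + 12

# Solve f(2) + f'(2)·5n ≡ 0 (mod 25). Dividing through by 5,
# this becomes n ≡ -(f(2)/5) · f'(2)^(-1) (mod 5).
x = 2
n = (-(f(x) // 5) * pow(df(x), -1, 5)) % 5
lifted = x + 5*n
print(lifted, f(lifted) % 25)  # 22 0
```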

We're now a little bit closer to solving our problem -- all we must do now is upgrade from a solution modulo 25 to a solution modulo 125. To do this, we use the same approximation trick again: \[f(22 + 25n) \equiv f(22) + f'(22) \cdot 25n \pmod{125}.\] Observe that \[f(22) \equiv 75 \pmod{125},\] \[f'(22) \equiv 91 \pmod{125},\] so that we're solving \[75 + 91 \cdot 25n \equiv 0 \pmod{125}.\] Since \(91 \equiv 1 \pmod{5},\) we have \(91 \cdot 25n \equiv 25n \pmod{125},\) and the congruence rearranges to \[25n \equiv 50 \pmod{125},\] so that \[x \equiv 22 + 25n \equiv 72 \pmod{125}\] is the final solution to our cubic equation modulo 125. And so, using tools from calculus, Hensel was able to solve a problem in number theory!
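The whole staircase, from a root mod 5 up to a root mod 125 and then recombined with the other moduli, fits in a few lines. This is a sketch under the simple-root assumption (\(f'(x)\) invertible mod \(p\)); `hensel_lift` is our name for it, not a standard API:

```python
def f(x):
    return x**3 - 17*x**2 + 12*x + 16

def df(x):
    return 3*x**2 - 34*x + 12

def hensel_lift(x, p, e):
    """Lift a simple root of f mod p to a root mod p**e."""
    pk = p
    for _ in range(e - 1):
        n = (-(f(x) // pk) * pow(df(x), -1, p)) % p
        x += pk * n
        pk *= p
    return x

r125 = hensel_lift(2, 5, 3)
print(r125)  # 72

# Sanity check of the full problem: all solutions modulo 3000.
sols = [x for x in range(3000) if f(x) % 3000 == 0]
print(sols)  # [1072, 2197, 2572]
```

By the Chinese remainder theorem, each solution modulo 3000 corresponds to a choice of residue modulo 8, modulo 3, and modulo 125, which the brute-force check at the end confirms.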

Appendix: The Langlands program

It turns out that the problem of solving polynomial equations in modular arithmetic leads to some very deep mathematics. Fix a polynomial \(f(x).\) By the Chinese remainder theorem, solving \[f(x) \equiv 0 \pmod{n}\] is equivalent to solving \(f(x) \equiv 0 \pmod{p^e},\) for each prime power \(p^e\) occurring in the prime factorization of \(n\) (the same way we solved a cubic equation modulo 3000 by breaking it up into equations modulo \(2^3,\) modulo \(3,\) and modulo \(5^3\)).

The procedure we described above is often called Hensel's lemma, and lets you reduce the problem of solving \(f(x) \equiv 0 \pmod{p^e}\) to the problem of solving \(f(x) \equiv 0 \pmod{p}.\) (There are a few exceptions to this rule, related to the conductor and discriminant of \(f.\))

Thus one is left with a natural question: for which primes \(p\) does \(f(x) \equiv 0 \pmod{p}\) have a solution? It turns out that answering this question depends a lot on the Galois group of \(f(x).\) When \(f(x)\) has an `abelian' Galois group, class field theory, a subject developed by Artin, Tate, and others, allows one to understand this problem. But when \(f(x)\) has a `non-abelian' Galois group, the situation is much more subtle. Robert Langlands was the first person to start understanding what was happening, and the famous Langlands program is devoted to fully understanding this question.

We encourage the interested reader to brave Langlands' wonderful essay Representation Theory: Its Rise and Its Role in Number Theory to learn more. In learning math, it is always good to read the masters.
