Why don't you use dependent types?

Original link: https://lawrencecpaulson.github.io//2025/11/02/Why-not-dependent.html

## The evolution of a proof assistant: from dependent types to higher-order logic

This article recounts the author's decades of work on the Isabelle proof assistant and his evolving views on formal systems for mathematics. He began by exploring dependent type theory, starting with his 1977 encounter with N. G. de Bruijn's AUTOMATH system. Though impressed, he found that AUTOMATH lacked the direct Curry–Howard correspondence of stronger type theories. Later, Martin-Löf type theory drew him in, even forming the basis of the first version of Isabelle (Isabelle/CTT). Doctrinal disputes and the problem of intensional equality, however, prompted a change of direction, and he observed that the calculus of constructions (used by Lean) raised similar worries about the stability of its foundations. In the end the author championed a generic approach that allows Isabelle to accommodate a variety of logics. This led to the dominance of Isabelle/HOL, built on Church's simple type theory plus axiomatic type classes. Funded by the ALEXANDRIA project, recent work has shown that higher-order logic can formalise extremely sophisticated mathematics, exceeding expectations and silencing critics. Despite the success of dependent type theories such as Lean's, the author remains committed to higher-order logic, citing performance problems and complexity in those systems. He concludes that choosing one formalism and pushing it to its limits is a legitimate research path, and that Isabelle/HOL continues to demonstrate its power.

## Dependent types: a pragmatic discussion

This Hacker News discussion centres on an article questioning the widespread use of dependent types in formal verification and programming. The article acknowledges their power, for example in giving precise types to data structures such as matrices, but its core argument is that they are not *required* for building robust systems. Many commenters highlighted the trade-offs: dependent types introduce complexity and debugging challenges and can slow development. Their benefits show most clearly when intricate invariants are involved, while for many tasks the simpler approach taken in Isabelle/HOL (a proof assistant), relying on automation, extensive libraries, and readable proofs, is more practical. A key theme was developer experience: the learning curve and potentially baffling type errors may outweigh the gains. Some discussed alternatives, such as type annotations in Python with libraries like `torchtyping`, or compile-time metaprogramming in languages such as C++ and Rust to achieve similar safety guarantees. The consensus leaned toward choosing the right tool for the job, recognising that dependent types are not a universal solution.

Original article

02 Nov 2025

[ memories  AUTOMATH  LCF  type theory  Martin-Löf type theory  NG de Bruijn  ALEXANDRIA  ]

To be fair, nobody asks me this exact question. But people have regularly asked why Isabelle dispenses with proof objects. The two questions are essentially the same, because proof objects are intrinsic to all the usual type theories. They are also completely unnecessary and a huge waste of space. As described in an earlier post, type checking in the implementation language (rather than in the logic) can ensure that only legitimate proof steps are executed. Robin Milner had this fundamental insight 50 years ago, giving us the LCF architecture with its proof kernel. But the best answer to the original question is simply this: I did use dependent types, for years.
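To make the LCF point concrete, here is a minimal, hypothetical sketch in an ML-style language (OCaml; the tiny logic and the names are illustrative, not Isabelle's actual kernel). Because `thm` is an abstract type, theorems can only be produced by the exported inference rules, and no proof object is ever stored:

```ocaml
(* Hypothetical LCF-style kernel sketch: a toy logic with implication only.
   The abstract type [thm] can be built solely via the rules below, so the
   ML type checker guarantees every theorem arose from legal proof steps. *)
module Kernel : sig
  type form = Atom of string | Imp of form * form
  type thm                                   (* abstract: no proof objects *)
  val dest : thm -> form list * form         (* hypotheses and conclusion  *)
  val assume : form -> thm                   (* A |- A                     *)
  val mp : thm -> thm -> thm                 (* from A -> B and A, infer B *)
end = struct
  type form = Atom of string | Imp of form * form
  type thm = Thm of form list * form
  let dest (Thm (hs, c)) = (hs, c)
  let assume a = Thm ([a], a)
  let mp (Thm (hs1, ab)) (Thm (hs2, a)) =
    match ab with
    | Imp (a', b) when a' = a -> Thm (hs1 @ hs2, b)
    | _ -> failwith "mp: premises do not match"
end
```

Anything built on top of such a kernel, such as tactics or decision procedures, can only call these rules, so a bug outside the kernel cannot manufacture a bogus theorem.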

My time with AUTOMATH

I was lucky enough to get some personal time with N G de Bruijn when he came to Caltech in 1977 to lecture about AUTOMATH. I never actually got to use this system. Back then, researchers used the nascent Internet (the ARPAnet) not to download software so much as to run software directly on the host computer, since most software was not portable. But Eindhoven University was not on the ARPAnet, and AUTOMATH was configured to run on a computer we did not have:

Until September 1973, the computer was the Electrologica X8, after that Burroughs 6700. In both cases the available multiprogramming systems required the use of ALGOL 60.

I did however read many of the research reports, including the PhD dissertation by LS Jutting, where he presents his translation of Landau’s text Grundlagen der Analysis (described last time) from German into AUTOMATH. It is no coincidence that many of my papers, from the earliest to the latest, copied the idea of formalising a text and attempting to be faithful to it, if possible line by line.

As an aside, note that while AUTOMATH was a system of dependent types, it did not embody the Curry–Howard correspondence (sometimes wrongly called the Curry–Howard–de Bruijn correspondence). That correspondence involves having a type theory strong enough to represent the predicate calculus directly in the form of types. In AUTOMATH you had to introduce the symbols and inference rules of your desired calculus in the form of axioms, much as you do with Isabelle. In short, AUTOMATH was a logical framework:

like a big restaurant that serves all sorts of food: vegetarian, kosher, or anything else the customer wants
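To give a rough flavour of the framework style, here is a purely hypothetical sketch in Lean notation (not actual AUTOMATH or Isabelle text): the object logic's formulas, connectives and rules are postulated within the framework rather than being built in.

```lean
-- Illustrative only: in a logical framework, a connective and its
-- inference rule are introduced as postulates of the framework.
axiom Form : Type
axiom Conj : Form → Form → Form
axiom Provable : Form → Prop
axiom conj_intro : ∀ {a b : Form}, Provable a → Provable b → Provable (Conj a b)
```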

De Bruijn did not approve of the increasingly powerful type theories being developed in the 1990s. AUTOMATH was a weak language, a form of λ-calculus including a general product construction just powerful enough to express the inference rules of a variety of formalisms and to make simple definitions, again clearly the inspiration for Isabelle. Isabelle aims to be generic, like the big AUTOMATH restaurant. Only these days everybody prefers the same cuisine, higher-order logic, so Isabelle/HOL has become dominant. Unfortunately, I last spoke to Dick (as he was known to friends) when I was putting all my effort into Isabelle/ZF. He simply loathed set theory and saw mathematics as essentially typed. He never lived to see the enormous amount of advanced mathematics that would be formalised using types in Isabelle/HOL.

I annoyed him in another way. I kept asking, AUTOMATH looks natural, but how do we know that it is right? He eventually sent me a 300 page volume entitled The Language Theory of Automath. It describes AUTOMATH’s formal properties such as strong normalisation and Church–Rosser properties, but this was not the answer I wanted at all. I got that answer for a quite different type theory.

Martin-Löf type theory

In response to kind invitations from Bengt Nordström and Kent Petersson, I paid a number of visits to Chalmers University in Gothenburg to learn about Martin-Löf type theory. I was particularly impressed by its promise of a systematic and formal approach to program synthesis. I had already encountered intuitionism through a course on the philosophy of mathematics at Stanford University, as I recall taught by Ian Hacking. The “rightness” of Martin-Löf type theory was obvious, because it directly embodied the principles of intuitionistic truth as outlined by Heyting: for example, that a proof of $A\land B$ consists of a proof of $A$ paired with a proof of $B$.
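That clause fits in one line; here is a small illustration in Lean notation (used only because Lean appears later in this post): under the Curry–Howard reading, a proof of the conjunction is literally a pair.

```lean
-- The anonymous constructor ⟨ha, hb⟩ pairs a proof of A with a proof of B.
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := ⟨ha, hb⟩
```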

I devoted several years of research to Martin-Löf type theory. This included a whole year of intricate hand derivations to produce a paper that I once thought would be important, and the very first version of Isabelle. Yes: Isabelle began as an implementation of Martin-Löf type theory, which is still included in the distribution even today as Isabelle/CTT. But eventually I tired of what seemed to me a doctrinaire attitude bordering on a cult of personality around Per Martin-Löf. The sudden switch to intensional equality (everyone was expected to adopt the new approach) wrecked most of my work. Screw that.

You might ask, what about the calculus of constructions, which arose during that time and eventually gave us Rocq and Lean? (Not to mention LEGO.) To me they raised, and continue to raise, the same question I had put to de Bruijn. Gérard Huet said something like “it is nothing but function application”, which did not convince me. It’s clear that I am being fussy, because thousands of people find these formalisms perfectly natural and believable. But it is also true that the calculus of constructions underwent numerous changes over the past four decades. There seem to be several optional axioms that people sometimes adopt while attempting to minimise their use, like dieters enjoying an occasional croissant.

Decisions, decisions

We can see all this as an example of the choices we make in research. People were developing new formalisms; that very fact was the impetus for making Isabelle generic in the first place. But we have to choose whether to spend our time developing formalisms or instead to pick a fixed formalism and see how far we can push it. Both are legitimate research goals.

For example, already in 1985, Mike Gordon was using higher-order logic to verify hardware. He was not distracted by the idea that some dependent type theory might work better for n-bit words and the like. The formalism that he implemented was essentially the same as the simple theory of types outlined by Alonzo Church in 1940. He made verification history using this venerable formalism, and John Harrison later found a clever way to encode the dimension of vector types including words. Isabelle/HOL also implements Church’s simple type theory, with one extension: axiomatic type classes. Isabelle users also derive much power from the locale concept, a kind of module system that lies outside any particular logic.
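Harrison's dimension trick can be gestured at as follows (an illustrative sketch in Lean notation, not his actual HOL Light construction): a vector is a function out of an index type, so the dimension is carried by the type itself, with no dependent types needed.

```lean
-- Illustrative sketch: the index type plays the role of the dimension.
def Vec (ι : Type) (α : Type) : Type := ι → α

-- a 3-component vector of naturals, indexed by Fin 3
def v3 : Vec (Fin 3) Nat := fun i => 2 * i.val
```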

During all this time, both Martin-Löf type theory and the calculus of constructions went through several stages of evolution. It’s remarkable how the Lean community, by running with a certain version of the calculus, quickly formalised a vast amount of mathematics.

Pushing higher-order logic to its limit

I felt exceptionally lucky to win funding from the European Research Council for the advanced grant ALEXANDRIA. When I applied, homotopy type theory was still all the rage, so the proposal emphasised Isabelle’s specific advantages: its automation, its huge libraries and the legibility of its proofs.

The team started work with enthusiasm. Nevertheless, I fully expected that we would hit a wall, reaching mathematical material that could not easily be formalised in higher-order logic. Too much of Isabelle’s analysis library identified topological spaces with types. Isabelle’s abstract algebra library was old and crufty. A number of my research colleagues were convinced that higher-order logic was not adequate for serious mathematics. But Anthony Bordg took up the challenge, leading a subproject to formalise Grothendieck schemes.

For some reason I had a particular fear of the field extension $F[a]$, which extends the field $F$ with some $a$ postulated to be a root of some polynomial over $F$. (For example, the field of complex numbers is precisely $\mathbb{R}[i]$, where $i$ is postulated to be a root of $x^2+1$.) And yet an early outcome of ALEXANDRIA was a proof, by Paulo Emílio de Vilhena and Martin Baillon, that every field admits an algebraically closed extension. This was the first proof of that theorem in any proof assistant, and its proof involves an infinite series of field extensions.
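For readers who have not seen the notation, the standard algebraic background (not specific to the formalisation) is that adjoining a root of an irreducible polynomial $p$ over $F$ amounts to taking a quotient of the polynomial ring:

$$F[a] \;\cong\; F[x]/(p(x)), \qquad \text{for example}\quad \mathbb{R}[i] \;\cong\; \mathbb{R}[x]/(x^2+1) \;\cong\; \mathbb{C}.$$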

We never hit any wall. As our group went on to formalise more and more advanced results, such as the Balog–Szemerédi–Gowers theorem, people stopped saying “you can’t formalise mathematics without dependent types” and switched to saying “dependent types give you nicer proofs”. But they never proved this claim.

Now that dependent type theory has attained maturity and has an excellent tool in the form of Lean, shall I go back to dependent types? I am not tempted. The only aspects of Lean that I envy are its huge community and the Blueprint tool. I hear too many complaints about Lean’s performance. I’ve heard of too many cases where dependent types played badly with intensional equality – I sat through an entire talk on this topic – or otherwise made life difficult. Quite a few people have told me that the secret of dependent types is knowing when not to use them. And so, to me, they have too much in common with Tesla’s Full Self-Driving.
