New Foundations is consistent – a difficult mathematical proof proved using Lean

Original link: https://leanprover-community.github.io/con-nf//

This article describes how the interactive theorem prover Lean was used to verify the consistency of Quine's "New Foundations" set theory. Randall Holmes has claimed a proof of its consistency since 2010, and that proof has now been completed in Lean. The theorem statements can be found in ConNF/Model/Result.lean, and the accompanying documentation, website, and deformalisation paper provide further detail. Users can run the project locally after installing elan, cloning the repository, and executing a specific command. The project depends on Mathlib, the community mathematical library for Lean. Every formalised definition and theorem, including those drawn from Mathlib, has been checked by Lean's trusted kernel. While this machinery guarantees that the constructed proofs are computationally valid, care must still be taken when translating between English and the formal language used in the code. The theory in question, Tangled Type Theory (TTT), is a many-sorted set theory with equality "=" and the membership relation "∈". Its difficulty lies in the behaviour of sets across different type levels, in particular the extensionality axiom, which makes constructing a model challenging. A strategy involving the construction of a base type and careful control of the size of each type makes it possible to build a model that obeys the rules of TTT while satisfying its extensionality axiom. The project concludes by proving that the constructed model satisfies a chosen finite axiomatisation of TTT.

This article discusses the differences between relying on humans and on large language models (LLMs) for a task, particularly with regard to managing and correcting errors. The author argues that when mistakes occur, humans can offer recourse and self-correction thanks to their ability to understand context and adapt, whereas LLMs lack this capability because they simply process information according to pre-existing algorithms and data. When an LLM makes a mistake, the user has limited options for correcting it, with little ability to improve the model or switch to a different one. The bigger challenge, however, is the lack of self-correction: an LLM relies only on its original training and does not learn from or improve on past mistakes. Despite advances in computing performance and capability, LLMs currently lag behind humans in anticipating human needs, leading to frequent discrepancies and a need for extensive testing and oversight. For verifying mathematical claims, proof assistants such as Metamath and Lean provide rigorous checking through formal methods and ensure correctness. By concentrating trust in a small kernel, these systems offer greater reliability than human interpretation, and independent verifiers implemented in multiple programming languages provide further cross-checking, strengthening confidence and eliminating potential bias or inconsistency. Overall, while significant progress continues to be made, current LLMs still lack the human ability to learn from experience, adapt to changing circumstances, and display genuine insight or creativity.

Original text

In 1937, Quine proposed a set theory called “New Foundations”, and since 2010, Randall Holmes has claimed to have a proof of its consistency. In this repository, we use the interactive theorem prover Lean to verify the difficult part of his proof, thus proving that New Foundations is indeed consistent. The proof is now complete, and the theorem statements can be found in ConNF/Model/Result.lean (source, docs).

See our website for more information, the documentation of our Lean code, and the deformalisation paper that translates the Lean definitions into English.

To run our code locally, install elan, clone the repository, and run the following command in a terminal in the repository’s root directory.
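The specific command is not reproduced on this page; as a hedged sketch, the usual set-up for a mathlib-based Lean project such as this one looks roughly like the following (the repository URL and the cache step are assumptions, not taken from this page, so consult the repository's README for the exact command):

```sh
# Assumed set-up sketch; the repository's README specifies the exact command.
git clone https://github.com/leanprover-community/con-nf.git
cd con-nf
lake exe cache get   # fetch prebuilt mathlib binaries (standard for mathlib-based projects)
lake build           # compile the project from the command line, as noted below
```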

The code can then be viewed in an editor such as Visual Studio Code, or compiled directly from the command-line using lake build.

Objective

It is known that New Foundations is consistent if and only if a theory called Tangled Type Theory (TTT) is consistent (see theorem 1 here). We have formally constructed a model of TTT in Lean, thus proving (on paper) that New Foundations is consistent, or in short, Con(NF). We are working from various versions of the paper proof by Holmes, but many alterations and additions have been made to make the proof compatible with Lean’s type theory.

This project depends on mathlib, the community mathematical library written in Lean. This allows us to use familiar results about things like cardinals and groups without having to prove them ourselves.

Every definition and theorem in mathlib and this project has been checked by Lean’s trusted kernel, which computationally verifies that the proofs we have constructed are indeed correct. However, Lean cannot check that the statements of the definitions and theorems match their intended English equivalents, so when drawing conclusions from the code in this project, translation to and from English must be done with care.

Tangled type theory

TTT is a many-sorted set theory with equality “=” and the membership relation “∈”. The sorts are indexed by a limit ordinal λ, and elements of λ are called type indices. A formula “x = y” is well-formed if x and y have the same type, and a formula “x ∈ y” is well-formed if x has any type less than y.
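To make the sorting discipline concrete, here is a minimal illustrative sketch in Lean; it is not the repository's actual definition, and it uses Nat in place of the limit ordinal λ:

```lean
-- Illustrative sketch only, not the project's definitions: type indices are
-- modelled by Nat rather than by elements of a limit ordinal λ.
structure Var where
  type : Nat
  name : String

inductive Atomic where
  | eq  (x y : Var)   -- "x = y"
  | mem (x y : Var)   -- "x ∈ y"

-- "x = y" is well-formed when the two variables have the same type index;
-- "x ∈ y" is well-formed when the type of x is strictly below the type of y.
def Atomic.wellFormed : Atomic → Prop
  | .eq x y  => x.type = y.type
  | .mem x y => x.type < y.type
```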

One of the axioms of tangled type theory is extensionality, which stipulates that a set of type α is uniquely determined by its elements of any type β < α. This is strange: for example, if two sets of type α differ, they have different type β elements for every β < α. This property makes it difficult to construct models of TTT.
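Spelled out: for sets A and B of type α and any single fixed β < α, if every x of type β satisfies x ∈ A ↔ x ∈ B, then A = B. Agreement in one extension below α thus forces agreement in every extension below α.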

Strategy

Our construction of the model uses the following rough strategy.

Construction of the base type

Let λ be a limit ordinal, κ > λ be a regular ordinal, and μ > κ be a strong limit cardinal with cofinality at least κ. Sets of size less than κ are called small.
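For concreteness (this particular choice is our own illustration, not taken from the paper or the code), one assignment satisfying these three constraints is λ = ω, κ = ω₁, and μ = ℶ_{ω₁}: ω₁ is a regular ordinal above ω, and ℶ_{ω₁} is a strong limit cardinal above ω₁ with cofinality exactly ω₁.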

We first construct an auxiliary type at level -1, called the base type, below all types that will eventually become part of the model. Elements of this type are called atoms (although they are not atoms in the ZFU or NFU sense, for instance). There are μ atoms, partitioned into litters of size κ.
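A rough Lean picture of this set-up (purely illustrative; the repository's actual definitions are different, and the index types below merely stand in for the cardinals μ and κ):

```lean
-- Illustrative sketch only. `L` stands in for an index type of size μ (one
-- index per litter) and `K` for an index type of size κ (positions inside a
-- litter); the real project works with explicit cardinals.
structure Litter (L : Type) where
  index : L

structure Atom (L K : Type) where
  litter   : Litter L   -- the litter this atom belongs to
  position : K          -- its position within that litter

-- The partition into litters is recovered from the first projection.
def Atom.toLitter {L K : Type} (a : Atom L K) : Litter L :=
  a.litter
```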

Constructing each type

At each type level α, we will produce a collection of model elements for our intended model of TTT, which we will sometimes call t-sets. We also produce a group of permutations, called allowable permutations, which act on the t-sets. The membership relation is preserved under the action of allowable permutations. Each t-set is stipulated to have a support under the action of allowable permutations; this is a small set of objects called addresses, such that whenever an allowable permutation fixes all elements of a support, it also fixes the t-set.
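The notion of a support can be sketched abstractly in Lean as follows (a toy version, not the project's definitions: the group structure and the smallness condition are omitted, and the action is modelled by two bare functions):

```lean
import Mathlib

-- Toy model: `Perm` stands for allowable permutations, `Addr` for addresses,
-- and `TSet` for the model elements at some level.
structure ToyAction (Perm Addr TSet : Type) where
  onAddr : Perm → Addr → Addr
  onTSet : Perm → TSet → TSet

-- `S` supports `x` when every permutation fixing each element of `S` fixes `x`.
-- (In the real development, `S` is additionally required to be small.)
def Supports {Perm Addr TSet : Type} (A : ToyAction Perm Addr TSet)
    (S : Set Addr) (x : TSet) : Prop :=
  ∀ π : Perm, (∀ a ∈ S, A.onAddr π a = a) → A.onTSet π x = x
```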

Each t-set at level α will be given a preferred extension of some type β < α, and we can recover from a t-set’s elements which extension it prefers. The extensions of such a t-set in other lower types can be deduced from its β-extension. This allows us to satisfy TTT’s extensionality axiom.
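One crude way to picture this (a toy sketch under the simplifying assumption that levels are natural numbers; this is not how the repository encodes t-sets):

```lean
-- Toy sketch only: a "t-set" at level α records its preferred lower level β
-- together with its β-extension; its extensions at other levels are meant to
-- be derived from this data.
structure ToyTSet (Obj : Nat → Type) (α : Nat) where
  β : Nat
  β_lt : β < α
  extension : Obj β → Prop
```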

Controlling the size of each type

Each type α can only be constructed under the assumption that all types β < α are of size exactly μ (among other hypotheses). It is easy to prove that the collection of t-sets at level α has cardinality at least μ, so we need to show that there are at most μ elements of this set. We do this by showing that there are not that many fundamentally different descriptions of tangles under the action of allowable permutations. This requires the freedom of action theorem, which is a technical lemma that allows us to construct allowable permutations. The main result of this section is here.

Finishing the induction

We can then run the above process recursively to produce the types of tangles at all type levels α. This is an easy step to perform in set theory, but requires a lot of work in type theory because of the interconnectedness of the various inductive hypotheses we need. We then check that our construction indeed produces a model of TTT by checking that it satisfies a finite axiomatisation of the theory. We have chosen to convert Hailperin’s finite axiomatisation of NF’s comprehension scheme into a finite axiomatisation of TTT, which we present in our results file. Note, however, that this choice is arbitrary, and any other finite axiomatisation can be easily proven with the infrastructure already in place.

Dependency graph

