Xortran - A PDP-11 Neural Network With Backpropagation in Fortran IV

Original link: https://github.com/dbrll/Xortran

XORTRAN is a multilayer perceptron (MLP) implemented in FORTRAN IV, designed to learn the XOR problem on a vintage PDP-11/34A computer (simulated via SIMH). The project demonstrates that a basic neural network could be built with 1970s technology.

The network uses one hidden layer of four neurons with leaky ReLU activation and is trained by backpropagation with mean squared error. Key features include He-like initialization, learning rate annealing, and a tanh output layer. Running under RT-11, the code requires at least 32 KB of memory and an FP11 floating-point processor. Training the 17 parameters takes a few minutes on real hardware (or in a throttled SIMH environment). The output shows the loss decreasing over the epochs, ultimately producing accurate XOR predictions. XORTRAN is both a retro-computing exercise and a historical bridge between early scientific computing and modern machine learning, released under the MIT License.

A recent Hacker News post highlighted an intriguing project, now available on GitHub: a neural network for the PDP-11 implemented in Fortran IV. It sparked discussion about the early days of neural network research. One commenter recalled a professor mentioning a PDP-11 implementation during their cognitive psychology studies, initially dismissing it as inefficient. Notably, however, the Neocognitron, the first convolutional neural network, *was* built on a PDP-11, albeit before backpropagation came into widespread use. While the PDP-11/34 is not powerful by today's standards (roughly equivalent to a turbo XT), it was highly reliable, and its floating-point unit significantly aided computation. Another user added that the original Fortran IV compiler was unusual in targeting a stack machine, yet the FPU was surprisingly effective at speeding things up, an advantage later improved upon in Fortran 77.

Original article

XORTRAN is a multilayer perceptron (MLP) written in FORTRAN IV, compiled and executed under RT-11 on a PDP-11/34A (via SIMH simulator).

It learns the classic non-linear XOR problem using:

  • One hidden layer (4 neurons, leaky ReLU activation)
  • Backpropagation with mean squared error loss
  • He-like initialization (manual Gaussian via Box-Muller lite)
  • Learning rate annealing (0.5 → 0.1 → 0.01)
  • Tanh output
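
The scheme above can be sketched in modern Python (this is an illustrative reimplementation of the same recipe, not the author's FORTRAN IV code; the seed, epoch counts, and annealing boundaries are assumptions chosen for the sketch):

```python
import math, random

random.seed(1)
ALPHA = 0.01  # leaky-ReLU slope for negative inputs

def leaky(z):  return z if z > 0.0 else ALPHA * z
def dleaky(z): return 1.0 if z > 0.0 else ALPHA

# He-like initialization: zero-mean Gaussian scaled by sqrt(2 / fan_in)
def he(fan_in): return random.gauss(0.0, math.sqrt(2.0 / fan_in))

W1 = [[he(2) for _ in range(4)] for _ in range(2)]  # 2 inputs -> 4 hidden
b1 = [0.0] * 4
W2 = [he(4) for _ in range(4)]                      # 4 hidden -> 1 output
b2 = 0.0

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

def forward(x):
    z1 = [x[0] * W1[0][j] + x[1] * W1[1][j] + b1[j] for j in range(4)]
    h  = [leaky(z) for z in z1]
    z2 = sum(h[j] * W2[j] for j in range(4)) + b2
    return z1, h, math.tanh(z2)  # tanh output

for epoch in range(1, 1501):
    # annealed learning rate, mirroring the 0.5 -> 0.1 -> 0.01 schedule
    lr = 0.5 if epoch <= 500 else (0.1 if epoch <= 1000 else 0.01)
    loss = 0.0
    for x, t in data:
        z1, h, y = forward(x)
        loss += (y - t) ** 2
        # backprop: MSE derivative times tanh', then through the hidden layer
        dz2 = 2.0 * (y - t) * (1.0 - y * y)
        for j in range(4):
            dz1 = dz2 * W2[j] * dleaky(z1[j])  # uses W2[j] before its update
            W2[j]    -= lr * dz2 * h[j]
            W1[0][j] -= lr * dz1 * x[0]
            W1[1][j] -= lr * dz1 * x[1]
            b1[j]    -= lr * dz1
        b2 -= lr * dz2
    loss /= len(data)
    if epoch == 1:
        first_loss = loss

for x, t in data:
    _, _, y = forward(x)
    print(f"{int(x[0])} {int(x[1])} GOT:{y:.6f} EXPECTED:{t:.0f}")
```

Like the FORTRAN original, this does plain per-sample stochastic gradient descent; the mean squared loss falls over the epochs and the four predictions approach 0, 1, 1, 0.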

The code compiles with the DEC FORTRAN IV compiler (1974). Execution requires a system with at least 32 kilobytes of memory and an FP11 floating-point processor. The PDP-11/34A was chosen as it was the smallest and most affordable PDP-11 equipped with an FP11 floating-point processor in the 1970s.

The training of the 17 parameters should take less than a couple of minutes on the real hardware. In SIMH, setting the throttle to 500K (set throttle 500K) will provide a more realistic execution speed.
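
The count of 17 trainable parameters follows from the 2-4-1 topology described above, assuming a bias on every neuron:

```python
n_in, n_hidden, n_out = 2, 4, 1  # 2 inputs, 4 hidden neurons, 1 output

# hidden layer: 2*4 weights + 4 biases; output layer: 4*1 weights + 1 bias
n_params = (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)
print(n_params)  # 8 + 4 + 4 + 1 = 17
```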

The output shows the mean squared loss every 100 epochs, followed by the final predictions from the forward pass.

The network converges towards the expected XOR outputs after a few hundred epochs, gradually reducing the error until it accurately approximates the desired results.

.RUN XORTRN
   1  0.329960233835D+00
 100  0.195189856059D+00
 200  0.816064184115D-01
 300  0.654882376056D-02
 400  0.109833284544D-02
 500  0.928130032748D-03

0 0 GOT:0.008353 EXPECTED:0.
0 1 GOT:0.979327 EXPECTED:1.
1 0 GOT:0.947050 EXPECTED:1.
1 1 GOT:0.020147 EXPECTED:0.
STOP --
  • In SIMH, attach the RL1 drive (ATT RL1 xortran.rl1).

  • In RT-11 (I use the single job RT-11-SJ V5), assuming DL1: is assigned to DK::

.FORTRAN/LIST:XORTRN.LST XORTRN.FOR
.LINK XORTRN.OBJ,FORLIB
.RUN XORTRN

Or if you just want to run the binary:

This project demonstrates that a minimal FORTRAN IV environment from the 1970s was sufficient to implement a basic neural network with backpropagation.
It’s both a retro-computing curiosity and a small historical experiment bridging early scientific computing and modern machine learning.

© 2025 Damien Boureille

This code is released under the MIT License.
You are free to use, copy, modify, and redistribute it, provided that you credit the original author.
