The PowerPC Has Still Got It (Llama on G4 Laptop)

Original link: https://www.hackster.io/news/the-powerpc-has-still-got-it-c4348bd7a88c

## Old Tech, New Tricks: Running AI on a 20-Year-Old PowerBook

Apple's M-series chips dominate the headlines today, but the company previously used PowerPC processors. Recently, vintage computing enthusiast Andrew Rossignol demonstrated that these old chips are not obsolete by running a large language model (LLM) on a 2005 PowerBook G4. Despite the machine's limitations (a 1.5GHz processor and just 1GB of RAM), Rossignol adapted an open-source LLM inference engine (llama2.c) and the TinyStories model to the PowerPC's big-endian architecture. This required extensive software modifications and manual memory management. Performance was slow, with text generated at 0.77 tokens per second versus 6.91 on a modern Intel Xeon; leveraging the PowerPC's AltiVec vector processing extension raised that to 0.88 tokens per second. The experiment shows that, with clever optimization, even twenty-year-old hardware can take part in modern AI, pushing the boundaries of what older technology can do.

A Hackster.io article showing the Llama large language model running successfully on a vintage PowerPC G4 laptop sparked discussion on Hacker News. Many users reported similar projects of their own, running Llama and other models (such as qwen3.c) on all sorts of "weird, old systems," including the SPARC, PA-RISC, RISC-V, Alpha, and POWER architectures. The conversation clarified that Apple did not *manufacture* PowerPC chips but merely used them. One user distinguished llama.c from llama.cpp, confirmed that the latter also runs successfully on a G4, and promised a write-up on it. Finally, a commenter noted that AI is essentially matrix math, a workload well suited to existing hardware capabilities.

Original Article

For most people, the term “Apple silicon” brings to mind powerhouse processors like the M4 Max. Since Apple went through a lengthy Intel phase prior to the development of their M-series chips, it is often assumed that these are their first custom processors. But twenty years ago, Apple had different custom silicon in their computers — PowerPC microprocessors.

The advantages of these earlier chips were not as clear cut as the M-series chips. Diehard Apple fans swore that they were superior, while the PC crowd wouldn’t touch them with a ten-foot pole. But in any case, they are a couple decades old at this point, so they do not have a lot of gas left in the tank. However, Andrew Rossignol does not believe that the tank is empty just yet. Rossignol recently demonstrated that a PowerBook G4 from 2005 is capable of getting in on the action of running modern artificial intelligence (AI) algorithms — with some caveats, of course.

Process different

Rossignol, a vintage computing enthusiast, successfully ran a large language model (LLM) on a 1.5GHz PowerBook G4, a machine with just 1GB of RAM and a 32-bit processor. The experiment used a fork of llama2.c, an open-source LLM inference engine originally developed by Andrej Karpathy. Given the hardware constraints of the PowerBook, Rossignol chose the TinyStories model, a relatively small model with 110 million parameters that was designed specifically for generating simple short stories.
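
For reference, llama2.c reads the model's hyperparameters from a small fixed-size header at the front of the checkpoint file, before the raw float32 weights. The struct below mirrors the `Config` layout in Karpathy's run.c; the comments are descriptive, and the field order matters because the header is read as seven consecutive 32-bit integers:

```c
// Checkpoint header read by llama2.c before the weight tensors.
// All seven fields are 32-bit ints, written on a little-endian
// machine; the field order defines the on-disk layout.
typedef struct {
    int dim;        // transformer embedding dimension
    int hidden_dim; // hidden dimension of the feed-forward layers
    int n_layers;   // number of transformer layers
    int n_heads;    // number of attention (query) heads
    int n_kv_heads; // number of key/value heads
    int vocab_size; // tokenizer vocabulary size
    int seq_len;    // maximum sequence length
} Config;
```

On a big-endian G4, even this header is wrong straight off the disk, which is where the conversion work described next begins.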

To make this work, Rossignol had to modify the original software to accommodate the PowerPC’s big-endian architecture, which differs from the little-endian format that most modern processors use. This involved converting model checkpoints and tokenizer data to the appropriate format, ensuring that numerical data was processed correctly. Additionally, the memory alignment requirements of the aging PowerPC chip meant that weights had to be copied into memory manually, rather than being memory-mapped as they would be on an x86 system.
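
The article does not reproduce Rossignol's loader, but the core fix-up looks something like the sketch below (swap32 and read_le_floats are illustrative names, not taken from his code): every 32-bit value read from a little-endian checkpoint has its bytes reversed before the big-endian G4 interprets it, and the destination is an ordinary heap buffer rather than a memory-mapped file.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

// Reverse the byte order of a 32-bit word. IEEE-754 floats swap
// cleanly as raw bits, so the same routine covers ints and floats.
static uint32_t swap32(uint32_t v) {
    return (v << 24) | ((v & 0x0000FF00u) << 8) |
           ((v & 0x00FF0000u) >> 8) | (v >> 24);
}

// Read n little-endian float32 values into a heap buffer, swapping
// each one when the host is big-endian. On the PowerBook this path
// replaces the mmap() shortcut that works on little-endian x86.
static size_t read_le_floats(FILE *f, float *dst, size_t n) {
    size_t got = fread(dst, sizeof(float), n, f);
#if defined(__BIG_ENDIAN__) || \
    (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
    for (size_t i = 0; i < got; i++) {
        uint32_t bits;
        memcpy(&bits, &dst[i], sizeof(bits));
        bits = swap32(bits);
        memcpy(&dst[i], &bits, sizeof(bits));
    }
#endif
    return got;
}
```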

Well, technically it works

Performance was, predictably, not so good. Running the model on an Intel Xeon Silver 4216 processor achieved a processing speed of 6.91 tokens per second. The same model on the PowerBook G4, however, managed just 0.77 tokens per second — taking a full four minutes to generate a short paragraph of text.

To improve performance, Rossignol leveraged AltiVec, the PowerPC's vector processing extension. By rewriting the core matrix multiplication function using AltiVec's single instruction, multiple data (SIMD) capabilities, he was able to increase inference speed to 0.88 tokens per second. That is a modest improvement, but you have to take what you can in a project like this.
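
The rewritten kernel is not shown in the article, but an AltiVec version of llama2.c's inner loop (computing xout = W @ x, with W stored row-major) would look roughly like the sketch below. It assumes n is a multiple of 4 and that both buffers are 16-byte aligned, which is exactly what the manual weight copies described earlier guarantee; matmul_altivec is an illustrative name.

```c
#include <altivec.h>

// Multiply a (d x n) row-major weight matrix w by vector x, writing
// the result into xout. vec_madd fuses a 4-lane multiply and add, so
// each loop iteration processes four weights at once.
void matmul_altivec(float *xout, const float *x, const float *w,
                    int n, int d) {
    for (int i = 0; i < d; i++) {
        const float *row = w + (size_t)i * n;
        vector float acc = (vector float){0.0f, 0.0f, 0.0f, 0.0f};
        for (int j = 0; j < n; j += 4) {
            // vec_ld requires 16-byte-aligned addresses: one reason
            // the weights live in aligned buffers instead of an mmap.
            vector float wv = vec_ld(0, row + j);
            vector float xv = vec_ld(0, x + j);
            acc = vec_madd(wv, xv, acc);
        }
        // Reduce the four accumulator lanes to a single float.
        float lanes[4] __attribute__((aligned(16)));
        vec_st(acc, 0, lanes);
        xout[i] = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    }
}
```

The modest speedup (0.77 to 0.88 tokens per second) is consistent with the workload being bound by memory bandwidth rather than arithmetic, a common situation for LLM inference.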

Despite the slow performance, the fact that a 20-year-old laptop could successfully run a modern AI model at all is impressive. The PowerBook’s outdated architecture, limited RAM, and lack of specialized accelerators posed a number of challenges, but careful software optimizations and a deep understanding of the hardware allowed Rossignol to push the system well beyond its expected limits.
