The Engine Behind the Hype

Original link: https://www.onuruzunismail.com/blog/the-engine-behind-the-hype

## The Rise of Pi: Beyond the "Lobster" AI Assistant

OpenClaw is a popular AI assistant with over 100K stars on GitHub, notorious for its chaotic renaming saga and for having its old accounts hijacked by crypto scammers. While OpenClaw dominates the headlines, its underlying engine, **Pi**, created by Mario Zechner, has been quietly gaining attention.

The author recounts a long exploration of AI coding tools, from basic copy-pasting all the way to sophisticated platforms like Claude Code and Cursor, ultimately finding them limited by excessive context-window usage and bloat. Pi takes a radically different approach: a minimal core with only four tools (Read, Write, Edit, Bash) and a system prompt under 1,000 tokens.

That simplicity translates into superior **context efficiency**. Where Claude Code burns through tokens quickly, Pi handles comparable workloads with far fewer, producing cleaner output and better code understanding. Pi's extension system lets the agent build its own plugins, offering flexibility and ownership.

Although Pi is a one-person project with the inherent risks that entails, it represents a potential shift toward smaller, more customizable AI tools that let power users tailor their own workflows rather than adapt to predefined products. The author concludes that despite the hype around tools like OpenClaw, Pi is currently the most effective coding assistant they have used.

A Hacker News thread discusses "The Engine Behind the Hype" (onuruzunismail.com), highlighting a minimal AI agent framework called pi-mono (github.com/badlogic/pi-mono). Users were impressed by its efficiency, noting that it uses significantly fewer tokens than mainstream AI tools thanks to lower system-prompt overhead.

The discussion centers on the trend toward specialized AI layers, contrasted with "monolithic" tools. Some commenters questioned the quality of AI-generated content, suspecting the original article itself was largely written by AI, and voiced concern about the growing number of bots on Hacker News.

One recurring point was the odd practice of addressing AIs the way one addresses humans, questioning whether it actually works or merely imitates patterns that have succeeded before. The framework's "don't trust the human" instruction also sparked discussion.

Original Article

The Lobster Everyone Knows

OpenClaw has north of 100K GitHub stars. It also has three names. Started as ClawdBot, became MoltBot after Anthropic sent a trademark complaint in January 2026, then settled on OpenClaw. During one of those renames, crypto scammers hijacked the old accounts in about ten seconds and pumped a fake token to $16 million. It has a Wikipedia page now. The whole thing is wild.

Peter Steinberger built it. A proactive AI assistant that talks to you on WhatsApp, Telegram, Slack, Discord, iMessage, whatever. People kept posting “I told it to do X and a few hours later it was done.” Someone claimed Claude called them on the phone. Whether or not that’s exaggerated, the project is impressive and Peter knows his shit far more than I do.

One thing stuck with me though. Peter seemed to have built or used his own harness instead of Claude Code. His own browser automation too, not Playwright as an MCP. I dug into the GitHub and found the engine underneath OpenClaw is a coding agent called Pi, built by Mario Zechner. The guy who made libGDX. Peter and Mario know each other. Everyone’s talking about the lobster. I want to talk about the engine.

Every Tool I’ve Tried

To explain why Pi matters to me, I need to give you the timeline. It’s long.

I started like most people. Copying and pasting code blocks into ChatGPT. Then moved to Claude. Then started pasting entire codebases using tools like PasteMax by Kevin Leanway. I watched his videos about planning workflows and learned a lot from them. Then came some VS Code extensions that let you talk to the model inside the IDE. Copilot completions never clicked for me until one day they did. Side story.

Then Cursor. I used Augment for a few months too. I thought Augment was superior during the window I was on it. My monthly spend across these tools with overages hit $100-$150. That’s when Claude Code entered the picture. It was API-only at first, so I saved it for things the other tools couldn’t handle. When Claude Plans launched, I jumped the same day. I was already using it more than anything else.

I also tried Kilo Code, Windsurf, ChatLLM (underrated), Roo Code, and more IDE plugins than I can list. I liked how Kilo Code handled the planning phase with when/then statements. Gravity showed up more recently and is worth watching. But Claude Code felt like the answer. I had a rhythm. Rough estimations for how long things would take. Greenfield, brownfield, whatever. With enough hand-holding and context management it was getting shit done. Infinitely more than the old days. I was happy.

The more I built, the more I wanted to build. That feedback loop is its own subject.

The Context Window Problem

Claude Code eats context. Or I eat context. Someone could say “that’s on you, you’re not feeding it the right stuff.” Fair enough, sometimes. But I’d start a session, have it read some files, and 50% of my context window was gone before I’d asked it to do anything. The system prompts, the MCPs, whatever Claude injects before you type a single character. I’ve written about this frustration before. My personal rule: once I hit 70K tokens, I’m in the yellow zone. Over 100K, time to wrap up and start fresh.
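The personal rule above can be written down as a tiny helper. The 70K and 100K thresholds come straight from the paragraph; the function name and zone labels are just illustration, not part of any tool's API.

```python
# A sketch of the context-budget rule described above: under 70K tokens
# is fine, from 70K on you're in the yellow zone, and past 100K it's
# time to wrap up and start a fresh session. Thresholds are the
# author's; everything else is illustrative.

def context_zone(tokens_used: int) -> str:
    """Classify a session by how much context it has consumed."""
    if tokens_used > 100_000:
        return "start fresh"
    if tokens_used >= 70_000:
        return "yellow zone"
    return "green"
```

A session sitting at 80K tokens would land in the yellow zone, which is the point where, per the rule above, output quality starts to be worth watching.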

I kept trying to fix this. Gemini CLI became a thing and I tried it. People loved 2.5 Pro for coding, then lost their shit when 3 Pro dropped. I used the CLI before Gemini 3 and it made some sense for lighter tasks, but it was never a replacement. I use Gemini Pro daily for plenty of things, but coding isn’t one of them. Maybe I’m too tuned to how Claude thinks.

OpenCode was around. Its pitch is any model, any provider. But anything other than Claude felt off to me, and if I’m going to use Claude anyway, why learn a new tool’s feel when Claude’s own tool works well enough? Then GLM showed up. GLM is the RC Cola of Claude. Somewhat dependable. Can do some tasks better than Gemini Pro but not as good as Claude. I think that’s their own claim too. Off-brand Claude, and I love it for that.

Then I found Oh My OpenCode. A plugin for OpenCode that splits work across specialized agents: Sisyphus, Prometheus, Atlas, Hephaestus. Each agent has its own role with strict guardrails, not just prompt-based but hard-coded hooks. I liked how they thought about the problem. Atlas in particular seemed to handle context compaction. Sessions could run long without the context window blowing up. That was interesting.

Then Anthropic restricted OAuth access and cited Oh My OpenCode as justification for blocking OpenCode. The tool you pay for and depend on gets to set arbitrary-feeling rules that you follow or you’re out. That felt weird.

Meanwhile, when Claude Code was maxed out for the day, I felt like a fish out of water. That dependency bothered me.

The Engine Nobody Talks About

Back to Peter and OpenClaw. His mention of a custom harness sent me digging. That’s when I found Pi.

Mario Zechner built it. He went through a similar path to mine. Copying and pasting into ChatGPT, then Copilot, then Cursor, then Claude Code. Unlike me, he could build the tool he wanted. His philosophy: strip away all the bloat. Four tools. Read, Write, Edit, Bash. System prompt under 1,000 tokens. That’s it. All the AI needs is to read files, write files, edit files, and run commands. Everything else is noise.
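To make the "four tools and nothing else" idea concrete, here is a minimal sketch of what such an agent core could look like. The tool names (Read, Write, Edit, Bash) match the article; the dispatch shape, function signatures, and helper names are my assumptions for illustration and are not Pi's actual implementation.

```python
import subprocess

# A hypothetical sketch of a four-tool agent core: read files, write
# files, edit files, run commands. In a real agent, the model emits
# tool calls and this layer executes them.

def tool_read(path: str) -> str:
    """Return a file's contents so the model can see them."""
    with open(path) as f:
        return f.read()

def tool_write(path: str, content: str) -> str:
    """Create or overwrite a file."""
    with open(path, "w") as f:
        f.write(content)
    return f"wrote {len(content)} chars to {path}"

def tool_edit(path: str, old: str, new: str) -> str:
    """Replace one exact occurrence of a substring, the classic
    minimal edit primitive for coding agents."""
    text = tool_read(path)
    if old not in text:
        return "error: old text not found"
    tool_write(path, text.replace(old, new, 1))
    return "ok"

def tool_bash(cmd: str) -> str:
    """Run a shell command and capture combined output."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"read": tool_read, "write": tool_write, "edit": tool_edit, "bash": tool_bash}

def dispatch(name: str, *args: str) -> str:
    """The entire tool layer: look the tool up and call it."""
    return TOOLS[name](*args)
```

The point of the sketch is how little surface area this is: four functions and a dictionary, versus the sprawling tool catalogs and injected instructions of the bigger harnesses.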

I was skeptical. Another coding TUI. What could be different. I installed it on a VPS with GLM to have access from wherever. I’ve been tinkering with running AI outside the usual channels for a while now. For some reason I’m still afraid to put Claude on a VPS. I know there’s no reason for that fear but it’s there.

As Armin Ronacher wrote (Armin is a friend of Mario’s, and it’s interesting that these guys who deliver awesome shit tend to be friends with each other. I wish I had friends like that), Pi is interesting because of its tiny core and its extension system. It makes up for having almost nothing built in by letting extensions persist state into sessions. The agent can extend itself. You don’t download a plugin. You tell the agent to build one. The repo has 11K+ stars, which sounds like a lot until you compare it to OpenClaw. The lobster gets the attention. The engine gets quietly forked into production.
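The "extensions persist state into sessions" idea can be sketched in a few lines. To be clear, none of these names come from Pi's real extension API; this is a hypothetical illustration of the pattern: a tiny core that only knows how to load and save per-extension state, plus an extension that could have been written by the agent itself.

```python
import json
from pathlib import Path

# Hypothetical sketch of extensions persisting state across sessions.
# The Session class is the "core": it knows nothing about any
# particular extension, it just namespaces their state in one file.

class Session:
    def __init__(self, state_path: Path):
        self.state_path = state_path

    def load(self, extension: str) -> dict:
        """Return the named extension's saved state, or an empty dict."""
        if not self.state_path.exists():
            return {}
        return json.loads(self.state_path.read_text()).get(extension, {})

    def save(self, extension: str, state: dict) -> None:
        """Persist the named extension's state without touching others."""
        data = json.loads(self.state_path.read_text()) if self.state_path.exists() else {}
        data[extension] = state
        self.state_path.write_text(json.dumps(data))

# An "extension" is then just a function: here, a to-do list whose
# items survive a restart because they live in the session file.
def todo_extension(session: Session, item: str) -> list[str]:
    items = session.load("todo").get("items", [])
    items.append(item)
    session.save("todo", {"items": items})
    return items
```

The design choice worth noticing is that the core never grows: new capability lives entirely in extension code plus persisted state, which is what makes "tell the agent to build a plugin" plausible.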

The Context Surprise

I started using Pi more. Then I noticed the token counts.

I had it read a pile of documents. It completed multiple tasks. I looked at the context usage and I was not at 50%. What the fuck is happening.

With Claude Code, that same workload would have put me deep into the yellow zone. With Pi, the context was barely warm. The output was cleaner, and it seemed to understand the codebase better with less to chew on.

This is the thing that got me. Context efficiency is something everyone talks about, and Claude Code’s token usage gets treated as the normal baseline. I accepted it too. But Pi showed me that baseline is bloated. The question becomes: what are Claude Code and OpenCode stuffing into the context that Pi doesn’t? MCPs eat tokens, sure. But the gap is too big to explain with MCPs alone. There’s a pile of system prompts and tool definitions and injected instructions that you never see and never asked for.

Mario’s bet was that most of it is unnecessary. Four tools and a short prompt. The model is smart enough to figure out the rest. Based on my experience, he was right.

One Person’s Project

Pi is one person’s work. Mario could walk away tomorrow. That’s the risk with any project like this. OpenClaw’s rebrand chaos showed what happens when governance is thin and things move fast. Crypto scammers don’t wait for your community decision-making process.

But the code is open. You can fork it. You own it. And maybe that’s the future anyway. Small tools, built for you, maintained by you. Your agent extending itself to fit your workflow instead of you contorting to fit someone else’s product decisions.

The search for the perfect coding workflow doesn’t end. It hasn’t ended for me. Things change too fast in this space for anything to be permanent. But right now, the thing that calls itself a shitty coding agent is the least shitty tool I’ve used.

ai, tools, opinion
