The Agentic Self: Parallels Between AI and Self-Improvement

Original link: http://muratbuffalo.blogspot.com/2026/01/the-agentic-self-parallels-between-ai.html

## The Surprisingly Human Core of AI Agents

2025 saw the goal of AI shift from merely *talking* to actively *acting*, giving rise to "agentic" AI. Intriguingly, the techniques that power these agents feel remarkably... human. The key appears to be not a breakthrough in sophisticated computer science but the imitation of three core cognitive habits: writing things down, talking to oneself, and role playing.

Just as we offload thoughts onto paper, agents now use "scratchpads" (external memory) to overcome limited processing capacity and achieve stronger reasoning. Likewise, prompting an agent to "think" by generating an internal monologue, in a loop of acting/writing and reasoning, mirrors the human thought process and improves output quality.

Finally, the "alter ego effect" carries over to AI through "role prompting" and multi-agent systems. Assigning agents specific roles, such as architect, engineer, and critic, focuses their effort and markedly improves performance, much like seeking advice from a coach.

This suggests that effectively harnessing raw AI intelligence relies on a simple workflow: write it down, iterate, and verify, the fundamental mechanics of thought itself. The future may lie in combining these techniques with formal methods and symbolic AI for greater precision and reliability.

A Hacker News discussion centers on a blog post titled "The Agentic Self: Parallels Between AI and Self-Improvement." The core debate is whether large language models (LLMs) truly *think* like humans or whether the appearance of intelligence is merely an elaborate illusion. Some commenters point to notable differences between LLMs and the human brain, observing that many people lack an inner monologue, often considered central to human thought. Others counter that human thinking is not a continuous narrative either and frequently operates below conscious awareness. A quote from Turing Award winner Manuel Blum is shared, highlighting how writing extends cognition beyond a "finite automaton," though a rebuttal is raised about the limits of finite resources such as paper. The conversation touches on self-reflection and the idea of escaping deterministic computation, hinting at deeper, perhaps spiritual, dimensions of consciousness.

Original Article
2025 was the year of the agent. The goalposts for AGI shifted; we stopped asking AI to merely "talk" and demanded that it "act". As an outsider looking at the architecture of these new agents and agentic systems, I noticed something strange. The engineering tricks used to make AI smarter felt oddly familiar. They read less like computer science and more like … self-help advice.

The secret to agentic intelligence seems to lie in three very human habits: writing things down, talking to yourself, and pretending to be someone else. They are almost too simple.

The Unreasonable Effectiveness of Writing

One of the most profound pieces of advice I ever read as a PhD student came from Prof. Manuel Blum, a Turing Award winner. In his essay "Advice to a Beginning Graduate Student", he wrote: "Without writing, you are reduced to a finite automaton. With writing you have the extraordinary power of a Turing machine."

If you try to hold a complex argument entirely in your head, you will fail. Your working memory is a "finite automaton"; its capacity is sharply limited. But if you write things down, you offload that memory to the paper. You can then reference it, critique it, and build on it. The paper becomes your external hard drive.

AI Agents are now being built on this exact principle. An LLM has a limited context window; its effective attention span is short. If it tries to solve a 50-step coding problem in one breath, it will crash and burn. To fix this, we give agents scratchpads. We force them to write down a plan before they execute code. We give them memory buffers to store facts for later. We are essentially asking AI to take notes. By externalizing their internal state onto a digital piece of paper, agents evolve from simple pattern-matchers into robust thinkers.
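The scratchpad idea can be sketched in a few lines. This is a minimal illustration, not any particular framework's API; all names here (`Scratchpad`, `write`, `read`) are hypothetical. The point is only that intermediate state lives outside the model and gets replayed as context before each step.

```python
class Scratchpad:
    """External memory: the agent writes notes here instead of
    holding every intermediate result in its context window."""

    def __init__(self):
        self.notes = []

    def write(self, note: str) -> None:
        self.notes.append(note)

    def read(self) -> str:
        # Everything written so far, replayed as context for the next step.
        return "\n".join(self.notes)


pad = Scratchpad()
pad.write("Plan: 1) parse input  2) transform  3) validate output")
pad.write("Step 1 done: input parsed into 12 records")

# Before acting again, the agent re-reads its notes rather than re-deriving them.
context = pad.read()
print(context)
```

In a real agent the `read()` output would be prepended to the next model prompt; the paper (or buffer) does the remembering, so the model only has to do the reasoning.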

Thinking is Just Talking to Yourself in a Loop

For a long time, we treated LLMs as simple input-output machines: we asked a question, and it produced an answer. Often, the answers were unsatisfying—hallucinated or superficial. DeepSeek changed this by pausing the model before it answered, prompting it to “think” first. But what does thinking mean for a computer? It means generating text that remains hidden from the user: an internal monologue. In that sense, it mirrors human thought.

“Writing is nature’s way of letting you know how sloppy your thinking is.” — Guindon

“If you think without writing, you only think you’re thinking.” — Leslie Lamport

This brings us back to the unreasonable effectiveness of writing, but this time performed in a loop. Thinking isn't instantaneous; it is a process. Prototyping for the win! The Agent follows a loop: Act/Write → Reason → Repeat. It talks to itself, breaking complex problems into manageable parts. It asks, “Wait, should I check this assumption?” or “That doesn’t look right; let me try again.”
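The Act/Write → Reason → Repeat loop is simple enough to sketch. In this hypothetical example, `model` is a scripted stand-in for an LLM call (so the code runs without one); the loop grows a written transcript each turn and stops when the monologue reaches an answer.

```python
def model(prompt: str) -> str:
    """Stand-in for an LLM call (hypothetical); it scripts a short
    monologue keyed on how many thoughts are already written down."""
    canned = {
        0: "THOUGHT: break the problem into parts",
        1: "THOUGHT: wait, check that assumption",
        2: "ANSWER: 42",
    }
    turns = prompt.count("THOUGHT")
    return canned.get(turns, "ANSWER: done")


def think_in_a_loop(question: str, max_steps: int = 5) -> str:
    transcript = question           # the written record, grown each turn
    for _ in range(max_steps):
        reply = model(transcript)   # Reason over everything written so far
        transcript += "\n" + reply  # Act/Write: append to the transcript
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
    return "gave up"


print(think_in_a_loop("What is six times seven?"))  # → 42
```

The essential move is that each reasoning step conditions on the full transcript of prior steps; the "thinking" is literally accumulated writing.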


Role Playing: The Alter Ego Effect

A few years ago, I read The Alter Ego Effect: The Power of Secret Identities to Transform Your Life. Its core idea is that adopting a persona can unlock hidden capabilities. By stepping into a specific character, you bypass your own inhibitions and trigger a defined set of behaviors. Beyoncé had “Sasha Fierce” on stage, and most athletes have their “game faces”. The book made a convincing case, but this sounded so cheesy that I didn’t even blog about it at the time.

I guess that stuff actually works! In the world of AI Agents, they call this "Role Prompting" or "Multi-Agent Systems".

If you ask a single AI to "write code," it might do a mediocre job. But if you assign one AI the role of "The Architect" to plan the code, another the role of "The Engineer" to write it, and a third the role of "The Critic" to review it, the result is dramatically better.

These personas are not just theatrics; they are inductive biases. They constrain the search space. Just as adopting the persona of a "strict editor" helps a writer cut fluff, assigning an AI the role of a "debugger" forces it to look for errors rather than just generating text.
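A rough sketch of role prompting, under the assumption that a model call takes a system prompt and a user message (the `call_llm` parameter and the role text are hypothetical). The same underlying model runs three times, steered by a different persona each time:

```python
# Persona prompts: each one constrains what the model searches for.
ROLES = {
    "architect": "You are The Architect. Produce a step-by-step plan only.",
    "engineer":  "You are The Engineer. Implement the plan; write code only.",
    "critic":    "You are The Critic. Find bugs and risks; do not rewrite.",
}

def run_role(role: str, task: str, call_llm) -> str:
    # The persona acts as an inductive bias via the system prompt.
    return call_llm(system=ROLES[role], user=task)

def pipeline(task: str, call_llm) -> str:
    plan   = run_role("architect", task, call_llm)
    code   = run_role("engineer", f"{task}\n\nPlan:\n{plan}", call_llm)
    review = run_role("critic", f"Code:\n{code}", call_llm)
    return review

# Stub standing in for a real model, so the pipeline runs end to end.
def fake_llm(system: str, user: str) -> str:
    return f"[{system.split('.')[0]}] handled: {user.splitlines()[0]}"

print(pipeline("write a CSV parser", fake_llm))
```

Swapping `fake_llm` for a real chat-completion call turns this into the Architect → Engineer → Critic chain described above; the division of labor lives entirely in the system prompts.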

Advisor models act as meta-thinkers. They don’t execute tasks directly but monitor other agents, flag risks, and guide decisions. In coding, an Advisor might warn the Architect of a risky design or nudge the Engineer away from error-prone code. By providing this high-level oversight, Advisors let agents focus on immediate tasks while keeping long-term goals in view, making multi-agent systems more strategic.
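The Advisor's job, stripped to its essence, is to inspect other agents' outputs and raise flags rather than act itself. A toy sketch (the patterns and names are invented for illustration):

```python
# Hypothetical risk patterns an Advisor might watch for in generated code.
RISKY_PATTERNS = ("eval(", "global state", "TODO")

def advise(agent_output: str) -> list[str]:
    """Return warnings for the supervising loop; empty list means proceed."""
    return [p for p in RISKY_PATTERNS if p in agent_output]

draft = "def run(cmd): return eval(cmd)  # TODO handle errors"
warnings = advise(draft)
print(warnings)  # → ['eval(', 'TODO']
```

A production Advisor would itself be an LLM with an oversight persona, but the control flow is the same: the worker agents produce, the Advisor vetoes or nudges, and the long-term goal stays in view.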

Isn’t this just the classic advice-column cliché: "get a coach"?

Where do we go from here?

Maybe we are just discovering that the best way to utilize pattern-matching-based raw intelligence is through a simple universal workflow: Write it down, talk it through or try it out, and check your work. These steps may provide the minimal scaffolding required to sustain reasoning, forming the core mechanics of thought.

Lamport offers a corollary to Guindon:
“Mathematics is nature’s way of showing how sloppy our writing is.”

Mathematics closes the loopholes that prose leaves open. What writing does to thought, mathematics does to writing. This is likely where symbolic AI and formal methods become important. Writing a specification already sharpens thinking. Formalizing it sharpens it again and makes reasoning as precise and reliable as possible.
