The secret to agentic intelligence seems to lie in three very human habits: writing things down, talking to yourself, and pretending to be someone else. They are almost too simple.
The Unreasonable Effectiveness of Writing
One of the most profound pieces of advice I ever read as a PhD student came from Prof. Manuel Blum, a Turing Award winner. In his essay “Advice to a Beginning Graduate Student”, he wrote: “Without writing, you are reduced to a finite automaton. With writing you have the extraordinary power of a Turing machine.”

If you try to hold a complex argument entirely in your head, you will fail. Your working memory is a “finite automaton”: its capacity is strictly bounded. But if you write things down, you offload that memory to the paper. You can then reference it, critique it, and build on it. The paper becomes your external hard drive.
AI Agents are now being built on this exact principle. An LLM has a limited context window; its effective attention span is short. If it tries to solve a 50-step coding problem in one breath, it will crash and burn. To fix this, we give agents scratchpads. We force them to write down a plan before they execute code. We give them memory buffers to store facts for later. We are essentially asking AI to take notes. By externalizing their internal state onto a digital piece of paper, agents evolve from simple pattern-matchers into robust thinkers.
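To make this concrete, here is a minimal sketch of a scratchpad agent in Python. Everything here is illustrative: `call_llm` is a hypothetical stand-in for whatever model API you use, and the scratchpad is nothing more than a list of notes fed back into each prompt.

```python
# Minimal scratchpad agent. `call_llm` is a hypothetical placeholder
# for any chat-completion API; the interesting part is the scratchpad.

def call_llm(prompt: str) -> str:
    """Stand-in: route this to your model of choice."""
    raise NotImplementedError

def solve_with_scratchpad(task: str, max_steps: int = 10) -> str:
    scratchpad: list[str] = []  # the agent's external memory
    for _ in range(max_steps):
        prompt = (
            f"Task: {task}\n"
            "Notes so far:\n" + "\n".join(scratchpad) + "\n"
            "Write the next note, or reply 'FINAL: <answer>' when done."
        )
        note = call_llm(prompt)
        if note.startswith("FINAL:"):
            return note.removeprefix("FINAL:").strip()
        scratchpad.append(note)  # offload state onto digital paper
    return "\n".join(scratchpad)  # best effort if no final answer
```

Nothing clever happens inside the model here; the leverage comes entirely from the notes surviving across calls.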
Thinking is Just Talking to Yourself in a Loop
For a long time, we treated LLMs as simple input-output machines: we asked a question, and the model produced an answer. Often, the answers were unsatisfying: hallucinated or superficial. DeepSeek changed this by pausing the model before it answered, prompting it to “think” first. But what does thinking mean for a computer? It means generating text that remains hidden from the user: an internal monologue. In that sense, it mirrors human thought.

“Writing is nature’s way of letting you know how sloppy your thinking is.” — Guindon
“If you think without writing, you only think you’re thinking.” — Leslie Lamport
This brings us back to the unreasonable effectiveness of writing, but this time performed in a loop. Thinking isn’t instantaneous; it is a process. Prototyping for the win! The Agent follows a loop: Act/Write → Reason → Repeat. It talks to itself, breaking complex problems into manageable parts. It asks, “Wait, should I check this assumption?” or “That doesn’t look right; let me try again.”
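As a sketch (again with `call_llm` as a hypothetical stand-in), the loop might look like this: draft, critique your own draft, revise, and repeat until the critique comes back clean.

```python
# Thinking as talking to yourself in a loop: draft -> critique ->
# revise. `call_llm` is the same hypothetical stand-in as before.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a real model call

def think_in_a_loop(question: str, rounds: int = 3) -> str:
    answer = call_llm(f"Question: {question}\nGive a first attempt.")
    for _ in range(rounds):
        critique = call_llm(
            f"Question: {question}\nDraft: {answer}\n"
            "Check every assumption. Reply 'LOOKS GOOD' if none fail."
        )
        if critique.strip() == "LOOKS GOOD":
            break  # the internal monologue is satisfied
        answer = call_llm(
            f"Question: {question}\nDraft: {answer}\n"
            f"Critique: {critique}\nWrite a revised answer."
        )
    return answer
```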
Role Playing: The Alter Ego Effect
A few years ago, I read The Alter Ego Effect: The Power of Secret Identities to Transform Your Life. Its core idea is that adopting a persona can unlock hidden capabilities. By stepping into a specific character, you bypass your own inhibitions and trigger a defined set of behaviors. Beyoncé had “Sasha Fierce” on stage, and most athletes have their “game faces”. The book made a convincing case, but this sounded so cheesy that I didn’t even blog about it at the time.

I guess that stuff actually works! In the world of AI Agents, they call this "Role Prompting" or "Multi-Agent Systems".
If you ask a single AI to "write code," it might do a mediocre job. But if you assign one AI the role of "The Architect" to plan the code, another the role of "The Engineer" to write it, and a third the role of "The Critic" to review it, the result is dramatically better.
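A sketch of what that division of labor can look like, with the same hypothetical `call_llm` stand-in; the role names and prompts are mine, not a fixed API:

```python
# Role prompting: one underlying model, three personas. All names
# and prompts are illustrative.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a real model call

ROLES = {
    "architect": "You are The Architect. Produce a numbered plan only.",
    "engineer":  "You are The Engineer. Implement the plan faithfully.",
    "critic":    "You are The Critic. Hunt for bugs and gaps. Be harsh.",
}

def build(task: str) -> str:
    plan   = call_llm(f"{ROLES['architect']}\nTask: {task}")
    code   = call_llm(f"{ROLES['engineer']}\nPlan:\n{plan}")
    review = call_llm(f"{ROLES['critic']}\nCode:\n{code}")
    # hand the critique back to the engineer for one revision pass
    return call_llm(
        f"{ROLES['engineer']}\nCode:\n{code}\nReview:\n{review}\nRevise."
    )
```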
These personas are not just theatrics; they are inductive biases. They constrain the search space. Just as adopting the persona of a "strict editor" helps a writer cut fluff, assigning an AI the role of a "debugger" forces it to look for errors rather than just generating text.
Advisor models act as meta-thinkers. They don’t execute tasks directly but monitor other agents, flag risks, and guide decisions. In coding, an Advisor might warn the Architect of a risky design or nudge the Engineer away from error-prone code. By providing this high-level oversight, Advisors let agents focus on immediate tasks while keeping long-term goals in view, making multi-agent systems more strategic.
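One way to wire an Advisor in, sketched with the same hypothetical stand-in: it never executes anything itself; it only approves or warns before an action is taken.

```python
# An Advisor as meta-thinker: it inspects proposed actions against
# the long-term goal and can veto them. Illustrative, not a framework.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a real model call

def guarded_step(goal: str, task: str) -> str:
    action = call_llm(f"You are The Engineer. Propose the next action for: {task}")
    verdict = call_llm(
        f"You are The Advisor. Long-term goal: {goal}\n"
        f"Proposed action: {action}\n"
        "Reply 'APPROVE', or 'WARN: <risk and a safer alternative>'."
    )
    if verdict.startswith("WARN:"):
        # the Engineer revises instead of executing a risky step
        action = call_llm(
            f"You are The Engineer. Revise your action.\n"
            f"Advisor said: {verdict}\nOriginal action: {action}"
        )
    return action
```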
Isn’t this just the classic advice-column cliché: "get a coach"?
Where do we go from here?
Maybe we are just discovering that the best way to utilize pattern-matching-based raw intelligence is through a simple universal workflow: write it down, talk it through or try it out, and check your work. These steps may provide the minimal scaffolding required to sustain reasoning, forming the core mechanics of thought.

Lamport offers a corollary to Guindon:
“Mathematics is nature’s way of letting you know how sloppy your writing is.”
Mathematics closes the loopholes that prose leaves open. What writing does to thought, mathematics does to writing. This is likely where symbolic AI and formal methods become important. Writing a specification already sharpens thinking. Formalizing it sharpens it again and makes reasoning as precise and reliable as possible.
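An illustrative example (mine, not Lamport’s): the prose specification “the output list is sorted” leaves a loophole, since always returning an empty list satisfies it. The formal version closes it:

```latex
% "sort returns a sorted permutation of its input"
\forall i, j:\ 0 \le i < j < n \implies a_i \le a_j
\;\land\; \mathrm{multiset}(a_0, \dots, a_{n-1}) = \mathrm{multiset}(\mathrm{input})
```

The first conjunct says “sorted”; the second rules out the degenerate answers the prose silently permitted.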