I’ve been surprised by one aspect of using large language models more than any other.
They often put into words things I have long understood, but could not write down clearly. When that happens, it feels less like learning something new and more like recognition. A kind of “yes, that’s it” moment.
I have not seen this effect discussed much, but I think it matters. I also think it has improved how I think.
Much of what we know is tacit
As programmers, we build up a lot of understanding that never quite becomes explicit.
You know when a design is wrong before you can say why. You sense a bug before you can reproduce it. You recognize a bad abstraction instantly, even if it takes an hour to explain the problem to someone else.
This is not a failure. It is how expertise works. The brain compresses experience into patterns that are efficient for action, not for speech. Those patterns are real, but they are not stored in sentences.
The problem is that reflection, planning, and teaching all require language. If you cannot express an idea, you cannot easily inspect it or improve it.
LLMs are good at the opposite problem
Large language models are built for exactly this: turning vague structure into words.
When you ask a good question and the response resonates, the model is not inventing insight. It is mapping latent structure to language in a way that happens to align with your own internal model.
That alignment is what produces the sense of recognition. I already had the shape of the idea. The model supplied a clean verbal form.
Putting things into words changes the thought
Once an idea is written down, it becomes easier to work with.
Vague intuitions turn into named distinctions. Implicit assumptions become visible. At that point you can test them, negate them, or refine them.
This is not new. Writing has always done this for me. What is different is the speed. I can probe half-formed thoughts, discard bad formulations, and try again without much friction. That encourages a kind of thinking I might otherwise have skipped.
The feedback loop matters
After you see a good articulation of an idea, you start thinking with that style of language.
Over time, I’ve noticed that I do this even without an LLM to hand: can I phrase, in precise language, what I am thinking, feeling, and believing right now, and why?
In that sense, the model is not improving my thinking directly. It is improving the interface between my thinking and language. Since reasoning depends heavily on what you can represent explicitly, that improvement can feel like a real increase in clarity.
The more I do this, the better I get at noticing what I actually think.