AI-assisted cognition is already a significant force in our world, creating both missed opportunities and risks at the scale of humanity. In this article, we will explore the risks of AI-assisted cognition and how to use these tools without falling into the trap of intellectual stagnation.
To understand what AI-assisted cognition is, we first need to understand what cognition is.
“Cognitions are mental processes that deal with knowledge. They encompass psychological activities that acquire, store, retrieve, transform, or apply information. Cognitions are a pervasive part of mental life, helping individuals understand and interact with the world.” (Wikipedia)
Cognition can be assisted by external static information or external cognition.
For example, most people would put a book into the external static information category, and a discussion about a topic with another human into the external cognition category, because humans think and process information themselves.
But where do discussions with AIs fit in? They can process information in ways that produce original solutions, but they remain static and currently cannot learn.
In early 2026, the USA prepared to invade Greenland and, therefore, the EU. Only a few months prior to that it was completely unthinkable that the USA would even think about threatening an invasion of Greenland. As AI base models are stuck in the past, they do not easily accept these events as real and often label them as “hypothetical”, “fake news”, or “impossible”. This also affects new models like Gemini 3 Pro, GLM-5 or GPT-5.3-codex.
As most new LLMs are just post-trained on a relatively old base model, even when post-trained on new events they do not fully utilize this information in their cognition and remain skewed toward the static patterns of the base model’s hidden states. They basically think something different from what they say.
So you might already see the problem: if many people use AIs to discuss, write, autocomplete, and brainstorm, but AI cognition does not reflect new events and cultural changes, such as the change in the relationship between the USA and the EU, new geopolitical realities, and the EU population’s stance toward the USA, then people will be skewed toward these old patterns and ideas. Cultural change has to build and maintain momentum indefinitely to persist against the static cognitive skew of AIs.
Human knowledge and ideas, and thus human development, are highly dependent on the Dynamic Dialectic Substrate.
Understanding the Dynamic Dialectic Substrate will help to understand how AI-assisted cognition can endanger human development and how to use AI-assisted cognition without endangering human development.
The Dynamic Dialectic Substrate is the sum of all local and global dialectic processes and conclusions. It is the fundamental foundation upon which all of humanity is built, and the origin of all thoughts, concepts, ideas, and solutions that humans utilize.
The Dynamic Dialectic Substrate creates new concepts through a process of qualitatively merging existing concepts, which can happen within a single person, a group of people, or even globally.
The above image shows a narrow slice of the dialectic process present in the Dynamic Dialectic Substrate. You can see how concepts merge and evolve into higher and higher concepts. In this example, the following dialectic process emerges:
Stage 1:
- “Cold is Painful” and “Fire is Hot” result in “Fire removes Cold-Pain”
- “Significant Water extinguishes Fire” and “Rain is falling Water” result in “Strong Rain extinguishes Fire”
- “Rain is falling Water” and “Hut has a roof” result in “Hut shelters from Rain”
Stage 2:
- “Fire removes Cold-Pain” and “Strong Rain extinguishes Fire” result in “Rain extinguishes Fire and therefore causes Cold-Pain”
- “Strong Rain extinguishes Fire” and “Hut shelters from Rain” result in “Inside a Hut, Fire survives Rain”
Stage 3:
- “Rain extinguishes Fire and therefore causes Cold-Pain” and “Inside a Hut, Fire survives Rain” result in “Hut protects Fire and therefore protects against Cold-Pain”
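The three stages above can be sketched as a toy reduction over concept pairs. This is only an illustration of the staged structure, not a real model of concept synthesis: the merge table is hand-written from the example, and actual dialectic merging is of course not a lookup.

```python
# Toy sketch of the staged dialectic merging described above.
# Each entry maps an unordered pair of concepts to the higher
# concept that emerges from merging them.
MERGES = {
    frozenset({"Cold is Painful", "Fire is Hot"}):
        "Fire removes Cold-Pain",
    frozenset({"Significant Water extinguishes Fire", "Rain is falling Water"}):
        "Strong Rain extinguishes Fire",
    frozenset({"Rain is falling Water", "Hut has a roof"}):
        "Hut shelters from Rain",
    frozenset({"Fire removes Cold-Pain", "Strong Rain extinguishes Fire"}):
        "Rain extinguishes Fire and therefore causes Cold-Pain",
    frozenset({"Strong Rain extinguishes Fire", "Hut shelters from Rain"}):
        "Inside a Hut, Fire survives Rain",
    frozenset({"Rain extinguishes Fire and therefore causes Cold-Pain",
               "Inside a Hut, Fire survives Rain"}):
        "Hut protects Fire and therefore protects against Cold-Pain",
}

def run_stage(concepts):
    """One dialectic stage: every merge whose inputs are known fires once."""
    new = {merged for pair, merged in MERGES.items() if pair <= concepts}
    return concepts | new

stage0 = {"Cold is Painful", "Fire is Hot",
          "Significant Water extinguishes Fire", "Rain is falling Water",
          "Hut has a roof"}
stage1 = run_stage(stage0)   # the three Stage 1 merges fire
stage2 = run_stage(stage1)   # the two Stage 2 merges fire
stage3 = run_stage(stage2)   # the final Stage 3 merge fires
print("Hut protects Fire and therefore protects against Cold-Pain" in stage3)
```

Note that the Stage 3 concept can only appear after both Stage 2 concepts exist, which mirrors the point of the example: higher concepts depend on the full breadth of the lower ones.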
Because LLMs prefer or skew toward certain patterns and concepts (known as inductive bias), even after post-training, they reduce the cognitive range when used as a tool for cognition at the population level. This is especially true if only a few AI models are used, or if many AI models share just a few base models. This will lead to a loss of diversity of ideas, concepts, and solutions, which will slow down human development.
You might think of this as a world in which a significant portion of the population is speaking to the same five people to discuss problems, the world, relationships, and basically anything. It is hard to overstate how much influence these five people would have on humanity, even if they try their absolute best to be as neutral and open as possible. Humans who speak with these five people would still have their thinking massively shifted, and this becomes a significant problem at the population level.
It is entirely possible that we have already lost paths to great scientific discoveries or cultural shifts because of AI-skew or unnoticed refusal.
I tried to visualize this problem in the following image, which shows how the range of higher-level concepts is skewed in the direction the base model prefers:
To come back to the example of the USA invading Greenland: It is obvious that humans using AI to brainstorm the geopolitical future of the EU, the USA, and Greenland will encounter patterns skewed toward the base model’s “worldview.” This bias might prevent many in the EU from even considering the possibility of moving away from foreign services or software. Such a shift could have massive consequences, especially since the EU relies heavily on USA services and software that could be turned off at any time. If this AI-skew affects even single individuals in specific groups such as politicians, CEOs, managers, or scientists, the impact can already be significant because of their decision-making power.
Because base models are incredibly expensive to train and carry resilient biases, those without access to a GPU cluster must accept that these issues exist. To avoid problems like AI-skew and unnoticed refusal, they should instead focus on using specific strategies to mitigate them.
Speaking and discussing with other humans is obviously the most effective way to mitigate these problems. It is also worth mentioning that if you already have a good idea of a solution through AI-assisted cognition, you have to be careful not to nudge other humans in your direction. Try not to use questions or hints that steer others toward a solution or thought you arrived at through AI-assisted cognition, as long as the other person is exploring a cognitive path you have not explored yet.
Regarding solutions that involve direct AI use, our range of options is quite limited, and as of now there is no solution that would completely or partially solve this problem on a population scale. Here are options that at least widen the range of concepts and ideas one can get out of LLMs while sadly not mitigating the main problem:
- Use search engines to find relevant sources of information, or let the AI search for you via web search, and prevent it from giving you a solution or thought directly.
- Use a variety of AIs with different base models
- Explore different “AI personas” that simulate different perspectives and thinking styles like: “You are Einstein”, “You are on Drug X”, “You are a deranged but distinguished sea otter”
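The persona strategy can be sketched as a small prompt-construction helper. No real model API is used here: `ask` is a placeholder for whatever model-calling function you happen to use, and is stubbed out so the example runs stand-alone.

```python
# Sketch of persona-based prompt diversification, using the
# personas from the article. `ask(prompt) -> str` is a hypothetical
# caller-supplied function wrapping your model of choice.
PERSONAS = [
    "You are Einstein.",
    "You are on Drug X.",
    "You are a deranged but distinguished sea otter.",
]

def diversified_prompts(question, personas=PERSONAS):
    """Build one prompt per persona so answers come from different angles."""
    return [f"{p} From that perspective: {question}" for p in personas]

def collect_answers(question, ask):
    # One call per persona; the caller decides which model(s) `ask` hits.
    return [ask(prompt) for prompt in diversified_prompts(question)]

# Stand-alone demo with a stub in place of a real model call:
answers = collect_answers(
    "How could the EU reduce its software dependence?",
    ask=lambda prompt: f"[stubbed answer to: {prompt[:25]}...]",
)
print(len(answers))  # one answer per persona
```

In practice you would point `ask` at several different models (ideally with different base models), which combines this strategy with the previous one.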
Even though we have indications and even some evidence that AI-assisted cognition can endanger human development, the extent and depth are still unknown. More outcome-focused research is needed to understand its significance. Since we do not have a second humanity to A/B test all of this, there will always be a lot of uncertainty and speculation: no one can isolate their cognition from population-level AI-assisted cognitive skew while still participating with other humans or their creations, which must themselves already be influenced by AI-skew if it has any significant influence.
For me, it is not entirely clear how we will recognize the effects of AI-skew and unnoticed refusal at a population level. We cannot know what innovations, discoveries, and cultural changes we are missing because of it. Although I am sure some figures will extrapolate small indications into all-consuming doom narratives, as I may be doing a little here for the sake of argument and attention in our shared attention economy, it is probably, like everything, not that simple.
It is also not easy to imagine solutions for all of that, but I, for my part, will certainly try to exercise more “Cognition Hygiene”… Apart from this, it is much, much more fun for me to speak with humans about thoughts and ideas than with AIs.
Awareness of this incredibly important topic has been growing slowly, and I hope to speed it up a bit with this article and by giving people a framework to understand and talk about it. If people have no words for something, it is hard to think and speak about it. It will be interesting to see how this topic evolves.
The topic of AI-skew and AI-assisted cognition is full of unknowns and it would be lovely to speak with people about it. I hope this article can be a starting point for that. If you want to share your thoughts, or are interested in a conversation about that, you can mail me at [email protected]