As we saw earlier, asking a modern model for complex Brainf*ck code often sends it into an infinite loop, spewing the same characters over and over. The language's minimal instruction set means that even correct programs are highly repetitive, and that repetitiveness poses a particular challenge to the way LLMs work.
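To make that repetitiveness concrete, here is a minimal sketch of my own (not one of the earlier examples): a tiny Brainf*ck program that prints 'A' by building 8 × 8 = 64 in the second cell, adding one, and outputting it, plus a quick count of how many characters simply repeat their predecessor.

```python
# A tiny Brainf*ck program that prints 'A': the loop runs 8 times,
# adding 8 to the second cell (8 * 8 = 64), then one more '+' makes 65.
program = "++++++++[>++++++++<-]>+."

# Count how many characters are identical to the one before them.
repeats = sum(1 for prev, cur in zip(program, program[1:]) if prev == cur)
print(f"{repeats}/{len(program) - 1} adjacent characters are repeats")  # 14/23
```

Even this one-liner is more than half repeated characters, and longer programs are mostly long runs of the same few symbols.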
An LLM predicts each token from the tokens already in its context, and its own output becomes part of that context. Once some structure has been repeated more than a couple of times, the model can start treating token X as the most likely continuation after X itself. Each further repetition pushes that probability higher in a self-fulfilling prophecy, until the output degenerates into an infinite loop.
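As a rough illustration of that feedback loop, here is a toy sampler, emphatically not a real language model: the chance of repeating the previous character grows with how often that character already appears in the context. The eight-character vocabulary and the count / (count + 3) bias curve are invented purely for this sketch.

```python
import random

VOCAB = list("+-<>[].,")  # Brainf*ck's eight commands as a toy vocabulary

def next_token(context: str) -> str:
    """Toy sampler: the more often the previous character already appears
    in the context, the more likely it is to be repeated again."""
    if not context:
        return random.choice(VOCAB)
    last = context[-1]
    count = context.count(last)
    p_repeat = count / (count + 3)  # assumed bias curve, approaches 1
    return last if random.random() < p_repeat else random.choice(VOCAB)

random.seed(0)
output = ""
for _ in range(120):
    output += next_token(output)
print(output)  # long runs of a single character come to dominate the tail
```

The mechanism inside a real transformer is far more involved, but the qualitative effect is the same: every repetition makes the next repetition more likely.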