I came across an article called “How AI-assisted coding will change software engineering: hard truths” that made me pause and think about how I use Large Language Models (LLMs). The section on what the author calls the “knowledge paradox” particularly stuck with me:
- Seniors use AI to accelerate what they already know how to do
- Juniors try to use AI to learn what to do
- The results differ dramatically
I’ve watched senior engineers use AI to:
- Rapidly prototype ideas they already understand
- Generate basic implementations they can then refine
- Explore alternative approaches to known problems
- Automate routine coding tasks
Meanwhile, juniors often:
- Accept incorrect or outdated solutions
- Miss critical security and performance considerations
- Struggle to debug AI-generated code
- Build fragile systems they don’t fully understand
What resonated strongly was the realization that I had been trying to use LLMs like a senior developer without a senior developer’s experience or knowledge. So what were my pitfalls when I mimicked a senior developer’s approach?
Acquiring Information
The beauty of LLMs is that you can ask the most granular or simple questions and receive an instant reply. When I tried to understand a new concept and needed a visual, I asked. When I used a new framework or language, I asked. When I didn’t understand a function or a piece of syntax, I asked. But because all of this was completely new to me, I hadn’t yet developed the skepticism to judge whether the responses were accurate. And because I knew nothing about these topics yet, my prompts stayed consistently simple:
- “Explain this syntax to me.”
- “Explain this code to me.”
- “Why is this included?”
- “How does this work?”
- “Provide a diagram of this concept.”
- “Explain this term to me.”
Even after reading an answer, I would reuse these same prompts on the answer itself, because it kept introducing new concepts and terms.
When you’re eager to level up quickly, it’s easy to rely on these tools to accelerate learning. But the output can only be as good as the prompt you provide. Without a critical eye to evaluate the accuracy of what the LLM tells you, imitating this approach is risky.
Debugging
Debugging is especially tricky when you don’t have a baseline understanding of the framework or language you are using. When I was eager or in a rush, I prompted in three ways (a made-up illustration follows the list):
- Explain what was failing and paste the error logs.
- Include steps to reproduce the error, or steps I had already taken to try to fix it.
- Add more context.
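As a concrete (and entirely made-up) illustration, this is the kind of snippet and traceback I would paste wholesale into a chat with little more than “fix this”:

```python
# A made-up example: a tiny script whose error I would have pasted straight
# into an LLM with "fix this", instead of reading the traceback first.

users = [{"name": "Ada"}, {"name": "Linus", "email": "linus@example.com"}]

try:
    # Naive version: assumes every record has an "email" key.
    emails = [user["email"] for user in users]
except KeyError as err:
    # This is the part I would copy-paste: KeyError: 'email'
    print(f"KeyError: {err}")

# The traceback already names the missing key; a default via .get() handles it.
emails = [user.get("email", "") for user in users]
print(emails)  # ['', 'linus@example.com']
```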
When I was lucky, the LLM provided a solution and fixed the immediate problem. But did I go through the struggle that builds a deeper understanding of the framework or language? Did I get better at reading errors? Did I consider potential side effects, or re-evaluate whether the output was a robust solution? Some might argue yes, but for me this is what I sacrificed for speed, and it led to stagnation in my debugging skills and my learning overall.
How to counteract the pitfalls?
Knowing this doesn’t stop me from using LLMs. It just motivates me to find a tailored approach, with guardrails, that keeps me from falling into these pitfalls. I’m curious what other junior developers are doing. Here is what I have been trying.
Information
Documentation is the first place I go. Instead of prompting an LLM to search the documentation and hand me a link, I go directly to the docs and use the search bar. If I don’t understand a certain section, I then use the LLM to help me make sense of that section. That way my base knowledge starts from a “source of truth.”
Debugging
Instead of using an LLM to get an immediate fix, I use it as a rubber duck: to test my suspicions and narrow down the source of the problem. If a senior developer has time, I consult them as well to verify what I think the issue is. Then I either try to come up with a solution myself or give the LLM a specific, descriptive prompt about the exact problem (a sketch of what that can look like follows). Once I have a candidate solution, I do some quick googling to see whether anyone else has tackled the same problem, so that I end up with multiple options. Exposure to multiple options becomes an opportunity to think about side effects, weigh trade-offs, and decide which solution is the most robust for my particular problem.
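Here is a minimal sketch of how I structure that kind of descriptive prompt. Everything in it (the helper name, the endpoint, the error) is hypothetical; the point is the shape of the prompt, not the tooling.

```python
# A minimal, hypothetical sketch of a "specific and descriptive" debugging
# prompt, assembled only after I have narrowed the problem down myself.
# The helper and the example values are invented for illustration.

def build_debug_prompt(symptom: str, expected: str, narrowed_to: str,
                       error_log: str, attempts: list[str]) -> str:
    """Build a focused prompt from facts I have already verified."""
    attempted = "\n".join(f"- {a}" for a in attempts)
    return (
        f"Symptom: {symptom}\n"
        f"Expected behaviour: {expected}\n"
        f"Narrowed down to: {narrowed_to}\n"
        f"Relevant error log:\n{error_log}\n"
        f"What I already tried:\n{attempted}\n"
        "Suggest likely causes and the trade-offs of each possible fix."
    )


if __name__ == "__main__":
    print(build_debug_prompt(
        symptom="GET /orders/42 intermittently returns 500",
        expected="a 200 response with the order payload",
        narrowed_to="the caching layer; the database query itself succeeds",
        error_log="AttributeError: 'NoneType' object has no attribute 'items' (cache.py, line 88)",
        attempts=["reproduced it by clearing the cache mid-request",
                  "confirmed the order row exists in the database"],
    ))
```

Writing the prompt this way forces me to do the narrowing down myself first, which is exactly the step I used to skip.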
Final Note
My biggest mistake was using LLMs to appear productive by copying a senior developer’s approach. LLMs can help accelerate learning, but they cannot replace the problem-solving that is the essence of our job. So I remind myself that speed is not my top priority. Senior developers can treat LLMs like junior developers and guide them; junior developers need to use LLMs differently, to build the intuition, knowledge, and experience to judge which LLM output is worth trusting or using.
Pause and think. That’s our greatest asset.