The skill of the future is not 'AI', but 'Focus'

Original link: https://www.carette.xyz/posts/focus_will_be_the_skill_of_the_future/

Large Language Models (LLMs) offer engineers enormous potential to automate tasks and accelerate development. However, over-reliance on LLMs can seriously erode an engineer's fundamental problem-solving skills. While LLMs excel at supplying solutions drawn from existing data, their limitations, including bias and the possibility of hallucination, mean their output demands careful review. The danger is that engineers blindly accept LLM-generated solutions while ignoring the "why" behind them. This shift from understanding first principles to merely seeking answers can erode the foundational skills needed to solve genuinely novel problems. Unlike search engines, which offer a balance between exploration and exploitation, LLMs encourage immediate exploitation, which can hinder innovation and long-term problem-solving ability. The pressure to deliver solutions quickly amplifies this trend, threatening engineers' focus on and mastery of algorithms. We must prioritize understanding and critical thinking, lest complex problem-solving in the future rest solely on AI at the expense of human ingenuity.


Original text

If you frequent Hacker News regularly, you have likely noticed the buzz around engineers using AI (specifically Large Language Models, or LLMs) to tackle Computer Science problems.

I want to be clear: I’m not against LLMs.
LLMs are incredibly powerful tools and can be a huge boon to engineers. They can automate repetitive tasks, generate code snippets, help with brainstorming, assist in debugging, … This frees up engineers’ time and mental energy, which can be channeled into more complex, creative problem-solving.
But, like any tool, LLMs should be used wisely.
LLMs can hallucinate, exhibit inconsistencies (especially with self-reflection models), and harbor biases. These limitations mean that LLM outputs require careful review before they can be trusted.

A key concern with LLMs lies in their training data.
That data can be biased, and sometimes contradictory, but it contains solutions to known problems.
If an engineer wants to “reinvent the wheel,” an LLM might offer a solution (good or bad, depending on the prompt). But when faced with truly novel problems, LLMs often provide unreliable responses, placing the burden of error detection squarely on the engineer.

This reliance on readily available solutions, particularly for familiar problems, creates a real risk: engineers may inadvertently atrophy their own problem-solving skills, hindering their ability to tackle truly novel challenges.
The solution lies in balance, and in a focus on the “why”, not just the “what”.
Engineers should strive to understand the reasoning behind LLM-generated solutions, not simply accept them blindly. Blind acceptance shifts the focus from solving problems to merely obtaining a solution. Crucially, solving complex problems often depends on mastering simpler, foundational skills, which an engineer can quickly lose without practice.

This idea summarizes why I disagree with those who equate the LLM revolution to the rise of search engines, like Google in the 90s. Search engines offer a genuine choice between Exploration (crawling through the list and pages of results) and Exploitation (clicking on the top result).
LLMs, however, do not give this choice, and tend to encourage immediate exploitation instead. Users may explore if the first solution does not work, but the first choice is always to exploit.
Exploitation and exploration are complementary. Remove the exploration and you will introduce more and more instability into the exploitation process.
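This exploration/exploitation framing comes from the multi-armed bandit literature, and a small simulation makes the instability concrete. The following epsilon-greedy sketch in Python is my own illustration, not from the original post: with epsilon = 0 (pure exploitation, the LLM-style behavior) the agent can lock onto a mediocre option after one lucky result, while even a little exploration keeps its estimates honest.

```python
import random

def epsilon_greedy(arms, pulls=10_000, epsilon=0.1):
    """Estimate each arm's payoff while balancing exploration and exploitation.

    arms    -- list of true (hidden) success probabilities, one per arm
    epsilon -- fraction of pulls spent exploring at random; epsilon=0
               means pure exploitation of the current best estimate
    """
    counts = [0] * len(arms)    # pulls per arm
    values = [0.0] * len(arms)  # running mean reward per arm

    for _ in range(pulls):
        if random.random() < epsilon:
            arm = random.randrange(len(arms))  # explore: pick a random arm
        else:
            arm = max(range(len(arms)), key=lambda i: values[i])  # exploit: best so far

        reward = 1.0 if random.random() < arms[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

    return values, counts

# With epsilon=0, one lucky pull on the 0.3 arm can make the agent exploit it
# forever and never discover the 0.7 arm; epsilon=0.1 reliably finds the better one.
values, counts = epsilon_greedy([0.3, 0.7], epsilon=0.1)
print(values, counts)
```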

Computer Science emerged because humans needed tools to solve problems faster and wanted to focus on the real problems, not repetitive tasks. Humans built machines to accelerate problem-solving, but engineers remained the masters of the algorithms.
I fear we’re losing our grip on this mastery. Not because engineers are becoming less and less intelligent, but because the pressure to deliver solutions quickly is paramount.
In embracing these “fast-paced solutions”, we risk losing a fundamental skill: focus. Because focus, like any skill, requires practice.

This is a worrying trend. If engineers become less adept at solving complex problems, what does the future hold? Will our ability to tackle complex challenges rest solely on self-reflecting AIs, rather than human ingenuity?
