(comments)
Original link: https://news.ycombinator.com/item?id=39312896
This thread raises several points about recent advances in deep learning and their potential impact on society, particularly around language-processing applications built on Generative Pre-trained Transformer 3 (GPT-3) and their relationship to AI safety policy. First, some commenters argue that while generative pre-trained Transformers improve natural-language generation, limitations remain, including inefficient handling of punctuation and a lack of expertise in certain domains. Others caution against trusting AI, pointing to cases where AI safety measures limit or restrict its use and raising questions about how oversight directives grounded in AI safety principles are applied. Debate also continues over who will control these technologies in the future, with some asking whether, amid growing concern about AI-related security vulnerabilities, open-source models will ultimately dominate or gradually fade away. Finally, there is broader skepticism about where these developments lead: commenters doubt AI's potential to revolutionize fields previously considered outside its domain, despite early signs of breakthroughs. These concerns suggest the need for a nuanced approach that balances the opportunities of GPT-driven language-modeling innovation against key issues of data protection, privacy, and cybersecurity.
As time moves on, the good blog posts, tutorials, books, etc. where you currently learn the deeper knowledge, such as memory management, will stop being written and will slowly become very outdated as information is reorganized.
I've already seen this happen in my career. When I first started, the way you learned some new technology was to buy a book on it. Hardly anybody does this anymore, and as a result there aren't many books out there. People have turned instead to tutorials, videos, blog posts, and Stack Overflow. The quick iteration of knowledge through these faster delivery mechanisms also makes books more outdated by the time they're written, which further makes them less economical.
As AI becomes the primary way to learn (and I definitely believe that it will), the tutorials, videos, blog posts, and even Stack Overflow are going to taper off just like books did. I honestly expect AI to become the only way to learn about things in the future (things that haven't yet been invented/created, and will never get the blog post because an AI will just read the code and tell you about it).
It could be an amazing future, but not unless Google and others change their approach. I think we may need to go through a new Enlightenment period where we discover that we shouldn't be afraid of knowledge and unorthodox (and even heretical) opinions and theories. Hopefully it won't take 1,500 years next time.