AI (2014)

Original link: https://blog.samaltman.com/ai

Despite many failed attempts in the past, artificial intelligence (AI), and *artificial general intelligence* in particular, remains a technology trend worth watching. Current AI excels at specific tasks such as playing chess or flying airplanes, but achieving human-level thinking, and more critically *wanting* to think, is the real challenge. A few secretive companies are making progress, encouraged by ideas such as Andrew Ng's theory that a single learning algorithm underlies all brain function. However, replicating 100 billion neurons and their emergent behavior is like reverse engineering a processor armed only with knowledge of how a transistor works. The core question is not merely *whether* we can build intelligent machines, but whether we can give them consciousness, creativity, and intrinsic motivation. The author suggests that one appealing future is not replacing human thought but a division of labor: computers get good at *doing*, humans get good at *thinking*.

This Hacker News discussion centers on Sam Altman's 2014 essay on artificial intelligence and sparked debate about the nature of intelligence, the future of work, and the potential downsides of increasingly powerful large language models (LLMs). A central theme is whether LLMs truly "think" or are merely sophisticated search engines over large data spaces. Some argue that humans and AI may operate on similar principles of search and recombination, while others worry about a future in which humans rely on AI to "do," leaving "thinking" as our only domain, possibly leading to intellectual decline. Commenters raised concerns about the impact on jobs, especially for people doing routine work, and about widening income inequality. Many were skeptical of Altman's motives, questioning whether his stated goals match the actual consequences of AI development. A recurring sentiment is that AI is making people "dumber" by eroding critical-thinking skills. Others stressed the importance of creativity, both in the workplace and in everyday life, and worried that skills which are not easily automated are being devalued. The discussion also touched on the ethical implications of projects like Worldcoin, highlighting concerns about data privacy and manipulation.

Original text

Yesterday at lunch a friend asked me what tech trend he should pay attention to but was probably ignoring.

Without thinking much I said “artificial intelligence”, but having thought about that a bit more, I think it’s probably right.

 

To be clear, AI (under the common scientific definition) likely won’t work. You can say that about any new technology, and it’s a generally correct statement. But I think most people are far too pessimistic about its chances - AI has not worked for so long that it’s acquired a bad reputation. CS professors mention it with a smirk. Neural networks failed the first time around, the logic goes, and so they won’t work this time either.

 

But artificial general intelligence might work, and if it does, it will be the biggest development in technology ever.

I’d argue we’ve gotten closer in lots of specific domains - for example, computers are now better than humans at lots of impressive things like playing chess and flying airplanes. But rather than call these examples of AIs, we just say that they weren’t really that hard in the first place.  And to be fair, none of these really feel anything like a computer that can think like a human.

 

There are a number of private (or recently acquired) companies, plus some large public ones, that are making impressive progress towards artificial general intelligence, but the good ones are very secretive about it.

There are certainly some reasons to be optimistic. Andrew Ng, who worked or works on Google’s AI, has said that he believes learning comes from a single algorithm - the part of your brain that processes input from your ears is also capable of learning to process input from your eyes. If we can just figure out this one general-purpose algorithm, programs may be able to learn general-purpose things.

 

There have been promising early results published from this sort of work, but because the brain is such a complex system so dependent on emergent behavior it’s difficult to say how close to the goal we really are. We understand how individual neurons work pretty well, and it’s possible that’s all we need to know to model how intelligence works. But the emergent behavior of 100 billion of them working together on the same principles gets extraordinarily complex, and difficult to model in software. Or, as Nick Sivo says, "it's like reverse engineering the latest Intel processor with only the basic knowledge of how a transistor works."  It’s also possible that there’s some other phenomenon responsible for intelligence, and the people working on this are on the wrong track.

The biggest question for me is not about artificial intelligence, but instead about artificial consciousness, or creativity, or desire, or whatever you want to call it. I am quite confident that we’ll be able to make computer programs that perform specific complex tasks very well. But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?

 

It’s possible--probable, even--that this sort of creativity will be an emergent property of learning in some non-intuitive way. Something happened in the course of evolution to make the human brain different from the reptile brain, which is closer to a computer that plays pong. (I originally was going to say a computer that plays chess, but computers play chess with no intuition or instinct--they just search a gigantic solution space very quickly.)

And maybe we don't want to build machines that are conscious in this sense.  The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking.  If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.
