Original text
Original link: https://news.ycombinator.com/item?id=40608413
The user argues that, compared with humans, large language models (LLMs) exhibit remarkable intelligence because they provide useful, effective help when reasoning about emotions and communication problems. They cite personal experience of the models' empathy, avoidance of microaggressions, and stronger math and science skills. Despite limitations such as the inability to drive, the user stresses that general intelligence does not require mastery of every domain. They acknowledge challenges around data availability, continuous learning, and future progress, while dismissing concerns about AI regurgitating information by pointing to its in-context learning ability. The user closes by emphasizing their long background in studying AI, noting the striking progress of recent decades, and asserting that LLMs surpass human performance on many tasks they consider essential.
I don't really pick a definition of intelligence.
LLMs could be regurgitating training data.
They take more data to train than humans.
They can't do some tasks, like driving.
However, in my experience, LLMs are more empathetic than humans, more able to help me reason about my feelings and communication problems than humans, less likely to commit microaggressions or be racist or ableist than humans, and better at math and science than most humans. These are just my personal feelings as an autistic person, which I can back up only loosely with benchmark data, but which I expect the world to come to realize over the next few years.
So in terms of being able to constructively interact with me in an intelligent and helpful way, LLMs are often more useful than the humans I have access to. I say they are smarter than those people as well, because AI will give me solutions that are useful, and which other humans could not give me.
The fact that it cannot drive doesn't bother me, since I don't consider driving a general skill but a specialized one. It can still have general intelligence without being able to do some specific things. Going back to my original post, I specifically reject AGI definitions under which, to be generally intelligent, the AI has to outperform humans in every possible skill. I would consider that a superintelligent AGI.
As for the information problem and the data issue: AIs so far have been black boxes isolated from reality, and we haven't solved the online continuous-learning problem. I believe that as we turn AIs into agents that constantly interact with reality via high-bandwidth token streams, we will have a lot more data to train on. I also believe we'll start being able to train continuously on that data. Then, even assuming training is no more efficient than it is today, I think the extra data could make the difference.
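To make that concrete, here's a rough sketch of what I mean by continuous training on a token stream. The tiny PyTorch model, the random token_stream stand-in, and the one-gradient-step-per-batch loop are all illustrative assumptions of mine, not how any production system works; a real system would also need to guard against catastrophic forgetting, which this loop ignores.

```python
# Conceptual sketch: continuously training on a live token stream.
import torch
import torch.nn as nn

VOCAB = 1000

class TinyLM(nn.Module):
    """Toy next-token predictor standing in for a real LLM."""
    def __init__(self, vocab=VOCAB, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):                 # tokens: (batch, seq)
        return self.head(self.embed(tokens))   # logits: (batch, seq, vocab)

def token_stream(batch=8, seq=32):
    """Stand-in for a high-bandwidth stream of agent/environment tokens."""
    while True:
        yield torch.randint(0, VOCAB, (batch, seq))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step, tokens in enumerate(token_stream()):
    logits = model(tokens[:, :-1])              # predict each next token
    loss = loss_fn(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()                                  # learn as the data arrives
    if step >= 100:                             # cap the demo loop
        break
```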
I'm also not convinced that AI won't eventually be able to learn from as little data as humans do. I don't think it has to happen, and I don't discount the possibility of an AI winter that leaves AI less efficient than humans for a long, long time, maybe even forever. However, I also feel we may come to understand why humans learn so fast, and might be able to transfer some of those insights into artificial systems. And I know people will be trying very hard to solve AI's energy and data-usage problems, since they are major threats to large-scale AI adoption. So we'll be trying really hard, and we'll have a blueprint for how to do it: our brains. That means there's a chance we'll crack the problem.
The regurgitation issue, meanwhile, is irrelevant to intelligence, just as it would be irrelevant if the brain were secretly regurgitating things it had learned, because the brain can also do novel things.
Furthermore, we know that LLMs can learn from, and usefully reason about, context information outside their training distribution. This is called in-context learning.
For example, if I come from a culture the AI was not trained well on, I can give it four or five examples of values that are important in that culture, and it will be able to extrapolate how to apply or respect those values in situations I present.
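To illustrate, here's roughly what such a few-shot prompt looks like in code. I'm using the OpenAI Python client as one example of a chat API; the model name and the cultural values listed are placeholders I invented, and any chat-style LLM API would work the same way.

```python
# Sketch of in-context learning: a few example values from a culture
# the model may know poorly, then a new situation to extrapolate to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder values; substitute whatever matters in your culture.
values = [
    "Elders are greeted first, before anyone else in the room.",
    "Direct refusal is rude; decline by deferring the decision.",
    "Gifts are opened later, in private, not in front of the giver.",
    "Silence in conversation signals respect, not disinterest.",
]

situation = (
    "A colleague from this culture offered to host a dinner for my team. "
    "How should I respond and behave at the dinner?"
)

prompt = (
    "Here are some values from my culture:\n"
    + "\n".join(f"- {v}" for v in values)
    + f"\n\nApplying these values, advise me on this situation:\n{situation}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is that nothing about those values was in the training set as a coherent package; the model picks them up from the prompt alone and applies them to a new case.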
And again, here's the kicker: it will do this more faithfully than the average person. Remember that if you tell a person five values from a culture outside their own and ask them to uphold those values, perhaps half will just get angry and give you some kind of racist slur, and 80% of the remainder will lack the empathy and mental flexibility to do a good job.
Finally, I need to point out that I have studied AI for over two decades, out of books from the '80s, then the '90s, then the '00s and '10s. The change in the literature and in capabilities has been unreal.
Perhaps you are forgetting how feeble AI was before, or simply not putting it to use. There are many, many tasks that no AI from even three years ago could have touched, and now suddenly you can do them for a $20-a-month subscription.
The change in capabilities is so drastic that I wonder if you're simply discounting it because you're not using AI, because you're comparing it to old AI, or because you're not seeing it enable things that no AI before could possibly have done, no matter how hard you tried.
So to conclude: the change has been too great, has enabled too many new things, has departed too sharply from old AI, and consistently outperforms humans on too many tasks I find important, for me to say there isn't some intelligence there, some useful information-processing capability that I can depend on more than a human in many tasks and settings where humans are consistently bad. In fact, it would be harmful for me not to recognize that these things have changed, because I would not be benefiting from them.