Richard Sutton and Andrew Barto Win 2024 Turing Award

Original link: https://awards.acm.org/about/2024-turing

Andrew G. Barto and Richard S. Sutton have been awarded the 2024 ACM A.M. Turing Award for their foundational work on reinforcement learning (RL). Beginning in the 1980s, they developed the key concepts, mathematical framework, and algorithms of RL, one of the most important approaches for creating intelligent systems. Inspired by psychology and neuroscience, RL involves an agent learning to behave optimally based on reward signals from its environment. Barto and Sutton formalized RL using Markov decision processes, enabling learning even when the environment and rewards are unknown. Their contributions include temporal difference learning, policy gradient methods, and the use of neural networks. Their textbook, Reinforcement Learning: An Introduction, remains the field's seminal resource. Practical applications of RL have surged recently with the integration of deep learning, leading to achievements such as AlphaGo's victories and advances in areas including robotics, chatbot development, and even matrix multiplication algorithms. Their work has also provided insight into the dopamine system of the human brain.

Richard Sutton and Andrew Barto won the 2024 Turing Award for their contributions to reinforcement learning (RL). The Hacker News discussion highlighted the significance of their work, praising in particular the accessibility of their textbook, Reinforcement Learning: An Introduction. Many commenters shared personal anecdotes and insights about the impact of RL and AI. One user amusingly recounted buying Barto's house and making an offer that was a prime number. The essay "The Bitter Lesson," which argues for methods that leverage computation, was cited and sparked debate about the future direction of AI, including concerns about "black box" AI and the potential need for provable, tunable AI systems. Some commenters discussed the implications of AI's growing compute requirements and its role in areas such as game playing and formal verification. A controversial comment about Sutton's views on AI "succession" of humanity sparked debate over whether personal beliefs should affect recognition of professional achievements. Overall, the thread celebrated Sutton and Barto's work while acknowledging the complex and potentially far-reaching implications of AI's rapid development.

Original Article

ACM has named Andrew G. Barto and Richard S. Sutton as the recipients of the 2024 ACM A.M. Turing Award for developing the conceptual and algorithmic foundations of reinforcement learning. In a series of papers beginning in the 1980s, Barto and Sutton introduced the main ideas, constructed the mathematical foundations, and developed important algorithms for reinforcement learning—one of the most important approaches for creating intelligent systems.

Barto is Professor Emeritus of Information and Computer Sciences at the University of Massachusetts, Amherst. Sutton is a Professor of Computer Science at the University of Alberta, a Research Scientist at Keen Technologies, and a Fellow at Amii (Alberta Machine Intelligence Institute).

The ACM A.M. Turing Award, often referred to as the “Nobel Prize in Computing,” carries a $1 million prize with financial support provided by Google, Inc. The award is named for Alan M. Turing, the British mathematician who articulated the mathematical foundations of computing.

What is Reinforcement Learning?

The field of artificial intelligence (AI) is generally concerned with constructing agents—that is, entities that perceive and act. More intelligent agents are those that choose better courses of action. Therefore, the notion that some courses of action are better than others is central to AI. Reward—a term borrowed from psychology and neuroscience—denotes a signal provided to an agent related to the quality of its behavior. Reinforcement learning (RL) is the process of learning to behave more successfully given this signal.
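
The interaction described above is often pictured as a simple loop: the agent acts, the environment responds with a new situation and a reward, and the agent tries to accumulate as much reward as possible. The following minimal Python sketch, built around a hypothetical ToyEnvironment and a random placeholder policy, only illustrates that loop; it is not code from Barto and Sutton's work.

```python
import random

class ToyEnvironment:
    """A hypothetical two-state environment used only to illustrate the agent/reward loop."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # The reward signal reflects the quality of the agent's behavior:
        # acting "correctly" in the current state earns a reward of 1.
        reward = 1.0 if action == self.state else 0.0
        self.state = random.randint(0, 1)  # the environment changes stochastically
        return self.state, reward

env = ToyEnvironment()
state, total_reward = env.state, 0.0
for _ in range(100):
    action = random.randint(0, 1)      # placeholder policy; an RL agent would improve this from experience
    state, reward = env.step(action)   # observe the next state and the reward signal
    total_reward += reward             # "behaving more successfully" means accumulating more reward
print("total reward:", total_reward)
```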

The idea of learning from reward has been familiar to animal trainers for thousands of years. Later, Alan Turing’s 1950 paper “Computing Machinery and Intelligence” addressed the question “Can machines think?” and proposed an approach to machine learning based on rewards and punishments.

While Turing reported having conducted some initial experiments with this approach and Arthur Samuel developed a checker-playing program in the late 1950s that learned from self-play, little further progress occurred in this vein of AI in the following decades. In the early 1980s, motivated by observations from psychology, Barto and his PhD student Sutton began to formulate reinforcement learning as a general problem framework.

They drew on the mathematical foundation provided by Markov decision processes (MDPs), wherein an agent makes decisions in a stochastic (randomly determined) environment, receiving a reward signal after each transition and aiming to maximize its long-term cumulative reward. Whereas standard MDP theory assumes that everything about the MDP is known to the agent, the RL framework allows for the environment and the rewards to be unknown. The minimal information requirements of RL, combined with the generality of the MDP framework, allow RL algorithms to be applied to a vast range of problems, as explained further below.
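
In the MDP formulation sketched above, "long-term cumulative reward" is typically made precise as an expected discounted return. The equations below use the field's standard notation (rewards R, discount factor gamma, policy pi); they summarize the objective rather than quote the press release.

```latex
% Discounted return from time step t, with rewards R_{t+1}, R_{t+2}, ...
% and discount factor 0 \le \gamma < 1:
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots
    = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}

% The value of a state s under a policy \pi is the expected return when starting
% in s and following \pi thereafter; the agent seeks a policy that maximizes it:
v_{\pi}(s) = \mathbb{E}_{\pi}\!\left[ G_t \mid S_t = s \right]
```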

Barto and Sutton, jointly and with others, developed many of the basic algorithmic approaches for RL. These include their foremost contribution, temporal difference learning, which made an important advance in solving reward prediction problems, as well as policy-gradient methods and the use of neural networks as a tool to represent learned functions. They also proposed agent designs that combined learning and planning, demonstrating the value of acquiring knowledge of the environment as a basis for planning.
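
Temporal difference learning, named above as their foremost contribution, updates a value estimate after every transition by moving it toward a bootstrapped target. A minimal tabular TD(0) sketch in Python follows; the env/policy interface (reset, step, a state-to-action function) and the parameter values are assumptions for illustration, not the authors' original code.

```python
from collections import defaultdict

def td0_value_estimation(env, policy, num_episodes=500, alpha=0.1, gamma=0.99):
    """Tabular TD(0): estimate state values V(s) for a fixed policy.

    Assumes env.reset() -> state and env.step(action) -> (next_state, reward, done),
    and that policy(state) returns an action; these are placeholder interfaces.
    """
    V = defaultdict(float)  # value estimates, initialized to 0
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            # Bootstrapped target: observed reward plus discounted estimate of the next state.
            target = reward + (0.0 if done else gamma * V[next_state])
            # Temporal-difference update: move V(state) a small step toward the target.
            V[state] += alpha * (target - V[state])
            state = next_state
    return V
```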

Perhaps equally influential was their textbook, Reinforcement Learning: An Introduction (1998), which is still the standard reference in the field and has been cited over 75,000 times. It allowed thousands of researchers to understand and contribute to this emerging field and continues to inspire much significant research activity in computer science today.

Although Barto and Sutton’s algorithms were developed decades ago, major advances in the practical applications of RL came about in the past fifteen years by merging RL with deep learning algorithms (pioneered by 2018 Turing Awardees Bengio, Hinton, and LeCun). This led to the technique of deep reinforcement learning.

The most prominent example of RL was the victory by the AlphaGo computer program over the best human Go players in 2016 and 2017. Another major achievement recently has been the development of the chatbot ChatGPT. ChatGPT is a large language model (LLM) trained in two phases, the second of which employs a technique called reinforcement learning from human feedback (RLHF), to capture human expectations.

RL has achieved success in many other areas as well. A high-profile research example is robot motor-skill learning in the in-hand manipulation and solution of a physical puzzle (a Rubik’s Cube), which showed that it is possible to do all of the reinforcement learning in simulation and yet ultimately succeed in the significantly different real world.

Other areas include network congestion control, chip design, internet advertising, optimization, global supply chain optimization, improving the behavior and reasoning capabilities of chatbots, and even improving algorithms for one of the oldest problems in computer science, matrix multiplication.

Finally, a technology that was partly inspired by neuroscience has returned the favor. Recent research, including work by Barto, has shown that specific RL algorithms developed in AI provide the best explanations for a wide range of findings concerning the dopamine system in the human brain.

“Barto and Sutton’s work demonstrates the immense potential of applying a multidisciplinary approach to longstanding challenges in our field,” explains ACM President Yannis Ioannidis. “Research areas ranging from cognitive science and psychology to neuroscience inspired the development of reinforcement learning, which has laid the foundations for some of the most important advances in AI and has given us greater insight into how the brain works. Barto and Sutton’s work is not a stepping stone that we have now moved on from. Reinforcement learning continues to grow and offers great potential for further advances in computing and many other disciplines. It is fitting that we are honoring them with the most prestigious award in our field.”

“In a 1947 lecture, Alan Turing stated ‘What we want is a machine that can learn from experience,’” noted Jeff Dean, Senior Vice President, Google. “Reinforcement learning, as pioneered by Barto and Sutton, directly answers Turing’s challenge. Their work has been a lynchpin of progress in AI over the last several decades. The tools they developed remain a central pillar of the AI boom and have rendered major advances, attracted legions of young researchers, and driven billions of dollars in investments. RL’s impact will continue well into the future. Google is proud to sponsor the ACM A.M. Turing Award and honor the individuals who have shaped the technologies that improve our lives.”
