[Submitted on 16 Apr 2025 (v1), last revised 17 Jan 2026 (this version, v5)]
Reinforcement Learning from Human Feedback
Nathan Lambert
Abstract: Reinforcement learning from human feedback (RLHF) has become an important technical and storytelling tool for deploying the latest machine learning systems. In this book, we hope to give a gentle introduction to the core methods for people with some level of quantitative background. The book starts with the origins of RLHF -- both in recent literature and in a convergence of disparate scientific fields: economics, philosophy, and optimal control. We then set the stage with definitions, problem formulation, data collection, and other common math used in the literature. The core of the book details every optimization stage of RLHF, from instruction tuning to training a reward model, and finally rejection sampling, reinforcement learning, and direct alignment algorithms. The book concludes with advanced topics -- understudied research questions in synthetic data and evaluation -- and open questions for the field.
From: Nathan Lambert
[v1] Wed, 16 Apr 2025 21:36:46 UTC (5,200 KB)
[v2] Wed, 11 Jun 2025 15:15:22 UTC (7,032 KB)
[v3] Sun, 2 Nov 2025 20:03:47 UTC (7,093 KB)
[v4] Fri, 2 Jan 2026 00:09:40 UTC (8,065 KB)
[v5] Sat, 17 Jan 2026 17:17:41 UTC (8,732 KB)