Scaling up linear programming with PDLP

Original link: https://research.google/blog/scaling-up-linear-programming-with-pdlp/




Classic linear programming (LP) is one of the most foundational problems in computer science and operations research. With extensive applications across numerous sectors of the global economy, such as manufacturing, networking, and other fields, LP has been the cornerstone of mathematical programming and has significantly influenced the development of today’s sophisticated modeling and algorithmic frameworks for data-driven decision making. If there's something to optimize, there's a good chance LP is involved.

Since the late 1940s, LP solving has evolved significantly, with the simplex method by Dantzig and various interior-point methods being the most prevalent techniques. Today's advanced commercial LP solvers utilize these methods but face challenges in scaling to very large instances due to computational demands. In response to this limitation, first-order methods (FOMs) have gained traction for large-scale LP problems.
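To make the contrast concrete, here is a minimal NumPy sketch of the primal-dual hybrid gradient (PDHG) iteration, a representative first-order method for LP. The function name, fixed step sizes, and tiny instance are illustrative, not taken from any production solver; the point is that each iteration touches the constraint matrix only through matrix-vector products, never through a factorization.

```python
import numpy as np

def pdhg_lp(c, A, b, num_iters=20000):
    """Plain PDHG iterations for the LP:  minimize c^T x  s.t.  A x >= b, x >= 0.

    Illustrative sketch only: production first-order LP solvers add presolve,
    preconditioning, adaptive step sizes, and restarts on top of this loop.
    """
    m, n = A.shape
    x = np.zeros(n)
    y = np.zeros(m)  # dual variables for the inequality constraints
    # Conservative fixed step sizes satisfying tau * sigma * ||A||_2^2 < 1.
    tau = sigma = 0.9 / np.linalg.norm(A, 2)
    for _ in range(num_iters):
        # Primal step: only a product with A^T, followed by projection onto x >= 0.
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual step: only a product with A at the extrapolated point.
        y = np.maximum(y + sigma * (b - A @ (2.0 * x_new - x)), 0.0)
        x = x_new
    return x, y

# Tiny instance: maximize 3a + 4b s.t. a + 2b <= 14, 3a - b >= 0, a, b >= 0,
# rewritten in the "minimize, >=" form expected above.
c = np.array([-3.0, -4.0])
A = np.array([[-1.0, -2.0],
              [3.0, -1.0]])
b = np.array([-14.0, 0.0])
x, y = pdhg_lp(c, A, b)
print("approximate primal solution:", x)
```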

With the above in mind, we introduce our solver PDLP (Primal-dual hybrid gradient enhanced for LP), a new FOM-based LP solver that significantly scales up our LP solving capabilities. Utilizing matrix-vector multiplication rather than matrix factorization, PDLP requires less memory and is more compatible with modern computational technologies like GPUs and distributed systems, offering a scalable alternative that mitigates the memory and computational inefficiencies of traditional LP methods. PDLP is open-sourced in Google’s OR-Tools. This project has been in development since 2018 [1, 2, 3], and we are proud to announce that it was co-awarded the prestigious Beale–Orchard-Hays Prize at the International Symposium on Mathematical Programming in July 2024. This accolade is one of the highest honors in the field of computational optimization, awarded every three years by the Mathematical Optimization Society.
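As a usage illustration, the sketch below builds a small LP with OR-Tools' Python wrapper and requests the PDLP backend. The model is an arbitrary toy example; the solver-name string and backend availability depend on the installed OR-Tools version, and `CreateSolver` returns `None` when the backend is missing.

```python
from ortools.linear_solver import pywraplp

# Request the PDLP backend; returns None if this OR-Tools build lacks it.
solver = pywraplp.Solver.CreateSolver("PDLP")
assert solver is not None, "PDLP backend not available in this OR-Tools install"

# A small illustrative model: maximize 3x + 4y subject to three constraints.
x = solver.NumVar(0.0, solver.infinity(), "x")
y = solver.NumVar(0.0, solver.infinity(), "y")
solver.Add(x + 2 * y <= 14)
solver.Add(3 * x - y >= 0)
solver.Add(x - y <= 2)
solver.Maximize(3 * x + 4 * y)

status = solver.Solve()
# A first-order solver returns solutions to a tolerance, so accept FEASIBLE too.
if status in (pywraplp.Solver.OPTIMAL, pywraplp.Solver.FEASIBLE):
    print("objective =", solver.Objective().Value())
    print("x =", x.solution_value(), "y =", y.solution_value())
else:
    print("solver finished with status", status)
```

Because a FOM-based solver works to a tolerance rather than to the exact vertex a simplex solver would return, its main appeal is on instances too large for factorization-based methods.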
