Classic linear programming (LP) is one of the most foundational problems in computer science and operations research. With extensive applications across numerous sectors of the global economy, including manufacturing, networking, and many other fields, LP has been the cornerstone of mathematical programming and has significantly influenced the development of today’s sophisticated modeling and algorithmic frameworks for data-driven decision making. If there's something to optimize, there's a good chance LP is involved.
Since the late 1940s, LP solving has evolved significantly, with Dantzig's simplex method and various interior-point methods being the most prevalent techniques. Today's advanced commercial LP solvers rely on these methods, but both depend on matrix factorizations, which makes scaling to very large instances difficult due to memory and computational demands. In response to this limitation, first-order methods (FOMs), which use only gradient-style information, have gained traction for large-scale LP problems.
With the above in mind, we introduce PDLP (Primal-dual hybrid gradient enhanced for LP), a new FOM-based LP solver that significantly scales up our LP solving capabilities. Because it relies on matrix-vector multiplication rather than matrix factorization, PDLP requires less memory and is more compatible with modern computational technologies like GPUs and distributed systems, offering a scalable alternative that mitigates the memory and computational bottlenecks of traditional LP methods. PDLP is open-sourced in Google’s OR-Tools. The project has been in development since 2018 [1, 2, 3], and we are proud to announce that it was co-awarded the prestigious Beale–Orchard-Hays Prize at the International Symposium on Mathematical Programming in July 2024. This accolade is one of the highest honors in computational optimization, awarded every three years by the Mathematical Optimization Society.
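To make the matrix-vector-multiplication point concrete, below is a minimal NumPy sketch of the vanilla primal-dual hybrid gradient (PDHG) iteration that PDLP builds on, for an equality-constrained LP. This is an illustration only, not PDLP's production implementation, which layers presolving, diagonal preconditioning, adaptive restarts, adaptive step sizes, and infeasibility detection on top of this basic loop; the function name and the tiny example problem here are ours.

```python
# Vanilla PDHG sketch for an LP of the form
#     minimize  c^T x   subject to  A x = b,  x >= 0.
# Not PDLP itself; just the basic iteration it enhances.
import numpy as np

def pdhg_lp(c, A, b, iters=10_000):
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    # Step sizes chosen so that tau * sigma * ||A||_2^2 < 1,
    # a standard sufficient condition for PDHG convergence.
    op_norm = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / op_norm
    for _ in range(iters):
        # Primal step: move against c - A^T y, then project onto x >= 0.
        x_next = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual step: ascent on the residual at the extrapolated primal point.
        y = y + sigma * (b - A @ (2.0 * x_next - x))
        x = x_next
    return x, y

# Tiny example:  minimize x0 + 2*x1  s.t.  x0 + x1 = 1,  x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = pdhg_lp(c, A, b)
print(x)  # approaches [1, 0], the optimal vertex
```

Note that every operation in the loop is a matrix-vector product or an elementwise update, with no factorization of A; this is what lets methods in this family keep memory usage low and map naturally onto GPUs and distributed sparse linear algebra.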