AI Improves at Improving Itself Using an Evolutionary Trick

Original link: https://spectrum.ieee.org/evolutionary-ai

Recent advancements in AI are enabling coding agents to recursively improve themselves, potentially revolutionizing software development. Companies like Microsoft and Google already utilize AI for a significant portion of their coding. Researchers have now developed Darwin Gödel Machines (DGMs), which employ large language models (LLMs) and evolutionary algorithms to create coding agents that learn and improve through iterative testing and modification. Unlike traditional methods, DGMs retain all generated agents, fostering "open-ended exploration" and allowing initially unsuccessful innovations to contribute to later breakthroughs. DGMs have demonstrated impressive progress on coding benchmarks, surpassing fixed improvement systems. While still behind expert human coders, the potential for DGMs to evolve beyond human expertise is significant. However, concerns arise regarding safety, interpretability, and alignment with human values. Researchers are implementing guardrails and exploring methods to reward AI for transparency. While some worry about a "singularity" scenario, others remain optimistic about the continued importance of human creativity in the field.

This Hacker News thread discusses an IEEE article about "Darwin Gödel Machines" (DGMs), a new AI system that uses evolutionary algorithms and large language models (LLMs) to improve its coding abilities. DGMs create multiple "coding agents" and use an LLM to make targeted improvements to their code, guided by the LLM's understanding of human coding practices. The improved agents are then tested and scored on coding benchmarks. The discussion also critiques claims made by Microsoft and Google CEOs about the percentage of their code being written by AI. Commenters question whether the reported 20-30% figures refer to the entire codebase or just new code, suggesting the numbers may be inflated and don't account for revisions and discarded code. Some find the claims unrealistic based on their own experiences with AI-assisted coding tools. The general consensus is one of skepticism, highlighting the need for more clarity on how these numbers are calculated.

Original article

In April, Microsoft’s CEO said that artificial intelligence now wrote close to a third of the company’s code. Last October, Google’s CEO put that number at around a quarter. Other tech companies can’t be far off. Meanwhile these firms create AI, which will presumably be used to help programmers further.

Researchers have long hoped to fully close the loop, creating coding agents that recursively improve themselves. New research reveals an impressive demonstration of such a system. Extrapolating, one might see a boon to productivity, or a much darker future for humanity.

“It’s nice work,” said Jürgen Schmidhuber, a computer scientist at the King Abdullah University of Science and Technology (KAUST), in Saudi Arabia, who was not involved in the new research. “I think for many people, the results are surprising. Since I’ve been working on that topic for almost forty years now, it’s maybe a little bit less surprising to me.” But his work over that time was limited by the tech at hand. One new development is the availability of large language models (LLMs), the engines powering chatbots like ChatGPT.

In the 1980s and 1990s, Schmidhuber and others explored evolutionary algorithms for improving coding agents, creating programs that write programs. An evolutionary algorithm takes something (such as a program), creates variations, keeps the best ones, and iterates on those.
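That loop can be sketched in a few lines. This is a generic illustration, not code from the paper; `mutate` and `score` are placeholder callables, and the candidates could be programs, parameters, or any other artifact:

```python
import random

def evolve(seed, mutate, score, generations=100, population_size=10, keep=3):
    """Minimal evolutionary loop: create variations, keep the best, iterate.

    `mutate` and `score` are user-supplied placeholders; nothing here is
    specific to evolving programs.
    """
    population = [seed]
    for _ in range(generations):
        # Create variations of the current survivors.
        offspring = [mutate(random.choice(population))
                     for _ in range(population_size)]
        # Keep only the top performers (classic truncation selection).
        population = sorted(population + offspring, key=score,
                            reverse=True)[:keep]
    return max(population, key=score)
```

Because each generation's survivors always include the previous best, the top score never regresses; the cost, as the article notes next, is that such selection is blind to modifications whose value only appears later.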

But evolution is unpredictable. Modifications don’t always improve performance. So in 2003, Schmidhuber created problem solvers that rewrote their own code only if they could formally prove the updates to be useful. He called them Gödel machines, named after Kurt Gödel, a mathematician who’d done work on self-referencing systems. But for complex agents, provable utility doesn’t come easily. Empirical evidence may have to suffice.

The Value of Open-Ended Exploration

The new systems, described in a recent preprint on arXiv, rely on such evidence. In a nod to Schmidhuber, they’re called Darwin Gödel Machines (DGMs). A DGM starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent’s coding ability. LLMs have something like intuition about what might help, because they’re trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges.

Some evolutionary algorithms keep only the best performers in the population, on the assumption that progress moves endlessly forward. DGMs, however, keep them all, in case an innovation that initially fails actually holds the key to a later breakthrough when further tweaked. It’s a form of “open-ended exploration,” not closing any paths to progress. (DGMs do prioritize higher scorers when selecting progenitors.)
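A sketch of that outer loop, under the choices just described, might look like the following. The names are hypothetical, `propose_change` stands in for the LLM's one-change edit step, and `benchmark_score` stands in for evaluation on a suite like SWE-bench; the real system edits actual agent code:

```python
import random

def dgm_loop(initial_agent, propose_change, benchmark_score, iterations=80):
    """Open-ended DGM-style loop: every agent ever generated stays in the
    archive, but higher scorers are more likely to be chosen as parents.
    `propose_change` and `benchmark_score` are placeholder callables.
    """
    archive = [(initial_agent, benchmark_score(initial_agent))]
    for _ in range(iterations):
        # Score-weighted parent selection: prioritize higher scorers, but
        # never discard anyone -- a weak agent may still seed a breakthrough.
        agents, scores = zip(*archive)
        parent = random.choices(agents, weights=[s + 1e-6 for s in scores])[0]
        child = propose_change(parent)  # one LLM-guided modification
        archive.append((child, benchmark_score(child)))
    return max(archive, key=lambda pair: pair[1])
```

The key contrast with the truncation-selection loop above is the ever-growing archive: a low-scoring child is never deleted, so a lineage can dip in performance and still produce the eventual best agent.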

The researchers ran one DGM for 80 iterations on a coding benchmark called SWE-bench, and another for 80 iterations on a benchmark called Polyglot. Agents’ scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. “We were actually really surprised that the coding agent could write such complicated code by itself,” said Jenny Zhang, a computer scientist at the University of British Columbia and the paper’s lead author. “It could edit multiple files, create new files, and create really complicated systems.”

[Figure: a family-tree diagram of coding agents.] The first coding agent (numbered 0) created a generation of new and slightly different coding agents, some of which were selected to create new versions of themselves. The agents’ performance is indicated by the color inside the circles, and the best-performing agent is marked with a star. Jenny Zhang, Shengran Hu et al.

Critically, the DGMs outperformed an alternate method that used a fixed external system for improving agents. With DGMs, agents’ improvements compounded as they improved themselves at improving themselves. The DGMs also outperformed a version that didn’t maintain a population of agents and just modified the latest agent. To illustrate the benefit of open-endedness, the researchers created a family tree of the SWE-bench agents. If you look at the best-performing agent and trace its evolution from beginning to end, it made two changes that temporarily reduced performance. So the lineage followed an indirect path to success. Bad ideas can become good ones.

[Figure: SWE-bench score versus iterations.] The black line on this graph shows the scores obtained by agents within the lineage of the final best-performing agent. The line includes two performance dips. Jenny Zhang, Shengran Hu et al.

The best SWE-bench agent was not as good as the best agent designed by expert humans, which currently scores about 70 percent, but it was generated automatically, and maybe with enough time and computation an agent could evolve beyond human expertise. The study is a “big step forward” as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement. Jiang, who was not involved in the study, said the approach could make further progress if it modified the underlying LLM, or even the chip architecture. (Google DeepMind’s AlphaEvolve designs better basic algorithms and chips and found a way to accelerate the training of its underlying LLM by 1 percent.)

DGMs can theoretically score agents simultaneously on coding benchmarks and also specific applications, such as drug design, so they’d get better at getting better at designing drugs. Zhang said she’d like to combine a DGM with AlphaEvolve.

Could DGMs reduce employment for entry-level programmers? Jiang sees a bigger threat from everyday coding assistants like Cursor. “Evolutionary search is really about building really high-performance software that goes beyond the human expert,” he said, as AlphaEvolve has done on certain tasks.

The Risks of Recursive Self-Improvement

One concern with both evolutionary search and self-improving systems—and especially their combination, as in DGM—is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.)

In 2017, experts met in Asilomar, California, to discuss beneficial AI, and many signed an open letter called the Asilomar AI Principles. In part, it called for restrictions on “AI systems designed to recursively self-improve.” One frequently imagined outcome is the so-called singularity, in which AIs self-improve beyond our control and threaten human civilization. “I didn’t sign that because it was the bread and butter that I’ve been working on,” Schmidhuber told me. Since the 1970s, he’s predicted that superhuman AI will come in time for him to retire, but he sees the singularity as the kind of science-fiction dystopia people love to fear. Jiang, likewise, isn’t concerned, at least for the time being. He still places a premium on human creativity.

Whether digital evolution defeats biological evolution is up for grabs. What’s uncontested is that evolution in any guise has surprises in store.

