Researchers from Meta AI, along with several universities and research laboratories, have introduced Hyperagents — a novel approach to building artificial intelligence systems that can both improve their performance and refine the very mechanisms behind those improvements.

At the core of this development is the idea of combining two roles within a single agent: one that executes tasks and another that analyzes and modifies the system. Unlike earlier approaches, in which the self-improvement mechanism was fixed, Hyperagents treat that mechanism as itself open to change. The system can therefore optimize not only its behavior but also the way it learns, an approach the authors describe as metacognitive self-modification.

The technology builds upon ideas from the Darwin Gödel Machine (DGM) but removes a key limitation: the dependence on predefined instructions for self-improvement. In the new version, DGM-Hyperagents (DGM-H), the improvement process becomes flexible and is no longer tied to a specific domain.
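To make the distinction concrete, here is a minimal, purely illustrative sketch of the idea in Python. All names (`Hyperagent`, `naive_improver`, and so on) are hypothetical and not from the paper; the point is only that the improvement procedure is ordinary mutable state, so a self-modification step can replace both the task policy and the improver itself, rather than following a fixed, predefined improvement routine.

```python
# Illustrative sketch (hypothetical names, not the authors' implementation):
# an agent that carries both a task policy and a self-improvement policy,
# where the improvement policy itself can be rewritten.

from dataclasses import dataclass, field
from typing import Callable, List

Policy = Callable[[float], float]          # maps a task input to an answer
Improver = Callable[["Hyperagent"], None]  # mutates the agent, incl. itself


@dataclass
class Hyperagent:
    policy: Policy
    improver: Improver
    history: List[float] = field(default_factory=list)

    def act(self, x: float) -> float:
        y = self.policy(x)
        self.history.append(y)  # accumulate experience across tasks
        return y

    def self_improve(self) -> None:
        # The improver may replace self.policy AND self.improver.
        self.improver(self)


def naive_improver(agent: Hyperagent) -> None:
    # First-order improvement: refine the task policy...
    old = agent.policy
    agent.policy = lambda x: old(x) * 0.9
    # ...and, unlike a fixed improvement loop, also upgrade the improver.
    agent.improver = stronger_improver


def stronger_improver(agent: Hyperagent) -> None:
    old = agent.policy
    agent.policy = lambda x: old(x) * 0.5


agent = Hyperagent(policy=lambda x: x, improver=naive_improver)
agent.self_improve()  # swaps in a refined policy and a better improver
agent.self_improve()  # this round already uses the upgraded improver
print(agent.act(100.0))  # 45.0
```

In a fixed-mechanism system, `improver` would be a constant routine; here it is just another field the agent can rewrite, which is the property the article attributes to Hyperagents.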

In a series of experiments, the system demonstrated steady performance gains across diverse areas, including programming, scientific paper review, reward design for robotics, and solving olympiad-level mathematical problems. Hyperagents not only improved task execution but also gradually refined their own self-learning strategies — for example, by accumulating experience, tracking outcomes, and developing more effective approaches over time. These improvements proved to be transferable: self-optimization skills gained in one domain carried over to others and continued to deliver results.

According to the researchers, Hyperagents highlight the potential for AI systems that do not merely find better solutions, but systematically and continuously enhance their own learning processes. This capability could significantly accelerate technological progress.