# Agent Reinforcement Trainer (ART)

Train GRPO-powered RL agents with minimal code changes and maximal performance!

ART is an open-source reinforcement training library for improving LLM performance in agentic workflows. ART uses the powerful GRPO reinforcement learning algorithm to train models from their own experience. Unlike most RL libraries, ART lets you execute agent runs in your existing codebase while offloading all the complexity of the RL training loop to the ART backend. Read through the training loop overview below, then try out one of the notebooks!

## 📒 Notebooks

## 🔁 Training Loop Overview

ART's functionality is divided into a **client** and a **server**. The OpenAI-compatible client is responsible for interfacing between ART and your codebase. Using the client, you can pass messages to your LLM and get completions from it as it improves. The server runs independently on any machine with a GPU. It abstracts away the complexity of the inference and training portions of the RL loop while still allowing for some custom configuration. An outline of the training loop is shown below (a minimal, illustrative client-side sketch appears in the appendix at the end of this README):

**Inference**

1. Your code uses the ART client to perform an agentic workflow (usually executing several rollouts in parallel to gather data faster).
2. Completion requests are routed to the ART server, which runs the model's latest LoRA in vLLM.
3. As the agent executes, each `system`, `user`, and `assistant` message is stored in a Trajectory.
4. When a rollout finishes, your code assigns a reward to its Trajectory, indicating the performance of the LLM.

**Training**

1. When each rollout has finished, Trajectories are grouped and sent to the server. Inference is blocked while training executes.
2. The server trains your model using GRPO, initializing from the latest checkpoint (or an empty LoRA on the first iteration).
3. The server saves the newly trained LoRA to a local directory and loads it into vLLM.
4. Inference is unblocked and the loop resumes at step 1.

This training loop runs until a specified number of inference and training iterations have completed.

## 🧩 Supported Models

ART should work with most vLLM/HuggingFace-transformers-compatible causal language models, or at least those supported by Unsloth. Gemma 3 does not appear to be supported for the time being. If any other model isn't working for you, please let us know on Discord or open an issue on GitHub!

## 🤝 Contributing

ART is in active development, and contributions are most welcome! Please see the CONTRIBUTING.md file for more information.

## 📖 Citation

```bibtex
@misc{hilton2025art,
  author = {Brad Hilton and Kyle Corbitt and David Corbitt and Saumya Gandhi and Angky William and Bohdan Kovalenskyi and Andie Jones},
  title = {ART: Agent Reinforcement Trainer},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/openpipe/art}}
}
```

## ⚖️ License

This repository's source code is available under the Apache-2.0 License.

## 🙏 Credits

ART stands on the shoulders of giants. While we owe many of the ideas and early experiments that led to ART's development to the open-source RL community at large, we're especially grateful to the authors of the projects ART builds on. Finally, thank you to our partners who've helped us test ART in the wild! We're excited to see what you all build with it.
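
## 📝 Appendix: Illustrative Client Sketch

To make the training loop above concrete, here is a minimal sketch of what the client side of that loop can look like. This is a sketch under assumptions, not a definitive reference: identifiers such as `art.TrainableModel`, `art.LocalBackend`, `art.Trajectory`, `art.TrajectoryGroup`, `model.openai_client()`, and `model.train()`, as well as the model names and parameters, are illustrative and may differ from ART's actual API; the notebooks are the authoritative starting point.

```python
# Minimal, illustrative sketch of the inference + training loop described above.
# NOTE: all ART identifiers below (TrainableModel, LocalBackend, Trajectory,
# TrajectoryGroup, openai_client, train) are assumptions made for illustration;
# consult the ART notebooks and docs for the exact API.
import asyncio

import art  # assumed import name for the ART client


async def rollout(model):
    # Completion requests go through an OpenAI-compatible client and are routed
    # to the ART server, which serves the model's latest LoRA in vLLM.
    client = model.openai_client()
    messages = [
        {"role": "system", "content": "You are a helpful agent."},
        {"role": "user", "content": "Solve the task described here..."},
    ]
    completion = await client.chat.completions.create(
        model=model.name, messages=messages
    )
    reply = completion.choices[0].message.content or ""

    # When the rollout finishes, assign a reward indicating how well the LLM did.
    reward = 1.0 if "DONE" in reply else 0.0
    return art.Trajectory(
        messages_and_choices=messages + [completion.choices[0]],
        reward=reward,
    )


async def main():
    backend = art.LocalBackend()  # the server runs on a machine with a GPU
    model = art.TrainableModel(
        name="my-agent", project="demo", base_model="Qwen/Qwen2.5-7B-Instruct"
    )
    await model.register(backend)

    for _ in range(10):  # a fixed number of inference + training iterations
        # Inference: run several rollouts in parallel to gather data faster.
        trajectories = await asyncio.gather(*[rollout(model) for _ in range(8)])
        # Training: group the Trajectories and send them to the server, which
        # runs a GRPO step from the latest checkpoint and reloads the new LoRA.
        await model.train([art.TrajectoryGroup(trajectories)])


if __name__ == "__main__":
    asyncio.run(main())
```

This mirrors the loop above: inference gathers rewarded Trajectories, training consumes them in groups, and the freshly trained LoRA is served for the next round of rollouts.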