RLinf: Reinforcement Learning Infrastructure for Agentic AI

RLinf is a flexible and scalable open-source infrastructure designed for post-training foundation models via reinforcement learning. The 'inf' in RLinf stands for Infrastructure, highlighting its role as a robust backbone for next-generation training. It also stands for Infinite, symbolizing the system’s support for open-ended learning, continuous generalization, and limitless possibilities in intelligence development.

[Figure: RLinf overview]

What's NEW!

Key Features

RLinf is unique with:

  • Macro-to-Micro Flow (M2Flow): a new paradigm that executes macro-level logical flows through micro-level execution flows, decoupling logical workflow construction (programmability) from physical communication and scheduling (efficiency).

  • Flexible Execution Modes (see the conceptual sketch after this list)

    • Collocated mode: shares all GPUs across all workers.
    • Disaggregated mode: assigns workers dedicated GPUs, enabling fine-grained pipelining.
    • Hybrid mode: a customizable combination of the two, collocating some workers while disaggregating others.
  • Auto-scheduling Strategy: automatically selects the most suitable execution mode based on the training workload, without the need for manual resource allocation.

  • Embodied Agent Support

    • Fast adaptation support for mainstream VLA models: OpenVLA, OpenVLA-OFT, π₀ and π₀.₅.
    • Support for mainstream CPU & GPU-based simulators via standardized RL interfaces: ManiSkill3, LIBERO.
    • Enabling the first RL fine-tuning of the π₀ and π₀.₅ model family with a flow-matching action expert.
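
The placement modes above can be pictured as different assignments of worker roles to GPU sets. The snippet below is a conceptual illustration only, in plain Python; it is not RLinf's configuration API, and the role names and GPU counts are assumptions made for the example.

```python
# Conceptual illustration of GPU placement modes (not RLinf's actual API).
from dataclasses import dataclass


@dataclass
class Placement:
    """Maps worker roles (here just rollout and training) to GPU ids."""
    rollout: list[int]
    training: list[int]


GPUS = list(range(8))  # a single 8-GPU node, for illustration

# Collocated: rollout and training share all GPUs and take turns using them.
collocated = Placement(rollout=GPUS, training=GPUS)

# Disaggregated: each role owns a disjoint GPU subset, enabling
# fine-grained pipelining between generation and training.
disaggregated = Placement(rollout=GPUS[:4], training=GPUS[4:])

# Hybrid: some GPUs are shared by both roles while others are dedicated,
# e.g. rollout shares the first half of the node with training, and the
# second half is reserved for training only.
hybrid = Placement(rollout=GPUS[:4], training=GPUS)
```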

RLinf is fast with:

  • Hybrid mode with fine-grained pipelining: achieves a 120%+ throughput improvement compared to other frameworks.
  • Automatic Online Scaling Strategy: dynamically scales training resources, with GPU switching completed within seconds, further improving efficiency by 20–40% while preserving the on-policy nature of RL algorithms.

RLinf is flexible and easy to use with:

  • Multiple Backend Integrations

    • FSDP + Hugging Face: rapid adaptation to new models and algorithms, ideal for beginners and fast prototyping.
    • Megatron + SGLang: optimized for large-scale training, delivering maximum efficiency for expert users with demanding workloads.
  • Adaptive communication via the asynchronous communication channel

  • Built-in support for popular RL methods, including PPO, GRPO, DAPO, Reinforce++, and more (a minimal GRPO sketch follows this list).
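
As a concrete example of one of these methods, the snippet below sketches GRPO-style group-normalized advantages: each prompt gets a group of sampled responses, and every response's reward is normalized by the group's mean and standard deviation. This is a generic illustration of the algorithm, not RLinf's implementation.

```python
# Minimal sketch of GRPO-style group-normalized advantages
# (generic illustration, not RLinf's implementation).
import torch


def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: [num_prompts, group_size] scalar rewards, one row per prompt
    (each row holds the rewards of the responses sampled for that prompt).

    Returns per-response advantages of the same shape, where each reward is
    normalized by its group's mean and standard deviation.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)


# Example: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 0.0, 1.0]])
print(grpo_advantages(rewards))
```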

Main Results

Embodied Intelligence

OpenVLA and OpenVLA-OFT results on ManiSkill3

| Model | Vision | Semantic | Position | Average |
|-------|--------|----------|----------|---------|
| rl4vla | 76.6% | 75.4% | 77.6% | 76.1% |
| GRPO-OpenVLA-OFT | 84.6% | 51.6% | 42.9% | 61.5% |
| PPO-OpenVLA-OFT | 80.5% | 56.6% | 56.1% | 64.5% |
| PPO-OpenVLA | 82.0% | 80.6% | 89.3% | 82.2% |
| GRPO-OpenVLA | 74.7% | 74.4% | 81.6% | 75.5% |

OpenVLA-OFT results on LIBERO

| Model | Spatial | Goal | Object | Long | Average |
|-------|---------|------|--------|------|---------|
| OpenVLA-OFT-SFT (one-shot) | 56.5% | 45.6% | 25.6% | 9.7% | 34.4% |
| OpenVLA-OFT-RLinf | 99.0% | 99.0% | 99.0% | 94.4% | 97.9% |
| Improvement | +42.5% | +53.4% | +73.4% | +84.7% | +63.5% |

  • RLinf supports both PPO and GRPO algorithms, enabling state-of-the-art training for Vision-Language-Action models.
  • The framework provides seamless integration with mainstream embodied intelligence benchmarks, including ManiSkill3 and LIBERO, and achieves strong performance across diverse evaluation metrics (a minimal rollout sketch follows).
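
Because ManiSkill3 exposes its tasks through the standard Gymnasium interface, a rollout loop for embodied RL reduces to the familiar reset/step pattern. The sketch below is a minimal, hedged example: the environment id is an illustrative ManiSkill3 task, and the random policy stands in for a real VLA policy.

```python
# Minimal Gymnasium-style rollout on a ManiSkill3 task.
# The env id and random policy are placeholders for a real task and VLA policy.
import gymnasium as gym
import mani_skill.envs  # noqa: F401  # importing this registers ManiSkill3 environments

env = gym.make("PickCube-v1")  # example task id
obs, info = env.reset(seed=0)

for _ in range(50):
    action = env.action_space.sample()  # a VLA policy would produce this action
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```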

Math Reasoning

1.5B model results

| Model | AIME 24 | AIME 25 | GPQA-diamond | Average |
|-------|---------|---------|--------------|---------|
| DeepSeek-R1-Distill-Qwen-1.5B (base model) | 28.33 | 24.90 | 27.45 | 26.89 |
| DeepMath-1.5B | 37.80 | 30.42 | 32.11 | 33.44 |
| DeepScaleR-1.5B-Preview | 40.41 | 30.93 | 27.54 | 32.96 |
| AReaL-1.5B-Preview-Stage-3 | 40.73 | 31.56 | 28.10 | 33.46 |
| AReaL-1.5B-retrain* | 44.42 | 34.27 | 33.81 | 37.50 |
| FastCuRL-1.5B-V3 | 43.65 | 32.49 | 35.00 | 37.05 |
| RLinf-math-1.5B | 48.44 | 35.63 | 38.46 | 40.84 |

* We retrain the model using the default settings for 600 steps.

7B model results

| Model | AIME 24 | AIME 25 | GPQA-diamond | Average |
|-------|---------|---------|--------------|---------|
| DeepSeek-R1-Distill-Qwen-7B (base model) | 54.90 | 40.20 | 45.48 | 46.86 |
| AReaL-boba-RL-7B | 61.66 | 49.38 | 46.93 | 52.66 |
| Skywork-OR1-7B | 66.87 | 52.49 | 44.43 | 54.60 |
| Polaris-7B-Preview | 68.55 | 51.24 | 43.88 | 54.56 |
| AceMath-RL-Nemotron-7B | 67.30 | 55.00 | 45.57 | 55.96 |
| RLinf-math-7B | 68.33 | 52.19 | 48.18 | 56.23 |

  • RLinf achieves state-of-the-art performance on math reasoning tasks, obtaining the highest average score across AIME 24, AIME 25, and GPQA-diamond among the compared models at both the 1.5B and 7B scales.

Roadmap

1. System-Level Enhancements

  • Support for heterogeneous GPUs
  • Support for asynchronous pipeline execution
  • Support for Mixture of Experts (MoE)
  • Support for vLLM inference backend

2. Application-Level Extensions

  • Support for Vision-Language Models (VLMs) training
  • Support for deep searcher agent training
  • Support for multi-agent training
  • Support for integration with more embodied simulators (e.g., Meta-World, GENESIS, RoboTwin)
  • Support for more Vision-Language-Action (VLA) models, such as GR00T and WALL-OSS
  • Support for world models
  • Support for real-world RL embodied intelligence

Getting Started

Complete documentation for RLinf can be found here:

  • Quickstart
  • Key Design
  • Example Gallery
  • Advanced Features
  • Extending the Framework
  • Blogs

Build Status

| Type | Status |
|------|--------|
| Reasoning RL-MATH | Build Status |
| Embodied RL-VLA | Build Status |

Contribution Guidelines

We welcome contributions to RLinf. Please read the contribution guide before getting started.

Citation and Acknowledgement

If you find RLinf helpful, please cite the paper:

@misc{yu2025rlinfflexibleefficientlargescale,
  title={RLinf: Flexible and Efficient Large-scale Reinforcement Learning via Macro-to-Micro Flow Transformation}, 
  author={Chao Yu and Yuanqing Wang and Zhen Guo and Hao Lin and Si Xu and Hongzhi Zang and Quanlu Zhang and Yongji Wu and Chunyang Zhu and Junhao Hu and Zixiao Huang and Mingjie Wei and Yuqing Xie and Ke Yang and Bo Dai and Zhexuan Xu and Xiangyuan Wang and Xu Fu and Zhihao Liu and Kang Chen and Weilin Liu and Gang Liu and Boxun Li and Jianlei Yang and Zhi Yang and Guohao Dai and Yu Wang},
  year={2025},
  eprint={2509.15965},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2509.15965}, 
}

If you use RL+VLA in RLinf, you can also cite our empirical study paper:

@misc{liu2025rlbringvlageneralization,
  title={What Can RL Bring to VLA Generalization? An Empirical Study}, 
  author={Jijia Liu and Feng Gao and Bingwen Wei and Xinlei Chen and Qingmin Liao and Yi Wu and Chao Yu and Yu Wang},
  year={2025},
  eprint={2505.19789},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2505.19789}, 
}

Acknowledgements: RLinf has been inspired by, and benefits from, the ideas and tooling of the broader open-source community. In particular, we thank the teams and contributors behind VeRL, AReaL, Megatron-LM, SGLang, and PyTorch Fully Sharded Data Parallel (FSDP). If we have inadvertently missed your project or contribution, please open an issue or a pull request so we can credit you properly.

Contact: We welcome applications from Postdocs, PhD/Master's students, and interns. Join us in shaping the future of RL infrastructure and embodied AI!
