NeMo Gym is a framework for building reinforcement learning (RL) training environments for large language models (LLMs). It provides infrastructure to develop environments, scale rollout collection, and integrate seamlessly with your preferred training framework.
NeMo Gym is a component of the NVIDIA NeMo Framework, NVIDIA's GPU-accelerated platform for building and training generative AI models.
- Scaffolding and patterns to accelerate environment development: multi-step, multi-turn, and user modeling scenarios
- Contribute environments without expert knowledge of the entire RL training loop
- Test environment and throughput end-to-end independent of the RL training loop
- Interoperable with existing environments, systems, and RL training frameworks
- Growing collection of training environments and datasets to enable Reinforcement Learning from Verifiable Rewards (RLVR)
Important
NeMo Gym is currently in early development. You should expect evolving APIs, incomplete documentation, and occasional bugs. We welcome contributions and feedback - for any changes, please open an issue first to kick off discussion!
NeMo Gym is designed to run on standard development machines:
- GPU: Not required for NeMo Gym framework operation
- GPU may be needed for specific resource servers or model inference (see individual server documentation)
- CPU: Any modern x86_64 or ARM64 processor (e.g., Intel, AMD, Apple Silicon)
- RAM: Minimum 8 GB (16 GB+ recommended for larger environments)
- Storage: Minimum 5 GB free disk space for installation and basic usage
- Operating System:
- Linux (Ubuntu 20.04+, or equivalent)
- macOS (11.0+ for x86_64, 12.0+ for Apple Silicon)
- Windows (via WSL2)
- Python: 3.12 or higher
- Git: For cloning the repository
- Internet Connection: Required for downloading dependencies and API access
- API Keys: OpenAI API key with available credits (for the quickstart examples)
- Other model providers supported (Azure OpenAI, self-hosted models via vLLM)
- Ray: Automatically installed as a dependency (no separate setup required)
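Before installing, you can optionally confirm the basics are in place (if Python 3.12 is missing, uv will download a managed interpreter when creating the virtual environment in the next step):

# Optional sanity check before installation
git --version        # any recent Git is fine
python3 --version    # 3.12+ preferred; uv can provision one otherwise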
# Clone the repository
git clone git@github.com:NVIDIA-NeMo/Gym.git
cd Gym
# Install UV (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
# Create virtual environment
uv venv --python 3.12
source .venv/bin/activate
# Install NeMo Gym
uv sync --extra dev --group docs

Create an env.yaml file that contains your OpenAI API key and the policy model you want to use. Replace your-openai-api-key with your actual key. This file helps keep your secrets out of version control while still making them available to NeMo Gym.
echo "policy_base_url: https://api.openai.com/v1
policy_api_key: your-openai-api-key
policy_model_name: gpt-4.1-2025-04-14" > env.yaml

Note
We use GPT-4.1 in this quickstart because it provides low latency (no reasoning step) and works reliably out of the box. NeMo Gym is not limited to OpenAI models: you can use self-hosted models via vLLM or any OpenAI-compatible inference server. See the documentation for details.
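For example, pointing NeMo Gym at a self-hosted vLLM server only changes the values in env.yaml. A minimal sketch, assuming vLLM is serving an OpenAI-compatible endpoint on localhost port 8000; the model name is a placeholder, and vLLM typically ignores the API key unless one was configured:

# Hypothetical env.yaml for a self-hosted vLLM endpoint (adjust URL and model to your deployment)
echo "policy_base_url: http://localhost:8000/v1
policy_api_key: EMPTY
policy_model_name: meta-llama/Llama-3.1-8B-Instruct" > env.yaml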
Terminal 1 (start servers):
# Start servers (this will keep running)
config_paths="resources_servers/example_simple_weather/configs/simple_weather.yaml,\
responses_api_models/openai_model/configs/openai_model.yaml"
ng_run "+config_paths=[${config_paths}]"

Terminal 2 (interact with agent):
# In a NEW terminal, activate environment
source .venv/bin/activate
# Interact with your agent
python responses_api_agents/simple_agent/client.py

Terminal 2 (keep servers running in Terminal 1):
# Create a simple dataset with one query
echo '{"responses_create_params":{"input":[{"role":"developer","content":"You are a helpful assistant."},{"role":"user","content":"What is the weather in Seattle?"}]}}' > weather_query.jsonl
# Collect verified rollouts
ng_collect_rollouts \
+agent_name=simple_weather_simple_agent \
+input_jsonl_fpath=weather_query.jsonl \
+output_jsonl_fpath=weather_rollouts.jsonl
# View the result
cat weather_rollouts.jsonl | python -m json.tool

This generates training data with verification scores!
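Each line of the input JSONL is one independent task in the same responses_create_params format, so scaling up a dataset is just a matter of writing more lines. A minimal sketch (the file name and cities are illustrative):

# Hypothetical: build a small multi-query dataset, one JSON object per line
for city in Seattle Tokyo Paris; do
  echo '{"responses_create_params":{"input":[{"role":"developer","content":"You are a helpful assistant."},{"role":"user","content":"What is the weather in '"$city"'?"}]}}'
done > weather_queries.jsonl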
In Terminal 1 (with the running servers), press Ctrl+C to stop the ng_run process.
Now that you can generate rollouts, choose your path:
- Use an existing training environment: Browse the Available Resource Servers below to find a training-ready environment that matches your goals.
- Build a custom training environment: Implement or integrate existing tools and define task verification logic. Get started with the Creating a Resource Server tutorial.
- Documentation - Technical reference docs
- Tutorials - Hands-on tutorials and practical examples
We'd love your contributions! Here's how to get involved:
- Report Issues - Bug reports and feature requests
- Contributing Guide - How to contribute code, docs, or new environments
If you use NeMo Gym in your research, please cite it using the following BibTeX entry:
@misc{nemo-gym,
title = {NeMo Gym: An Open Source Framework for Scaling Reinforcement Learning Environments for LLM},
howpublished = {\url{https://github.com/NVIDIA-NeMo/Gym}},
author = {NVIDIA},
year = {2025},
note = {GitHub repository},
}

NeMo Gym includes a curated collection of resource servers for training and evaluation across multiple domains:
Purpose: Demonstrate NeMo Gym patterns and concepts.
| Name | Demonstrates | Config | README |
|---|---|---|---|
| Multi Step | Multi-step instruction-following example | example_multi_step.yaml | README |
| Simple Weather | Basic single-step tool calling | simple_weather.yaml | README |
| Stateful Counter | Session state management (in-memory) | stateful_counter.yaml | README |
Purpose: Training-ready environments with curated datasets.
Tip
Each resource server includes example data, configuration files, and tests. See each server's README for details.
| Resource Server | Domain | Dataset | Description | Value | Config | Train | Validation | License |
|---|---|---|---|---|---|---|---|---|
| Google Search | agent | Nemotron-RL-knowledge-web_search-mcqa | Multiple-choice question answering problems with integrated search tools | Improve knowledge-related benchmarks with search tools | config | ✅ | - | Apache 2.0 |
| Math Advanced Calculations | agent | Nemotron-RL-math-advanced_calculations | An instruction-following math environment with counter-intuitive calculators | Improve instruction following capabilities in specific math environments | config | ✅ | - | Apache 2.0 |
| Workplace Assistant | agent | Nemotron-RL-agent-workplace_assistant | Workplace assistant multi-step tool-using environment | Improve multi-step tool use capability | config | ✅ | ✅ | Apache 2.0 |
| Mini Swe Agent | coding | SWE-bench_Verified | A software development environment with mini-swe-agent orchestration | Improve software development capabilities, as measured by SWE-bench | config | ✅ | ✅ | MIT |
| Instruction Following | instruction_following | Nemotron-RL-instruction_following | Instruction-following datasets targeting IFEval- and IFBench-style capabilities | Improve IFEval and IFBench performance | config | ✅ | - | Apache 2.0 |
| Structured Outputs | instruction_following | Nemotron-RL-instruction_following-structured_outputs | Checks whether responses follow the structured-output requirements given in prompts | Improve instruction following capabilities | config | ✅ | ✅ | Apache 2.0 |
| Equivalence Llm Judge | knowledge | Nemotron-RL-knowledge-openQA | Short-answer questions with LLM-as-a-judge | Improve knowledge-related benchmarks like GPQA / HLE | config | ✅ | - | Apache 2.0 |
| Mcqa | knowledge | Nemotron-RL-knowledge-mcqa | Multiple-choice question answering problems | Improve benchmarks like MMLU / GPQA / HLE | config | ✅ | - | Apache 2.0 |
| Math With Judge | math | Nemotron-RL-math-OpenMathReasoning | Math dataset with math-verify and LLM-as-a-judge | Improve math capabilities including AIME 24 / 25 | config | ✅ | ✅ | Creative Commons Attribution 4.0 International |