Official implementation of the paper "Curriculum Reinforcement Learning from Easy to Hard Tasks Improves LLM Reasoning".
This repository implements a curriculum learning framework for training large language models (LLMs) on reasoning tasks using GRPO (Group Relative Policy Optimization). The framework progressively trains models from easy to hard tasks, improving their reasoning capabilities across multiple domains.
- Overview
- Table of Contents
- Installation
- Curriculum Schedules
- Configuration
- Training
- Citation
- License
- Acknowledgments
Prerequisites:

- Python 3.10+
- CUDA 12.x compatible GPU
- Conda or Mamba package manager
- Clone the repository:
```bash
git clone https://github.com/divelab/E2H-Reasoning.git
cd E2H-Reasoning
```

- Create the conda environment:

```bash
bash env/build_env.sh
```

The framework supports four curriculum learning schedules:
- **Classic**: Simple linear progression through tasks based on training progress.
- **Balanced**: Equal probability for all task difficulty levels throughout training.
- **Cosine**: Smooth transition from easy to hard tasks using cosine annealing.
- **Gaussian**: A Gaussian distribution with a moving center, transitioning from easy to hard tasks (a hypothetical sketch follows this list).
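For concreteness, the sketch below is a minimal, hypothetical illustration (not the repository's implementation, which lives in `src/trainer.py`) of how a Gaussian schedule can turn training progress into sampling probabilities over difficulty levels. The `mu_exp` and `sigma` names mirror the configuration shown below; the exact formula for the moving center is an assumption.

```python
# Hypothetical sketch of a Gaussian curriculum schedule; the repository's
# actual implementation may differ.
import numpy as np

def gaussian_schedule_weights(progress: float, num_levels: int,
                              mu_exp: float = 0.5, sigma: float = 0.5) -> np.ndarray:
    """Return sampling probabilities over difficulty levels 0..num_levels-1.

    progress: fraction of training completed, in [0, 1].
    mu_exp:   controls how quickly the center moves toward harder levels (assumed form).
    sigma:    width of the Gaussian over normalized difficulty levels.
    """
    levels = np.linspace(0.0, 1.0, num_levels)        # 0 = easiest, 1 = hardest
    mu = progress ** mu_exp                           # moving center, easy -> hard (assumption)
    weights = np.exp(-0.5 * ((levels - mu) / sigma) ** 2)
    return weights / weights.sum()                    # normalize to a probability distribution

# Example: halfway through training with 5 difficulty levels
print(gaussian_schedule_weights(progress=0.5, num_levels=5))
```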
Configuration example:

```yaml
algorithm:
  e2h_args:
    curriculum_schedule: gaussian  # Options: classic, balanced, cosine, gaussian
    scheduler_params:
      mu_exp: 0.5
      sigma: 0.5
```

The project uses Hydra for configuration management. Configuration files are located in `config/`.
```
curriculum-reasoning/
├── config/                  # Hydra configuration files
│   ├── algorithm/           # Algorithm configs (GRPO)
│   ├── model/               # Model configs (Qwen, Llama)
│   ├── task/                # Task configs (GSM8K, MATH, etc.)
│   └── config.yaml          # Base configuration
├── env/
│   └── environment.yml      # Conda environment specification
├── src/
│   ├── datasets.py          # Dataset loading and preprocessing
│   ├── rewards.py           # Reward function implementations
│   └── trainer.py           # CurriculumGRPOTrainer
├── main.py                  # Main entry point for training/testing
├── run.sh                   # SLURM submission script
└── README.md                # This file
```

```
config/
├── algorithm/
│   └── grpo.yaml            # GRPO training parameters
├── model/
│   ├── qwen1.5b.yaml        # Qwen 1.5B model config
│   ├── qwen3b.yaml          # Qwen 3B model config
│   └── llama3b.yaml         # Llama 3B model config
├── task/
│   ├── gsm8k.yaml           # GSM8K task config
│   ├── math.yaml            # MATH task config
│   ├── aqua.yaml            # AQUA task config
│   ├── blocksworld.yaml     # Blocksworld task config
│   └── countdown.yaml       # Countdown task config
└── config.yaml              # Base configuration
```
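Since these files compose through Hydra's group/override syntax, one quick way to inspect the merged configuration is Hydra's Compose API. The sketch below is illustrative only: the group names mirror the directory layout above, while the chosen options and the override path are assumptions; the actual entry point is `main.py`, whose defaults may differ.

```python
# Illustrative use of Hydra's Compose API to preview the composed configuration.
from hydra import compose, initialize
from omegaconf import OmegaConf

with initialize(version_base=None, config_path="config"):
    cfg = compose(
        config_name="config",
        overrides=[
            "model=qwen1.5b",                                 # config/model/qwen1.5b.yaml
            "task=gsm8k",                                     # config/task/gsm8k.yaml
            "algorithm=grpo",                                 # config/algorithm/grpo.yaml
            "algorithm.e2h_args.curriculum_schedule=cosine",  # override a single field
        ],
    )

print(OmegaConf.to_yaml(cfg))  # inspect the fully composed config
```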
To run our code without modifying any arguments:

```bash
bash run.sh \
    --model=<qwen1.5b,qwen3b,llama3b> \
    --task=<aqua,blocksworld,countdown,gsm8k,math> \
    --curriculum_schedule=<classic,balanced,cosine,gaussian>
```

If using the vLLM server, execute the following command before training:
```bash
CUDA_VISIBLE_DEVICES=0 \
trl vllm-serve \
    --model Qwen/Qwen2.5-1.5B-Instruct \
    --dtype bfloat16 \
    --max_model_len 4096 \
    --trust_remote_code true \
    --log_level warning \
    &
```

Alternatively, vLLM can be run in colocate mode by changing the configs in `config/algorithm/grpo.yaml`.
To launch training:

```bash
CUDA_VISIBLE_DEVICES=1,2 \
accelerate launch \
    --mixed_precision bf16 \
    --num_processes 2 \
    --dynamo_backend no \
    --use_deepspeed \
    --zero_stage 3 \
    --gradient_accumulation_steps 4 \
    --gradient_clipping 1 \
    --zero3_init_flag true \
    --zero3_save_16bit_model true \
    main.py \
    mode=train \
    model=qwen1.5b \
    task=blocksworld \
    <ARG Overrides>
```

To run evaluation:

```bash
CUDA_VISIBLE_DEVICES=1,2 \
accelerate launch \
    --mixed_precision bf16 \
    --num_machines 1 \
    --num_processes 1 \
    --dynamo_backend no \
    main.py \
    mode=test \
    model=qwen1.5b \
    task=blocksworld \
    <ARG Overrides>
```

If you use this code in your research, please cite:
```bibtex
@article{parashar2025curriculum,
  title={Curriculum Reinforcement Learning from Easy to Hard Tasks Improves LLM Reasoning},
  author={Parashar, Shubham and Gui, Shurui and Li, Xiner and Ling, Hongyi and Vemuri, Sushil and Olson, Blake and Li, Eric and Zhang, Yu and Caverlee, James and Kalathil, Dileep and Ji, Shuiwang},
  journal={arXiv preprint arXiv:2506.06632},
  year={2025}
}
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Built with TRL (Transformer Reinforcement Learning)
- Uses vLLM for efficient inference
- Configuration management via Hydra
- Training optimization with DeepSpeed
For questions or issues, please open an issue on GitHub or contact the authors.