
MMR1: Advancing the Frontiers of Multimodal Reasoning


If our project helps you, please give us a star ⭐ on GitHub to support us. 🙏🙏


📰 News

  • [2025.03.11] 🔥🔥 Released MMR1-Math-v0-7B, achieving SOTA with only 6k public training samples!

Introduction

We introduce MMR1-Math-v0, a large multimodal model specialized in mathematical tasks. Remarkably, MMR1-Math-v0 achieves state-of-the-art performance among open-source 7B multimodal models and competes effectively even against proprietary models with significantly larger parameter counts, despite being trained on only 6k carefully curated data instances.

💡 Key Highlights:

  • SOTA Performance: Sets a new state of the art on math-related multimodal benchmarks among open-source 7B models.

  • Minimal Training Data: Achieves top-tier performance with just 6k high-quality samples drawn from public training datasets.

  • Efficient Training with GRPO: 6 hours of RL training with 64 H100s for 15 epochs (a minimal sketch of the group-relative advantage computation follows this list).

  • Public and High-Quality Data: Publicly sourced datasets, rigorously filtered and balanced across both difficulty and mathematical problem type.

  • Balanced Data Strategy: Uniform sampling of data based on both task difficulty (filtering out overly simple problems) and mathematical reasoning diversity.
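
To make the GRPO highlight concrete, here is a minimal, illustrative sketch of the group-relative advantage computation that GRPO builds on. It is not the project's training code (which will be released separately); the function name, group size, and reward values are invented for the example.

```python
# Illustrative sketch of GRPO's group-relative advantage (not the released training code).
# For each prompt, the policy samples a group of responses; each response receives a scalar
# reward (e.g. answer correctness), and advantages are computed relative to the group.
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize each sampled response's reward against its own group's mean and std."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: 4 responses sampled for one math problem, rewarded 1.0 when the final answer is correct.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
# correct responses receive positive advantages, incorrect ones negative
```

In the full GRPO objective these advantages weight a PPO-style clipped policy-gradient term with a KL penalty toward a reference model; the upcoming training scripts are the authoritative reference.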

✅ Evaluation Results

We evaluated our model using VLMEvalKit on four mathematical reasoning benchmarks: MathVista_MINI, MathVision, LogicVista, and MathVerse_MINI.

We also include results on the MathVerse_MINI_Vision_Only_cot (MathVerse_V) subset to maintain consistency with the VLMEvalKit leaderboard. The table below compares our model's performance against various open-source and proprietary models.

| Model | Size | MathVista | MathVision | LogicVista | MathVerse | MathVerse_V |
|---|---|---|---|---|---|---|
| **Closed-source** | | | | | | |
| GPT-4o 1120 | - | 60.0 | 31.2 | 52.8 | 40.6 | - |
| Gemini-2.0-flash | - | 70.4 | 43.6 | 52.3 | 47.8 | - |
| Claude3.7-Sonnet | - | 66.8 | 41.9 | 58.2 | 46.7 | - |
| **R1-related** | | | | | | |
| LLaVA-CoT | 11B | 52.5 | 19.9 | 39.6 | 22.6 | - |
| Open-R1-Multimodal | 7B | 60.6 | - | - | - | - |
| Mulberry | 7B | 63.1 | - | - | - | - |
| LMM-R1 | 3B | 63.2 | 26.4 | - | - | 41.6 |
| R1-Onevision | 7B | - | 26.2 | - | - | 44.1 |
| MM-Eureka | 8B | 67.1 | 22.2 | - | - | 40.4 |
| MM-Eureka | 38B | 64.2 | 26.6 | - | - | 48.9 |
| **Open-source** | | | | | | |
| Ovis2-8b | 8B | 71.8 | 25.9 | 39.4 | 42.3 | - |
| MiniCPM-o-2.6 | 8B | 71.9 | 21.7 | 36.0 | 35.0 | - |
| VITA-1.5 | 7B | 66.2 | 19.5 | 38.9 | - | 23.4 |
| Qwen2.5-VL (official) | 7B | 68.2 | 25.4 | 47.9 | 41.1 | - |
| Qwen2.5-VL (reproduced) | 7B | 67.5 | 25.6 | 46.8 | 42.5 | 46.9 |
| **Ours** | | | | | | |
| MMR1-Math-v0 | 7B | 71.0 | 30.2 | 50.8 | 45.1 | 49.8 |

Ablation Studies

To further examine the effectiveness of GRPO, we perform ablation experiments by comparing our model with two SFT-based variants. Specifically, we fine-tune Qwen2.5-VL-7B on the 6k dataset using direct answer supervision (Qwen2.5-VL-sft) and chain-of-thought supervision (Qwen2.5-VL-sft-cot).

| Model | Size | MathVista | MathVision | LogicVista | MathVerse | MathVerse_V |
|---|---|---|---|---|---|---|
| Qwen2.5-VL-sft | 7B | 52.2 | 27.0 | 31.8 | 20.7 | 24.7 |
| Qwen2.5-VL-sft-cot | 7B | 54.7 | 23.4 | 33.8 | 23.7 | 25.7 |
| MMR1-Math-v0 | 7B | 71.0 | 30.2 | 50.8 | 45.1 | 49.8 |
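
For clarity, the snippet below is a hypothetical illustration of the two supervision targets compared above; the example problem and variable names are invented and do not reflect the actual data schema, which will come with the released data-processing scripts.

```python
# Hypothetical illustration of the two SFT supervision targets compared above
# (the example and names are invented, not the actual fine-tuning data format).
question = "The right triangle in the figure has legs 3 and 4. Find the hypotenuse."

# Qwen2.5-VL-sft: direct-answer supervision -- the target is only the final answer.
direct_answer_target = "5"

# Qwen2.5-VL-sft-cot: chain-of-thought supervision -- the target includes the reasoning.
cot_target = (
    "By the Pythagorean theorem, the hypotenuse is sqrt(3**2 + 4**2) = sqrt(25) = 5. "
    "The answer is 5."
)
```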

🏫 Project Zoo

| Project | Latest Model | Checkpoints | Data | Link |
|---|---|---|---|---|
| MMR1-Math | MMR1-Math-v0 | hf_space | hf_space | 🔗 |
| MMR1-Science | (coming soon!) | | | |

💪 TODO

This project is under active development. Stay tuned for our upcoming updates!

  • Release data composition and preprocessing scripts.
  • Release GRPO training scripts.
  • Cold-start before RL training. Both dataset and checkpoint for cold-start will be released soon.
  • More efficient GRPO training recipes. (Coming soon)
  • More model sizes and variants.

🛠️ Requirements and Installation

Basic Dependencies:

  • Python >= 3.10
  • transformers>=4.49.0
  • flash-attn>=2.4.3
  • vllm>=0.7.3

Install required packages:

pip install -r requirements.txt
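
Optionally, a quick sanity check (a hypothetical helper, not part of the repository) can confirm the core dependencies are importable and meet the minimum versions listed above:

```python
# Optional sanity check: confirm the key dependencies import and report their versions.
import flash_attn
import transformers
import vllm

print("transformers:", transformers.__version__)  # expect >= 4.49.0
print("flash-attn:", flash_attn.__version__)      # expect >= 2.4.3
print("vllm:", vllm.__version__)                  # expect >= 0.7.3
```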

🤖 Inference

Below is a code snippet showing how to use MMR1-Math with transformers and qwen_vl_utils:

import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "MMR1/MMR1-Math-v0-7B", 
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("MMR1/MMR1-Math-v0-7B")
# Example input
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "path/to/image.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
Batch inference
# Sample messages for batch inference
messages1 = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "What are the common elements in these pictures?"},
        ],
    }
]
messages2 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
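
Since vllm>=0.7.3 is listed in the requirements, the snippet below is a minimal sketch of offline inference with vLLM's multimodal API (the image path, prompt, and sampling parameters are placeholders); the transformers examples above remain the reference usage.

```python
# Minimal sketch of offline inference with vLLM (placeholders for path, prompt, and sampling).
from PIL import Image
from transformers import AutoProcessor
from vllm import LLM, SamplingParams

model_path = "MMR1/MMR1-Math-v0-7B"
processor = AutoProcessor.from_pretrained(model_path)

# Reuse the chat template to build a prompt containing the image placeholder tokens.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Solve the problem in the image step by step."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_path, limit_mm_per_prompt={"image": 1})
outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": Image.open("path/to/image.jpeg")}},
    SamplingParams(temperature=0.0, max_tokens=1024),
)
print(outputs[0].outputs[0].text)
```

For multiple problems, pass a list of such prompt dictionaries to llm.generate so vLLM batches them in a single call.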

🏗️ Training

Coming soon!

🀝 Contribution and Contact

This project is still under active development. Community feedback and contributions are highly appreciated. If you want to contribute, please feel free to make a pull request or create an issue.

If you have any questions or would like to engage with our community, feel free to scan the QR code below to join our WeChat group.

👍 Acknowledgement

Our MMR1 is built on top of Qwen2.5-VL, LLaMA-Factory, and EasyR1. MMR1 also benefits from many other open-source efforts. We sincerely appreciate them and compile a list in ACKNOWLEDGEMENT.md to express our gratitude. If your work is used in MMR1 but not mentioned in either this repo or the technical report, feel free to let us know ❤️.

💡 Some other multimodal-LLM projects from our team may interest you ✨.

VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding
Boqiang Zhang*, Kehan Li*, Zesen Cheng*, Zhiqiang Hu*, Yuqian Yuan*, Guanzheng Chen*, Sicong Leng*, Yuming Jiang*, Hang Zhang*, Xin Li*, Peng Jin, Wenqi Zhang, Fan Wang, Lidong Bing, Deli Zhao

VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
Zesen Cheng*, Sicong Leng*, Hang Zhang*, Yifei Xin*, Xin Li*, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, Lidong Bing

VCD: Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
Sicong Leng*, Hang Zhang*, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, Lidong Bing

The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio
Sicong Leng*, Yun Xing*, Zesen Cheng*, Yang Zhou, Hang Zhang, Xin Li, Deli Zhao, Shijian Lu, Chunyan Miao, Lidong Bing

Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss
Zesen Cheng*, Hang Zhang*, Kehan Li*, Sicong Leng, Zhiqiang Hu, Fei Wu, Deli Zhao, Xin Li, Lidong Bing

VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM
Yuqian Yuan, Hang Zhang, Wentong Li, Zesen Cheng, Boqiang Zhang, Long Li, Xin Li, Deli Zhao, Wenqiao Zhang, Yueting Zhuang, Jianke Zhu, Lidong Bing

📑 Citation

If you find MMR1 useful for your research and applications, please cite using this BibTeX:

@misc{MMR1-Math2025,
  title={MMR1: Advancing the Frontiers of Multimodal Reasoning},
  author={Sicong Leng*, Jing Wang*, Jiaxi Li*, Hao Zhang*, Zhiqiang Hu, Boqiang Zhang, Hang Zhang, Yuming Jiang, Xin Li, Deli Zhao, Fan Wang, Yu Rong, Aixin Sun†, Shijian Lu†},
  year={2025},
  howpublished={\url{https://github.com/LengSicong/MMR1}},
}

🔒 License

This project is released under the Apache 2.0 license as found in the LICENSE file. The service is a research preview intended for non-commercial use ONLY, subject to the model Licenses of Qwen, Terms of Use of the data generated by OpenAI and Gemini, and Privacy Practices of ShareGPT. Please get in touch with us if you find any potential violations.
