Vision-Matters is a simple visual perturbation framework that can be easily integrated into existing post-training pipelines including SFT, DPO, and GRPO. Our findings highlight the critical role of visual perturbation: better reasoning begins with better seeing.
1. School of Computer Science, Shanghai Jiao Tong University
2. Shanghai Innovation Institute
3. Zhongguancun Academy
4. State Key Laboratory of General Artificial Intelligence, BIGAI
5. Lehigh University
If you like our work, please give us a ⭐!
- [2025.06.12] Our paper: Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning is available on arXiv.
- [2025.06.11] We release our models, datasets and codebase.
For SFT and DPO training, we use the ms-swift framework. You can set up the environment by following the ms-swift installation instructions.
To install from source:
```bash
git clone https://github.com/YutingLi0606/Vision-Matters.git
cd Vision-Matters/SFT-DPO-Training
pip install -e .
```
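As a quick sanity check that the editable install succeeded, you can query the package. The package name ms-swift and the swift import name are assumptions based on the upstream ms-swift project; adjust if the vendored copy registers differently.

```bash
# Optional sanity check (assumes the package registers as "ms-swift" and imports as "swift").
pip show ms-swift
python -c "import swift" && echo "swift import: OK"
```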
For GRPO training, we use the EasyR1 framework.
- Python 3.9+
- transformers>=4.51.0
- flash-attn>=2.4.3
- vllm>=0.8.3
We provide a Dockerfile to build the environment easily.
We recommend using the pre-built Docker image from EasyR1:
```bash
docker pull hiyouga/verl:ngc-th2.6.0-cu126-vllm0.8.4-flashinfer0.2.2-cxx11abi0
```
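If you go the Docker route, a minimal way to start an interactive container with GPU access might look like the following sketch; the mount point and shared-memory size are illustrative, not part of the official setup.

```bash
# Illustrative container launch: expose all GPUs, enlarge shared memory for dataloaders,
# and mount the cloned repository so the training scripts are visible inside the container.
docker run -it --gpus all --shm-size 64g \
  -v "$(pwd)/Vision-Matters":/workspace/Vision-Matters \
  hiyouga/verl:ngc-th2.6.0-cu126-vllm0.8.4-flashinfer0.2.2-cxx11abi0 \
  bash
```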
Note
We recommend creating two separate environments, one for SFT/DPO (ms-swift) and one for GRPO (EasyR1), so the two stacks can run independently.
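For example, one way to keep the stacks separate is with two conda environments; the environment names and Python version below are only illustrative.

```bash
# Illustrative: one environment per training stack (names and Python version are arbitrary).
conda create -n vm-swift python=3.10 -y    # SFT / DPO via ms-swift
conda create -n vm-easyr1 python=3.10 -y   # GRPO via EasyR1
```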
Format your dataset in the same structure as Rejection-sampling/example.json, and then run the following command:
```bash
bash Rejection-sampling/rejection-sampling.sh
```
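Before launching, it can help to confirm that your file at least parses as valid JSON; the my_dataset.json path below is a placeholder for your own data file.

```bash
# Quick structural check (my_dataset.json is a placeholder path).
python -m json.tool Rejection-sampling/example.json > /dev/null && echo "example.json: valid JSON"
python -m json.tool my_dataset.json > /dev/null && echo "my_dataset.json: valid JSON"
```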
You can start the training with a single command:
```bash
# SFT Training
bash SFT-DPO-Training/run/sft.sh

# DPO Training
bash SFT-DPO-Training/run/dpo.sh

# GRPO Training
bash GRPO-Training/examples/example.sh
```
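If you need to pin a run to particular GPUs or keep a log, something like the following works for any of the scripts above; the GPU indices and log path are illustrative.

```bash
# Illustrative: restrict the run to four GPUs and capture the output in a log file.
mkdir -p logs
CUDA_VISIBLE_DEVICES=0,1,2,3 bash GRPO-Training/examples/example.sh 2>&1 | tee logs/grpo_example.log
```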
Before running the evaluation, please download the evaluation datasets from 🤗 Vision-Matters Evaluation.
Then run:
```bash
bash Evaluation/inf.sh
```
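One way to fetch the data is with the Hugging Face CLI; the repository id and local directory below are placeholders, so substitute the actual Vision-Matters Evaluation dataset id and your preferred path.

```bash
# Placeholder repo id and local path: replace <org>/<eval-dataset> with the actual
# Vision-Matters Evaluation dataset on the Hugging Face Hub.
pip install -U "huggingface_hub[cli]"
huggingface-cli download <org>/<eval-dataset> --repo-type dataset --local-dir Evaluation/data
```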
Tip
How do I merge a trained model?
Run `bash SFT-DPO-Training/run/merge.sh` for SFT/DPO checkpoints and `bash GRPO-Training/examples/merge.sh` for GRPO checkpoints.
If you use Vision-Matters or its methods in your work, please cite the following BibTeX entry:
```bibtex
@article{li2025vision,
  title={Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning},
  author={Li, Yuting and Wei, Lai and Zheng, Kaipeng and Huang, Jingyuan and Kong, Linghe and Sun, Lichao and Huang, Weiran},
  journal={arXiv preprint arXiv:2506.09736},
  year={2025}
}
```
Our work is built upon EasyR1 and ms-swift.
✨ Feel free to contribute and reach out if you have any questions! ✨