MonSter++: Unified Stereo Matching, Multi-view Stereo, and Real-time Stereo with Monodepth Priors
- [2025/9] We have open-sourced our lightweight real-time model RT-MonSter++!
- [2025/9] Weights for the RT-MonSter++ model released!
| Model | Link |
|---|---|
| KITTI 2012 | Download 🤗 |
| KITTI 2015 | Download 🤗 |
| mix_all | Download 🤗 |
The mix_all model is trained on all the datasets we collected (over 2M image pairs) and achieves the best zero-shot generalization performance.
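For orientation, a zero-shot inference run with the mix_all weights might look like the sketch below. This is a minimal sketch, not the repo's verified API: `RTMonSterPP`, the checkpoint filename `mix_all.pth`, and the forward signature are hypothetical placeholders; only the OpenCV/PyTorch preprocessing calls are standard.

```python
import cv2
import torch

# Hypothetical entry point -- the real class/function name is defined by this repo:
# from rt_monster import RTMonSterPP

def load_stereo_pair(left_path, right_path, device="cuda"):
    """Read a rectified stereo pair and convert to 1x3xHxW float tensors."""
    def to_tensor(path):
        img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
        return torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0).to(device)
    return to_tensor(left_path), to_tensor(right_path)

# model = RTMonSterPP()                                  # hypothetical constructor
# ckpt = torch.load("mix_all.pth", map_location="cpu")   # downloaded mix_all checkpoint
# model.load_state_dict(ckpt["state_dict"] if "state_dict" in ckpt else ckpt)
# model.cuda().eval()
#
# left, right = load_stereo_pair("left.png", "right.png")
# with torch.no_grad():
#     disp = model(left, right)                          # hypothetical forward signature
```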
To set up the environment (CUDA 12.1 build):

```bash
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu121
pip install tqdm scipy opencv-python scikit-image tensorboard matplotlib
pip install timm==0.6.13
pip install mmcv==2.2.0 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.4/index.html
pip install accelerate==1.0.1
pip install gradio==4.29.0 gradio_imageslider
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install openexr pyexr imath h5py swanlab
```
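Since mmcv and timm are pinned to specific torch/CUDA builds, a quick sanity check like the one below (standard introspection calls, nothing repo-specific) can confirm the environment matches the versions above:

```python
import torch

# Verify the pinned PyTorch build and that CUDA is usable.
print("torch:", torch.__version__)           # expected: 2.4.1+cu121
print("CUDA build:", torch.version.cuda)     # expected: 12.1
print("CUDA available:", torch.cuda.is_available())

# mmcv wheels must match the torch/CUDA combination they were built against.
import mmcv
print("mmcv:", mmcv.__version__)             # expected: 2.2.0

# timm is pinned because backbone definitions can shift between versions.
import timm
print("timm:", timm.__version__)             # expected: 0.6.13
```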
We achieved 1st place on both the world-wide KITTI 2012 and KITTI 2015 leaderboards.


We achieved 2nd place on the world-wide ETH3D leaderboard while maintaining the lowest inference cost, notably lower than that of the top-ranked method.

If you find our work useful in your research, please consider citing our papers:
MonSter:
```bibtex
@InProceedings{Cheng_2025_CVPR,
    author    = {Cheng, Junda and Liu, Longliang and Xu, Gangwei and Wang, Xianqi and Zhang, Zhaoxing and Deng, Yong and Zang, Jinliang and Chen, Yurui and Cai, Zhipeng and Yang, Xin},
    title     = {MonSter: Marry Monodepth to Stereo Unleashes Power},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {6273-6282}
}
```
MonSter++:
```bibtex
@article{cheng2025monster,
    title   = {MonSter++: Unified Stereo Matching, Multi-view Stereo, and Real-time Stereo with Monodepth Priors},
    author  = {Cheng, Junda and Liu, Longliang and Xu, Gangwei and Wang, Xianqi and Zhang, Zhaoxing and Deng, Yong and Zang, Jinliang and Chen, Yurui and Cai, Zhipeng and Yang, Xin},
    journal = {arXiv preprint arXiv:2501.08643},
    year    = {2025}
}
```