Ruijie Zhu,
Ziyang Song,
Li Liu,
Jianfeng He,
Tianzhu Zhang*,
Yongdong Zhang
*Corresponding author.
Deep Space Exploration Laboratory/School of Information Science and Technology,
University of Science and Technology of China
Accepted by TCSVT 2023
Robust depth estimation with the proposed hierarchical adaptive bins. Top: input RGB images from multiple datasets. Middle: depth maps predicted by our model. Bottom: cumulative probabilities over the depth bins (bars) and the depth distribution of the ground truth (splines). Note that the same model parameters and weights are used to predict depth for all of these images.
- 27 Nov. 2023: The project website was released.
- 14 Nov. 2023: The extended paper HA-Bins was accepted by TCSVT 2023.
- 23 Oct. 2022: We won second place🥈 (MixBins_RVC) on the Monocular Depth Estimation track of the ECCV 2022 workshop Robust Vision Challenge 2022.
Please refer to get_started.md for installation and dataset_prepare.md for dataset preparation.
We provide train.md and inference.md with instructions for training and inference.
If you find our work useful and use this codebase or our models in your research, please cite our work as follows.
@ARTICLE{zhu2023habins,
author={Zhu, Ruijie and Song, Ziyang and Liu, Li and He, Jianfeng and Zhang, Tianzhu and Zhang, Yongdong},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
title={HA-Bins: Hierarchical Adaptive Bins for Robust Monocular Depth Estimation Across Multiple Datasets},
year={2024},
volume={34},
number={6},
pages={4354-4366},
doi={10.1109/TCSVT.2023.3335316}}
This codebase is adapted from the Monocular-Depth-Estimation-Toolbox, an excellent depth estimation toolbox created by Zhenyu Li. Please also consider citing it.
@misc{lidepthtoolbox2022,
title={Monocular Depth Estimation Toolbox},
author={Zhenyu Li},
howpublished={\url{https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox}},
year={2022}
}