Official Code for the CVPR 2025 Paper
"[CVPR 2025] Distilling Monocular Foundation Model for Fine-grained Depth Completion"
- [2025.04.23] We have released the 2nd-stage training code! 🎉
- [2025.04.11] We have released the inference code! 🎉
- 📦 Easy-to-use data generation pipeline
- 🧠 Checkpoints trained on a larger mixed dataset
- 🤖 Inference code for SLAM applications
⚠️ Note: We are currently busy preparing the response to a journal manuscript revision 📝😅.
Thanks for your patience and continued support — we're doing our best to roll out updates as soon as we can! 🙏
DMD³C introduces a novel framework for fine-grained depth completion by distilling knowledge from monocular foundation models. This approach significantly improves depth estimation accuracy from sparse measurements, especially in regions without ground-truth supervision.
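As a rough sketch of the idea (not the paper's exact formulation; all names below are illustrative), the relative depth predicted by a monocular foundation model can be aligned to the sparse LiDAR points with a least-squares scale and shift, and the resulting dense pseudo-depth can then supervise the completion network in regions without ground truth:

```python
import torch

def align_scale_shift(mono_depth: torch.Tensor, sparse_depth: torch.Tensor) -> torch.Tensor:
    """Least-squares scale/shift that aligns relative monocular depth
    to the metric sparse LiDAR depth at the valid LiDAR points."""
    mask = sparse_depth > 0                          # valid LiDAR returns
    x, y = mono_depth[mask], sparse_depth[mask]
    A = torch.stack([x, torch.ones_like(x)], dim=1)  # [N, 2] design matrix
    scale, shift = torch.linalg.lstsq(A, y.unsqueeze(1)).solution
    return scale * mono_depth + shift                # dense metric pseudo-depth

def distillation_loss(pred: torch.Tensor, mono_depth: torch.Tensor,
                      sparse_depth: torch.Tensor) -> torch.Tensor:
    """L1 loss against the aligned monocular pseudo-depth, applied only
    where no ground-truth supervision is available."""
    pseudo = align_scale_shift(mono_depth, sparse_depth)
    no_gt = sparse_depth <= 0                        # unsupervised regions
    return (pred[no_gt] - pseudo[no_gt]).abs().mean()
```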
git clone https://github.com/kakaxi314/BP-Net.git
cp DMD3C/* BP-Net/
cd BP-Net/DMD3C/
- 📥 [Google Drive – Checkpoints] Coming soon...
Download any sequence from the KITTI Raw dataset, which includes:
- Camera intrinsics
- Velodyne point cloud
- Image sequences
Make sure the structure follows the standard KITTI format.
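For reference, a KITTI Raw sequence typically unpacks into a layout like this (drive 0001 from 2011_09_26 shown as an example):

```
2011_09_26/
├── calib_cam_to_cam.txt        # camera intrinsics
├── calib_imu_to_velo.txt
├── calib_velo_to_cam.txt
└── 2011_09_26_drive_0001_sync/
    ├── image_02/data/          # left color images (*.png)
    ├── image_03/data/          # right color images (*.png)
    ├── oxts/data/              # GPS/IMU readings
    └── velodyne_points/data/   # Velodyne point clouds (*.bin)
```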
Open demo.py and go to line 338, where you can update the input sequence path to point at your downloaded KITTI data.
# demo.py (Line 338)
sequence = "/path/to/your/kitti/sequence"
bash demo.sh
You will get results like this: