- The 2D diffusion model, pre-trained on billions of web images, can generate high-quality textures.
- The reconstruction model ensures consistency across multiple views.
- We cyclically utilize a 2D diffusion-based generation module and a feed-forward 3D reconstruction module during the multi-step diffusion process (see the sketch after this list).
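Below is a minimal sketch of how such a generation-reconstruction cycle can interleave the two modules during sampling. The function names (`denoise_step`, `reconstruct_3d`, `render_views`) and the tensor shapes are hypothetical placeholders for illustration, not the repository's actual API:

```python
# A minimal sketch of the generation-reconstruction cycle.
# All module interfaces here are hypothetical stand-ins.
import torch

def denoise_step(latents: torch.Tensor, t: int) -> torch.Tensor:
    """Placeholder for one reverse-diffusion step of a pre-trained
    2D multi-view diffusion model (hypothetical interface)."""
    return latents - 0.01 * torch.randn_like(latents)

def reconstruct_3d(views: torch.Tensor) -> torch.Tensor:
    """Placeholder for a feed-forward reconstruction model that lifts
    multi-view images to a 3D representation (hypothetical interface)."""
    return views.mean(dim=0, keepdim=True)

def render_views(rep: torch.Tensor, num_views: int) -> torch.Tensor:
    """Placeholder renderer: re-renders the 3D representation from the
    same camera poses to obtain view-consistent images (hypothetical)."""
    return rep.expand(num_views, -1, -1, -1)

def generation_reconstruction_cycle(num_views=4, num_steps=50, c=3, h=64, w=64):
    # Start from Gaussian noise in the multi-view image/latent space.
    latents = torch.randn(num_views, c, h, w)
    for t in reversed(range(num_steps)):
        # 1) Generation: the 2D diffusion prior refines each view,
        #    contributing high-quality texture.
        latents = denoise_step(latents, t)
        # 2) Reconstruction: lift the current views to a single 3D
        #    representation, enforcing cross-view consistency.
        rep = reconstruct_3d(latents)
        # 3) Re-render the 3D representation and feed the consistent
        #    views back into the next diffusion step.
        latents = render_views(rep, num_views)
    return rep

rep = generation_reconstruction_cycle()
print(rep.shape)
```

The point of the cycle is that each denoising step receives views that have just been made 3D-consistent, so the diffusion prior supplies texture detail while the reconstruction model keeps the geometry coherent across views.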
Welcome to watch 👀 this repository for the latest updates.
✅ [2024.7.28] We released our paper, Cycle3D, on arXiv.
✅ [2024.7.28] We released the project page.
- Code release.
- Online Demo.
Coming soon!
This work builds on many amazing research works and open-source projects; many thanks to all the authors for sharing!
If you find our paper and code useful in your research, please consider giving a star ⭐ and a citation.
@misc{tang2024cycle3dhighqualityconsistentimageto3d,
      title={Cycle3D: High-quality and Consistent Image-to-3D Generation via Generation-Reconstruction Cycle},
      author={Zhenyu Tang and Junwu Zhang and Xinhua Cheng and Wangbo Yu and Chaoran Feng and Yatian Pang and Bin Lin and Li Yuan},
      year={2024},
      eprint={2407.19548},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.19548},
}