We present GSD, a diffusion-model approach built on the Gaussian Splatting (GS) representation for 3D object reconstruction from a single view. Prior works suffer from inconsistent 3D geometry or mediocre rendering quality due to ill-suited representations. We take a step towards resolving these shortcomings by leveraging the recent state-of-the-art explicit 3D representation, Gaussian Splatting, together with an unconditional diffusion model. The model learns to generate 3D objects represented as sets of GS ellipsoids. With these strong generative 3D priors, the diffusion model, although trained unconditionally, is ready for view-guided reconstruction without any further fine-tuning. This is achieved by propagating fine-grained 2D features through the efficient yet flexible splatting function and the guided denoising sampling process. In addition, a 2D diffusion model is employed to enhance rendering fidelity and to improve the quality of the reconstructed GS by polishing and re-using the rendered images. The final reconstructed objects come with explicit, high-quality 3D structure and texture and can be rendered efficiently from arbitrary views. Experiments on the challenging real-world CO3D dataset demonstrate the superiority of our approach.
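To make the view-guided sampling step concrete, the following is a minimal sketch of guided DDIM-style denoising over a set of GS ellipsoid parameters. It assumes a pretrained unconditional noise predictor `model`, a differentiable splatting renderer `render`, and a cumulative noise schedule `alpha_bar`; all names, shapes, and the `guidance_scale` parameter are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn.functional as F

def guided_ddim_sample(model, render, view_img, camera, alpha_bar,
                       num_gaussians=1024, gs_dim=14, guidance_scale=1.0):
    """Guided denoising over a noisy set of GS ellipsoid parameters.

    model(x_t, t)      -> predicted noise eps for the noisy GS set x_t
    render(x0, camera) -> differentiable splatted image of a clean GS set
    alpha_bar          -> 1-D tensor of cumulative alphas, descending toward 0
    gs_dim             -> per-ellipsoid parameters (e.g. position, scale,
                          rotation, opacity, color); 14 is a placeholder
    """
    x = torch.randn(num_gaussians, gs_dim)  # start from pure noise
    for t in range(len(alpha_bar) - 1, -1, -1):
        with torch.no_grad():
            eps = model(x, t)               # unconditional noise estimate
        a_t = alpha_bar[t]
        # Tweedie estimate of the clean GS set x_0 from the noisy sample.
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()

        # View guidance: render the predicted clean GS set through the
        # differentiable splatting function and backpropagate a photometric
        # loss against the single input view into the GS parameters.
        x0 = x0.detach().requires_grad_(True)
        loss = F.mse_loss(render(x0, camera), view_img)
        grad, = torch.autograd.grad(loss, x0)
        x0 = (x0 - guidance_scale * grad).detach()

        # Deterministic DDIM step toward t-1 using the guided x_0 estimate.
        a_prev = alpha_bar[t - 1] if t > 0 else torch.ones(())
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
    return x                                # reconstructed GS ellipsoids
```

Because the guidance signal enters only through the gradient of a rendering loss, the unconditional prior needs no retraining: any differentiable comparison between the splatted image and the input view can steer the sampler.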
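The abstract's final polishing stage can likewise be sketched as a render-polish-refit loop. Here `diffusion_2d` stands in for an image-to-image 2D diffusion refiner (SDEdit-style partial noising and denoising); the function names, the `noise_level` parameter, and the optimization hyperparameters are all assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def polish_and_refit(gaussians, render, diffusion_2d, cameras,
                     noise_level=0.3, lr=1e-3, iters=200):
    """Improve reconstructed GS quality by polishing and re-using renderings.

    gaussians   -> tensor of reconstructed GS ellipsoid parameters
    diffusion_2d(img, strength) -> 2D diffusion model that enhances a rendering
    cameras     -> viewpoints from which to render and refit
    """
    # 1) Render the current GS object from several viewpoints and let the
    #    2D diffusion model polish each rendering.
    with torch.no_grad():
        targets = [diffusion_2d(render(gaussians, cam), strength=noise_level)
                   for cam in cameras]

    # 2) Re-use the polished images as pseudo ground truth, refining the
    #    GS parameters through the differentiable splatting function.
    params = gaussians.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = sum(F.mse_loss(render(params, cam), tgt)
                   for cam, tgt in zip(cameras, targets))
        loss.backward()
        opt.step()
    return params.detach()
```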