2409.07245.md

Single-View 3D Reconstruction via SO(2)-Equivariant Gaussian Sculpting Networks

This paper introduces SO(2)-Equivariant Gaussian Sculpting Networks (GSNs), an approach to SO(2)-equivariant 3D object reconstruction from single-view image observations. A GSN takes a single observation as input and generates a Gaussian splat representation describing the observed object's geometry and texture. By using a shared feature extractor before decoding Gaussian colors, covariances, positions, and opacities, GSNs achieve extremely high throughput (over 150 FPS). Experiments demonstrate that GSNs can be trained efficiently using a multi-view rendering loss and are competitive in quality with far more expensive diffusion-based reconstruction algorithms. The GSN model is validated on multiple benchmark experiments. Moreover, the authors demonstrate the potential for GSNs to be used within a robotic manipulation pipeline for object-centric grasping.

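The "shared feature extractor before separate decoding heads" design can be sketched in a few lines. This is a hypothetical NumPy illustration, not the authors' implementation: the dimensions, the single linear layer standing in for the real backbone, and the 6-parameter covariance encoding are all assumptions made for the example.

```python
# Hypothetical sketch (not the paper's code): one shared feature extractor
# feeding four lightweight heads, one per Gaussian-splat attribute.
# Sharing the expensive computation across heads is what enables the
# high throughput the abstract describes.
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

N, D_IN, D_FEAT = 256, 128, 64  # gaussians, input dim, shared feature dim (assumed)

# Shared feature extractor (a single linear layer stands in for the backbone).
w_shared, b_shared = rng.normal(size=(D_IN, D_FEAT)), np.zeros(D_FEAT)

# One small head per attribute; all consume the same shared features.
heads = {
    "positions":   (rng.normal(size=(D_FEAT, 3)), np.zeros(3)),
    "colors":      (rng.normal(size=(D_FEAT, 3)), np.zeros(3)),
    "covariances": (rng.normal(size=(D_FEAT, 6)), np.zeros(6)),  # 6 params of a symmetric 3x3
    "opacities":   (rng.normal(size=(D_FEAT, 1)), np.zeros(1)),
}

def decode_gaussians(x):
    feats = np.tanh(linear(x, w_shared, b_shared))  # shared computation, done once
    out = {name: linear(feats, w, b) for name, (w, b) in heads.items()}
    out["opacities"] = 1.0 / (1.0 + np.exp(-out["opacities"]))  # squash to (0, 1)
    return out

splat = decode_gaussians(rng.normal(size=(N, D_IN)))
```

The key design point is that the per-head cost is tiny compared to the shared extractor, so adding attribute heads barely affects inference speed.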