ConTex-Human

Paper PDF · Project Page

Accepted as a CVPR 2024 Paper.

Official implementation of ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis. Code will be released.

Xiangjun Gao · Xiaoyu Li · Chaopeng Zhang · Qi Zhang · Yanpei Cao · Ying Shan · Long Quan

HKUST, Tencent AI Lab

Abstract: In this work, we propose a method to address the challenge of rendering a 3D human from a single image in a free-view manner. Existing approaches achieve this either by using generalizable pixel-aligned implicit fields to reconstruct a textured human mesh, or by employing a 2D diffusion model as guidance with Score Distillation Sampling (SDS) to lift the 2D image into 3D space. However, a generalizable implicit field often produces an over-smooth texture field, while the SDS method tends to yield novel views whose texture is inconsistent with the input image. In this paper, we introduce a texture-consistent back view synthesis module that transfers the reference image content to the back view through depth- and text-guided attention injection. Moreover, to alleviate the color distortion that occurs in the side regions, we propose a visibility-aware patch consistency regularization for texture mapping and refinement, combined with the synthesized back view texture. With these techniques, we achieve high-fidelity and texture-consistent human rendering from a single image. Experiments conducted on both real and synthetic data demonstrate the effectiveness of our method and show that it outperforms previous baseline methods.
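Since the official code has not been released yet, here is a minimal, hypothetical sketch of what a visibility-aware patch consistency term could look like, purely to illustrate the idea of down-weighting poorly visible side regions during texture refinement. The function name, patch size, and weighting scheme are assumptions, not the authors' implementation.

```python
# Hypothetical sketch (NOT the released code): a visibility-weighted patch
# consistency loss in the spirit of the abstract. Names and the exact
# weighting scheme are assumptions for illustration only.
import torch
import torch.nn.functional as F

def visibility_aware_patch_loss(rendered, reference, visibility, patch_size=8):
    """Penalize color drift between rendered and reference patches,
    down-weighting patches that are poorly visible in the reference view.

    rendered, reference: (B, 3, H, W) images in [0, 1]
    visibility:          (B, 1, H, W) per-pixel visibility in [0, 1]
    """
    # Average each quantity over non-overlapping patches.
    pool = lambda x: F.avg_pool2d(x, patch_size, stride=patch_size)
    rendered_p = pool(rendered)
    reference_p = pool(reference)
    vis_p = pool(visibility)

    # L1 difference per patch, weighted by how visible the patch is.
    per_patch = (rendered_p - reference_p).abs().mean(dim=1, keepdim=True)
    return (vis_p * per_patch).sum() / (vis_p.sum() + 1e-6)
```

Weighting by visibility means well-observed front/back patches dominate the loss, while grazing-angle side patches contribute less, which is one plausible way to reduce the side-region color distortion the paper describes.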

Comparison with SOTA

comp_tech.mp4

Method Overview

BibTeX

@misc{gao2023contexhuman,
      title={ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis}, 
      author={Xiangjun Gao and Xiaoyu Li and Chaopeng Zhang and Qi Zhang and Yanpei Cao and Ying Shan and Long Quan},
      year={2023},
      eprint={2311.17123},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}