FaceShot: Bring Any Character into Life
Junyao Gao, Yanan Sun‡ *, Fei Shen, Xin Jiang, Zhening Xing, Kai Chen*, Cairong Zhao*
(* corresponding authors, ‡ project leader)
Bringing characters like a teddy bear to life requires a bit of magic. FaceShot makes that magic a reality with a training-free portrait animation framework that can animate any character from any driving video, and it is especially suited to non-human characters such as emojis and toys.
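At the heart of such cross-domain animation is transferring facial landmarks from the driving face to an arbitrary character. A minimal conceptual sketch of one way this can work — nearest-neighbor matching in a shared per-pixel feature space — is shown below. This is NOT the actual FaceShot implementation; the function name, the plain cosine-similarity matching rule, and the assumption that feature maps are precomputed are all illustrative stand-ins:

```python
import numpy as np

def match_landmarks(src_feats, tgt_feats, src_landmarks):
    """Map landmark pixels on the driving face to their best-matching
    pixels on the target character.

    src_feats:     (H, W, C) per-pixel features of the driving face
    tgt_feats:     (H, W, C) per-pixel features of the target character
    src_landmarks: (N, 2) integer (row, col) landmark positions
    Returns an (N, 2) array of matched (row, col) positions.
    """
    H, W, C = tgt_feats.shape
    # Flatten and L2-normalize target features so dot products are cosines.
    tgt_flat = tgt_feats.reshape(-1, C)
    tgt_flat = tgt_flat / (np.linalg.norm(tgt_flat, axis=1, keepdims=True) + 1e-8)

    matched = []
    for r, c in src_landmarks:
        q = src_feats[r, c]
        q = q / (np.linalg.norm(q) + 1e-8)
        sims = tgt_flat @ q                  # cosine similarity to every target pixel
        idx = int(np.argmax(sims))           # best-matching flat index
        matched.append((idx // W, idx % W))  # back to (row, col)
    return np.array(matched)
```

With identical feature maps every landmark maps to itself; the hard part in practice is that the driving face and the character come from very different visual domains, which is where semantically meaningful features (cf. the DIFT work this repo builds on) come in.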
Your star is our fuel! We're revving up the engines with it!
- [2025/1/23] 🔥 We release the code, project page, and paper.

To-do:
- [ ] Preprocessing script for pre-storing target images and the appearance gallery.
- [ ] Appearance gallery.
- [ ] Gradio demo.
Showcases: Toy Character · 2D Anime Character · 3D Anime Character · Animal Character.
```shell
git clone https://github.com/Jeoyal/FaceShot.git
cd ./FaceShot
```
This setup has been tested with CUDA 12.4.

```shell
conda create -n faceshot python=3.10
conda activate faceshot
pip install -r requirements.txt
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install "git+https://github.com/XPixelGroup/BasicSR.git"
```
- Download the CMP checkpoint from here and put it into `./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.
- Download the `ckpts` folder, which contains the necessary pretrained checkpoints, from the huggingface repo and put it under `./ckpts`. You may use `git lfs` to download the entire `ckpts` folder.
```shell
chmod 777 inference.sh
./inference.sh
```
All assets and code are under the license unless specified otherwise.
If this work is helpful for your research, please consider citing the following BibTeX entry.
```bibtex
@article{gao2024faceshot,
  title={FaceShot: Bring Any Character into Life},
  author={Gao, Junyao and Sun, Yanan and Shen, Fei and Jiang, Xin and Xing, Zhening and Chen, Kai and Zhao, Cairong},
  journal={arXiv preprint},
  year={2025}
}
```
The code is built upon MOFA-Video and DIFT.