Tobigticon is a service that generates your own animated emoticons using a GAN-based image2video method.
Our model, trained in an unsupervised manner, generates an emotional video from an image without ground truth, through network blending parameters and landmark generation.
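The style transformation relies on network blending between StyleGAN2 generators trained on different domains (see the references below). The sketch below only illustrates the idea of layer swapping; the checkpoint names, the `g_ema` key, the `convs.<i>` key naming, and the cutoff index follow rosinality's stylegan2-pytorch convention and are assumptions, not this repository's exact code.

```python
import torch

def blend_generators(base_sd, styled_sd, swap_from=4):
    """Take early convolution blocks from one generator and later blocks from the other."""
    blended = dict(base_sd)
    for key, value in styled_sd.items():
        parts = key.split(".")
        if parts[0] == "convs" and len(parts) > 1 and parts[1].isdigit() and int(parts[1]) >= swap_from:
            blended[key] = value
    return blended

# Hypothetical checkpoint files; not shipped with this repository.
base = torch.load("ffhq.pt", map_location="cpu")["g_ema"]
styled = torch.load("animation.pt", map_location="cpu")["g_ema"]
generator_weights = blend_generators(base, styled)
```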
EmoGE'T, short for Emoticon GEnerated by Tobigs, is a team of 8 Tobigs members who worked on this image2video project.
Make your own moving emoticons with Tobigticons!
Choose from the options below, and the model creates your own animated emoticon.
Select an Image Style
- Animation
- Baby
- Painting
Select an Emotion
- Happiness
- Disgusted
- Surprised
You can find more information about the web demo here.
We have tested on:
- CUDA 11.0
- python 3.8.5
- pytorch 1.7.1
- numpy 1.19.2
- opencv-python 4.5.1
- dlib 19.21.1
- scikit-learn 0.24.0
- Pillow 8.1.0
- Ninja 1.10.0
- glob2 0.7
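A quick way to confirm these dependencies are in place is to print their versions from Python (a minimal sanity-check sketch; the versions listed above are what we tested with, not hard requirements):

```python
# Print the installed versions of the main dependencies to compare with the list above.
import torch, numpy, cv2, dlib, sklearn, PIL

print("pytorch      :", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("numpy        :", numpy.__version__)
print("opencv-python:", cv2.__version__)
print("dlib         :", dlib.__version__)
print("scikit-learn :", sklearn.__version__)
print("Pillow       :", PIL.__version__)
```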
You can generate your own moving emoticon :)
```
python emoticon_generate.py --file ImagePath --transform Animation --emotion Emotion --type OutputType
```
For example,
```
python emoticon_generate.py --file 00001.jpg --transform baby --emotion disgusted --type mp4
```
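The script handles the output internally; purely as an illustration of the `--type mp4` option, generated frames can be written to a video file with OpenCV (a listed dependency). The frame size, FPS, and placeholder frames below are assumptions:

```python
import cv2
import numpy as np

# Hypothetical stand-in for the frames produced by the generator:
# a list of (H, W, 3) uint8 BGR images.
frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(16)]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")              # codec for .mp4 output
writer = cv2.VideoWriter("emoticon.mp4", fourcc, 8.0, (256, 256))
for frame in frames:
    writer.write(frame)
writer.release()
```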
Train the landmark generation model using sol1.
```
python sol1/train.py --data_path DataPath --conditions Conditions
```
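The exact data format expected by `sol1/train.py` is not documented here, but facial landmarks of the kind the model learns to animate can be extracted with dlib (a listed dependency). The sketch below uses dlib's standard 68-point predictor; the model file is dlib's publicly released asset, not something shipped with this repository:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard dlib 68-point model; download it separately.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image_path):
    """Return a (68, 2) array of facial landmark coordinates, or None if no face is found."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)

landmarks = extract_landmarks("00001.jpg")
```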
Train the video generation model using sol2.
```
# Train the model
python sol2/train.py --image_discriminator PatchImageDiscriminator --video_discriminator CategoricalVideoDiscriminator --dim_z_category 3 --video_length 16
# Generate the video using the model
python sol2/generate_videos.py [model path] [image] [class] [save_path]
```
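sol2 follows the MoCoGAN setup referenced below, where each frame's latent code combines a video-level content code, a per-frame motion code, and an emotion category; `--dim_z_category 3` and `--video_length 16` in the command above correspond to the category size and frame count. The sketch below only illustrates that latent decomposition; the content and motion dimensions are assumptions:

```python
import torch

# Dimensions other than dim_z_category=3 and video_length=16 are assumptions.
dim_z_content, dim_z_motion, dim_z_category, video_length = 50, 10, 3, 16

# One content code shared by every frame of the video.
z_content = torch.randn(1, dim_z_content).repeat(video_length, 1)
# One motion code per frame (MoCoGAN produces these with a recurrent network).
z_motion = torch.randn(video_length, dim_z_motion)
# One-hot emotion category (3 classes: happiness / disgusted / surprised).
category = torch.zeros(video_length, dim_z_category)
category[:, 1] = 1.0  # e.g. "disgusted"

# Per-frame latent fed to the image generator: [content | category | motion].
z = torch.cat([z_content, category, z_motion], dim=1)  # shape (16, 63)
```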
Blend style | Generate video |
---|---|
- Rosinality, stylegan2-pytorch, 2019, https://github.com/rosinality/stylegan2-pytorch
- PieraRiccio, stylegan2-pytorch, 2019, https://github.com/PieraRiccio/stylegan2-pytorch
- justinpinkney, toonify, 2020, https://github.com/justinpinkney/toonify
- marsbroshok, face-replace, 2016, https://github.com/marsbroshok/face-replace
- sergeytulyakov, mocogan, 2017, https://github.com/sergeytulyakov/mocogan
- Yaohui Wang, Piotr Bilinski, Francois Bremond, Antitza Dantcheva. ImaGINator: Conditional Spatio-Temporal GAN for Video Generation. 2019.
- MinJung Shin
- YeJi Lee
- YuMin Lee
- Hyebin Choi
- MinKyeong Kim
- SangHyeon Kim
- JaeYoon Jeong
- YuJin Han