An attempt to solve the hairstyle-transfer problem with diffusion models, using the Hugging Face Diffusers pipeline for the RePaint paper.
RePaint Paper link : https://arxiv.org/abs/2201.09865
This repo has been forked from Hugging Face Diffusers; changes have been made to the RePaint pipeline and scheduling scripts.
Using the Hairstyle_transfer_repaint_random.ipynb notebook, you can generate multiple hairstyles for a given face image. You can produce further variations by tweaking the generator seed and the eta parameter.
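The roles of the seed and eta can be illustrated with a small NumPy toy of a DDIM-style reverse step (the schedule values and noise-prediction stand-in here are illustrative, not the pipeline's actual UNet or API): eta = 0 makes the reverse step deterministic, so the seed has no effect, while eta > 0 injects fresh noise scaled by eta, so different seeds give different samples.

```python
import numpy as np

def reverse_step(x, eta, rng, alpha_bar_t=0.5, alpha_bar_prev=0.9):
    """One DDIM-style reverse-diffusion step (toy illustration).

    eta scales the stochastic term: eta=0 is fully deterministic,
    eta=1 recovers DDPM-like stochasticity.
    """
    # stand-in for the predicted noise; a real pipeline gets this from the UNet
    eps = x * 0.1
    x0_pred = (x - np.sqrt(1 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    sigma = eta * np.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar_t)) \
                * np.sqrt(1 - alpha_bar_t / alpha_bar_prev)
    mean = np.sqrt(alpha_bar_prev) * x0_pred \
         + np.sqrt(1 - alpha_bar_prev - sigma**2) * eps
    return mean + sigma * rng.standard_normal(x.shape)

x = np.ones((4, 4))
a = reverse_step(x, eta=0.0, rng=np.random.default_rng(0))
b = reverse_step(x, eta=0.0, rng=np.random.default_rng(1))
c = reverse_step(x, eta=1.0, rng=np.random.default_rng(0))
d = reverse_step(x, eta=1.0, rng=np.random.default_rng(1))
print(np.allclose(a, b))  # True: with eta=0 the seed does not matter
print(np.allclose(c, d))  # False: with eta>0 the seed changes the sample
```

In the notebook, the same idea applies: fixing the seed with eta = 0 reproduces one result, while raising eta and varying the seed explores different hairstyles.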
Using the Hairstyle_transfer_repaint.ipynb notebook, you can transfer a hairstyle from a hair image onto a source (face) image. The RePaint inference algorithm has been modified to make the blend at the face/hair boundary smooth and indistinguishable. The changes are:
- Instead of starting from a Gaussian random-noise image, we take the hair image and add noise to it for about 50 steps.
- When harmonizing sampling begins, the first denoising step of each pass blends in the face image.
- While that first step adds the face, the remaining reverse-diffusion steps of the transition only denoise.
- The part of harmonizing sampling where noise is added back to the image remains unchanged.
- After the harmonizing sampling period, additional noise-then-denoise rounds (in that order) are added to further smooth and blend the two images.

(Transition/harmonizing timeline)
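The modified schedule described above can be sketched as a NumPy toy. Everything here is illustrative (the beta schedule, the step counts, and especially `denoise_step`, which stands in for the UNet call in the real Diffusers pipeline); only the control flow mirrors the changes listed:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-ins: a "hair" image, a "face" image, and a mask that is 1 on
# the known face region and 0 on the hair region to be painted
hair = rng.random((8, 8))
face = rng.random((8, 8))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0

# cumulative alphas of an illustrative linear beta schedule
betas = np.linspace(1e-4, 0.02, 1000)
alpha_bars = np.cumprod(1.0 - betas)

def add_noise(x0, t):
    """Forward-diffuse a clean image to timestep t, i.e. sample q(x_t | x_0)."""
    return np.sqrt(alpha_bars[t]) * x0 \
         + np.sqrt(1 - alpha_bars[t]) * rng.standard_normal(x0.shape)

def denoise_step(x_t, t):
    """Stand-in for one reverse step; the real pipeline calls the UNet here."""
    return x_t * np.sqrt(alpha_bars[t - 1] / alpha_bars[t]) if t > 0 else x_t

# 1) start from the hair image noised to ~step 50, not from pure Gaussian noise
t_start = 50
x = add_noise(hair, t_start)

# 2) harmonizing sampling: only the FIRST denoise step of each pass pastes
#    the (noised) face into the known region; the rest only denoise
jump_len, n_passes = 5, 4
for _ in range(n_passes):
    t = t_start
    for i in range(jump_len):
        if i == 0:
            known = add_noise(face, t)
            x = mask * known + (1 - mask) * x  # blend face into known region
        x = denoise_step(x, t)
        t -= 1
    # re-noise back up before the next pass (unchanged from vanilla RePaint)
    for _ in range(jump_len):
        t += 1
        x = np.sqrt(1 - betas[t]) * x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)

# 3) final descent to t=0; extra noise/denoise rounds to smooth the boundary
#    would be appended here in the same noise-then-denoise pattern
while t > 0:
    x = denoise_step(x, t)
    t -= 1
```

The key departure from stock RePaint is step 2: the known (face) region is injected only once per resampling pass instead of at every denoise step, which is what softens the boundary between the two images.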
Even though it works well for the hair and source images above, I've found instances where this approach does not work well. Reasons I suspect are:
- Alignment between the hair and source images needs to be spot on.
- Skin tone: if there is a clearly visible difference, the method fails, showing a clear boundary between hair and face.
Dependencies:
- Face Alignment
- Dlib
- OpenCV
- Diffusers
- Matplotlib
- Pillow
- PyTorch
- NumPy