
Model change to use GPU properly #286

Closed

Conversation

@jasonkneen (Contributor) commented Aug 13, 2024

Summary by Sourcery

Change the face swapper model to use a non-fp16 ONNX file for better GPU support.

Enhancements:

  • Update the model file used for face swapping to a non-fp16 version to improve GPU compatibility.

sourcery-ai bot (Contributor) commented Aug 13, 2024

Reviewer's Guide by Sourcery

This pull request modifies the face swapper module to use a different model file. The changes involve updating the model file name from 'inswapper_128_fp16.onnx' to 'inswapper_128.onnx' in two locations within the 'face_swapper.py' file. This change likely affects how the model utilizes the GPU for processing.

File-Level Changes

Files Changes
modules/processors/frame/face_swapper.py Updated the model file name in the conditional download function from 'inswapper_128_fp16.onnx' to 'inswapper_128.onnx'
modules/processors/frame/face_swapper.py Modified the model path in the get_face_swapper function to use 'inswapper_128.onnx' instead of 'inswapper_128_fp16.onnx'
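Put together, the two changed call sites would look roughly like the sketch below. Only the model filename change is taken from this PR; the path helper body is an illustrative stand-in, and conditional_download is the project's helper (not defined here):

```python
import os

def resolve_relative_path(path: str) -> str:
    # Stand-in: the real helper resolves relative to the module's own directory.
    return os.path.normpath(os.path.join('modules/processors/frame', path))

# Full-precision model instead of inswapper_128_fp16.onnx.
MODEL_URL = 'https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128.onnx'

def pre_check() -> bool:
    # Fetch the non-fp16 model into ../models if it is not already there.
    download_directory_path = resolve_relative_path('../models')
    conditional_download(download_directory_path, [MODEL_URL])  # project helper
    return True

def get_face_swapper():
    # Point the swapper at the non-fp16 model file.
    model_path = resolve_relative_path('../models/inswapper_128.onnx')
    return model_path  # the real function loads this model with insightface
```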


sourcery-ai bot (Contributor) left a comment:

Hey @jasonkneen - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Could you provide more context on why this change from 'inswapper_128_fp16.onnx' to 'inswapper_128.onnx' was necessary? It would be helpful to understand the reasoning behind moving away from the fp16 version, and if you've tested the performance impact of this change. Given the PR title mentions 'use GPU properly,' could you elaborate on how this change improves GPU utilization?
Here's what I looked at during the review
  • 🟡 General issues: 1 issue found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


```diff
@@ -17,7 +17,7 @@

 def pre_check() -> bool:
     download_directory_path = resolve_relative_path('../models')
-    conditional_download(download_directory_path, ['https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128_fp16.onnx'])
+    conditional_download(download_directory_path, ['https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128.onnx'])
```
sourcery-ai bot (Contributor):
question (performance): Can you explain the rationale behind changing from the FP16 model to the standard precision model?

This change might affect memory usage and inference speed. What considerations led to this decision, and have you evaluated its impact on the system's performance?
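For context on the memory side of this question: fp16 weights occupy exactly half the space of fp32 weights, which is the main trade-off being weighed. A back-of-envelope sketch (the tensor shape is illustrative, not the actual inswapper layout):

```python
# Memory footprint of one weight tensor at two precisions.
num_elements = 128 * 128 * 3 * 3  # illustrative convolution weight shape

bytes_fp32 = num_elements * 4  # float32: 4 bytes per weight
bytes_fp16 = num_elements * 2  # float16: 2 bytes per weight

print(bytes_fp32, bytes_fp16)  # fp16 is exactly half the size
```

Whether that halving translates into faster inference depends on the execution provider, which is why the reviewer's question about measured impact is worth answering.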

jasonkneen (Contributor, Author):

By changing it you lose little to no performance or quality, and it utilises the GPU properly.

sourcery-ai bot (Contributor):

Thank you for the clarification, Jason. It's good to know that the performance and quality impact is minimal while gaining proper GPU utilization.

To ensure that this change is well-documented and understood by future maintainers, could you please add a comment in the code or update the documentation to reflect this rationale? This will help others understand why the non-fp16 model was chosen and the benefits it brings.
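The requested note could be as small as a comment beside the model path. The wording below is only a suggestion, and the path helper is a hypothetical stand-in so the snippet runs on its own:

```python
# Hypothetical stand-in for the project's path helper, so this snippet runs alone.
def resolve_relative_path(path: str) -> str:
    return path

# inswapper_128.onnx (full precision) is used instead of inswapper_128_fp16.onnx:
# the fp16 variant did not utilise the GPU properly, and the quality/performance
# difference is negligible.
model_path = resolve_relative_path('../models/inswapper_128.onnx')
```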

jasonkneen (Contributor, Author):

Sure

sourcery-ai bot (Contributor):

Thank you, Jason. Adding that comment or updating the documentation will be very helpful for future maintainers. If you need any assistance with the wording or placement of the comment, feel free to ask. Once that's done, I believe this change will be well-supported and clear for anyone working on the project in the future.

@leihuayi

I was also thinking of making a PR to change the download link of the GPU model, but keeping an fp16 version: https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128_fp16.onnx
Indeed, the default link does not work (see issues #717, #697).

Are you sure that not using fp16 doesn't slow down the processing time a lot?
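The alternative suggested above — keep fp16 but download it from the working facefusion-assets mirror — would look roughly like this. The URL is taken verbatim from the comment; the conditional_download body is an illustrative stand-in for the project's helper:

```python
FP16_MODEL_URL = ('https://github.com/facefusion/facefusion-assets/'
                  'releases/download/models/inswapper_128_fp16.onnx')

def conditional_download(directory: str, urls: list[str]) -> list[str]:
    # Stand-in: the real helper skips files that already exist, then downloads.
    planned = [f'{directory}/{url.rsplit("/", 1)[-1]}' for url in urls]
    return planned  # paths the real helper would write to

paths = conditional_download('../models', [FP16_MODEL_URL])
```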

@rotatorotator

Can you please explain why this is better for the GPU? Shouldn't a non-FP16 model be slower?

@KRSHH KRSHH closed this Nov 22, 2024