Model change to use GPU properly #286
Conversation
Reviewer's Guide by Sourcery
This pull request modifies the face swapper module to use a different model file. The change updates the model file name from 'inswapper_128_fp16.onnx' to 'inswapper_128.onnx' in two locations within 'face_swapper.py'. This change likely affects how the model utilizes the GPU for processing.
Hey @jasonkneen - I've reviewed your changes - here's some feedback:
Overall Comments:
- Could you provide more context on why this change from 'inswapper_128_fp16.onnx' to 'inswapper_128.onnx' was necessary? It would be helpful to understand the reasoning behind moving away from the fp16 version, and if you've tested the performance impact of this change. Given the PR title mentions 'use GPU properly,' could you elaborate on how this change improves GPU utilization?
Here's what I looked at during the review
- 🟡 General issues: 1 issue found
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment to tell me if it was helpful.
@@ -17,7 +17,7 @@
def pre_check() -> bool:
    download_directory_path = resolve_relative_path('../models')
    conditional_download(download_directory_path, ['https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128_fp16.onnx'])
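For context, a minimal self-contained sketch of what the updated pre_check looks like after the change, with the non-fp16 model URL from the PR. The helper bodies here are simplified stand-ins for the repository's own resolve_relative_path and conditional_download, written only so the snippet runs on its own; they are not the project's actual implementations.

```python
import os
import urllib.request

# Non-fp16 model URL, as changed in this PR.
MODEL_URL = 'https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128.onnx'

def resolve_relative_path(path: str) -> str:
    # Stand-in for the repository helper: resolve a path relative to this file.
    return os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), path))

def conditional_download(directory: str, urls: list) -> None:
    # Stand-in for the repository helper: download each file only if it is
    # not already present in the target directory.
    os.makedirs(directory, exist_ok=True)
    for url in urls:
        target = os.path.join(directory, os.path.basename(url))
        if not os.path.exists(target):
            urllib.request.urlretrieve(url, target)

def pre_check() -> bool:
    # The non-fp16 model is used here; per the PR discussion, this costs
    # little to no performance or quality and utilises the GPU properly.
    download_directory_path = resolve_relative_path('../models')
    conditional_download(download_directory_path, [MODEL_URL])
    return True
```

The same file-name substitution is applied at the second location in face_swapper.py where the model is loaded.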
question (performance): Can you explain the rationale behind changing from the FP16 model to the standard precision model?
This change might affect memory usage and inference speed. What considerations led to this decision, and have you evaluated its impact on the system's performance?
By changing it you lose little to no performance or quality, and it utilises the GPU properly.
Thank you for the clarification, Jason. It's good to know that the performance and quality impact is minimal while gaining proper GPU utilization.
To ensure that this change is well-documented and understood by future maintainers, could you please add a comment in the code or update the documentation to reflect this rationale? This will help others understand why the non-fp16 model was chosen and the benefits it brings.
Sure
Thank you, Jason. Adding that comment or updating the documentation will be very helpful for future maintainers. If you need any assistance with the wording or placement of the comment, feel free to ask. Once that's done, I believe this change will be well-supported and clear for anyone working on the project in the future.
This reverts commit f1c158c.
I was also thinking of making a PR to change the download link of the GPU model, but keeping an fp16 version: are you sure that not using fp16 does not slow down the processing time a lot?
Can you please explain why this is better for the GPU? Shouldn't a non-FP16 model be slower?
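The questions above about fp16 versus full-precision speed are empirical and easy to settle with a measurement. A hedged, generic micro-benchmark harness one could use to compare the two models is sketched below; the session objects and inputs in the usage comment are hypothetical and assume the actual models and onnxruntime are available, which is not shown here.

```python
import time

def benchmark(fn, warmup: int = 3, runs: int = 10) -> float:
    """Return the average seconds per call for fn, after warm-up runs.

    Warm-up calls are excluded from timing so one-time costs (kernel
    compilation, memory allocation on the GPU) do not skew the average.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Hypothetical usage, assuming both models are downloaded and loaded into
# onnxruntime sessions (names below are illustrative, not from the repo):
#   swap_fp16 = lambda: session_fp16.run(None, inputs)
#   swap_fp32 = lambda: session_fp32.run(None, inputs)
#   print('fp16:', benchmark(swap_fp16), 'fp32:', benchmark(swap_fp32))
```

Whether fp16 is faster depends on the execution provider and hardware; measuring both variants on the target GPU is the only reliable answer.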
Summary by Sourcery
Change the face swapper model to use a non-fp16 ONNX file for better GPU support.
Enhancements: