
Bugfix: use device in all Torch models #5026

Open

jacobsela wants to merge 5 commits into develop

Conversation

@jacobsela commented Oct 31, 2024

What changes are proposed in this pull request?

Make the CLIP zoo model work on all GPUs in a system.

How is this patch tested? If it is not, please explain why.

I ran embeddings with the model on GPUs other than 'cuda:0'.
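
For illustration, a minimal sketch of that test scenario (the dataset and the specific device index are assumptions for the example, not taken from the PR):

import fiftyone.zoo as foz

# Load the CLIP zoo model on a non-default GPU (assumes a machine with at least two GPUs)
dataset = foz.load_zoo_dataset("quickstart")
model = foz.load_zoo_model("clip-vit-base32-torch", device="cuda:1")

# With this fix, intermediate tensors follow the configured device instead of cuda:0
embeddings = dataset.compute_embeddings(model)
print(embeddings.shape)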

Release Notes

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release
    notes for FiftyOne users.

What areas of FiftyOne does this PR affect?

  • App: FiftyOne application changes
  • Build: Build and test infrastructure changes
  • Core: Core fiftyone Python library changes
  • Documentation: FiftyOne documentation changes
  • Other - fiftyone.zoo

Summary by CodeRabbit

  • Chores
    • Enhanced device management for model processing, improving compatibility with CPU and GPU configurations.

coderabbitai bot (Contributor) commented Oct 31, 2024

Walkthrough

The changes involve modifications to the device management in the TorchOpenClipModel and TorchYoloNasModel classes within the fiftyone/utils/open_clip.py and fiftyone/utils/super_gradients.py files, respectively. The updates replace direct calls to .cuda() with .to(self.device) for moving tensors and models to the appropriate device, enhancing compatibility across different hardware configurations.
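
As a quick illustration of that pattern (a simplified sketch, not the actual FiftyOne code):

import torch

# Old pattern: imgs = imgs.cuda()          -> always targets the default GPU (cuda:0)
# New pattern: imgs = imgs.to(self.device) -> honors whatever device was configured
device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")

imgs = torch.randn(2, 3, 224, 224)  # stand-in for a preprocessed image batch
imgs = imgs.to(device)
print(imgs.device)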

Changes

  • fiftyone/utils/open_clip.py: Updated the _get_text_features, _embed_prompts, and _predict_all methods to use text.to(self.device) and imgs.to(self.device) for device management.
  • fiftyone/utils/super_gradients.py: Modified the _load_model method to use model.to(self.device) for transferring the model to the appropriate device.

Possibly related PRs

  • Enable GPU inference for transformers models #4587: The changes in this PR also focus on device management for tensor operations, specifically moving tensors to the appropriate device, which aligns closely with the modifications made in the main PR regarding the TorchOpenClipModel class.

Suggested reviewers

  • mwoodson1
  • jacobmarks

Poem

In the patch of code, a rabbit hops,
With changes made, it never stops.
Through functions and loops, it scurries with glee,
Enhancing the zoo for all to see! 🐇✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 9a89a70 and 310a6bd.

📒 Files selected for processing (1)
  • fiftyone/utils/open_clip.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • fiftyone/utils/open_clip.py


@danielgural (Contributor) commented Oct 31, 2024

Still works fine and I can see the difference between CPU and CUDA. Note for the future: this change is not pulled upstream by

fob.compute_similarity(
    dataset,
    model="clip-vit-base32-torch",
    brain_key="img_sim",
    device="cuda",
)

and I just noticed. Something for next time :)

@harpreetsahota204 can you run this code when you test:
import fiftyone.zoo as foz
model = foz.load_zoo_model("clip-vit-base32-torch", device="cuda")
print(model._model.visual.conv1._parameters["weight"][0].device)

To make sure the model is also multi-GPU capable.
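
Extending that check to a non-default GPU could look like this (a sketch; the cuda:1 index is an assumption and requires a second GPU):

import fiftyone.zoo as foz

model = foz.load_zoo_model("clip-vit-base32-torch", device="cuda:1")

# With the fix, the weights should report cuda:1 rather than cuda:0
print(model._model.visual.conv1._parameters["weight"][0].device)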

danielgural previously approved these changes Oct 31, 2024

@danielgural left a comment:

LGTM

import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.zoo as foz
from fiftyone import ViewField as F

dataset = foz.load_zoo_dataset('quickstart')
session = fo.launch_app(dataset)
model = foz.load_zoo_model("clip-vit-base32-torch", device="cuda")
embeddings = dataset.compute_embeddings(model)

worked as expected

fiftyone/utils/clip/zoo.py (outdated review thread, resolved)
@brimoor brimoor changed the title bugfix Use device in all Torch models Nov 1, 2024
@brimoor brimoor changed the title Use device in all Torch models Bugfix: use device in all Torch models Nov 1, 2024
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (1)
fiftyone/utils/super_gradients.py (1)

98-100: Consider adding a docstring note about device flexibility.

Since this change enables flexible device selection, it would be helpful to document this capability in the class or method docstring. This would help users understand that they can use any available GPU.

Add a note like this to the class docstring:

 """FiftyOne wrapper around YOLO-NAS from
 https://github.com/Deci-AI/super-gradients.
+
+The model automatically uses the appropriate device (CPU/GPU) based on availability
+and can work with any CUDA device, not just the default one.

 Args:
     config: a :class:`TorchYoloNasModelConfig`
 """
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 3b3596f and 9a89a70.

📒 Files selected for processing (3)
  • fiftyone/utils/clip/zoo.py (1 hunks)
  • fiftyone/utils/open_clip.py (3 hunks)
  • fiftyone/utils/super_gradients.py (1 hunks)
🔥 Files not summarized due to errors (1)
  • fiftyone/utils/clip/zoo.py: Error: Server error: no LLM provider could handle the message
🔇 Additional comments (4)
fiftyone/utils/super_gradients.py (1)

99-99: LGTM! Device management improvement.

The change from model.cuda() to model.to(self.device) improves GPU device flexibility, allowing the model to work with any available GPU instead of being restricted to the default one.

Let's verify that the model parameters are correctly moved to the specified device:

✅ Verification successful

Device management change is properly implemented

The verification confirms that the change from model.cuda() to model.to(self.device) is correct and consistent with the codebase's device management pattern. Key findings:

  1. The base TorchImageModel class properly initializes self.device in its constructor (a runnable sketch of this fallback follows the list):

     device = self.config.device
     if device is None:
         device = "cuda:0" if torch.cuda.is_available() else "cpu"

  2. All other model implementations (CLIP, SAM, YOLO-NAS) consistently use self.device for device management.

  3. The change aligns with the existing pattern used across the codebase for moving tensors and models to the target device.
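
A runnable sketch of that fallback, with a generic torch.nn.Module standing in for the actual FiftyOne model (simplified, not the real constructor):

import torch

configured_device = None  # e.g., the value from the model config
if configured_device is None:
    configured_device = "cuda:0" if torch.cuda.is_available() else "cpu"
device = torch.device(configured_device)

model = torch.nn.Linear(4, 2)  # stand-in for the wrapped Torch model
model = model.to(device)       # corresponds to model.to(self.device) in the change
print(next(model.parameters()).device)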

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if there are any remaining hard-coded device assignments
# that might conflict with the new flexible device management

# Test 1: Look for any remaining .cuda() calls that might bypass device management
rg -g '*.py' '\.cuda\(' 

# Test 2: Check for proper device initialization in the parent class
ast-grep --pattern 'class TorchImageModel:
  $$$
  def __init__($$$):
    $$$
    self.device = $$$
    $$$
'

Length of output: 137


Script:

#!/bin/bash
# Let's try a different approach to verify device management

# Check for device-related code in TorchImageModel and its subclasses
rg -g '*.py' 'class.*TorchImageModel' -A 10

# Look for device initialization patterns
rg -g '*.py' 'self\.device' -B 2 -A 2

# Check for any remaining cuda-specific code
rg -g '*.py' 'cuda|gpu' -B 2 -A 2

Length of output: 26257

fiftyone/utils/open_clip.py (3)

109-109: LGTM: Device handling improvement in text feature extraction.

The change from .cuda() to .to(self.device) is correct and aligns with the PR's objective of supporting multiple GPUs.


121-121: LGTM: Device handling improvement in prompt embedding.

The change from .cuda() to .to(self.device) is correct and consistent with the previous change.


109-109: Verify device consistency across model components.

The device handling changes look good, but we should verify that all model components (model, tensors, etc.) are consistently using the same device.

Also applies to: 121-121, 146-146

✅ Verification successful

Based on the shell script results, I can now generate the final response as the device handling patterns are clear across the codebase.

Device handling is consistent across model components

The device handling in the OpenCLIP implementation follows the established patterns used throughout the codebase:

  • The model and tensors are consistently moved to the same device using to(self.device)
  • Device initialization follows the standard pattern of defaulting to CUDA if available, otherwise CPU
  • All tensor operations (text encoding, image encoding) maintain device consistency
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify device handling consistency across the codebase
# Look for any remaining .cuda() calls or device-related patterns

# Test 1: Check for any remaining .cuda() calls
echo "Checking for remaining .cuda() calls..."
rg '\.cuda\(' 

# Test 2: Check for device-related patterns in TorchOpenClipModel and related classes
echo "Checking device handling patterns..."
rg -A 3 'to\(.*device|\.device|device\s*=' 

Length of output: 14047

@@ -143,7 +143,7 @@ def _predict_all(self, imgs):
         frame_size = (width, height)

         if self._using_gpu:
-            imgs = imgs.cuda()
+            imgs = imgs.to(self.device)
coderabbitai bot (Contributor) commented Nov 1, 2024

⚠️ Potential issue

Consider adding error handling for device transfers.

While the device handling change is correct, the subsequent autocast context is hardcoded to "cuda". This might cause issues when running on CPU.

Apply this diff to make it device-aware:

         if self._using_gpu:
             imgs = imgs.to(self.device)
 
-        with torch.no_grad(), torch.amp.autocast("cuda"):
+        with torch.no_grad(), torch.amp.autocast(device_type=self.device.type if self._using_gpu else "cpu"):
             image_features = self._model.encode_image(imgs)
             text_features = self._get_text_features()

Also applies to: 147-152
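
For context, a self-contained sketch of the device-aware autocast idea (the model and batch are stand-ins; the real fix would live inside _predict_all):

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.nn.Conv2d(3, 8, 3).to(device)    # stand-in for the CLIP image encoder
imgs = torch.randn(2, 3, 224, 224).to(device)  # stand-in for the image batch

# Derive the autocast device_type from the actual device instead of hard-coding "cuda"
with torch.no_grad(), torch.amp.autocast(device_type=device.type):
    features = model(imgs)
print(features.shape, features.dtype)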

A contributor replied:

@jacobsela coderabbit raises an interesting point here. Does torch.amp.autocast("cuda") need to be updated?

coderabbitai bot replied:

Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

@jacobsela (Author) replied:

Yeah, this is valid and will cause problems if not handled. It's on my to-do list for this week to review the code more thoroughly before moving further with this PR, since this comment makes me think there are probably more places I haven't noticed that make hardware assumptions.

@brimoor brimoor changed the base branch from release/v1.0.2 to develop November 7, 2024 23:16
3 participants