cannot import name '_C' from 'sam2' (/home/suw469/segment-anything-2/sam2/__init__.py) #131
I tried to run video_predictor_example.ipynb inside the segment-anything-2 package but encountered the same ImportError as above, so I'm not sure whether there are additional settings I should adjust to run the code.
I have also encountered the same error on Colab, and it has not been resolved yet.
Hi, we have added … Regarding this error: this is usually because you haven't run the …
Hi @ronghanghu, thank you for your suggestion. I tried pip install -e ".[demo]" but got the following error message:
[suw469@compute-g-17-158 ~]$ cd segment-anything-2
× Getting requirements to build editable did not run successfully.
note: This error originates from a subprocess, and is likely not a problem with pip.
Based on this suggestion: "You may also try python setup.py build_ext --inplace in the repo root as others suggested in #77" — could I ask what the repo root is? Does it mean that I should first run "cd segment-anything-2" and then "python setup.py build_ext --inplace"?
I have tried …
Thank you for your suggestion. I tried to create a Jupyter notebook Test.ipynb at /home/suw469 and ran the following code. However, I got the following error message:
× Getting requirements to build editable did not run successfully.
note: This error originates from a subprocess, and is likely not a problem with pip.
I don't know why the system couldn't find the directory, but my path to segment-anything-2 is:
Hi @lindawang0122ds, regarding this latest error you saw:
As mentioned in …
… and rerun the installation. Also, you should make sure …
print …
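One way to sanity-check where the CUDA toolkit will be picked up from, before reinstalling, is a small self-contained sketch. The `find_cuda_home` helper below is my own illustration (it roughly mirrors the lookup order that torch.utils.cpp_extension uses), not part of SAM 2:

```python
import os
import shutil


def find_cuda_home():
    """Best-effort lookup of the CUDA toolkit root: prefer the CUDA_HOME
    (or CUDA_PATH) environment variable, then fall back to the directory
    two levels above nvcc if nvcc is on PATH."""
    cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
    if cuda_home:
        return cuda_home
    nvcc = shutil.which("nvcc")
    if nvcc:
        # nvcc normally lives in <cuda_home>/bin/nvcc
        return os.path.dirname(os.path.dirname(nvcc))
    return None
```

If this returns None (or a path that doesn't exist), the editable install will not be able to build the CUDA extension.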
Setting the path and reinstalling works. Thank you!
Thank you for your suggestion. It looks like my current Linux system doesn't have the CUDA Toolkit yet, so I will try to install it to set up the environment. Just to double-check, is the CUDA Toolkit a necessary requirement to run the segment-anything-2 model? I'm a bit worried about whether my Linux system allows installing the CUDA Toolkit, so I want to know whether there is an alternative way to set up the environment.
Hi @lindawang0122ds, we have recently made the CUDA extension step optional (in #155) as a workaround to this problem. You can pull the latest code and reinstall via
# run the lines below inside the SAM 2 repo
git pull;
pip uninstall -y SAM-2;
rm -f sam2/*.so;
pip install -e ".[demo]"
which allows using SAM 2 without the CUDA extension (the results should stay the same in most cases; see …).
Thank you for your suggestion @ronghanghu! I added the CUDA Toolkit to my environment and specified the path. However, there is still an error during pip install -e. I also verified my environment, and the output is: True /n/app/cuda/12.1-gcc-9.2.0, so I think I do have the CUDA Toolkit there. I'm not quite sure whether the error is related to the CUDA Toolkit version or some other issue. Could you give me some suggestions on that? The following is the output error:
[suw469@compute-g-17-157 segment-anything-2]$ export CUDA_HOME=/n/app/cuda/12.1-gcc-9.2.0
[suw469@compute-g-17-157 segment-anything-2]$ python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
Just to update, I created a .ipynb file in /home/suw469 (where my segment-anything model has the path /home/suw469/segment-anything-2) and ran the following Python code:
os.environ['CUDA_HOME'] = '/n/app/cuda/12.1-gcc-9.2.0'
The following is the output for the above code. Does the output message confirm that I'm all set? I didn't encounter any error message this time, but since the last output message is copying something, I'm a bit confused and want to double-check.
/home/suw469/segment-anything-2
Yes, everything in the output you pasted looks correct now. You have successfully built the CUDA extension "_C.so", which should appear under …
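To double-check that the build actually produced the extension, one can look for the compiled shared object directly under the package directory. A small sketch (the helper name is mine, not a SAM 2 API):

```python
from pathlib import Path


def find_compiled_extension(package_dir, name="_C"):
    """Return the first compiled extension module (e.g. _C.*.so) found
    directly under package_dir, or None if the build never produced one."""
    package_dir = Path(package_dir)
    for pattern in (f"{name}*.so", f"{name}*.pyd"):  # Linux/macOS, Windows
        matches = sorted(package_dir.glob(pattern))
        if matches:
            return matches[0]
    return None
```

For example, `find_compiled_extension("/home/suw469/segment-anything-2/sam2")` should return a path ending in .so after a successful in-place build, and None otherwise.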
Here is my original code, which I pasted from the SAM 2 demo notebook video_predictor_example.ipynb:
ann_frame_idx = 0  # the frame index we interact with
ann_obj_id = 1  # give a unique id to each object we interact with (it can be any integers)

# Let's add a positive click at (x, y) = (210, 350) to get started
points = np.array([[210, 350]], dtype=np.float32)
# for labels, `1` means positive click and `0` means negative click
labels = np.array([1], np.int32)
_, out_obj_ids, out_mask_logits = predictor.add_new_points(
    inference_state=inference_state,
    frame_idx=ann_frame_idx,
    obj_id=ann_obj_id,
    points=points,
    labels=labels,
)

# show the results on the current (interacted) frame
plt.figure(figsize=(12, 8))
plt.title(f"frame {ann_frame_idx}")
plt.imshow(Image.open(os.path.join(video_dir, frame_names[ann_frame_idx])))
show_points(points, labels, plt.gca())
show_mask((out_mask_logits[0] > 0.0).cpu().numpy(), plt.gca(), obj_id=out_obj_ids[0])
The following is the error message I encountered: ImportError: cannot import name '_C' from 'sam2' (/home/suw469/segment-anything-2/sam2/__init__.py)
ImportError Traceback (most recent call last)
Cell In[31], line 8
6 # for labels, `1` means positive click and `0` means negative click
7 labels = np.array([1], np.int32)
----> 8 _, out_obj_ids, out_mask_logits = predictor.add_new_points(
9 inference_state=inference_state,
10 frame_idx=ann_frame_idx,
11 obj_id=ann_obj_id,
12 points=points,
13 labels=labels,
14 )
16 # show the results on the current (interacted) frame
17 plt.figure(figsize=(12, 8))
File ~/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File ~/segment-anything-2/sam2/sam2_video_predictor.py:221, in SAM2VideoPredictor.add_new_points(self, inference_state, frame_idx, obj_id, points, labels, clear_old_points, normalize_coords)
219 # Clamp the scale of prev_sam_mask_logits to avoid rare numerical issues.
220 prev_sam_mask_logits = torch.clamp(prev_sam_mask_logits, -32.0, 32.0)
--> 221 current_out, _ = self._run_single_frame_inference(
222 inference_state=inference_state,
223 output_dict=obj_output_dict, # run on the slice of a single object
224 frame_idx=frame_idx,
225 batch_size=1, # run on the slice of a single object
226 is_init_cond_frame=is_init_cond_frame,
227 point_inputs=point_inputs,
228 mask_inputs=None,
229 reverse=reverse,
230 # Skip the memory encoder when adding clicks or mask. We execute the memory encoder
231 # at the beginning of
propagate_in_video
(after user finalize their clicks). This232 # allows us to enforce non-overlapping constraints on all objects before encoding
233 # them into memory.
234 run_mem_encoder=False,
235 prev_sam_mask_logits=prev_sam_mask_logits,
236 )
237 # Add the output to the output dict (to be used as future memory)
238 obj_temp_output_dict[storage_key][frame_idx] = current_out
File ~/segment-anything-2/sam2/sam2_video_predictor.py:810, in SAM2VideoPredictor._run_single_frame_inference(self, inference_state, output_dict, frame_idx, batch_size, is_init_cond_frame, point_inputs, mask_inputs, reverse, run_mem_encoder, prev_sam_mask_logits)
808 # potentially fill holes in the predicted masks
809 if self.fill_hole_area > 0:
--> 810 pred_masks_gpu = fill_holes_in_mask_scores(
811 pred_masks_gpu, self.fill_hole_area
812 )
813 pred_masks = pred_masks_gpu.to(storage_device, non_blocking=True)
814 # "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it
File ~/segment-anything-2/sam2/utils/misc.py:223, in fill_holes_in_mask_scores(mask, max_area)
220 # Holes are those connected components in background with area <= self.max_area
221 # (background regions are those with mask scores <= 0)
222 assert max_area > 0, "max_area must be positive"
--> 223 labels, areas = get_connected_components(mask <= 0)
224 is_hole = (labels > 0) & (areas <= max_area)
225 # We fill holes with a small positive mask score (0.1) to change them to foreground.
File ~/segment-anything-2/sam2/utils/misc.py:61, in get_connected_components(mask)
47 def get_connected_components(mask):
48 """
49 Get the connected components (8-connectivity) of binary masks of shape (N, 1, H, W).
50
(...)
59 components for foreground pixels and 0 for background pixels.
60 """
---> 61 from sam2 import _C
63 return _C.get_connected_componnets(mask.to(torch.uint8).contiguous())
ImportError: cannot import name '_C' from 'sam2' (/home/suw469/segment-anything-2/sam2/__init__.py)