2 issues #5

Open
NerdCat822 opened this issue Nov 27, 2023 · 3 comments
@NerdCat822

Hello, I want to get instance segmentation results, but this only gives the results of semantic segmentation and panoptic segmentation.
Can you add the command for instance segmentation, please?

Also, I cannot get the pretrained weights of [FoodSeg103-SETR-MLA].
Please add another link.
Thank you.

@starhiking
Collaborator

Hi,

The panoptic segmentation result contains the instance results as well as the other non-food results.

We have added the auxiliary link in install.md, and #3 may help you.
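
(For readers who only need the food instances: a minimal sketch of filtering them out of a panoptic result, assuming the result is available as a stack of binary instance masks plus a per-mask category id. The file names and the background id below are placeholders for illustration, not FoodSAM's actual output format.)

```python
# Hypothetical sketch (not FoodSAM's API): keep only the food-instance masks
# from a panoptic result, assuming a stack of binary masks and a matching
# array of predicted category ids saved to disk.
import numpy as np

BACKGROUND_ID = 0  # assumption: id of the non-food/background class

masks = np.load("Output/Panoramic_Results/masks.npy")    # shape (N, H, W), bool -- hypothetical path
labels = np.load("Output/Panoramic_Results/labels.npy")  # shape (N,), int       -- hypothetical path

# Drop non-food masks; what remains is the instance segmentation of the food items.
food_instances = [(int(lbl), mask) for mask, lbl in zip(masks, labels) if lbl != BACKGROUND_ID]
print(f"kept {len(food_instances)} food instance masks out of {len(masks)}")
```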

@NerdCat822
Author

Hello, I downloaded the 3 weight files successfully, but the code does not work now.
Here is the error message:
```
Namespace(SAM_checkpoint='ckpts/sam_vit_h_4b8939.pth', ann_dir='ann_dir/test', area_thr=0, aug_test=False, box_nms_thresh=None, category_txt='FoodSAM/FoodSAM_tools/category_id_files/foodseg103_category_id.txt', color_list_path='FoodSAM/FoodSAM_tools/color_list.npy', confidence_threshold=0.5, crop_n_layers=None, crop_n_points_downscale_factor=None, crop_nms_thresh=None, crop_overlap_ratio=None, data_root='dataset/FoodSeg103/Images', detection_config='configs/Unified_learned_OCIM_RS200_6x+2x.yaml', device='cuda', eval=False, eval_options=None, img_dir='img_dir/test', img_path=None, min_mask_region_area=None, model_type='vit_h', num_class=104, options=None, opts=['MODEL.WEIGHTS', 'ckpts/Unified_learned_OCIM_RS200_6x+2x.pth'], output='Output/Panoramic_Results', points_per_batch=None, points_per_side=None, pred_iou_thresh=None, ratio_thr=0.5, semantic_checkpoint='ckpts/SETR_MLA/iter_80000.pth', semantic_config='configs/SETR_MLA_768x768_80k_base.py', stability_score_offset=None, stability_score_thresh=None, top_k=80)
Create Logger success in Output/Panoramic_Results/sam_process.log
[2023-11-30 03:34:41,727] [panoptic.py:285] [INFO] running sam!
[2023-11-30 03:35:12,269] [panoptic.py:303] [INFO] Processing 'dataset/FoodSeg103/Images/img_dir/test/00004990.jpg'...
Traceback (most recent call last):
  File "FoodSAM/panoptic.py", line 335, in <module>
    main(args)
  File "FoodSAM/panoptic.py", line 309, in main
    masks = generator.generate(image)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/automatic_mask_generator.py", line 163, in generate
    mask_data = self._generate_masks(image)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/automatic_mask_generator.py", line 206, in _generate_masks
    crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/automatic_mask_generator.py", line 245, in _process_crop
    batch_data = self._process_batch(points, cropped_im_size, crop_box, orig_size)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/automatic_mask_generator.py", line 283, in _process_batch
    return_logits=True,
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/predictor.py", line 234, in predict_torch
    multimask_output=multimask_output,
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/modeling/mask_decoder.py", line 98, in forward
    dense_prompt_embeddings=dense_prompt_embeddings,
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/modeling/mask_decoder.py", line 132, in predict_masks
    hs, src = self.transformer(src, pos_src, tokens)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/modeling/transformer.py", line 96, in forward
    key_pe=image_pe,
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/modeling/transformer.py", line 178, in forward
    attn_out = self.cross_attn_image_to_token(q=k, k=q, v=queries)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/rlagusrb/miniconda3/envs/FoodSAM/lib/python3.7/site-packages/segment_anything/modeling/transformer.py", line 231, in forward
    attn = q @ k.permute(0, 1, 3, 2)  # B x N_heads x N_tokens x N_tokens
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)
```

What should I do?

@starhiking
Collaborator

Your environment (torch, cudatoolkit) may have an issue; a CUBLAS_STATUS_EXECUTION_FAILED error usually points to a mismatch between the installed PyTorch build and your CUDA toolkit/driver, or to a GPU-side failure such as running out of memory.
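
As a quick sanity check (a minimal sketch, not part of the repo), you can verify that the installed torch build, the CUDA version it was compiled against, and a simple batched matmul on the GPU all work, since the failing call in the traceback is a batched matmul:

```python
# Environment sanity check: prints the torch build, its compiled CUDA version,
# and runs a small batched matmul on the GPU (the op that fails in the traceback).
import torch

print("torch:", torch.__version__)
print("compiled CUDA:", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    a = torch.randn(2, 4, 8, device="cuda")
    b = torch.randn(2, 8, 4, device="cuda")
    print("batched matmul ok:", (a @ b).shape)
```

If this small check already fails, the problem is in the torch/CUDA installation rather than in FoodSAM itself.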
