Hi @piyalitt, the whole-body bundle has an option for low-res inference, but the expected performance will not be as good as the high-res model. You can see the config option in inference.json. In general, this model outputs 105 channels simultaneously and uses softmax and argmax to discretize the prediction labels, which takes a lot of memory. Since you have a 40 GB GPU, which is a pretty good one, it should work with most CTs. I guess some of your CTs are very large volumes (e.g., >300 slices)? A possible workaround is to split the volume into sub-volumes, run the high-res model on each, and then stitch the predictions back together. In any case, the team is working on a more efficient way to do universal segmentation, e.g., segmenting organs one by one until all the needed anatomies are covered, which will significantly reduce GPU memory usage. Hope the above helps.
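To illustrate the split-and-stitch idea, here is a minimal sketch (not part of the bundle's API; `chunked_predict` and its parameters are hypothetical). It slices the volume into overlapping chunks along the slice axis, runs a user-supplied predictor on each chunk, and averages the overlapping regions when recovering the full volume. In practice MONAI's `monai.inferers.sliding_window_inference` does this more robustly in 3D:

```python
# Hypothetical helper sketching chunked inference along the z (slice) axis.
# Assumes `predict` maps a (C, H, W, D) sub-volume to per-voxel scores of
# the same spatial size.
import numpy as np

def chunked_predict(volume, predict, chunk=64, overlap=16):
    """Run `predict` on overlapping z-chunks; average scores where chunks overlap."""
    depth = volume.shape[-1]
    step = chunk - overlap          # stride between chunk starts
    out = None
    weight = np.zeros(depth)        # how many chunks covered each slice
    for start in range(0, max(depth - overlap, 1), step):
        stop = min(start + chunk, depth)
        scores = predict(volume[..., start:stop])
        if out is None:
            out = np.zeros(scores.shape[:-1] + (depth,))
        out[..., start:stop] += scores
        weight[start:stop] += 1.0
    return out / weight             # normalize overlapping regions

# Toy usage: an identity "model" on a random 1-channel volume of 100 slices.
vol = np.random.rand(1, 8, 8, 100)
pred = chunked_predict(vol, lambda x: x, chunk=40, overlap=10)
```

With an identity predictor the stitched output equals the input, which is a quick sanity check that the overlap averaging is correct; a real model would be called chunk by chunk so only one sub-volume occupies GPU memory at a time.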
Hello everyone. I attempted to use one of the model zoo bundles for whole-body CT, but I ran into an issue: the only available GPU, an A100 40 GB, does not have enough memory for inference. I tried reducing the batch size to 1 and decreasing the region of interest, but neither helped. Does anyone have a suggestion for a possible solution?
P.S. There is multi-GPU support for training, so I hope there is multi-GPU support for inference as well. Thank you :)