Hello,
I've been using the model for some experiments with code based on demo/ctw1500_detection.py. I would like to move away from VisualizationDemo because it does not support batch inference. Looking through the source code, I tried something like:
Inference on a single image is fine, but passing multiple images crashes inside mutil_path_fuse_module.py, specifically at feature_fuse = char_context + x + global_context with:
RuntimeError: The size of tensor a (2) must match the size of tensor b (4) at non-singleton dimension 0
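For context, this kind of mismatch can be reproduced with plain tensors whose leading dimensions disagree. The shapes below are purely illustrative (not the model's real channel/spatial sizes): a per-image tensor for a batch of 2 cannot be added elementwise to a tensor built from the 4 proposals of a single image.

```python
import torch

# Illustrative shapes only: batch dimension 2 vs. 4 proposals.
char_context = torch.randn(2, 8, 4, 4)  # e.g. context for a batch of 2 images
x = torch.randn(4, 8, 4, 4)             # e.g. features for 4 proposals of one image

try:
    feature_fuse = char_context + x     # non-singleton dim 0 mismatch: 2 vs 4
except RuntimeError as e:
    print(e)
```

Elementwise ops only broadcast when mismatched dimensions are size 1, so any desync between the batched feature map and per-image proposal features fails exactly like this.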
What should I do to get batch inference?
Thanks.
Inside the Mutil_Path_Fuse_Module class, the forward method just uses proposals[0] instead of iterating over all of them when there are multiple. This looks like it could never work with batches, so now I'm a bit confused (since I assume the model was used for batch inference at least during training).
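Given that the module only reads proposals[0], one pragmatic workaround is to run the detector one image at a time so the proposal list always has a single entry. This is a sketch, not the repo's actual fix: `model` stands for any callable that accepts a single-image batch, and the input format is an assumption based on the issue above.

```python
def batched_inference(model, images):
    """Run `model` on each image separately and collect the outputs.

    A minimal sketch: since Mutil_Path_Fuse_Module only consumes
    proposals[0], feeding batches of one keeps the per-image proposal
    features aligned with the feature map and avoids the dim-0 crash.
    `model` is any callable taking a single-image batch (hypothetical).
    """
    return [model([image]) for image in images]

# usage with a stand-in callable (the real one would be the detector)
fake_model = lambda batch: {"n_inputs": len(batch)}
outputs = batched_inference(fake_model, ["img0", "img1", "img2"])
```

This gives up the throughput benefit of true batching, but it is safe until the fuse module is rewritten to loop over every element of proposals.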