ONNX Conversion Scripts #73
base: dpt_scriptable
Conversation
Thank you for the scripts @timmh. I have been trying to export the larger models, but the script freezes and I have to restart the computer manually. The only modifications I have made are to the
This sounds like you are running out of RAM. Things you could try:

```diff
 model.eval()
+model.to("cuda")
 dummy_input = torch.zeros((batch_size, 3, net_h, net_w))
+dummy_input = dummy_input.to("cuda")
```
Thank you for the response. I tested whether I can use my GPU and made the code changes you suggested, but I got the following error. Any idea what might be causing it?
@yohannes-taye I can reproduce the issue, but to be honest I have no idea where it stems from. Probably some tensor in the model is created on the wrong device. I think the best way forward for you would be to increase your swap space and export on the CPU.
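For reference, adding swap space on a typical Linux system can be sketched as below. The 16G size is an assumption; pick something comfortably larger than the export's peak memory use, and note these commands require root:

```shell
# Hypothetical size; adjust to your machine.
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show  # verify the new swap is active
```

The extra swap only needs to stay enabled for the duration of the export; `sudo swapoff /swapfile` removes it afterwards.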
This PR implements ONNX conversion scripts as well as scripts to run the resulting models on monodepth and segmentation tasks. Furthermore, fixes from #42 are incorporated. The converted weights are available here and are verified to produce numerically similar results to the original models on example inputs. Please let me know if I should add anything to the README.