This repository has been archived by the owner on Sep 13, 2024. It is now read-only.

Static LibTorch mobile build for windows #59

Open
janakg opened this issue May 26, 2020 · 5 comments

@janakg

janakg commented May 26, 2020

@peterjc123 Would it be possible to build a static LibTorch library for Windows? Right now we are facing a couple of issues:

  1. The final lib file is too big: it is around 740 MB, while the corresponding builds on Mac/Ubuntu are around 64 MB, even though we switched off USE_MKL.
  2. We get a linking error while loading a TorchScript model. Error: DeviceGuardImpl for cpu is not available (static linking PyTorch) pytorch/pytorch#14367

It would be helpful if you had any suggestions for getting a working static library.

@peterjc123
Owner

peterjc123 commented May 26, 2020

@janakg

  1. The final lib file is too big: it is around 740 MB, while the corresponding builds on Mac/Ubuntu are around 64 MB, even though we switched off USE_MKL.

The available options are:

  1. Use the CMake configuration MinSizeRel (maybe you are already using it).
  2. Use Clang on Windows. This significantly lowers the size because it tends to do more inlining, but it has some issues if exceptions are raised at runtime; please refer to [TEST] Clang-cl build without annotations pytorch/pytorch#35145.
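As a sketch, the two options above could be combined in a single CMake configure step along these lines (paths and the exact flag set are illustrative; `scripts/build_mobile.sh` in the PyTorch repo passes a similar set of options):

```shell
# Sketch: configure a size-optimized static LibTorch mobile build
# with clang-cl on Windows. Source/build paths are placeholders.
cmake -G Ninja \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -DCMAKE_C_COMPILER=clang-cl \
  -DCMAKE_CXX_COMPILER=clang-cl \
  -DBUILD_SHARED_LIBS=OFF \
  -DUSE_MKL=OFF \
  -S path/to/pytorch -B build_mobile
cmake --build build_mobile
```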

2. It gives us a linking error while loading a TorchScript model. pytorch/pytorch#14367

From the latest comment, it seems that you'll need to use /WHOLEARCHIVE in the linker args.
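As a sketch (the output name and library list are illustrative, not the exact set LibTorch produces), the flag goes on the link line so the linker keeps object files that are only reached through static initializers, such as the CPU DeviceGuardImpl registration:

```shell
# Sketch: force-link every object from torch_cpu.lib so static
# registrations are not dropped by the MSVC-style linker.
link /OUT:app.exe main.obj c10.lib /WHOLEARCHIVE:torch_cpu.lib

# CMake equivalent for a hypothetical target named "app":
#   target_link_options(app PRIVATE "/WHOLEARCHIVE:torch_cpu.lib")
```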

BTW, I'm just curious about your use case. What do you use LibTorch mobile build on Windows for?

@janakg
Author

janakg commented May 27, 2020

Thanks, @peterjc123, for the response. We'll try these.

We use client-side inference for semantic text analysis, and we package it with the desktop application. We currently use OpenCV DNN for image models, and it works perfectly, but DNN doesn't support a few network layers needed for text and some advanced use cases, so we are trying to bring LibTorch into our system.

We have built a static LibTorch mobile library for Linux and Mac, and it works. Windows is new to us and also a bit tricky.

@santhiya-v

@peterjc123 We have slightly modified pytorch/scripts/build_mobile.sh to run in Windows.

torch_cpu.lib size for Release is 711 MB, and MinSizeRel reduced it to 645 MB.
But building with clang has not reduced it further. Are we missing something here?

Set-up:

C and CXX compiler --> clang-cl
Generator --> Visual Studio 16 2019

We are currently not using Ninja. Do you have any suggestions for us?
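One thing worth double-checking in this setup: with the Visual Studio generator, CMake normally selects the compiler via the toolset argument rather than via CMAKE_C_COMPILER/CMAKE_CXX_COMPILER, so the build may silently have fallen back to MSVC. Two illustrative ways to make sure clang-cl is actually used (paths are placeholders):

```shell
# Option A: keep the VS generator, but select clang-cl via the toolset.
cmake -G "Visual Studio 16 2019" -T ClangCL \
  -S path/to/pytorch -B build_vs

# Option B: switch to the Ninja generator, which honors
# CMAKE_<LANG>_COMPILER directly.
cmake -G Ninja \
  -DCMAKE_C_COMPILER=clang-cl \
  -DCMAKE_CXX_COMPILER=clang-cl \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -S path/to/pytorch -B build_ninja
```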

@peterjc123
Owner

peterjc123 commented May 29, 2020

As for Clang builds, you could try http://blog.llvm.org/2018/11/30-faster-windows-builds-with-clang-cl_14.html and make sure you use lld-link over link.exe.
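A sketch of wiring that into the configure step (CMAKE_LINKER is the standard CMake variable for overriding the linker; paths are placeholders):

```shell
# Sketch: use lld-link instead of link.exe for a clang-cl build.
cmake -G Ninja \
  -DCMAKE_C_COMPILER=clang-cl \
  -DCMAKE_CXX_COMPILER=clang-cl \
  -DCMAKE_LINKER=lld-link \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -S path/to/pytorch -B build_mobile
```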

@peterjc123
Owner

peterjc123 commented May 29, 2020

Also, could you please check that you have turned off all the debugging flags, like /Z7, /Zi, or /DEBUG:FULL, in the build?
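One quick way to check is to grep the CMake cache for debug-info flags and, if anything shows up, reconfigure with explicit per-config flags (a sketch; the build directory path is a placeholder, and the flag values below are the stock MSVC MinSizeRel defaults without debug info):

```shell
# Look for debug-info flags that may have leaked into the configuration.
grep -iE '(/Z7|/Zi|/DEBUG)' build_mobile/CMakeCache.txt

# If found, override the MinSizeRel flags to drop debug info.
cmake -B build_mobile \
  -DCMAKE_CXX_FLAGS_MINSIZEREL="/MD /O1 /Ob1 /DNDEBUG" \
  -DCMAKE_EXE_LINKER_FLAGS_MINSIZEREL="/INCREMENTAL:NO" \
  -S path/to/pytorch
```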

3 participants