Cmake v3.30.2 cudart link error #154
I guess what I really need to do is tell it which directory to link the library from; I just need to remember which cmake convention to use for that…
Apologies for this; I dropped the ball on the 2.4 release; I'll build those wheels tonight. I've always had a bad experience with FindCUDA, and unfortunately it's difficult to link with libtorch through cmake without including their config, and that's when everything goes wrong. Every time I've figured out a way around it, it's been a hack, but somehow torch's docker images and NGC images aren't affected. So I don't think it's anything wrong with your environment; it's just FindCUDA being annoying as usual. Also, if you know which version of the CUDA toolkit your local torch was compiled with, I can build that binary first and post the link here -- building wheels takes a while now that 2.4 supports 3 different CTK versions and 5 Python versions (together that's 15 CUDA wheels and 5 CPU wheels).
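The wheel count above is simple arithmetic; a small sketch (the exact CTK and Python version lists below are assumptions for illustration, not stated in the thread beyond cu121):

```python
# Sketch of the release matrix described above: 3 CUDA toolkit versions
# x 5 Python versions = 15 CUDA wheels, plus one CPU-only wheel per Python.
# The specific version strings are assumptions, used only for counting.
ctk_versions = ["11.8", "12.1", "12.4"]
python_versions = ["3.8", "3.9", "3.10", "3.11", "3.12"]

cuda_wheels = len(ctk_versions) * len(python_versions)
cpu_wheels = len(python_versions)
print(cuda_wheels, cpu_wheels)  # → 15 5
```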
no worries, there's always too much to be done! I'm pretty much done for the night, but I think my last idea might get it building locally. For some reason

```shell
CXXFLAGS='-L/usr/local/cuda/lib64' CUDACXX=/usr/local/cuda/bin/nvcc NATTEN_CUDA_ARCH=8.9 NATTEN_VERBOSE=1 NATTEN_IS_BUILDING_DIST=1 NATTEN_WITH_CUDA=1 NATTEN_N_WORKERS=8 python setup.py bdist_wheel -d out/wheels/cu121/torch/240
```

didn't work, and by "didn't work" I mean that it didn't introduce any, so I modified the cmake config:

```diff
 if(${NATTEN_WITH_CUDA})
   target_link_libraries(natten PUBLIC c10 torch torch_cpu torch_python cudart c10_cuda torch_cuda)
+  message("Adding to target 'natten', link directory: ${CUDA_TOOLKIT_ROOT_DIR}/lib64")
+  target_link_directories(natten PUBLIC ${CUDA_TOOLKIT_ROOT_DIR}/lib64)
```

And this seems to have succeeded in adding a
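For reference, a minimal sketch of the modern alternative to FindCUDA (assuming CMake >= 3.17; this is not what NATTEN's config actually does): the `CUDAToolkit` find module provides imported targets that already carry the correct link directories, so no manual `target_link_directories` is needed.

```cmake
# Sketch only: locate the toolkit via the supported module (CMake >= 3.17).
find_package(CUDAToolkit REQUIRED)
# The imported target encodes both the library path and the include dirs.
target_link_libraries(natten PUBLIC CUDA::cudart)
```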
Thanks! Is this it?
Yeah, the FindCUDA module is a big pain; I've sometimes been successful in working around it, but never wrote it down 😅.
Yes, perfect! I'll post that wheel here when it builds.
ah! my local build succeeded; NATTEN is now working with torch 2.4.0. In the end, all I needed was that.
Oh nice; feel free to drop the diff here, or even open a PR; I wouldn't rule out NATTEN's cmake config doing something wrong. If the actual issue was a linking error, it makes sense in the end; I originally thought FindCUDA was just blocking everything. Anyway, I'll try to redo the cmake config soon; I hacked it together last year when we made the switch and haven't looked at it since.
As there wasn't a torch 2.4.0 wheel, I tried building NATTEN myself. It didn't go as smoothly as usual.
Most problems were due to cmake giving misleading/incomplete error messages. These are the various errors I hit along the way:
Birch-san/sdxl-play#3 (comment)
Ultimately I think most problems here were just "my gcc and g++ alternatives didn't point anywhere after Ubuntu upgrade", but there is one change I had to make to setup.py to get it to build, and I'm not sure why cmake wasn't able to figure this out automatically, or try it as a guess:
setup.py:

```diff
 f"-DNATTEN_CUDA_ARCH_LIST={cuda_arch_list_str}",
+f"-DCUDA_CUDART_LIBRARY=/usr/local/cuda/lib64/libcudart.so",
```
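The "gcc and g++ alternatives didn't point anywhere" failure mode mentioned above is easy to check for; a minimal sketch (the tool names are the standard ones, nothing NATTEN-specific):

```python
# Check that gcc/g++ resolve on PATH and that their symlink targets exist;
# after a distro upgrade, the alternatives symlinks can be left dangling.
import os
import shutil

def toolchain_status(tool):
    path = shutil.which(tool)
    if path is None:
        return "missing"
    # realpath follows the alternatives symlink chain to the real binary
    return "ok" if os.path.exists(os.path.realpath(path)) else "dangling symlink"

for tool in ("gcc", "g++"):
    print(f"{tool}: {toolchain_status(tool)}")
```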
Perhaps things have changed because newer CMake versions remove the long-deprecated FindCUDA module?
Anyway, passing in the `CUDA_CUDART_LIBRARY` option persuaded it to try compiling. Unfortunately it looks like that wasn't what it wanted… linking failed at the end of all of that.
That seems like a perfectly typical value for `CUDA_CUDART_LIBRARY`, though, and the library certainly exists:

Any idea what I'm doing wrong? The errors don't seem rational…
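When linking still fails despite a correct-looking `CUDA_CUDART_LIBRARY`, one quick sanity check is whether the path handed to cmake really resolves to a file; a small sketch (the path is the one from this thread):

```python
# Verify the file passed as CUDA_CUDART_LIBRARY exists and is not a
# dangling symlink; resolve() follows any symlink chain to the real file.
from pathlib import Path

cudart = Path("/usr/local/cuda/lib64/libcudart.so")
if cudart.exists():
    print(f"{cudart} -> {cudart.resolve()}")
else:
    print(f"{cudart} is missing or a dangling symlink")
```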