Go API - [WIP] #212
@benfred do you know how to install/find dlpack?
Seems like we got past the dlpack error - but now I'm seeing
cagra/cagra.go:109:11: could not determine kind of name for C.BITSET
cagra/cagra.go:104:11: could not determine kind of name for C.NO_FILTER
cagra/cagra.go:60:39: could not determine kind of name for C.cuvsCagraExtend
cagra/cagra.go:91:13: could not determine kind of name for C.cuvsFilter
It seems like it installs the previous release from the conda channels, which is missing some features that this relies on. Is there any way to do this through conda, or would cmake be better? @benfred
hmm - it should be grabbing the built cpp artifacts from S3 from the conda cpp build, but it looks like it's installing 24.12 libcuvs instead:
2025-02-04T04:56:30.6535505Z + libraft 24.12.00 cuda12_241211_geaf9cc72_0 rapidsai 2MB
2025-02-04T04:56:30.6537604Z + libcuvs 24.12.00 cuda12_241211_g0ce6a03_0 rapidsai 595MB
(from the build-go GHA log).
fwiw I just checked the rust build, which is set up very similarly - and it seems to be pulling the right libcuvs package:
2025-02-04T04:56:49.5413345Z + libcuvs 25.02.00a184 cuda12_250204_g35bc1f1_184 /tmp/cpp_channel 864MB
I'm not quite sure why this is going wrong here - @bdice or @vyasr do you have any ideas?
Looking at the difference between the build scripts ci/build_rust.sh and ci/build_go.sh, I think it might just be that the rust build is pinning the rapids version while the go build isn't:
$ diff ci/build_rust.sh ci/build_go.sh
13c13
< --file-key rust \
---
> --file-key go \
16c16
< rapids-mamba-retry env create --yes -f env.yaml -n rust
---
> rapids-mamba-retry env create --yes -f env.yaml -n go
21c21
< conda activate rust
---
> conda activate go
26,29c26,28
< # we need to set up LIBCLANG_PATH to allow rust bindgen to work,
< # grab it from the conda env
< export LIBCLANG_PATH=$(dirname $(find /opt/conda -name libclang.so | head -n 1))
< echo "LIBCLANG_PATH=$LIBCLANG_PATH"
---
> export CGO_CFLAGS="-I${CONDA_PREFIX}/include"
> export CGO_LDFLAGS="-L${CONDA_PREFIX}/lib -lcudart -lcuvs -lcuvs_c"
> export CC=clang
37,38c36,38
< "libcuvs=${RAPIDS_VERSION}" \
< "libraft=${RAPIDS_VERSION}"
---
> libcuvs \
> libraft \
> cuvs
40c40
< bash ./build.sh rust
---
> bash ./build.sh go
going to try updating the go build to match
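Based on the diff above, the likely change is pinning the go build's conda installs to `${RAPIDS_VERSION}` the way build_rust.sh does. A sketch of what the install step in ci/build_go.sh might become (the exact command and channel variable aren't shown in the diff, so `rapids-conda-retry install` and `CPP_CHANNEL` here are assumptions based on typical RAPIDS CI scripts):

```shell
# Pin to the version being built so the cpp artifacts staged from S3
# are picked up instead of the last published release (24.12).
# RAPIDS_VERSION and CPP_CHANNEL are assumed to be set by the CI env.
rapids-conda-retry install \
  --channel "${CPP_CHANNEL}" \
  "libcuvs=${RAPIDS_VERSION}" \
  "libraft=${RAPIDS_VERSION}" \
  "cuvs=${RAPIDS_VERSION}"
```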
lgtm! thanks for all the hard work on this @ajit283
uses: rapidsai/shared-workflows/.github/workflows/[email protected]
with:
  build_type: pull-request
  node_type: "gpu-v100-latest-1"
I am not able to give a full CI/packaging review at the moment, but wanted to mention this.
Build jobs must be CPU-only unless the build takes less than 3-4 minutes. Otherwise, please create binary artifacts from the CPU build, upload them, and use them for GPU testing in a separate job. We are already too lax about this, and the GPU CI load is going up with all the Rust/Java/Go language bindings being added to cuVS.
It appears that the Go jobs pass in ~6 minutes but if that changes in the future we may need to work on this.
Approving for packaging and CI.
I'll echo @bdice's comment about needing to keep GPU workflow runtimes down, but for now this is reasonably fast.
We may also want to skip the go-build in the future depending on changed-files (e.g. don't build the go package if we're only updating jupyter notebooks, etc.), but those can all be done as followups.
/merge
This PR adds a Go API. It's far from complete and still a work in progress. Feel free to suggest improvements!