reduce wnn runtime by fetching the precomputed no_batch_* #102

Open
bio-la opened this issue Sep 15, 2023 · 0 comments
Labels: enhancement (New feature or request)

bio-la (Collaborator) commented Sep 15, 2023

In the current version of the integration workflow, if WNN is run on non-batch-corrected modalities, it computes neighbours for each modality on the fly in a "no_batch" way (i.e. on the precomputed dimensionality reduction such as PCA or LSI, if specified), using the same parameters as specified for each of the no_batch unimodal analyses.
The behaviour is different when WNN is calculated on batch-corrected unimodal data: in that case the pipeline expects each batch-corrected object to already exist, and this is correctly reflected in the decorator flow.

We need to modify the WNN step to fetch the precomputed no_batch neighbours instead of recomputing them on the fly, to reduce the runtime (currently the no_batch neighbours are computed twice per modality when WNN is called on no_batch data). A rough sketch of what this could look like is below.
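
A minimal sketch, assuming a muon/scanpy-based setup: copy the neighbour graphs from the already-computed no_batch unimodal objects into the MuData, then run WNN on those graphs. The file names, modality list, and graph keys here are hypothetical placeholders, not the pipeline's actual layout, and it assumes the precomputed objects contain the same cells in the same order as the MuData.

```python
import muon as mu
import scanpy as sc

# assumed combined multimodal object (name is a placeholder)
mdata = mu.read("multimodal.h5mu")

for mod in ["rna", "atac"]:
    # assumed per-modality output of the no_batch unimodal analysis,
    # where sc.pp.neighbors has already been run on the chosen dimred
    pre = sc.read_h5ad(f"{mod}_no_batch.h5ad")
    # reuse the existing kNN graph instead of recomputing it on the fly
    mdata[mod].obsp["distances"] = pre.obsp["distances"]
    mdata[mod].obsp["connectivities"] = pre.obsp["connectivities"]
    mdata[mod].uns["neighbors"] = pre.uns["neighbors"]

# WNN now runs on the precomputed per-modality neighbour graphs
mu.pp.neighbors(mdata, key_added="wnn")
```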

@bio-la added the enhancement (New feature or request) label on Sep 15, 2023