I hit this problem when running the embed stage of bio_embeddings:
```
ERROR:bio_embeddings.embed.embedder_interfaces:Error processing batch of 3 sequences: CUDA out of memory. Tried to allocate 972.00 MiB (GPU 1; 7.80 GiB total capacity; 4.91 GiB already allocated; 717.31 MiB free; 4.92 GiB reserved in total by PyTorch). You might want to consider adjusting the `batch_size` parameter. Will try to embed each sequence in the set individually on the GPU.
```
A final result is still produced, but I am not sure it was computed correctly.
Is there an option I can set to avoid this, e.g. reducing `batch_size` or running on multiple GPUs?
I did not find the relevant options in `examples/parameters_blueprint.yml`.
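For context, the warning in the log suggests shrinking `batch_size`. Below is a minimal, self-contained sketch of the batching idea, assuming (as the fallback message implies) that `batch_size` caps the total number of residues per batch and that an oversized sequence simply gets a batch of its own; this is an illustration of the technique, not the library's actual implementation.

```python
def make_batches(sequences, batch_size):
    """Group sequences so each batch holds at most `batch_size` residues.

    A sequence longer than `batch_size` gets a batch of its own, mirroring
    the embedder's fallback of embedding oversized sequences individually.
    (Hypothetical helper for illustration; not part of bio_embeddings.)
    """
    batches, current, current_len = [], [], 0
    for seq in sequences:
        # Flush the current batch if adding this sequence would overflow it.
        if current and current_len + len(seq) > batch_size:
            batches.append(current)
            current, current_len = [], 0
        current.append(seq)
        current_len += len(seq)
    if current:
        batches.append(current)
    return batches

# Toy sequences of lengths 30, 20, and 80 with a 50-residue budget:
seqs = ["MKV" * 10, "ACDE" * 5, "GG" * 40]
print([len("".join(b)) for b in make_batches(seqs, 50)])  # → [50, 80]
```

With a smaller `batch_size`, each batch fits in less GPU memory, at the cost of more forward passes.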