Hi,
I'm trying to precompute the cache for the 2019 pretrained model with a Google Colab GPU (12 GB), and I always get a CUDA memory allocation error.
How much memory do we need to run the script?
Regards,
Maurizio
I've spent some time playing around with the options of the script (06_precompute_cache.py) to work around the CUDA memory allocation error. I ended up with -g 0 -c 1000000 -b 100, which limits the search to the top 1 million rows, processed in batches of 100.
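For reference, the full invocation looked roughly like this (the flag readings below are my interpretation of those options; double-check against the script's own help output):

```bash
# Run from the repo root on a GPU runtime. My reading of the flags:
#   -g 0        use GPU 0
#   -c 1000000  limit the search to the top 1,000,000 rows
#   -b 100      process rows in batches of 100
python 06_precompute_cache.py -g 0 -c 1000000 -b 100
```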
I had to sign up for Google Colab Pro and use the GPU + High-RAM runtime type; otherwise it was too unstable.
Anyway, here is the link to the zipped cache file: cache.zip. Copy it to the 2019 vectors folder and it will be picked up automatically.
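If it helps, something like this should drop it in place (the path below is a placeholder; substitute wherever your 2019 vectors folder actually lives):

```bash
# Hypothetical path — adjust to the actual location of your 2019 vectors folder.
unzip cache.zip -d path/to/2019/vectors/
```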