I have a large collection of 116 million passages and am trying to build a ColBERT index for them using the indexing code from the README. To manage the huge size, I am indexing the passages in batches of 1,000. However, the indexing step appears to be stuck at the encoding stage:
Is it supposed to take this long, or am I doing something wrong?
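For context, my batching is roughly as follows (a minimal sketch with a dummy collection; `batched` and the passage strings are placeholders for my actual loader, and the real indexing call per batch is omitted):

```python
from itertools import islice

def batched(passages, batch_size=1000):
    """Yield successive fixed-size batches from an iterable of passages."""
    it = iter(passages)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Dummy collection standing in for the real 116M passages:
dummy = [f"passage {i}" for i in range(2500)]
sizes = [len(b) for b in batched(dummy)]
# sizes == [1000, 1000, 500]; each batch is then passed to the indexer
```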