Thanks for your interest in our work! For retrieval, we use ColBERT-v2, a multi-vector retrieval model, to build the index and search over the Wikipedia dump. Indexing and searching together take roughly ten-plus hours.
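For anyone trying to reproduce this, here is a minimal sketch of indexing and searching with the stanford-futuredata ColBERT package. It assumes a TSV collection of passages; the experiment name, index name, and file paths are placeholders, not the exact setup used in this repo:

```python
from colbert import Indexer, Searcher
from colbert.infra import Run, RunConfig, ColBERTConfig

if __name__ == "__main__":
    # Build the index once over the Wikipedia passage collection.
    # collection.tsv is assumed to be "pid \t passage text" per line.
    with Run().context(RunConfig(nranks=1, experiment="wiki")):
        config = ColBERTConfig(nbits=2)  # 2-bit residual compression
        indexer = Indexer(checkpoint="colbert-ir/colbertv2.0", config=config)
        indexer.index(name="wiki.nbits2", collection="collection.tsv")

    # Search the index with a sample query.
    with Run().context(RunConfig(experiment="wiki")):
        searcher = Searcher(index="wiki.nbits2")
        results = searcher.search("who wrote the declaration of independence?", k=10)
        for passage_id, rank, score in zip(*results):
            print(rank, f"{score:.2f}", searcher.collection[passage_id])
```

On a single GPU, most of the wall-clock time goes into encoding the collection during `indexer.index`; searching afterwards is comparatively fast.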
Sure. I used Pyserini for retrieval and found that it requires 200+ hours. Could the processed data be provided directly via Git LFS or some cloud service?
Hi @Hannibal046, how long did it take to perform this retrieval on all the training data?