This repository has been archived by the owner on Mar 1, 2024. It is now read-only.

Biencoder with GPU RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select #110

Open
acadTags opened this issue Feb 17, 2022 · 3 comments

Comments


acadTags commented Feb 17, 2022

Hi, thanks for your code! When I set no_cuda in biencoder_wiki_large.json to True and then run python blink/run_benchmark.py, it returns the error below. Is there anything that I missed? Best regards, A.

Traceback (most recent call last):
  File "blink/run_benchmark.py", line 81, in <module>
    ) = main_dense.run(args, logger, *models)
  File "/home/username/BLINK/blink/main_dense.py", line 429, in run
    biencoder, dataloader, candidate_encoding, top_k, faiss_indexer
  File "/home/username/BLINK/blink/main_dense.py", line 251, in _run_biencoder
    context_input, None, cand_encs=candidate_encoding  # .to(device)
  File "/home/username/BLINK/blink/biencoder/biencoder.py", line 160, in score_candidate
    token_idx_ctxt, segment_idx_ctxt, mask_ctxt, None, None, None
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/BLINK/blink/biencoder/biencoder.py", line 63, in forward
    token_idx_ctxt, segment_idx_ctxt, mask_ctxt
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/BLINK/blink/common/ranker_base.py", line 30, in forward
    token_ids, segment_ids, attention_mask
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 707, in forward
    embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 251, in forward
    words_embeddings = self.word_embeddings(input_ids)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 126, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/home/username/anaconda3/envs/blink37/lib/python3.7/site-packages/torch/nn/functional.py", line 1814, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select

Originally posted by @acadTags in #83 (comment)
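For context, this RuntimeError is PyTorch's generic device-mismatch error: the embedding weight matrix sits on one device while the `input_ids` index tensor passed to `torch.embedding` sits on another. A minimal sketch of the pattern and the usual fix (generic PyTorch, not BLINK's actual code):

```python
# Minimal sketch: the model's weights and the input tensors must live on
# the same device, otherwise torch.embedding raises the device-mismatch
# RuntimeError seen in the traceback above.
import torch
import torch.nn as nn

model = nn.Embedding(num_embeddings=100, embedding_dim=8)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

token_ids = torch.tensor([[1, 2, 3]])  # dataloaders typically yield CPU tensors
token_ids = token_ids.to(device)       # align devices before the forward pass

out = model(token_ids)                 # no device-mismatch error now
```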


zhn1010 commented Mar 11, 2022

I have the same problem. Have you found the solution?

acadTags (Author) commented
> I have the same problem. Have you found the solution?

Not solved yet. Maybe biencoder inference was just designed to run on CPU; with the pre-computed entity embeddings it took around 1 hour.


zhn1010 commented Mar 11, 2022

I found the answer in this pull request.
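The exact change in that pull request isn't quoted here, but the commented-out `.to(device)` next to `cand_encs=candidate_encoding` in the traceback suggests the usual workaround: move the pre-computed candidate encodings onto the same device as the biencoder before scoring. A hedged sketch with illustrative names (`cand_encs`, `ctxt_emb`, and `encoder` are stand-ins, not BLINK's API):

```python
# Illustrative sketch only: align a pre-computed encoding matrix with the
# model's device before the dot-product scoring step.
import torch
import torch.nn as nn

encoder = nn.Linear(8, 8)                      # stand-in for the biencoder
device = next(encoder.parameters()).device     # wherever the model's weights live

cand_encs = torch.randn(10, 8)                 # pre-computed encodings, loaded on CPU
ctxt_emb = torch.randn(1, 8).to(device)        # context embedding on the model's device

scores = ctxt_emb @ cand_encs.to(device).t()   # move candidates before the matmul
```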
