RuntimeError: Unknown builtin op: aten::scaled_dot_product_attention. #134

Open
zeng798473532 opened this issue Sep 20, 2024 · 3 comments

@zeng798473532

I ran `python -m apps.infer` with pytorch==1.13 and python==3.8, and got the error below:

  File "/nfs/codes/PIFu-series/ECON/apps/infer.py", line 193, in <module>
    sapiens_normal = sapiens_normal_net.process_image(
  File "/nfs/codes/PIFu-series/ECON/apps/sapiens.py", line 65, in process_image
    normal_model = ModelManager.load_model(Config.CHECKPOINTS[normal_model_name], self.device)
  File "/nfs/codes/PIFu-series/ECON/apps/sapiens.py", line 37, in load_model
    model = torch.jit.load(checkpoint_path)
  File "/home/zlz/miniconda3/envs/econ/lib/python3.8/site-packages/torch/jit/_serialization.py", line 162, in load
    cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError: 
Unknown builtin op: aten::scaled_dot_product_attention.
Here are some suggestions: 
        aten::_scaled_dot_product_attention

The original call is:
  File "code/__torch__/mmpretrain/models/utils/attention.py", line 29
    k = torch.select(qkv0, 0, 1)
    v = torch.select(qkv0, 0, 2)
    x = torch.scaled_dot_product_attention(q, k, v)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    input = torch.reshape(torch.transpose(x, 1, 2), [_0, _2, 1536])
    _5 = (proj_drop).forward((proj).forward(input, ), )

How can I solve this?

@YuliangXiu
Owner

@zeng798473532 You can check the installation documentation at https://github.com/facebookresearch/sapiens

@learning-mamba

I encountered the same issue. Do you have a solution?

@chill781

chill781 commented Oct 8, 2024

You shouldn't use PyTorch 1.13.1; you need PyTorch 2.0 or higher, because `scaled_dot_product_attention` was only introduced in PyTorch 2.0. Additionally, you need to update your CUDA and PyTorch3D builds so that they are compatible with the new PyTorch version, and that should fix the issue.
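
For anyone hitting this later, a minimal sketch of a version guard you could put in front of the `torch.jit.load` call, assuming you want a clearer failure message before upgrading. The `load_sapiens_checkpoint` helper name is hypothetical; it just mirrors what `ModelManager.load_model` in `apps/sapiens.py` does per the traceback above:

```python
import torch
import torch.nn.functional as F

def load_sapiens_checkpoint(checkpoint_path, device="cpu"):
    # The exported Sapiens TorchScript graph calls
    # aten::scaled_dot_product_attention, which only exists in the
    # PyTorch 2.0+ runtime, so fail early with an actionable message.
    if not hasattr(F, "scaled_dot_product_attention"):
        raise RuntimeError(
            f"PyTorch {torch.__version__} lacks scaled_dot_product_attention; "
            "upgrade to PyTorch >= 2.0 (with matching CUDA and PyTorch3D "
            "builds) to load this checkpoint."
        )
    return torch.jit.load(checkpoint_path, map_location=device)
```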
