
[ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for the node ArgMax_1094:ArgMax(11) #10068

Closed
dengfenglai321 opened this issue Dec 17, 2021 · 4 comments

Comments

@dengfenglai321

Hi, I run:

    onnx_session1 = onnxruntime.InferenceSession("./pretrained/textmodel.onnx")

and it raises the error below:

    File "D:\anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 312, in _create_inference_session
      sess.initialize_session(providers, provider_options)
    onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for the node ArgMax_1094:ArgMax(11)

ONNX - 1.7.0
onnxruntime - 1.7.0

Could you help me solve this problem?

Here is the ONNX model I converted:

[WeCom screenshot of the exported model graph]

@ytaous
Contributor

ytaous commented Dec 17, 2021

Hi, can you please check what type you are using for ArgMax? It is likely not supported for opset 11.
ref - #9760

Search for ArgMax under either the CPU or CUDA provider:
ref - https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md

If CUDA, there's a PR in progress - #9700

@dengfenglai321
Author


The type used for ArgMax's input is int64, and it can only be int64. Could you give me some advice?
Which opset supports int64 for ArgMax?

@dengfenglai321
Author

I solved it: I only changed ArgMax's input to int32, everything else stays the same, and it works!

@greyovo

greyovo commented Oct 25, 2024

To make it clearer: it is a matter of converting the input tensor to int32 when converting the model, before executing torch.onnx.export, i.e.:

import torch
from torch import Tensor

def export_text_encoder():
    # text_encoder and tokenizer come from the surrounding MobileCLIP setup
    text_encoder.eval()
    text = "A Diagram"
    input_tensor: Tensor = tokenizer(text)
    input_tensor = input_tensor.to(torch.int32)   # <---- here!
    model_text = 'mobileclip-text-encoder.onnx'
    torch.onnx.export(text_encoder, input_tensor, model_text)
