System Info
Installed packages:
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
I converted the summarizer model to ONNX and then ran it:
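The original code block did not survive in this copy, so what follows is a minimal sketch of the described workflow rather than the reporter's exact script. The `google/bigbird-pegasus-large-arxiv` checkpoint is an assumption (BigBird is the architecture discussed in this thread), and the export path shown is optimum's `ORTModelForSeq2SeqLM`:

```python
# Minimal sketch (not the reporter's exact script): export a seq2seq
# summarization model to ONNX with optimum and run it in a pipeline.
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM

model_id = "google/bigbird-pegasus-large-arxiv"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch weights to ONNX on the fly
# (older optimum releases used from_transformers=True instead)
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)

summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)
print(summarizer("Long article text goes here ...", max_length=60)[0]["summary_text"])
```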
I also tried the OpenVINO runtime:
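Again, the exact code is missing from this copy; a comparable sketch using `OVModelForSeq2SeqLM` from `optimum.intel` (same assumed checkpoint) would be:

```python
# Minimal sketch: the same model run through the OpenVINO runtime
# via optimum-intel; checkpoint name is an assumption as above.
from transformers import AutoTokenizer, pipeline
from optimum.intel import OVModelForSeq2SeqLM

model_id = "google/bigbird-pegasus-large-arxiv"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch weights to OpenVINO IR on the fly
ov_model = OVModelForSeq2SeqLM.from_pretrained(model_id, export=True)

summarizer = pipeline("summarization", model=ov_model, tokenizer=tokenizer)
print(summarizer("Long article text goes here ...", max_length=60)[0]["summary_text"])
```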
Expected behavior
This is supposed to provide faster inference than the original PyTorch model. Neither the ONNX nor the OpenVINO runtime improves speed; in fact, inference time increases severalfold.
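For context, one way to quantify such a slowdown is a side-by-side timing of the PyTorch and ONNX Runtime backends. This is a hedged sketch, not the reporter's benchmark; the checkpoint and input text are placeholders, and results will vary with hardware:

```python
# Rough timing sketch comparing PyTorch vs. ONNX Runtime generation latency.
import time
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

model_id = "google/bigbird-pegasus-large-arxiv"  # assumed checkpoint
text = "Long article text goes here. " * 50      # placeholder input

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(text, return_tensors="pt", truncation=True)

def timed_generate(model, label):
    model.generate(**inputs, max_length=60)  # warm-up call
    start = time.perf_counter()
    model.generate(**inputs, max_length=60)  # timed call
    print(f"{label}: {time.perf_counter() - start:.2f}s")

timed_generate(AutoModelForSeq2SeqLM.from_pretrained(model_id), "pytorch")
timed_generate(ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True), "onnxruntime")
```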
The text was updated successfully, but these errors were encountered:

Thanks for letting us know about this. As you may know, support for exporting BigBird architectures was removed in huggingface/optimum#778 (following huggingface/optimum#754), and for that reason it is no longer supported by optimum-intel. Support will be added back once huggingface/optimum#754 is solved.