[Bug] Cannot export ESPNetV2 models #2732
Please upgrade Paddle to the develop version and try again.
Hmmm, my disk is almost full, and I do not speak Chinese. Is there an English version of the web page?
No change with version 2.7.0.
It works if you do not set the input shape.
I first tried to infer images with version 2.7.0: fails.
Tried to export with an input size: fails.
Exported without an input size and then exported to ONNX: crashes.
I am not sure what you mean in your third question.
It's not a question, rather help for you. You guessed it might be related to image size, so I have now given you one picture that works and one that does not work after resizing. I included the resized image (the original is in the repository) and the error message generated when processing it.
Thank you. We will update you as soon as the bug is fixed.
Please see #3003; the bug is fixed. By the way, we assign the input shape during the export process, so it is necessary to keep the image shape the same as the export shape during prediction to get a correct result.
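To illustrate the point above: since the input shape is baked into the exported model, any image fed to the predictor must first be resized to that shape. Below is a minimal, hedged sketch using a hand-rolled nearest-neighbor resize in NumPy; in a real pipeline you would use `cv2.resize` or PaddleSeg's own preprocessing, and `(512, 512)` is a hypothetical export shape, not one taken from this issue.

```python
import numpy as np

def resize_to_export_shape(img, export_hw):
    """Nearest-neighbor resize so an HWC image matches the (H, W) fixed at export time.

    img       -- np.ndarray of shape (H, W, C)
    export_hw -- (export_height, export_width) assigned via --input_shape at export
    """
    h, w = img.shape[:2]
    eh, ew = export_hw
    # Map each output pixel back to its nearest source pixel.
    rows = np.arange(eh) * h // eh
    cols = np.arange(ew) * w // ew
    return img[rows][:, cols]

# Hypothetical usage: force a mismatched image to the export shape before inference.
img = np.zeros((100, 200, 3), dtype=np.uint8)
resized = resize_to_export_shape(img, (512, 512))
```

If the prediction image is passed through at its original size instead, the fixed-shape exported graph will either error out or produce wrong results, which matches the behavior reported in this thread.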
This bug has been fixed. |
System:
Windows 10
Python 3.9.13
paddle-bfloat 0.1.7
paddle2onnx 1.0.1
paddlefsl 1.1.0
paddlehub 2.3.0
paddlenlp 2.4.1
paddlepaddle 2.3.2
paddleseg 2.6.0
Executing the following command to export ESPNetV2:
leads to the following error message: