ONNX export questions #50
Comments
Just ran into the same issue - would be great to see a guide for how to do this or a fix to the ONNX export!
I too would like test code showing how the ONNX model from the Hugging Face demo performs! Please share if anyone has written such code.
@RohaanA :

import onnxruntime as ort
import numpy as np
import cv2

# Load the exported ONNX model
session = ort.InferenceSession('/content/yolow-l.onnx')

# Preprocess the input image
image = cv2.imread('/content/test.jpg')
image = cv2.resize(image, (640, 640))      # Resize to the input size expected by the YOLO model
image = image.astype(np.float32) / 255.0   # Normalize pixel values to [0, 1]
image = np.transpose(image, (2, 0, 1))     # Change data layout from HWC to CHW
image = np.expand_dims(image, axis=0)      # Add batch dimension -> 1x3x640x640

# Run inference
input_name = session.get_inputs()[0].name
output_names = [o.name for o in session.get_outputs()]
outputs = session.run(output_names, {input_name: image})

# Keep an unnormalized BGR copy of the image for drawing
output_image = cv2.imread('/content/test.jpg')
output_image = cv2.resize(output_image, (640, 640))

# Unpack the outputs (indices adjusted for this export's output structure)
class_ids = outputs[0][0]
bbox = outputs[1][0]
scores = outputs[2][0]
additional_info = outputs[3][0]

# Draw detections above the score threshold
score_threshold = 0.2
for i, score in enumerate(scores):
    if score > score_threshold and additional_info[i] != -1:
        x_min, y_min, x_max, y_max = bbox[i]
        start_point = (int(x_min), int(y_min))
        end_point = (int(x_max), int(y_max))
        color = (0, 255, 0)
        cv2.rectangle(output_image, start_point, end_point, color, 2)
        class_id = class_ids[i]
        label = f"Class: {class_id}, Score: {score:.2f}"
        cv2.putText(output_image, label, (int(x_min), int(y_min) - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

cv2.imshow("Detected Objects", output_image)  # Show the annotated copy, not the normalized input tensor
cv2.waitKey(0)
cv2.destroyAllWindows()
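A note on the unpacking above: the names and order of the outputs depend on how the model was exported, so before hard-coding indices it may be worth printing the output metadata once. A minimal sketch, reusing the same session object as the script above:

for out in session.get_outputs():
    print(out.name, out.shape, out.type)  # confirm which output holds boxes, scores, class ids, etc.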
@jainrahulsethi Did you use the ONNX export from the Hugging Face Space? I just tried this code in a Colab notebook and got the same error.
Hey, yes, I tried the ONNX export from the Hugging Face Space and it works perfectly for me with the above code.
@jainrahulsethi Would you mind sharing your colab notebook? I'm scratching my head trying to figure out what could be different between our two environments. I assume all packages (numpy, onnxruntime) are the latest version. |
Here you go. In my case, the model detected only red cars.
@jainrahulsethi It does work! Thank you!
I also tried the code of @jainrahulsethi on a 2.8 GHz Quad-Core Intel Core i7. I have the latest 1.16.3 version of onnxruntime (while Linux already has 1.17); is that why I get this error? (Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running NonMaxSuppression node. Name:'/NonMaxSuppression' Status Message: non_max_suppression.cc:91 PrepareCompute boxes and scores should have same spatial_dimension)
Can you try it out on Colab and find out the differences with respect to your machine?
Thanks & Regards,
Rahul Jain
@jainrahulsethi Do you know how to provide the classes to detect as a runtime input? I assume your example embedded them in the model at export time, right?
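One way to check whether the exported model still accepts class texts at runtime, or only an image tensor, is to list its inputs; if the Hugging Face demo bakes the vocabulary in at export time, a single image input is expected. A minimal sketch, assuming the same model path used earlier in the thread:

import onnxruntime as ort

session = ort.InferenceSession('/content/yolow-l.onnx')
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)  # a single image-shaped input suggests the classes were embedded at export time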
@pierre1618 This will sound odd, but for me the issue was in the export step. In order to get a working ONNX model, I had to do the following in the demo:
Indeed, @csmithxc's answer fixed my problem too.
How many classes will it support?
Thanks @csmithxc, that was actually the problem (my model did not work on Colab either, as I did not follow your steps in the demo).
Hi all (@csmithxc, @lxfater, @jainrahulsethi, @RohaanA, @miguelaeh), you can export the ONNX model through the
This issue will be closed since there are no further updates related to the main topic, and the error has already been fixed. Thanks for your interest. If you have any questions about YOLO-World in the future, you're welcome to open a new issue.
Hello, I've been trying to experiment with the ONNX export from the Hugging Face demo.
I spun up a quick ONNX Runtime (ORT) test script, but I get a problem when executing it:
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running NonMaxSuppression node. Name:'/NonMaxSuppression' Status Message: non_max_suppression.cc:91 onnxruntime::NonMaxSuppressionBase::PrepareCompute boxes and scores should have same spatial_dimension.
It seems like the box and score lists are mismatched in length during the NMS. That node should be very close to the end of execution, so I assume that if I had messed up the inputs, the model would have crashed much earlier.
Can you provide some information if you have any idea what's going on? I can share more details on the test code if needed, but it's very basic (load the image, resize to 640x640, reshape to 1x3x640x640, create an input dict with "images" as the input name, and run the model).
I tried with and without standard RGB normalization, as I wasn't sure whether it was required, but both give the same error.
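For reference, a minimal sketch of the kind of test script described above; the model path, image path, and the "images" input name are taken from this thread, not from an official example:

import onnxruntime as ort
import numpy as np
import cv2

session = ort.InferenceSession('yolow-l.onnx')
print(ort.__version__, [(i.name, i.shape) for i in session.get_inputs()])  # sanity-check runtime version and expected input

img = cv2.imread('test.jpg')                   # BGR, HWC, uint8
img = cv2.resize(img, (640, 640))
img = img.astype(np.float32) / 255.0           # optional normalization; the error was reported either way
img = np.transpose(img, (2, 0, 1))[None, ...]  # reshape to 1x3x640x640
outputs = session.run(None, {'images': img})   # this is where the NonMaxSuppression error was raised with the broken export
print([np.asarray(o).shape for o in outputs])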