[ONNX] Align behavior of ReduceL2-11, 13, 18 with original framework #22741
Conversation
Hello @LucaTamSapienza, is this PR ready for review?
Hi @p-wysocki. The code compiles, but I have some pending issues. I have described my problems in the PR description. I was waiting for someone to clarify the doubts I encountered. I took a look at the softmax.cpp file and at this conversation to see how to implement the features of the different versions. Unfortunately, I still have some doubts about …
Hi @LucaTamSapienza, thanks for your contribution!
Please rebase your solution onto a fresh master, because I've merged change #22462 with a fix for all Reduce* operations, which may affect your test cases.
Also, could you enable the tests here:
"OnnxBackendNodeModelTest.test_reduce_l2_do_not_keepdims_example_cpu",
They should work after rebasing to a fresh master
Sure, I'll sync && pull immediately.
Do you mean for v11, 13, 18?
@gkrivor, thank you very much for your help and for linking me to this PR; I have properly aligned versions 13 and 18. I noticed that you left a comment under version 11. Perhaps I should remove the call from the reduce.cpp file and simply reuse the implementation from v1 by calling it in reduce.hpp within v11? Something like …
For this correction, I took inspiration from the prototxt of the …
I've deleted the ReduceL2 references from tickets 99962 && 99968 inside init.py, and for 99968 I removed the declaration about reduce_l2. I just have some doubts about this …
Hi @LucaTamSapienza, thanks for the update! Regarding opset-11: actually, I think it is a good idea, and I also thought about something like this. But currently we have an internal discussion about "following the standards", so I suggest keeping it as is. When we decide how strictly we want to support standards, I can easily extend/reduce the code base. Also, I don't recommend touching this place, because several developers are working on reduce* operations right now and you may introduce additional merge conflicts between you and them. About "OnnxBackendNodeModelTest.test_reduce_l2_empty_set_cpu": it depends on the test result. Enable it if it becomes passing (you will see this in the test results), and leave it if it is still xfailed.
You're right @gkrivor, I've just resolved some merge conflicts I had; now everything should be fine.
I ran …
build_jenkins
build_jenkins
build_jenkins
This PR will be closed in a week because of 2 weeks of no activity.
…penvinotoolkit#22741)

### Details:
- I've aligned the ReduceL2 operation with opsets 11, 13, and 18. I have some doubts about how to implement support for bfloat16 tensors for opset 13, and also some doubts about opset 18. I've registered the function inside ops_bridge.cpp, created test models, and added them inside onnx_import.in.cpp.

### Tickets:
- Closes openvinotoolkit#20560

Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Georgy Krivoruchko <georgy.krivoruchko@intel.com>
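For readers following along, the operation being aligned here, ONNX ReduceL2, computes the square root of the sum of squares along the reduced axes. A minimal NumPy sketch of its reference semantics (the `reduce_l2` helper is illustrative only, not OpenVINO or ONNX source code):

```python
import numpy as np

def reduce_l2(data, axes=None, keepdims=True):
    """Mimic ONNX ReduceL2: L2 norm over `axes` (all axes when None)."""
    axes_t = tuple(axes) if axes is not None else None
    return np.sqrt(np.sum(np.square(data), axis=axes_t, keepdims=keepdims))

x = np.array([[3.0, 4.0], [6.0, 8.0]])
print(reduce_l2(x, axes=[1], keepdims=False))  # -> [ 5. 10.]
print(reduce_l2(x).shape)                      # keepdims=True -> (1, 1)
```

Tests such as `test_reduce_l2_do_not_keepdims_example_cpu` exercise exactly this `keepdims` behavior: with `keepdims=False` the reduced axes are dropped from the output shape instead of being kept as size-1 dimensions.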