fix: ivy.linalg.matrix_norm() for paddle backend #28500
Conversation
4e1ecba to a850d14
Hey @vedpatwardhan, new observation (which might be a coincidence but might also point to the issue with the paddle backend): whenever … EDIT: …
I'd say let's create a minimal example of the incorrect gradients and open an issue on the paddle repo regarding this, but probably what we'll get as the reply is that …
Hey @vedpatwardhan, I implemented the workaround and it works, with …
Hey @Kacper-W-Kozdon, it seems like there are a few other test failures that have arisen as a result of these changes (e.g. this). Could you please take a look at this and at the other failures pointed out in the workflow, and check whether any of them are related? Thanks 😄
Oh! My bad, thanks for checking that thoroughly; the changes to …
lgtm, there don't seem to be any additional failures introduced. Feel free to merge the PR @Ishticode, thanks @Kacper-W-Kozdon 😄
PR Description

- `dtype` argument passed through the `@infer_dtype` decorator and `ivy.astype(x, dtype).as_native()`.
- `dtype` argument added to array and container methods.
- `float32` used for the `out` argument passed down to the torch backend, since the `out` argument has to be of the same dtype as the return of `torch.linalg.matrix_norm()` (`float32`).

Current issues:

- Test failures for the `max()` and `min()` functions (I will write just about `max()`) due to different implementations between backends (look up `paddle.max()` vs `paddle.amax()` or `torch.max()` vs `torch.amax()`). Tensorflow, numpy, torch and jax pass all the tests alike.

Related Issue
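The infer-and-cast pattern described in the first bullet can be sketched roughly as follows. This is a hypothetical stand-in written with numpy, not Ivy's actual `@infer_dtype` decorator; the helper name and its exact behavior are assumptions for illustration only.

```python
import functools
import numpy as np

def infer_dtype(fn):
    """Hypothetical sketch of an infer-dtype decorator: if `dtype` is not
    supplied, default it to the input array's own dtype before calling fn."""
    @functools.wraps(fn)
    def wrapper(x, *args, dtype=None, **kwargs):
        if dtype is None:
            dtype = np.asarray(x).dtype
        return fn(x, *args, dtype=dtype, **kwargs)
    return wrapper

@infer_dtype
def matrix_norm(x, dtype=None):
    # Cast first, then compute, mirroring ivy.astype(x, dtype).as_native().
    return np.linalg.norm(np.asarray(x, dtype=dtype), ord="fro", axis=(-2, -1))
```

With this sketch, `matrix_norm(np.eye(3))` computes the Frobenius norm in the input's own dtype, while `matrix_norm(np.eye(3), dtype=np.float64)` casts before computing.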
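The `out`-dtype constraint from the third bullet can be demonstrated directly in torch; this assumes recent torch semantics, where `torch.linalg.matrix_norm` requires a preallocated `out` tensor to match the result dtype.

```python
import torch

x = torch.eye(3)                        # float32 input
# torch.linalg.matrix_norm defaults to the Frobenius norm and returns a
# float32 scalar for float32 input, so a preallocated `out` tensor must
# use that same dtype.
out = torch.empty((), dtype=torch.float32)
torch.linalg.matrix_norm(x, out=out)    # writes sqrt(3) into `out`
```

Passing an `out` tensor of a different dtype (e.g. `float64`) raises a runtime error, which is why the backend casts to `float32` before handing `out` down to torch.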
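The `max()`/`amax()` divergence mentioned under "Current issues" can be illustrated with torch (paddle's pair behaves analogously); a minimal sketch:

```python
import torch

x = torch.tensor([[1.0, 5.0],
                  [3.0, 2.0]])

# torch.amax reduces over one or more dims and returns just the values.
flat_max = torch.amax(x, dim=(0, 1))    # 0-dim tensor holding 5.0

# torch.max with a `dim` argument instead returns a (values, indices)
# namedtuple and accepts only a single dim, so the two functions are not
# interchangeable inside a backend implementation.
values, indices = torch.max(x, dim=0)   # values [3., 5.], indices [1, 0]
```

The two functions also differ in how gradients are routed through tied maxima, which is the kind of backend-specific behavior that can surface as test failures like the ones described above.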
Continuation of the issue #28314 and PR #28323
Closes #28314
Checklist
Socials