ENH: adding bitwise function in paddle tensor.py #17416
Conversation
Frontend Task Checklist
IMPORTANT NOTICE 🚨: The Ivy Docs represent the ground truth for the task descriptions, and this checklist should only be used as a supplementary item to aid with the review process.
Great job @MuhammadNizamani 🚀🚀
I've made a small comment; maybe I'm mistaken about the to_ivy_and_back thing, so I'm also asking whether there are any internal changes I'm not aware of. Please look at my comment, I'd like to hear your thoughts.
"paddle", | ||
) | ||
def bitwise_xor(self, y, out=None, name=None): | ||
y_ivy = _to_ivy_array(y) |
Is there a specific reason why we use _to_ivy_array? I remember that we used @to_ivy_arrays_and_back() for this conversion, but I can't find it in the rest of the file, so I'm checking why it isn't there.
@KareemMAX we are implementing a method on the Tensor class here, and we take our first input via self, which lets us convert that input into an ivy array, so there is no need for @to_ivy_arrays_and_back().
I am using _to_ivy_array because if I used @to_ivy_arrays_and_back() it would become redundant with self._ivy_array.
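For reference, here is a minimal sketch of the pattern being described; the Tensor class and the _to_ivy_array helper below are simplified stand-ins for illustration, not the actual code in tensor.py:

```python
import ivy


def _to_ivy_array(x):
    # simplified stand-in for the module-level helper: unwrap a frontend
    # Tensor if given one, otherwise convert the input to an ivy.Array
    return x._ivy_array if hasattr(x, "_ivy_array") else ivy.asarray(x)


class Tensor:
    # simplified stand-in for the paddle frontend Tensor class
    def __init__(self, array):
        self._ivy_array = ivy.asarray(array)

    def bitwise_xor(self, y, out=None, name=None):
        # self._ivy_array already holds an ivy.Array, so only the second
        # operand needs converting; wrapping the method with
        # @to_ivy_arrays_and_back() would duplicate this conversion
        y_ivy = _to_ivy_array(y)
        return ivy.bitwise_xor(self._ivy_array, y_ivy, out=out)
```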
Yup, I got mixed up here.
But I'm still curious why other functions call paddle_frontend, like tensor.multiply():
https://github.com/unifyai/ivy/pull/17416/files#diff-4b9f6f2107068e5624a6f5d66bae5dae58c467a34f8c5aad65df3478ce4a5abbR187
@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
def multiply(self, y, name=None):
    return paddle_frontend.multiply(self, y)
The implementation appears to be incorrect based on my understanding: paddle.tensor.Tensor should receive functions from the backend rather than the frontend. However, the above function is using paddle.tensor.tensor.math.py to access the backend.
paddle_frontend is the ivy implementation of the paddle frontend. We are not calling any paddle functions, if that is what you mean.
I've asked @hmahmood24 and he said the following:
since instance methods operate on the instance of the frontend class (paddle.Tensor in this case), which has an _ivy_array attribute that holds the ivy.Array already, there are only 2 ways we go about writing these instance methods:
1. Calling ivy.<func> directly
2. Calling a frontend function of the same functionality, paddle_frontend.<func>
That's why I think this implementation is wrong and multiply() has the correct implementation.
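For contrast, here is a rough sketch of what the frontend-function approach looks like; the import path for to_ivy_arrays_and_back and the function body are assumptions based on how other ivy frontends are laid out, not the exact code in the repo:

```python
import ivy
# assumed location of the conversion decorator in the paddle frontend
from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back


# module-level frontend function: the decorator converts frontend Tensor
# arguments to ivy.Array before the call and wraps the result back into a
# frontend Tensor afterwards, so the body only ever touches ivy functions
@to_ivy_arrays_and_back
def multiply(x, y, name=None):
    return ivy.multiply(x, y)
```

The Tensor.multiply instance method quoted above can then simply delegate with return paddle_frontend.multiply(self, y), because the conversion already happens inside the frontend function.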
@KareemMAX he mentioned that you can also implement it like ivy.<func>, so I used that implementation.
@KareemMAX done
"paddle", | ||
) | ||
def bitwise_xor(self, y, out=None, name=None): | ||
y_ivy = _to_ivy_array(y) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yup, I got mixed up here
But I'm still curious that in other functions it's calling paddle_frontend
like in tensor.multiply()
https://github.com/unifyai/ivy/pull/17416/files#diff-4b9f6f2107068e5624a6f5d66bae5dae58c467a34f8c5aad65df3478ce4a5abbR187
@KareemMAX sir, when are you going to merge this PR?
@MuhammadNizamani once we fix the current issue, your PR will be good to merge after running the tests. Please refer to the latest comment I made on the unresolved conversation.
LGTM! Merging now. Thanks for your work @MuhammadNizamani 🚀🚀🚀
Co-authored-by: KareemMAX <kreem.morsy@hotmail.com>
Adding bitwise_xor, Closes #17414
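For completeness, a small usage sketch of the newly added method; it assumes the paddle frontend's to_tensor constructor and the usual ivy import paths, and is illustrative rather than taken from the PR:

```python
import ivy
import ivy.functional.frontends.paddle as paddle_frontend

# any installed backend works; numpy is chosen here purely for illustration
ivy.set_backend("numpy")

x = paddle_frontend.to_tensor([1, 2, 3])
y = paddle_frontend.to_tensor([3, 2, 1])

# calls the Tensor.bitwise_xor instance method added in this PR
print(x.bitwise_xor(y))
```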