
Aml 20867 paddle frontend var #20932

Merged

Conversation

@Aml-Hassan-Abd-El-hamid (Contributor) commented Jul 28, 2023

close #20867

@ivy-leaves added the Paddle Frontend label (Developing the Paddle Frontend; checklist triggered by commenting add_frontend_checklist) Jul 28, 2023
@Aml-Hassan-Abd-El-hamid (Contributor, Author) commented Jul 31, 2023

Hi @hirwa-nshuti,

I'm sorry to bother you, but I saw that you worked on this file before me, which is why I'm asking for your help here; I hope that's okay with you.

There are some failing tests and I'd really appreciate your feedback: what changes should I be making? Can you please help me with that?

Thank you.

@fnhirwa (Contributor) left a comment

Hey @Aml-Hassan-Abd-El-hamid

Sure, no problem. I took a look and pointed out the cause of the failures; hope you'll update your PR to get the tests passing 😊

    backend_fw,
    test_flags,
):
    input_dtype, x, axis = dtype_and_x
@fnhirwa commented on this diff:

The issue is here: when unpacking the values, this helper function returns a tuple of 4 variables, but you're unpacking only 3.

Suggested change:
-    input_dtype, x, axis = dtype_and_x
+    input_dtype, x, axis, _ = dtype_and_x
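For context, the fourth value returned by this helper is the correction term; a fuller sketch (mirroring the test posted later in this thread) keeps it and maps it onto the unbiased flag instead of discarding it:

    input_dtype, x, axis, correction = dtype_and_x
    # ... later in the call: unbiased=bool(correction)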

@@ -45,3 +45,13 @@ def median(x, axis=None, keepdim=False, name=None):
        else ivy.astype(x, ivy.float32)
    )
    return ivy.median(x, axis=axis, keepdims=keepdim)


@with_supported_dtypes({"2.5.0 and below": ("float16", "float32", "float64")}, "paddle")
@fnhirwa commented on this diff:

The version value should be 2.5.1
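That is, the decorator would become:

@with_supported_dtypes({"2.5.1 and below": ("float16", "float32", "float64")}, "paddle")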


@Aml-Hassan-Abd-El-hamid (Contributor, Author) commented:

Thank you very much for your help, @hirwa-nshuti. I made the requested changes, but I still have failing tests (there are now more than before), so is that okay, or do I need to change more things?

@ZoeCD assigned fnhirwa and unassigned ZoeCD Aug 7, 2023
@ZoeCD (Contributor) commented Aug 7, 2023

> I tried my best but it kept failing in run_tests (1), run_tests (58) and run_tests (75), so can you please take a look and tell me what changes I need to do?

Hello! You don't need to worry about failing tests that are not related to your PR, so you can ignore run_tests (58) and run_tests (75). Regarding your failing tests, I get the following error for all backends:

RuntimeError: (NotFound) The kernel with key (CPU, Undefined(AnyLayout), float16) of kernel `mean` is not registered. Selected wrong DataType `float16`. Paddle support following DataTypes: float64, complex128, float32, complex64, bool.
[Hint: Expected kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU != true, but received kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU:1 == true:1.] (at /paddle/paddle/phi/core/kernel_factory.cc:199)

Honestly, I've never seen that before. @hirwa-nshuti do you know what might be happening?
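For reference, the failure can likely be reproduced directly in paddle (a minimal sketch, assuming a CPU build; mean is the kernel named in the error above):

import paddle

# paddle's CPU `mean` kernel is not registered for float16, which
# matches the RuntimeError quoted above.
x = paddle.to_tensor([1.0, 2.0, 3.0], dtype="float16")
print(paddle.mean(x))  # raises RuntimeError on CPU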

@samthakur587 (Contributor) commented Aug 8, 2023

Hi @ZoeCD @hirwa-nshuti, I am Sam. I am also working on the var function as a paddle.Tensor frontend method (#18302), and I am also getting this error. I found that the error occurs because the backend of the var function calls the _std function (mean), and mean doesn't support the float16 dtype; it only supports the float32 and float64 dtypes.

I changed the dtype afterwards, but it still didn't work. I don't know how, but when I try the following code, it passes all the tests.

# paddle.tensor.stat.py
@with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def var(x, axis=None, unbiased=True, keepdim=False, *, out=None):
    return ivy.var(x, axis=axis, correction=int(unbiased), keepdims=keepdim, out=out)
# var (the corresponding frontend test)
@handle_frontend_test(
    fn_tree="paddle.var",
    dtype_and_x=_statistical_dtype_values(
        function="var",
        min_value=-1e04,
        max_value=1e04,
    ),
    keepdims=st.booleans(),
)
def test_paddle_var(
    *,
    dtype_and_x,
    keepdims,
    on_device,
    fn_tree,
    backend_fw,
    frontend,
    test_flags,
):
    input_dtype, x, axis, correction = dtype_and_x
    helpers.test_frontend_function(
        input_dtypes=input_dtype,
        frontend=frontend,
        test_flags=test_flags,
        fn_tree=fn_tree,
        on_device=on_device,
        backend_to_test=backend_fw,
        x=x[0],
        axis=axis,
        unbiased=bool(correction),
        keepdim=keepdims,
    )

This is the result:

[screenshot: test results, all passing]

@samthakur587 (Contributor) commented:

I think there must be a problem with the paddle backend implementation of the var function, because when I worked on the var method for the jax frontend, all the tests passed there too, except on the paddle backend.

Can you please check whether the code I provided is a correct implementation? If so, you can merge it, so that I can call this var from my paddle.Tensor method directly, and it should work there as well.

Thank you.

@Aml-Hassan-Abd-El-hamid (Contributor, Author) commented:

Hi @samthakur587
Thank you very much for your help on this.

I'm curious about something (I'm still learning about ivy and its structure): you said that the backend var for paddle is the one with the problem, so why don't we fix that instead of changing the frontend function?

I looked at other backends' var implementations (TensorFlow, Torch, JAX) to try to learn from them, and they seem much more complicated than, and quite different from, the paddle var backend; also, the merged PR that introduced it seems to have had a lot of failing tests.
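For what it's worth, a backend-side fix along the lines suggested here would presumably upcast float16 before the reduction. A hypothetical sketch (var_fp16_safe is an illustrative name, and this is not what the PR ended up doing):

import paddle

def var_fp16_safe(x, axis=None, unbiased=True, keepdim=False):
    # Hypothetical workaround: paddle has no CPU float16 kernel for the
    # mean/var reduction, so upcast to float32, reduce, then cast back.
    if x.dtype == paddle.float16:
        out = paddle.var(x.astype("float32"), axis=axis, unbiased=unbiased, keepdim=keepdim)
        return out.astype("float16")
    return paddle.var(x, axis=axis, unbiased=unbiased, keepdim=keepdim)

In the end the thread settles on the simpler route below: dropping float16 from the frontend's supported dtypes.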

@fnhirwa (Contributor) left a comment

Basically, the paddle.var function doesn't support float16, even though it is mentioned in the docs: https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/var_en.html#var

import paddle
x = paddle.to_tensor([-1., -1.], dtype="float16")
# raises RuntimeError: no CPU float16 kernel for the underlying reduction
print(paddle.var(x, axis=None, unbiased=True, keepdim=False, name=None))

This will throw a runtime error for the unsupported dtype.
Removing float16 from the supported dtypes will fix the issue; see the sketch below.
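A minimal sketch of the resulting decorator, combining this with the 2.5.1 version bump suggested earlier (the merged code may differ slightly):

@with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")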
Also, please resolve the merge conflicts.

@fnhirwa (Contributor) left a comment

LGTM!

Thanks for contributing 😊

@fnhirwa merged commit ee24083 into ivy-llc:main Aug 16, 2023
262 of 266 checks passed
@Aml-Hassan-Abd-El-hamid deleted the Aml-20867-paddle-frontend-var branch August 16, 2023 10:16
sushmanthreddy pushed a commit to sushmanthreddy/ivy that referenced this pull request Aug 17, 2023
Co-authored-by: hirwa-nshuti <hirwanshutiflx@gmail.com>