
Added asin function and test for frontend #15802

Merged
8 commits merged into ivy-llc:master on May 24, 2023

Conversation

VladZetta
Contributor

@VladZetta VladZetta commented May 23, 2023

Closes #15799.

@VladZetta
Contributor Author

Hi @hello-fri-end !
Could you please review my PR? I have two failing checks (one for intelligent-test-pr(6) and one for formatting), and I just can't find the error. Sorry for running so many tests, but I really want to track it down :)

Thanks!

Contributor Author

@VladZetta VladZetta left a comment


Changed Dtypes

@hello-fri-end
Contributor

> Hi @hello-fri-end ! Could you please review my PR? I get some checks failed […]

Hey @VladZetta ! The documentation of paddle.asin seems to be wrong, as the function throws an error when float16 is used. You can confirm this by running the following code:

import paddle

dtypes_list = [
    "int8", "int16", "int32", "int64", "uint8",
    "float16", "float32", "float64",
    "complex64", "complex128", "bool",
]
unsupported = []
supported = []
for dtype in dtypes_list:
    try:
        a = paddle.to_tensor([1.0, 2.0, 3.0], dtype=dtype)
        # insert the function you want to test on the following line
        # paddle.normal(a, a)
        paddle.asin(a)
        supported.append(a.dtype)
    except Exception:
        # use this to record the native dtype (note: if to_tensor itself
        # failed, `a` still holds the tensor from the previous iteration)
        unsupported.append(a.dtype)
        # use this to record the string dtype instead
        # unsupported.append(dtype)
print(unsupported)
print(supported)

If you remove float16 from the supported dtypes, I think the tests will pass. Let me know if you have any other questions :)

P.S.: Could you please link, in the PR description, the issue that this PR closes?
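The try/except capability probe in the snippet above is a general pattern, not something specific to Paddle. As a self-contained illustration (standard library only, since the original snippet needs paddle installed), here is the same probe applied to math.asin, sorting candidate input types into supported and unsupported buckets:

```python
import math

# Candidate inputs standing in for the dtype list in the Paddle snippet.
candidates = [1, 1.0, True, complex(1, 0), "1"]
supported, unsupported = [], []
for value in candidates:
    try:
        math.asin(value)  # probe: does asin accept this input type?
        supported.append(type(value).__name__)
    except TypeError:
        unsupported.append(type(value).__name__)
print(supported)    # type names asin accepts
print(unsupported)  # type names asin rejects
```

Here math.asin accepts int, float, and bool (bool is an int subclass) and raises TypeError for complex and str, mirroring how the Paddle probe sorts dtypes.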

@VladZetta
Contributor Author

VladZetta commented May 24, 2023

> Hi @hello-fri-end ! Could you please review my PR? […]

> Hey @VladZetta ! The documentation of paddle.asin seems to be wrong […] If you remove float16 from the supported dtypes, I think the tests will pass. […] P.S: Could you please link the issue in the description that this PR closes.

@hello-fri-end
Well, indeed float16 is not supported; I changed it, and added a link to the issue in the description.

But the tests are still failing. I don't get why you suggested using get_dtypes("valid"). Is there a way to make it return only float32 and float64? I'm just guessing.

Contributor

@hello-fri-end hello-fri-end left a comment


The changes look good to me! The errors in the CI are unrelated to your PR. Thanks for the contribution :)

@VladZetta
Contributor Author

VladZetta commented May 24, 2023

> The changes look good to me! The errors in the CI are unrelated to your PR. Thanks for the contribution :)

Thanks for helping out! So the PR will be accepted? It seems like I need to send the PR link to move forward :)

@hello-fri-end
Contributor

> Hi @hello-fri-end ! Could you please review my PR? […]

> Hey @VladZetta ! The documentation of paddle.asin seems to be wrong […]

> @hello-fri-end Well, indeed float16 is not supported, changed it! And added a link to the issue in the description.
>
> But tests are still failing. I don't get why you suggested using get_dtypes("valid"). Is there a way to make it return only float32 and float64? I'm just guessing.

get_dtypes("valid") automatically finds all the dtypes that are supported by the function, so even if the function only supports floats, that's what "valid" will return :)
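To make the dtype-restriction idea concrete: Ivy's decorators such as @with_supported_dtypes attach the supported-dtype set to the function so that helpers like get_dtypes("valid") only generate inputs the function actually accepts. The real decorators live in the ivy package; the following is only a toy stand-in (all names here are illustrative, not Ivy's API) showing the gating behavior:

```python
import functools
import math

def with_supported_dtypes(supported):
    """Toy stand-in for a dtype-restriction decorator: reject any
    call whose dtype tag is not in the supported set."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(x, dtype):
            if dtype not in supported:
                raise TypeError(
                    f"{fn.__name__} does not support dtype {dtype!r}"
                )
            return fn(x, dtype)
        wrapper.supported_dtypes = supported  # what a "valid" helper would read
        return wrapper
    return decorator

@with_supported_dtypes(("float32", "float64"))
def asin(x, dtype):
    return math.asin(x)

print(asin(0.5, "float32"))        # allowed dtype: computes asin(0.5)
print(asin.supported_dtypes)       # the set a test helper could enumerate
try:
    asin(0.5, "float16")
except TypeError as exc:
    print(exc)                     # rejected before the body runs
```

With this shape, a test generator that reads supported_dtypes never produces a float16 case in the first place, which is why the advice was to keep get_dtypes("valid") and fix the decorator rather than hard-code float32/float64.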

@hello-fri-end
Contributor

hello-fri-end commented May 24, 2023

Frontend Task Checklist

IMPORTANT NOTICE 🚨:

The Ivy Docs represent the ground truth for the task descriptions and this checklist should only be used as a supplementary item to aid with the review process.

LEGEND 🗺:

  • ❌ : Check item is not completed.
  • ✅ : Check item is ready for review.
  • 🆘 : Stuck/Doubting implementation (PR author should add comments explaining why).
  • ⏩ : Check is not applicable to function (skip).
  • 🆗 : Check item is implemented and does not require any edits.

CHECKS 📑:

    • ❌: The function/method definition is not missing any of the original arguments.
    • ❌: In case the function/method to be implemented is an alias of an existing function/method:
        • ❌: It is being declared as such by setting fun1 = fun2, rather than being re-implemented from scratch.
        • ❌: The alias is added to the existing function/method's test in the aliases parameter of handle_frontend_test/handle_frontend_method.
    • ❌: The naming of the function/method and its arguments exactly matches the original.
    • ❌: No defined argument is being ignored in the function/method's implementation.
    • ❌: In special cases where an argument's implementation should be pending due to an incomplete superset of an ivy function:
        • ❌: A ToDo comment has been added prompting to pass the frontend argument to the ivy function whose behavior is to be extended.
    • ❌: In case a frontend function is being added:
        • ❌: It is a composition of ivy functions.
        • ❌: In case the needed composition is long (using numerous ivy functions), a Missing Function Suggestion issue has been opened to suggest a new ivy function should be added to shorten the frontend implementation.
        • ❌: @to_ivy_arrays_and_back has been added to the function.
    • ❌: In case a frontend method is being added:
        • ❌: It is composed of existing frontend functions or methods.
        • ❌: If a required frontend function has not yet been added, the method may be implemented as a composition of ivy functions, making sure that:
          • ❌: @to_ivy_arrays_and_back has been added to the method.
          • ❌: A ToDo comment has been made prompting to remove the decorator and update the implementation as soon as the missing function has been added.
    • ❌: The function/method's test has been added (except in the alias case mentioned in <2>):
        • ❌: All supported arguments are being generated in handle_frontend_test/handle_frontend_method and passed to test_frontend_function/test_frontend_method.
        • ❌: The argument generation covers all possible supported values. Array sizes, dimensions, and axes adhere to the full supported set of the original function/method.
        • ❌: The available_dtypes parameter passed to the helper generating the function/method's input array is set to helpers.get_dtypes("valid"). If there are unsupported dtypes that cause the test to fail, they should be handled by adding @with_supported_dtypes/@with_unsupported_dtypes to the function/method.
    • ❌: The PR is not introducing any test failures.
        • ❌: The lint checks are passing.
        • ❌: The implemented test is passing for all backends.
    • ❌: The PR closes a Sub Task issue linked to one of the open frontend ToDo lists.
    • ❌: The function/method and its test have been added to the correct .py files corresponding to the addressed ToDo list.
    • ❌: The PR only contains changes relevant to the addressed subtask.
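Several checklist items above (a frontend function as a composition of ivy functions, decorated with @to_ivy_arrays_and_back to convert array types at the boundary) can be illustrated with a stdlib-only toy. This is a sketch, not Ivy's actual implementation: the Array class and to_arrays_and_back decorator below are stand-ins for the real frontend machinery, and only the paddle.asin signature (x, name=None) comes from the source:

```python
import functools
import math

class Array:
    """Minimal stand-in for a framework array type."""
    def __init__(self, data):
        self.data = list(data)

def to_arrays_and_back(fn):
    """Toy version of an arrays-and-back decorator: wrap plain-list
    inputs into Array before calling fn, and unwrap the Array result
    back into a plain list afterwards."""
    @functools.wraps(fn)
    def wrapper(x, *args, **kwargs):
        if isinstance(x, list):
            x = Array(x)
        result = fn(x, *args, **kwargs)
        return result.data if isinstance(result, Array) else result
    return wrapper

@to_arrays_and_back
def asin(x, name=None):
    # the frontend body is a composition over the wrapped array
    return Array(math.asin(v) for v in x.data)

print(asin([0.0, 0.5, 1.0]))  # caller passes and receives plain lists
```

The decorator keeps the conversion logic out of the function body, so the frontend asin stays a one-line composition, which is the shape the checklist asks for.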

@hello-fri-end hello-fri-end merged commit 7d190e0 into ivy-llc:master May 24, 2023
@hello-fri-end
Contributor

> Thanks for helping out! So PR will be accepted? Because it seems like I need to send the link of PR to go further :)

Merged it 🚀

Successfully merging this pull request may close these issues: asin