
Add lcm function and its tests to NumPy Frontend #15711

Merged: 1 commit into ivy-llc:master on May 22, 2023

Conversation

shauryabhawaker (Contributor)

Closes #15700
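
For context, a minimal sketch of what such a NumPy-frontend addition typically looks like in ivy (decorator and module names assumed from the repo's conventions; the merged diff may differ):

    import ivy
    from ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back

    # Hypothetical sketch, not the merged code: delegate to ivy.lcm,
    # which dispatches to the active backend's implementation.
    @to_ivy_arrays_and_back
    def lcm(x1, x2, /):
        return ivy.lcm(x1, x2)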

@ivy-leaves added the NumPy Frontend label (Developing the NumPy Frontend, checklist triggered by commenting add_frontend_checklist) on May 20, 2023
@iamjameskeane (Contributor) left a comment


LGTM!

Test failures are due to code elsewhere in the project.

Thanks! :)

@iamjameskeane merged commit d26af49 into ivy-llc:master on May 22, 2023
@shauryabhawaker (Contributor, Author)

@jkeane508 The first error is due to another function,

available_dtypes=helpers.get_dtypes("int"),

Could the error be due to this?

This is what I'm getting in the CI test results, but it only occurs for the paddle backend.
________ test_numpy_lcm[cpu-ivy.functional.backends.paddle-False-False] ________
ivy_tests/test_ivy/test_frontends/test_numpy/test_mathematical_functions/test_miscellaneous.py:680: in test_numpy_lcm
fn_tree="numpy.lcm",
ivy_tests/test_ivy/helpers/hypothesis_helpers/array_helpers.py:490: in dtype_and_values
available_dtypes = draw(available_dtypes)
/opt/miniconda/envs/multienv/lib/python3.8/site-packages/hypothesis/internal/conjecture/data.py:950: in draw
return strategy.do_draw(self)
/opt/miniconda/envs/multienv/lib/python3.8/site-packages/hypothesis/strategies/_internal/core.py:1510: in do_draw
return self.definition(data.draw, *self.args, **self.kwargs)
ivy_tests/test_ivy/helpers/hypothesis_helpers/dtype_helpers.py:173: in get_dtypes
valid_dtypes = set(retrieval_fn(test_globals.CURRENT_BACKEND(), kind))
ivy_tests/test_ivy/helpers/hypothesis_helpers/dtype_helpers.py:35: in _get_fn_dtypes
return test_globals.CURRENT_RUNNING_TEST.supported_device_dtypes[framework.backend][
E KeyError: 'int'
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.75.3', b'AA==') as a decorator on your test case
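
As an aside, the reproduce_failure hint at the end of the log is applied by temporarily decorating the test. A minimal sketch (only the version/payload pair is taken from the log; the strategy and test body below are placeholders):

    from hypothesis import given, reproduce_failure, strategies as st

    # Temporarily pin the failing example reported above; remove once fixed.
    @reproduce_failure('6.75.3', b'AA==')
    @given(st.integers())
    def test_example(x):
        assert isinstance(x, int)

The KeyError: 'int' itself points at the dtype-kind string: ivy's test helpers generally expect kind names such as "integer" rather than "int", so helpers.get_dtypes("integer") may be what was intended here (an assumption, not confirmed by the diff).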

G544 added a commit to G544/ivy that referenced this pull request on May 24, 2023. The commit message lists the following changes:
* temp changes to the handle_out_argument decorator, torch.linear, and the to_wrap function for testing the out argument of mixed functions

* Update func_wrapper.py

* Update layers.py

* Update layers.py

* Update data_type.py

* Update data_type.py

* fix affine grid implementation

* fix linting

* ensure ivy compositional functions are working

* affine grid working with 4d output and align corners = true, fixed linting

* add align corners=True mode, fix linting

* updated the handle_out_in_ivy flag in the handle_out_argument decorator to check if the module of the function is ivy.functional.ivy, in which case the function can handle the out argument on its own
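
  A rough sketch of the module check this commit describes (helper name hypothetical; only the ivy.functional.ivy test is taken from the message):

    def _handles_out_natively(fn):
        # Per the commit above: functions defined in ivy.functional.ivy
        # are assumed to manage the out argument themselves.
        return getattr(fn, "__module__", "").startswith("ivy.functional.ivy")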

* added noqa E501 to the lambda function of interpolate to silence flake8 line too long warning

* removed the dependency of mixed_function attribute from _wrap_function

* removed the mixed function attribute from all mixed functions

* changed is_compositional_function to is_compos_fn inside handle_out_argument decorator

* removed the mixed_function attribute check from _wrap_function:

* updated wrap_function to add to_native_arrays_and_back decorator to mixed functions in a clean way

* feat: adds training and dataformat flags

* docs: updated bn docstring

* docs: redundant docstring removed

* docs: default dtype added

* Added pytorch frontend frexp (ivy-llc#14187)

Also changed return type of torch backend for frexp to be consistent with other frameworks

* fixed frontend test file names to avoid duplication

* remove outdated decorator from torch linspace

* Fixing dtype promotions for paddle's scatter_nd

* Updated the versions in supported dtypes (ivy-llc#15685)

This commit aligns with the recent update to the unsupported dtypes decorator, which now receives the latest framework version. The next milestone should be to update the values, as there might be support for the mentioned dtypes in the latest version.

* abs (ivy-llc#15644)

Co-authored-by: nassimberrada

* Update tensor.py

adding not_equal PyTorch Frontend function

* added std instance method to pytorch frontend (ivy-llc#15003)

Co-authored-by: Felix Hirwa Nshuti <hirwanshutiflx@gmail.com>

* removed unnecessary `with_unsupported_dtypes` decorators in experimental functions (ivy-llc#15706)

* Update test_reduction_ops.py

tests.

* fix masked array tests (ivy-llc#15708)

* 🤖 Lint code

* prevent unnecessary promotion in dropout

* remove affine_grid implementation

* various fixes for experimental activation API

* add gelu activation function for the mx backend

* add softmax, sigmoid, relu and leaky_relu activation funcs in the mxnet backend

* add softplus with handling for beta and threshold in mx backend

* remove get and deserialize functions from mx backend, to be handled by functional/ivy

* fix return statement in activations

* Added instance method tofile(), tested it (ivy-llc#15020)

* Added instance method tofile(), tested it

* Added ivy.Array instance method to_file()

* instance method to_file()

* instance method to_file()

* instance method to_file()

* add sum function to mx backend considering the superset behaviour

* Add addcmul and addcmul_ to torch tensor (ivy-llc#14987)

* Add addcmul and addcmul_ to torch tensor

* update versioning for dtype spec

---------

Co-authored-by: Ishtiaq Hussain <53497039+Ishticode@users.noreply.github.com>

* add floor, ceil and floor divide to mx back in elementwise submod

* add prod function to the mxnet backend with dtype casting

* implement copy_array for mx backend

* add comparison functions (greater, greater_equal, less and less_equal) in the elementwise sub for mx

* remove det from the wrongly placed general submod and implement it in linalg for mx

* add placeholder for the gather function to avoid recursion error in mx back

* various fixes for experimental creation API and revert ivy-llc#14999

* added support for 1d input for paddle meshgrid, and a fix for expand_dims when axis is an empty list

* add log10p, log2, and logaddexp function for mx

* add multiply, negative, square and power function added to mx

* add subtract to mx backend

* add divide function for mx backend

* add squeeze func for mx

* add clip function for mx

* return False for tpu_is_available in mx

* implement num_gpus in mx backend in device submod

* add astype dtype casting for mx

* implement is_variable based on mx NDArray as it doesn't require explicit req_grad flag. Implemented expm1

* cleanup general submodule in mx and remove duplicated linalg funcs

* Update stat.py

* Update __init__.py

* Update test_paddle_stat.py

* 🤖 Lint code

* Add paddle.tensor.math Mathematical Functions to PaddlePaddle Frontend  (ivy-llc#15729)

* cleanup the experimental layers submodule by removing the creation funcs and placing the placeholder for layer functions in mx back

* move triu_indices function to stable api from experimental
tested with over 500 examples

* fix the passing of the binary_cross_entropy_with_logits in static container method

* jax.random.shuffle, jax.random.exponential (ivy-llc#15598)

* move lcm function from experimental to stable api

* add the missing binary_cross_entropy_with_logits method

* Update tensor.py

Changed instance names to self, other and added *

* Update test_reduction_ops.py

removed the test of not_equal frontend function from this file

* Update test_tensor.py

added a test for function not_equal with a @handle_frontend_test decorator

* Update test_tensor.py

adjusted the test for not_equal

* Update tensor.py

changed out=out to out=None for the not_equal function

* simplify _type_conversion functions in jax nn front

* slightly reduce and improve the code for _batch_promotion in jax front nn
collapse condition and isinstance

* reduce the conditionals to one return statement in _len and one_hot for jax front nn funcs

* remove the extra variable ret and return the result directly for one_hot function in jax front nn

* fix the logic statement in polymul to take the a2 input into account

* change the name of test helper that was detected as a test

* Implemented cummax on backend (ivy-llc#14240)

Co-authored-by: Ziad Amerr <70425741+ZiadAmerr@users.noreply.github.com>

* simplify DeviceArray init to avoid setting _ivy_array without negation

* update dtype spec version for cummax in torch backend to 2.0.1

* delete the models submod added in f2f588d
to avoid intelligent test failures

* update and improve dtype checks in cummax using an in-tuple check

* fix tests for repeat by updating the function unsupported dtype spec in torch backend

* Update jax front nn activation test to exclude complex dtype tests until complex input for activations has been settled upon and implemented in the task https://trello.com/c/QMZMexxE/1851-complex-input-activations

* fix for paddle abs 0d input

* updated the mixed functions section of the deepdive to add information about the partial_mixed_handler attribute

* updated the out argument handling deep dive of mixed functions

* added information about partial_mixed_handler attribute to the superset section of the deepdive

* 🤖 Lint code

* adding the is_complex instance method

* Fixed matrix rank for paddle backend (ivy-llc#15745)

* adding the is_complex instance method

* adding the is_complex instance method

* fixed failing test when dtype=float16

* imag is not ufunc (ivy-llc#15749)

* Update test_tensor.py

added on_device

* docs: updated links in reformat_checklist

* Update tensor.py

ne = not_equal

* Update test_tensor.py

added test for not_equal function

* Update tensor.py

* Update test_tensor.py

* Update test_tensor.py

* Update test_tensor.py

* Update test_tensor.py

* update nanvar in jax front discarding redundant variables and simplifying the logic with passing tests

* remove handling of axis as a list in jnp frontend nanvar, as the native jnp.nanvar throws ValueError when axis is passed as a list. In the frontends, we don't add functionality that the native framework doesn't provide.
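
  To illustrate the native behaviour this commit defers to (a sketch assuming jax is installed; the list-axis error is as described in the message):

    import jax.numpy as jnp

    x = jnp.ones((2, 3))
    jnp.nanvar(x, axis=(0, 1))    # tuple axis is accepted
    # jnp.nanvar(x, axis=[0, 1])  # list axis raises ValueError, per the commit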

* fix test_jax_numpy_add where pos-only args were passed as kw when the frontend is set as the backend in the frontend test function

* Add lcm function and its tests to NumPy Frontend (ivy-llc#15711)

Co-authored-by: jkeane508 <jameskeane508@gmail.com>

* add missing frontend aliases

* revert commit breaking ivy.asarray

* Frontend PR checklist (ivy-llc#15753)

* Fix torch.divide tests (ivy-llc#15734)

* fix jax.numpy frontend tests for arctan, arctan2, cos and cosh by ensuring the pos only args are specified in accordance with the signature in native

* fix test_jax_numpy_inner for torch specifying unsupported float16

* fix several tests in jax front for trig math function by specifying pos only args as in the native fw

* fix test_jax_numpy_gcd for paddle backend specifying uint8 as unsupported

* add safety factors to test_jax_numpy_lcm to avoid inconsistent overflow behaviour between different backends and specify unsupported int8 and uint8 for paddle backend

* Add copysign instance method to PyTorch Frontend (ivy-llc#15754)

Co-authored-by: paulaehab <eng.paulaehab@gmail.com>

* fix jax numpy tests for ceil, float_power, deg2rad, radians, exp2 by avoiding pos-only args passed as kw when frontend is set as backend and matching the signature with native jax.numpy funcs

* slight shortening in nanmin and nanmax code and removing the needless variable ret_nanmean

* some improvements in the code for average and bincount in jax frontend

* Fix-raw_ops.test_tensorflow_FFT (ivy-llc#15767)

* fix test_jax_numpy_outer by ensuring frontend arrays are converted to ivy arrays

* fix tests for several math functions in jax front by avoiding pos args passed as kw when frontend is set as backend in frontend function testing

* fix several log related jax front tests by correct positioning of args when front is set as backend in frontend function testing

* fix test_jax_numpy_around by removing the incorrect dim expansion in frontend implementation

* Update test_tensor.py

testing

* Update test_tensor.py

testing

* Update test_tensor.py

* Update test_tensor.py

* fixing lint

* fixing errors

* testing

* updating tensor.py

* Add paddle.tensor.creation.arange to PaddlePaddle frontend (ivy-llc#15766)

* updating tensor.py

* updating tensor.py

* updating tensor.py

* fix jax.nn.softmax with paddle backend

* 🤖 Lint code

* adding an exception in `conv_general_dilated` in tf backend for `feature_group_count`

* Fixes Typo (ivy-llc#15726)

Co-authored-by: hmahmood24 <hmahmood.bee16@gmail.com>

* Adds is_finite to jax frontend api with tests (ivy-llc#15476)

Co-authored-by: hmahmood24 <hmahmood.bee16@gmail.com>

* ensure small/large safety factors and tolerance in test_jax_numpy_divide to avoid result mismatches in torch

* adding max pool 1d in paddle backend (ivy-llc#15790)

Co-authored-by: MahmoudAshraf97 <hassouna97.ma@gmail.com>

* fixing pydocstyle error

* Update codeStyleConfig.xml

* Update ivy.iml

* Update misc.xml

* Update vcs.xml

* revert adding extra tf frontend modules

* Restructuring TensorFlow frontend function flatten (ivy-llc#15090)

Co-authored-by: Shehryar Tariq <sherrytst30@gmail.com>

* remove unnecessary jax frontend module after compiler wrapping improvements

* Added asin function and test for frontend

* Add Test diff profile for Hypothesis, remove old flag.

* Added new line

* Add frontend checklist

* Add as_list instance method to ivy.Shape

* fix several tests in jax.numpy front based on pos only args not read correctly when frontend is set as backend in frontend function test and to match the signature with native jax

* resize Manipulation routine for Numpy Frontend (ivy-llc#13829)

* changes to the resize frontend

* added inplace update

* Update changing_array_shape.py

* tried fixing test api

* numpy implementation of resize

* Update test_changing_array_shape.py

* converted the output to ivy array

* converted the numpy implementation into ivy

* added pre commit

* commit

* Revert "added pre commit"

This reverts commit e8966bf.

* reformatting

* lint formatting

* lint

* Update test_changing_array_shape.py

* fixed lint issue and gitignore

* Update test_changing_array_shape.py

* removed the .gitignore

* Changed Supported Data types

* Typo

* Update actions version

* Changed Paddle Version in data types

* fn_tree changed path to function

* Yaml syntax changed

* fix test_torch_addr by excluding outer in the paddle backend to have manual broadcasting applied in the elementwise helper, as it flattens inputs and doesn't require broadcasting

* updating tensor.py

* updating tensor.py

* updating tensor.py

* Changed get_dtypes

* fixing `ivy.is_native_dtype` by checking its type

* Add max pool1d to torch frontend (ivy-llc#15496)

Failing jax backend due to issue with jax's backend implementation not supporting data_format flag

* Deleted float16

* fix torch matmul out arg (ivy-llc#15813)

* fixed tf scatter_nd error 'no attribute dtype' occurring when updates is a single scalar or bool

* update test_jax_nn_sigmoid to exclude tests for integers, since neither the native jax framework nor the current implementations in our backend support int for sigmoid (so our frontend doesn't need to either). Tests for complex types will be needed; a todo note is added.

* fix test_jax_nn_log_sigmoid for the paddle backend by ensuring calls are made with ivy wrapping, as calling ivy.functional.backends.paddle.<function> misses the wrapping and array-to-native conversions. log_sigmoid uses softplus internally
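
  A small sketch of the distinction this commit relies on (assumes the paddle backend is installed):

    import ivy

    ivy.set_backend("paddle")
    x = ivy.asarray([-1.0, 0.0, 1.0])
    ivy.softplus(x)  # wrapped call: decorators run and arrays are converted
    # ivy.functional.backends.paddle.softplus(x)  # raw call skips that wrapping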

* Refactored jax's reduce_window to fix bugs and improve efficiency (ivy-llc#15803)

* Remove multi-version code from testing pipeline (ivy-llc#15801)

* Remove multi-version code from `function_testing.py`

* Remove multi-version code from `globals.py` and remove `config.py`

* Remove multi-version code from `test_parameter_flags.py`

* Reverted tf scatter_nd commit for single scalar or bool updates as it's not an intended use case

* Fixed pad tf error when trying to assign scalar or bool with scatter_nd

* fix test_jax_nn_hard_tanh dtype mismatches and remove the ivy.asarray conversion, which should be part of wrapping. Add a todo note to run tests for all valid jax types for tanh, as the native does

* minor fixes to paddle creation functions

* fix the broadcasting issue in paddle backend in inner (ivy-llc#15831)

* various fixes to backend experimental statistical fns

* adding `filter_format` argument in `conv_general_dilated`

* Fix ContextManager (ivy-llc#15833)

* Update set_backend to return global ivy.

* Update ContextManager to return global ivy.

* Incorporate Paddle Backend in the CI [skip ci]

* Update CI Documentation [skip ci]

---------

Co-authored-by: hello-fri-end <shahanwaar.khalid8@gmail.com>
Co-authored-by: Ved Patwardhan <54766411+VedPatwardhan@users.noreply.github.com>
Co-authored-by: Carlos Sandoval <71140233+clsandoval@users.noreply.github.com>
Co-authored-by: Rishab Mallick <rishabmallick6@gmail.com>
Co-authored-by: Anwaar Khalid <anwaar@lets-unify.ai>
Co-authored-by: Tejas Jogi <tdjogi010@gmail.com>
Co-authored-by: MahmoudAshraf97 <hassouna97.ma@gmail.com>
Co-authored-by: Felix Hirwa Nshuti <hirwanshutiflx@gmail.com>
Co-authored-by: Oluwamayowa Olawumi <mayor.michael19@gmail.com>
Co-authored-by: Moustafa Ahmed <81536586+mustafaa7med@users.noreply.github.com>
Co-authored-by: William Mwine <75728189+willsketch@users.noreply.github.com>
Co-authored-by: ivy-branch <ivy.branch@lets-unify.ai>
Co-authored-by: unknown <clsandoval@up.edu.ph>
Co-authored-by: Ishtiaq Hussain <53497039+Ishticode@users.noreply.github.com>
Co-authored-by: cressidasuphina <127023793+cressidasuphina@users.noreply.github.com>
Co-authored-by: Binh Le <binhnd234@gmail.com>
Co-authored-by: Santhosh Kakarla <65003974+KSANTHOSH200@users.noreply.github.com>
Co-authored-by: Shubham <99467100+TheGreatShubham@users.noreply.github.com>
Co-authored-by: Nouran Khaled Soliman <nouran.youssef99@eng-st.cu.edu.eg>
Co-authored-by: TVS Harish Naidu <tvsharish@gmail.com>
Co-authored-by: Ziad Amerr <70425741+ZiadAmerr@users.noreply.github.com>
Co-authored-by: Bloemenstraat <del.crabman@gmail.com>
Co-authored-by: akshatvishu <akshatnayak197@gmail.com>
Co-authored-by: Muhammad Saeed <mohammud.saeed.batekh@gmail.com>
Co-authored-by: shauryabhawaker <133897848+shauryabhawaker@users.noreply.github.com>
Co-authored-by: jkeane508 <jameskeane508@gmail.com>
Co-authored-by: Matt <matthew-barrett@hotmail.co.uk>
Co-authored-by: AnnaTz <111577222+AnnaTz@users.noreply.github.com>
Co-authored-by: Paula Ehab Alfy <42894502+paulaehab@users.noreply.github.com>
Co-authored-by: Amin Mraidi <79520397+NotEnoughSnow@users.noreply.github.com>
Co-authored-by: paulaehab <eng.paulaehab@gmail.com>
Co-authored-by: mustafaa7med <mustafaa7med@gmail.com>
Co-authored-by: manavshah-boop <51806434+manavshah-boop@users.noreply.github.com>
Co-authored-by: theRealBird <75845929+theRealBird@users.noreply.github.com>
Co-authored-by: Shehryar Tariq <sherrytst30@gmail.com>
Co-authored-by: Sarvesh Kesharwani <sarveshkumar5513@gmail.com>
Co-authored-by: hmahmood24 <hmahmood.bee16@gmail.com>
Co-authored-by: Simranjeet Singh <sjsingh9090@gmail.com>
Co-authored-by: Muhammad Abdullah <69182153+abdulasiraj@users.noreply.github.com>
Co-authored-by: Jono Tassia <jonotassia@gmail.com>
Co-authored-by: VladZetta <rypsa2012@gmail.com>
Co-authored-by: CatB1t <skytedits@gmail.com>
Co-authored-by: Ngo Viet Hoai Bao <xoiga123@outlook.com.vn>
Co-authored-by: Haris Mahmood <70361308+hmahmood24@users.noreply.github.com>
Co-authored-by: Adewole Adekunle <32862005+kunley2@users.noreply.github.com>
Co-authored-by: Heitor Felix <heitorcfelix@gmail.com>
Co-authored-by: JG <67888047+juliagsy@users.noreply.github.com>
Co-authored-by: Mostafa Hani <71686115+CatB1t@users.noreply.github.com>
Co-authored-by: RashulChutani <rashul.chutani@gmail.com>
Labels: NumPy Frontend (Developing the NumPy Frontend, checklist triggered by commenting add_frontend_checklist)
Projects: None yet
Development: successfully merging this pull request may close these issues: lcm
4 participants