
Commit

Merge branch 'unifyai:main' into max_pool3d
progs2002 authored Sep 5, 2023
2 parents 637d52e + 39c8e31 commit 3ceddf1
Showing 154 changed files with 6,088 additions and 1,397 deletions.
2 changes: 1 addition & 1 deletion .devcontainer/build_multiversion/devcontainer.json
@@ -9,7 +9,7 @@
"dockerfile": "../../docker/DockerfileMultiversion",
"context": "../..",
"args": {
"fw": ["numpy/1.24.2 tensorflow/2.11.0 tensorflow/2.12.0 jax/0.4.10 jax/0.4.8"]
"fw": ["numpy/1.24.2 tensorflow/2.11.0 tensorflow/2.12.0 jax/0.4.10 jax/0.4.8"]

}
},
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -26,6 +26,6 @@ repos:
# Exclude everything in frontends except __init__.py, and func_wrapper.py
exclude: 'ivy/functional/(frontends|backends)/(?!.*/func_wrapper\.py$).*(?!__init__\.py$)'
- repo: https://github.com/unifyai/lint-hook
-rev: 27646397c5390f644a645f439535b1061b9c0105
+rev: 2ea80bc854c7f74b09620151028579083ff92ec2
hooks:
- id: ivy-lint
11 changes: 7 additions & 4 deletions docker/DockerfileMultiversion
@@ -1,9 +1,6 @@
FROM debian:buster
WORKDIR /ivy




ARG fw
ARG pycon=3.8.10
# Install miniconda
@@ -29,6 +26,7 @@ RUN apt-get update && \
apt-get install -y rsync && \
apt-get install -y libusb-1.0-0 && \
apt-get install -y libglib2.0-0 && \
+pip3 install pip-autoremove && \
pip3 install --upgrade pip && \
pip3 install setuptools==58.5.3

@@ -42,10 +40,15 @@ RUN git clone --progress --recurse-submodules https://github.com/unifyai/ivy --d

COPY /docker/multiversion_framework_directory.py .
COPY /docker/requirement_mappings_multiversion.json .
+COPY /docker/multiversion_testing_requirements.txt .

# requirement mappings directs which dependency to be installed and where
SHELL ["/bin/bash", "-c"]
-RUN python3 multiversion_framework_directory.py $fw
+RUN python3 multiversion_framework_directory.py $fw && \
+pip install -r multiversion_testing_requirements.txt && \
+pip-autoremove torch -y && \
+pip-autoremove tensorflow -y && \
+pip-autoremove jax -y


ENV PATH=/opt/miniconda/envs/multienv/bin:$PATH
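As an illustrative sketch (not part of the diff), the cleanup step added to the :code:`RUN` chain amounts to the following, assuming :code:`pip-autoremove` is installed:

.. code-block:: python

    import subprocess

    # pip-autoremove uninstalls a package together with any dependencies that
    # no other installed package needs, keeping the base image lean; the
    # versioned frameworks themselves are installed under /opt/fw instead.
    for fw in ("torch", "tensorflow", "jax"):
        subprocess.run(f"pip-autoremove {fw} -y", shell=True, check=False)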
10 changes: 5 additions & 5 deletions docker/multiversion_framework_directory.py
@@ -46,9 +46,7 @@ def install_deps(pkgs, path_to_json, base="/opt/fw/"):
# check to see if this pkg has specific version dependencies
with open(path_to_json, "r") as file:
json_data = json.load(file)
-print(json_data.keys())
for keys in json_data[fw]:
-print(keys, "here")
# check if key is dict
if isinstance(keys, dict):
# this is a dep with just one key
@@ -70,7 +68,8 @@ def install_deps(pkgs, path_to_json, base="/opt/fw/"):
)
else:
subprocess.run(
f"pip3 install {keys} --target"
"pip3 install "
f" {keys} {f'-f https://data.pyg.org/whl/torch-{ver}%2Bcpu.html' if keys=='torch-scatter' else ''} --target"
f" {path} --default-timeout=100 --no-cache-dir",
shell=True,
)
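As an illustrative sketch (not part of the diff), the amended lines compose a command like the one below for :code:`torch-scatter`, whose prebuilt wheels are indexed per torch version on data.pyg.org rather than on PyPI; the version and target path here are hypothetical:

.. code-block:: python

    ver = "2.0.1"                 # hypothetical torch version for this slot
    keys = "torch-scatter"
    path = "/opt/fw/torch/2.0.1"  # hypothetical install target
    extra = (
        f"-f https://data.pyg.org/whl/torch-{ver}%2Bcpu.html"
        if keys == "torch-scatter"
        else ""
    )
    # -> pip3 install torch-scatter -f https://data.pyg.org/whl/... --target ...
    print(f"pip3 install {keys} {extra} --target {path} --default-timeout=100 --no-cache-dir")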
@@ -79,8 +78,9 @@ def install_deps(pkgs, path_to_json, base="/opt/fw/"):
if __name__ == "__main__":
arg_lis = sys.argv

-json_path = ( # path to the json file storing version specific deps
-"requirement_mappings_multiversion.json"
+json_path = os.path.join( # path to the json file storing version specific deps
+os.path.dirname(os.path.realpath(sys.argv[0])),
+"requirement_mappings_multiversion.json",
)

directory_generator(arg_lis[1:])
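As an illustrative sketch (not part of the diff), the point of the :code:`json_path` change is to resolve the mappings file relative to the script itself rather than the current working directory:

.. code-block:: python

    import os
    import sys

    # The old bare relative name broke when the script was invoked from another
    # working directory; resolving against the script's own location makes the
    # lookup CWD-independent.
    script_dir = os.path.dirname(os.path.realpath(sys.argv[0]))
    json_path = os.path.join(script_dir, "requirement_mappings_multiversion.json")
    print(json_path)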
17 changes: 13 additions & 4 deletions docker/multiversion_testing_requirements.txt
@@ -6,10 +6,7 @@ pymongo==4.3.3
redis==4.3.4
matplotlib==3.5.2
opencv-python==4.6.0.66 # mod_name=cv2
-tensorflow-probability==0.17.0 # mod_name=tensorflow_probability
-functorch==0.1.1
scipy==1.8.1
-dm-haiku==0.0.6 # mod_name=haiku
pydriller
tqdm
coverage
@@ -20,4 +17,16 @@ colorama
packaging
nvidia-ml-py<=11.495.46 # mod_name=pynvml
paddle-bfloat
-jsonpickle
+jsonpickle
+ml_dtypes
+diskcache
+google-auth # mod_name=google.auth
+requests
+pyvis
+dill
+scikit-learn # mod_name=sklearn
+pandas
+pyspark
+autoflake # for backend generation
+snakeviz # for profiling
+cryptography
2 changes: 1 addition & 1 deletion docker/requirement_mappings_multiversion.json
@@ -1,7 +1,7 @@

{
"tensorflow": [
"tensorflow-probability"
{"tensorflow-probability":{"2.12.0":"0.20.0","2.11.0":"0.19.0"}}
],
"jax": ["dm-haiku", "flax",{"jaxlib": {"0.4.10": "0.4.10","0.4.8": "0.4.7"}}],
"numpy": ["numpy"],
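As an illustrative sketch (not part of the diff), the mapping can be read as follows: a plain string is an unpinned dependency, while a nested dict pins the dependency version to the framework version being provisioned:

.. code-block:: python

    import json

    mapping = json.loads(
        '{"tensorflow": [{"tensorflow-probability":'
        ' {"2.12.0": "0.20.0", "2.11.0": "0.19.0"}}]}'
    )

    fw, fw_version = "tensorflow", "2.11.0"
    for dep in mapping[fw]:
        if isinstance(dep, dict):
            for name, versions in dep.items():
                print(f"{name}=={versions[fw_version]}")  # tensorflow-probability==0.19.0
        else:
            print(dep)  # unpinned dependency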
4 changes: 2 additions & 2 deletions docs/index.rst
@@ -20,7 +20,6 @@
:caption: Users

overview/background.rst
-overview/design.rst
overview/related_work.rst
overview/extensions.rst

@@ -30,8 +29,9 @@
:maxdepth: -1
:caption: Contributors

-overview/deep_dive.rst
+overview/design.rst
overview/contributing.rst
+overview/deep_dive.rst


.. toctree::
9 changes: 5 additions & 4 deletions docs/overview/contributing/the_basics.rst
@@ -67,7 +67,7 @@ c. Comment on the ToDo list issue with a reference to your new issue like so:
At some point after your comment is made, your issue will automatically be added to the ToDo list and the comment will be deleted.
No need to wait for this to happen before progressing to the next stage. Don’t comment anything else on these ToDo issues, which should be kept clean with comments only as described above.

-d. Start working on the task, and create a PR as soon as you have a full or partial solution, and then directly reference the issue in the pull request by adding the following content to the description of the PR:
+d. Start working on the task, and open a PR as soon as you have a full or partial solution, when you open the PR make sure to follow the `conventional commits format <https://www.conventionalcommits.org/en/v1.0.0/>`_, and then directly reference the issue in the pull request by adding the following content to the description of the PR:

:code:`Close #Issue_number`

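For instance, a PR titled :code:`feat: add missing dtype checks to max_pool3d` would follow the conventional commits format (a hypothetical title, for illustration only).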
@@ -532,7 +532,7 @@ with PyCharm
1. Click the gutter at the executable line of code where you want to set the breakpoint or place the caret at the line and press :code:`Ctrl+F8`

.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/the_basics/getting_most_out_of_IDE/adding_breakpoint.png?raw=true
-:aligh: center
+:align: center

2. Enter into the debug mode:
1. Click on Run icon and Select **Debug test** or press :code:`Shift+F9`.
@@ -577,10 +577,11 @@ with PyCharm
1. Select the breakpoint-fragment of code, press :code:`Alt+shift+E` Start debugging!

.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/the_basics/getting_most_out_of_IDE/console_coding.png?raw=true
-:aligh: center
+:align: center


5. Using **try-except**:
-1. PyChram is great at pointing the lines of code which are causing tests to fail.
+1. PyCharm is great at pointing the lines of code which are causing tests to fail.
Navigating to that line, you can add Try-Except block with breakpoints to get in depth understanding of the errors.

.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/the_basics/getting_most_out_of_IDE/try_except.png?raw=true
8 changes: 4 additions & 4 deletions docs/overview/deep_dive/array_api_tests.rst
@@ -35,10 +35,10 @@ Instead, the change must be made to the array-api repository directly and then o
# to initialise local config file and fetch + checkout submodule (not needed everytime)
git submodule update --init --recursive
-# pulls changes from upstream remote repo and merges them
+# pulls changes from the upstream remote repo and merges them
git submodule update --recursive --remote --merge
-Sometimes you will face strange behaviour when staging changes from Ivy main repo which includes submodule updates.
+Sometimes you will face strange behaviour when staging changes from Ivy's main repo which includes submodule updates.
And this is being caused by your submodule being out of date because we update the submodule iteratively. You can get around this by running the following command:

.. code-block:: none
@@ -71,7 +71,7 @@ Using the terminal, you can run all array-api tests in a given file for a certai
# /ivy
/bin/bash -e ./run_tests_CLI/test_array_api.sh jax test_linalg
-You can change the argument with any of our supported frameworks - tensorflow, numpy, torch or jax - and the individual test function categories in :code:`ivy/ivy_tests/array_api_testing/test_array_api/array_api_tests`, e.g. *test_set_functions*, *test_signatures* etc.
+You can change the argument with any of our supported frameworks - tensorflow, numpy, torch, or jax - and the individual test function categories in :code:`ivy/ivy_tests/array_api_testing/test_array_api/array_api_tests`, e.g. *test_set_functions*, *test_signatures* etc.

You can also run a specific test, as often running *all* tests in a file is excessive.
To make this work, you should set the backend explicitly in the `_array_module.py` file, which you can find in the `array_api_tests` submodule.
@@ -160,7 +160,7 @@ You may also need to include the hypothesis import of `reproduce_failure` as sho
The test should then include the inputs which led to the previous failure and recreate it.
If you are taking the :code:`@reproduce_failure` decorator from a CI stack trace and trying to reproduce it locally, you may find that sometimes the local test unexpectedly passes.
-This is usually caused by a discrepancy in your local source code and ivy-main, so try pulling from main to sync the behaviour.
+This is usually caused by a discrepancy in your local source code and ivy-main, so try pulling from the main to sync the behaviour.

Test Skipping
-------------
4 changes: 2 additions & 2 deletions docs/overview/deep_dive/arrays.rst
@@ -25,7 +25,7 @@ Arrays
.. _`wrapped logic`: https://github.com/unifyai/ivy/blob/6a729004c5e0db966412b00aa2fce174482da7dd/ivy/func_wrapper.py#L95
.. _`NumPy's`: https://numpy.org/doc/stable/user/basics.dispatch.html#basics-dispatch
.. _`PyTorch's`: https://pytorch.org/docs/stable/notes/extending.html#extending-torch
-There are two types of array in Ivy, there is the :class:`ivy.NativeArray` and also the :class:`ivy.Array`.
+There are two types of arrays in Ivy, there is the :class:`ivy.NativeArray` and also the :class:`ivy.Array`.

Native Array
------------
@@ -44,7 +44,7 @@ All functions in the Ivy functional API which accept *at least one array argumen
The only exceptions to this are functions in the `nest <https://github.com/unifyai/ivy/blob/906ddebd9b371e7ae414cdd9b4bf174fd860efc0/ivy/functional/ivy/nest.py>`_ module and the `meta <https://github.com/unifyai/ivy/blob/906ddebd9b371e7ae414cdd9b4bf174fd860efc0/ivy/functional/ivy/meta.py>`_ module, which have no instance method implementations.

The organization of these instance methods follows the same organizational structure as the files in the functional API.
-The :class:`ivy.Array` class `inherits`_ from many category-specific array classes, such as `ArrayWithElementwise`_, each of which implement the category-specific instance methods.
+The :class:`ivy.Array` class `inherits`_ from many category-specific array classes, such as `ArrayWithElementwise`_, each of which implements the category-specific instance methods.

Each instance method simply calls the functional API function internally, but passes in :code:`self._data` as the first *array* argument.
`ivy.Array.add`_ is a good example.
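As an illustrative sketch (not Ivy's actual source), the instance-method pattern described above boils down to the following:

.. code-block:: python

    def add(x1, x2):
        # stand-in for the functional ivy.add
        return x1 + x2


    class ArrayWithElementwise:
        # category-specific mixin that ivy.Array inherits from
        def add(self, x2):
            return add(self._data, x2)


    class Array(ArrayWithElementwise):
        def __init__(self, data):
            self._data = data


    print(Array(1.0).add(2.0))  # 3.0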
2 changes: 1 addition & 1 deletion docs/overview/deep_dive/backend_setting.rst
Original file line number Diff line number Diff line change
Expand Up @@ -51,7 +51,7 @@ It's helpful to look at an example:
In the last example above, the moment any backend is set, it will be used over the `implicit_backend`_.
However when the current backend is set to the previous using the :func:`ivy.previous_backend`, the `implicit_backend`_ will be used as a fallback, which will assume the backend from the last run.
-While the `implicit_backend`_ functionality gives more freedom to the user, the recommended way of doing things would be set the backend explicitly.
+While the `implicit_backend`_ functionality gives more freedom to the user, the recommended way of doing things would be to set the backend explicitly.
In addition, all the previously set backends can be cleared by calling :func:`ivy.unset_backend`.

Dynamic Backend Setting
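As a short usage sketch (not part of the diff; assumes :code:`ivy` and the torch backend are installed), explicit backend setting looks like this:

.. code-block:: python

    import ivy

    ivy.set_backend("torch")        # explicit, as recommended above
    x = ivy.array([1.0, 2.0, 3.0])  # created with the torch backend
    ivy.previous_backend()          # pop back to the prior backend, if any
    ivy.unset_backend()             # clear any remaining backend state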
26 changes: 13 additions & 13 deletions docs/overview/deep_dive/building_the_docs_pipline.rst
@@ -84,7 +84,7 @@ To build the docs through docker you use this command:
docker run -v /path/to/project:/project unifyai/doc-builder
-You can as well add options described in the :ref:`The convenience script` section.
+You can also add options described in the :ref:`The convenience script` section.

.. code-block:: bash
@@ -94,7 +94,7 @@ How Ivy's docs is structured
-----------------------------

Looking at `Ivy docs <https://github.com/unifyai/ivy/tree/main/docs>`_, we can see
-that it structured like this:
+that it is structured like this:

.. code-block:: bash
@@ -148,7 +148,7 @@ Here is a segment of the file:
You can see here different reStructuredText directives. The first one is ``include``,
which simply includes the main README file of the project, this is a good place if you
-want to make the rendered docs looks different from the README, or simply include it as
+want to make the rendered docs look different from the README, or simply include it as
is.

The second directive is ``toctree``, which is used to create a table of contents. The
@@ -167,15 +167,15 @@ The last directive is ``autosummary``, which is used to automatically generate a
of contents for a module, as well as the documentation itself automatically by
discovering the docstrings of the module. This is a custom directive, built on the original
`autosummary`_
-extension. We will explain in details how did we change it, in :ref:`Custom Extensions`.
+extension. We will explain in detail how did we change it, in :ref:`Custom Extensions`.

``partial_conf.py``
~~~~~~~~~~~~~~~~~~~

This is a partial `Sphinx configuration file`_. Which is being imported in the
`conf.py <https://github.com/unifyai/doc-builder/blob/main/docs/conf.py#L150>`_,
it's used to customize options that are specific to the project being documented.
-While importing common configuration such as the theme, the extensions, etc in the
+While importing common configurations such as the theme, the extensions, etc in the
original ``conf.py``.

This is a part of ``partial_conf.py``:
@@ -202,7 +202,7 @@ customize the title of the table of contents for each module.
This is an optional file, which is executed before the docs are built. This is useful
if you need to install some dependencies for the docs to build. In Ivy's case, we
install ``torch`` then ``torch-scatter`` sequentially to avoid a bug in
-``torch-scatter``'s setup. And if we want to do any changes to the docker container
+``torch-scatter``'s setup. And if we want to make any changes to the docker container
before building the docs.

Custom Extensions
Expand All @@ -222,7 +222,7 @@ This extension is a modified version of the original `autosummary`_, which is us
discover and automatically document the docstrings of a module. This is done by
generating "stub" rst files for each module listed in the ``autosummary`` directive,
you can add a template for these stub files using the ``:template:`` option. Which can
-inturn include the ``autosummary`` directive again, recursing on the whole module.
+in turn include the ``autosummary`` directive again, recursing on the whole module.

Unfortunately, the original ``autosummary`` extension is very limited, forcing you to
have a table of contents for each modules.
@@ -233,9 +233,9 @@ We'll go through each option or configuration value added to the original ``auto
""""""""""""""""

As the name suggests, the original behavior of ``autosummary`` is to generate a table
-of contents for each module. And it generate stub files only if ``:toctree:`` option is
+of contents for each module. And it generates stub files only if the ``:toctree:`` option is
specified. As we only need the ``toctree`` this option hides the table of contents, but
-it require the ``:toctree:`` option to be specified.
+it requires the ``:toctree:`` option to be specified.

``discussion_linker``
~~~~~~~~~~~~~~~~~~~~~
@@ -250,7 +250,7 @@ The directive is included like this:
.. discussion-links:: module.foo
-First it will look for ``discussion_channel_map`` configuration, in Ivy it looks like
+First it will look for the ``discussion_channel_map`` configuration, in Ivy it looks like
this:

.. code-block:: python
@@ -264,7 +264,7 @@ this:
The key is the module name, if it's not found the ``discussion-link`` directive will
render an empty node. The first and only value in the list is the channel id of the
-module, it is in a list as we used to have forums as will but they are removed now.
+module, it is in a list as we used to have forums as well but they are removed now.

The output string is generated by a series of replaces on template strings, which are
customizable using the config. To understand how it works, let's look at the default
@@ -350,8 +350,8 @@ in ``ivy.functional.ivy``, it will replace the module to ``ivy.`` instead of

It's used instead of simply using ``ivy.<data atribute>`` because data attributes have
no ``__doc__`` atribute, instead docs are discovered by parsing the source code itself.
-So for Sphinx to find the required docs, it need to be supplied the full module name,
-then using ``autoivydata`` directive will replace the module name to ``ivy.``.
+So for Sphinx to find the required docs, it needs to be supplied the full module name,
+then using the ``autoivydata`` directive will replace the module name to ``ivy.``.

Please refer to the `auto documenter guide in sphinx documentation
<https://www.sphinx-doc.org/en/master/development/tutorials/autodoc_ext.html>`_ for more
