Commit

synced with upstream main
progs2002 committed Aug 27, 2023
2 parents 0eab6bb + 8e4eb62 commit 0695e61
Showing 320 changed files with 15,476 additions and 13,137 deletions.
31 changes: 31 additions & 0 deletions .github/pull_request_template.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,31 @@

<!--
This template will help you to have a meaningful PR, please follow it and do not leave it blank.
-->

# PR Description

<!--
If there is no related issue, please add a short description about your PR.
-->

## Related Issue

<!--
Please use this format to link other issues with their numbers: Close #123
https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword
-->

Close #

## Checklist

- [ ] Did you add a function?
- [ ] Did you add the tests?
- [ ] Did you follow the steps we provided?

### Socials:

<!--
If you have Twitter, please provide it here otherwise just ignore this.
-->
12 changes: 8 additions & 4 deletions .pre-commit-config.yaml
@@ -1,22 +1,22 @@
repos:
- repo: https://github.com/psf/black
-  rev: 23.3.0
+  rev: 23.7.0
hooks:
- id: black
language_version: python3
args:
- "--preview"
- repo: https://github.com/PyCQA/autoflake
-  rev: v2.1.1
+  rev: v2.2.0
hooks:
- id: autoflake
- repo: https://github.com/pycqa/flake8
-  rev: 6.0.0
+  rev: 6.1.0
hooks:
- id: flake8
exclude: ^.*__init__.py$
- repo: https://github.com/PyCQA/docformatter
-  rev: v1.6.3
+  rev: v1.7.5
hooks:
- id: docformatter
- repo: https://github.com/pycqa/pydocstyle
@@ -25,3 +25,7 @@ repos:
- id: pydocstyle
# Exclude everything in frontends except __init__.py, and func_wrapper.py
exclude: 'ivy/functional/(frontends|backends)/(?!.*/func_wrapper\.py$).*(?!__init__\.py$)'
+- repo: https://github.com/unifyai/lint-hook
+  rev: b9a103a9f7991fec0ed636a2bcd4497691761e78
+  hooks:
+  - id: ivy-lint
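
As a quick sanity check (a sketch for illustration, not part of the commit): pre-commit matches `exclude` patterns against file paths using Python regular expressions, so the flake8 `exclude` pattern above can be exercised directly with the `re` module. The file paths below are hypothetical examples.

``` python
import re

# The flake8 exclude pattern from the config above.
EXCLUDE = r"^.*__init__.py$"

# Hypothetical repository paths, purely for illustration.
paths = [
    "ivy/functional/ivy/__init__.py",  # matches the pattern -> skipped by flake8
    "ivy/functional/ivy/general.py",   # no match -> linted
]

for path in paths:
    excluded = re.search(EXCLUDE, path) is not None
    print(path, "->", "skipped" if excluded else "linted")
```

Note the unescaped `.` before `py` also matches a literal dot, so the pattern behaves as intended here even though `\.` would be stricter.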
16 changes: 8 additions & 8 deletions README.md
@@ -94,11 +94,11 @@
------------------------------------------------------------------------

Ivy is both an ML transpiler and a framework, currently supporting JAX,
-TensorFlow, PyTorch and Numpy.
+TensorFlow, PyTorch, and Numpy.

Ivy unifies all ML frameworks 💥 enabling you not only to **write code
that can be used with any of these frameworks as the backend**, but also
-to **convert 🔄 any function, model or library written in any of them to
+to **convert 🔄 any function, model, or library written in any of them to
your preferred framework!**

You can check out [Ivy as a transpiler](#ivy-as-a-transpiler) and [Ivy
@@ -211,7 +211,7 @@ All of the functionalities in Ivy are exposed through the
`Ivy functional API` and the `Ivy stateful API`. All functions in the
[Functional
API](https://unify.ai/docs/ivy/overview/design/building_blocks.html#ivy-functional-api)
-are **Framework Agnostic Functions**, which mean that we can use them
+are **Framework Agnostic Functions**, which means that we can use them
like this:

``` python
@@ -263,7 +263,7 @@ class Regressor(ivy.Module):

If we put it all together, we'll have something like this. This example
uses PyTorch as the backend, but this can easily be changed to your
-favorite framework, such as TensorFlow, or JAX.
+favorite frameworks, such as TensorFlow, or JAX.

``` python
import ivy
@@ -322,7 +322,7 @@ The model's output can be visualized as follows:
<img width="50%" class="dark-light" src="https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/regressor_lq.gif">
</div>

-Last but not least, we are also working on specific extension totally
+Last but not least, we are also working on specific extensions totally
written in Ivy and therefore usable within any framework, covering
topics like [Mechanics](https://github.com/unifyai/mech), [Computer
Vision](https://github.com/unifyai/vision),
@@ -1408,7 +1408,7 @@ your code in their framework of choice!
``` python
import ivy

-# a simple image classification model
+# A simple image classification model
class IvyNet(ivy.Module):
def __init__(
self,
@@ -1438,7 +1438,7 @@ class IvyNet(ivy.Module):
)

self.classifier = ivy.Sequential(
-            # since padding is "SAME", this would be image_height x image_width x output_channels
+            # Since the padding is "SAME", this would be image_height x image_width x output_channels
ivy.Linear(self.h_w[0] * self.h_w[1] * self.output_channels, 512),
ivy.GELU(),
ivy.Linear(512, self.num_classes),
@@ -1562,7 +1562,7 @@ def train(images, classes, epochs, model, device, num_classes=10, batch_size=32)
if device != "cpu":
xbatch, ybatch = xbatch.to_device("gpu:0"), ybatch.to_device("gpu:0")

-            # since the cross entropy function expects the target classes to be in one-hot encoded format
+            # Since the cross entropy function expects the target classes to be in one-hot encoded format
ybatch_encoded = ivy.one_hot(ybatch, num_classes)

# update model params
34 changes: 13 additions & 21 deletions docker/multicuda_framework_directory.py
@@ -38,39 +38,31 @@ def directory_generator(req, base="/opt/fw/"):
def install_pkg(path, pkg, base="fw/"):
if pkg.split("==")[0] if "==" in pkg else pkg == "torch":
subprocess.run(
-            (
-                f"yes |pip3 install --upgrade {pkg} --target"
-                f" {path} --default-timeout=100 --extra-index-url"
-                " https://download.pytorch.org/whl/cu118 --no-cache-dir"
-            ),
+            f"yes |pip3 install --upgrade {pkg} --target"
+            f" {path} --default-timeout=100 --extra-index-url"
+            " https://download.pytorch.org/whl/cu118 --no-cache-dir",
shell=True,
)
elif pkg.split("==")[0] if "==" in pkg else pkg == "jax":
subprocess.run(
-            (
-                f"yes |pip install --upgrade --target {path} 'jax[cuda11_local]' -f"
-                " https://storage.googleapis.com/jax-releases/jax_cuda_releases.html "
-                " --no-cache-dir"
-            ),
+            f"yes |pip install --upgrade --target {path} 'jax[cuda11_local]' -f"
+            " https://storage.googleapis.com/jax-releases/jax_cuda_releases.html "
+            " --no-cache-dir",
shell=True,
)
elif pkg.split("==")[0] if "==" in pkg else pkg == "paddle":
subprocess.run(
-            (
-                "yes |pip install "
-                f" paddlepaddle-gpu=={get_latest_package_version('paddlepaddle')}.post117"
-                f" --target {path} -f"
-                " https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html "
-                " --no-cache-dir"
-            ),
+            "yes |pip install "
+            f" paddlepaddle-gpu=={get_latest_package_version('paddlepaddle')}.post117"
+            f" --target {path} -f"
+            " https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html "
+            " --no-cache-dir",
shell=True,
)
else:
subprocess.run(
-            (
-                f"yes |pip3 install --upgrade {pkg} --target"
-                f" {path} --default-timeout=100 --no-cache-dir"
-            ),
+            f"yes |pip3 install --upgrade {pkg} --target"
+            f" {path} --default-timeout=100 --no-cache-dir",
shell=True,
)
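
One detail worth noting in `install_pkg` (an observation about the code as shown, not a change this commit makes): the guard `pkg.split("==")[0] if "==" in pkg else pkg == "torch"` parses as a conditional expression whose "true" branch is the bare package name, so any pinned spec such as `jax==0.4.14` yields a truthy string and takes the torch branch. A minimal sketch of the difference, with a parenthesized variant that compares the extracted name:

``` python
def is_torch_as_written(pkg: str):
    # Mirrors the condition in install_pkg: when the spec contains "==",
    # this returns the package *name* (a truthy string), not a comparison.
    return pkg.split("==")[0] if "==" in pkg else pkg == "torch"


def is_torch_intended(pkg: str) -> bool:
    # Parenthesized so the == "torch" comparison applies to the name.
    return (pkg.split("==")[0] if "==" in pkg else pkg) == "torch"


print(bool(is_torch_as_written("jax==0.4.14")))  # True - takes the torch branch
print(is_torch_intended("jax==0.4.14"))          # False
print(is_torch_intended("torch==2.0.1"))         # True
```

The same pattern appears in both docker helper scripts in this commit.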

42 changes: 15 additions & 27 deletions docker/multiversion_framework_directory.py
@@ -19,28 +19,22 @@ def directory_generator(req, base="/opt/fw/"):
def install_pkg(path, pkg, base="fw/"):
if pkg.split("==")[0] if "==" in pkg else pkg == "torch":
subprocess.run(
-            (
-                f"pip3 install --upgrade {pkg} --target {path} --default-timeout=100"
-                " --extra-index-url https://download.pytorch.org/whl/cpu "
-                " --no-cache-dir"
-            ),
+            f"pip3 install --upgrade {pkg} --target {path} --default-timeout=100"
+            " --extra-index-url https://download.pytorch.org/whl/cpu "
+            " --no-cache-dir",
shell=True,
)
elif pkg.split("==")[0] == "jax":
subprocess.run(
-            (
-                f"pip install --upgrade {pkg} --target {path} -f"
-                " https://storage.googleapis.com/jax-releases/jax_releases.html "
-                " --no-cache-dir"
-            ),
+            f"pip install --upgrade {pkg} --target {path} -f"
+            " https://storage.googleapis.com/jax-releases/jax_releases.html "
+            " --no-cache-dir",
shell=True,
)
else:
subprocess.run(
-            (
-                f"pip3 install --upgrade {pkg} --target {path} --default-timeout=100 "
-                " --no-cache-dir"
-            ),
+            f"pip3 install --upgrade {pkg} --target {path} --default-timeout=100 "
+            " --no-cache-dir",
shell=True,
)

@@ -63,27 +57,21 @@ def install_deps(pkgs, path_to_json, base="/opt/fw/"):
# check if version is there in this
if ver in keys[dep].keys():
subprocess.run(
-                    (
-                        "pip3 install --upgrade"
-                        f" {dep}=={keys[dep][ver]} --target"
-                        f" {path} --default-timeout=100 --no-cache-dir"
-                    ),
+                    "pip3 install --upgrade"
+                    f" {dep}=={keys[dep][ver]} --target"
+                    f" {path} --default-timeout=100 --no-cache-dir",
shell=True,
)
else:
subprocess.run(
-                    (
-                        f"pip3 install {dep} --target"
-                        f" {path} --default-timeout=100 --no-cache-dir"
-                    ),
+                    f"pip3 install {dep} --target"
+                    f" {path} --default-timeout=100 --no-cache-dir",
shell=True,
)
else:
subprocess.run(
-            (
-                f"pip3 install {keys} --target"
-                f" {path} --default-timeout=100 --no-cache-dir"
-            ),
+            f"pip3 install {keys} --target"
+            f" {path} --default-timeout=100 --no-cache-dir",
shell=True,
)
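
For context, `install_deps` appears to index a JSON version matrix as `keys[dep][ver]`, i.e. a mapping from each dependency to a table of framework version → pinned dependency version. The shape, names, and version numbers below are assumptions for illustration only, not taken from the repository:

``` python
import json
import tempfile

# Hypothetical version matrix in the shape install_deps indexes it:
# keys[dep][ver] is the dependency version pinned for framework version ver.
matrix = {
    "numpy": {
        "2.0.1": "1.24.3",  # e.g. framework 2.0.1 -> numpy 1.24.3
        "2.1.0": "1.25.2",
    }
}

# Write it to a temporary file standing in for path_to_json.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(matrix, f)
    path_to_json = f.name

with open(path_to_json) as f:
    keys = json.load(f)

# The lookup the function performs before building the pip command.
dep, ver = "numpy", "2.0.1"
if ver in keys[dep]:
    print(f"pip3 install --upgrade {dep}=={keys[dep][ver]}")
```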

2 changes: 1 addition & 1 deletion docs/compiler/compiler.rst
@@ -260,7 +260,7 @@ removed soon!
compile function.
2. **Non-framework-specific code**: As the compiler traces the function using the
functional API of the underlying framework, any piece of code inside the model that
-   is not from said framework will not be correctly registered, this includes other
+   is not from the said framework will not be correctly registered, this includes other
frameworks code (such as NumPy statements inside a torch model) or python statements
such as len().
3. **Incorrectly cached parts of the graph**: There are certain cases where compilation
8 changes: 4 additions & 4 deletions docs/compiler/transpiler.rst
@@ -56,7 +56,7 @@ Transpiler API
:param params_v: Parameters of a haiku model, as when transpiling these, the parameters
need to be passed explicitly to the function call.
:rtype: ``Union[Graph, LazyGraph, ModuleType, ivy.Module, torch.nn.Module, tf.keras.Model, hk.Module]``
-   :return: A transpiled ``Graph`` or a non-initialized ``LazyGraph``. If the object is an native trainable module, the corresponding module in the target framework will be returned. If the object is a ``ModuleType``, the function will return a copy of the module with every method lazily transpiled.
+   :return: A transpiled ``Graph`` or a non-initialized ``LazyGraph``. If the object is a native trainable module, the corresponding module in the target framework will be returned. If the object is a ``ModuleType``, the function will return a copy of the module with every method lazily transpiled.

.. py:function:: ivy.unify(*objs, source = None, args = None, kwargs = None, **transpile_kwargs,)
@@ -74,7 +74,7 @@ Transpiler API
:param transpile_kwargs: Arbitrary keyword arguments that will be passed to ``ivy.transpile``.

:rtype: ``Union[Graph, LazyGraph, ModuleType, ivy.Module]``
-   :return: A transpiled ``Graph`` or a non-initialized ``LazyGraph``. If the object is an native trainable module, the corresponding module in the target framework will be returned. If the object is a ``ModuleType``, the function will return a copy of the module with every method lazily transpiled.
+   :return: A transpiled ``Graph`` or a non-initialized ``LazyGraph``. If the object is a native trainable module, the corresponding module in the target framework will be returned. If the object is a ``ModuleType``, the function will return a copy of the module with every method lazily transpiled.

Using the transpiler
--------------------
@@ -196,7 +196,7 @@ another, at the moment we support ``torch.nn.Module`` when ``to="torch"``,
Ivy.unify
~~~~~~~~~

-As mentioned above, ``ivy.unify`` is an alias to transpilation to Ivy, so you can use it
+As mentioned above, ``ivy.unify`` is an alias for transpilation to Ivy, so you can use it
exactly in the same way to convert framework specific code to Ivy.

.. code-block:: python
@@ -230,7 +230,7 @@ still working on some rough edges. These include:
model from another framework that only takes ``kwargs`` is transpiled to keras,
you'll need to pass a ``None`` argument to the transpiled model before the
corresponding ``kwargs``.
-3. **Haiku transform with state**: As of now, we only support transpilation of
+3. **Haiku transform with state**: As of now, we only support the transpilation of
transformed Haiku modules, this means that ``transformed_with_state`` objects will
not be correctly transpiled.
4. **Array format between frameworks**: As the compiler outputs a 1-to-1 mapping of the
4 changes: 2 additions & 2 deletions docs/overview/background/ml_explosion.rst
@@ -17,7 +17,7 @@ As new future frameworks become available, backend-specific code quickly becomes
:align: center
:width: 80%

-If our desire is to provide a new framework which simultaneously supports all of the modern frameworks in a simple and scalable manner, then we must determine exactly where the common ground lies between them.
+If our desire is to provide a new framework that simultaneously supports all of the modern frameworks in a simple and scalable manner, then we must determine exactly where the common ground lies between them.

Finding common ground between the existing frameworks is essential in order to design a simple, scalable, and universal abstraction.

@@ -31,6 +31,6 @@ The functional APIs of all existing ML frameworks are all cut from the same cloth

**Round Up**

-Hopefully this has painted a clear picture of how many different ML frameworks have exploded onto the scene 🙂
+Hopefully, this has painted a clear picture of how many different ML frameworks have exploded onto the scene 🙂

Please reach out on `discord <https://discord.gg/sXyFF8tDtm>`_ if you have any questions!
12 changes: 6 additions & 6 deletions docs/overview/background/standardization.rst
@@ -6,7 +6,7 @@ Skepticism

With our central goal being to unify all ML frameworks, you would be entirely forgiven for raising an eyebrow 🤨

-“You want to try and somehow unify: TensorFlow, PyTorch, JAX, NumPy and others, all of which have strong industrial backing, huge user momentum, and significant API differences?”
+“You want to try and somehow unify: TensorFlow, PyTorch, JAX, NumPy, and others, all of which have strong industrial backing, huge user momentum, and significant API differences?”

Won’t adding a new “unified” framework just make the problem even worse…

@@ -39,11 +39,11 @@ The reason we can “build” custom computers is thanks to many essential standards

For software, `HTML <https://en.wikipedia.org/wiki/HTML>`_ enables anyone to design and host a website, `TCP/IP <https://en.wikipedia.org/wiki/Internet_protocol_suite#>`_ enables different nodes to communicate on a network, `SMTP <https://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol>`_ makes it possible to send from Gmail to Outlook, `POP <https://en.wikipedia.org/wiki/Post_Office_Protocol>`_ enables us to open this email and `IEEE 754 <https://en.wikipedia.org/wiki/IEEE_754>`_ allows us to do calculations.
These are all essential standards which our modern lives depend on.
-Most of these standards did not arise until there was substantial innovation, growth and usage and in the relevant area, making standardization a necessity so that all parties could easily engage.
+Most of these standards did not arise until there was substantial innovation, growth, and usage in the relevant area, making standardization a necessity so that all parties could easily engage.

With regards to array libraries in Python, NumPy was effectively the standard until ~2015.
Since then, array libraries have seen an explosion alongside innovations in Machine Learning.
-Given this recent time-frame, we are in a much less mature state than all of the preceding standards mentioned, most of which arose in the 70s, 80s and 90s.
+Given this recent time-frame, we are in a much less mature state than all of the preceding standards mentioned, most of which arose in the 70s, 80s, and 90s.
An effort to standardize at this stage is completely natural, and like in all other cases mentioned, this will certainly bring huge benefits to users!

The Array API Standard
@@ -64,21 +64,21 @@ Further, the consortium is sponsored by `LG Electronics <https://mail.google.com
:align: center
:width: 100%

-Together, all major ML frameworks are involved in the the Array API standard in one way or another.
+Together, all major ML frameworks are involved in the Array API standard in one way or another.
This is a promising sign in the pursuit of unification.

.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/logos/supported/frameworks.png?raw=true
:align: center
:width: 60%


-Clearly a lot of time, thought and careful attention has gone into creating the `first version <https://data-apis.org/array-api/latest/>`_ of the standard, such that it simplifies compatibility as much as possible for all ML frameworks.
+Clearly, a lot of time, thought and careful attention has gone into creating the `first version <https://data-apis.org/array-api/latest/>`_ of the standard, such that it simplifies compatibility as much as possible for all ML frameworks.

We are very excited to be working with them on this standard, and bringing Ivy into compliance, with the hope that in due time others also follow-suit!


**Round Up**

-Hopefully this has given some clear motivation for why standardization in ML frameworks could be a great thing, and convinced you that we should celebrate and encourage the foundational work by the Array API Standard 🙂
+Hopefully, this has given some clear motivation for why standardization in ML frameworks could be a great thing, and convinced you that we should celebrate and encourage the foundational work by the Array API Standard 🙂

Please reach out on `discord <https://discord.gg/sXyFF8tDtm>`_ if you have any questions!
