feat!: this is now an automatic differentiation library
c0dearm committed May 24, 2022
1 parent 01ca605 commit 99846e2
Showing 21 changed files with 987 additions and 471 deletions.
60 changes: 56 additions & 4 deletions .github/workflows/ci.yml
@@ -1,4 +1,4 @@
-on: [push, pull_request]
+on: [push]
name: CI

jobs:
@@ -25,6 +25,8 @@ jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
+    env:
+      AF_VER: 3.8.0
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
@@ -36,11 +38,35 @@ jobs:
          toolchain: stable
          override: true

+      - name: Cache ArrayFire
+        uses: actions/cache@v1
+        id: arrayfire
+        with:
+          path: afbin
+          key: ${{ runner.os }}-af-${{ env.AF_VER }}
+
+      - name: Download ArrayFire
+        # Only download and cache ArrayFire if it is not already in the cache
+        if: steps.arrayfire.outputs.cache-hit != 'true'
+        run: |
+          wget --quiet http://arrayfire.s3.amazonaws.com/${AF_VER}/ArrayFire-v${AF_VER}_Linux_x86_64.sh
+          chmod +x ./ArrayFire-v${AF_VER}_Linux_x86_64.sh
+          mkdir afbin
+          ./ArrayFire-v${AF_VER}_Linux_x86_64.sh --skip-license --exclude-subdir --prefix=./afbin
+          rm ./afbin/lib64/libcu*.so*
+          rm ./afbin/lib64/libafcuda*.so*
+          rm ./ArrayFire-v${AF_VER}_Linux_x86_64.sh
+
+      - name: Export ArrayFire paths
+        run: |
+          echo "AF_PATH=${GITHUB_WORKSPACE}/afbin" >> $GITHUB_ENV
+          echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${AF_PATH}/lib64" >> $GITHUB_ENV

      - name: Run cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
-          args: --all
+          args: --all-features

  lints:
    name: Lints
@@ -67,11 +93,13 @@ jobs:
        uses: actions-rs/clippy-check@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
-          args: --all --all-features -- -D warnings
+          args: --all-features -- -D warnings

  coverage:
    name: Coverage
    runs-on: ubuntu-latest
+    env:
+      AF_VER: 3.8.0
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
@@ -83,10 +111,34 @@ jobs:
          toolchain: stable
          override: true

+      - name: Cache ArrayFire
+        uses: actions/cache@v1
+        id: arrayfire
+        with:
+          path: afbin
+          key: ${{ runner.os }}-af-${{ env.AF_VER }}
+
+      - name: Download ArrayFire
+        # Only download and cache ArrayFire if it is not already in the cache
+        if: steps.arrayfire.outputs.cache-hit != 'true'
+        run: |
+          wget --quiet http://arrayfire.s3.amazonaws.com/${AF_VER}/ArrayFire-v${AF_VER}_Linux_x86_64.sh
+          chmod +x ./ArrayFire-v${AF_VER}_Linux_x86_64.sh
+          mkdir afbin
+          ./ArrayFire-v${AF_VER}_Linux_x86_64.sh --skip-license --exclude-subdir --prefix=./afbin
+          rm ./afbin/lib64/libcu*.so*
+          rm ./afbin/lib64/libafcuda*.so*
+          rm ./ArrayFire-v${AF_VER}_Linux_x86_64.sh
+
+      - name: Export ArrayFire paths
+        run: |
+          echo "AF_PATH=${GITHUB_WORKSPACE}/afbin" >> $GITHUB_ENV
+          echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${AF_PATH}/lib64" >> $GITHUB_ENV

      - name: Run cargo-tarpaulin
        uses: actions-rs/tarpaulin@v0.1
        with:
-          args: '--exclude-files example mushin_derive -- --test-threads 1'
+          args: '-- --test-threads 1'

      - name: Upload to codecov.io
        uses: codecov/codecov-action@v1
25 changes: 19 additions & 6 deletions Cargo.toml
@@ -1,6 +1,19 @@
-[workspace]
-members = [
-    "mushin",
-    "mushin_derive",
-    "example",
-]
+[package]
+name = "mushin"
+version = "0.2.0"
+authors = ["Aitor Ruano <codearm@pm.me>"]
+edition = "2021"
+description = "Computational graphs with reverse automatic differentiation on the GPU"
+homepage = "https://github.com/c0dearm/mushin"
+repository = "https://github.com/c0dearm/mushin"
+readme = "README.md"
+keywords = ["machine-learning", "automatic", "differentiation", "cuda", "opencl", "compute", "gpu", "cpu"]
+categories = ["algorithms", "mathematics", "science"]
+license = "MIT/Apache-2.0"
+
+[badges]
+maintenance = { status = "actively-developed" }
+codecov = { repository = "c0dearm/mushin" }
+
+[dependencies]
+arrayfire = "3.8"
83 changes: 23 additions & 60 deletions README.md
@@ -11,85 +11,48 @@

## Description

-Mushin allows the developer to build neural networks at compile time, with preallocated arrays of well-defined sizes. This has mainly three very important benefits:
+**Mushin** is to `Rust` what `Tensorflow` is to `Python`: a library to build computational graphs and compute the gradients of the outputs with respect to a given set of variables using [reverse automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation).

-1. **Compile-time network consistency check**: Any defect in your neural network (i.e. mismatching layer inputs/outputs) will be raised at compile time. You can enjoy your coffee while your network inference or training process never fails!
-2. **Awesome Rust compiler optimizations**: Because the neural network is completely defined at compile time, the compiler is able to perform smart optimizations, like unrolling loops or injecting [SIMD](https://en.wikipedia.org/wiki/SIMD) instructions.
-3. **Support for embedded**: The `std` library is not required to build neural networks, so it can run on any target that Rust supports.
+Internally it uses the [arrayfire](https://crates.io/crates/arrayfire) crate to provide parallel computations on specialized hardware, such as Nvidia CUDA GPUs, Intel MKL CPUs... For details on which devices are available and for installation instructions for your OS, please check the `arrayfire` crate documentation. **The installation of the `arrayfire` binaries is required for `Mushin` to work.**

+One clear benefit of this crate versus `Tensorflow` is Rust's strong type system. All operations performed on tensors during the graph build are checked at compile time for mathematical soundness, which means no runtime errors after an hour of model training. **If it compiles, it works.** If at some point while building your horribly nested computational graph you make a mistake on the shape of a tensor, you'll be stopped before feeling stupid.
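
To make that concrete, here is a minimal sketch of the kind of mistake the type system rejects, reusing the tensor API from the usage example below (the mismatched call is hypothetical; the exact operator bounds are defined by the crate):

```rust
use mushin::{Context, Values, Class, matmul};

fn main() {
    let ctx = Context::new();

    // A (3, 2) by (2, 3) product: the inner dimensions agree, so this compiles
    let w = ctx.tensor::<1, 1, 3, 2>(Values::Normal, Class::Constant);
    let x = ctx.tensor::<1, 1, 2, 3>(Values::Eye(3.0), Class::Constant);
    let _z = matmul(&w, &x); // resulting shape is (3, 3)

    // A (3, 2) by (3, 2) product has mismatched inner dimensions (2 vs 3),
    // so uncommenting the line below fails at compile time, not at runtime:
    // let _bad = matmul(&w, &w);
}
```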

## Usage

-Add this to your `Cargo.toml`:
+First, install the arrayfire binaries as indicated by the [arrayfire](https://crates.io/crates/arrayfire) crate.
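
Before building, it can be worth confirming that the native libraries are discoverable. The sketch below calls the `arrayfire` crate directly for that purpose, using its `set_backend` and `info` functions (pick whichever backend you actually installed):

```rust
use arrayfire::{info, set_backend, Backend};

fn main() {
    // Select the CPU backend; Backend::CUDA or Backend::OPENCL also work
    // when the corresponding ArrayFire libraries are installed
    set_backend(Backend::CPU);

    // Print the ArrayFire version and the devices it can see;
    // if this runs, Mushin should be able to link against ArrayFire too
    info();
}
```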

+Then, add **Mushin** as one of your dependencies:

```toml
[dependencies]
-mushin = "0.1"
-mushin_derive = "0.1"
+mushin = "0.2"
```

-And this is a very simple example to get you started:
+The following is a self-explanatory example of the basic usage of **Mushin**; for more details, please check the crate [docs](https://docs.rs/mushin/latest/mushin/).

```rust
-use rand::distributions::Uniform;
-
-use mushin::{activations::ReLu, layers::Dense, NeuralNetwork};
-use mushin_derive::NeuralNetwork;
-
-// Builds a neural network with 2 inputs and 1 output,
-// made of 3 feed-forward layers; you can have as many as you want, with any name
-#[derive(NeuralNetwork, Debug)]
-struct MyNetwork {
-    // LayerType<ActivationType, # inputs, # outputs>
-    input: Dense<ReLu, 2, 4>,
-    hidden: Dense<ReLu, 4, 2>,
-    output: Dense<ReLu, 2, 1>,
-}
-
-impl MyNetwork {
-    // Initialize layer weights with a uniform distribution and set ReLU as the activation function
-    fn new() -> Self {
-        let mut rng = rand::thread_rng();
-        let dist = Uniform::from(-1.0..=1.0);
-
-        MyNetwork {
-            input: Dense::random(&mut rng, &dist),
-            hidden: Dense::random(&mut rng, &dist),
-            output: Dense::random(&mut rng, &dist),
-        }
-    }
-}
+use mushin::{Context, Values, Class, Gradients, add, matmul};

fn main() {
-    // Init the weights and perform a forward pass
-    let nn = MyNetwork::new();
-    println!("{:#?}", nn);
-
-    let input = [0.0, 1.0];
-    println!("Input: {:#?}", input);
-    let output = nn.forward(input);
-    println!("Output: {:#?}", output);
-}
-```
+    let ctx = Context::new();
+
+    let x = ctx.tensor::<1, 1, 2, 3>(Values::Eye(3.0), Class::Constant);
+    let w = ctx.tensor::<1, 1, 3, 2>(Values::Normal, Class::Persistent("weights"));
+    let b = ctx.tensor::<1, 1, 3, 3>(Values::Fill(0.0), Class::Persistent("bias"));
+    let z = add(&b, &matmul(&w, &x));

-You may wonder how the `forward` method works. The `NeuralNetwork` derive macro defines it for you, and it looks like this for this particular example:
-
-```rust
-fn forward(&self, input: [f32; 2]) -> [f32; 1] {
-    self.output.forward(self.hidden.forward(self.input.forward(input)))
+    let grads = Gradients::compute(&z);
+    let dz_dw = grads.wrt(&w);
+    let dz_db = grads.wrt(&b);
}
```

-Note how the `forward` method expects two input values, because that's what the first (`input`) layer expects, and returns one single value, because that's what the last layer (`output`) returns.
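
As a rough sanity check on what those gradients should look like: the example computes z = b + Wx. In reverse mode, assuming the output gradient is seeded with an all-ones tensor G (a common convention; the crate's actual seeding may differ), the chain rule gives:

```latex
z = b + W x
\frac{\partial z}{\partial b} = G = \mathbf{1}_{3 \times 3}
\frac{\partial z}{\partial W} = G x^{\top} = 3 \cdot \mathbf{1}_{3 \times 2}
```

The last equality uses the example's `x`, a 2x3 eye tensor scaled by 3.0, whose rows each sum to 3.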

## Roadmap

-- [x] Compile-time neural network consistency check
-- [x] Docs, CI/CD & Benchmarks
-- [ ] Backward pass
-- [ ] More layer types (convolution, dropout, lstm...)
-- [ ] More activation functions (sigmoid, softmax...)
-- [ ] Maaaybeee, CPU and/or GPU concurrency
+- [ ] Add more operations
+- [ ] Allow for higher-order gradients
+- [ ] Add benchmarks
+- [ ] Add a cargo feature for deep learning, which adds layers, losses and activation functions (like `Keras`)

## Contributing

@@ -105,4 +68,4 @@
Mushin is distributed under the terms of both the MIT license and the
Apache License (Version 2.0).

See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT), and
[COPYRIGHT](COPYRIGHT) for details.
12 changes: 0 additions & 12 deletions example/Cargo.toml

This file was deleted.

34 changes: 0 additions & 34 deletions example/src/main.rs

This file was deleted.

24 changes: 0 additions & 24 deletions mushin/Cargo.toml

This file was deleted.

51 changes: 0 additions & 51 deletions mushin/src/activations.rs

This file was deleted.
