# Avalon

[![Build Status](https://travis-ci.org/dfdx/Avalon.jl.svg?branch=master)](https://travis-ci.org/dfdx/Avalon.jl)


**Avalon** is a deep learning library in Julia with a focus on **high performance** and **interoperability with existing DL frameworks**. Its main features include:

* tracing autograd engine - models are just structs, transformations are just functions (see the sketch just below)
* optimizing code generator based on hackable computational graph
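
To make the first bullet concrete: with a tracing autograd engine, gradients of ordinary Julia functions over plain arrays can be computed directly. The sketch below assumes a Yota-style `grad` interface; the exact name and return layout are assumptions for illustration, not confirmed API:

```julia
# A minimal tracing-autograd sketch -- `grad` is an assumed,
# Yota-style interface, not confirmed Avalon API.
f(W, x) = sum(tanh.(W * x))   # an ordinary function of plain arrays

W = rand(3, 3); x = rand(3)
val, g = grad(f, W, x)        # trace f once, get its value and gradients
# `g` is assumed to hold the gradients w.r.t. W and x
```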

## Usage

To give you a feel for what Avalon is like, here's the definition of a small convolutional neural network:

```julia
using Avalon


mutable struct Net
    # ... convolutional and fully connected layer fields elided ...
end

function (m::Net)(x::AbstractArray)
    # ... forward pass through the layers elided ...
end
```
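
Once defined, the model is used like any other Julia callable. A minimal usage sketch, assuming a zero-argument `Net()` constructor and MNIST-shaped input (both assumptions for illustration):

```julia
# Usage sketch -- the Net() constructor and input shape are assumptions
net = Net()
x = rand(Float32, 28, 28, 1, 16)  # a batch of 16 MNIST-like images (WHCN layout)
y = net(x)                        # forward pass: the model is just a callable struct
```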

For a detailed explanation of this and other models, see [the tutorial](https://github.com/dfdx/Avalon.jl/tree/master/tutorial). Some predefined models are also available in [the zoo](https://github.com/dfdx/Avalon.jl/tree/master/zoo).


## Performance
Performance comparison between different libraries is hard, and benchmarks are rarely fair.

### Convolutional neural network

Code is available [here](https://github.com/dfdx/Avalon.jl/tree/master/benchmarks/cnn).

| | training 1 epoch | training total time* | prediction |
| ------------- | ---------------- | -------------------- | ---------- |
| Avalon (CPU) | 170 s | 1742 s | 39 ms |
| Flux (CPU) | 250 s | 2515 s | 42 ms |
| ------------- | ---------------- | -------------------- | ---------- |
| Avalon (GPU) | 10 s | 164 s | 5 ms |
| Flux (GPU) | 12 s | 150 s | 5 ms |
| PyTorch (GPU) | 12 s | 120 s | 2 ms |

`*` - total time includes 10 epochs + compilation time

Note that in the GPU test Avalon has the longest compilation time and thus the longest total training time _after 10 epochs_. However, its time per epoch is the lowest, so Avalon is typically the fastest over longer runs.
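
As a side note, per-call prediction timings like those above are typically gathered with BenchmarkTools, whose warm-up excludes the first compiling call. A minimal sketch, reusing the assumed model and input shape from the usage example:

```julia
using BenchmarkTools

net = Net()                        # hypothetical constructor from the sketch above
x = rand(Float32, 28, 28, 1, 100)  # assumed MNIST-like batch
@btime $net($x)                    # interpolate with $ to avoid global-variable overhead
```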



### Variational Autoencoder

Code is available [here](https://github.com/dfdx/Avalon.jl/tree/master/benchmarks/vae).

| | training 1 epoch | training total time | prediction |
| ------------- | ---------------- | -------------------- | ---------- |
| Avalon (CPU) | 50 s | 535 s | 395 μs |
| Flux (CPU) | 948 s | 158 min | 81 ms |
| ------------- | ---------------- | -------------------- | ---------- |
| Avalon (GPU) | 3 s | 93 s | 194 μs |
| Flux (GPU)** | --- | --- | --- |
| PyTorch (GPU) | 7 s | 66 s | 501 µs |


## API Stability

One of the central ideas behind Avalon is the ability to reuse existing code instead of writing everything from scratch.
To facilitate this, Avalon is committed to a high, although not absolute, level of backward compatibility. The following table
outlines the stability you should expect from various components of the library.

| Component | API Stable? |
| ------------- | ----------- |
| RNN           | No*         |
| Device API | Yes |
| Fitting API | No** |

`*` - currently Avalon provides only basic implementations of vanilla RNN, LSTM, and GRU; these implementations will be improved in a future version and made more compatible with their PyTorch counterparts, but for now they cannot be considered stable

`**` - the function `fit!()` provides a convenient shortcut for training supervised learning models, but in its current state it is too basic for most real use cases; for more durable code, consider writing your own training method using `fit!()` as a template
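
Such a hand-rolled method might look like the sketch below. This is a minimal illustration, not Avalon's actual API: the `grad` call follows the Yota-style interface assumed earlier, while `mse` and `update!` are placeholder names you would swap for your own loss and optimizer step.

```julia
# A hand-rolled training loop -- an illustrative sketch only.
# `grad` (Yota-style), `mse`, and `update!` are assumed names,
# not confirmed Avalon API.
mse(ŷ, y) = sum((ŷ .- y) .^ 2) / length(y)      # placeholder loss

function train!(model, X, Y; epochs = 10, lr = 0.01)
    for epoch in 1:epochs
        # loss value plus gradients of the loss w.r.t. (model, X, Y)
        loss, g = grad((m, x, y) -> mse(m(x), y), model, X, Y)
        update!(model, g[1]; lr = lr)           # hypothetical SGD-style in-place step
        println("epoch $epoch: loss = $loss")
    end
    return model
end
```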
