Adding #![no_std] support via no_std_compat #244

Merged (3 commits, Oct 17, 2022)
Changes from all commits
6 changes: 4 additions & 2 deletions Cargo.toml

@@ -24,6 +24,7 @@ keywords = [
 features = ["nightly", "numpy"]

 [dependencies]
+no-std-compat = { version = "0.4.1", features = [ "alloc", "compat_hash" ] }

coreylowman (Owner, Author) commented on this line:

    @antimora Is no_std still useful even if alloc will be required?

antimora replied:

    @coreylowman Awesome! Thank you for adding the support. Yes, it will still be very useful. Sorry, I missed your message earlier. I came back to check on the status of this request and you had already added it. Great, thanks!

 rand = { version = "0.8.5", features = ["std_rng"] }
 rand_distr = { version = "0.4.3", features = [] }
 matrixmultiply = { version = "0.3.2", features = [] }
@@ -32,9 +33,10 @@ cblas-sys = { version = "0.1.4", optional = true }
 libc = { version = "0.2", optional = true }

 [features]
-default = ["numpy"]
+default = ["std", "numpy"]
+std = ["no-std-compat/std"]
 nightly = []
-numpy = ["dep:zip"]
+numpy = ["dep:zip", "std"]
 cblas = ["dep:cblas-sys", "dep:libc"]
 intel-mkl = ["cblas"]

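With "std" in the default feature set and "numpy" now depending on it, a downstream crate opts into no_std by disabling default features (which also drops "numpy", since zip needs the standard library). A sketch of the consuming side; the crate name and version here are illustrative, not part of this diff:

```toml
[dependencies]
# Default build: "std" and "numpy" are both enabled.
# dfdx = "0.9"

# no_std build: turn defaults off; allocation support still comes in
# through the unconditional no-std-compat dependency above.
dfdx = { version = "0.9", default-features = false }
```
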
1 change: 1 addition & 0 deletions src/data.rs

@@ -1,6 +1,7 @@
 //! A collection of data utility classes such as [one_hot_encode()] and [SubsetIterator].

 use rand::prelude::SliceRandom;
+use std::vec::Vec;

 use crate::arrays::HasArrayData;
 use crate::tensor::{Tensor1D, Tensor2D, TensorCreator};
1 change: 1 addition & 0 deletions src/devices/allocate.rs

@@ -1,6 +1,7 @@
 use super::Cpu;
 use crate::arrays::CountElements;
 use std::alloc::{alloc_zeroed, Layout};
+use std::boxed::Box;

 /// Allocate an Nd array on the heap.
 pub trait AllocateZeros {
4 changes: 3 additions & 1 deletion src/devices/broadcast_reduce/mod.rs

@@ -17,12 +17,14 @@
 mod accumulator;
 mod indexing;

+pub use accumulator::*;
+
 use super::allocate::AllocateZeros;
 use super::fill::FillElements;
 use super::Cpu;
 use crate::arrays::{AllAxes, Axes2, Axes3, Axes4, Axis, CountElements};
-pub use accumulator::*;
 use indexing::{BroadcastMut, BroadcastRef};
+use std::boxed::Box;

 /// Device level broadcasts & reduces of type `T` along axes `Axes`.
 pub trait DeviceReduce<T: CountElements, Axes>:
1 change: 1 addition & 0 deletions src/devices/conv.rs

@@ -3,6 +3,7 @@ use super::{AllocateZeros, Cpu};
 use cblas_sys::{
     cblas_sgemm as sgemm, CblasNoTrans as NoTr, CblasRowMajor as RowMajor, CblasTrans as Tr,
 };
+use std::boxed::Box;

 /// **Requires nightly** 2d convolution with stride and padding specified at trait level.
 ///
1 change: 1 addition & 0 deletions src/devices/fill.rs

@@ -1,5 +1,6 @@
 use super::{AllocateZeros, Cpu};
 use crate::arrays::CountElements;
+use std::boxed::Box;

 /// Fills all elements with the specified function
 pub trait FillElements<T: CountElements>: Sized + AllocateZeros {
1 change: 1 addition & 0 deletions src/devices/mod.rs

@@ -25,6 +25,7 @@ mod pool2d;
 #[cfg(feature = "nightly")]
 pub use pool2d::*;

+use std::boxed::Box;
 use std::ops::*;

 /// The CPU device
12 changes: 11 additions & 1 deletion src/feature_flags.rs

@@ -1,5 +1,15 @@
 //! Information about the available feature flags
 //!
+//! # "std" **Default flag**
+//!
+//! Enables usage of the standard library. Otherwise [no_std_compat](https://crates.io/crates/no-std-compat)
+//! is used.
+//!
+//! Note that allocations are necessary, so the no_std_compat dependency looks like:
+//! ```toml
+//! no-std-compat = { version = "0.4.1", features = [ "alloc", "compat_hash" ] }
+//! ```
+//!
 //! # "intel-mkl"
 //!
 //! Enables using the `Intel MKL` libraries (assuming you installed it already) for matrix multiplication.
@@ -24,7 +34,7 @@
 //!
 //! `build.rs` will fail helpfully if you don't have the correct path/environment variables.
 //!
-//! # "numpy"
+//! # "numpy" **Default flag**
 //!
 //! Enables saving and loading arrays to .npy files, and saving and loading nn to .npz files.
 //!
1 change: 1 addition & 0 deletions src/gradients.rs

@@ -1,6 +1,7 @@
 //! Implementations of [GradientTape] and generic Nd array containers via [Gradients].

 use std::collections::HashMap;
+use std::{boxed::Box, vec::Vec};

 use crate::arrays::HasArrayType;
 use crate::devices::{AllocateZeros, HasDevice};
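The existing `HashMap` import keeps compiling without `std` because the no-std-compat dependency enables `compat_hash`, which substitutes a `hashbrown`-backed map under no_std. A minimal sketch of that pattern, assuming the crate-level alias from src/lib.rs below; the helper function is illustrative, not part of the diff:

```rust
// Assumes `extern crate no_std_compat as std;` at the crate root, so this
// resolves to std's HashMap with the "std" feature, or hashbrown's without it.
use std::collections::HashMap;

// Hypothetical helper: tally how often each gradient id appears.
fn count_ids(ids: &[u64]) -> HashMap<u64, usize> {
    let mut counts = HashMap::new();
    for &id in ids {
        *counts.entry(id).or_insert(0) += 1;
    }
    counts
}
```
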
10 changes: 7 additions & 3 deletions src/lib.rs

@@ -1,6 +1,3 @@
-#![allow(incomplete_features)]
-#![cfg_attr(feature = "nightly", feature(generic_const_exprs))]
-
 //! Ergonomics & safety focused deep learning in Rust. Main features include:
 //! 1. Const generic tensor library with tensors up to 4d!
 //! 2. A large library of tensor operations (matrix multiplication, arithmetic, activation functions, etc).
@@ -94,6 +91,13 @@
 //! opt.update(&mut model, gradients);
 //! ```

+#![no_std]
+#![allow(incomplete_features)]
+#![cfg_attr(feature = "nightly", feature(generic_const_exprs))]
+
+extern crate alloc;
+extern crate no_std_compat as std;
+
 pub mod arrays;
 pub mod data;
 pub mod devices;
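The crate is now unconditionally `#![no_std]`; the `extern crate no_std_compat as std;` alias is what lets all the existing `std::` paths keep compiling, since no-std-compat re-exports `core` and `alloc` items under `std`-shaped paths (and the real `std` when the "std" feature is on). A standalone sketch of the same pattern, assuming a crate that depends on no-std-compat with the `alloc` feature; it compiles as a library, though a binary would additionally need a global allocator:

```rust
#![no_std]

extern crate alloc;
// Every `std::` path below now resolves through the compat shim.
extern crate no_std_compat as std;

// With the "std" feature off, these come from `alloc`; with it on,
// they are the ordinary standard-library types.
use std::boxed::Box;
use std::vec::Vec;

/// Illustrative helper: sum a vector's elements into a boxed float.
pub fn boxed_sum(xs: Vec<f32>) -> Box<f32> {
    Box::new(xs.iter().sum())
}
```
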
1 change: 1 addition & 0 deletions src/nn/mod.rs

@@ -123,6 +123,7 @@ mod npz_impls;
 mod tests {
     use crate::gradients::{GradientProvider, Gradients};
     use crate::unique_id::HasUniqueId;
+    use std::boxed::Box;

     #[derive(Default)]
     pub struct SimpleGradients(pub Gradients);
1 change: 1 addition & 0 deletions src/nn/npz.rs

@@ -3,6 +3,7 @@ use std::error::Error;
 use std::{
     io::{BufReader, BufWriter, Read, Seek, Write},
     path::Path,
+    string::String,
 };
 use zip::{
     result::{ZipError, ZipResult},
1 change: 1 addition & 0 deletions src/nn/npz_impls.rs

@@ -1,5 +1,6 @@
 use super::npz::{npz_fread, npz_fwrite, LoadFromNpz, SaveToNpz};
 use crate::prelude::*;
+use std::format;
 use std::io::{Read, Seek, Write};
 use zip::{result::ZipResult, ZipArchive, ZipWriter};
1 change: 1 addition & 0 deletions src/nn/repeated.rs

@@ -1,5 +1,6 @@
 use crate::gradients::{CanUpdateWithGradients, GradientProvider, UnusedTensors};
 use crate::prelude::*;
+use std::vec::Vec;

 /// Repeats `T` `N` times. This requires that `T`'s input is the same as it's output.
 ///
3 changes: 3 additions & 0 deletions src/numpy/load.rs

@@ -1,10 +1,13 @@
 //! Provides some generic functions to save Nd arrays in the .npy format.

 use super::*;
+use std::vec;
 use std::{
     fs::File,
     io::{BufReader, Read},
     path::Path,
+    string::String,
+    vec::Vec,
 };

 /// Loads data from a .npy file. This calls [read()].
5 changes: 5 additions & 0 deletions src/numpy/mod.rs

@@ -1,6 +1,11 @@
 //! Provides some generic functions to load & save Nd arrays in the [.npy](https://numpy.org/devdocs/reference/generated/numpy.lib.format.html)
 //! format. See [load()] and [save()]

+use std::{
+    string::{String, ToString},
+    vec::Vec,
+};
+
 mod load;
 mod save;

2 changes: 1 addition & 1 deletion src/optim/adam.rs

@@ -3,7 +3,7 @@ use crate::devices::ForEachElement;
 use crate::gradients::{CanUpdateWithGradients, GradientProvider, Gradients};
 use crate::prelude::*;
 use crate::unique_id::HasUniqueId;
-use std::marker::PhantomData;
+use std::{boxed::Box, marker::PhantomData};

 /// An implementation of the Adam optimizer from
 /// [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980)
1 change: 1 addition & 0 deletions src/optim/optimizer.rs

@@ -37,6 +37,7 @@ impl std::fmt::Display for UnusedParamsError {
     }
 }

+#[cfg(feature = "std")]
 impl std::error::Error for UnusedParamsError {}

 #[allow(clippy::from_over_into)]
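The `Error` trait lives in `std` (it had not yet been stabilized in `core` when this PR landed), so only this impl needs a feature gate; `Display` stays unconditional because `core::fmt` is always available. A hedged, standalone sketch of the shape of this pattern — the message text is illustrative, since the real `Display` impl is elided in the diff above:

```rust
use core::fmt;

#[derive(Debug)]
pub struct UnusedParamsError;

// Display only needs core::fmt, so it works in both std and no_std builds.
impl fmt::Display for UnusedParamsError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Illustrative message, not the crate's actual wording.
        write!(f, "some model parameters were not present in the gradients")
    }
}

// Only std has the Error trait, so the impl disappears in no_std builds.
#[cfg(feature = "std")]
impl std::error::Error for UnusedParamsError {}
```
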
2 changes: 1 addition & 1 deletion src/optim/rmsprop.rs

@@ -3,7 +3,7 @@ use crate::devices::{FillElements, ForEachElement};
 use crate::gradients::{CanUpdateWithGradients, GradientProvider, Gradients};
 use crate::prelude::*;
 use crate::unique_id::HasUniqueId;
-use std::marker::PhantomData;
+use std::{boxed::Box, marker::PhantomData};

 /// RMSprop As described in [Hinton, 2012](http://www.cs.toronto.edu/%7Etijmen/csc321/slides/lecture_slides_lec6.pdf).
 ///
2 changes: 1 addition & 1 deletion src/optim/sgd.rs

@@ -3,7 +3,7 @@ use crate::devices::ForEachElement;
 use crate::gradients::{CanUpdateWithGradients, GradientProvider, Gradients};
 use crate::prelude::*;
 use crate::unique_id::HasUniqueId;
-use std::marker::PhantomData;
+use std::{boxed::Box, marker::PhantomData};

 /// Implementation of Stochastic Gradient Descent. Based on [pytorch's implementation](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html)
 ///
1 change: 1 addition & 0 deletions src/tensor/impl_tensor_creator.rs

@@ -5,6 +5,7 @@ use crate::prelude::*;
 use crate::unique_id::unique_id;
 use rand::prelude::Distribution;
 use rand_distr::{num_traits::One, Standard, StandardNormal};
+use std::boxed::Box;

 /// Something that can be created - currently only implemented for tensors with no tapes.
 pub trait TensorCreator: Sized + HasDevice {
1 change: 1 addition & 0 deletions src/tensor/into_tensor.rs

@@ -1,4 +1,5 @@
 use super::{Tensor0D, Tensor1D, Tensor2D, Tensor3D, Tensor4D, TensorCreator};
+use std::boxed::Box;

 /// Creates a tensor using the data based in. The return type is based
 /// on the data you pass in. See [IntoTensor] for implementations.
2 changes: 2 additions & 0 deletions src/tensor_ops/utils.rs

@@ -7,6 +7,8 @@
 //! 4. You can't really separate these operations since they are very inter-dependent. So it makes
 //!    sense to have a single unit for doing it.

+use std::boxed::Box;
+
 use crate::devices::{AllocateZeros, Device, ForEachElement};
 use crate::gradients::{Gradients, Tape};
 use crate::prelude::*;