Version 1.0.7 Release
- Fix link.
- NumPy-like basic indexing, including negative indexing (see the sketch after this list).
- Create ~/.ccache before preparing the docker build.
- Delete try-catch-all logic.
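As a quick illustration of the new indexing support, here is a minimal sketch; it assumes negative indices can be applied directly to a Variable created with nn.Variable.from_numpy_array, so please verify against the nnabla documentation for your version.

import numpy as np
import nnabla as nn

# Build a Variable holding a 3x4 array.
x = nn.Variable.from_numpy_array(np.arange(12).reshape(3, 4))
# Negative indexing selects the last row, as in NumPy (assumed usage based on
# the release note above).
y = x[-1]
y.forward()
print(y.d)  # expected: [ 8.  9. 10. 11.]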
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
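After installation, a quick sanity check (just a sketch) is to import the package and print its version:

import nnabla as nn
print(nn.__version__)  # should report 1.0.7 after this release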
Users with Python <= 3.4 may experience errors when running pip install nnabla and pip install nnabla-ext-cuda.
■ Workaround
Please install matplotlib == 2.2.3 and then re-install nnabla and nnabla_ext_cuda.
pip install matplotlib==2.2.3
pip install nnabla
pip install nnabla_ext_cuda
Note that CUDA 9.2 and cuDNN 7.2 are used by default if no versions are specified. You can also install the CUDA extension built for a specific version pair from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.2)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.2)
- nnabla-ext-cuda91 (CUDA 9.1 x cuDNN 7.1)
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.2)
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.2 users
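Once a CUDA extension package is installed, you can select it as the default context from Python. The snippet below is a sketch that assumes the "cudnn" extension name and device id "0"; adjust these to your environment.

import nnabla as nn
from nnabla.ext_utils import get_extension_context

# "cudnn" selects the CUDA/cuDNN extension; device_id chooses the GPU.
ctx = get_extension_context("cudnn", device_id="0")
nn.set_default_context(ctx)
print(ctx)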
Additional setup may be required depending on your OS or environment. Please check the Python Package Installation Guide for details.
To use the C++ inference feature, follow the demonstration of MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building the multi-GPU training package.