Releases: sony/nnabla-ext-cuda
Version 1.0.12 Release
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
Users with Python <= 3.4 may experience errors with pip install nnabla and pip install nnabla-ext-cuda.
■ Workaround
Please install matplotlib==2.2.3, then re-install nnabla and nnabla_ext_cuda.
pip install matplotlib==2.2.3
pip install nnabla
pip install nnabla_ext_cuda
Note that CUDA 9.2 and cuDNN 7.3 are used by default if no version is specified. You can also install the CUDA extension for a specific version from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.1)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.3(win), 7.4(linux))
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.3(win), 7.4(linux))
- nnabla-ext-cuda100 (CUDA 10.0 x cuDNN 7.3(win), 7.4(linux))
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.3 users
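To confirm that the CUDA extension is actually picked up after installation, a quick check along these lines can be used (a minimal sketch; get_extension_context comes from nnabla's ext_utils module, and device_id '0' is just an example value):
import nnabla as nn
from nnabla.ext_utils import get_extension_context

# Requesting the 'cudnn' context fails if nnabla-ext-cuda is not installed correctly.
ctx = get_extension_context('cudnn', device_id='0')
nn.set_default_context(ctx)
print(ctx)  # shows the selected backend and device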
Additional setup may be required depending on your OS or environment. Please check the Python Package Installation Guide for details.
To use the C++ inference feature, follow the demonstration on MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building the multi-GPU training package.
Version 1.0.11 Release
- Fix pointer arithmetic and handling of half type
- Add functions: IsInf, IsNaN, ResetNaN, ResetInf, and Where (see the example after this list)
- Feature/20190121 build with python35
- Print error for cuda ver less than 7
- Use a dedicated function to determine the workspace size for the algorithm.
- Add CuDNN max and average pooling for 3D case.
- Serialization of SolverState
- Fix CuDNN reduction when the output shape equals the input shape.
- Fix binary functions
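As a quick illustration of the newly added IsNaN / Where functions (a minimal sketch; the Python-side names F.isnan, F.where, and F.constant are the usual nnabla function names, and the input data is made up):
import numpy as np
import nnabla as nn
import nnabla.functions as F

# Made-up input containing a NaN to demonstrate IsNaN and Where.
x = nn.Variable.from_numpy_array(np.array([1.0, np.nan, 3.0], dtype=np.float32))
mask = F.isnan(x)                                      # 1 where x is NaN, 0 elsewhere
cleaned = F.where(mask, F.constant(0.0, x.shape), x)   # replace NaNs with 0
cleaned.forward()
print(cleaned.d)  # [1. 0. 3.]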
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
Users with Python <= 3.4 may experience errors with pip install nnabla and pip install nnabla-ext-cuda.
■ Workaround
Please install matplotlib==2.2.3, then re-install nnabla and nnabla_ext_cuda.
pip install matplotlib==2.2.3
pip install nnabla
pip install nnabla_ext_cuda
Note that CUDA 9.2 and cuDNN 7.3 are used by default if no version is specified. You can also install the CUDA extension for a specific version from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.1)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.3)
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.3)
- nnabla-ext-cuda100 (CUDA 10.0 x cuDNN 7.3)
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.3 users
Additional setup may be required depending on your OS or environment. Please check the Python Package Installation Guide for details.
To use the C++ inference feature, follow the demonstration on MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building the multi-GPU training package.
Version 1.0.10 Release
Version 1.0.8 Release
Merge pull request #113 from sony/feature/20181114-fix-multi-gpu-docker: Add openmpi-bin to Multi-GPU docker.
Version 1.0.9 Release
- Add reflection padding and generalize for N-D input.
- Add CuDNN versions of sum/mean/prod reduction
- Add CUDA10 support
- Add implementation of arange function.
- Add option for choosing deterministic algorithms in cuDNN convolution
- [fix] Add include in sort.cu
- Add options for F.min() and F.max() to return indices (see the example after this list)
- [doc] Avoid unnecessary upgrade of dependent packages
- [function] Prune function
- Add the CUDA implementation for sort function.
- [format] Add auto-format make commands
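For example, the min/max index options mentioned above might be used like this (a minimal sketch; the with_index keyword name is an assumption based on the release note, and the data is made up):
import numpy as np
import nnabla as nn
import nnabla.functions as F

x = nn.Variable.from_numpy_array(np.array([[3., 1., 2.],
                                           [0., 5., 4.]], dtype=np.float32))
# Assumed keyword: with_index=True returns the maxima and their indices along axis 1.
val, idx = F.max(x, axis=1, with_index=True)
val.forward()
idx.forward()
print(val.d)  # [3. 5.]
print(idx.d)  # [0. 1.]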
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
Users with Python <= 3.4 may experience errors with pip install nnabla and pip install nnabla-ext-cuda.
■ Workaround
Please install matplotlib==2.2.3, then re-install nnabla and nnabla_ext_cuda.
pip install matplotlib==2.2.3
pip install nnabla
pip install nnabla_ext_cuda
Note that CUDA 9.2 and cuDNN 7.3 are used by default if no version is specified. You can also install the CUDA extension for a specific version from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.1)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.3)
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.3)
- nnabla-ext-cuda100 (CUDA 10.0 x cuDNN 7.3)
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.3 users
Additional setup may be required depending on your OS or environment. Please check the Python Package Installation Guide for details.
To use the C++ inference feature, follow the demonstration on MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building the multi-GPU training package.
Version 1.0.7 Release
- Fix link.
- Numpy-like basic indexing (negative indexing; see the example after this list)
- Create ~/.ccache before preparing the docker build.
- Delete try-catch-all logic.
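As a small illustration of the NumPy-like negative indexing mentioned above (a minimal sketch; the array is made up, and slicing on nnabla Variables is assumed to follow basic NumPy semantics):
import numpy as np
import nnabla as nn

x = nn.Variable.from_numpy_array(np.arange(12, dtype=np.float32).reshape(3, 4))
# Negative indices count from the end, as in NumPy basic indexing.
last_row = x[-1:]       # the last row, shape (1, 4)
tail_cols = x[:, -2:]   # the last two columns, shape (3, 2)
last_row.forward()
tail_cols.forward()
print(last_row.d)   # [[ 8.  9. 10. 11.]]
print(tail_cols.d)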
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
Users with Python <= 3.4 may experience errors with pip install nnabla and pip install nnabla-ext-cuda.
■ Workaround
Please install matplotlib==2.2.3, then re-install nnabla and nnabla_ext_cuda.
pip install matplotlib==2.2.3
pip install nnabla
pip install nnabla_ext_cuda
Note that CUDA 9.2 and cuDNN 7.2 are used by default if no version is specified. You can also install the CUDA extension for a specific version from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.2)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.2)
- nnabla-ext-cuda91 (CUDA 9.1 x cuDNN 7.1)
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.2)
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.2 users
Additional setup may be required depending on your OS or environment. Please check the Python Package Installation Guide for details.
To use the C++ inference feature, follow the demonstration on MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building the multi-GPU training package.
Version 1.0.6 Release
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
Note that CUDA 9.2 and cuDNN 7.2 are used by default if no version is specified. You can also install the CUDA extension for a specific version from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.2)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.2)
- nnabla-ext-cuda91 (CUDA 9.1 x cuDNN 7.1)
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.2)
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.2 users
Additional setup may be required depending on your OS or environment. Please check the Python Package Installation Guide for details.
To use the C++ inference feature, follow the demonstration on MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building the multi-GPU training package.
Version 1.0.5 Release
- Get cmake 3.x from EPEL repository.
- Discard MULTI_GPU_NAME and MULTI_GPU, and enable MULTI_GPU_SUFFIX and…
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
Note that CUDA 9.2 and cuDNN 7.2 are used by default if no version is specified. You can also install the CUDA extension for a specific version from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.2)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.2)
- nnabla-ext-cuda91 (CUDA 9.1 x cuDNN 7.1)
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.2)
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.2 users
Additional setup may be required depending on your OS or environment. Please check the Python Package Installation Guide for details.
To use the C++ inference feature, follow the demonstration on MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building the multi-GPU training package.
Version 1.0.4 Release
- Feature/20180907 multi gpu package
- Resize function with interpolation (add only 2D bilinear)
- Fix -u option.
- Multi-GPU docker build
- Remove unnecessary print
- LeakyReLU with inplace (see the example after this list)
- Update numpy version to 1.12 and install scikit-image by PIP instead of conda
- Remove MAKE_MANYLINUX_WHEEL from build-with-docker
- [c++] Enable Group in (De)ConvolutionCudaCudnn
- Fix missing SCL package issue
- Fix bug for division of collectives
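As a small illustration of the LeakyReLU change above (a minimal sketch; the inplace keyword and the alpha value are assumptions based on the release note, and the input is made up):
import numpy as np
import nnabla as nn
import nnabla.functions as F

x = nn.Variable.from_numpy_array(np.array([-2., -0.5, 0., 1.5], dtype=np.float32))
# Leaky ReLU with a small negative slope; inplace=True (assumed keyword) reuses the
# input buffer for the output to save memory.
y = F.leaky_relu(x, alpha=0.1, inplace=True)
y.forward()
print(y.d)  # [-0.2  -0.05  0.    1.5 ]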
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
Note that CUDA 9.2 and cuDNN 7.2 are used by default if no version is specified. You can also install the CUDA extension for a specific version from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.2)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.2)
- nnabla-ext-cuda91 (CUDA 9.1 x cuDNN 7.1)
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.2)
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.2 users
Additional setup may be required depending on your OS or environment. Please check the Python Package Installation Guide for details.
To use the C++ inference feature, follow the demonstration on MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building the multi-GPU training package.