Installation
To benefit from the native libraries you need a compatible CPU/OS and GPU. CPU native libraries were built for 64-bit IA (Intel Architecture) machines using the Intel Compiler on Windows, Linux and Mac. To use GPU acceleration you need an NVIDIA Fermi or later architecture (GTX 500 or higher) that supports CUDA. A number of advanced kernels only run on Kepler (GTX 600 or higher). We have tested BIDMat on
- Windows 7, 8, 8.1 and 10 64-bit
- Linux Redhat Enterprise 6.2, Amazon Linux and Ubuntu 14 and 16
- Mac OS 10.5 - 10.12
Make sure a supported version of CUDA is installed on the machine (currently 9.2), and that the CUDA runtime libraries are on the library path:
- LD_LIBRARY_PATH on Linux
- %PATH% on Windows
- On Mac OSX, link ~/Library/Java/Extensions to point to the CUDA library path
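For example, on Linux the loader path can be extended like this (the /usr/local/cuda location is a typical default; adjust it to match your installation):

```shell
# Prepend the CUDA runtime directory to the loader path (typical default
# location shown; adjust /usr/local/cuda to match your installation).
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Confirm the entry is present
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep cuda
```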
To build the libraries, you also need the CUDA compiler nvcc in your $PATH. $PATH should contain the cuda binary path for the version of CUDA you plan to use, e.g. /usr/local/cuda/bin. To check this, do
nvcc --version
which will print out the CUDA version for the compiler.
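The build scripts key off this version string. As a sketch, the release number can be pulled out of the nvcc banner like this (the sample banner below is illustrative; on a real machine pipe `nvcc --version` instead):

```shell
# Illustrative: extract the CUDA release number from nvcc's version banner.
# (Sample banner shown; on a real machine use the output of: nvcc --version)
banner='Cuda compilation tools, release 9.2, V9.2.148'
version=$(echo "$banner" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "$version"   # prints 9.2
```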
To use GPU-accelerated deep learning kernels, you should install CUDNN 7.6 for CUDA 9.2.
You will need a Java 8 JDK, either the Oracle JDK or, on Linux, OpenJDK.
Note that you don't need Scala installed on your machine. Maven installs the dependencies in BIDMat's libraries and the bidmat script loads IScala.
BIDMat is built with Maven 3.x.
Due to the additional security features on recent Mac OSes, it's no longer possible to pass the path to the CUDA libraries through DYLD_LIBRARY_PATH. The library path is hard-wired and cannot be changed. The easiest way to add the libraries is to make a symlink from one of the (unused) library extension directories, e.g.:
mkdir ~/Library/Java
ln -s /usr/local/cuda/lib ~/Library/Java/Extensions
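The same symlink pattern, demonstrated in a scratch directory (on a real Mac the link is ~/Library/Java/Extensions and the target is /usr/local/cuda/lib):

```shell
# Demonstrate the symlink pattern in a scratch directory; on a Mac, substitute
# ~/Library/Java/Extensions for the link and /usr/local/cuda/lib for the target.
mkdir -p /tmp/cuda-link-demo/lib
ln -sfn /tmp/cuda-link-demo/lib /tmp/cuda-link-demo/Extensions
readlink /tmp/cuda-link-demo/Extensions   # prints /tmp/cuda-link-demo/lib
```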
BIDMat source code is hosted on GitHub at https://github.com/BIDData/BIDMat. Source code is included in the bundles above, but if you want to get the latest development version or contribute to it, do the following:
- If you haven't already: install git on your machine. If you're running Windows, it will help to install Cygwin.
- Choose a parent directory where you would like to keep the code, and run
git clone https://github.com/BIDData/BIDMat.git
- This will create a directory BIDMat under the parent containing the source code.
- cd to the BIDMat directory and do
mvn clean install
This will download all the libraries (java and native jar files) for your platform and CUDA version. The script automatically determines the CUDA version by invoking the CUDA compiler "nvcc --version". Make sure your paths are set so that the right version of CUDA (if you have multiple versions) is first in $PATH. You can do
nvcc --version
to check this. After the libraries have loaded, you should be able to do
./bidmat
If all is well, the bidmat script should import BIDMat's classes and check the CUDA version and number of GPUs, reporting something like this:
1 CUDA device found, CUDA version 8.0
If you're on Linux, running the "bidmat" script should work. On Windows there is a bidmat.cmd script that does similar work. It's a good idea to look in these scripts and make sure the options (e.g. JVM heap size) are set to reasonable values for your machine. You can also start a Scala interpreter from Maven with the command below, although there are terminal bugs in Cygwin on Windows that make it unusable there. You can instead start bidmat from a Windows command prompt with the command below:
mvn scala:console
Note that this invocation doesn't load the bidmat init file. You have to load it manually with:
scala> :load lib/bidmat_init.scala
You can also start a Jupyter notebook if you have Jupyter installed on your machine. To do this call:
./bidmat notebook
To install a specific version other than the default, edit the pom.xml file and set version/jcudaVersion to "1.1.0-cuda7.0/0.7.0a" or "1.1.0-cuda7.5/0.7.5b". Then run
mvn install
to install it in your local repo.
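Editing those properties can also be scripted. A minimal sketch using sed, shown here against a one-line scratch file (the exact `<version>` layout is an assumption; check your pom.xml before running this against it):

```shell
# Sketch: switch the BIDMat version property with sed, demonstrated on a
# scratch file. The <version> layout is an assumed example; verify it
# matches your actual pom.xml before editing it in place.
printf '<version>2.0.10-cuda8.0beta</version>\n' > /tmp/pom-snippet.xml
sed -i 's|<version>2.0.10-cuda8.0beta</version>|<version>1.1.0-cuda7.5</version>|' /tmp/pom-snippet.xml
cat /tmp/pom-snippet.xml   # prints <version>1.1.0-cuda7.5</version>
```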
You can modify the source code to customize your own version of BIDMat, although of course it's better to include an unmodified BIDMat dependency in your Maven projects. The appropriate dependency code is in bidmat.pom. It will look something like this:
<repositories>
  <repository>
    <id>bintray-biddata-BIDData</id>
    <name>bintray</name>
    <url>http://dl.bintray.com/biddata/BIDData</url>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <groupId>BIDMat</groupId>
    <artifactId>BIDMat</artifactId>
    <version>2.0.10-cuda8.0beta</version>
    <type>pom</type>
  </dependency>
</dependencies>
Maven is the official build tool now because of its support for profiles and native code. BIDMat requires hardware-specific dependencies which are loaded automatically by Maven. sbt cannot currently do this, so you first need to build using Maven by doing "mvn clean install" in the main BIDMat directory. Then you can do:
./sbt package
and you can run it with "./sbt console".
CUDA works seamlessly from remote machines in Linux. Using CUDA remotely with Windows can be more challenging. High-end GPUs (Tesla series) run in "TCC" mode in Windows, which means that they are running outside the graphics system. Other GPUs (e.g. GTX-5XX, -6XX, -7XX and Titan) can only run in "WDM" mode, which means that they are using graphics drivers and are part of the graphics system. Remote Desktop disables access to the graphics system on a remote machine, which means that you cannot use CUDA with a commodity GPU on a machine you are accessing with Remote Desktop. This is not a problem with Tesla devices. You can use another remote desktop technology to access a commodity GPU, specifically most flavors of VNC (Virtual Network Computing). We use RealVNC for our own work on remote machines.
Finally, commodity NVIDIA GPUs do an energy-saving shutdown if they are not connected to a monitor, independent of whether a compute process is trying to use them. This is true for Windows, Linux and MacOS. So commodity devices must always be connected to a physical monitor, or to a dummy load, to be used by CUDA. You can build dummy loads with a few pennies of hardware following these directions
BIDMat's native GPU code is in /BIDMat/jni/src. To compile it you will need the matching CUDA SDK (currently 8.0) and a compiler supported by CUDA. These are listed on the CUDA SDK site. There is a simple configure script in that directory, and in most cases you can do:
./configure
make
make installcudalib
make install simply places a shared library in the src/main/resources/lib subdirectory of your BIDMat tree. If this doesn't work, check the configure script for a missing or erroneous shell variable.
We can't provide much support for developing your own GPU code, but it is not difficult. You can see from BIDMat's own kernels that it involves a CUDA C main function in a .cu file in BIDMat/jni/src, a JNI wrapper in BIDMAT_CUMAT.cpp, and a native function declaration in Java in BIDMat/src/main/java/edu/berkeley/bid/CUMAT.java. Arrays are passed from Java to C using JCuda's Pointer class.
BIDMat includes a number of CPU-accelerated routines that rely on Intel's compiler and the Intel Math Kernel Library (MKL). While it would be great to use free compilers, unfortunately only Intel's build toolchain includes acceleration of matrix operations, transcendental functions, random number generation, sparse matrix operations, and portable threading (OpenMP), all of which are critical for performance. You therefore need a version of e.g. Intel's "C++ Composer XE" for your platform. Then do
> ./configure
> make
> make installcpulib
make install just copies the libraries into BIDMat/src/main/resources/lib. Then to package these into jars used by BIDMat, cd to the main BIDMat directory and do:
mvn -Dgpu install
We rarely add to these, so you'll probably find the pre-compiled libs in the bundles for BIDMat are enough. If you are going to build both CPU and GPU libraries (which means you have purchased the Intel compiler), do:
> ./configure
> make
> make install
Then to wrap these as jars used by BIDMat, cd to the main BIDMat directory and do:
mvn -Dcpu -Dgpu install
Eclipse has good Scala integration and you can download a bundle with Eclipse and the Scala plugin preinstalled from here. You should receive .project and .classpath files when you fetch BIDMat from GitHub, and these define the main project settings you need for Eclipse. But you will also need to connect Eclipse to the native libraries. There are two ways to do this. First, you can set the "VM arguments" in your run configuration to something like this:
-Xmx12G -Xms128M -Dfile.encoding=UTF-8 -Djava.library.path=c:/code/BIDMach/lib;C:/PROGRA~1/NVIDIA~2/CUDA/v7.0/bin
The library path entry should point to both the library subdirectory of BIDMat and the CUDA shared library path. The DOS-style 8-character pathnames without whitespace are necessary to get the arguments passed correctly to Java.
Second, you can add the shared library paths to the "Java Build Path" for the project. Under Project settings, select "Java Build Path", then "Source", and expand either the java or scala package. You will see an entry for "Native Library Location". Point it to either the BIDMat lib directory or the CUDA bin directory. Do the same for the other package so that both library path entries are in one of the packages.
BIDMat uses UTF-8 encoding in order to be able to use math characters as operators. You have to set the UTF-8 encoding for editors in Eclipse, which is under Window->Preferences->General->Workspace->Text Encoding. You will also need UTF-8 support in your sbt build files. It is already there in the build files that come with the BIDMat distribution.
Not all fonts include UTF-8 math characters, or print well in command-line windows. We have found the DejaVu fonts to be very good in both respects, and we strongly recommend you use them. They're available here. We use them in Eclipse, in Cygwin and PuTTY command windows, and in other editors (Emacs).