Use pytorch to determine CUDA availability
SeanNaren committed Apr 26, 2018
1 parent 1dd0bb1 commit 33f850f
Showing 2 changed files with 5 additions and 16 deletions.
9 changes: 0 additions & 9 deletions README.md
````diff
@@ -20,15 +20,6 @@ cmake ..
 make
 ```
 
-Otherwise, set `WARP_CTC_PATH` to wherever you have `libwarpctc.so`
-installed. If you have a GPU, you should also make sure that
-`CUDA_HOME` is set to the home cuda directory (i.e. where
-`include/cuda.h` and `lib/libcudart.so` live). For example:
-
-```
-export CUDA_HOME="/usr/local/cuda"
-```
-
 Now install the bindings:
 ```
 cd pytorch_binding
````
12 changes: 5 additions & 7 deletions pytorch_binding/setup.py
Expand Up @@ -5,18 +5,16 @@
from setuptools import setup, find_packages

from torch.utils.ffi import create_extension
import torch

extra_compile_args = ['-std=c++11', '-fPIC']
warp_ctc_path = "../build"

if "CUDA_HOME" not in os.environ:
print("CUDA_HOME not found in the environment so building "
"without GPU support. To build with GPU support "
"please define the CUDA_HOME environment variable. "
"This should be a path which contains include/cuda.h")
enable_gpu = False
else:
if torch.cuda.is_available():
enable_gpu = True
else:
print("Torch was not built with CUDA support, not building warp-ctc GPU extensions.")
enable_gpu = False

if platform.system() == 'Darwin':
lib_ext = ".dylib"
Expand Down
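The core of this commit is letting PyTorch itself report whether it was built with CUDA support, instead of probing the `CUDA_HOME` environment variable. A minimal sketch of that decision logic, with `decide_gpu_build` as a hypothetical helper (not part of the commit) that takes the availability flag as a parameter so it mirrors `torch.cuda.is_available()` without requiring torch to be installed:

```python
def decide_gpu_build(cuda_available):
    """Return True when GPU extensions should be built.

    `cuda_available` stands in for torch.cuda.is_available(): True only
    when the installed PyTorch was compiled with CUDA support, regardless
    of whether CUDA_HOME happens to be set in the environment.
    """
    if cuda_available:
        return True
    # Same message the patched setup.py prints before skipping GPU code.
    print("Torch was not built with CUDA support, "
          "not building warp-ctc GPU extensions.")
    return False
```

In the real `setup.py` this collapses to `enable_gpu = torch.cuda.is_available()` plus the warning branch; the advantage over the old check is that it cannot be fooled by a `CUDA_HOME` pointing at a toolkit that the installed PyTorch was never compiled against.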
