Merge pull request SeanNaren#52 from miguelvr/torch-update
Update to pytorch 0.4
Sean Naren authored May 23, 2018
2 parents 34ab267 + 1b710af commit 7b11b16
Showing 2 changed files with 15 additions and 17 deletions.
29 changes: 14 additions & 15 deletions README.md
@@ -6,7 +6,7 @@ This is an extension onto the original repo found [here](https://github.com/baid

## Installation

-Install [PyTorch](https://github.com/pytorch/pytorch#installation).
+Install [PyTorch](https://github.com/pytorch/pytorch#installation) v0.4.

`WARP_CTC_PATH` should be set to the location of a built WarpCTC
(i.e. `libwarpctc.so`). This defaults to `../build`, so from within a
@@ -21,13 +21,13 @@ make
```
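
For illustration only (not part of this diff), a typical way to point the bindings at the build output before installing is to export `WARP_CTC_PATH`; the path below is a placeholder for wherever `libwarpctc.so` was built:

```bash
# Illustrative: tell the bindings where to find the built libwarpctc.so.
# Adjust the path to match your actual build directory.
export WARP_CTC_PATH=/path/to/warp-ctc/build
```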

Now install the bindings:
-```
+```bash
cd pytorch_binding
python setup.py install
```
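
As a quick sanity check (an illustrative step, not in the committed README), the import used in the example below should succeed once the bindings are installed:

```bash
# Illustrative sanity check: fails with an ImportError (or the OSX dlopen
# error discussed next) if the bindings or libwarpctc cannot be loaded.
python -c "from warpctc_pytorch import CTCLoss; print('warpctc_pytorch imports OK')"
```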

If you try the above and get a dlopen error on OSX with anaconda3 (as recommended by pytorch):
-```
+```bash
cd ../pytorch_binding
python setup.py install
cd ../build
@@ -38,18 +38,17 @@ This will resolve the library not loaded error. This can be easily modified to w
Example to use the bindings below.

```python
-import torch
-from torch.autograd import Variable
-from warpctc_pytorch import CTCLoss
-ctc_loss = CTCLoss()
-# expected shape of seqLength x batchSize x alphabet_size
-probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous()
-labels = Variable(torch.IntTensor([1, 2]))
-label_sizes = Variable(torch.IntTensor([2]))
-probs_sizes = Variable(torch.IntTensor([2]))
-probs = Variable(probs, requires_grad=True) # tells autograd to compute gradients for probs
-cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
-cost.backward()
+import torch
+from warpctc_pytorch import CTCLoss
+ctc_loss = CTCLoss()
+# expected shape of seqLength x batchSize x alphabet_size
+probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous()
+labels = torch.IntTensor([1, 2])
+label_sizes = torch.IntTensor([2])
+probs_sizes = torch.IntTensor([2])
+probs.requires_grad_(True) # tells autograd to compute gradients for probs
+cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
+cost.backward()
```
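
The updated example reflects the PyTorch 0.4 merge of `Variable` into `Tensor`: inputs are plain tensors and `requires_grad_(True)` is called on `probs` directly. As a usage sketch (an illustrative addition, not part of the committed README), the gradient computed by `cost.backward()` lands on `probs.grad`:

```python
# Illustrative follow-up to the example above (run after cost.backward()).
print(cost)        # the CTC cost returned by the loss
print(probs.grad)  # gradient w.r.t. probs; same shape as probs
```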

## Documentation
3 changes: 1 addition & 2 deletions pytorch_binding/warpctc_pytorch/__init__.py
@@ -1,7 +1,6 @@
import torch
import warpctc_pytorch as warp_ctc
from torch.autograd import Function
-from torch.autograd import Variable
from torch.nn import Module
from torch.nn.modules.loss import _assert_no_grad

@@ -38,7 +37,7 @@ def forward(ctx, acts, labels, act_lens, label_lens, size_average=False,
grads = grads / minibatch_size
costs = costs / minibatch_size

-ctx.grads = Variable(grads, volatile=True)
+ctx.grads = grads
return costs

@staticmethod
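
The hunk ends at the `@staticmethod` decorator for `backward`, which is not shown here. For context, a hedged sketch of how a `Function.backward` typically consumes gradients cached this way (the names and the number of `None` returns are assumptions, not the repository's verbatim code):

```python
@staticmethod
def backward(ctx, grad_output):
    # Return the cached gradients for `acts`; every other forward() input
    # (labels, lengths, flags) receives None, and the count of None values
    # must match the number of remaining forward() arguments.
    # A fully general implementation would also scale by grad_output.
    return ctx.grads, None, None, None, None, None
```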
