
Invalid argument: indices[0,0,2] = [0, 0, 2, -1] does not index into shape [10,79,18,6484] #7

Open
zzpDapeng opened this issue Dec 11, 2020 · 2 comments


zzpDapeng commented Dec 11, 2020

logits shape: (10, 79, 18, 6485)
labels shape: (10, 17)
labels_length: tf.Tensor([ 2 14 9 9 9 13 17 9 9 17], shape=(10,), dtype=int64)
logit_length : tf.Tensor([20 47 36 35 41 58 64 38 45 78], shape=(10,), dtype=int64)

I got this error:
2020-12-11 10:48:09.516460: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at scatter_nd_op.cc:133 : Invalid argument: indices[0,0,2] = [0, 0, 2, -1] does not index into shape [10,79,18,6484]
Traceback (most recent call last):
File "/home/dapeng/PycharmProjects/convTT/train.py", line 68, in
train(model, train_set, optimizer, train_loss, epoch)
File "/home/dapeng/PycharmProjects/convTT/train.py", line 33, in train
label_length=labels_length)
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/rnnt/rnnt.py", line 204, in rnnt_loss
return compute_rnnt_loss_and_grad(*args)
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/ops/custom_gradient.py", line 264, in call
return self._d(self._f, a, k)
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/ops/custom_gradient.py", line 218, in decorated
return _eager_mode_decorator(wrapped, args, kwargs)
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/ops/custom_gradient.py", line 412, in _eager_mode_decorator
result, grad_fn = f(*args, **kwargs)
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/rnnt/rnnt.py", line 195, in compute_rnnt_loss_and_grad
result = compute_rnnt_loss_and_grad_helper(**kwargs)
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/rnnt/rnnt.py", line 168, in compute_rnnt_loss_and_grad_helper
[batch_size, input_max_len, target_max_len, vocab_size - 1])
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 8842, in scatter_nd
indices, updates, shape, name=name, ctx=_ctx)
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 8885, in scatter_nd_eager_fallback
attrs=_attrs, ctx=ctx, name=name)
File "/home/dapeng/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0,2] = [0, 0, 2, -1] does not index into shape [10,79,18,6484] [Op:ScatterNd]
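For context, the shape in the error message lines up with the logits shapes reported above. A quick sketch (pure arithmetic, no TensorFlow needed) of the scatter target that the traceback points at (rnnt.py line 168):

```python
# Shapes taken from the report above
batch_size, input_max_len, target_max_len, vocab_size = 10, 79, 18, 6485

# The scatter target built at rnnt.py line 168, per the traceback:
scatter_shape = [batch_size, input_max_len, target_max_len, vocab_size - 1]
print(scatter_shape)  # [10, 79, 18, 6484] -- the shape in the error message

# The last axis only has room for the 6484 non-blank tokens, so any index
# of -1 (or >= 6484) derived from the labels falls outside it.
```

So the failure is not in the logits themselves but in whatever label index ends up as -1 on that last axis.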


zzpDapeng commented Dec 11, 2020

I ran your Sample Train Script and noticed that you set labels = decoder_seqs + 1 in
return tf.reduce_mean(self.loss(log_probs, decoder_seqs + 1, decoder_lens, encoder_lens))
If I change your code to return tf.reduce_mean(self.loss(log_probs, decoder_seqs, decoder_lens, encoder_lens)), it leads to the same error.
I can't understand why you do that.
In my dictionary, index 0 is reserved for the blank token and the words are indexed from 1 to 6484 (6485 in total), so my decoder_seqs already hold the real word indices. Why add 1?
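A small numpy sketch of the suspicion here (the internal slot mapping `label - 1` is an assumption inferred from the `vocab_size - 1` scatter shape in the traceback, not read from rnnt.py):

```python
import numpy as np

vocab_size = 6485                      # blank at 0, words 1..6484

labels = np.array([379, 57, 2027])     # real word indices, all >= 1
slots = labels - 1                     # hypothetical non-blank slot mapping
assert slots.min() >= 0 and slots.max() < vocab_size - 1  # all in range

padded = np.array([379, 57, 2027, 0])  # a 0 pad sneaks into the label tensor
print((padded - 1).min())              # -1 -> the invalid scatter index
```

If that assumption holds, real word indices starting at 1 are fine, but any 0 in the label tensor maps to -1.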


zzpDapeng commented Dec 11, 2020

My labels are padded with 0 as follows:
[[ 379   57  230   88  214  268  907 1061   62  142 2027    0    0    0    0    0]
 [ 588  459  357  547  209  102  101   86    3   96   87   89   32    0    0    0]
 [ 144  942  196  571 1956  219   82  908   46  363 1148  696  634   55  761  383]
 [ 624  316  378  716 1737  362   49  110  746 1157  346  380  570    0    0    0]
 [ 781  260 1311 1562   15  429   16  666  150    0    0    0    0    0    0    0]
 [ 809  411  310  178  541 1209    2   99  102   87  232   88  231    0    0    0]
 [ 704    6    3  570  167  151    3  290  101  102   87   89 2287    0    0    0]
 [   3  645  146 3748 4067 4735  869   60 1407   95 1334 4735  869    0    0    0]
 [  52  342  372 1020  440  441  628  440  441  628 1154 2121  378  233  766    0]
 [1180  478 1087 1687 2381  777  174  231  102   97  174    0    0    0    0    0]]
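Worth noting: the failing index [0, 0, 2, -1] is at label position 2 of the first sequence even though labels_length[0] is 2, so the scatter appears to read padded positions as well. Assuming the loss shifts every label down by one internally (an inference from the traceback, not confirmed against rnnt.py), the +1 in the sample script would be exactly what keeps 0-padded batches from producing a -1:

```python
import numpy as np

labels = np.array([[379,  57, 230, 0, 0],
                   [588, 459,   0, 0, 0]])   # 0-padded, like the batch above

# Unshifted labels: the assumed internal `- 1` turns every pad into -1
print((labels - 1).min())        # -1 -> "does not index into shape ..."

# With the sample script's `decoder_seqs + 1`, pads survive the internal shift
print(((labels + 1) - 1).min())  # 0
```

Note that adding 1 also pushes the largest word index up by one, which presumably requires the logits' vocabulary axis to be one larger as well.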
