
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor #9

Open
robi56 opened this issue Aug 29, 2021 · 1 comment

Comments

robi56 commented Aug 29, 2021

python train.py

hyper-parameters:
HParams(vocab_size=7024, pad_idx=0, bos_idx=3, emb_size=256, hidden_size=512, context_size=512, latent_size=256, factor_emb_size=64, n_class1=3, n_class2=2, key_len=4, sens_num=4, sen_len=9, poem_len=30, batch_size=128, drop_ratio=0.15, weight_decay=0.00025, clip_grad_norm=2.0, max_lr=0.0008, min_lr=5e-08, warmup_steps=6000, ndis=3, min_tr=0.85, burn_down_tr=3, decay_tr=6, tau_annealing_steps=6000, min_tau=0.01, rec_warm_steps=1500, noise_decay_steps=8500, log_steps=200, sample_num=1, max_epoches=12, save_epoches=3, validate_epoches=1, fbatch_size=64, fmax_epoches=3, fsave_epoches=1, vocab_path='../corpus/vocab.pickle', ivocab_path='../corpus/ivocab.pickle', train_data='../corpus/semi_train.pickle', valid_data='../corpus/semi_valid.pickle', model_dir='../checkpoint/', data_dir='../data/', train_log_path='../log/mix_train_log.txt', valid_log_path='../log/mix_valid_log.txt', fig_log_path='../log/', corrupt_ratio=0.1, dae_epoches=10, dae_batch_size=128, dae_max_lr=0.0008, dae_min_lr=5e-08, dae_warmup_steps=4500, dae_min_tr=0.85, dae_burn_down_tr=2, dae_decay_tr=6, dae_log_steps=300, dae_validate_epoches=1, dae_save_epoches=2, dae_train_log_path='../log/dae_train_log.txt', dae_valid_log_path='../log/dae_valid_log.txt', cl_batch_size=64, cl_epoches=10, cl_max_lr=0.0008, cl_min_lr=5e-08, cl_warmup_steps=800, cl_log_steps=100, cl_validate_epoches=1, cl_save_epoches=2, cl_train_log_path='../log/cl_train_log.txt', cl_valid_log_path='../log/cl_valid_log.txt')
please check the hyper-parameters, and then press any key to continue >ok
ok
dae pretraining...
layers.embed.weight torch.Size([7024, 256])
layers.encoder.rnn.weight_ih_l0 torch.Size([1536, 256])
layers.encoder.rnn.weight_hh_l0 torch.Size([1536, 512])
layers.encoder.rnn.bias_ih_l0 torch.Size([1536])
layers.encoder.rnn.bias_hh_l0 torch.Size([1536])
layers.encoder.rnn.weight_ih_l0_reverse torch.Size([1536, 256])
layers.encoder.rnn.weight_hh_l0_reverse torch.Size([1536, 512])
layers.encoder.rnn.bias_ih_l0_reverse torch.Size([1536])
layers.encoder.rnn.bias_hh_l0_reverse torch.Size([1536])
layers.decoder.rnn.weight_ih_l0 torch.Size([1536, 512])
layers.decoder.rnn.weight_hh_l0 torch.Size([1536, 512])
layers.decoder.rnn.bias_ih_l0 torch.Size([1536])
layers.decoder.rnn.bias_hh_l0 torch.Size([1536])
layers.word_encoder.rnn.weight_ih_l0 torch.Size([256, 256])
layers.word_encoder.rnn.weight_hh_l0 torch.Size([256, 256])
layers.word_encoder.rnn.bias_ih_l0 torch.Size([256])
layers.word_encoder.rnn.bias_hh_l0 torch.Size([256])
layers.word_encoder.rnn.weight_ih_l0_reverse torch.Size([256, 256])
layers.word_encoder.rnn.weight_hh_l0_reverse torch.Size([256, 256])
layers.word_encoder.rnn.bias_ih_l0_reverse torch.Size([256])
layers.word_encoder.rnn.bias_hh_l0_reverse torch.Size([256])
layers.out_proj.weight torch.Size([7024, 512])
layers.out_proj.bias torch.Size([7024])
layers.map_x.mlp.linear_0.weight torch.Size([512, 768])
layers.map_x.mlp.linear_0.bias torch.Size([512])
layers.context.conv.weight torch.Size([512, 512, 3])
layers.context.conv.bias torch.Size([512])
layers.context.linear.weight torch.Size([512, 1024])
layers.context.linear.bias torch.Size([512])
layers.dec_init_pre.mlp.linear_0.weight torch.Size([506, 1536])
layers.dec_init_pre.mlp.linear_0.bias torch.Size([506])
params num: 31
building data for dae...
193461
34210
train batch num: 1512
valid batch num: 268
Traceback (most recent call last):
  File "train.py", line 79, in <module>
    main()
  File "train.py", line 74, in main
    pretrain(mixpoet, tool, hps)
  File "train.py", line 30, in pretrain
    dae_trainer.train(mixpoet, tool)
  File "/content/MixPoet/codes/dae_trainer.py", line 137, in train
    self.run_train(mixpoet, tool, optimizer, logger)
  File "/content/MixPoet/codes/dae_trainer.py", line 83, in run_train
    batch_keys, batch_poems, batch_dec_inps, batch_lengths)
  File "/content/MixPoet/codes/dae_trainer.py", line 58, in run_step
    mixpoet.dae_graph(keys, poems, dec_inps, lengths)
  File "/content/MixPoet/codes/graphs.py", line 337, in dae_graph
    _, poem_state0 = self.computer_enc(poems, self.layers['encoder'])
  File "/content/MixPoet/codes/graphs.py", line 232, in computer_enc
    enc_outs, enc_state = encoder(emb_inps, lengths)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/MixPoet/codes/layers.py", line 59, in forward
    input_lens, batch_first=True, enforce_sorted=False)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/utils/rnn.py", line 249, in pack_padded_sequence
    _VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
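
For reference, this failure is not specific to MixPoet: recent PyTorch releases require the lengths argument of pack_padded_sequence to be a CPU int64 tensor, while here the lengths batch arrives on cuda:0. A minimal, self-contained sketch that triggers the same error (the shapes are illustrative and a CUDA device is assumed, as in the Colab run above):

import torch
from torch.nn.utils.rnn import pack_padded_sequence

embed_inps = torch.randn(4, 9, 256, device='cuda')       # padded embeddings on the GPU
input_lens = torch.tensor([9, 7, 5, 3], device='cuda')   # int64 lengths, but on the GPU

# Raises: RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, ...
pack_padded_sequence(embed_inps, input_lens, batch_first=True, enforce_sorted=False)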

hiroki-chen commented May 2, 2022

You can change the following

packed = torch.nn.utils.rnn.pack_padded_sequence(embed_inps,
                input_lens, batch_first=True, enforce_sorted=False)

in layers.py, line 56, to

packed = torch.nn.utils.rnn.pack_padded_sequence(embed_inps,
                input_lens.cpu(), batch_first=True, enforce_sorted=False)

I think this is a bug introduced by newer versions of PyTorch, which require the lengths argument of pack_padded_sequence to be a CPU tensor.
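
As a self-contained check of that fix (a minimal sketch, not the repository's actual module; the layer sizes simply mirror the bidirectional GRU encoder shapes printed in the log above):

import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Illustrative bidirectional GRU encoder: emb_size=256, hidden_size=512.
rnn = torch.nn.GRU(256, 512, batch_first=True, bidirectional=True).cuda()
embed_inps = torch.randn(4, 9, 256, device='cuda')
input_lens = torch.tensor([9, 7, 5, 3], device='cuda')

# Moving only the lengths to the CPU is enough; the packed data stays on the GPU.
packed = pack_padded_sequence(embed_inps, input_lens.cpu(),
                batch_first=True, enforce_sorted=False)
enc_outs, enc_state = rnn(packed)
enc_outs, _ = pad_packed_sequence(enc_outs, batch_first=True)   # shape (4, 9, 1024)

The same one-line change (input_lens -> input_lens.cpu()) applies anywhere else pack_padded_sequence is called with GPU-resident lengths.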
