
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation #5

Open
YoungSeng opened this issue Jun 10, 2022 · 1 comment

@YoungSeng

Hello, thank you for your great work!

I have a question. When I set skeleton_aware=0, training fails with the following error:

 0%|                                                                                                                               | 0/15000 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 132, in <module>
    main()
  File "train.py", line 114, in main
    joint_train(reals, gens[:curr_stage], group_gan_models, lengths,
  File "/nfs7/y50021900/ganimator/models/architecture.py", line 71, in joint_train
    list(map(optimize_lambda, gan_models))
  File "/nfs7/y50021900/ganimator/models/architecture.py", line 68, in <lambda>
    optimize_lambda = lambda x: x.optimize_parameters(gen=True, disc=False, rec=False)
  File "/nfs7/y50021900/ganimator/models/gan1d.py", line 164, in optimize_parameters
    self.backward_G()
  File "/nfs7/y50021900/ganimator/models/gan1d.py", line 136, in backward_G
    loss_total.backward(retain_graph=True)
  File "/nfs7/y50021900/miniconda3/envs/ganimator/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/nfs7/y50021900/miniconda3/envs/ganimator/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
    Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [429, 256, 1, 5]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

How can I fix it?
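For reference, here is a minimal sketch of the class of error in the traceback (not the ganimator code itself): an in-place operation modifies a tensor that autograd saved for the backward pass, bumping its version counter so that backward() sees "version 2; expected version 1". The tensor names and ops below are illustrative, assuming a recent PyTorch:

```python
import torch

# exp() saves its *output* for the backward pass, so modifying that
# output in place invalidates the saved tensor.
x = torch.randn(4, requires_grad=True)

y = x.exp()
y.add_(1)  # in-place: version counter goes 0 -> 1
try:
    y.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation"
    print("in-place version failed:", type(e).__name__)

# Fix: use the out-of-place counterpart, which allocates a new tensor
# and leaves the saved exp() output untouched.
y = x.exp()
y = y + 1
y.sum().backward()
print(torch.allclose(x.grad, x.exp()))  # gradient of exp(x) + 1 is exp(x)

# To locate the offending op in a large model, enable anomaly detection
# (as the error message hints) before training:
# torch.autograd.set_detect_anomaly(True)
```

In this codebase the culprit would be an in-place op (e.g. `add_`, `mul_`, or an in-place slice assignment) somewhere between the forward pass and the `loss_total.backward(retain_graph=True)` call in `backward_G`; anomaly detection should point at the forward op whose saved tensor was modified.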

@PeizhuoLi
Owner

Hi, thanks for your question. The skeleton_aware option is part of the legacy code that is no longer maintained. This error is very difficult to fix, and we will probably remove the option in the next version.
