AttributeError: 'BoringModel' object has no attribute 'require_backward_grad_sync' when using manual optimization with TPU #6503
Labels: accelerator: tpu, bug, help wanted, priority: 0
🐛 Bug
Hello!
When using manual optimization on TPU, I get AttributeError: 'BoringModel' object has no attribute 'require_backward_grad_sync'. The error is raised at the self.manual_backward(loss) call in training_step. If I replace self.manual_backward(loss) with loss.backward(), training seems to work, but I am not sure whether that is a safe or sustainable workaround. Any help would be much appreciated.
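For context, here is a minimal sketch of the manual-optimization training_step where the error occurs. This is my own reconstruction, not the exact notebook code; the layer shape and SGD optimizer are placeholders.

```python
import torch
from pytorch_lightning import LightningModule


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        # Placeholder layer; the exact model shape does not matter for the bug.
        self.layer = torch.nn.Linear(32, 2)
        # Disable automatic optimization so training_step drives the backward pass.
        self.automatic_optimization = False

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        loss = self(batch).sum()
        # On TPU this raises:
        # AttributeError: 'BoringModel' object has no attribute 'require_backward_grad_sync'
        self.manual_backward(loss)
        # Workaround that appears to run, but may bypass Lightning's backward hooks:
        # loss.backward()
        opt.step()
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```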
Please reproduce using the BoringModel
Here is the notebook reproducing the error:
https://colab.research.google.com/drive/1LPYgtUAiHd1OXuTK6I1WkRaCUQScxEPg?usp=sharing
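For reference, the notebook runs the BoringModel sketched above on TPU roughly like this. The dataset, batch size, and tpu_cores value are assumptions on my part, not copied from the notebook.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import Trainer


class RandomDataset(Dataset):
    # Simple random-feature dataset standing in for the notebook's data.
    def __init__(self, size, length):
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)


def run():
    train_loader = DataLoader(RandomDataset(32, 64), batch_size=2)
    model = BoringModel()  # the manual-optimization model sketched above
    # tpu_cores=8 is an assumed setting; the error surfaces once training starts on XLA.
    trainer = Trainer(tpu_cores=8, max_epochs=1)
    trainer.fit(model, train_loader)


if __name__ == "__main__":
    run()
```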
Environment
WARNING:root:TPU has started up successfully with version pytorch-1.8
Installed torch-xla using:
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl
to match the Colab defaults.
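As a sanity check (not part of the notebook, just a common way to confirm the wheel is usable), the XLA device can be queried after the install:

```python
import torch_xla.core.xla_model as xm

# Should print an XLA device such as xla:1 when the TPU runtime is reachable.
device = xm.xla_device()
print(device)
```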