
Avoid AttributeError: 'torch.dtype' object has no attribute 'type' #89

Open
wants to merge 3 commits into base: master

Conversation

GoingMyWay

Avoid the `AttributeError: 'torch.dtype' object has no attribute 'type'` error, to make the code compatible with newer PyTorch versions.
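
For context, the crash appears to come from averaging stat values in the logger: when those values are torch tensors, `np.mean` trips over the torch dtype. A minimal sketch of the kind of change, not the exact diff in this PR (`window_mean` is a hypothetical helper):

```Python
import numpy as np
import torch as th

def window_mean(values):
    # convert any torch tensors (possibly on the GPU, assumed to hold scalars)
    # to plain Python numbers before averaging, so np.mean never has to
    # interpret a torch dtype
    values = [v.item() if th.is_tensor(v) else v for v in values]
    return np.mean(values)
```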

@AVSurfer123

+1. The Docker image currently fails because of this, since it always installs the latest version of PyTorch.

@reubenwong97

Hello, I ran into this issue as well; I should have come across your PR sooner.

I'm not sure how the performance compares, or which is best practice, but I edited the following in the logging file:

```Python
# defined a new method in the logging class (T is torch, np is numpy)
def average_list(self, k, window):
    array = []
    for stat in self.stats[k][-window:]:
        item = stat[1]
        # move any tensor to the CPU and convert it to numpy before averaging
        if T.is_tensor(item):
            item = item.cpu().numpy()
        array.append(item)
    return np.mean(array)

# edited the line that was throwing the error
item = "{:.4f}".format(self.average_list(k, window))
```

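For reference, `np.prod` returns a numpy scalar, which counts as `numbers.Integral` but is not a Python `int`:
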
```Python
In [7]: import numpy as np                                                                                                                                

In [8]: np.prod((1, 2, 3, 4))                                                                                                                             
Out[8]: 24

In [9]: type(np.prod((1, 2, 3, 4)))                                                                                                                       
Out[9]: numpy.int64

In [10]: type(int(np.prod((1, 2, 3, 4))))                                                                                                                 
Out[10]: int

In [11]: import numbers                                                                                                                                   

In [12]: isinstance(np.prod((1, 2, 3, 4)), numbers.Integral)                                                                                              
Out[12]: True

In [13]: isinstance(np.prod((1, 2, 3, 4)), int)                                                                                                           
Out[13]: False
```
@reubenwong97

Hi, I would like to mention that I tested the fix by @GoingMyWay because mine felt hacky, but it didn't work; it still crashed when logging the stats. I reverted back to my fix.

@GoingMyWay
Author

> Hi, I would like to mention that I tested the fix by @GoingMyWay because mine felt hacky, but it didn't work; it still crashed when logging the stats. I reverted back to my fix.

Can you check which item in the stats caused this problem? Actually, for old PyTorch versions, the grad_norm caused this problem.
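
If `grad_norm` is the culprit, one workaround sketch, assuming the learner logs the value returned by `clip_grad_norm_` (which is a tensor in recent PyTorch); the helper name and the `logger.log_stat` call are illustrative:

```Python
import torch as th

def clip_and_log_grad_norm(params, logger, t_env, max_norm=10):
    # clip_grad_norm_ returns the total gradient norm; in recent PyTorch this is a 0-dim tensor
    grad_norm = th.nn.utils.clip_grad_norm_(params, max_norm)
    # hand the logger a plain float so np.mean over the logged stats never sees a tensor
    grad_norm = grad_norm.item() if th.is_tensor(grad_norm) else float(grad_norm)
    logger.log_stat("grad_norm", grad_norm, t_env)
    return grad_norm
```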

 src/components/episode_buffer.py:103: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  v = th.tensor(v, dtype=dtype, device=self.device)
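
That warning is separate from the `AttributeError`; a minimal sketch of the pattern it recommends, with a hypothetical helper name rather than the actual code in `episode_buffer.py`:

```Python
import torch as th

def as_tensor(v, dtype, device):
    # follow the warning's advice: don't re-wrap an existing tensor with th.tensor(...)
    if th.is_tensor(v):
        return v.clone().detach().to(dtype=dtype, device=device)
    return th.tensor(v, dtype=dtype, device=device)
```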

@fyqqyf left a comment


it works
