
AssertionError for the more-than-two-dimensional input of nn.Linear #18

Open
bighuang624 opened this issue Aug 5, 2019 · 6 comments

@bighuang624

Considering that the input of the linear layer is not necessarily only two-dimensional, I think these changes

  • change assert len(inp.size()) == 2 and len(out.size()) == 2 to assert len(inp.size()) >= 2 and len(out.size()) >= 2
  • change inp.size()[1] and out.size()[1] to inp.size()[-1] and out.size()[-1]

should be made in these functions

  • def compute_Linear_memory(module, inp, out) in compute_memory.py
  • def compute_Linear_madd(module, inp, out) in compute_madd.py
  • def compute_Linear_flops(module, inp, out) in compute_flops.py

I am not quite sure that such changes are correct, and I apologize for my poor English.
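
For example, compute_Linear_flops in compute_flops.py would then look roughly like this (only a sketch of the proposed change, not tested):

import torch.nn as nn

def compute_Linear_flops(module, inp, out):
    assert isinstance(module, nn.Linear)
    # accept inputs with extra leading dimensions, e.g. (batch, seq_len, features)
    assert len(inp.size()) >= 2 and len(out.size()) >= 2
    batch_size = inp.size()[0]
    # use the last dimension as the feature dimension instead of index 1
    return batch_size * inp.size()[-1] * out.size()[-1]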

@ZhaoLv

ZhaoLv commented Nov 17, 2020

If I want to get the calculation analysis of an fc layer, how can I use this tool?

import torch.nn as nn
import torch
from torchstat import stat

class Net(nn.Module):

    def __init__(self):
        nn.Module.__init__(self)
        self.fc = nn.Linear(144, 4096)

    def forward(self, x):
        x = self.fc(x)
        return x

net = Net()
stat(net, (1, 64, 144))

I wrote this code, but the assertion assert len(inp.size()) == 2 and len(out.size()) == 2 fails. How can I change the code?
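
Would something like the following flattening trick be an acceptable workaround until the assertion is relaxed? This is only an untested sketch that folds the non-feature dimensions into the batch before the Linear layer:

import torch.nn as nn
import torch
from torchstat import stat

class Net(nn.Module):

    def __init__(self):
        nn.Module.__init__(self)
        self.fc = nn.Linear(144, 4096)

    def forward(self, x):
        # collapse all leading dimensions into the batch so the Linear layer
        # (and torchstat's 2-D assertion) only ever sees a 2-D tensor
        lead_shape = x.shape[:-1]
        x = self.fc(x.reshape(-1, x.shape[-1]))
        return x.reshape(*lead_shape, -1)

net = Net()
stat(net, (1, 64, 144))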

@Mayurji

Mayurji commented Jul 31, 2021

Any update on this Issue?

1 similar comment
@Chen-yu-Zheng

Any update on this Issue?

@taomiao

taomiao commented Jan 9, 2023

flops = (torch.prod(torch.LongTensor(out.size()))*inp.size(-1)).item()
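
If I read this right, it is meant as a drop-in body for compute_Linear_flops that counts every non-feature dimension of the output, not only the batch; roughly (my own untested reading):

import torch
import torch.nn as nn

def compute_Linear_flops(module, inp, out):
    assert isinstance(module, nn.Linear)
    # multiply the total number of output elements (batch, channels, ...,
    # out_features) by in_features, so intermediate dimensions are counted too
    return (torch.prod(torch.LongTensor(list(out.size()))) * inp.size(-1)).item()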

@engin-work

Maybe the channel dimensions should also be included, otherwise the FLOPs may be incorrect.
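
For ZhaoLv's example above (if I have the shapes right), the Linear layer sees a (1, 1, 64, 144) input once stat prepends the batch dimension, so counting only batch_size * in_features * out_features gives 144 * 4096 ≈ 0.59M multiply-adds, while counting the 64 channel positions as well gives 64 * 144 * 4096 ≈ 37.7M.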

@yycocl

yycocl commented Jul 7, 2023

Thanks to bighuang624's answer. The AssertionError "assert len(inp.size()) == 2 and len(out.size()) == 2" is raised because the input of the linear layer is not necessarily two-dimensional. Here is my solution:

  1. Find the torchstat sources (......\Anaconda3\envs\envs_name\Lib\site-packages\torchstat).
  2. Find compute_Linear_memory(module, inp, out) in compute_memory.py and change its content as follows:

assert isinstance(module, nn.Linear)
assert len(inp.size()) >= 2 and len(out.size()) >= 2
batch_size = inp.size()[0]
mread = batch_size * (inp.size()[-1:].numel() + num_params(module))
mwrite = out.size().numel()
return (mread, mwrite)

  3. Find compute_Linear_madd(module, inp, out) in compute_madd.py and change its content as follows:

assert isinstance(module, nn.Linear)
assert len(inp.size()) >= 2 and len(out.size()) >= 2
num_in_features = inp.size()[-1]
num_out_features = out.size()[-1]
mul = num_in_features
add = num_in_features - 1
return num_out_features * (mul + add)

  4. Find compute_Linear_flops(module, inp, out) in compute_flops.py and change its content as follows:

assert isinstance(module, nn.Linear)
assert len(inp.size()) >= 2 and len(out.size()) >= 2
batch_size = inp.size()[0]
return batch_size * inp.size()[-1] * out.size()[-1]
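
After making these three edits in the installed torchstat package, ZhaoLv's snippet above should no longer trigger the AssertionError, since both asserts now accept inputs with more than two dimensions.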
