
How to extract intermediate features? #21

Closed · KleinXin opened this issue Apr 22, 2022 · 5 comments

KleinXin commented Apr 22, 2022

I am trying to make some changes to RepLKNet, so I split the network into several parts using .children(). My code is shown below:

import torch
import torch.nn as nn
import torchvision.utils
import torchvision.models as tv_models

from networks import replknet 

if __name__ == "__main__":

    basenet = replknet.create_RepLKNet31B(small_kernel_merged=False, use_checkpoint=True)

    children = list(basenet.children())
    self_stem_block = children[0]
    self_main_block_0 = children[1][0]
    self_main_block_1 = children[1][1]
    self_main_block_2 = children[1][2]
    self_main_block_3 = children[1][3]
    self_out_conv = children[2]
    self_sync_bn = children[3]
    self_avg_pool = children[4]
    self_classifier = children[5]

    x = torch.ones(1, 3, 224, 224).cuda()

    x = self_stem_block(x)

Then it gives this error:

Traceback (most recent call last):
  File "test_load_model.py", line 66, in <module>
    x = self_stem_block(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented
    raise NotImplementedError
NotImplementedError

It seems that self_stem_block is an nn.ModuleList, so it does not have a forward function.
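
For illustration, here is a minimal reproduction of the difference with toy layers (these are not the actual RepLKNet modules, just a sketch of the behavior):

import torch
import torch.nn as nn

ml = nn.ModuleList([nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()])
x = torch.ones(1, 3, 224, 224)

# ml(x) raises NotImplementedError: nn.ModuleList only registers its
# submodules and defines no forward(); you have to iterate it yourself.
for layer in ml:
    x = layer(x)

# nn.Sequential, by contrast, chains its children and is callable directly.
seq = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
y = seq(torch.ones(1, 3, 224, 224))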

Does anyone know how to extract intermediate features?

Any suggestions are appreciated.

wdmwhh commented Apr 28, 2022

        x = self.stem[0](x)
        for stem_layer in self.stem[1:]:
            if self.use_checkpoint:
                x = checkpoint.checkpoint(stem_layer, x)     # save memory
            else:
                x = stem_layer(x)

https://github.com/DingXiaoH/RepLKNet-pytorch/blob/main/replknet.py#L259
I think it will be helpful to you.
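
Building on that, here is a sketch of extracting per-stage features. It assumes the attribute names stem, stages, and transitions from replknet.py, and uses use_checkpoint=False for simplicity:

import torch
from networks import replknet

basenet = replknet.create_RepLKNet31B(small_kernel_merged=False, use_checkpoint=False)
basenet.eval()

x = torch.ones(1, 3, 224, 224)
with torch.no_grad():
    # The stem is an nn.ModuleList, so run it layer by layer.
    for stem_layer in basenet.stem:
        x = stem_layer(x)

    features = []
    for idx, stage in enumerate(basenet.stages):
        x = stage(x)
        features.append(x)  # intermediate feature map after each stage
        if idx < len(basenet.transitions):
            x = basenet.transitions[idx](x)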

KleinXin (Author) commented Apr 28, 2022

x = self.stem[0](x)

Thank you for your reply. Following your suggestion, I changed my code so that each submodule of the nn.ModuleList is called individually. But it gives me another error, shown below:

import torch
import torch.nn as nn
import torchvision.utils
import torchvision.models as tv_models

from networks import replknet 

if __name__ == "__main__":

    basenet = replknet.create_RepLKNet31B(small_kernel_merged=False, use_checkpoint=True)

    children = list(basenet.children())
    self_stem_block = children[0]
    self_main_block_0 = children[1][0]
    self_main_block_1 = children[1][1]
    self_main_block_2 = children[1][2]
    self_main_block_3 = children[1][3]
    self_out_conv = children[2]
    self_sync_bn = children[3]
    self_avg_pool = children[4]
    self_classifier = children[5]

    x = torch.ones(1, 3, 224, 224).cuda()

    # Call the first submodule of the stem ModuleList directly.
    self_stem_block_0 = self_stem_block[0]
    self_stem_block_0.cuda()

    x = self_stem_block_0(x)

Traceback (most recent call last):
  File "test_load_model_rlk.py", line 86, in <module>
    x = self_stem_block_0(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/batchnorm.py", line 732, in forward
    world_size = torch.distributed.get_world_size(process_group)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 845, in get_world_size
    return _get_group_size(group)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 306, in _get_group_size
    default_pg = _get_default_group()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 411, in _get_default_group
    "Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.

DingXiaoH (Owner) commented

I think this error is not related to our model; I guess it results from your runtime. Do you have a CUDA runtime? You may test our model with CPU only.

yodhcn commented Oct 28, 2022

I think this error is not related to our model; I guess it results from your runtime. Do you have a CUDA runtime? You may test our model with CPU only.

This problem may be caused by SyncBatchNorm
facebookresearch/detectron2#3972 (comment)
pytorch/pytorch#63662
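
For single-process use, one common workaround (a sketch in the spirit of the detectron2 comment linked above, not something provided by this repo) is to recursively replace every SyncBatchNorm with a plain BatchNorm2d before calling the submodules:

import torch.nn as nn

def revert_sync_batchnorm(module):
    # Recursively replace nn.SyncBatchNorm with nn.BatchNorm2d, copying
    # the learned affine parameters and the running statistics.
    mod = module
    if isinstance(module, nn.SyncBatchNorm):
        mod = nn.BatchNorm2d(module.num_features, module.eps, module.momentum,
                             module.affine, module.track_running_stats)
        if module.affine:
            mod.weight = module.weight
            mod.bias = module.bias
        mod.running_mean = module.running_mean
        mod.running_var = module.running_var
        mod.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        mod.add_module(name, revert_sync_batchnorm(child))
    return mod

# basenet = revert_sync_batchnorm(basenet)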

KleinXin (Author) commented

I think this error is not related to our model; I guess it results from your runtime. Do you have a CUDA runtime? You may test our model with CPU only.

This problem may be caused by SyncBatchNorm: facebookresearch/detectron2#3972 (comment), pytorch/pytorch#63662

Thanks! I will give it a try when I am not so busy.
