
Extracting GraphNode Layers from Pretrained Network and converting them to nn.Module format. #154

Answered by leondgarse
Parikshit00 asked this question in Q&A
  • The output shape change from 4D to 3D was introduced in the check-in that updated the attention blocks to use 3D inputs, for better inference performance; some of my testing results show that 3D inputs perform better. You may try a version before that change, like kecam==1.3.17.
  • If using the newest 1.3.24, comment out those lines in caformer.py from the above check-in if you need 4D outputs. They are just some Reshape layers.
  • Another issue in my previous reply is that we need to create a model from the inputs up to the required layers, not use the layer itself. Example usage (the backbone variant and layer name below are illustrative; substitute the ones you need) could be:
    import torch
    from torch import nn
    from keras_cv_attention_models import caformer
    from keras_cv_attention_models.backend import models

    # Build the pretrained backbone without the classification head
    backbone = caformer.CAFormerS18(num_classes=0)
    # Create a sub-model from the backbone inputs up to the required layer,
    # rather than calling the inner layer on its own
    feature_model = models.Model(backbone.inputs[0], backbone.get_layer("stack4_block3_output").output)
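To illustrate what those Reshape layers in caformer.py do, here is a minimal sketch of the 4D <-> 3D conversion (shapes are illustrative, and kecam's channel ordering may differ):

```python
import torch

# 4D "image" layout: (batch, height, width, channels)
x4d = torch.rand(2, 7, 7, 64)

# Flattening the spatial dims gives the 3D "token" layout used by the
# updated attention blocks: (batch, height * width, channels)
x3d = x4d.reshape(2, 7 * 7, 64)
print(x3d.shape)  # torch.Size([2, 49, 64])

# Reversing the reshape recovers the 4D output exactly
assert torch.equal(x3d.reshape(2, 7, 7, 64), x4d)
```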
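The same "build from the inputs up to the layer you need" idea can be sketched on the plain PyTorch side, independent of kecam (the toy backbone and the cut point here are illustrative only):

```python
import torch
from torch import nn

# A toy backbone standing in for a pretrained network (illustrative only)
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),    # stage 1
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),   # stage 2
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # stage 3, not wanted here
)

# Take the sub-network from the inputs up to the layer we need,
# rather than calling an inner layer on its own
feature_extractor = nn.Sequential(*list(backbone.children())[:4])

x = torch.rand(1, 3, 32, 32)
features = feature_extractor(x)
print(features.shape)  # torch.Size([1, 16, 32, 32])
```

The resulting feature_extractor is an ordinary nn.Module, so it can be dropped into a larger PyTorch model or training loop directly.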

Answer selected by Parikshit00