This repository has been archived by the owner on Oct 21, 2023. It is now read-only.
Thanks for your great work!
These days I have been trying to compress MobileViT, reducing its FLOPs and parameter count, but the inference time barely changes. Only when I remove some MV2 blocks does the inference time drop slightly.
The methods I used for compression:
1: Linformer. (This reduces the FLOPs but increases the sequence length, and a longer sequence seems to mean slower inference.)
2: Decreasing the channel counts of some layers.
3: Other methods, such as replacing 3x3 convolutions with 1x1 convolutions (learned from your code).
Could you please tell me the reason?
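To make the FLOPs-vs-latency gap concrete, here is a minimal PyTorch sketch (not from this repository; the layer sizes are made up for illustration) comparing a 3x3 convolution with the 1x1 replacement from point 3. The parameter count drops about 9x, but on small feature maps the measured wall-clock latency often barely moves, because such layers tend to be bound by memory traffic and kernel-launch overhead rather than by arithmetic.

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-ins for two variants of the same block:
# a 3x3 conv vs. the 1x1 replacement mentioned in point 3.
conv3x3 = nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False)
conv1x1 = nn.Conv2d(64, 64, kernel_size=1, bias=False)

def count_params(m):
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())

def bench(m, x, iters=200):
    """Average forward-pass time in seconds (CPU, eager mode)."""
    m.eval()
    with torch.no_grad():
        for _ in range(10):  # warm-up runs, excluded from timing
            m(x)
        t0 = time.perf_counter()
        for _ in range(iters):
            m(x)
        return (time.perf_counter() - t0) / iters

x = torch.randn(1, 64, 32, 32)
print("3x3 params:", count_params(conv3x3))  # 64*64*3*3 = 36864
print("1x1 params:", count_params(conv1x1))  # 64*64     = 4096
print("3x3 latency (s):", bench(conv3x3, x))
print("1x1 latency (s):", bench(conv1x1, x))
```

If the two latencies come out nearly equal despite the ~9x parameter gap, that is consistent with the observation above: reducing FLOPs alone does not guarantee faster inference.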