
About the inference time #7

Open
Jerryme-xxm opened this issue Dec 17, 2022 · 1 comment


@Jerryme-xxm

Thanks for your great work!

These days I have been trying to compress MobileViT and reduce its FLOPs and parameter count, but the inference time almost doesn't change.

Only when I remove some MV2 blocks does the inference time drop a little.

The methods I used for compression:
1: Trying Linformer. (This reduces the FLOPs but increases the sequence length, and a longer sequence seems to mean slower speed.)
2: Decreasing the channels of some layers.
3: Other methods, such as replacing conv3×3 with conv1×1 (learned from your code), as shown in the sketch below.
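
For concreteness, here is the kind of substitution I mean in point 3. This is a minimal sketch, not the actual MobileViT code; the block structure, channel count, and SiLU activation are my placeholder assumptions:

```python
import torch
import torch.nn as nn

class SlimConvBlock(nn.Module):
    """Hypothetical block: the original 3x3 conv is swapped for a 1x1 conv,
    which cuts that conv's FLOPs by ~9x at the same channel width."""
    def __init__(self, channels: int):
        super().__init__()
        # before: nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.conv = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))
```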

Could you please tell me the reason?

@Jerryme-xxm
Author

Just now, I tested a single conv1×1 and a single conv3×3:
conv1×1: FLOPs 49152, time 0.0471
conv3×3: FLOPs 442368, time 0.0520
Sad...
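
For anyone who wants to reproduce this, here is a minimal sketch of the timing test, assuming CPU inference and time.perf_counter; the input shape and channel counts below are my assumptions, picked only so the MAC counts match the FLOPs above (8*8*16*48 = 49152 for the 1×1, and 9x that for the 3×3). My guess is that at this size the runtime is dominated by per-call overhead and memory traffic rather than arithmetic, which would explain why the 9x FLOPs gap barely shows up in the time.

```python
import time
import torch
import torch.nn as nn

# Assumed shapes: on an 8x8 feature map with 16 -> 48 channels, the 1x1 conv
# does 8*8*16*48 = 49152 MACs and the 3x3 conv does 9x that, 442368 MACs.
x = torch.randn(1, 16, 8, 8)
conv1x1 = nn.Conv2d(16, 48, kernel_size=1, bias=False)
conv3x3 = nn.Conv2d(16, 48, kernel_size=3, padding=1, bias=False)

def bench(layer, x, warmup=10, iters=1000):
    with torch.no_grad():
        for _ in range(warmup):   # warm up to exclude one-off setup costs
            layer(x)
        t0 = time.perf_counter()
        for _ in range(iters):
            layer(x)
    return (time.perf_counter() - t0) / iters

print(f"conv1x1: {bench(conv1x1, x):.6f} s/iter")
print(f"conv3x3: {bench(conv3x3, x):.6f} s/iter")
```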
