Vision Foundation Model deployment on edge devices - How can adapters help deploy a VFM on an edge device? #1273
solomonmanuelraj started this conversation in General
Replies: 2 comments 2 replies
-
Yes, when you call
-
Hi @younesbelkada & @BenjaminBossan, are there existing adapters available for the OWL-ViT model (google/owlvit-base-patch32)? Could you share a link? Thanks
-
Hi team,
I want to fine-tune a vision foundation model on a custom dataset and deploy it on an edge device with tight resource constraints (around 150 MB of memory).
(VFM: OWL-ViT, size 611 MB, https://huggingface.co/google/owlvit-base-patch16). When I do PEFT fine-tuning, an additional tiny adapter checkpoint is created, and at inference time the final model size (base model checkpoint + tiny checkpoint) is larger than the base model alone. In that case the final model cannot be deployed on the edge device because of its limited memory. Is there a way to deploy only the tiny adapter checkpoint on the edge device for prediction? Or is there any other way to deploy a custom fine-tuned foundation model on the edge device?
with thanks