Hi, I'm an iOS app developer who is interested in ML.

Firstly, thank you for sharing your awesome vision modeling project.

I want to support imgclsmob's pre-trained models in my repository, which provides baseline code for running inference with various ML models on iOS devices. I converted all of imgclsmob's pre-trained pose estimation models to TFLite with a (1, 224, 224, 3) input shape. But I think each model has an optimized input shape that was used during training. Is there a recommended input shape for each model?

Here is the issue I'm working on:
tucan9389/PoseEstimation-TFLiteSwift#59

Thank you.

---

Nice work!

To answer your question, see the source code of each model, in particular the default value of the in_size argument. In most cases it's (1, 256, 192, 3).
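To illustrate why feeding the training-time input shape matters, here is a minimal, dependency-free sketch of the kind of preprocessing step that adapts an arbitrary camera frame to a fixed model input such as (256, 192). The `letterbox_params` helper is hypothetical, not part of imgclsmob or the TFLite repo; it just computes the aspect-preserving scale and padding:

```python
def letterbox_params(img_h, img_w, in_h=256, in_w=192):
    """Hypothetical helper: compute the scale factor and padding needed to
    fit an (img_h, img_w) image into a fixed (in_h, in_w) model input
    while preserving aspect ratio (letterboxing)."""
    # Scale so the image fits entirely inside the model input.
    scale = min(in_h / img_h, in_w / img_w)
    new_h, new_w = round(img_h * scale), round(img_w * scale)
    # Center the resized image; the remainder is padding.
    pad_top = (in_h - new_h) // 2
    pad_left = (in_w - new_w) // 2
    return scale, (new_h, new_w), (pad_top, pad_left)

# A 640x480 frame fed into a (256, 192) pose model:
print(letterbox_params(480, 640))  # -> (0.3, (144, 192), (56, 0))
```

The same scale and padding would then be inverted on the model's heatmap output to map predicted keypoints back to original image coordinates.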