combining models and tflite export #84
-
Hi @leondgarse , I have EfficientDetLite4 and MobileViT models trained on similar but not identical datasets. The MobileViT model achieved more than 94% accuracy. Could you show an example of how to replace the classification head with the trained MobileViT model and export the combined model to tflite? The MobileViT model will need the original input image, not the preprocessed one. `max_output_size`, `iou_or_sigma` and `score_threshold` for bbox predictions can be hardcoded. The intention is to use the combined model in tfjs. If #82 is also implemented, it would be fantastic. Thanks
-
Just fixed:

```py
import tensorflow as tf
from keras_cv_attention_models import efficientdet, mobilevit, model_surgery

bb = mobilevit.MobileViT_V2_050(input_shape=(512, 512, 3), num_classes=0, pretrained=None)
mm = efficientdet.EfficientDetLite4(input_shape=(512, 512, 3), backbone=bb, pretrained=None)
# >>>> features: {'stack3_block3_output': (None, 64, 64, 128), 'stack4_block5_output': (None, 32, 32, 192), 'stack5_block4_output': (None, 16, 16, 256)}

converter = tf.lite.TFLiteConverter.from_keras_model(mm)
open(mm.name + ".tflite", "wb").write(converter.convert())
# 19078400
```

Converting to tflite is not hard; the hard part lies in writing your custom prediction decoder, which is also an issue for #82. Previously I've tried making …
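For reference, a minimal static-shape decoder sketch built on `tf.image.combined_non_max_suppression`, with `max_output_size`, `iou_threshold` and `score_threshold` hardcoded as the question suggests. This is not the library's decoder: the anchor-to-box conversion is left out, and `decode_predictions_static` is a placeholder name.

```py
import tensorflow as tf

def decode_predictions_static(boxes, class_scores, max_output_size=100,
                              iou_threshold=0.5, score_threshold=0.3):
    """Hypothetical decoder sketch, not kecam's DecodePredictions.

    boxes: [batch, num_anchors, 4], already converted to corner format.
    class_scores: [batch, num_anchors, num_classes].
    Returns outputs padded to a fixed max_output_size, which is what TFLite
    needs instead of a dynamic second dimension.
    """
    boxes = tf.expand_dims(boxes, 2)  # [batch, num_anchors, 1, 4], shared across classes
    nmsed_boxes, nmsed_scores, nmsed_classes, _ = tf.image.combined_non_max_suppression(
        boxes,
        class_scores,
        max_output_size_per_class=max_output_size,
        max_total_size=max_output_size,
        iou_threshold=iou_threshold,
        score_threshold=score_threshold,
    )
    # Pack as [batch, max_output_size, 6]: [y1, x1, y2, x2, score, class]
    return tf.concat(
        [nmsed_boxes, nmsed_scores[..., None], tf.cast(nmsed_classes, "float32")[..., None]],
        axis=-1,
    )
```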
-
This project has DecodePredictions as a layer, though I'm not sure it's exportable to tflite: https://github.com/Burf/TFDetection/blob/main/tfdet/model/postprocess/__init__.py
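In the same spirit, a hedged sketch of wrapping a decoder in a Keras layer so it exports together with the model. `DecodeLayer` is a placeholder, not TFDetection's or kecam's class, and it assumes the `decode_predictions_static` sketch from the previous reply is in scope; NMS ops may also need Select TF ops enabled in the converter:

```py
import tensorflow as tf

class DecodeLayer(tf.keras.layers.Layer):
    """Hypothetical layer wrapping the decode_predictions_static sketch above,
    so the decode step is part of the exported graph."""

    def __init__(self, max_output_size=100, iou_threshold=0.5, score_threshold=0.3, **kwargs):
        super().__init__(**kwargs)
        self.max_output_size = max_output_size
        self.iou_threshold = iou_threshold
        self.score_threshold = score_threshold

    def call(self, inputs):
        boxes, class_scores = inputs
        return decode_predictions_static(
            boxes, class_scores,
            max_output_size=self.max_output_size,
            iou_threshold=self.iou_threshold,
            score_threshold=self.score_threshold,
        )

# Quick exportability check on placeholder inputs.
boxes_in = tf.keras.Input([None, 4])
scores_in = tf.keras.Input([None, 90])
test_model = tf.keras.Model([boxes_in, scores_in], DecodeLayer()([boxes_in, scores_in]))
converter = tf.lite.TFLiteConverter.from_keras_model(test_model)
# The NMS op may only convert with TF Select ops enabled:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_bytes = converter.convert()
```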
-
`DecodePredictions` as a layer is supported. Usage has been updated in Readme#tflite-conversion. Some tests can also be found in the colab kecam_test.ipynb, including saving a model as `saved_model` and TFLite conversion usage. The key parameter is `use_static_output=True`, which fixes the layer output shape as `[batch, max_output_size, 6]`, as TFLite seems not to support a dynamic shape on the second dimension. `score_threshold` / `iou_or_sigma` / `max_output_size` are still customizable while using `model.decode_predictions(...)` directly, and when converting to TFLite they can be set to new values, like the `score_threshold` usage in Readme#tflite-conversion. But after converting, I'm not sure how to set the…
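A hedged end-to-end sketch of the flow described above. The parameter names are taken from this thread, but the exact way `model.decode_predictions` is attached to the graph is an assumption here; treat Readme#tflite-conversion as the canonical usage:

```py
import tensorflow as tf
from keras_cv_attention_models import efficientdet

mm = efficientdet.EfficientDetLite4(pretrained="coco")

# Assumed wiring: append the decode step with a static output shape
# [batch, max_output_size, 6]; the call signature of model.decode_predictions
# below is an assumption based on this thread, not confirmed API.
outputs = mm.decode_predictions(
    mm.outputs[0],
    score_threshold=0.5,   # hardcoded at conversion time
    iou_or_sigma=0.5,
    max_output_size=100,
    use_static_output=True,
)
combined = tf.keras.models.Model(mm.inputs[0], outputs)

converter = tf.lite.TFLiteConverter.from_keras_model(combined)
open("efficientdet_lite4_decoded.tflite", "wb").write(converter.convert())
```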