Models converted from saved_model to tfjs_graph_model lose output signature information (#3942)
@vladmandic Thank you for reporting this issue. There is no good way to track the original signature to the optimized nodes. Have you tried the tfhub model conversion? I think the signature will be preserved that way.
Currently less than half of the models posted on https://tfhub.dev/ have JS links (e.g. none of EfficientNet or EfficientDet or anything trained on OpenImages or ...), but they all have links to a saved model. Or is there a way to trigger TFHub model conversion that I'm missing?
Yes, you can convert the TFHub module directly using the converter:

```shell
tensorflowjs_converter \
    --input_format=tf_hub \
    'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1' \
    /mobilenet/web_model
```
@pyu10055 I always considered that the same as downloading
```js
{
  'Identity_5:0': { name: 'Identity_5:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, [length]: 1 ] } },
  'Identity_7:0': { name: 'Identity_7:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '49104' }, { size: '90' }, [length]: 3 ] } },
  'Identity_2:0': { name: 'Identity_2:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
  'Identity_6:0': { name: 'Identity_6:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '49104' }, { size: '4' }, [length]: 3 ] } },
  'Identity:0': { name: 'Identity:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
  'Identity_4:0': { name: 'Identity_4:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
  'Identity_1:0': { name: 'Identity_1:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, { size: '4' }, [length]: 3 ] } },
  'Identity_3:0': { name: 'Identity_3:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, { size: '90' }, [length]: 3 ] } }
}
```
@vladmandic Thank you for trying, we will try to find a way that allows us to track the original signature name to the graph nodes after optimization. Will post any findings here.
No need to track them during optimization; just re-map them correctly as a last step before writing. So in this example,
example outputs from the saved_model signature:

```js
{
  detection_anchor_indices: { dtype: 'float32', name: 'StatefulPartitionedCall:0', shape: [Array] },
  detection_boxes: { dtype: 'float32', name: 'StatefulPartitionedCall:1', shape: [Array] },
  detection_classes: { dtype: 'float32', name: 'StatefulPartitionedCall:2', shape: [Array] },
  detection_multiclass_scores: { dtype: 'float32', name: 'StatefulPartitionedCall:3', shape: [Array] },
  detection_scores: { dtype: 'float32', name: 'StatefulPartitionedCall:4', shape: [Array] },
  num_detections: { dtype: 'float32', name: 'StatefulPartitionedCall:5', shape: [Array] },
  raw_detection_boxes: { dtype: 'float32', name: 'StatefulPartitionedCall:6', shape: [Array] },
  raw_detection_scores: { dtype: 'float32', name: 'StatefulPartitionedCall:7', shape: [Array] }
}
```

example outputs from the converted model:

```js
{
  'Identity_6:0': { name: 'Identity_6:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '49104' }, { size: '4' }, [length]: 3 ] } },
  'Identity_1:0': { name: 'Identity_1:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, { size: '4' }, [length]: 3 ] } },
  'Identity_3:0': { name: 'Identity_3:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, { size: '90' }, [length]: 3 ] } },
  'Identity_2:0': { name: 'Identity_2:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
  'Identity_5:0': { name: 'Identity_5:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, [length]: 1 ] } },
  'Identity_7:0': { name: 'Identity_7:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '49104' }, { size: '90' }, [length]: 3 ] } },
  'Identity_4:0': { name: 'Identity_4:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
  'Identity:0': { name: 'Identity:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } }
}
```
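The proposed re-mapping step could be sketched roughly as below. This is a hypothetical post-processing pass, not converter code: it assumes the `Identity_<i>` suffix in the converted output names preserves the output index from `StatefulPartitionedCall:<i>`, which the converter does not guarantee, and the dictionaries are abbreviated from the outputs above.

```python
import json

# Abbreviated saved_model signature: logical name -> original tensor name.
saved_signature = {
    "detection_boxes": "StatefulPartitionedCall:1",
    "detection_scores": "StatefulPartitionedCall:4",
}

# Abbreviated converted outputs, keyed by optimized tensor name.
converted_outputs = {
    "Identity_1:0": {"name": "Identity_1:0", "dtype": "DT_FLOAT"},
    "Identity_4:0": {"name": "Identity_4:0", "dtype": "DT_FLOAT"},
}

def remap_outputs(saved_sig, converted):
    """Rename Identity_<i>:0 keys back to logical names via the output index."""
    by_index = {}
    for logical, tensor in saved_sig.items():
        idx = int(tensor.split(":")[1])       # "StatefulPartitionedCall:<i>" -> i
        by_index[idx] = logical
    remapped = {}
    for key, spec in converted.items():
        stem = key.split(":")[0]              # "Identity" or "Identity_<i>"
        idx = int(stem.split("_")[1]) if "_" in stem else 0
        remapped[by_index.get(idx, key)] = spec
    return remapped

print(json.dumps(remap_outputs(saved_signature, converted_outputs), indent=2))
```

If the index correspondence does not survive graph optimization, the mapping would instead have to fall back on shape matching, which is ambiguous when two outputs share a shape.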
@vladmandic @pyu10055 any comments on this conversion for inputs? See issue #4861.
@rohanmuplara Yup, it's the same issue, and it's a really annoying one. I work around it by manually editing the resulting
@vladmandic Can you describe a little more what the executor versus signature stuff is? Also, can you explain how the code above edits the script in a way that is helpful? To me, it seems you are just iterating the input and outputting it back out.
The signature is the signature section of model.json, but some models do not contain a properly filled-out signature section, so the next option is to look at actual model execution via the executor. Yes, it's just iterating and outputting, but it allows matching of saved vs. graph model input/output nodes.
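The "iterate and output" idea could look something like this minimal sketch: read a converted model.json, list inputs from the signature section when it is filled out, and fall back to `Placeholder` nodes in the topology when it is not. The structure mirrors the model.json excerpts in this thread; the embedded example model is illustrative.

```python
import json

def list_model_io(model):
    """Return human-readable input/output descriptions from a model.json dict."""
    lines = []
    sig = model.get("signature") or {}
    inputs = sig.get("inputs") or {}
    if inputs:
        for name, spec in inputs.items():
            dims = [d["size"] for d in spec.get("tensorShape", {}).get("dim", [])]
            lines.append(f"input  {name} {spec.get('dtype')} {dims}")
    else:
        # Signature missing or empty: inspect the graph itself.
        for node in model.get("modelTopology", {}).get("node", []):
            if node.get("op") == "Placeholder":
                lines.append(f"input  {node['name']}")
    for name in sig.get("outputs") or {}:
        lines.append(f"output {name}")
    return lines

# Illustrative abbreviated model.json.
model = json.loads("""{
  "signature": {"inputs": {"input_1:0": {"name": "input_1:0", "dtype": "DT_FLOAT",
    "tensorShape": {"dim": [{"size": "1"}, {"size": "416"}]}}}},
  "modelTopology": {"node": [{"name": "input_1", "op": "Placeholder"}]}
}""")
print(list_model_io(model))
```

Running the same listing against the saved model's signature lets the two name sets be lined up by hand.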
Thanks, makes sense. So you are saying you are going to manually copy the saved_model signature from the saved JSON files so the order stays the same. Another issue I had: the signature input names are different from the names defined in the code, but the model still works when I pass in the old names. https://share.descript.com/view/OX82xb6lY7q
@rohanmuplara That's exactly why the signature section exists: so tensor names can be associated with logical names without changing the model.
I have two follow-up questions.
1. I only see these signature names in the input.json, but when I call predict it doesn't seem to work with the signature names; it only works with the original names used when defining the model initially. I don't even see them in the json file. https://share.descript.com/view/0yeqwBdDSKk
2. When we define the signature functions, how do we match them up to the corresponding ones in the input? I.e., if two input tensors have the same shape, how do you know which signature refers to which?
This is my understanding only, as there are no good docs :(
See this example:

```json
{
  "signature": {
    "inputs": { "input_1:0": { "name": "input_1:0", "dtype": "DT_FLOAT", "tensorShape": { "dim": [ { "size": "1" }, { "size": "3" }, { "size": "416" }, { "size": "416" } ] } } }
  },
  "modelTopology": {
    "node": [
      { "name": "input_1", "op": "Placeholder", "attr": { "shape": { "shape": { "dim": [ { "size": "1" }, { "size": "3" }, { "size": "416" }, { "size": "416" } ] } }, "dtype": { "type": "DT_FLOAT" } } }
    ]
  }
}
```

I really wish the TFJS team fixed this.
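The excerpt just shown illustrates the convention a matching script can rely on: the signature key carries a `:0` output-slot suffix (`input_1:0`) while the topology node name does not (`input_1`). A minimal sketch of that matching, assuming the suffix convention holds for the model at hand (the sample data is abbreviated from the excerpt):

```python
def match_signature_to_placeholders(signature_inputs, nodes):
    """Pair each signature input key with its Placeholder node by base name."""
    placeholders = {n["name"]: n for n in nodes if n.get("op") == "Placeholder"}
    matches = {}
    for key in signature_inputs:
        base = key.split(":")[0]  # "input_1:0" -> "input_1"
        if base in placeholders:
            matches[key] = placeholders[base]
    return matches

# Abbreviated data from the model.json example above.
signature_inputs = {"input_1:0": {"dtype": "DT_FLOAT"}}
nodes = [{"name": "input_1", "op": "Placeholder"}, {"name": "conv1", "op": "Conv2D"}]
print(match_signature_to_placeholders(signature_inputs, nodes))
```

When two inputs share a shape, this name-based pairing is the only unambiguous link, which is exactly why a missing or empty signature section hurts.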
Makes sense. Thanks so much.
Hey @vladmandic, per #4861 there is a great suggestion by the tfjs team: if you add names to input layers, it seems to work. I also think most of these problems are actually on the regular TensorFlow serializer side.
@rohanmuplara Good stuff, but it doesn't help when I'm converting a pretrained model.
Models converted from saved_model to tfjs_graph_model lose output signature information. This is not specific to any single model; it's a generic converter issue.
a) Saved model from https://tfhub.dev/tensorflow/efficientdet/d0/1?tf-hub-format=compressed
b) Same model converted to a TFJS graph model using:

```shell
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model . graph
```
This is cosmetic as all outputs are still present, but makes converted models extremely difficult to use.
Environment: Ubuntu 20.04 with NodeJS 14.11.0 and TFJS 2.4.0