
Failed to export savedmodel #23

Open
FSet89 opened this issue Oct 18, 2018 · 8 comments

@FSet89 commented Oct 18, 2018

I tried to export the model following the official guide (SavedModel with Estimators). I tried both approaches:

# Approach 1: raw serving input receiver
features = {'x': tf.placeholder(tf.float32, [224, 224, 3], name="x")}
input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(features, 1)
exported_model_path = estimator.export_savedmodel(os.path.join(args.model_dir, 'exported'), input_fn)

# Approach 2: parsing serving input receiver (expects serialized tf.Example protos)
feature_spec = {'x': tf.FixedLenFeature([224, 224, 3], tf.float32)}
input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec, 1)
exported_model_path = estimator.export_savedmodel(args.model_dir, input_fn)

However I get the following error:

TypeError: Failed to convert object of type <type 'dict'> to Tensor. Contents: {'x': <tf.Tensor 'ParseExample/ParseExample:0' shape=(1, 224, 224, 3) dtype=float32>}. Consider casting elements to a supported type.
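The likely cause (consistent with the fix further down the thread): the serving input receiver hands model_fn a dict of features, while model_fn in this repo was written against a plain image tensor. A minimal sketch of the mismatch (shapes and names here are illustrative):

features = {'x': tf.placeholder(tf.float32, [1, 224, 224, 3])}  # what the receiver passes to model_fn
images = features                                # model_fn keeps the whole dict
images = tf.reshape(images, [-1, 224, 224, 3])   # fails: cannot convert a dict to a Tensor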

@omoindrot (Owner)

I'm not sure what the issue is; I don't think it is related to this project in particular.

You should ask on Stack Overflow.

@FSet89 (Author) commented Nov 9, 2018

Just in case someone needs it, I solved it by replacing

images = features

with

images = features['input']

in model_fn.py.
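In context, the change sits at the top of model_fn in model_fn.py (a sketch; the key 'input' must match the feature key used in the serving input receiver):

def model_fn(features, labels, mode, params):
    is_training = (mode == tf.estimator.ModeKeys.TRAIN)

    # The serving input receiver delivers features as a dict keyed by the
    # placeholder name, so unpack the image tensor before reshaping.
    images = features['input']
    images = tf.reshape(images, [-1, params.image_size, params.image_size, 3])
    ...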

@batrlatom commented Feb 25, 2019

@FSetragno Could you post some more details and the exact code snippets of your solution?
I am trying to export it too, but without any success. I changed the build_model code in model_fn to use MobileNet as the CNN:

def build_model(is_training, images, params):
    import tensorflow_hub as hub
    module = hub.Module("https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/feature_vector/2")
    tf_model = module(images)
    with tf.variable_scope('fc_1'):
        tf_model = tf.layers.dense(tf_model, params.embedding_size)
    return tf_model

And tried to write my own serving input_fn:

def serving_input_receiver_fn():
    feature_spec = {}
    feature_spec['features'] = tf.placeholder(tf.float32, shape=[224, 224, 3], name='features')
    serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(feature_spec)
    return serving_input_fn
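Two things in the snippet above differ from the working pattern FSet89 posts next: the placeholder shape has no batch dimension, and build_raw_serving_input_receiver_fn already returns the serving input function, so it should be passed to export_savedmodel directly rather than wrapped in another function. A corrected sketch (keeping the 'features' key from above):

features = {'features': tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name='features')}
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(features)
# Pass serving_input_fn itself (not serving_input_fn()) to export_savedmodel.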

@FSet89 (Author) commented Feb 25, 2019

This is the code I use to export the model. Please note that you have to modify model_fn.py as mentioned above.

estimator = tf.estimator.Estimator(model_fn, params=params, model_dir=args.model_dir)
features = {'input': tf.placeholder(tf.float32, shape=(1, 224, 224, 3), name="input")}
input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(features, 1)
exported_model_path = estimator.export_savedmodel(args.model_dir, input_fn)
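A quick way to sanity-check the export in TF 1.x is tf.contrib.predictor; the feature key 'input' matches the placeholder above, and the output key 'embeddings' comes from the predictions dict in model_fn (a sketch, with a dummy input):

import numpy as np
import tensorflow as tf

# export_savedmodel returns the export directory (bytes in Python 3; decode if needed)
predict_fn = tf.contrib.predictor.from_saved_model(exported_model_path)
image = np.zeros((1, 224, 224, 3), dtype=np.float32)  # dummy image batch
result = predict_fn({'input': image})
print(result['embeddings'].shape)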

@batrlatom commented Feb 25, 2019

I tried to modify the model_fn code by replacing the line with

images = features['input']

but I am getting an error:

File "/home/tom/Devel/AI/rclvenv/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 489, in _slice_helper
end.append(s + 1)
TypeError: must be str, not int

The code of model_fn itself is:


def model_fn(features, labels, mode, params):
    """Model function for tf.estimator

    Args:
        features: input batch of images
        labels: labels of the images
        mode: can be one of tf.estimator.ModeKeys.{TRAIN, EVAL, PREDICT}
        params: contains hyperparameters of the model (ex: `params.learning_rate`)

    Returns:
        model_spec: tf.estimator.EstimatorSpec object
    """
    is_training = (mode == tf.estimator.ModeKeys.TRAIN)

    images = features['input']

    images = tf.reshape(images, [-1, params.image_size, params.image_size, 3])
    assert images.shape[1:] == [params.image_size, params.image_size, 3], "{}".format(images.shape)

    # -----------------------------------------------------------
    # MODEL: define the layers of the model
    with tf.variable_scope('model'):
        # Compute the embeddings with the model
        embeddings = build_model(is_training, images, params)
        embeddings = tf.nn.l2_normalize(embeddings, axis=1)
    embedding_mean_norm = tf.reduce_mean(tf.norm(embeddings, axis=1))
    tf.summary.scalar("embedding_mean_norm", embedding_mean_norm)

    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {'embeddings': embeddings}
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    labels = tf.cast(labels, tf.int64)

    # Define triplet loss
    if params.triplet_strategy == "batch_all":
        loss, fraction = batch_all_triplet_loss(labels, embeddings, margin=params.margin,
                                                squared=params.squared)
    elif params.triplet_strategy == "batch_hard":
        loss = batch_hard_triplet_loss(labels, embeddings, margin=params.margin,
                                       squared=params.squared)
    else:
        raise ValueError("Triplet strategy not recognized: {}".format(params.triplet_strategy))

    # -----------------------------------------------------------
    # METRICS AND SUMMARIES
    # Metrics for evaluation using tf.metrics (average over whole dataset)
    # TODO: some other metrics like rank-1 accuracy?
    with tf.variable_scope("metrics"):
        eval_metric_ops = {"embedding_mean_norm": tf.metrics.mean(embedding_mean_norm)}

        if params.triplet_strategy == "batch_all":
            eval_metric_ops['fraction_positive_triplets'] = tf.metrics.mean(fraction)

    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=eval_metric_ops)


    # Summaries for training
    tf.summary.scalar('loss', loss)
    if params.triplet_strategy == "batch_all":
        tf.summary.scalar('fraction_positive_triplets', fraction)

    tf.summary.image('train_image', images, max_outputs=1)

    # Define training step that minimizes the loss with the Adam optimizer
    optimizer = tf.train.AdamOptimizer(params.learning_rate)
    global_step = tf.train.get_global_step()
    if params.use_batch_norm:
        # Add a dependency to update the moving mean and variance for batch normalization
        with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
            train_op = optimizer.minimize(loss, global_step=global_step)
    else:
        train_op = optimizer.minimize(loss, global_step=global_step)

    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
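The traceback above is consistent with features arriving as a plain Tensor at training time: string-indexing a Tensor goes through array_ops._slice_helper, hence the str/int TypeError. One way to keep a single model_fn for both training and serving is to branch on the input type (a sketch, assuming the serving receiver keys the image under 'input'):

    # Training/eval input_fn feeds a raw image tensor; the serving receiver
    # feeds a dict keyed by the placeholder name. Accept both.
    if isinstance(features, dict):
        images = features['input']
    else:
        images = features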



@batrlatom

I also tried to replace

images = features['input']

with

images = features.get('input')

which looks like it works, but introduces a problem with the export outputs:

ValueError: export_outputs must be a dict and not<class 'NoneType'>

Did you encounter this problem?

@batrlatom commented Feb 25, 2019

Could you please also post the full model_fn here? I really need to solve this.

@batrlatom

OK, so I was able to manage it this way. Note that your solution is for Python 2:


if mode == tf.estimator.ModeKeys.PREDICT:
    predictions = {'embeddings': embeddings}
    export_outputs = {'embeddings': tf.estimator.export.PredictOutput(predictions['embeddings'])}

    # Debug: list all ops in the graph to check the exported tensor names
    for op in tf.get_default_graph().get_operations():
        print(op.name)

    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions, export_outputs=export_outputs)
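One detail to hedge: when PredictOutput is given a bare Tensor, some TF 1.x versions register it under a generic default output name rather than 'embeddings'. Passing a dict keeps the output name explicit, and using the standard default signature key avoids relying on the single-entry fallback (a sketch):

export_outputs = {
    tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
        tf.estimator.export.PredictOutput({'embeddings': embeddings})
}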

