I have a pre-processing pipeline set up for my model using the new Datasets API, and it works great during training. I would also like to use this same pipeline during inference, but it seems difficult to wire this up with TensorFlow Serving. Specifically, I can't figure out how to get Serving to initialize the Iterator. I was able to partially retrofit it using the legacy_init_op parameter:
iterator = Iterator.from_structure(dataset.output_types, dataset.output_shapes)
next_example = iterator.get_next()
inference_init_op = iterator.make_initializer(dataset)
...
# need to add an op here to init the dataset iterator
legacy_init_op = self.iterator_init_op
with self.session as sess:
    _builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature},
        clear_devices=True,
        legacy_init_op=legacy_init_op)
However, at serving time I get this error because the iterator has not been initialized when the request arrives:
AbortionError: AbortionError(code=StatusCode.FAILED_PRECONDITION, details="GetNext() failed because the iterator has not been initialized. Ensure that you have run the initializer operation for this iterator before getting the next element.
[[Node: IteratorGetNext = IteratorGetNext[_output_shapes=[[?,224,224,3]], output_shapes=[[?,224,224,3]], output_types=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](Iterator)]]")
I did see something in master called `tf.contrib.data.get_single_element` which might be what I'm looking for, but it's not released yet.
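For context, here is a minimal sketch of why that op looks promising (I'm using the name `tf.data.experimental.get_single_element` from a newer build, and `preprocess` is a hypothetical stand-in for my actual pipeline): it pulls exactly one batch through a Dataset pipeline with no Iterator to create or initialize, which is exactly the serving case.

```python
import tensorflow as tf

def preprocess(image):
    # hypothetical stand-in for the training-time preprocessing pipeline
    return tf.cast(image, tf.float32) / 255.0

# A batch of images standing in for the tensors decoded from a serving request.
images = tf.zeros([2, 224, 224, 3], dtype=tf.uint8)

# Run the same Dataset pipeline used in training over this one batch.
dataset = tf.data.Dataset.from_tensors(images).map(preprocess)

# One element out -- no Iterator, so no initializer op for Serving to run.
batch = tf.data.experimental.get_single_element(dataset)
```

The resulting `batch` tensor could then feed the model directly, so the exported graph would never contain an IteratorGetNext node.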
What is the recommended pattern for using the Dataset API with TensorFlow Serving?