---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-3-47a0483fec99> in <module>
----> 1 compiler_module = ireec.tf_load_saved_model(saved_model_path)
      2 print("resnet v2 fp32 NCHW MLIR:", compiler_module.to_asm())

~/.cache/bazel/_bazel_.../fe8923d58b1cc237c81293bac6bc7c01/execroot/iree_core/bazel-out/k8-opt/bin/colab/everything_for_colab.runfiles/iree_core/integrations/tensorflow/bindings/python/pyiree/tf/compiler/__init__.py in tf_load_saved_model(saved_model_dir, compiler_context, exported_names, pass_pipeline)
    101   compiler_context = Context()
    102   input_module = binding.load_saved_model(
--> 103       compiler_context, saved_model_dir, exported_names=exported_names)
    104   if pass_pipeline:
    105     input_module.run_pass_pipeline(pass_pipeline)

RuntimeError: Failed to load saved model '~/saved_models/resnet_v2_fp32_savedmodel_NCHW':
Not found: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint
SavedModel checkpoint does not contain object graph.
--
You received this message because you are subscribed to the Google Groups "iree-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to iree-discuss...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/iree-discuss/43bce5f2-73f7-4f7f-b682-b857bb60cccc%40googlegroups.com.
The error you are getting indicates that you are attempting to load a V1 TensorFlow SavedModel, whereas at present IREE only supports V2 SavedModels (as produced by TensorFlow 2). Since we added that constraint, work has taken place upstream that may let us also load V1 SavedModels, but it has not been integrated into IREE (and is especially difficult in the OSS build, because it pulls in a very large dependency on a lot of TensorFlow kernels) -- we haven't yet decided whether to support it.

The official docs for using TF2 (vs TF1) are here: https://www.tensorflow.org/guide/migrate

There are also some documents regarding loading and resaving here: https://www.tensorflow.org/api_docs/python/tf/saved_model/load

Beyond the official docs, if you share either the code that produced the model or the model itself, I can work out the incantation for you. It can be a little tricky depending on what you have (and on the current constraint that we cannot accept dynamic dimensions).

However, to forestall you doing a ton of work and then getting stuck, I'll tell you that there are a couple of op lowerings missing for ResNet prediction. We have an internal bug tracking it, and the last I heard, we aren't too far away from supporting it.

+Mahesh Ravishankar
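To make the "V2 SavedModel with static shapes" requirement concrete, here is a minimal, self-contained sketch of exporting a TF2 SavedModel whose signature has only fixed dimensions. The module, shapes, and save path are all made up for illustration; they are not taken from this thread, and re-exporting the actual ResNet model would follow the same pattern with its real input shape.

```python
import tensorflow as tf

class TinyModule(tf.Module):
    """Toy stand-in for a real model, exported the TF2 way."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([4, 2]), name="w")

    # Every dimension in the TensorSpec is concrete (no None), since
    # IREE currently cannot accept dynamic dimensions.
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def predict(self, x):
        return tf.matmul(x, self.w)

# tf.saved_model.save on a tf.Module produces a V2 SavedModel,
# including the object graph the IREE importer expects.
save_dir = "/tmp/tiny_tf2_saved_model"  # hypothetical path
tf.saved_model.save(TinyModule(), save_dir)

# Reload to confirm the exported signature works.
reloaded = tf.saved_model.load(save_dir)
y = reloaded.predict(tf.ones([1, 4]))
print(y.numpy())  # [[4. 4.]]
```

A directory saved this way should load with `tf_load_saved_model` without the `_CHECKPOINTABLE_OBJECT_GRAPH` error, since that key is written by the V2 (object-based) checkpointing path.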
On Wed, Mar 4, 2020 at 3:35 PM Stella Laurenzo <laur...@google.com> wrote:
> [quoted message trimmed]

Yes, this is being worked on currently. I would be curious to know whether you are looking to run it on just the CPU or on an accelerator. We are currently working on supporting both, and we think that the outstanding work here, once completed, will enable both the CPU backend and the GPU backend. If you want to track the progress yourself, I can point you to the tracking bugs for these.
--
Mahesh