Hi, tf-compression team
Thanks for providing such a nice and convenient tool for deep image compression.
I appreciate the work that has gone into this project.
My issue is as follows:
I trained a HiFiC model with tuned hyperparameters (e.g., a slight increase in the target rate).
Now I want to use the model I trained, rather than the provided versions (hific-hi, hific-lo, etc.).
As far as I can tell, tfci.py does not provide any way to load a custom-trained model.
It seems to load models like hific-hi via a metagraph fetched from web storage that I cannot access.
Correct me if I am wrong.
I can use the source code (hific/evaluate.py) to check the quality of the reconstructed image and key statistics such as bpp and PSNR,
but I cannot find a way to compress a given image into a compressed (.tfci) file, nor a way to decompress that file back into a reconstructed image.
So I decided to modify the source code, and
I managed to separate out the encoder by saving bitstream_np into a .tfci file.
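Concretely, the packing step I ended up with looks roughly like this (just a sketch; bitstream_tensors and bitstream_np are my own names for the entropy-coded string tensors and their evaluated numpy values, and I am assuming the same pack(tensors, arrays) usage of tfc.PackedTensors that tfci.py follows in the TF1-based code):

```python
import tensorflow_compression as tfc

# bitstream_tensors: the string tensors produced by the entropy coder
# bitstream_np:      their values from sess.run(bitstream_tensors)
packed = tfc.PackedTensors()
packed.pack(bitstream_tensors, bitstream_np)  # serialize the bitstrings

with open("image.tfci", "wb") as f:
  f.write(packed.string)
```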
Then I loaded the packed tensors from the .tfci file, passed the first bitstring into tfc.conditional_entropy to obtain a decoder_in tensor,
passed decoder_in into the _compute_reconstruction method provided in models.py, and finally clipped and cast the result to get a reconstructed image.
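In code, the decode path I pieced together looks roughly like this (again only a sketch; bitstream_tensors and reconstruction_tensor are placeholder names for the tensors I rebuild on the decode side and for the output of _compute_reconstruction, and I restore my own checkpoint rather than a metagraph):

```python
import numpy as np
import tensorflow.compat.v1 as tf
import tensorflow_compression as tfc

# Read the packed bitstream back from the .tfci file.
with open("image.tfci", "rb") as f:
  packed = tfc.PackedTensors(f.read())

# bitstream_tensors: string tensors rebuilt in the decode graph that are
# supposed to correspond to what the encoder packed (this is the step I am
# unsure about, see below).
arrays = packed.unpack(bitstream_tensors)

with tf.Session() as sess:
  # Restore my trained weights, e.g. tf.train.Saver().restore(sess, ckpt_path).
  # reconstruction_tensor is the output of models._compute_reconstruction(...)
  # after the first bitstring has been entropy-decoded into decoder_in.
  reconstruction = sess.run(
      reconstruction_tensor,
      feed_dict=dict(zip(bitstream_tensors, arrays)))

# Clip and cast to get the final image.
image = np.clip(np.round(reconstruction), 0, 255).astype(np.uint8)
```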
However, I obtained a very blurry reconstruction, which is quite different from the reconstruction I get from the eval_trained_model function in hific/evaluate.py.
I suspect my unpacking step may be incorrect.
To use tfc's packed.unpack, I need a list of tensors that matches the bitstream information.
When I call packed.unpack([t for t in bitstream_tensors]), where bitstream_tensors are the exact tensors I used in the encoder, the image looks good;
but when I use a different set of bitstream_tensors, the reconstruction is of much lower quality.
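In other words, the difference comes down to which tensor list I hand to unpack (sketch only; encoder_bitstream_tensors are the exact tensors from the graph in which I ran the encoder, rebuilt_bitstream_tensors are the ones I create separately for decoding):

```python
# Looks good: unpack against the very same tensors the encoder graph produced.
arrays = packed.unpack([t for t in encoder_bitstream_tensors])

# Much lower quality: unpack against tensors rebuilt in a separate
# decode-only graph (same order and dtypes, as far as I can tell).
arrays = packed.unpack([t for t in rebuilt_bitstream_tensors])
```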
Can you point out what I am doing wrong here, or suggest a better way to decompress images with my own trained model?
Thanks for your support.