
Neural Image Compression


Justin Tan

Sep 21, 2020, 12:29:50 AM
Hi,

I'd like to share a side project I worked on which generalizes transform coding to the nonlinear case. Here the transforms are represented by neural networks, which learn the appropriate form of the transform. The result of the transform is then quantized and compressed using standard entropy coding.
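In rough outline, the pipeline looks like the PyTorch sketch below. This is only a minimal illustration of nonlinear transform coding, not the actual HiFiC model from the repo; the Encoder/Decoder architectures and the 100-line training snippet are placeholders.

# Minimal sketch of nonlinear transform coding (illustrative, not the HiFiC code).
import torch
import torch.nn as nn

class Encoder(nn.Module):  # learned analysis transform
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):  # learned synthesis transform
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2,
                               padding=2, output_padding=1),
        )
    def forward(self, y):
        return self.net(y)

def quantize(y, training):
    # During training, rounding is approximated with additive uniform noise so
    # gradients can flow; at test time the latents are actually rounded and then
    # passed to a standard entropy coder (e.g. range/arithmetic coding).
    if training:
        return y + torch.empty_like(y).uniform_(-0.5, 0.5)
    return torch.round(y)

enc, dec = Encoder(), Decoder()
x = torch.rand(1, 3, 64, 64)                 # dummy image batch
y_hat = quantize(enc(x), training=True)      # quantized latent representation
x_hat = dec(y_hat)                           # reconstruction
distortion = nn.functional.mse_loss(x_hat, x)
# The real training objective is rate + lambda * distortion, where the rate term
# estimates the bits needed to entropy-code y_hat under a learned prior.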

Github: https://github.com/Justin-Tan/high-fidelity-generative-compression
Interactive Demo: https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb

There are some obvious shortcomings to this method: it only caters for image data, it cannot be adjusted to attain a variable bitrate short of training a different model, and it is unrealistically slow for practical applications.

I'm not a traditional compression expert, so I would appreciate any insight from those who are about the deficiencies of this method. Note that this is not my original idea; it is a reimplementation.

Stephen Wolstenholme

Sep 21, 2020, 10:45:55 AM
EasyNN has image mode built in. I don't know how well it compresses
images because the person who tested and validated image encoding has
retired. I wrote the code a long time ago but I forget how it works.
I'm getting old!

Steve

--
http://www.npsnn.com

Eli the Bearded

Sep 21, 2020, 1:50:34 PM
In comp.compression, Justin Tan <justi...@coepp.org.au> wrote:
> I'd like to share a side project I worked on which generalizes transform
> coding to the nonlinear case. Here the transforms are represented by
> neural networks, which learn the appropriate form of the transform. The
> result of the transform is then quantized and compressed using standard
> entropy coding.
>
> Github: https://github.com/Justin-Tan/high-fidelity-generative-compression
> Interactive Demo:
> https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb
>
> There are some obvious shortcomings to this method: it only caters for
> image data, it cannot be adjusted to attain a variable bitrate short of
> training a different model, and it is unrealistically slow for practical
> applications.

Also:

Clone repo and grab the model checkpoint (around 2 GB).

If you need 2GB of data around to compress / decompress images, you need
a lot of images before this starts "winning".
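A back-of-the-envelope calculation makes the break-even point concrete. The per-image saving used here is an assumed number, not a measurement from the repo:

# Break-even estimate for shipping a ~2 GB checkpoint (illustrative figures only).
checkpoint_bytes = 2 * 1024**3      # ~2 GB model checkpoint you have to keep around
saving_per_image = 100 * 1024       # assume ~100 KB saved per image vs. a JPEG baseline
break_even = checkpoint_bytes / saving_per_image
print(f"Need roughly {break_even:,.0f} images before the checkpoint pays for itself")
# -> Need roughly 20,972 images before the checkpoint pays for itself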

> I'm not a traditional compression expert, so I would appreciate any
> insight from those who are about the deficiencies of this method. Note
> that this is not my original idea; it is a reimplementation.

I'm no expert in compression, I just read this group for the occasional
insight.

Elijah
------
particularly interested in image compression