regarding animation...


vlasta

unread,
Mar 26, 2013, 6:31:32 AM
to webp-d...@webmproject.org
Hi all,

I am playing with the experimental 0.3.0 rc6 and trying to export webp animations from my editor.

I have two questions:
1. What should happen in the decoder when WEBP_MUX_DISPOSE_NONE is used? Shall the pixels from the upcoming frame be alpha-blended over the current frame, or shall the rectangle be simply replaced? The spec does not say. If alpha-blending is the way, what about color profiles? Shall the source and destination color values be linearized before alpha-blending them? Or should we just fake it and do the often used (but wrong) linear blend of non-linear (for example sRGB) values? Also, if alpha-blending is the way, be aware that it is impossible to make part of the image transparent unless the whole frame is sent with WEBP_MUX_DISPOSE_BACKGROUND and a transparent background color is used in the animation header - this can prevent optimizations of semitransparent animations. If the matter is not decided, I would vote for simple rectangle replacement instead of alpha-blending, due to all the difficulties and ambiguities stated above.
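To illustrate the ambiguity: blending the same two sRGB values with and without linearization gives visibly different results. A Python sketch (the 2.2 power is only an approximation of the true piecewise sRGB transfer curve, and the function names are made up for illustration):

```python
def srgb_to_linear(c):
    """Decode a non-linear sRGB channel value in [0, 1] to linear
    light, approximating the sRGB curve with a plain 2.2 gamma."""
    return c ** 2.2

def linear_to_srgb(c):
    """Re-encode a linear channel value back to non-linear sRGB."""
    return c ** (1 / 2.2)

def blend_naive(dst, src, alpha):
    """Blend directly on the non-linear values -- common but wrong."""
    return src * alpha + dst * (1 - alpha)

def blend_linear(dst, src, alpha):
    """Linearize, blend, then re-encode -- physically correct."""
    lin = srgb_to_linear(src) * alpha + srgb_to_linear(dst) * (1 - alpha)
    return linear_to_srgb(lin)

# 50% blend of black over white: the two approaches disagree noticeably
# (0.5 vs roughly 0.73), so the spec has to pick one.
naive = blend_naive(1.0, 0.0, 0.5)
correct = blend_linear(1.0, 0.0, 0.5)
```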

Also, I would like to express my concern regarding animation optimization and the lossy format. It seems like the unpredictable accumulation of errors from lossy encoding could lead to unexpected results or a very complicated optimizer if alpha-blending is used with WEBP_MUX_DISPOSE_NONE.

2. This one is easier. If I use mux to write a non-animated image without any additional chunks, will the result be a plain old webp image (the one produced without mux) or not?

Thanks for your answers,
V.

Urvang Joshi

unread,
Mar 27, 2013, 2:08:10 AM
to webp-d...@webmproject.org
Hi Vlastimil,
Thanks for the feedback!

On Tue, Mar 26, 2013 at 3:31 AM, vlasta <vlastim...@gmail.com> wrote:
Hi all,

I am playing with the experimental 0.3.0 rc6 and trying to export webp animations from my editor.

I have two questions:
1. What should happen in the decoder when WEBP_MUX_DISPOSE_NONE is used? Shall the pixels from the upcoming frame be alpha-blended over the current frame, or shall the rectangle be simply replaced? The spec does not say. If alpha-blending is the way, what about color profiles? Shall the source and destination color values be linearized before alpha-blending them? Or should we just fake it and do the often used (but wrong) linear blend of non-linear (for example sRGB) values? Also, if alpha-blending is the way, be aware that it is impossible to make part of the image transparent unless the whole frame is sent with WEBP_MUX_DISPOSE_BACKGROUND and a transparent background color is used in the animation header - this can prevent optimizations of semitransparent animations. If the matter is not decided, I would vote for simple rectangle replacement instead of alpha-blending, due to all the difficulties and ambiguities stated above.

All excellent questions! The spec definitely needs more clarity on disposal method.
Instead of directly giving you the answers, let me point you to this proposed change: https://gerrit.chromium.org/gerrit/#/c/46611/1/doc/webp-container-spec.txt
Is the spec more clear now? Does it answer your questions?
Let me know.
 

Also, I would like to express my concern regarding animation optimization and the lossy format. It seems like the unpredictable accumulation of errors from lossy encoding could lead to unexpected results or a very complicated optimizer if alpha-blending is used with WEBP_MUX_DISPOSE_NONE.

I can understand the concern. That is one of the reasons why the gif2webp tool also uses lossless compression by default.
At the same time, it would be best for the format to be open to all options. Then one can choose the right encoding (lossy/lossless) as per the use-case.
 

2. This one is easier. If I use mux to write a non-animated image without any additional chunks, will the result be a plain old webp image (the one produced without mux) or not?

Yes, that's correct. The mux API and webpmux binary are implemented such that they don't write any unnecessary chunks.
 

Thanks for your answers,
V.


Vlastimil Miléř

unread,
Mar 27, 2013, 7:44:40 PM
to webp-d...@webmproject.org
Hi Urvang,

thanks for your answer. However, I still find the spec quite
confusing. For example: "Alpha-blending only applies to the _common
area_ of the two frames." What is the common area? A frame always
actually covers the whole canvas, doesn't it? The fact that only a
small piece of it changed in the previous step does not seem relevant.
Let me try rephrasing this part of the spec, and then please tell me
if that was what you meant or not:

----------

Disposal method (D): 1 bit

: Indicates how the _current frame_ is to be treated after it has been
displayed (before rendering
the _next frame_) on the canvas (screen or memory buffer):

* `0`: Do not dispose. Leave the canvas as is.

* `1`: Dispose to background color. Fill the rectangle on the canvas covered
by the _current frame_ with the background color specified in the
[ANIM chunk](#anim_chunk).

After disposing the _current frame_, render the _next frame_ on the canvas
using [Alpha-blending](#alpha-blending). If the _next frame_ does not have an
alpha channel, assume alpha=255 and effectively replace the rectangle.

**Notes**:
* The frame rectangle is defined by _Frame X_, _Frame Y_,
_frame width_ and _frame height_ in its ANMF chunk.

* Alpha-blending shall be done in linear color space(?),
[color profile](#color-profile) should be taken into account.
If color profile is not present, sRGB is to be assumed.
(even sRGB needs linearizing due to the gamma of ~2.2)

----------
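Put as code, the dispose-then-render sequence described above would look roughly like this (a hypothetical Python sketch; straight-alpha blend on non-linear values for brevity, setting aside the color-profile note; frame fields and the canvas layout are invented for illustration):

```python
# Disposal methods as described in the proposed wording above.
DISPOSE_NONE, DISPOSE_BACKGROUND = 0, 1

def dispose(canvas, frame, background):
    """Apply the current frame's disposal method to its rectangle.
    DISPOSE_NONE leaves the canvas as is; DISPOSE_BACKGROUND fills
    the frame rectangle with the ANIM background color."""
    if frame["dispose"] == DISPOSE_BACKGROUND:
        for y in range(frame["y"], frame["y"] + frame["height"]):
            for x in range(frame["x"], frame["x"] + frame["width"]):
                canvas[(x, y)] = background

def render(canvas, frame):
    """Alpha-blend the next frame's rectangle onto the canvas.
    With alpha=255 throughout, this degenerates to plain rectangle
    replacement, matching the no-alpha-channel case in the text."""
    for y in range(frame["y"], frame["y"] + frame["height"]):
        for x in range(frame["x"], frame["x"] + frame["width"]):
            sr, sg, sb, sa = frame["pixels"][(x, y)]
            dr, dg, db, da = canvas.get((x, y), (0, 0, 0, 0))
            a = sa / 255.0
            canvas[(x, y)] = (
                round(sr * a + dr * (1 - a)),
                round(sg * a + dg * (1 - a)),
                round(sb * a + db * (1 - a)),
                round(sa + da * (1 - a)),
            )
```

A decoder loop would then simply call `dispose()` for the current frame followed by `render()` for the next one, repeating per frame.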

This is relatively close to what GIF has, and I assume that was your goal.
However, it shares one of GIF's shortcomings, and that is the difficulty of
making regions transparent in an animation - in the case of GIFs, it involves
using the Background disposal method and artificially increasing the size of
the previous frame to cover the new transparent pixels in the current frame,
just to be able to make them transparent - a very inefficient approach. It
would be very handy to have a second flag that would control whether to use
alpha blending or not, like APNG has. I would also consider adding the 3rd
(Previous) disposal method of GIF and APNG. It is handy for animations where
something small moves over a static background.

Of course, having a separate 1-bit "change mask" instead of misusing the
alpha channel for optimization would be best of all. Then you could drop
the alpha-blending stuff altogether - no editor is going to use it anyway,
because it is too difficult.

V.

Urvang Joshi

unread,
Mar 28, 2013, 10:32:38 PM
to webp-d...@webmproject.org
Hi Vlastimil,


On Wed, Mar 27, 2013 at 4:44 PM, Vlastimil Miléř <vlastim...@gmail.com> wrote:
Hi Urvang,

thanks for your answer. However, I still find the spec quite
confusing. For example: "Alpha-blending only applies to the _common
area_ of the two frames." What is the common area? A frame always
actually covers the whole canvas, doesn't it? The fact that only a
small piece of it changed in the previous step does not seem relevant.

By 'frame', I meant the 'frame rectangle' as you rightly understood.
 
Let me try rephrasing this part of the spec, and then please tell me
if that was what you meant or not:

----------

Disposal method (D): 1 bit

: Indicates how the _current frame_ is to be treated after it has been
displayed (before rendering
  the _next frame_) on the canvas (screen or memory buffer):

  * `0`: Do not dispose. Leave the canvas as is.

  * `1`: Dispose to background color. Fill the rectangle on the canvas covered
    by the _current frame_ with the background color specified in the
    [ANIM chunk](#anim_chunk).

After disposing the _current frame_, render the _next frame_ on the canvas
using [Alpha-blending](#alpha-blending). If the _next frame_ does not have an
alpha channel, assume alpha=255 and effectively replace the rectangle.

**Notes**:
  * The frame rectangle is defined by _Frame X_, _Frame Y_,
     _frame width_ and _frame height_ in its ANMF chunk.

  * Alpha-blending shall be done in linear color space(?),
    [color profile](#color-profile) should be taken into account.
    If color profile is not present, sRGB is to be assumed.
    (even sRGB needs linearizing due to the gamma of ~2.2)

----------

Yes, your understanding is correct. I reworded it in the change as per your suggestion:

Thanks! 


This is relatively close to what GIF has, and I assume that was your goal.
However, it shares one of GIF's shortcomings, and that is the difficulty of
making regions transparent in an animation - in the case of GIFs, it involves
using the Background disposal method and artificially increasing the size of
the previous frame to cover the new transparent pixels in the current frame,
just to be able to make them transparent - a very inefficient approach. It
would be very handy to have a second flag that would control whether to use
alpha blending or not, like APNG has.

I did think about the blend/no-blend flag a while back -- then decided not to go for it, as I thought that 'dispose_to_background' should cover such cases.
For my understanding, can you give me an example GIF which suffers from this problem?
 
I would also consider adding the 3rd (Previous) disposal
method of GIF and APNG. It is handy for animations, where something small
moves over a static background.

It is very important to understand that images should be designed so that they are not memory-heavy for renderers/viewers. The 'dispose_to_previous' method may make encoding the image easier/more efficient in some cases, but it is extremely heavy for a renderer: it would have to retain all (or many of) the previously decoded canvases in memory all the time. That was the main reason for not including that disposal method.

More detailed reasoning on this is here: https://code.google.com/p/webp/issues/detail?id=144#c1

Vlastimil Miléř

unread,
Mar 29, 2013, 4:53:17 AM
to webp-d...@webmproject.org
Hello Urvang,

I am glad I now understand the spec. I hope other people will
understand its current form.

Now, back to the proposed changes.
Why is 'dispose_to_background' not an ideal solution for
semitransparent animations? A good example is given by Anthony Thyssen
on his web page about GIF optimizations
http://www.imagemagick.org/Usage/anim_opt/ in the "Moving Hole
Animation" paragraph. I can also describe a simple example:
2 frame animation:
1st frame is a fully opaque photo
2nd frame is exactly the same, except it has one single transparent pixel
The only way to encode the second frame is to use
dispose_to_background, effectively erasing the whole canvas and then
encoding the whole 2nd frame.

If you had a blend/no_blend flag, the second frame could be just one
transparent pixel.

The example above may seem a bit extreme, but the problem is quite real.

---

The Previous dispose method is actually not as resource-intensive as
you describe. The viewer only needs to remember 1 frame, not all
previous frames. But I am not saying this is a critical feature, just
handy for an arguably important class of animations.

---

So, let's summarize the facts... WebP now has 2 disposal methods:
1. No disposal - this is handy for opaque animations and allows the
writers to encode the current frame as a difference to previous frame.
The alpha channel of WebP is (just like in GIF's case) (mis)used to
represent a change mask. This means that the alpha channel will most
likely consist of 0s and 255s only and must be saved losslessly or
else it would not do a good job as a change mask. Note that alpha
blending in these cases is trivial. I cannot imagine a realistic
scenario, in which the encoder would (without non-scalable, human
assistance) effectively use semi-transparent pixels when encoding
difference between 2 consecutive frames.
2. Background disposal - handy for ... I am actually not sure. It does
not seem handy at all. It is the only way to encode a frame that has a
semitransparent pixel where the previous frame didn't (in a way that
is often quite suboptimal). It may also be good for animations that are
mostly empty and if not empty, the frames are very different. But for
this kind of animations, the 'No disposal' is not much worse, because
encoding an area filled with single color does not take much space,
does it?

This does not seem good to me. If you really want to keep things as
simple as possible, cover the basic cases and allow the writers to do
sensible optimizations, I would propose this:
1. drop the disposal method
2. drop the background color from the header
3. add a blending method (maybe find a better name for it - like
composition method or something like that) with 2 options:
- No blending - replace the rectangle on the canvas
- Mask blending - if pixel alpha < 128, keep old pixel, else replace
with new pixel

This is very easy to implement for the viewers, and I believe it will
cover the basic cases better and without ambiguities (like the
color-space-correct alpha-blending, which involves color-managing the
pixels before and after blending them). It also allows writers to
easily do the basic optimizations.
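A sketch of the proposed mask blending, to make the rule concrete (hypothetical Python; the function name is made up): per pixel, an alpha below 128 keeps the canvas pixel, anything else replaces it outright, with no linearization or color management involved.

```python
def mask_blend(canvas_px, frame_px):
    """canvas_px and frame_px are (r, g, b, a) tuples. The frame
    pixel wins only when its alpha is at least 128, so a frame can
    leave the canvas untouched per pixel, or punch through with a
    new value, but never produces intermediate blends."""
    return frame_px if frame_px[3] >= 128 else canvas_px

old = (10, 20, 30, 255)
kept = mask_blend(old, (0, 0, 0, 0))      # transparent frame pixel: canvas kept
replaced = mask_blend(old, (1, 2, 3, 200))  # alpha >= 128: replaced outright
```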

There are a lot of ways to allow better optimizations that do not
cost too many resources, but I believe this is the real core.

Best regards,
Vlasta

Urvang Joshi

unread,
Mar 29, 2013, 6:08:23 PM
to webp-d...@webmproject.org
Hi again,


On Fri, Mar 29, 2013 at 1:53 AM, Vlastimil Miléř <vlastim...@gmail.com> wrote:
Hello Urvang,

I am glad I now understand the spec. I hope other people will
understand its current form.

Now, back to the proposed changes.
Why is 'dispose_to_background' not an ideal solution for
semitransparent animations? A good example is given by Anthony Thyssen
on his web page about GIF optimizations
http://www.imagemagick.org/Usage/anim_opt/ in the "Moving Hole
Animation" paragraph. I can also describe a simple example:
2 frame animation:
1st frame is a fully opaque photo
2nd frame is exactly the same, except it has one single transparent pixel
The only way to encode the second frame is to use
dispose_to_background, effectively erasing the whole canvas and then
encoding the whole 2nd frame.

If you had a blend/no_blend flag, the second frame could be just one
transparent pixel.

The example above may seem a bit extreme, but the problem is quite real.

Hmm, that is an interesting example. A no_blend option would definitely help in such a case.
In fact, we are seriously considering adding this option. It has good technical potential, and it can easily go into the ANMF chunk (which has 7 reserved bits right now).

At the same time, it would be good to have the spec, API, and gif2webp all supporting and making use of that option. That would help to prove its practical usefulness too.
So, we are opting to defer the decision of adding this option to the next minor release (say v0.3.1) based on evaluations. (Note that if we choose to add this option, backward compatibility with 0.3.0 will still be ensured, as we could specify that a value of '0' means 'blend' and '1' means 'no blend'.)
 

---

The Previous dispose method is actually not as resource-intensive as
you describe. The viewer only needs to remember 1 frame, not all
previous frames. But I am not saying this is a critical feature, just
handy for an arguably important class of animations.

---

So, let's summarize the facts... WebP now has 2 disposal methods:
1. No disposal - this is handy for opaque animations and allows the
writers to encode the current frame as a difference to previous frame.
The alpha channel of WebP is (just like in GIF's case) (mis)used to
represent a change mask. This means that the alpha channel will most
likely consist of 0s and 255s only and must be saved losslessly or
else it would not do a good job as a change mask. Note that alpha
blending in these cases is trivial. I cannot imagine a realistic
scenario, in which the encoder would (without non-scalable, human
assistance) effectively use semi-transparent pixels when encoding
difference between 2 consecutive frames.
2. Background disposal - handy for ... I am actually not sure. It does
not seem handy at all.

Actually, there are a number of use-cases where dispose to background is helpful.
For example, consider a case where a small rectangle (containing some object) is moving around on a white background.
Or a moving cartoon which is being animated on a plain background.
e.g. The animated icons here all use (and benefit from) dispose to background:
Once again, thanks for the feedback!

Vlastimil Miléř

unread,
Mar 29, 2013, 7:26:03 PM
to webp-d...@webmproject.org
Hi Urvang,

thanks for all your replies. I think I'll suspend my work on the
animation codec and wait for the new spec to see how it turns out.

One more thing and then I'll shut up. I just want to explain why I
said that I do not find the background disposal method much useful. I
looked at the web page you linked, grabbed one of the animated GIFs,
extracted one frame, and saved it as a lossless webp (whole.webp - 568
bytes). Then I cropped it (removing all empty space) and saved again
(cropped.webp - 508 bytes). Finally, I made the canvas twice the size
of the cropped version (twice.webp - 512 bytes). Consider an animation
where Sonic runs from one side of the canvas to the other side. Let's
say that each frame, he moves by as many pixels as is his size. Using
the background disposal method would save you about 1% (~ 508 vs. 512
bytes) compared with the case where you have a no_blend flag (and must
include the region where Sonic was 1 frame before) or about 10% (~508
vs. 568 bytes) compared to the case where you would always encode the
whole frame. On average, you would see gains closer to the 1% figure
or even below it, because the objects in the images usually do not
move that fast. And there is not much else you can do with this
disposal method. I do not consider this impressive and worth caring
about. There are other, lower hanging fruits.

GIFs use dispose to background because they have (almost) no other
way to erase pixels. You, on the other hand, still have time to
consider better approaches. I doubt anyone is using .webp animations
in any production environment (they would run into the same problems
I had), so there is still time to change the spec drastically.

Br,
Vlasta
Attachments: whole.webp, cropped.webp, twice.webp