[ANN] v0.1.2 and beyond


Pascal Massimino

May 24, 2011, 6:29:32 PM
to WebP Discussion
Hi,

I realized we haven't really formally announced the v0.1.2 release, although it's
quite old news already!

In light of the recent post published on the Chromium blog, it's worth
having a dev-oriented recap and fleshing out the future lines of development worth pursuing:

* The main motivation for tagging the tree at v0.1.2 was the completion of the
  incremental decoding feature, which has been integrated in Chrome. With it, a decoding
  object can be instantiated with whatever bytes are already available, and left in a
  suspended state until further bytes are received. As rows of samples are decoded, they
  can be displayed immediately. This is particularly useful for big images. Best to have a
  look at the code for a longer description (a usage sketch follows this list).
* The yuv->rgb conversion has also been improved, by using an interpolating filter during
  upsampling (see the toy example after this list). This was a missing feature of the initial
  release that you pointed out and that we have since fixed.
* The configuration / autotools system was made more generic, to ease packaging
  (Debian, SUSE, etc.) and integration.
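
For illustration, here's a minimal sketch of how the incremental decoder can be driven, assuming
the WebPINewRGB / WebPIAppend / WebPIDecGetRGB / WebPIDelete entry points from decode.h (exact
signatures and the internal-allocation behavior may vary between releases, so treat this as a
sketch rather than canonical usage):

```c
#include <stdint.h>
#include <stdio.h>
#include "webp/decode.h"

// Minimal sketch: feed a WebP stream to the incremental decoder chunk by
// chunk, displaying newly decoded rows as soon as they become available.
static int IncrementalDecode(FILE* in) {
  uint8_t chunk[4096];
  // NULL output buffer: let the decoder manage its own memory.
  WebPIDecoder* const idec = WebPINewRGB(MODE_RGB, NULL, 0, 0);
  if (idec == NULL) return 0;
  while (1) {
    const size_t bytes_read = fread(chunk, 1, sizeof(chunk), in);
    if (bytes_read == 0) break;   // nothing more to feed for now
    const VP8StatusCode status = WebPIAppend(idec, chunk, bytes_read);
    if (status == VP8_STATUS_OK || status == VP8_STATUS_SUSPENDED) {
      int last_y = 0, width = 0, height = 0, stride = 0;
      const uint8_t* const rgb =
          WebPIDecGetRGB(idec, &last_y, &width, &height, &stride);
      if (rgb != NULL) {
        // Rows [0, last_y) are fully decoded: hand them to the display code.
        printf("decoded up to row %d of %d\n", last_y, height);
      }
      if (status == VP8_STATUS_OK) break;   // the whole picture is done
    } else {
      break;                                // bitstream error
    }
  }
  WebPIDelete(idec);
  return 1;
}
```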

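And a toy illustration of the interpolating upsampler: chroma is stored at half resolution, so
each 2x2 block of output pixels can be rebuilt by bilinear interpolation from the four surrounding
chroma samples. The weights below are the standard bilinear ones, not necessarily the exact taps
used in libwebp:

```c
#include <stdint.h>

// Toy bilinear upsampling: given the four neighboring half-resolution chroma
// values a (top-left), b (top-right), c (bottom-left), d (bottom-right),
// produce the 2x2 block of full-resolution values sitting between them.
static void UpsampleChroma2x2(uint8_t a, uint8_t b, uint8_t c, uint8_t d,
                              uint8_t out[2][2]) {
  out[0][0] = (uint8_t)((9 * a + 3 * b + 3 * c + 1 * d + 8) >> 4);
  out[0][1] = (uint8_t)((3 * a + 9 * b + 1 * c + 3 * d + 8) >> 4);
  out[1][0] = (uint8_t)((3 * a + 1 * b + 9 * c + 3 * d + 8) >> 4);
  out[1][1] = (uint8_t)((1 * a + 3 * b + 3 * c + 9 * d + 8) >> 4);
}
```
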
Now, what's beyond the already-old 0.1.2? Well,

* The encoder has already been made 2x faster by using SSE2 implementations
 of the most critical functions. But there are higher-level algorithmic
 improvements that could still be made. These paths are not aggressively over-optimized
 yet, as that would be premature and could close the door to some
 novel uses of VP8 features. I'm mostly thinking about the time we
 spend on rate-distortion trade-off evaluation (a schematic loop is sketched after
 this list). There are still more algorithms to try here, along with improving the
 segmentation or the distortion metrics we use. In any case, I feel the VP8 format
 can still be pushed to further improve the compression efficiency it achieves
 now, as we saw in the updated study.
* The decoder still lacks optimization and is around 3x slower than an
 optimized JPEG decoder. The immediate, reasonable goal would be
 only a 2x slow-down, by using SSE2 in a similar fashion (see the toy SSE2 example
 after this list). The in-loop filtering code is a good candidate, as are the usual
 DCT transforms. Speeding up the boolean decoder would require more than SSE2, though.
* On the decoding front, on-the-fly downscaling is rewarding when
 one only wants a downscaled output (say, for a quick preview). Quite a few CPU
 cycles can be saved by not storing all samples, skipping some yuv->rgb conversions,
 or using an alternate in-loop filter. This is something worth adding to the API.
* Note that we've also added a JNI wrapper for the decoder. A wrapper for
 the encoder will be added at some point.
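
To make the rate-distortion point concrete, here's a schematic mode-selection loop (purely
illustrative, not libwebp's actual code): each candidate gets a distortion value and an estimated
rate, and the one minimizing the Lagrangian cost D + lambda * R wins. Much of the encoder's time
goes into evaluating these candidates.

```c
// Schematic rate-distortion mode selection (illustrative only): pick the
// candidate minimizing the Lagrangian cost  cost = distortion + lambda * rate.
typedef struct {
  int mode;          // candidate prediction mode
  long distortion;   // e.g. sum of squared errors for this candidate
  long rate;         // estimated bits needed to code mode + residuals
} ModeScore;

// Returns the mode of the cheapest candidate. Assumes num_modes >= 1.
static int PickBestMode(const ModeScore* candidates, int num_modes,
                        long lambda) {
  int best = 0;
  long best_cost = candidates[0].distortion + lambda * candidates[0].rate;
  for (int i = 1; i < num_modes; ++i) {
    const long cost = candidates[i].distortion + lambda * candidates[i].rate;
    if (cost < best_cost) {
      best_cost = cost;
      best = i;
    }
  }
  return candidates[best].mode;
}
```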

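And on the decoder side, here's a toy example (not taken from libwebp) of the kind of inner loop
that vectorizes well with SSE2: adding a row of 16-bit residuals to 8-bit predicted pixels with
saturation, as happens after the inverse transform.

```c
#include <emmintrin.h>  // SSE2 intrinsics
#include <stdint.h>

// Toy example: dst[i] = clip_to_255(pred[i] + residual[i]) for 16 pixels,
// the kind of inner loop a decoder runs after the inverse transform.
static void AddResiduals16(uint8_t* dst, const uint8_t* pred,
                           const int16_t* residual) {
  const __m128i zero = _mm_setzero_si128();
  const __m128i p = _mm_loadu_si128((const __m128i*)pred);   // 16 x uint8
  const __m128i p_lo = _mm_unpacklo_epi8(p, zero);           // 8 x uint16
  const __m128i p_hi = _mm_unpackhi_epi8(p, zero);           // 8 x uint16
  const __m128i r_lo = _mm_loadu_si128((const __m128i*)(residual + 0));
  const __m128i r_hi = _mm_loadu_si128((const __m128i*)(residual + 8));
  const __m128i s_lo = _mm_add_epi16(p_lo, r_lo);
  const __m128i s_hi = _mm_add_epi16(p_hi, r_hi);
  // Pack back to bytes with unsigned saturation, i.e. clip to [0, 255].
  _mm_storeu_si128((__m128i*)dst, _mm_packus_epi16(s_lo, s_hi));
}
```
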
Some new features we're working on and announced at Google I/O include container-level
features around metadata (ICC, XMP, ...), multi-image, 3D, and animation.

All of the above is still within the boundaries of the VP8 specification, and there's more we
are experimenting with, off-track, based on the community's feedback, namely alpha and
yuv444. We value VP8 compatibility very highly, including the hardware implementations
out there, but we also want to experiment with new algorithms within libwebp and see
what people get enthusiastic about. To that effect, there's an --enable-experimental flag
for './configure' that activates some of the next-gen code and features. Right now, there
is a stub for experimenting with alpha channel compression. Using zlib for
compression (as PNG does) is quite efficient, but there's room for experimenting with alternate
lossless methods too. Alongside the alpha support, yuv444 support through an enhancement
layer has been started but is still at a very early stage.
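
For reference, a minimal sketch of that zlib route. The function below is hypothetical and only
illustrates the approach, not the actual experimental code; the zlib calls themselves
(compressBound(), compress2()) are the standard ones:

```c
#include <stdint.h>
#include <stdlib.h>
#include <zlib.h>

// Hypothetical sketch (not the actual libwebp experimental code): extract
// the alpha plane from an RGBA buffer and deflate-compress it with zlib,
// similar in spirit to what PNG does for its image data.
static uint8_t* CompressAlphaPlane(const uint8_t* rgba, int width, int height,
                                   uLongf* compressed_size) {
  const uLong num_pixels = (uLong)width * height;
  uint8_t* const alpha = (uint8_t*)malloc(num_pixels);
  uint8_t* out = NULL;
  uLongf out_size = compressBound(num_pixels);
  uLong i;
  if (alpha == NULL) return NULL;
  // The alpha plane is every 4th byte of the interleaved RGBA samples.
  for (i = 0; i < num_pixels; ++i) alpha[i] = rgba[4 * i + 3];
  out = (uint8_t*)malloc(out_size);
  if (out == NULL ||
      compress2(out, &out_size, alpha, num_pixels,
                Z_BEST_COMPRESSION) != Z_OK) {
    free(out);
    out = NULL;
    out_size = 0;
  }
  free(alpha);
  *compressed_size = out_size;
  return out;   // caller takes ownership; NULL on failure
}
```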

There are a lot more features we could go after, but we really want to grow the format and
the code around the most needed ones first, and move forward with the help of community
feedback. A good example to start with is the Wikipedia experiment that Mathias
mentioned on this list. Please try it!

We welcome any feedback, remarks, and code about what 0.2 should be made of!

skal

May 24, 2011, 6:44:31 PM
to WebP Discussion
Hey

On May 24, 3:38 pm, Pascal Massimino <pascal.massim...@gmail.com>
wrote:
> Hi,


Sorry for the spam, I've been repeatedly hit by an error #103 from Gmail
while I was trying to send this message. How ironic.

skal


Jeff Muizelaar

May 24, 2011, 10:57:33 PM
to webp-d...@webmproject.org
On Tue, May 24, 2011 at 6:29 PM, Pascal Massimino <pascal.m...@gmail.com> wrote:
> All the above are still within the boundary of the VP8 specifications, and there's more we
> are experimenting with, off-track, based on the community's feedback, namely alpha and
> yuv444, We value vp8 compatibility very highly including the hardware implementations
> out there but we also want to experiment with new algorithms within libwebp and see
> what people get enthusiastic about. To that effect, there's an --enable-experimental flag
> for './configure' that activates some of the next-gen code and features. Right now, there
> is a stub for experimentation with the alpha channel compression. Using the zlib for
> compression (like PNG) is quite efficient, but there's room for experimenting with alternate
> lossless methods too. Alongside the alpha support, yuv444 support through enhancement
> layer has been started but is still at very early stage.

 What will the process for standardizing the experimental features be? Will there be any sort of last call process?

Also, regarding alpha support, is there a chance that more of the existing VP8 compression primitives could be used instead
of zlib? Further, do you know how using zlib on the alpha channel compares to other lossless compressors?

-Jeff


Pascal Massimino

May 25, 2011, 4:45:40 PM
to webp-d...@webmproject.org
Jeff,

On Tue, May 24, 2011 at 7:57 PM, Jeff Muizelaar <jrmu...@gmail.com> wrote:

> What will the process for standardizing the experimental features be? Will there be any sort of last call process?
 
Yes, definitely. At some point we'll need to settle on something, but not without being sure we cover all major needs.
Personally, I'm using a lot of web-crawled transparent images for testing. But everybody is encouraged to try
the code directly (remember: `./configure --enable-experimental`). You can then compress transparent PNGs
directly to and from WebP using 'cwebp'. The transparent WebP files look like regular WebP files, except that the zlib
data for transparency is stored 'hidden' at the end of partition #0. This is just a temporary measure, but it makes the
WebPDecodeRGBA() function work seamlessly.
 
> Also, regarding alpha support is there a chance that more of the existing compression vp8 primitives could be used instead
> of zlib?

All the transforms (Hadamard, 4x4) currently in the VP8 spec are lossy ones and cannot be re-used as is.
That being said, it's unclear whether the alpha channel needs to be losslessly compressed as a whole.
There are two transparency values that must be preserved losslessly at all costs: 0 and 255. Starting from there,
the other values usually code smooth transparency gradients and could probably sustain a reasonable amount
of lossy degradation. Any idea is welcome here...
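
As a toy illustration of that idea (not actual libwebp code): intermediate alpha values could be
snapped to a coarser grid while 0 and 255 are passed through untouched, e.g.:

```c
#include <stdint.h>

// Toy quantizer for the idea above: keep 0 (fully transparent) and 255
// (fully opaque) exact, and round everything else to a multiple of 'step'
// (step > 0), staying strictly inside (0, 255) so semantics don't change.
static uint8_t QuantizeAlpha(uint8_t a, int step) {
  if (a == 0 || a == 255) return a;              // must be preserved losslessly
  const int q = ((a + step / 2) / step) * step;  // round to nearest multiple
  if (q <= 0) return 1;
  if (q >= 255) return 254;
  return (uint8_t)q;
}
```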

 
> Further, do you know how using zlib on the alpha channel compares to other lossless compressors?

zlib has the inconvenience of working only in one direction (backward), thus ignoring the two-dimensional
opportunities for matching. Patch matching (instead of string matching) is an idea to explore, I'd say [*].
That being said, right now using zlib 'as is' in WebP gives similar performance to PNG, so it's a good
starting point.
Besides, the comparison codec I have in mind is the 'ffv1' lossless codec found in ffmpeg (mainly because
it's easy to experiment with). It seems to do better than PNG for photo-like images.

skal


[*] cf. some colleagues' paper here: http://research.google.com/pubs/pub36415.html, in the context of lossy compression.



Oliver

May 27, 2011, 9:45:03 AM
to WebP Discussion
On 25 May, 22:45, Pascal Massimino <pascal.massim...@gmail.com> wrote:
> the code directly (remember! `./configure --enable-experimental`. And then
> you can compress transparent PNGs

If I do so, I get a really ugly webp with black and white in the
background, but no transparency.

Pascal Massimino

May 27, 2011, 10:35:16 PM
to webp-d...@webmproject.org
Hi Oliver,

Are you converting back to PNG + transparency for viewing (with, e.g., gimp)?

Oliver

May 28, 2011, 9:54:37 PM
to WebP Discussion
On 28 May, 04:35, Pascal Massimino <pascal.massim...@gmail.com> wrote:
> Hi Oliver,
>
> On Fri, May 27, 2011 at 6:45 AM, Oliver <soc...@anonsphere.com> wrote:
> > On 25 May, 22:45, Pascal Massimino <pascal.massim...@gmail.com> wrote:
> > > the code directly (remember! `./configure --enable-experimental`. And
> > then
> > > you can compress transparent PNGs
>
> > If I do so, I get a really ugly webp with black and white in the
> > background, but no transparency.
>
> Are you converting back to PNG + transparency for viewing (with, e.g.,
> gimp)?

No, I thought the decoder in Chrome already supports it. :-)