> --
> You received this message because you are subscribed to the Google Groups
> "WebM Discussion" group.
> To post to this group, send email to webm-d...@webmproject.org.
> To unsubscribe from this group, send email to
> webm-discuss...@webmproject.org.
> For more options, visit this group at
> http://groups.google.com/a/webmproject.org/group/webm-discuss/?hl=en.
>
Sounds interesting. Could you explain your technique a bit more?
Franco
Sent from mobile.
All you need to do alpha is a single frame with which to do a difference
computation as luma, applied to alpha. If the mask never changed, a single
alt-ref frame could hold the data.
If the alt-ref frame can be updated with compressed data on the fly, then
the mask too could be updated on the fly. I do a bit of composition in ARGB
color space, but it seems to me most alpha capture is done with a blue or
green screen. In such a case, the single alt-ref frame can contain the blue
screen image without any actors.
As an alpha channel decoder, it would be up to you to grab the alt-ref and
use it for dif->luma->alpha. Of course, what good is alpha if you cannot
composite with it, right?
So in the end, it appears to me it can already support an alpha layer; the
bigger question is how, or what you're gonna do with it?
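The dif->luma->alpha idea above can be sketched roughly as follows. This is an illustration only, not a VP8 API: it just compares each decoded frame against a static reference (the empty screen held in the alt-ref) and turns the per-pixel luma difference into an 8-bit alpha mask. The function name and threshold are my own inventions.

```python
import numpy as np

def alpha_from_difference(frame, reference, threshold=32):
    """frame, reference: HxWx3 uint8 arrays; returns an HxW uint8 alpha mask."""
    # Absolute per-channel difference between frame and reference.
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    # Collapse to a luma-weighted difference (BT.601 weights).
    luma = (diff * np.array([0.299, 0.587, 0.114])).sum(axis=2)
    # Pixels that differ enough from the reference become opaque.
    return np.where(luma > threshold, 255, 0).astype(np.uint8)

background = np.zeros((2, 2, 3), np.uint8)   # the plain "screen" frame
frame = background.copy()
frame[0, 0] = (200, 180, 160)                # one "actor" pixel
mask = alpha_from_difference(frame, background)
# mask[0, 0] is opaque (255); the untouched pixels are transparent (0)
```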
Does the HTML5 video spec include support for alpha in a way that is useful
to an HTML page? Windowless video with actors walking across a text-filled
div? Compositing multiple layers? Seems to me that anything useful could be
made with it as it is.
Andy
Er. That would be quite a kludge, and would prevent you from using
that additional reference frame for its intended purpose. I don't see
why there would be any coding gain in combining it (it's not like the
alpha frame values will be useful predictors for the rest of the frame,
or vice versa)... so why not just add another video track to the WebM
stream with a second VP8 stream?
Some metadata would be needed to pair up that alpha channel, but
perhaps MKV already has something defined for that. It would probably
also be simpler than the metadata needed for an alt-ref splitting
approach, since that would also have to deal with fixing the timing.
I'm not even saying it's a good idea, I'm just saying it's already possible
as is in a single track.
----- Original Message -----
From: "Gregory Maxwell" <gmax...@gmail.com>
To: <webm-d...@webmproject.org>
Sent: Tuesday, October 05, 2010 5:27 PM
Subject: Re: Alpha channel support
On Mon, Oct 4, 2010 at 3:05 PM, Andy Shaules <bowl...@gmail.com> wrote:
> Using alternate reference frame group, just set in your alpha mask.
Enh. It still requires modifications to signal it, and you have to consider
e.g. what happens when the stream hits a keyframe.
----- Original Message -----
From: "KenMcD" <kenmc...@gmail.com>
To: "WebM Discussion" <webm-d...@webmproject.org>
Sent: Saturday, November 20, 2010 7:47 AM
Subject: Re: Alpha channel support
Silvia.
Franco
Steve
--
Steve Lhomme
Matroska association Chairman
There are 2 main ways to handle this. It's actually a lot like adding 3D
support to existing 2D streams. They are all container based, as VP8 is
not going to be changed for that. So for each video frame there will
be another frame (likely greyscale) representing the alpha values to
apply to each pixel. The dimensions of that frame should match those of
the "original" stream.
The 2 ways to handle that in the header are as follows:
1/ BlockAdditions: as the name suggests, this is additional data tied to the
Block. The data have an ID to identify what they represent (1 would mean
alpha channel with VP8). As it is tied to the Block, it has the same
timestamp. Existing players with no knowledge of it just discard that
info. The main difference for existing WebM muxers is that they have
to support BlockGroup/Block in addition to SimpleBlock, which was
preferred so far.
There is also a field in the Track Entry to specify that
BlockAdditions is used, so that the player/web browser can know an
alpha channel is in there.
http://www.matroska.org/technical/specs/index.html#BlockAdditions
2/ TrackOperation: is used to combine multiple tracks together, to
create a 3D effect for example. In this case the Blocks for the
regular and alpha data are separated into different tracks and combined
on playback. It has the advantage of working with any codec combination.
Right now only "PlaneCombine" and "JoinBlocks" operations are possible,
but an "AlphaChannel" one could easily be added.
http://www.matroska.org/technical/specs/index.html#TrackOperation
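The BlockAdditions layout from option 1/ can be sketched as follows. The element IDs (BlockAdditions 0x75A1, BlockMore 0xA6, BlockAddID 0xEE, BlockAdditional 0xA5) are from the Matroska spec; the surrounding muxer machinery is assumed, not shown, and the helper names are my own.

```python
def ebml_size(n):
    """Encode n as an EBML variable-length integer (length-prefixed)."""
    for length in range(1, 9):
        if n < (1 << (7 * length)) - 1:
            return (n | (1 << (7 * length))).to_bytes(length, "big")
    raise ValueError("size too large for EBML")

def element(element_id, payload):
    """A generic EBML element: id, size, payload."""
    return element_id + ebml_size(len(payload)) + payload

def block_additions(alpha_frame):
    """BlockAdditions > BlockMore > (BlockAddID=1, BlockAdditional)."""
    block_more = element(b"\xee", b"\x01") + element(b"\xa5", alpha_frame)
    return element(b"\x75\xa1", element(b"\xa6", block_more))

extra = block_additions(b"\x00" * 16)  # 16 bytes of stand-in alpha data
# 'extra' would sit inside the BlockGroup, next to the Block itself
```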
Solution #1 has the advantage that the data are tied to the original
codec (VP8 here), so the greyscale part could make use of other
compression information available in the source frame. That should
lead to better compression of the alpha channel. It also means the VP8
API might need some changes to allow handling this.
I suppose on playback both solutions are equivalent.
Steve
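Either way, playback ends in the same place: a greyscale alpha frame applied to the decoded picture with the usual "over" operator. A minimal sketch, assuming 8-bit RGB frames and an 8-bit alpha plane (the function name is illustrative, not any browser API):

```python
import numpy as np

def composite_over(fg, alpha, bg):
    """out = fg*a + bg*(1-a), with a normalised from 0..255 to 0..1."""
    a = (alpha.astype(np.float32) / 255.0)[..., None]
    out = fg.astype(np.float32) * a + bg.astype(np.float32) * (1.0 - a)
    return out.round().astype(np.uint8)

fg = np.full((1, 2, 3), 200, np.uint8)       # decoded video frame
bg = np.full((1, 2, 3), 100, np.uint8)       # page background
alpha = np.array([[255, 0]], np.uint8)       # left opaque, right clear
out = composite_over(fg, alpha, bg)
# the opaque pixel shows the video, the clear pixel shows the background
```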
SL> So for each video frame there will be another frame (likely greyscale)
SL> representing the alpha values to apply to each pixel. So the dimension
SL> for that frame should match the one of the "original" stream.
Is it a strict requirement that the extra frame exactly match the pixel
dimensions of the main frame? It could be useful to have an alpha pixel
map to multiple original pixels, akin to how 4:2:2, 4:1:1 and 4:2:0 map
colour difference channels to the Y channel.
-JimC
--
James Cloos <cl...@jhcloos.com> OpenPGP: 1024D/ED7DAEA6
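If a subsampled alpha plane were allowed, as suggested above, the decoder would have to scale it back up before compositing, just as it does for 4:2:0 chroma. A minimal nearest-neighbour upsample of a half-resolution plane (names are illustrative only):

```python
import numpy as np

def upsample_420(alpha_half):
    """Map each alpha sample to a 2x2 block of full-resolution pixels."""
    return alpha_half.repeat(2, axis=0).repeat(2, axis=1)

small = np.array([[0, 255]], np.uint8)   # 1x2 subsampled alpha plane
full = upsample_420(small)               # 2x4 full-resolution plane
# each subsampled alpha value now covers a 2x2 block of pixels
```

A real decoder would likely use bilinear filtering instead of pixel replication, but the mapping of one alpha sample to several luma pixels is the same.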
This is assuming the WebM spec is less frozen than the VP8 one.
> 2/ TrackOperation: is used to combine multiple tracks together. To
> create a 3D effect for example. In this case the Blocks for the
> regular and alpha are separated in different tracks and combined on
> playback. It has the advantage of working with any codec combination.
> Right now only "PlaneCombine" and "JoinBlocks" operation are possible,
> but an "AlphaChannel" one could easily be added.
>
> http://www.matroska.org/technical/specs/index.html#TrackOperation
Does WebM support that per the spec today?
Does Google have plans for VP8/WebM and 3D already? If yes, re-using
that for alpha might make sense...
On the output side, I don't know what is best for browsers: whether they
would prefer YUV plus an 8-bit greyscale plane, ARGB pixels, or even a
completely different coding.
The way Matroska works, you can add fields and be fully backward
compatible. Unknown fields are just discarded. In that respect
(and from what I know, all WebM parsers respect that) both solutions
will be backward compatible. Just think of it as an XML file with
added fields/entities.
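The backward compatibility described above comes from EBML's length-prefixed layout: a parser that doesn't recognise an element ID can still read its size and skip the payload. A toy skipper over a flat stream of (1-byte id, 1-byte size, payload) records illustrates the principle; real EBML ids and sizes are variable-length, and the record format here is simplified for the sketch.

```python
import io

KNOWN_IDS = {0xA1}  # pretend we only understand "Block"

def parse(stream):
    seen = []
    while True:
        header = stream.read(2)
        if len(header) < 2:
            return seen
        element_id, size = header[0], header[1]
        payload = stream.read(size)
        if element_id in KNOWN_IDS:
            seen.append((element_id, payload))
        # unknown ids are skipped without error: size tells us how far

data = bytes([0xA1, 2, 1, 2,   # known Block element
              0xEE, 1, 9,      # unknown element, silently skipped
              0xA1, 1, 7])     # another known Block
seen = parse(io.BytesIO(data))
# only the two known Block elements survive; the unknown one is dropped
```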
>> 2/ TrackOperation: is used to combien multiple tracks together. To
>> create a 3D effect for example. In this case the Blocks for the
>> regular and alpha are separated in different tracks and combined on
>> playback. It has the advantage of working with any codec combination.
>> Right now only "PlaneCombine" and "JoinBlocks" operation are possible,
>> but a "AlphaChannel" one could easily be added.
>>
>> http://www.matroska.org/technical/specs/index.html#TrackOperation
>
> Does WebM support that per the spec today?
I'm not sure the StereoMode field has been formally added to the WebM
variant of the specs. But there are 3D WebM files on the web that work
in nightly builds of Mozilla with nVidia 3D glasses. So 3D WebM is
already a reality.
> Does Google have plans for VP8/WebM and 3D already? If yes, re-using
> that for alpha might make sense...
--
Frank