More information about creating webm files and live streaming (yes, one more newbie :-))

Leandro Santiago

Jan 3, 2011, 9:16:43 AM
to WebM Discussion
Hello to all. I'm new to this mailing list and a newbie to WebM :-)

I'm developing an application that sends WebM live streams over HTTP.

In fact I'm having some problems getting the streaming part started (the
capture is working fine). There's no audio, only video.

The idea is that my source of frames is not a file but a camera device
(currently I'm working with v4l and an IP camera).

The main problem is that I can't find much information about creating
WebM videos. The libmatroska/libebml documentation is almost
nonexistent. The only documentation I found is the Matroska spec.

On the WebM project website there isn't any information about the WebM
container (and its differences from MKV), nor any information about
creating WebM files and streams (VP8+MKV+Vorbis), only IVF files
(and I couldn't find any IVF file to test with).

I'm using ffmpeg (libav*) in my application (written in C++ on Linux),
but I can't use the filesystem in the transmission process (it's
basically: get the frame from the camera and send it to the client via
HTTP). With ffmpeg, however, if you need to create a container, you must
create a file and use the filesystem, so I can't create the WebM
container structure (header, etc.) with ffmpeg. Since I can create a
stream without using the filesystem (I encode a frame and concatenate it
into an in-memory buffer), my initial idea was to use libmatroska to
create the WebM container structure and ffmpeg to send the video data
itself.

Using tools such as webminspector and ebml-viewer, plus the Matroska
documentation, I'm trying to understand this container.

After reading documents [1] and [2], I came up with this structure for a
WebM/Matroska live stream:

- Header (required)
- Segment Information (must use the "unknown" size: all 1s in the size
field)
- Track (information about the streams: width, height, etc.)
- Clusters (the data itself)

On the HTTP(S) side I'm using the GNU libmicrohttpd library (because the
app also sends other kinds of information, not only video), and the rest
of my idea is:

- the client connects and does the authentication process
- the server sets the HTTP response headers (Content-Type:
"video/webm", etc.)
- the server creates the WebM header (with Doc Type: webm)
- the server creates the segment section (what does "must use the
unknown size" mean?) and concatenates it with the header
- the server creates one track entry for the video to be sent, with the
information width, height, and codec_id (V_VP8), and concatenates it
with the previous data
- the server sends this buffer to the client, indicating that a WebM
video will be sent
- the server enters a loop where, while the client is connected, it gets
a frame from the camera, encodes it with VP8, and sends it to the client
- when the client disconnects, it cleans up all the allocated data

Do you agree with this approach?

I also tested the Flumotion server, which is able to create WebM
streams. Although it works fine with WebM, it's written in Python, and
it seems all the codec logic is inside GStreamer.

Then I grabbed a few seconds of the stream and saved it to disk (using
lynx -dump http://stream > stream.webm). The resulting file is perfectly
playable, but of course I can't seek in it :-)

But when I analyze this file with the libwebm/sample program, I get this
output:
$ libwebm/sample webcam.webm
libmkv verison: 1.0.0.10
EBML Header
EBML Version : 1
EBML MaxIDLength : 4
EBML MaxSizeLength : 8
Doc Type : webm
Pos : 28

Segment::Load() failed.

The EBML viewer I'm using [3] also stops with an error after parsing the
header. Now I'm more confused than before :-(

I hope you understand my problem, and sorry for my bad English. I really
believe in an open web, using only open protocols and open formats. As a
developer it has been hard, but fun :-)

I also posted these questions on the vpx, webm, and matroska IRC
channels, but it seems nobody there can help me. So if anyone can help,
thank you very much :-)

[1] http://www.matroska.org/technical/diagram/index.html
[2] http://www.matroska.org/technical/streaming/index.html
[3] http://code.google.com/p/ebml-viewer/

Lou Quillio

Jan 4, 2011, 8:17:36 AM
to webm-d...@webmproject.org
On Mon, Jan 3, 2011 at 9:16 AM, Leandro Santiago
<leandro...@gmail.com> wrote:
[snip]

> On the WebM project website there isn't any information about the WebM
> container (and its differences from MKV)

There are container guidelines published here:

http://www.webmproject.org/code/specs/container/

LQ

--
Lou Quillio
Webmaster
WebMProject.org

Steve Lhomme

Jan 9, 2011, 10:24:50 AM
to webm-discuss
You should have a look at mkclean. It's one big C file that reads,
writes, and does all sorts of things to Matroska files. It uses
libebml2/libmatroska2, which are the preferred libraries nowadays.

There is a --live mode that does the kind of output you are looking
for. What it does is, more or less, set an infinite/unknown size on
each Cluster.

I hope that helps.

> --
> You received this message because you are subscribed to the Google Groups "WebM Discussion" group.
> To post to this group, send email to webm-d...@webmproject.org.
> To unsubscribe from this group, send email to webm-discuss...@webmproject.org.
> For more options, visit this group at http://groups.google.com/a/webmproject.org/group/webm-discuss/?hl=en.
>
>

--
Steve Lhomme
Matroska association Chairman

Arie Skliarouk

Feb 12, 2011, 6:07:13 PM
to WebM Discussion
Hi Leandro,

Did you have any success creating the WebM header for streaming on the
fly?

--
Arie

Leandro Santiago

Feb 13, 2011, 7:28:56 AM
to webm-d...@webmproject.org
Yes, after some time, I did :-)

To write the EBML structure I'm using part of the libvpx source code (I
call it libwebmvv), and you can get it at
http://gitorious.org/libwebmvv

This code works directly with structs in memory, which is exactly what I need.

First I send the file header and the basic information about the video
track (resolution, fps, etc.) in the segment info, and after that I
start sending the frames.

As I was already using ffmpeg in my app, I'm using it to encode the
video: I encode a frame into a buffer, concatenate this buffer with a
cluster block, and send it to the user.

Currently I'm putting one frame per cluster, because I imagine it's the
fastest way to deliver the video to the user, but perhaps it's not a
good approach. Still, it works: I can open the file with mkvinfo, and
all the clients I tested worked fine with my stream (I tested mplayer,
VLC, ffplay, Firefox, Chrome, and Opera).

2011/2/12 Arie Skliarouk <skli...@gmail.com>:

Steve Lhomme

Feb 13, 2011, 2:23:47 AM
to webm-discuss
GStreamer can do that


--
Steve Lhomme
Matroska association Chairman

Zaheer Merali

Feb 13, 2011, 9:58:40 AM
to webm-d...@webmproject.org
GStreamer does this because the matroska muxer (and the webm muxer) sets
enough buffers on the streamheaders property such that GStreamer's
multifdsink knows it needs to send these headers first to every client.
Flumotion uses multifdsink in its http-streamer.

Zaheer
