Programmatically creating streams with FFmpeg


revmischa

Nov 25, 2010, 4:13:22 PM
to C++ RTMP Server
I am trying to write an application that can take an arbitrary video
input stream and display it on a webpage. I want to be able to control
everything through software and automate it. I have code written to
interface with FFmpeg that can deal with getting video streams in, and
I can use it to stream via MPEG-TS to rtmpd. From there I can access
the stream from a Flash player via RTMP (and I would also like to
support segmented MPEG for Safari).

My question is this: if I have code that publishes an MPEG-TS stream to
rtmpd, how do I get the stream URI to give to a Flash client? I need
to do everything without any manual intervention, so something like
"look at the console output" is not going to be helpful. Is there
anything I can query to find a mapping of stream -> URI or something
similar? Is there any documentation I should be looking at?

Thanks!

Andriy

Nov 26, 2010, 6:36:03 AM
to C++ RTMP Server
Hi,
Basically you can do this using RTMP RPC calls :)
Your Flash client can send an RPC call to rtmpd (the nc.call() Flash
function), and you can handle your own RPC calls from the Flash client
in the RTMPAppProtocolHandler::ProcessInvokeGeneric method.

That way you can find all the stream names and return them to the
Flash client.
Of course, you can design more complex logic :)

Eugen-Andrei Gavriloaie

Nov 26, 2010, 7:16:00 AM
to c-rtmp...@googlegroups.com

Exactly! I missed the obvious solution: ask the server.


C++ RTMP Server

Nov 26, 2010, 5:39:41 AM
to c-rtmp...@googlegroups.com

On Nov 25, 2010, at 11:13 PM, revmischa wrote:

> I am trying to write an application that can take an arbitrary video
> input stream and display it on a webpage. I want to be able to control
> everything through software and automate it. I have code written to
> interface with FFmpeg and can deal with getting video streams in, and
> I can use it to stream via mpegTS to rtmpd. From there I can access
> the stream from a flash player via RTMP (and I would also like to
> support segmented MPEG for safari).
> My question is this: if I have code that publishes a mpeg TS stream to
> rtmpd, how do I get the stream URI to give to a flash client? I need
> to do everything without any manual intervention, so something like
> "look at the console output" is not going to be helpful.

Then don't look! Joking :)

Since the stream name is dynamic when it comes from MPEG-TS (it is composed from the track IDs), you can override BaseClientApplication::SignalStreamRegistered and intercept the stream name. Once you have that, you need to inform the outside world about it. You can do that using the XML-RPC support built into rtmpd (see the vptests project, for example).

Another solution is to override ProcessInvokePlay and search for the stream yourself.

> Is there
> anything I can query to find a mapping of stream -> URI or something
> similar? Is there any documentation I should be looking at?

Nope. That's sad. The only way to go at the moment is to read the comments inside the example applications and ask questions here.

Cheers,
Andrei


>
> Thanks!
>

------
Eugen-Andrei Gavriloaie
Web: http://www.rtmpd.com
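
A rough sketch of the SignalStreamRegistered override described above, written as if added to an application generated from appscaffold. The MyApplication class name, the header path, and the exact signatures (BaseStream*, GetName()) are assumptions taken only from the names mentioned in this thread, so verify them against baseclientapplication.h and the stream headers in your rtmpd checkout before relying on it.

// myapplication.cpp -- assuming MyApplication was generated by appscaffold
// and derives from BaseClientApplication. Names and paths are assumptions.
#include <string>
#include <vector>

#include "myapplication.h"   // hypothetical appscaffold-generated header

// Stream names rtmpd has registered so far, so a later RPC handler
// (ProcessInvokeGeneric) or an XML-RPC call can return them to clients.
static std::vector<std::string> g_registeredStreams;

void MyApplication::SignalStreamRegistered(BaseStream *pStream) {
    // assumed: keep the base-class behaviour first
    BaseClientApplication::SignalStreamRegistered(pStream);

    // assumed accessor: BaseStream::GetName() returns the dynamic name
    // that rtmpd composed from the MPEG-TS track ids
    g_registeredStreams.push_back((std::string) pStream->GetName());
}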

Mischa Spiegelmock

Nov 26, 2010, 3:05:08 PM
to c-rtmp...@googlegroups.com
Where/how do I override it? What code do I put in there? I'm not
familiar with the internals of rtmpd and there's no documentation. I'd
like to just jump in and do what you have described, but I'm not even
sure where to start. Do I create a new application from appscaffold
and override it in rtmpappprotocolhandler.cpp? How would I store the
stream name so I can output it in RPC calls later?

Mischa Spiegelmock

Nov 26, 2010, 3:09:31 PM
to c-rtmp...@googlegroups.com
I don't have to use MPEG-TS either. Does rtmpd support RTSP ingress? It
would be ideal if I could do an RTSP PUBLISH command to specify a
mountpoint, and then have that mountpoint mapped to something an RTMP
client could access. I'd rather use RTP than MPEG-TS over TCP.

Mischa Spiegelmock

Nov 26, 2010, 6:28:15 PM
to c-rtmp...@googlegroups.com
I don't believe BaseClientApplication::SignalStreamRegistered is being
called for an incoming TS stream, only
TSAppProtocolHandler::RegisterProtocol.

On Fri, Nov 26, 2010 at 2:39 AM, C++ RTMP Server <crtmp...@gmail.com> wrote:
>

Thorsten

Nov 28, 2010, 7:02:26 PM
to C++ RTMP Server
Hi,
I have not tried it, but I believe that it is called. When the
(incoming) stream is instantiated, SignalStreamRegistered of the
assigned application is called.
I'd also be interested in something nicer than the current dynamic
naming convention of the TS streams.
Regards,
Thorsten


On 27 Nov., 00:28, Mischa Spiegelmock <mspiegelm...@gmail.com> wrote:
> I don't believe BaseClientApplication::SignalStreamRegistered is being
> called for an incoming TS stream, only
> TSAppProtocolHandler::RegisterProtocol.
>
> On Fri, Nov 26, 2010 at 2:39 AM, C++ RTMP Server <crtmpser...@gmail.com> wrote:
>
>

Nelu

Nov 30, 2010, 5:10:09 AM
to C++ RTMP Server
Hi,
I've created a small C++ application that uses GStreamer to capture
video from a web camera. Now I want to encode the video to H.264 in an
MPEG-TS container and send it to rtmpd over UDP. I know how to
programmatically use ffmpeg and encode to H.264, but I don't know how
to create the MPEG-TS container.
Does anybody know where to find some documentation/samples on how to do it?
Any help will be appreciated.

Best regards,
Nelu Cociag
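
A minimal sketch of what this looks like with the FFmpeg libraries, assuming a recent libavformat (the function names have changed since 2010): the MPEG-TS container is just another muxer, and the UDP output with pkt_size=1316 matches the ffmpeg command Thorsten posts below (1316 = 7 x 188-byte TS packets per datagram). The open_ts_udp_output/send_packet helper names are made up for illustration and are not part of rtmpd or FFmpeg.

/* Sketch: mux already-encoded H.264 packets into MPEG-TS over UDP. */
extern "C" {
#include <libavformat/avformat.h>
}

/* vpar describes the H.264 stream (width/height/extradata), e.g. filled
 * from the encoder context with avcodec_parameters_from_context(). */
int open_ts_udp_output(AVFormatContext **octx, const AVCodecParameters *vpar)
{
    /* pkt_size=1316 keeps each UDP datagram at 7 x 188-byte TS packets */
    const char *url = "udp://127.0.0.1:10000?pkt_size=1316";
    int ret;

    avformat_network_init();
    ret = avformat_alloc_output_context2(octx, NULL, "mpegts", url);
    if (ret < 0)
        return ret;

    AVStream *st = avformat_new_stream(*octx, NULL);
    if (!st)
        return AVERROR(ENOMEM);
    avcodec_parameters_copy(st->codecpar, vpar);

    ret = avio_open(&(*octx)->pb, url, AVIO_FLAG_WRITE);  /* the UDP "file" */
    if (ret < 0)
        return ret;
    return avformat_write_header(*octx, NULL);            /* emits PAT/PMT */
}

/* Call for every encoded packet; pts/dts must be rescaled to st->time_base. */
int send_packet(AVFormatContext *octx, AVPacket *pkt)
{
    pkt->stream_index = 0;           /* single video stream in this sketch */
    return av_interleaved_write_frame(octx, pkt);
}

/* On shutdown: av_write_trailer(octx); avio_closep(&octx->pb);
 * avformat_free_context(octx); */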

Thorsten Pferdekaemper

Nov 30, 2010, 5:18:06 AM
to c-rtmp...@googlegroups.com
Hi,
(Also see my mail ""Really free real streaming": ffmpeg -> rtmpd ->
jwplayer/haxe". It describes, among other things, how to use ffmpeg to
create the UDP stream.)

This is what works for me:

ffmpeg -i "<filename-of-movie>" -re -vcodec libx264 -vpre default -vpre
baseline -b 500000 -s 320x180 -strict experimental -g 25 -me_method zero
-acodec aac -ab 96000 -ar 48000 -ac 2 -vbsf h264_mp4toannexb -f mpegts
udp://127.0.0.1:10000?pkt_size=1316

Obviously, the "-i "<filename-of-movie>"" part needs to be changed to match
your input. The part that creates the MPEG-TS container is "-f mpegts", and
streaming over UDP is done via "udp://127.0.0.1:10000?pkt_size=1316".

Which interface does the web camera use? Does it have a v4l2 driver?

Regards,
Thorsten


Nelu

Nov 30, 2010, 5:54:39 AM
to C++ RTMP Server
Hi,

On Nov 30, 12:18 pm, "Thorsten Pferdekaemper"
<thors...@pferdekaemper.com> wrote:
> Hi,
> (also see my mail ""Really free real streaming": ffmpeg -> rtmpd ->
> jwplayer/haxe". It describes (amongst others) how to use ffmpeg to create
> the UDP stream)
Very nice tutorial. But I want to do it in code, not by using ffmpeg as an
external program...
>
> This is what works for me:
>
> ffmpeg -i "<filename-of-movie>" -re -vcodec libx264 -vpre default -vpre
> baseline -b 500000 -s 320x180 -strict experimental -g 25 -me_method zero
> -acodec aac -ab 96000 -ar 48000 -ac 2 -vbsf h264_mp4toannexb -f mpegts
> udp://127.0.0.1:10000?pkt_size=1316
>
> Apparently, the part "-i "<filename-of-movie>"" needs to be changed to get
> your input running. The part to create MPEG-TS is "-f mpegts". To stream it
> over UDP is done via "udp://127.0.0.1:10000?pkt_size=1316".
>
> Which interface does the web camera use? Does it have a v4l2 driver?
Yes, it uses v4l2 driver.

Best regards,
Nelu Cociag

Thorsten Pferdekaemper

Nov 30, 2010, 8:26:06 AM
to c-rtmp...@googlegroups.com
Hi,
if it uses v4l2, ffmpeg should be able to do everything directly. At least,
the docs say so:
> ffmpeg -f video4linux2 -i /dev/video0 ...

I am planning to try something like this.
Anyway, could you share your GStreamer program? I have never tried to use
GStreamer, but it seems like a powerful concept.

Nelu

Dec 1, 2010, 5:32:04 AM
to C++ RTMP Server
Hi,

Here is the test application I've used. You need to install GTK+ and
GStreamer, then compile the application with:

g++ -Wall `pkg-config --cflags --libs gstreamer-0.10 gstreamer-interfaces-0.10` `pkg-config --cflags --libs gtk+-2.0` capture_test.c -o capture_test

Run the application with:

./capture_test

Your webcam should appear in the main application window. If you press the
'Start recording' button, each frame will go into the
'buffer_probe_callback' function. There you can use an external encoder
(ffmpeg) to compress the video data.

Does anybody know how to do this using the ffmpeg API? (A sketch follows
after the code below.)

Here is the code:

#include <stdlib.h>
#include <string.h>
#include <gtk/gtk.h>
#include <gdk/gdkx.h>
#include <gst/gst.h>
#include <gst/interfaces/xoverlay.h>
#include <gtk/gtk.h>


/* Define sources and sinks according to
* running environment
* NOTE: If you want to run the application
* in ARM scratchbox, you have to change these*/
#ifdef __arm__
/* The device by default supports only
* vl4l2src for camera and xvimagesink
* for screen */
#define VIDEO_SRC "v4l2camsrc"
#define VIDEO_SINK "xvimagesink"
#else
/* These are for the X86 SDK. Xephyr doesn't
* support XVideo extension, so the application
* must use ximagesink. The video source depends
* on driver of your Video4Linux device so this
* may have to be changed */
#define VIDEO_SRC "videotestsrc"
#define VIDEO_SINK "ximagesink"
#endif

/* Define structure for variables that
* are needed throughout the application */
typedef struct
{
GtkWidget *window;

GstElement *pipeline;
GtkWidget *screen;
guint buffer_cb_id;
} AppData;


/* This callback will be registered to the image sink
* after user starts recording */
static gboolean buffer_probe_callback(
GstElement *image_sink,
GstBuffer *buffer, GstPad *pad, AppData *appdata)
{
g_print("-receive buffer=%x ",(unsigned int)buffer);
/* This is the YUV (I420) buffer that you can use for encoding... */
unsigned char *data_frame =
(unsigned char *) GST_BUFFER_DATA(buffer);


/* Returning TRUE means that the buffer is OK to be
* sent forward. When using fakesink this doesn't really
* matter because the data is discarded anyway */
return TRUE;
}

/* Callback that gets called when the user clicks the "Start recording"
button */
static void on_start_recording(GtkWidget *widget, AppData *appdata)
{
GstElement *image_sink;

/* Get the image sink element from the pipeline */
image_sink = gst_bin_get_by_name(GST_BIN(appdata->pipeline),
"image_sink");
/* Display a note to the user */
g_print("\nStart recording...");

/* Connect the "handoff"-signal of the image sink to the
* callback. This gets called whenever the sink gets a
* buffer it's ready to pass forward on the pipeline */
appdata->buffer_cb_id = g_signal_connect(
G_OBJECT(image_sink), "handoff",
G_CALLBACK(buffer_probe_callback), appdata);
}

/* Callback that gets called whenever pipeline's message bus has
* a message */
static void bus_callback(GstBus *bus, GstMessage *message, AppData
*appdata)
{
gchar *message_str;
const gchar *message_name;
GError *error;

//message_name = gst_structure_get_name(gst_message_get_structure(message));
//g_print("got message: %s\n", message_name);

/* Report errors to the console */
if(GST_MESSAGE_TYPE(message) == GST_MESSAGE_ERROR)
{
gst_message_parse_error(message, &error, &message_str);
g_error("GST error: %s\n", message_str);
g_free(error);
g_free(message_str);
}

/* Report warnings to the console */
if(GST_MESSAGE_TYPE(message) == GST_MESSAGE_WARNING)
{
gst_message_parse_warning(message, &error, &message_str);
g_warning("GST warning: %s\n", message_str);
g_free(error);
g_free(message_str);
}

/* See if the message type is GST_MESSAGE_APPLICATION which means
* that the message is sent by the client code (this program) and
* not by gstreamer. */
if(GST_MESSAGE_TYPE(message) == GST_MESSAGE_APPLICATION)
{
/* Get name of the message's structure */
message_name =
gst_structure_get_name(gst_message_get_structure(message));
if(!message_name) return;
g_print("got message application: %s\n", message_name);

/* The hildon banner must be shown in here, because the bus callback is
* called in the main thread and calling GUI-functions in gstreamer threads
* usually leads to problems with X-server */

/* "photo-taken" message means that the photo was succefully taken
* and saved and message is shown to user */
if(!strcmp(message_name, "photo-taken"))
{
g_print("\nPhoto taken");
}

/* "photo-failed" means that the photo couldn't be captured or saved
*/
if(!strcmp(message_name, "photo-failed"))
{
g_print("\nError: Saving photo failed");
}
}

}

/* Callback to be called when the screen-widget is exposed */
static gboolean expose_cb(GtkWidget * widget, GdkEventExpose * event,
gpointer data)
{

/* Tell the xvimagesink/ximagesink the x-window-id of the screen
* widget in which the video is shown. After this the video
* is shown in the correct widget */
g_print("expose-event- \n");


gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(data),
GDK_WINDOW_XWINDOW(widget->window));
return FALSE;
}

/* Initialize the GStreamer pipeline. Below is a diagram
 * of the pipeline that will be created:
 *
 *                              |Screen|  |Screen|
 *                            ->|queue |->|sink  |-> Display
 * |Camera|  |CSP   |  |Tee|  /
 * |src   |->|Filter|->|   | \   |Image|   |Image |   |Image|
 *                            ->|queue|-> |filter|->|sink  |-> JPEG file
 */
static gboolean initialize_pipeline(AppData *appdata,
int *argc, char ***argv)
{
GstElement *pipeline, *camera_src, *screen_sink, *image_sink;
GstElement *screen_queue, *image_queue;
GstElement *csp_filter, *image_filter, *tee;
GstCaps *caps;
GstBus *bus;


/* Initialize Gstreamer */
gst_init(argc, argv);

/* Create pipeline and attach a callback to it's
* message bus */
pipeline = gst_pipeline_new("test-camera");

bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
gst_bus_add_watch(bus, (GstBusFunc)bus_callback, appdata);
gst_object_unref(GST_OBJECT(bus));

/* Save pipeline to the AppData structure */
appdata->pipeline = pipeline;

/* Create elements */
/* Camera video stream comes from a Video4Linux driver */
//camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
camera_src = gst_element_factory_make("v4l2src", "camera_src");

/* Colorspace filter is needed to make sure that sinks understands
* the stream coming from the camera */
csp_filter = gst_element_factory_make("ffmpegcolorspace",
"csp_filter");
/* Tee that copies the stream to multiple outputs */
tee = gst_element_factory_make("tee", "tee");
/* Queue creates new thread for the stream */
screen_queue = gst_element_factory_make("queue", "screen_queue");
/* Sink that shows the image on screen. Xephyr doesn't support XVideo
* extension, so it needs to use ximagesink, but the device uses
* xvimagesink */
screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");
/* Creates separate thread for the stream from which the image
* is captured */
image_queue = gst_element_factory_make("queue", "image_queue");
/* Filter to convert stream to use format that the gdkpixbuf library
* can use */
image_filter = gst_element_factory_make("ffmpegcolorspace",
"image_filter");
/* A dummy sink for the image stream. Goes to bitheaven */
image_sink = gst_element_factory_make("fakesink", "image_sink");

g_print("Check that elements are correctly initialized \n");
/* Check that elements are correctly initialized */
if(!(pipeline && camera_src && screen_sink && csp_filter &&
screen_queue
&& image_queue && image_filter && image_sink))
{
g_critical("Couldn't create pipeline elements");
return FALSE;
}

g_print("Set image sink to emit handoff-signal before throwing away
it's buffer \n");
/* Set image sink to emit handoff-signal before throwing away
* it's buffer */
g_object_set(G_OBJECT(image_sink),
"signal-handoffs", TRUE, NULL);

g_print("Add elements to the pipeline. This has to be done prior to
linking them \n");
/* Add elements to the pipeline. This has to be done prior to
* linking them */
gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,
tee, screen_queue, screen_sink, image_queue,
image_filter, image_sink, NULL);

/* Specify what kind of video is wanted from the camera */
g_print(" Specify what kind of video is wanted from the camera \n");
/*caps = gst_caps_new_simple("video/x-raw-rgb", */
/*caps = gst_caps_new_simple("video/x-raw-yuv",
"format", GST_TYPE_FOURCC, GST_MAKE_FOURCC ('U', 'Y', 'V', 'Y'),
"width", G_TYPE_INT, 640,
"height", G_TYPE_INT, 480,
"bpp", G_TYPE_INT, 24,
"framerate", GST_TYPE_FRACTION, 25, 1,
NULL);

caps = gst_caps_new_simple("video/x-raw-rgb",
"width", G_TYPE_INT, 640,
"height", G_TYPE_INT, 480,
"bpp", G_TYPE_INT, 24,
"framerate", GST_TYPE_FRACTION, 25, 1,
NULL); */

caps = gst_caps_new_simple("video/x-raw-yuv",
"format", GST_TYPE_FOURCC, GST_MAKE_FOURCC ('I', '4', '2', '0'),
"width", G_TYPE_INT, 640,
"height", G_TYPE_INT, 480,
"framerate", GST_TYPE_FRACTION, 20, 1,
NULL);



g_print("Link the camera source and colorspace filter using
capabilities specified \n");
/* Link the camera source and colorspace filter using capabilities
* specified */
if(!gst_element_link_filtered(camera_src, csp_filter, caps))
{
return FALSE;
}
gst_caps_unref(caps);

g_print("Connect Colorspace Filter -> Tee -> Screen Queue -> Screen
Sink * This finalizes the initialization of the screen-part of the
pipeline \n");
/* Connect Colorspace Filter -> Tee -> Screen Queue -> Screen Sink
* This finalizes the initialization of the screen-part of the
pipeline */
if(!gst_element_link_many(csp_filter, tee, screen_queue, screen_sink,
NULL))
{
return FALSE;
}

g_print("gdkpixbuf requires 8 bits per sample which is 24 bits per
pixel \n");
/* gdkpixbuf requires 8 bits per sample which is 24 bits per
* pixel */
/*caps = gst_caps_new_simple("video/x-raw-rgb",
"width", G_TYPE_INT, 352,
"height", G_TYPE_INT, 288,
"bpp", G_TYPE_INT, 24,
NULL); */
caps = gst_caps_new_simple("video/x-raw-yuv",
"format", GST_TYPE_FOURCC, GST_MAKE_FOURCC ('I', '4', '2', '0'),
"width", G_TYPE_INT, 640,
"height", G_TYPE_INT, 480,
"framerate", GST_TYPE_FRACTION, 20, 1,
NULL);


g_print("Link the image-branch of the pipeline. The pipeline is ready
after this \n");
/* Link the image-branch of the pipeline. The pipeline is
* ready after this */
if(!gst_element_link_many(tee, image_queue, image_filter, NULL))
return FALSE;
if(!gst_element_link_filtered(image_filter, image_sink, caps)) return
FALSE;

gst_caps_unref(caps);

/* As soon as screen is exposed, window ID will be advised to the
sink */
g_signal_connect(appdata->screen, "expose-event",
G_CALLBACK(expose_cb),
screen_sink);

gst_element_set_state(pipeline, GST_STATE_PLAYING);

return TRUE;
}

/* Destroy the pipeline on exit */
static void destroy_pipeline(GtkWidget *widget, AppData *appdata)
{
/* Free the pipeline. This automatically also unrefs all elements
* added to the pipeline */
gst_element_set_state(appdata->pipeline, GST_STATE_NULL);
gst_object_unref(GST_OBJECT(appdata->pipeline));
}

/* Initialize the GUI by creating the main GtkWindow
* (the Hildon-specific windowing is not used here) */
void example_gui_initialize(
GtkWidget **window,
int *argc, char ***argv,
gchar *example_name)
{
g_thread_init(NULL);

/* Initialize GTK+ */
gtk_init(argc, argv);

/* Create the main window and set its title */
*window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
gtk_window_set_title (GTK_WINDOW (*window), "Video Capture");
gtk_widget_set_size_request (*window, 740, 576);

/* Connect destroying of the main window to gtk_main_quit */
g_signal_connect(G_OBJECT(*window), "delete_event",
G_CALLBACK(gtk_main_quit), NULL);
}

void example_gui_run(GtkWidget *window)
{
/* Show the window and widgets it contains
* and go to the main loop. */
gtk_widget_show_all(window);
gtk_main();
}


int main(int argc, char **argv)
{
AppData appdata;
GtkWidget *button, *hbox, *vbox_button, *vbox;

/* Initialize and create the GUI */

example_gui_initialize(
&appdata.window,
&argc, &argv, (gchar *)"Camera example");

vbox = gtk_vbox_new(FALSE, 0);
hbox = gtk_hbox_new(FALSE, 0);
vbox_button = gtk_vbox_new(FALSE, 0);

gtk_box_pack_start(GTK_BOX(hbox), vbox, FALSE, FALSE, 0);
gtk_box_pack_start(GTK_BOX(hbox), vbox_button, FALSE, FALSE, 0);

appdata.screen = gtk_drawing_area_new();
gtk_widget_set_size_request(appdata.screen, 500, 380);
gtk_box_pack_start(GTK_BOX(vbox), appdata.screen, FALSE, FALSE, 0);

button = gtk_button_new_with_label("Start recording");
gtk_widget_set_size_request(button, 170, 380);
gtk_box_pack_start(GTK_BOX(vbox_button), button, FALSE, FALSE, 0);

g_signal_connect(G_OBJECT(button), "clicked",
G_CALLBACK(on_start_recording), &appdata);
gtk_container_add(GTK_CONTAINER(appdata.window), hbox);

/* Initialize the GStreamer pipeline */
if(!initialize_pipeline(&appdata, &argc, &argv))
{
g_print("\n Failed to initialize pipeline");
return -1;
}
g_print("It seems is initialized OK !!!\n");

g_signal_connect(G_OBJECT(appdata.window), "destroy",
G_CALLBACK(destroy_pipeline), &appdata);

/* Begin the main application */
example_gui_run( appdata.window);

/* Free the gstreamer resources. Elements added
* to the pipeline will be freed automatically */

return 0;
}


Regards,
Nelu Cociag
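
To close the loop on the ffmpeg-API question above, here is a hedged sketch of pushing the I420 buffers from buffer_probe_callback into libavcodec/libx264. It uses the modern send_frame/receive_packet API rather than the calls available in 2010, hard-codes the 640x480 @ 20 fps caps from the pipeline, and the encoder_open/encoder_push names plus the write_cb hook are made up for illustration (write_cb could be the send_packet muxing helper sketched earlier in this thread).

/* Sketch: encode raw I420 frames (the data_frame pointer from
 * buffer_probe_callback above) to H.264 with libavcodec. */
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
}

static AVCodecContext *enc = NULL;
static AVFrame *frame = NULL;
static int64_t frame_index = 0;

int encoder_open(void)
{
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264); /* needs libx264 */
    if (!codec)
        return -1;
    enc = avcodec_alloc_context3(codec);
    enc->width = 640;
    enc->height = 480;
    enc->pix_fmt = AV_PIX_FMT_YUV420P;      /* the I420 caps from the pipeline */
    enc->time_base = av_make_q(1, 20);      /* 20 fps, as in the caps above */
    enc->gop_size = 25;
    enc->bit_rate = 500000;
    if (avcodec_open2(enc, codec, NULL) < 0)
        return -1;

    frame = av_frame_alloc();
    frame->format = enc->pix_fmt;
    frame->width = enc->width;
    frame->height = enc->height;
    return av_frame_get_buffer(frame, 0);   /* allocates the Y/U/V planes */
}

/* Call once per raw buffer; write_cb receives every encoded packet. */
int encoder_push(uint8_t *i420, int (*write_cb)(AVPacket *pkt))
{
    uint8_t *src_data[4];
    int src_linesize[4];

    av_frame_make_writable(frame);
    /* describe the packed I420 buffer, then copy it into the (possibly
     * padded) planes owned by the AVFrame */
    av_image_fill_arrays(src_data, src_linesize, i420,
                         AV_PIX_FMT_YUV420P, enc->width, enc->height, 1);
    av_image_copy(frame->data, frame->linesize,
                  (const uint8_t **)src_data, src_linesize,
                  AV_PIX_FMT_YUV420P, enc->width, enc->height);
    frame->pts = frame_index++;             /* in 1/20 s units */

    if (avcodec_send_frame(enc, frame) < 0)
        return -1;
    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(enc, pkt) == 0) {
        write_cb(pkt);                      /* hand the H.264 packet onwards */
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    return 0;
}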