Hello Miguel,
On 03.06.2017 17:33, miguel gusils wrote:
> Hi Jan,
>
> Could you elaborate on your livestreaming setup?
> Is it in use in a production environment?
>
Contrary to the huge demand signaled by university personnel, we have
only had one occasion where a livestream was requested. That worked
completely fine, though: about 2 hours of continuous streaming to ~10
guests, who were other universities in Germany playing the stream on
projectors in big lecture halls. The majority used the MPEG-DASH stream.
The essential part of this setup is a single ffmpeg process. This
process captures all inputs (RTSP stream from the camera, audio input,
local HDMI input from the Magewell card) and does a picture-in-picture
mix on the fly, as well as some basic audio filtering & heavy
compression. That's all done via filter_complex options. On the agent,
the stream gets a very light H.264 720p compression, and the output is
pushed via RTMP to the university's central Wowza server. There, the
stream gets transcoded to several smaller resolutions and delivered via
HLS and MPEG-DASH, played by a small Video.js web player with the
dash.js and HLS plugins.
The ffmpeg process on the capture agent is managed via systemd, which
will try to keep this stream running no matter what as long as the
systemd unit is enabled.
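For reference, the unit looks roughly like the following. This is an illustrative sketch, not our exact file; the unit name, script path, and description are made up:

```ini
# /etc/systemd/system/livestream.service -- illustrative sketch
[Unit]
Description=Livestream ffmpeg push to Wowza
After=network-online.target

[Service]
# Restart the ffmpeg process whenever it dies, after a short pause,
# so the stream keeps running as long as the unit is enabled/started.
ExecStart=/usr/local/bin/livestream.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```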
Now, the tricky (and still a bit messy) part is central control. The
first version, which I quickly cobbled together because we were in a
bit of a hurry, used a small Node script on the agents to listen for
commands via HTTP and in turn enable or disable the systemd unit, plus
a central Python Flask server to handle authentication, the web
interface and command relaying. This was all a bit fragile, so I've
revised it in the past week: the new version uses a central etcd
instance, on which a very simple Bash script watches certain keys and
in turn enables/disables the units. On one of the servers, a Node
server runs a web interface that changes these specific keys in etcd,
reports back the status from the agents and generates the web players.
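The watcher on each agent boils down to a loop like the following sketch. The key layout, unit name, and `etcdctl watch` invocation here are assumptions for illustration, not our exact script:

```shell
#!/bin/bash
# Sketch of the per-agent etcd watcher -- key and unit names are
# illustrative, not our real ones.

UNIT=livestream.service
KEY=/agents/$(hostname)/stream

# Translate the desired state stored in etcd into a systemctl verb.
action_for() {
    case "$1" in
        on)  echo start ;;
        off) echo stop ;;
        *)   echo status ;;   # unknown value: just report, change nothing
    esac
}

# Main loop (not invoked here): block until the key changes, then
# apply the new desired state to the systemd unit.
watch_loop() {
    while true; do
        value=$(etcdctl watch "$KEY")   # blocks until the key changes
        systemctl "$(action_for "$value")" "$UNIT"
    done
}
```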
I'm planning to open-source that software, but it will need a bit more
cleanup first.
The important part is, as I've mentioned, this ffmpeg command:
ffmpeg \
  -i "{{ camera_url }}" \
  -f v4l2 -i /dev/video0 \
  -thread_queue_size 2048 -f alsa -i dsnoop \
  -filter_complex "highpass=f=120,acompressor=threshold=0.3:makeup=4:release=20:attack=5:knee=4:ratio=10:detection=peak,alimiter=limit=0.8" \
  -filter_complex "[0:v]scale=w=500:h=-1[a];[1:v][a]overlay=(W-w-20):(H-h-20)[b];[b]scale=w=1280:h=720[x]" \
  -map "[x]:0" -map "2:0" \
  -c:a aac \
  -c:v libx264 -bf 0 -g 25 -crf 25 -preset veryfast \
  -f flv rtmp://{{ wowza.auth.user }}:{{ wowza.auth.pass }}@{{ wowza.server }}/{{ wowza.application }}/{{ agent_name }}
The dsnoop input on ALSA allows multiple processes to capture from an
ALSA input at the same time; this is necessary to avoid conflicts with
the regular recording. You will most likely have to modify the second
filter_complex chain. It first scales the camera input to 500 px width,
roughly a quarter of the overall width, and then overlays it on top of
the capture card input, which is captured at 1080p. The position is
defined by (W-w-20)..., in this case the bottom-right corner. Finally,
the whole video is scaled down from 1080p to 720p.
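If your system doesn't have a dsnoop device yet, one can be defined in /etc/asound.conf roughly like this. The card number, rate, and channel count are assumptions here; adjust them to your hardware:

```
# /etc/asound.conf -- illustrative dsnoop definition
pcm.dsnoop {
    type dsnoop
    ipc_key 1024          # unique key shared by all capturing processes
    slave {
        pcm "hw:0,0"      # the physical capture device
        rate 48000
        channels 2
    }
}
```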
I hope that this was of help to you. If you have any questions, feel
free to ask!
Regards
--
Jan Koppe
eLectures / LearnWeb
Westfälische Wilhelms-Universität
Georgskommende 25 - Room 310
48143 Münster/Westf. - Germany
E-mail:
jan....@wwu.de