Installing MPF in a Docker container on the Jetson family of products


Vincent Nguyen

Jun 28, 2022, 9:23:57 AM
to MPF Users

Good morning,

For once, this is not a question: after a few hurdles, I managed to install MPF inside a Docker container on a Jetson devkit (in my case a Jetson AGX Xavier, but this should apply to any Jetson), while still taking advantage of the host X server to display content (such as the DMD). It is not rocket science, as MPF essentially installs a bunch of Python components with very few hardware dependencies.

I chose Jetson because I know how to make X forwarding work, and I know I can benefit from full GPU acceleration (especially the multimedia codecs) from within the container (although that remains to be tested).

In principle, the same should apply to any other Linux distro, although your mileage may vary depending on dependencies and container runtime capabilities.

This has a lot of benefits, the major ones being:
- low power consumption: a Jetson Nano is 10W, a Jetson Xavier NX is 15W
- small mechanical footprint: about two credit cards side by side
- full support for multimedia capabilities: any Jetson product can run multiple h264/h265 decoding sessions, probably more than you would ever need for a pinball machine
- compositing: you can also take advantage of the GPU for advanced layering techniques

Note however that the last two have not been tested, as I'm still clueless about MPF and need to start digging into it :-) The next step is to connect a PIN2DMD to the Jetson (which will likely require adding a device passthrough to the container).

Also, if you are familiar with containers: in principle I should keep the machine folder on the host and mount it as a volume in the container so that data is persistent, and almost all of the steps below should be captured in a docker-compose file, but I'm lazy this afternoon :-)

So without further ado, here are the steps to install:


# Install the BSP (via SD card image on a devkit, or via SDK Manager on a production module or third-party carrier)
# Install the docker runtime via SDK Manager, or with the commands below

sudo apt install nvidia-docker2
sudo systemctl restart docker


# Run the next steps on the Jetson itself

# Pull the L4T base container from ngc.nvidia.com
# (the tag should match your JetPack/L4T release; r34.1 is used below)

sudo docker pull nvcr.io/nvidia/l4t-base:r34.1

# Create the container

xhost +    # allow container clients to use the host X server (permissive; xhost +local: is safer)
sudo docker run -it --name=mpf_l4t --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r34.1

# Connect to the container console
sudo docker attach mpf_l4t

# All following steps are run INSIDE the container
# Install build components and dependencies

apt update
apt install build-essential
apt install git
apt install ca-certificates

apt install libavdevice-dev
apt install libavfilter-dev

# clone git and build

cd ~
git clone https://github.com/missionpinball/mpf-debian-installer.git
cd mpf-debian-installer
./install

cd ~
git clone https://github.com/missionpinball/mpf-examples
cd mpf-examples/demo_man

# Note on the DISPLAY variable - on my setup, using TeamViewer, I have to set DISPLAY=:1

export DISPLAY=:1
mpf both -X

# To exit the test application, hit Ctrl-C
# To disconnect from the container, hit Ctrl-P then Ctrl-Q
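For reference, the `docker run` invocation above could be captured in a compose file, roughly like this (a sketch only, untested by me: the `runtime` key needs compose file format 2.3+, or alternatively the daemon's `default-runtime` set to nvidia):

```yaml
# docker-compose.yml - untested sketch mirroring the docker run flags above
version: "2.3"
services:
  mpf_l4t:
    image: nvcr.io/nvidia/l4t-base:r34.1
    container_name: mpf_l4t
    network_mode: host
    runtime: nvidia
    environment:
      - DISPLAY=${DISPLAY}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
    stdin_open: true   # -i
    tty: true          # -t
```

You would still need `xhost +` on the host before bringing the container up.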

Vincent Nguyen

Jun 28, 2022, 11:19:31 AM
to MPF Users
And tested with the PIN2DMD connected to it: works just fine. You need to use the following docker run command:

sudo docker run -it --name=mpf_l4t --net=host --runtime nvidia -e DISPLAY=$DISPLAY --device /dev/bus/usb/001/004 -v /home/mpf_machines/:/home/mpf_machines/ -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r34.1

You need to adjust /dev/bus/usb/001/004 to whatever lsusb reports (Bus and Device numbers) for the PIN2DMD.
It's rather inconvenient as-is, because the device number changes every time I unplug/replug it, but I'll get there.
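One way to script around the changing device number is to derive the path from `lsusb` output at container start. A small helper sketch (the grep pattern is hypothetical; it must match whatever string or ID your PIN2DMD shows in lsusb):

```shell
# Build the /dev/bus/usb/BBB/DDD path from an lsusb output line.
# lsusb lines look like: "Bus 001 Device 004: ID 0483:5740 ..."
# so the bus and device numbers are the 2nd and 4th fields.
usb_dev_path() {
  echo "$1" | awk '{ gsub(":", "", $4); printf "/dev/bus/usb/%s/%s\n", $2, $4 }'
}

# Example with a made-up lsusb line; on the real machine you would use
# something like:  usb_dev_path "$(lsusb | grep -i pin2dmd)"
usb_dev_path "Bus 001 Device 004: ID 0483:5740 STMicroelectronics"
```

The result can then be fed to `--device` in the docker run command. A udev rule creating a stable symlink would be cleaner, but this needs no host configuration.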

Ralf G.

Jun 28, 2022, 2:33:06 PM
to MPF Users
Vincent,
Following this thread - very interesting.
It'd be really nice to see whether this could be a way around the Kivy/GStreamer bottleneck one hits when running mpf-mc on a Raspberry Pi. It doesn't feel right to use an almost full-blown PC "just" for the output of some video, when much smaller SBCs offer all the hardware that's needed.

Ralf

Vincent Nguyen

Jun 28, 2022, 3:33:28 PM
to mpf-...@googlegroups.com
The smallest Jetson can decode 4 x 1080p60, and it's time-sliced (meaning you can run a higher number of streams at lower resolutions and/or lower framerates).

The decoding API is actually exposed through gstreamer, so my next step is to install the official gstreamer release for Jetson, but I will likely need to find where in the source the gstreamer graph is instantiated and modify it to use the decoder capabilities.
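To illustrate the kind of graph change I mean: a portable software pipeline decodes with `avdec_h264`, while the Jetson hardware path swaps that node for `nvv4l2decoder` (element names from NVIDIA's accelerated gstreamer plugins for L4T; this obviously needs a Jetson and a test clip to actually run, so take it as a sketch):

```shell
# SW decode (portable, CPU-bound):
gst-launch-1.0 filesrc location=clip.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink

# HW decode on Jetson (NVDEC): same graph, decode node swapped, plus
# nvvidconv to copy frames out of NVMM device memory for display.
gst-launch-1.0 filesrc location=clip.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! autovideosink
```

The point being: it's the application that builds the graph, so the swap has to happen wherever Kivy/MPF constructs its pipeline.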

But conceptually, it's just a SW issue. Although I'm not a SW dev, I'll definitely have to play around with it.



Ralf G.

Aug 22, 2022, 2:48:43 PM
to MPF Users
Vincent,
I stumbled across this article:
I don't have a Jetson yet (and by the looks of it, I won't be able to get hold of one before 2023...), so I can't check it out, but wouldn't that be useful for running MPF > Kivy > GStreamer out of the box, accelerated? Or am I simplifying this too much?
Ralf

Vincent Nguyen

Aug 23, 2022, 4:00:55 AM
to MPF Users
That was the first direction I originally took. However, it's not straightforward to replace the gstreamer pipeline. Gstreamer is itself a graph-based pipeline: you construct your video chain by linking nodes together, from source to display. On Jetson (and probably many other SoCs that operate the same way), you therefore need to modify the graph to swap the SW-based decoding node for the HW-based decoding node the platform supports; you can't just replace the decoder transparently underneath the existing graph.

FFMPEG, on the other hand, is a drop-in replacement. It has fewer graph capabilities: with gstreamer you can take pretty much any source, add filtering or color space conversion in between, and send the result to two different "sinks", whereas with ffmpeg you are somewhat limited by the core SW capabilities.

But in the scope of MPF, I think that should be perfectly fine.

I could not work on it over the summer, as I'm remote during that period, but before I left I had compiled FFMPEG with HW acceleration support in a docker container on the Jetson.

The next steps are to replace the ffmpeg installed with Kivy, and to instruct Kivy to use ffmpeg instead of gstreamer. Unfortunately, this is rather convoluted, since Kivy requires a specific Python extension to use ffmpeg in the first place, and I have not yet found how to install that extension.
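For what it's worth, the extension in question is ffpyplayer, and Kivy selects its video provider via the `KIVY_VIDEO` environment variable (this much is standard Kivy behavior; whether a hardware-accelerated ffmpeg build gets picked up this way on Jetson is exactly the open question):

```shell
# Inside the container: install the provider, then point Kivy at it.
# pip3 install ffpyplayer    # (commented out here: needs network access)

# Must be set in the environment before Kivy is imported / MPF is launched:
export KIVY_VIDEO=ffpyplayer
echo "$KIVY_VIDEO"
```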

Brian Madden

Aug 23, 2022, 1:08:10 PM
to MPF Users
Quinn is in the process of updating the MPF-MC audio system to use ffpyplayer instead of gstreamer. That will allow a few things, including removing the need for gstreamer and then using only ffpyplayer for video. (He will also write a new video player widget which will work like the existing Kivy one but will only use ffpyplayer, and will allow the audio tracks from videos to be processed like any other audio file in MPF-MC, including support for tracks, ducking, etc.)

I think ffmpeg is different from ffpyplayer, right? Maybe ffpyplayer uses SDL2 under the hood? Anyway, I'm hoping we can keep these efforts aligned to ultimately allow video support that stays in the GPU.

I wonder if we should enable discussions for the repos on GitHub, and then move these developer conversations there?

Quinn Capen

Aug 23, 2022, 3:01:23 PM
to MPF Users
To clarify what Brian said: I'm removing the gstreamer dependency from the audio system (it is only used for streaming sound files, not in-memory ones) and replacing it with ffmpeg (I was planning to use the version/libraries installed with ffpyplayer, since that is installed with Kivy). After I get that working and released, the next step will be to create a new custom video player based on ffpyplayer, in order to route video sound into the MPF-MC audio engine (to support mixing and ducking). Supporting hardware-optimized ffmpeg builds should be possible with some installation coordination (especially on Windows).

ffpyplayer is a wrapper around ffmpeg that also uses SDL2.

Quinn

Vincent Nguyen

Aug 24, 2022, 4:35:33 AM
to MPF Users
Disclaimer: I'm not a SW developer, just a tinkerer with a good understanding of architecture (and a HW focus).

To come back to the comments: it seems to me this work enables a custom audio mixing path, but is not relevant to video playback, correct? Unless you intend to take advantage of it to re-architect video playback (in which case, how do you plan to address content compositing, etc.)?

Kivy, if I recall, has an option to enable ffpyplayer instead of gstreamer, but indeed it does not allow for a custom graph (which is what you're trying to achieve here for the purpose of audio mixing, if I understand correctly).

Quinn Capen

Aug 24, 2022, 10:45:26 AM
to MPF Users
You're correct. The initial goal was to route the audio from videos into the custom audio engine. This is not possible in Kivy without providing a custom video solution as well: all of the built-in video playback options in Kivy have their own audio output implementation, and none have hooks for custom routing of the audio stream.

I believe the best solution is to provide our own video playback engine. This may also open a path to additional video features such as transparent video (which has been requested many times). Kivy does not directly support additional video plugins; however, its widget system allows us to create a new custom video widget backed by our own playback code, and the Kivy widget API supports the necessary drawing-buffer features to display rendered video.

In fact, if you look at the Kivy video widget, it is basically just an image widget that updates itself at the video frame rate, rendering each frame as an image. That is the same strategy I'm planning to use (ffmpeg will decode the video frames and render them as images that a custom Kivy widget will display and integrate with all the other Kivy widgets). MPF-MC already leverages some custom widgets and widget wrappers.

The drawback of creating our own video engine is that it is another piece of complicated code that has to be supported and maintained by volunteer programmers (this is always a trade-off with open-source software). My plan is to leverage other active open-source projects (in this case ffpyplayer) and essentially create a custom version. Because it is a wrapper around ffmpeg, we get to leverage all of the capabilities and continued development of that project (including all the codecs and the really complicated parts of video and audio decoding and rendering). The rest of the code should be just basic plumbing around those ffmpeg API calls (event handling, buffers, configuration settings, etc.). This architecture change should also allow for custom hardware-optimized builds of the ffmpeg libraries.

To maximize compatibility, I plan on leaving the existing video implementation alone so it will continue to work as it does now. The new video player will be a new widget type that has settings and events that are compatible with the existing video widget (to make switching easy). This will give the user the option to select which video option they want to use.

Hope that addresses your questions and concerns. 