Hi Matthias,
org.opencastproject.job.load.acceptexceeding tells a node to accept a job
whose job load is greater than the node's maximum capacity, so it doesn't
help in your case.
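(For reference, that option lives in the global configuration, etc/custom.properties if I remember correctly, and would look something like this; the value is just an illustration:

    org.opencastproject.job.load.acceptexceeding=true
)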
What you probably want is to limit the number of encoding jobs running at the same time. You can do that by specifying job loads in the encoding profile configuration found in etc/encodings (something like 'profile.PROFILE_NAME.jobload=2.0'). You will need to do a little arithmetic to get the right number, based on how many encoding jobs you want to run simultaneously on a node; see the illustration below.
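As a purely illustrative calculation (the numbers are made up): a node's maximum load is its core count by default, so on a 16-core worker where you want at most 4 of a given encoding job running at once, you would give that profile a job load of 16 / 4 = 4.0:

    profile.my-profile.jobload=4.0

('my-profile' is only a placeholder for your profile's identifier.)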
As an example, our workers have 32 cores and our most intensive encoding jobs produce 4 outputs using the process-smil operation, which encodes them all in parallel. We thus have 4 encoding profiles, each with a job load of 4.0, and in etc/org.opencastproject.composer.impl.ComposerServiceImpl.cfg we set 'job.load.factor.process.smil=0.5'. Each such encoding job therefore has a job load of 4 x 4.0 x 0.5 = 8.0, which limits the node to running 4 of those jobs concurrently at any given time (32 cores / 8.0).
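To make that concrete, the relevant bits of our configuration look roughly like this (the profile names below are invented; only the numbers match our setup):

    # encoding profiles (etc/encodings), one per output
    profile.output-a.jobload=4.0
    profile.output-b.jobload=4.0
    profile.output-c.jobload=4.0
    profile.output-d.jobload=4.0

    # etc/org.opencastproject.composer.impl.ComposerServiceImpl.cfg
    job.load.factor.process.smil=0.5

With a maximum node load of 32 (one per core), each process-smil job then weighs (4 x 4.0) x 0.5 = 8.0, so at most 32 / 8.0 = 4 of them run at the same time.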
I hope this helps!
Thanks,
Rute
Harvard-DCE
On 25.10.2019 at 12:49, Matthias Vollroth <thd.ma...@gmail.com> wrote:
Hi Kristofs, thanks for your efforts.

"There isn't anything limiting the amount of memory resources on your system and killing ffmpeg when it exceeds that limit"

No, there is not. The server instance is not limited in any way, and the composer job loads are at their defaults (job.load.max.multiple.profiles=0.8 / job.load.factor.process.smil=0.5) on both clusters. Only on the production system do the ffmpeg jobs consume more resources than are available; Linux then automatically kills these jobs with SIGKILL and the encoding of 720p/1080p gets aborted.