Re: [OC Users] OC on Docker for production, good idea?


Lars Kiesow

Sep 11, 2018, 9:53:53 AM
to us...@opencast.org
Hi,
several institutions use the official docker containers in production.
If you do not need any special set-up and want to stick to the official
releases, it should be fine to use them.
–Lars

On Tue, 11 Sep 2018 04:48:52 -0700 (PDT)
Adilagha Aliyev <adilagh...@gmail.com> wrote:

> Hey. Does anyone run OC on Docker in production? I know that it
> works alright for testing/developing locally, but is it a good idea
> to use it in production?
>
> I would like to know if it is equally stable. Also, is it a better
> idea to create a custom Docker image, or to use the one on the
> official Docker GitHub page?

Abraham Martin

Sep 11, 2018, 3:13:22 PM
to Opencast Users, autom...@uis.cam.ac.uk
Hello all,

I'm new to the community. I work for the University of Cambridge, and we have started to plan a production deployment (for a pilot) of Opencast with Galicaster-based capture agents.

All our current deployments for other projects/products/services are Kubernetes based on Google Cloud.

We would like to do the same with Opencast and deploy it to Kubernetes on Google Cloud. I have found the available docker-compose files (all-in-one and separate) and the list of containers in Quay, but I haven't found any k8s deployment files or Helm charts. Do you know if anyone has created any of those in the past? Our initial plan would be to deploy to a 3-node k8s cluster and to make it highly available if that is possible (multiple masters, workers, HA DB, etc.).

Does anyone have experience with this? Would anyone recommend (or advise against) any of the above?

Thanks!

Abraham.

-- 
Dr. Abraham Martin
Head of DevOps

University Information Services
Roger Needham Building
7 JJ Thomson Avenue
Cambridge, CB3 0RB

Greg Logan

Sep 11, 2018, 5:52:59 PM
to Opencast Users, autom...@uis.cam.ac.uk
Hi Abraham,

To my knowledge, no one running Opencast has created either k8s deployment files or Helm charts. That's not to say it's impossible - I know people are using the Docker images in production - just that no one has mentioned k8s or Helm.

Some caveats:
- Opencast is quite happy to run with multiple workers; however, they need to have unique IDs. The same image is fine, but they need to identify themselves uniquely to the admin node for job dispatching to work properly (see the sketch after this list).
- Opencast's admin node is currently a pain point in that it is *not* distributed. You *cannot* safely run multiple admin nodes at the same time. Internal job dispatching is currently done from a central point which does not understand distribution.
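
For k8s specifically, one way to satisfy the unique-ID requirement is a StatefulSet, since every pod gets a stable, unique network identity. A minimal sketch, assuming the worker image from Quay mentioned earlier in the thread (names, tag, and replica count are placeholders, not an official manifest):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: opencast-worker
    spec:
      serviceName: opencast-worker  # a matching headless Service gives each pod its own DNS name
      replicas: 4
      selector:
        matchLabels:
          app: opencast-worker
      template:
        metadata:
          labels:
            app: opencast-worker
        spec:
          containers:
            - name: worker
              # Placeholder image reference; check Quay for the current repository and tags.
              image: quay.io/opencast/worker:4.4
              ports:
                - containerPort: 8080
      # Pods come up as opencast-worker-0, opencast-worker-1, ..., so each
      # worker identifies itself uniquely to the admin node.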

There are long-term plans to fix this, but that's something to keep in mind for your pilot. Our largest adopters probably have some hints about how to get the admin node to scale to huge volumes, but for a pilot you aren't likely to run into any immediate issues. This may sound scary, but our larger adopters are running huge volumes of recordings through the current codebase, so I'm sure we can get you working!

Please let this list know about any questions or issues you run into :)

G


Kristof Keppens

Sep 12, 2018, 3:03:37 AM
to us...@opencast.org
Hi Abraham,

We are currently working on our new Opencast system (to replace our aging Opencast 1.6 installation) on Kubernetes (self-hosted) with the official Opencast Docker images. I'll give a brief explanation of what our setup currently is:

* HA Kubernetes setup with multiple master and multiple worker nodes
* Distributed Opencast setup: 1 x admin, 1 x presentation, 4 x workers (can scale as needed), 2 x ingest, ActiveMQ
* Streaming (Wowza) and MySQL are currently hosted outside of the Kubernetes setup
* Traefik for ingress and SSL certificate handling
* NFS for shared storage between the different Opencast nodes (see the sketch after this list)
* Monitoring on Kubernetes with Prometheus (and Alertmanager for notifications and Grafana for metrics)
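
In manifest form, the NFS piece could look roughly like this; the server address, export path, and size are placeholders for whatever your institution uses:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: opencast-shared
    spec:
      capacity:
        storage: 500Gi            # placeholder size
      accessModes:
        - ReadWriteMany           # every Opencast node mounts the same share
      nfs:
        server: nfs.example.org   # placeholder NFS server
        path: /export/opencast    # placeholder export path
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: opencast-shared
    spec:
      accessModes:
        - ReadWriteMany
      volumeName: opencast-shared  # bind directly to the PV above
      resources:
        requests:
          storage: 500Gi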

At the moment we're still in the pilot phase with this new setup; our conventional Opencast 1.6 installation will remain the main system for this year, so the new one hasn't been extensively tested yet. We are planning some load tests and will move the first capture agents to the new system in the following months.

If you're interested I can share our Kubernetes manifests, but I'll need to revise them a bit, because they contain bits that are specific to our institution. If interest in Opencast on Kubernetes grows, it might be worth seeing whether we can create a Helm chart for deploying Opencast, but I haven't had the chance to look into creating Helm charts yet.

Kind regards

Kristof Keppens

-----------------------------------------------------------------------------------

ICT Department | Educational Technology (ICTO)

Krijgslaan 281/S9 | 9000 Gent | http://icto.ugent.be

Equipment loans: http://icto.ugent.be/nl/content/materiaal-ontlenen

Multimedia Helpdesk Tel.: 09/264 85 73

Stuart Phillipson

Sep 12, 2018, 5:38:08 AM
to us...@opencast.org, autom...@uis.cam.ac.uk
> probably have some hints about how to get the admin node to scale to huge volumes

I'm not sure we did anything that special to admin for scaling. We did split out some functions (not sure how many others are doing this?): we run the ingest services on nodes separate from admin, and we moved the edit process onto its own node. Other than that we gave it more resources than any other VM we use, and it seems pretty OK with 2,500+ hours of recordings going through it a week.
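
For anyone mapping that onto the k8s deployments discussed above, "more resources" would translate into generous requests/limits on the admin container; a purely hypothetical sketch:

    # Inside the admin pod's container spec. The numbers are made up,
    # not a tested recommendation - size them for your own load.
    resources:
      requests:
        cpu: "4"
        memory: 16Gi
      limits:
        cpu: "8"
        memory: 24Gi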
 
Stuart Phillipson | IT Services Media Technologies Team Lead

Office 1
Kilburn Building
University of Manchester
Manchester
M13 9PL
United Kingdom

e-mail: stuart.p...@manchester.ac.uk
Phone: 016130 60478
 

Matthias Neugebauer

Sep 12, 2018, 11:10:45 AM
to Opencast Users, autom...@uis.cam.ac.uk, Stuart.P...@manchester.ac.uk
Hi,

we currently still use Docker Swarm in production, but plan to move to Kubernetes.

As for deployment files: I gave a talk at the last Opencast summit about containerized environments where I also presented an example Kubernetes configuration (see here). Note that these are _not_ production ready! I would really like to have some YAML files / ideally a Helm chart, but haven't had the time to create one. If one of you beats me to it, I would really appreciate it if you would submit those to the opencast-docker repository.

Concerning Greg's comment about unique worker URLs: if you use environment variables to set up the containers and leave out the ORG_OPENCASTPROJECT_SERVER_URL variable, it will be set automatically based on the hostname of the container. This assumes the hostname is routable, which should be the case for Kubernetes. So in our Swarm cluster we have three sets of configuration files: for admin and presentation we set the server URL, since there is only one instance of each, and for the workers we leave it out. All workers thus share the same configuration files, and we are able to easily scale the number of worker instances in our cluster.
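
As a concrete sketch of that split (Swarm-style compose file; the image tags and hostname are placeholders):

    version: "3"
    services:
      admin:
        image: quay.io/opencast/admin:4.4    # placeholder tag
        environment:
          # Single instance, so the URL can be pinned explicitly.
          ORG_OPENCASTPROJECT_SERVER_URL: http://admin.example.org:8080
      worker:
        image: quay.io/opencast/worker:4.4   # placeholder tag
        deploy:
          replicas: 4
        # ORG_OPENCASTPROJECT_SERVER_URL is deliberately left unset: each
        # replica derives it from its own container hostname, so all the
        # workers can share this one configuration and still stay unique.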

Hope that helps.

Best regards
Matthias


Steve Ison

Oct 4, 2018, 9:10:53 AM
to Opencast Users, autom...@uis.cam.ac.uk
Hi Greg,

I'm one of Abraham's team in Cambridge. Could you point me at the documentation for setting the worker ID, please? I'm failing to find it (it doesn't seem to be in the multiple-server setup instructions).

Thanks,
Steve.

Greg Logan

Oct 4, 2018, 11:38:13 AM
to Opencast Users, autom...@uis.cam.ac.uk
Hi Steve,

The worker's ID is the hostname derived from the org.opencastproject.server.url key in custom.properties, so it's not set directly.
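
In the container images this corresponds to the ORG_OPENCASTPROJECT_SERVER_URL environment variable Matthias mentioned above; on k8s you could derive a unique per-pod value with the Downward API. A hypothetical sketch (the service and namespace in the URL are placeholders):

    # In the worker container spec: build a unique URL from the pod name.
    # Inside the container this ends up as org.opencastproject.server.url.
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: ORG_OPENCASTPROJECT_SERVER_URL
        # $(POD_NAME) expands because POD_NAME is defined before it.
        value: http://$(POD_NAME).opencast-worker.default.svc.cluster.local:8080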

G