Moving data from binary install to Docker


Lex Medeiros

Aug 1, 2023, 12:22:36 PM
to dot...@googlegroups.com
Dear Group:

I hope everyone is having a great time right now! 

 I almost had a stroke when I saw how easy it is nowadays to have a dotCMS instance running with Docker!


I stored the yml file in a local directory and ran this command from that directory:


docker-compose up -d

Then, in no time I got a dotCMS instance running. I was able to log into the backend with the passwords set in the yml file! What a difference from the binary installation!

As I am now doing everything in Docker I have the following questions about moving data from the binary install to Docker:

To persist the asset directory, the Elasticsearch index data, and the PostgreSQL database, is it better to use persistent volumes or bind mounts on the local machine for the dotCMS containers?


I assume that, coming from a binary installation, once dotCMS is installed with Docker I have to get into the PostgreSQL container, drop the database, and restore the PostgreSQL backup I made of my binary installation? And also put the asset directory in the right path? Theoretically, that would complete my “migration” to Docker, and the dotCMS instance would live happily ever after!

Lastly, is there a docker-compose.yml file with an nginx reverse proxy configuration? 

What is the best directory on a Linux machine to store the dotCMS docker-compose.yml file? The documentation recommends creating all the persistent volumes inside the directory that holds the yml file.  https://www.dotcms.com/docs/latest/restore-dataset-using-docker


Thank you for all your help.

Alex

Todd Jacobsen

Aug 17, 2023, 3:52:53 PM
to dotCMS User Group
Alex,

Docker is the best. 

We use bind mounts for the "assets" directory. What you do with opensearch and postgres is up to you - you could use docker volumes for both, and use `pg_dump` to make database backups that you save outside of the docker volume.
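For illustration, a minimal sketch of how that could look in the volumes sections of docker-compose.yml (the service names, host path, and container paths are my assumptions based on the single-node example - check them against your own file):

  services:
    dotcms:
      volumes:
        - /srv/dotcms/assets:/data/shared          # bind mount for assets (example host path)
    db:
      volumes:
        - dbdata:/var/lib/postgresql/data          # named docker volume for postgres
    opensearch:
      volumes:
        - esdata:/usr/share/opensearch/data        # named docker volume for the index
  volumes:
    dbdata:
    esdata: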

I am not aware of a best practice for where to store docker container configs and data. The bind-mounted "assets" dir and its files need to be writable by the dotCMS docker user (see https://github.com/dotCMS/core/blob/master/docker/dotcms/Dockerfile#L59-L63), so it can be nice to:
- create a "dotcms" linux user/group on the docker host with UID/GID 65001
- allow that user to run docker containers, usually by adding it to the "docker" group (though that is distro-dependent)
and then put your docker-compose files in directories like ~dotcms/containers/prod-2301/ - a rough sketch follows.
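For example, run as root on the docker host (the paths are just examples, and the exact commands vary by distro):

  groupadd -g 65001 dotcms
  useradd -u 65001 -g 65001 -m dotcms
  usermod -aG docker dotcms                     # lets the user run docker containers
  mkdir -p ~dotcms/containers/prod-2301
  mkdir -p /srv/dotcms/assets
  chown -R 65001:65001 /srv/dotcms/assets       # matches the in-container dotcms UID/GID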

Create a pg_dump file from your current site using these options:
pg_dump --no-owner --clean 
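For example, something along these lines against your binary install's database (the host, user, and database name are placeholders for your own settings):

  pg_dump --no-owner --clean -h localhost -U dotcms -d dotcms > dotcms-backup.sql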

One nice way to import the pg_dump data from your old version, when using a docker compose file like https://github.com/dotCMS/core/tree/master/docker/docker-compose-examples/single-node, is to:
- edit docker-compose.yml to expose the postgres server port 5432
- run docker compose up -d db to start ONLY the postgres server - this prevents dotCMS from initializing a clean database
- load your pg_dump database data
- run "clean-tables.sql" - see attached
- run docker compose up -d to start the opensearch and dotcms containers
(a rough command sequence is sketched below)
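Something along these lines, assuming the postgres service is called "db", port 5432 is published to the host, and the user/database names match whatever your compose file sets (the ones here are placeholders):

  docker compose up -d db
  psql -h localhost -p 5432 -U dotcms -d dotcms -f dotcms-backup.sql
  psql -h localhost -p 5432 -U dotcms -d dotcms -f clean-tables.sql
  docker compose up -d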

Logging into the backend from a remote (not localhost) location requires HTTPS for secure cookies. Port 8443 in the container uses HTTPS with a valid cert for https://local.dotcms.site/
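One way to take advantage of that cert while testing - just my own habit, not an official recommendation - is to point local.dotcms.site at the docker host in /etc/hosts on the machine you browse from, assuming 8443 is published in your compose file:

  echo "192.0.2.10  local.dotcms.site" | sudo tee -a /etc/hosts    # replace with your host's IP
  # then open https://local.dotcms.site:8443/dotAdmin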

We do not provide an example for nginx or any reverse proxy as there are many factors, primarily how to manage your SSL certs, that are beyond the purview of the application server. Once you find an nginx docker config that works for you, it can be added to your dotCMS docker-compose.yml file if you wish to run everything on one server.

Good luck!
Todd
Attachment: clean-tables.sql