I am trying to get Elasticsearch to run via Docker swarm. I keep getting the error: "memory locking requested for elasticsearch process but memory is not locked".
I can get an instance to run via docker-compose, but when I run docker stack deploy the 'ulimits' option in my compose file is ignored.
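For reference, this is the shape of the compose section that works for me under docker-compose (the service name and image tag here are placeholders, not my exact file):

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1  # placeholder tag
    environment:
      - bootstrap.memory_lock=true   # makes Elasticsearch request mlockall
    ulimits:           # honored by docker-compose, but dropped by docker stack deploy
      memlock:
        soft: -1       # -1 means unlimited
        hard: -1
```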
It seems like this should be possible via this PR
, and that with the right combination of Docker engine and compose versions I might be able to use the ulimits option from my compose files. I think that would be the best solution, because then I can control each service's resource usage.
But I also see from the Elastic documentation
and various SO answers that you might instead need to modify the docker daemon unit to set LimitMEMLOCK=infinity. You have some good documentation on how to do that here
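For context, my drop-in unit follows the standard systemd override location and contains only the memlock limit (the path below is the conventional one; adjust if your docs place it elsewhere):

```
# /etc/systemd/system/docker.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

After creating it I ran systemctl daemon-reload and systemctl restart docker so the override would take effect.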
but when I deploy a new instance with this drop-in unit and then run:
docker run --rm centos:8 /bin/bash -c 'ulimit -Hl && ulimit -Sl'
I get 64 from each, which I believe indicates that I am not actually changing the MEMLOCK setting. When I run systemctl status docker.service I can see my drop-in unit listed, so the drop-in file is being created and loaded by systemd, but Elasticsearch still fails with the memory-lock error.
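For completeness, these are the extra checks I can think of (commands I believe are standard; corrections welcome): systemctl show confirms the limit systemd actually applied to the daemon, and an explicit --ulimit override confirms whether the container runtime will honor a higher limit at all:

```shell
# What limit did systemd actually give dockerd?
systemctl show docker --property=LimitMEMLOCK

# Does an explicit per-container override work outside swarm?
docker run --rm --ulimit memlock=-1:-1 centos:8 /bin/bash -c 'ulimit -Hl && ulimit -Sl'
```

My understanding is that even with LimitMEMLOCK=infinity on the daemon, containers still start with the runtime's default memlock limit unless something passes --ulimit (or the compose ulimits option) through, which is exactly the part that swarm seems to drop.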
I would appreciate any insight or guidance on where else I should look when troubleshooting why this does not run in swarm mode, and on whether I should upgrade my Docker engine so that I can make use of the ulimits I define in my compose files.