warm restart


David Karlsen

Nov 30, 2019, 12:03:23 PM
to memcached
Reading https://github.com/memcached/memcached/wiki/WarmRestart it is a bit unclear to me whether the mount *has* to be tmpfs backed, or whether it can be a normal filesystem like xfs.
We are looking into running memcached through Kubernetes/containers, and a tmpfs volume would be wiped on pod re-creation.

Roberto Spadim

Nov 30, 2019, 12:06:45 PM
to memc...@googlegroups.com
It's a file system. The point of warm restart is to restart the server and load the previous data. How do you do that? Kill the process with the proper signal.



dormando

Nov 30, 2019, 9:58:31 PM
to memcached
Hey,

It's only guaranteed to work in a ram disk. It will "work" on anything
else, but you'll lose deterministic performance. Worst case it'll burn out
whatever device is underlying because it's not optimized for anything but
RAM.

So, two options for this situation:

1) I'd hope there's some way to bind mount an underlying tmpfs. With
almost all container systems there's some method of exposing an underlying
path, though I have a low opinion of Kube so maybe not.
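
In Kubernetes terms, an emptyDir volume with medium: Memory is exactly such a tmpfs exposed into the pod, with the caveat from the original question that it is wiped when the pod is recreated, not just when the container restarts. A minimal sketch, names hypothetical:

```yaml
# Hypothetical pod fragment: /cache-state inside the container is tmpfs.
# An emptyDir survives container restarts within the same pod, but NOT
# pod re-creation.
apiVersion: v1
kind: Pod
metadata:
  name: memcached-warm
spec:
  containers:
    - name: memcached
      image: memcached
      args: ["-m", "768", "-e", "/cache-state/memory_file"]
      volumeMounts:
        - name: cache-state
          mountPath: /cache-state
  volumes:
    - name: cache-state
      emptyDir:
        medium: Memory
```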

2) It just creates two normal files: the path you give it, plus a .meta
file that appears during graceful shutdown. After shutdown you can copy
these (perhaps with pigz or something) to a filesystem, then restore them
to in-pod tmpfs before starting up again. It'll increase the downtime, but
it'll work.
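
That second option could be sketched roughly like this, with hypothetical stand-in paths (the real ones depend on your volume layout):

```shell
# Sketch of the copy-after-shutdown approach: archive the mmap file and its
# .meta companion to durable storage, then restore them into the fresh
# in-pod tmpfs before the next start. Paths here are demo stand-ins.
set -e

STATE=/tmp/demo-cache-state    # stands in for the in-pod tmpfs mount
DURABLE=/tmp/demo-durable      # stands in for a persistent volume

mkdir -p "$STATE" "$DURABLE"
echo "fake mmap contents" > "$STATE/memory_file"
echo "fake metadata" > "$STATE/memory_file.meta"

# After graceful shutdown: compress into durable storage (swap gzip for
# pigz if the file is large).
tar -C "$STATE" -czf "$DURABLE/state.tar.gz" memory_file memory_file.meta

# Pod is recreated; the tmpfs comes up empty.
rm -f "$STATE"/memory_file "$STATE"/memory_file.meta

# Before starting memcached again: restore into the new tmpfs.
tar -C "$STATE" -xzf "$DURABLE/state.tar.gz"
cat "$STATE/memory_file"
```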

I guess.. 3) For completeness, it also works on a DCPMM DAX mount, which
survives reboots and acts as a "filesystem". You'd need to have the right
system and memory and so on.

-Dormando


David Karlsen

Nov 30, 2019, 10:15:45 PM
to memc...@googlegroups.com
Won't the cache be written to the file at shutdown, and not continuously while running?


dormando

Nov 30, 2019, 10:25:32 PM
to memc...@googlegroups.com
The disk file is memory mapped; that is the actual memory, now external to
memcached. There's no flush at shutdown, it just gracefully stops all
in-flight actions and then does a fast data fixup on restart.

So it does continually read/write to that file. As I said earlier you can
create an "equivalent" to writing the file at shutdown by moving the file
after shutdown :)

David Karlsen

Dec 1, 2019, 5:33:51 AM
to memc...@googlegroups.com
Thank you - that explains it well. I'll look around to see if I can create a "durable" tmpfs in k8s via a storageclass :)


dormando

Dec 4, 2019, 2:20:29 AM
to memc...@googlegroups.com
If you succeed you should share with the class :)

David Karlsen

Dec 6, 2019, 10:39:05 AM
to memcached
So far I am stuck on:

k logs test-memcached-0 
ftruncate failed: Bad file descriptor


  - memcached
    - -m 768m
    - -I 1m
    - -v
    - -e /cache-state/memory_file

-vvv does not reveal anything interesting.
What could be the cause of this?


dormando

Dec 6, 2019, 5:51:38 PM
to memc...@googlegroups.com
It's going to use some caps (opening files, mmap'ing them, shared memory,
etc). I don't know what maps to which specific thing.

That error looks like an omission on my part..

mmap_fd = open(file, O_RDWR|O_CREAT, S_IRWXU);
if (ftruncate(mmap_fd, limit) != 0) {
    perror("ftruncate failed");
    abort();
}

missing the error check after open.

Try adding a:

if (mmap_fd == -1) {
    perror("failed to open file for mmap");
    abort();
}

between the open and if (ftruncate) lines, which will give you the real
error. I'll get that fixed upstream.

David Karlsen

Dec 9, 2019, 8:31:53 AM
to memcached
OK, so I compiled with that change (doing the same steps as in the .travis.yml) - but it seems to use shared libraries. Is there any way to compile this statically?
I also created a PR with the same change: https://github.com/memcached/memcached/pull/587



David Karlsen

Dec 9, 2019, 8:49:36 AM
to memcached
OK, I hacked it together apt-installing some shared libs.
With the patch applied I get:

k logs test-memcached-0
failed to open file for mmap: No such file or directory

Which is a bit strange - shouldn't the file be created dynamically if it doesn't exist?

dormando

Dec 9, 2019, 3:32:18 PM
to memcached
Is the directory missing?



David Karlsen

Dec 9, 2019, 3:51:40 PM
to memc...@googlegroups.com
No. It is there and writable.


dormando

Dec 9, 2019, 3:54:38 PM
to memc...@googlegroups.com
I'm not sure offhand. From my perspective I'd triple check what the
path/file it's trying to open is (add a printf or something), else you're
in container/kube territory and that's a bit beyond me. I assume it logs
something on policy violations, or can be made to do so.

David Karlsen

Dec 9, 2019, 4:50:51 PM
to memc...@googlegroups.com
OK, found it: apparently -e needs to be followed by the path without any whitespace

David Karlsen

Dec 9, 2019, 5:58:36 PM
to memc...@googlegroups.com
-e /cache_state/memory_file <-- fails
-e/cache_state/memory_file <-- OK
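
For anyone hitting this later, a plausible explanation (my reading; it is not confirmed in the thread) is the Kubernetes args: list, not memcached itself. A single list item like "- -e /cache-state/memory_file" is passed as one argv entry, so getopt treats everything after -e, including the leading space, as the path, and open() fails with "No such file or directory". The glued form "-e/path" avoids the space, and splitting flag and value into separate items should make the spaced style work too:

```yaml
# Hypothetical corrected fragment: each flag and its value as its own
# argv entry, so no literal space ends up inside the path.
args:
  - "-m"
  - "768"
  - "-I"
  - "1m"
  - "-v"
  - "-e"
  - "/cache-state/memory_file"
```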

dormando

Dec 9, 2019, 6:16:43 PM
to memc...@googlegroups.com
weird... I suspect that's something in your environment. It's certainly
not that way anywhere I've ever run it. I'm using it right now with a
space :)
