redis freezes completely, seemingly at random and without a trace


Michel Benevento

Feb 15, 2015, 10:04:47 AM
to redi...@googlegroups.com
I have redis-server 2.8.19 running on Ubuntu 14.10. I believe I've made all the right OS-level changes to make Redis happy, but about a month ago Redis started freezing up completely every 1 or 2 days. I can't say when or why this started, and there is never any trace in the logs of why it happens, nor any particular load or operation that coincides with the freeze (the server is pretty much idle). It seems to happen totally at random, and all that's left to do is kill -9 everything.

I am currently trying to capture more info about the freeze with gdb attached and debug-level logging, but so far it is all completely baffling to me. I'd expect a mature tool like Redis not to be susceptible to this kind of malfunction, so I am on the verge of removing Redis from my architecture entirely. But I'd rather not, so here's one final attempt to figure this out. Does anyone recognize this? Any ideas on how to further analyze or remedy it? My sysadmin fu is decidedly average, so I may be missing something.

Thanks,
Michel

Josiah Carlson

Feb 15, 2015, 3:19:41 PM
to redi...@googlegroups.com
Redis is installed in hundreds of thousands, perhaps millions of environments. Your problem could be caused by one of several hundred different issues. To help us help you, you're going to need to give us more information.

Can you provide a Redis log file for the time before/during/after the lockup? Can you provide us with system-level metrics for before/during/after the lockup (CPU, system memory use, Redis memory use, total system memory, disk usage over time, connections to Redis, ...)? Also: Redis INFO output during normal execution, an idea of what you are using Redis for (ever-growing cache, cache with expiration, something else), number of clients, your Redis conf file, ...

Without some or all of that information, the best guesses anyone can really offer are: 1) you're running low on memory during Redis's fork() for snapshot/AOF rewriting, or 2) a kernel bug.
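
For the first guess, a quick check would look something like this (a sketch, assuming a stock Ubuntu box; the overcommit setting is the usual suspect):

# Kernel overcommit policy: 0 ("heuristic") can make fork() fail or stall
# when memory looks tight; the Redis docs recommend setting it to 1.
cat /proc/sys/vm/overcommit_memory

# Watch memory headroom while forcing a background save by hand.
free -m
redis-cli BGSAVE
redis-cli INFO persistence | grep bgsave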

 - Josiah



Michel Benevento

Feb 15, 2015, 4:24:26 PM
to redi...@googlegroups.com
Thanks for your response; I'll try to answer as much as I can. Like I said, I am in the process of capturing more info about the crash.

I am using Redis for a few things: as the store for Ruby's Sidekiq, and for storing access tokens and chat messages. I use no special Redis features like expiration. But like I said, the server is currently mostly idle (it is the backend for Mealmatic, an iOS app I just released, see www.mealmatic.com), and the log files show nothing in particular, so I have no way of establishing when or why exactly the problem occurs. There is practically no data in the system.

I used apt-get to install Redis, and I run it under a 'redis' user via an init.d script.

Meanwhile, here is my INFO output:

mealmatic@mealmatic-1:~$ redis-cli INFO
# Server
redis_version:2.8.19
redis_git_sha1:bbbe4326
redis_git_dirty:1
redis_build_id:6719065d52dde9a4
redis_mode:standalone
os:Linux 3.16.0-25-generic x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.5
process_id:2126
run_id:7ae99384d01ade23e1cff0d585e88114d93a4523
tcp_port:6379
uptime_in_seconds:32991
uptime_in_days:0
hz:10
lru_clock:14747619
config_file:/etc/redis/redis.conf

# Clients
connected_clients:13
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:762280
used_memory_human:744.41K
used_memory_rss:3620864
used_memory_peak:876824
used_memory_peak_human:856.27K
used_memory_lua:56320
mem_fragmentation_ratio:4.75
mem_allocator:jemalloc-3.6.0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1424030716
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok

# Stats
total_connections_received:63
total_commands_processed:40987
instantaneous_ops_per_sec:0
total_net_input_bytes:1679051
total_net_output_bytes:7937470
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:196
keyspace_misses:37375
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:1179

# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:109.59
used_cpu_user:47.78
used_cpu_sys_children:0.02
used_cpu_user_children:0.00

# Keyspace
db0:keys=29,expires=0,avg_ttl=0


And here are all the non-commented lines from /etc/redis/redis.conf:

daemonize yes
pidfile /var/run/redis/redis-server.pid
port 6379
tcp-backlog 511
bind 127.0.0.1
timeout 0
tcp-keepalive 0
loglevel debug 
logfile /var/log/redis/redis-server.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

I have switched on debug-level logging and it's just a long list of statements like these:

[2126] 15 Feb 22:15:28.469 - DB 0: 29 keys (0 volatile) in 32 slots HT.
[2126] 15 Feb 22:15:28.469 - 12 clients connected (0 slaves), 703632 bytes in use
[2126] 15 Feb 22:15:33.539 - DB 0: 29 keys (0 volatile) in 32 slots HT.
[2126] 15 Feb 22:15:33.541 - 12 clients connected (0 slaves), 703632 bytes in use

The only strange thing is that at startup, it keeps giving me the warning about max open files, even though I have raised the limits for everyone:

[2126] 15 Feb 12:46:12.057 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
[2126] 15 Feb 12:46:12.058 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
[2126] 15 Feb 12:46:12.058 # Current maximum open files is 1024. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.


mealmatic@mealmatic-1:~$ ulimit -n
11000

So I don't know what to do about that, or whether it's relevant. That's it so far; when I get a stack trace or more logging info I will report back.
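
One explanation I've run across for the ulimit mismatch (untested, so treat it as a sketch): limits.conf and friends are applied by PAM at login, but a daemon started from an init.d script never goes through PAM, so it keeps the default of 1024. Supposedly the fix is to raise the limit inside the init script itself, before the daemon starts:

# hypothetical excerpt from /etc/init.d/redis-server -- raise the fd limit
# for this process tree before launching the daemon
ulimit -n 11000
start-stop-daemon --start --chuid redis --exec /usr/bin/redis-server -- /etc/redis/redis.conf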

Salvatore Sanfilippo

Feb 15, 2015, 5:07:50 PM
to Redis DB
Hello, for how long does it freeze when this happens? By the way, we have a lot of debugging documentation and built-in tools for these issues. Everything is documented at http://redis.io/topics/latency. If that is not enough, it should at least provide you with much more insight into the possible cause, so that you can follow up here with some more "state" needed to investigate the issue.
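
Concretely, the built-in tooling amounts to something like this (a sketch; the latency monitor needs 2.8.13 or later, which you have):

# round-trip latency as seen from a client
redis-cli --latency

# enable the internal latency monitor (events slower than 100 ms)
# and ask Redis what it recorded
redis-cli CONFIG SET latency-monitor-threshold 100
redis-cli LATENCY LATEST
redis-cli LATENCY DOCTOR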

Regards,
Salvatore




--
Salvatore 'antirez' Sanfilippo
open source developer - Pivotal http://pivotal.io

"If a system is to have conceptual integrity, someone must control the concepts."
       — Fred Brooks, "The Mythical Man-Month", 1975.

Michel Benevento

Feb 15, 2015, 5:22:17 PM
to redi...@googlegroups.com
It just completely stops working and doesn't resume at all (at least not for a few hours). I cannot stop the server normally and need to kill the process.
Whenever it has happened, I can see several CLOSE_WAIT connections in lsof, but I am thinking this is an effect rather than a cause.

This doesn't seem to be a latency issue (by which I mean performance degradation under load), since the entire server is pretty much idle and it all works beautifully when testing. But like I said, I am currently running the server with debug logging and gdb attached, so perhaps that will teach us something more.
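
For the next freeze, the plan is to capture a bit more state before the kill -9, roughly along these lines (a sketch; assuming the pid is still 2126 as in the logs above):

# open connections held by the stuck process (where the CLOSE_WAITs pile up)
sudo lsof -a -nP -p 2126 -i

# scheduler state of the process: R (running), S (sleeping), D (blocked on I/O)
grep State /proc/2126/status

# whether the process is still making any system calls at all
sudo strace -tt -f -p 2126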

Thanks for the response, I appreciate it.

Michel





Salvatore Sanfilippo

Feb 15, 2015, 5:31:57 PM
to Redis DB
Hello Michel, I can confirm that this definitely does not look like a latency issue, as you said, but something different, either at the Redis level (an incredibly rare bug, since we have had no reports of anything like it in recent years, if I remember correctly) or something happening at the OS / networking level. One of the most interesting things to obtain is indeed a stack trace, possibly by attaching gdb to the process with "gdb -p", but I see you are already trying to do this.

Thanks,
Salvatore

Josiah Carlson

Feb 15, 2015, 5:45:26 PM
to redi...@googlegroups.com
Be careful to set your logging level back to VERBOSE or NOTICE after this process, as DEBUG *will* fill your disk if you give it enough time and reason to do so.
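
A minimal sketch of flipping it back without a restart:

redis-cli CONFIG SET loglevel notice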

Given the low memory use, low command throughput, absence of AOF, etc., there doesn't seem to be enough going on with Redis for these issues to be caused by Redis itself (except by a previously unknown rare bug, as Salvatore has mentioned).

If it isn't difficult, I would recommend switching machines to see if the problem might be caused by something in your environment, as several previously reported "Redis bugs" ended up being hardware issues (bad memory, bad NIC, bad hard drive, ...). Given that others haven't reported this issue despite pushing Redis harder, a hardware issue is the most likely cause given the existing data.

 - Josiah



Michel Benevento

Feb 15, 2015, 6:10:34 PM
to redi...@googlegroups.com
Thanks for the heads up. 

The server is a single VPS (2 cores, 4GB RAM, 150GB SSD) containing my entire server setup (nginx, unicorn, postgresql, redis). In normal/low-load conditions it uses about 30-40% of memory. I am not sure what you mean by 'command throughput' or what AOF is, so if you could point me in the right direction for getting that info I would be most grateful.

I have seen this error occur in two different installations of Redis (I upgraded from 2.8.xx to 2.8.19 to see if that would remedy it). The rest of the system has been rock solid, and in fact so had Redis for well over a year in development (on the same machine). So I do think some sort of environment change has introduced this, but I have been running only regular apt-get upgrades. I am no expert by any stretch, but if it were a hardware issue, wouldn't other parts of the system have acted up as well, not just Redis? I am not even sure a VPS always uses the same physical RAM between reboots.

Anyway, we’ll have to wait a while for the freeze to reoccur and then I’ll report back.

Brgds,
Michel


Michel Benevento

Feb 16, 2015, 1:42:22 PM
to redi...@googlegroups.com
It has happened again: after about a day and a half of uptime, the server got stuck.

The backtrace:

#0 0x00007fff73bfd091 in gettimeofday ()
#1 0x000000000041adce in ?? ()
#2 0x000000000041b0e4 in aeProcessEvents ()
#3 0x000000000041b30b in aeMain ()
#4 0x0000000000423c0e in main ()

info registers:

rax 0x200 512
rbx 0x3 3
rcx 0x1 1
rdx 0x17a2c7 1548999
rsi 0x1 1
rdi 0x7fff73bf5340 140735135306560
rbp 0x7fff73bf5330 0x7fff73bf5330
rsp 0x7fff73bf52e8 0x7fff73bf52e8
r8 0xffffffffff5ff040 -10489792
r9 0x1b29f3b4d2f5e4 7645951058703844
r10 0x7fe368 8381288
r11 0x0 0
r12 0x1 1
r13 0xffffffffff7ff000 -8392704
r14 0x2 2
r15 0x0 0
rip 0x7fff73bfd091 0x7fff73bfd091 <gettimeofday+369>
eflags 0x293 [ CF AF SF IF ]
cs 0x33 51
ss 0x2b 43
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0


I also have a 31MB core dump that I can make available if needed.

Really hope this helps!

Brgds,
Michel

Josiah Carlson

Feb 16, 2015, 2:03:07 PM
to redi...@googlegroups.com
I approach command throughput by looking at a few different data points in the INFO output. Some include (but are not limited to):
rdb_changes_since_last_save - how many changes since the last snapshot
total_commands_processed combined with uptime_in_seconds - gives an average qps (yours is barely over 1 command/second executed; worked through below)
instantaneous_ops_per_sec - gives an idea of current load
total_net_input_bytes and total_net_output_bytes - gives an idea of command and data size (roughly 40 bytes/command with roughly 200 bytes/response)
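
As a worked example against the INFO you posted: 40987 total_commands_processed over 32991 uptime_in_seconds is about 1.24 commands/second. A throwaway one-liner for the same calculation (a sketch; field names as in the 2.8 INFO output):

redis-cli INFO | awk -F: '/^total_commands_processed:/ {c=$2} /^uptime_in_seconds:/ {u=$2} END {print c/u " commands/sec average"}'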

Long story short: Redis isn't really doing anything. This is a weird bug.


AOF is an abbreviation for "Append-Only File", which is a persistence option. For some workloads, AOF can offer better persistence with better overall resource utilization than snapshots. You can read more about it here: http://redis.io/topics/persistence . You do so few reads and writes that it wouldn't matter much either way.
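
If you ever do want to try it, it's only a couple of settings (a sketch; both are changeable at runtime on 2.8):

# turn on AOF now; also set "appendonly yes" in redis.conf so it survives restarts
redis-cli CONFIG SET appendonly yes
redis-cli CONFIG SET appendfsync everysec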

 - Josiah

Matt Palmer

Feb 16, 2015, 3:06:33 PM
to redi...@googlegroups.com
On Mon, Feb 16, 2015 at 07:42:07PM +0100, Michel Benevento wrote:
> It has happened again. After about 1 day and a half of uptime the server has gotten stuck again.
>
> The backtrace:
>
> #0 0x00007fff73bfd091 in gettimeofday ()

That's... not Redis. I'm having trouble coming up with *any* sort of
scenario in which gettimeofday could possibly hang, but whatever theories
I've got, they all involve rather unpleasant bugs in fairly fundamental
parts of the system.

> I also have a 31MB core dump that I can make available if needed.

Install the debugging symbols for redis and glibc (IIRC you said you were on
Ubuntu, so the packages you want are redis-server-dbg and libc6-dbg), then
load up the core dump in gdb (gdb redis-server <corefile>) and take another
backtrace. It will have function arguments and all sorts of other useful
info, and should, hopefully, fill in that ??.
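
In shell terms, roughly (a sketch; adjust the core file name to whatever yours is called):

sudo apt-get install redis-server-dbg libc6-dbg   # (redis-server-dbg turns out not to exist; see below)
gdb /usr/bin/redis-server core.2126
(gdb) bt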

- Matt

--
I have always wished that my computer would be as easy to use as my
telephone. My wish has come true. I no longer know how to use my telephone.
-- Bjarne Stroustrup

Michel Benevento

Feb 16, 2015, 3:17:09 PM
to redi...@googlegroups.com
Where can I find redis-server-dbg? I get

E: Unable to locate package redis-server-dbg

Michel

Matt Palmer

Feb 16, 2015, 3:23:32 PM
to redi...@googlegroups.com
On Mon, Feb 16, 2015 at 09:16:59PM +0100, Michel Benevento wrote:
> Where can I find redis-server-dbg? I get
>
> E: Unable to locate package redis-server-dbg

Gah. They're not building a debug symbols package. How inconvenient.

Oh well, skip that one; hopefully the glibc symbols will provide enough additional help.
additional assistance.

- Matt

Michel Benevento

Feb 16, 2015, 3:40:49 PM
to redi...@googlegroups.com
I am not at all sure what I'm doing here, but it looks pretty strange and not what we're looking for, I guess.

….
Reading symbols from redis-server...(no debugging symbols found)...done.

warning: core file may not match specified executable file.
[New LWP 2127]
[New LWP 2128]
[New LWP 2126]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/bin/redis-server 127.0.0.1:6379'.
Program terminated with signal SIGINT, Interrupt.
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x0000000000454adb in bioProcessBackgroundJobs ()
#2 0x00007f108ce2b0a5 in start_thread (arg=0x7f108b7ff700) at pthread_create.c:309
#3 0x00007f108cb5888d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Matt Palmer

Feb 16, 2015, 7:33:07 PM
to redi...@googlegroups.com
On Mon, Feb 16, 2015 at 09:40:33PM +0100, Michel Benevento wrote:
> I am not at all sure what I’m doing here, but it looks pretty strange and
> not what we’re looking for I guess.

No, actually, this looks possibly quite useful.

> ….
> Reading symbols from redis-server...(no debugging symbols found)...done.
>
> warning: core file may not match specified executable file.

That's... possibly not good. But we're getting sensible-looking backtraces,
so let us ignore that for now.

> [New LWP 2127]
> [New LWP 2128]
> [New LWP 2126]

Three threads.

> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> Core was generated by `/usr/bin/redis-server 127.0.0.1:6379'.
> Program terminated with signal SIGINT, Interrupt.
> #0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
> 185 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.
> (gdb) bt
> #0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
> #1 0x0000000000454adb in bioProcessBackgroundJobs ()
> #2 0x00007f108ce2b0a5 in start_thread (arg=0x7f108b7ff700) at pthread_create.c:309
> #3 0x00007f108cb5888d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

This is a backtrace for *one* of the threads, and unfortunately it's one
that is blocked on something, possibly a lock. We'll need to see what all
of the threads are up to, so start up GDB again, and run "thread apply all
bt". That should dump a backtrace for all threads, which should make it
more obvious what's happening where.

- Matt

--
Q: Why do Marxists only drink herbal tea?
A: Because proper tea is theft.
-- Chris Suslowicz, in the Monastery

Michel Benevento

Feb 16, 2015, 7:54:55 PM
to redi...@googlegroups.com
mealmatic@mealmatic-1:~$ gdb redis-server core.2126
GNU gdb (Ubuntu 7.8-1ubuntu4) 7.8.0.20141001-cvs
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from redis-server...(no debugging symbols found)...done.

warning: core file may not match specified executable file.
[New LWP 2127]
[New LWP 2128]
[New LWP 2126]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/bin/redis-server 127.0.0.1:6379'.
Program terminated with signal SIGINT, Interrupt.
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.
(gdb) thread apply all bt

Thread 3 (Thread 0x7f108d760780 (LWP 2126)):
#0 0x00007fff73bfd091 in gettimeofday ()
#1 0x000000000041adce in ?? ()
#2 0x000000000041b0e4 in aeProcessEvents ()
#3 0x000000000041b30b in aeMain ()
#4 0x0000000000423c0e in main ()

Thread 2 (Thread 0x7f108affe700 (LWP 2128)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x0000000000454adb in bioProcessBackgroundJobs ()
#2 0x00007f108ce2b0a5 in start_thread (arg=0x7f108affe700) at pthread_create.c:309
#3 0x00007f108cb5888d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 1 (Thread 0x7f108b7ff700 (LWP 2127)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x0000000000454adb in bioProcessBackgroundJobs ()
#2 0x00007f108ce2b0a5 in start_thread (arg=0x7f108b7ff700) at pthread_create.c:309
#3 0x00007f108cb5888d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

So, still the question marks. I do see a '...pthread_cond_wait.S: No such file or directory' message that I don't like; googling around, it seems to come up in connection with a lot of hangs, freezes and locks.

Question to the list: should I switch Linux distributions to make this go away?

Brgds,
Michel



Josiah Carlson

Feb 17, 2015, 12:20:01 AM
to redi...@googlegroups.com
A cursory search for issues with gettimeofday() suggests two scenarios where this could happen and has happened before (not to Redis specifically): during interrupt handling, or if the system clock was changed/is being changed.

It doesn't look like gettimeofday() is being run as part of a signal handler (the ?? between aeProcessEvents() and gettimeofday() is an inlined aeGetTime(), which calls gettimeofday() and unpacks the result), so at least that possibility might be eliminated. Though I wonder if gettimeofday() could hang if Redis was *receiving* a signal during the call... it might be worth it to run strace to watch for signals and the gettimeofday() system call if the lockups continue, just for another data point.
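
Something like the following would do (a sketch; note that on x86-64 Linux gettimeofday() is normally serviced through the vDSO, so the call itself may never show up in strace output, but delivered signals will):

# -tt timestamps every event, -f follows all threads; signals appear as
# lines like "--- SIGTERM {si_signo=...} ---"
sudo strace -f -tt -p 2126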

If the host machine you are using has a problem with its system clock, it is possible (if unlikely) that Redis is being caught up in a gettimeofday() hang caused by a clock adjustment on the host or in your VM. Whether or not a restart will resolve the issue (by physically moving the VM to a different machine) depends on the provider (generally restarting a VM doesn't move it to different hardware, with the recently added option in AWS being the exception), and on whether this is the actual problem.
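
A cheap way to probe the clock theory from inside the VM (a sketch; assumes ntpd and syslog are present, which may not match your setup):

# large offsets or jitter here would support the clock-adjustment theory
ntpq -p

# ntpd logs a "time reset" line to syslog whenever it steps the clock
grep -i 'time reset' /var/log/syslog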


I'd try running on a different piece of physical hardware (new VPS, same OS) to eliminate that as a possible source of the issue, or at least try running just Redis itself on different hardware temporarily.

 - Josiah






Josh Lohanes

Feb 20, 2018, 4:56:28 AM
to Redis DB
Did you manage to solve this problem? I'm having similar issues: every couple of days Redis stops working.