phpredis exceptions - infrequent read error on connection


Siddharth Misra

Jul 29, 2014, 10:16:23 AM
to redi...@googlegroups.com
Hi,

I am using phpredis (https://github.com/nicolasff/phpredis) on my site for storing sessions. Recently I noticed a bunch of errors of the following type in the Apache error log:
PHP Fatal error:  Uncaught exception 'RedisException' with message 'read error on connection' in [no active file]:0\nStack trace:\n#0 {main}\n  thrown in [no active file] on line 0, referer: <url>

This is not happening very frequently: anywhere between 2 and 10 times a day, against hundreds of thousands of HTTP requests every day.
I have not been able to find anything conclusive on this so far. Can anyone shed some light on it?
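For context, a phpredis session setup along these lines normally lives in php.ini. This is a sketch: the host, port, and save_path options below are illustrative placeholders, not values taken from this thread:

```ini
; php.ini -- store PHP sessions in Redis via phpredis
; 127.0.0.1:6379 and the timeout are placeholder values
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?timeout=2.5&persistent=1"
```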

Thanks
-Sid

Thorsten Drönner

Jul 29, 2014, 10:29:29 AM
to redi...@googlegroups.com
Looks like not enough client connections.
Do "config get maxclients" on your master.
Read this: http://redis.io/topics/clients
Especially the "Maximum number of clients" section about the ulimit.

Adjust all limits accordingly and the error shouldn't occur anymore.
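The checks above can be run like this, against your own Redis and web servers (output will obviously vary per machine):

```
# On the Redis host: configured client limit and current connection counts
redis-cli config get maxclients
redis-cli info clients

# On the web server: per-process open-file limit (each socket counts against it)
ulimit -n
```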

Siddharth Misra

Jul 29, 2014, 11:44:40 AM
to redi...@googlegroups.com
Hi.

The maxclients setting is 50,000.
ulimit -a gives the following output on the server:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 245387
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 212992
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 245387
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Looking at these figures, I don't think that is the issue. Is there something I am missing?

Josiah Carlson

Jul 29, 2014, 1:30:26 PM
to redi...@googlegroups.com
Are you doing any sort of connection pooling? Do these errors come up during heavy load?

 - Josiah



Siddharth Misra

Jul 30, 2014, 2:05:34 AM
to redi...@googlegroups.com
Hi,

No, we are not doing any connection pooling. As for heavy load, that might be possible, but I am not sure and will need to check.

Nitesh Jindal

Dec 5, 2014, 1:45:26 AM
to redi...@googlegroups.com


I am facing the same issue. I am not using any connection pooling and the load is not that heavy. Any further update on this?

Thanks,
Nitesh Jindal

Josiah Carlson

Dec 5, 2014, 12:47:57 PM
to redi...@googlegroups.com
If you are not using connection pooling and you are seeing the same issue as Siddharth, then your problem is that your server is trying to open and close too many outgoing connections in a short period of time. Note that this is not a problem with Redis; it's a problem with how your language runtime and Redis client library handle Redis connections.

You have several options, most effective solutions first:

1. Start pooling your connections
2. Use a proxy to pool your connections
3. Tell your OS to recycle outgoing ports faster
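For option 1, phpredis itself doesn't ship a connection pool, but its persistent connections get most of the benefit: pconnect() keeps one socket open per worker process instead of opening a fresh one on every request. A minimal sketch, assuming a local Redis; the host, port, and timeout values are placeholders:

```php
<?php
// Reuse one Redis connection per PHP worker rather than one per request.
$redis = new Redis();

try {
    // pconnect() keeps the socket open across requests in this worker;
    // 127.0.0.1:6379 and the 2.5s connect timeout are placeholder values.
    $redis->pconnect('127.0.0.1', 6379, 2.5);
    // Fail fast on stalled reads instead of hanging the request.
    $redis->setOption(Redis::OPT_READ_TIMEOUT, 2.5);
} catch (RedisException $e) {
    error_log('Redis connection failed: ' . $e->getMessage());
}
```

With Apache prefork or PHP-FPM this caps outgoing Redis connections at roughly one per worker, which removes the open/close churn that option 3 works around at the OS level.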

 - Josiah



Siddharth Misra

Dec 6, 2014, 7:02:02 AM
to redi...@googlegroups.com
Hi Nitesh,

I was going through logs yesterday and found that the errors suddenly stopped a little over a week ago. I am checking which change on our side caused this.

@Josiah
Your point #3 might be right. I experienced this once earlier, and the issue was fixed by making the following changes on the web server:
sudo sysctl -w net.ipv4.tcp_fin_timeout=5
sudo sysctl -w net.ipv4.ip_local_port_range="12000 65535"
sudo sysctl -w net.ipv4.tcp_tw_recycle=1
sudo sysctl -w net.ipv4.tcp_tw_reuse=1

I made these changes several years ago when I was moving sessions from a database to Redis using phpredis's built-in session handler. For some reason, the process was dying at about 28,000 sessions (there were over a million). I found that I was hitting the OS-defined limits on outgoing TCP ports; after the above changes the whole migration worked like a charm.
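The ~28,000 figure is suggestive: on many Linux kernels the default ephemeral port range (net.ipv4.ip_local_port_range) is 32768–61000, roughly 28,000 outgoing ports, so a process opening one short-lived connection per session would stall right around there once every port was sitting in TIME_WAIT. A quick check of the arithmetic (the ranges are the common defaults, not values confirmed in this thread):

```python
# Common Linux default ephemeral port range (net.ipv4.ip_local_port_range)
default_low, default_high = 32768, 61000
print(default_high - default_low)  # 28232 ports -- right where the migration stalled

# The widened range from the sysctl change above
new_low, new_high = 12000, 65535
print(new_high - new_low)  # 53535 ports
```

One caveat worth noting on the sysctl set above: net.ipv4.tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux 4.12, so tcp_tw_reuse plus a wider port range is the safer combination.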

I will try this out myself if I see the error coming up again and report back with results.