Can websocket client detect death of server?


JohnMudd

Jul 19, 2015, 12:36:30 PM
to python-...@googlegroups.com
I'm using Tornado 4.2.1 for websockets in my client and server. Is there a way for the client to detect that the server has died without having to exercise the connection? Something like the server's on_close(), but in the client.

John

Ben Darnell

Jul 19, 2015, 2:05:46 PM
to Tornado Mailing List
Tornado's websocket client reports a closed connection by returning None from read_message(). Note that there is no guarantee that this will occur in a timely fashion; the most reliable way to detect a dropped connection is to use the connection at regular intervals and consider the connection closed if you don't get a response in a reasonable amount of time. (The websocket protocol has built-in 'ping' support for this but it's not exposed on the client side of Tornado's implementation, so you'll need to roll your own application-level ping).
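Purely to illustrate the timeout bookkeeping behind that advice, here's a minimal sketch in plain Python (not Tornado API; the class and method names are made up for the example):

```python
import time

class LivenessMonitor:
    """Tracks when the peer was last heard from, so the application
    can declare the connection dead after a quiet period."""

    def __init__(self, timeout_seconds, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock  # injectable clock, handy for testing
        self.last_seen = self.clock()

    def record_activity(self):
        """Call on every message (or ping reply) received from the peer."""
        self.last_seen = self.clock()

    def is_dead(self):
        """True if nothing has been heard within the timeout."""
        return self.clock() - self.last_seen > self.timeout
```

The client would call record_activity() whenever read_message() yields something, send an application-level "ping" message on an interval shorter than the timeout, and treat either is_dead() or read_message() returning None as a closed connection.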

-Ben

On Sun, Jul 19, 2015 at 12:36 PM, JohnMudd <john...@gmail.com> wrote:
I'm using Tornado 4.2.1 for websockets in my client and server. Is there a way for the client to detect that the server has died without having to exercise the connection? Something like the server's on_close(), but in the client.

John

--
You received this message because you are subscribed to the Google Groups "Tornado Web Server" group.

JohnMudd

Jul 19, 2015, 2:18:32 PM
to python-...@googlegroups.com, b...@bendarnell.com
Thanks, that's the conclusion I was reaching, but it's good to have it confirmed.

Originally I was using application-level "pings" from the client side as a keepalive mechanism. Then I tested the efficiency of real pings from the server back to the clients, and they appeared to be significantly faster. In some cases, many times faster. So I was in the process of switching from client "pings" to real pings from the server. That's when I realized I had lost the ability to detect the death of the server.

I had clients losing websocket connections after about 8 minutes if I left the connection completely idle. That's why I started pursuing keepalives in the first place. It seems like application-level pings from the client make the most sense. But I'm left wondering... what's the purpose of server-side pings?

Ben Darnell

Jul 19, 2015, 2:30:18 PM
to JohnMudd, Tornado Mailing List
On Sun, Jul 19, 2015 at 2:18 PM, JohnMudd <john...@gmail.com> wrote:
Thanks, that's the conclusion I was reaching, but it's good to have it confirmed.

Originally I was using application-level "pings" from the client side as a keepalive mechanism. Then I tested the efficiency of real pings from the server back to the clients, and they appeared to be significantly faster. In some cases, many times faster. So I was in the process of switching from client "pings" to real pings from the server. That's when I realized I had lost the ability to detect the death of the server.

I'm surprised that there was a significant difference between application-level "pings" and builtin pings. Was this in terms of throughput or latency?
 

I had clients losing websocket connections after about 8 minutes if I left the connection completely idle. That's why I started pursuing keepalives in the first place. It seems like application-level pings from the client make the most sense. But I'm left wondering... what's the purpose of server-side pings?

Pings are not client- or server-side; either side can initiate them (and often both sides will, so they can both detect failures. If one side knows the other's ping schedule then you can detect failures with only one side actively pinging, but it's simpler to just ping in both directions). We haven't exposed pings from the client side in Tornado because the right interface isn't clear (expose new methods like read_message()? move to a class-and-callback interface like the server side? add parameters like ping_interval and ping_timeout so the framework takes care of the details for you?).
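To make the hypothetical ping_interval / ping_timeout idea concrete, the bookkeeping a framework would do could look something like this (plain Python, not actual Tornado API; the parameter names are borrowed from the speculation above):

```python
class PingScheduler:
    """Decides when to send the next ping and when to give up.
    ping_interval: seconds between pings; ping_timeout: seconds to
    wait for a pong before declaring the connection dead."""

    def __init__(self, ping_interval, ping_timeout):
        self.ping_interval = ping_interval
        self.ping_timeout = ping_timeout
        self.last_ping_sent = None
        self.last_pong_for = None  # timestamp of the ping the last pong answered

    def on_tick(self, now):
        """Call periodically; returns 'ping', 'timeout', or None."""
        outstanding = (self.last_ping_sent is not None
                       and self.last_pong_for != self.last_ping_sent)
        if outstanding:
            # A ping is unanswered; fail once the pong is overdue.
            if now - self.last_ping_sent > self.ping_timeout:
                return "timeout"
            return None
        if self.last_ping_sent is None or now - self.last_ping_sent >= self.ping_interval:
            self.last_ping_sent = now
            return "ping"
        return None

    def on_pong(self):
        """Call when a pong arrives from the peer."""
        self.last_pong_for = self.last_ping_sent
```

Either endpoint can run one of these, which is the sense in which pings are direction-agnostic.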

-Ben

JohnMudd

Jul 19, 2015, 3:50:52 PM
to python-...@googlegroups.com, b...@bendarnell.com


On Sunday, July 19, 2015 at 2:30:18 PM UTC-4, Ben Darnell wrote:
On Sun, Jul 19, 2015 at 2:18 PM, JohnMudd <john...@gmail.com> wrote:
Thanks, that's the conclusion I was reaching, but it's good to have it confirmed.

Originally I was using application-level "pings" from the client side as a keepalive mechanism. Then I tested the efficiency of real pings from the server back to the clients, and they appeared to be significantly faster. In some cases, many times faster. So I was in the process of switching from client "pings" to real pings from the server. That's when I realized I had lost the ability to detect the death of the server.

I'm surprised that there was a significant difference between application-level "pings" and builtin pings. Was this in terms of throughput or latency?

Thanks again. 

I just repeated calls to ping("test") from 1 to 100 thousand times to get a total elapsed time. I compared that to sending a similar-size message from client to server the same number of times. In that case I think I had the server responding, and I received the reply at the client. Depending on the total number of iterations, the real pings might be 2-3x faster or 40x faster. Sorry, I didn't save the code and I was not particularly careful. I might have fouled it up. It did surprise me to see a big difference.
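For reference, a self-contained round-trip timing harness along those lines might look like this (plain asyncio over a localhost TCP echo rather than Tornado websockets, so the absolute numbers won't match the measurements above):

```python
import asyncio
import time

async def echo(reader, writer):
    # Echo every line straight back, like a trivial application-level "pong".
    while (data := await reader.readline()):
        writer.write(data)
        await writer.drain()
    writer.close()

async def time_round_trips(n):
    """Time n send/receive round trips against a local echo server."""
    server = await asyncio.start_server(echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    start = time.perf_counter()
    for _ in range(n):
        writer.write(b"test\n")
        await writer.drain()
        await reader.readline()  # wait for the echo before sending the next
    elapsed = time.perf_counter() - start
    writer.close()
    server.close()
    await server.wait_closed()
    return elapsed

n = 1000
print(f"{n} round trips in {asyncio.run(time_round_trips(n)):.3f}s")
```

Running the same loop with websocket frames (built-in pings vs. application messages) and comparing totals would make the original comparison reproducible.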
 
 

I had clients losing websocket connections after about 8 minutes if I left the connection completely idle. That's why I started pursuing keepalives in the first place. It seems like application-level pings from the client make the most sense. But I'm left wondering... what's the purpose of server-side pings?

Pings are not client- or server-side; either side can initiate them (and often both sides will, so they can both detect failures. If one side knows the other's ping schedule then you can detect failures with only one side actively pinging, but it's simpler to just ping in both directions). We haven't exposed pings from the client side in Tornado because the right interface isn't clear (expose new methods like read_message()? move to a class-and-callback interface like the server side? add parameters like ping_interval and ping_timeout so the framework takes care of the details for you?).

Oh, is it possible for the client (application-level code) to detect a ping from the server? I was looking for a way to do this.

Ok, so pings are still useful from the server... But the server has on_close(). If pings from the server are more reliable than the server's on_close(), then... why even bother implementing on_close() in the server?

Ben Darnell

Jul 19, 2015, 4:11:23 PM
to JohnMudd, Tornado Mailing List
On Sun, Jul 19, 2015 at 3:50 PM, JohnMudd <john...@gmail.com> wrote:
 

I had clients losing websocket connections after about 8 minutes if I left the connection completely idle. That's why I started pursuing keepalives in the first place. It seems like application-level pings from the client make the most sense. But I'm left wondering... what's the purpose of server-side pings?

Pings are not client- or server-side; either side can initiate them (and often both sides will, so they can both detect failures. If one side knows the other's ping schedule then you can detect failures with only one side actively pinging, but it's simpler to just ping in both directions). We haven't exposed pings from the client side in Tornado because the right interface isn't clear (expose new methods like read_message()? move to a class-and-callback interface like the server side? add parameters like ping_interval and ping_timeout so the framework takes care of the details for you?).

Oh, is it possible for the client (application-level code) to detect a ping from the server? I was looking for a way to do this.

It would be possible to expose such an interface, but it's not there now.
 

Ok, so pings are still useful from the server... But the server has on_close(). If pings from the server are more reliable than the server's on_close(), then... why even bother implementing on_close() in the server?

Because most of the time it works, and when it does work it's faster. When you close a browser tab, the server will usually get an on_close event immediately. When you put your laptop to sleep, the server won't know about it until it tries to use the connection again.
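In other words, a server can use both: on_close() as the fast path for clean disconnects, and a periodic ping-timeout sweep as the slow path for peers that vanish silently. A sketch of that bookkeeping (plain Python, not Tornado API; names are illustrative):

```python
class ConnectionRegistry:
    """Server-side bookkeeping. on_close removes a connection promptly
    when the peer closes cleanly; a periodic sweep catches peers that
    vanished silently (sleeping laptop, pulled cable)."""

    def __init__(self, pong_timeout):
        self.pong_timeout = pong_timeout
        self.last_pong = {}  # connection id -> last pong timestamp

    def on_open(self, conn_id, now):
        self.last_pong[conn_id] = now

    def on_pong(self, conn_id, now):
        self.last_pong[conn_id] = now

    def on_close(self, conn_id):
        # Fast path: the peer told us it was going away.
        self.last_pong.pop(conn_id, None)

    def sweep(self, now):
        """Slow path: drop connections whose pongs have gone quiet."""
        dead = [c for c, t in self.last_pong.items()
                if now - t > self.pong_timeout]
        for c in dead:
            del self.last_pong[c]
        return dead
```

The closed-tab case hits on_close() immediately; the sleeping-laptop case is only caught by sweep() once its pongs stop arriving.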

-Ben