hmm, sounds good. it would be some kind of session_id.
Some thoughts:
The jug server has to store this channel name, right? How does it get
it, and when does it expire? The rails server has to give this
channel name to the jug server BEFORE the client connects, right?
Another possibility would be that the real channel name is
cryptographically encoded inside this string. Then rails could
give it to the browser, which could send it to the jug server, which
decrypts it to get the real channel name (using a shared secret between
the rails server and the jug server).
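A rough sketch of what I mean (the AES choice, the SHARED_SECRET constant and
the two helper names are just my assumptions, not anything that exists in
rails or the jug server today):

```ruby
require 'openssl'
require 'base64'

# Assumption: the same secret is configured on both the rails server
# and the jug server.
SHARED_SECRET = 'some long secret configured on both servers'

# Rails side: turn the real channel name into an opaque token the
# browser can pass along without learning the channel name.
def encode_channel(channel)
  cipher = OpenSSL::Cipher.new('aes-256-cbc')
  cipher.encrypt
  cipher.key = OpenSSL::Digest.digest('SHA256', SHARED_SECRET)
  iv = cipher.random_iv
  Base64.strict_encode64(iv + cipher.update(channel) + cipher.final)
end

# Jug server side: decrypt the token back into the real channel name.
def decode_channel(token)
  raw = Base64.strict_decode64(token)
  decipher = OpenSSL::Cipher.new('aes-256-cbc')
  decipher.decrypt
  decipher.key = OpenSSL::Digest.digest('SHA256', SHARED_SECRET)
  decipher.iv = raw[0, 16]
  decipher.update(raw[16..-1]) + decipher.final
end
```

So the browser only ever sees the Base64 token, and
decode_channel(encode_channel('chat_42')) gives back 'chat_42' on the
jug side.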
In the rails server, people might use the subscription url to
initialize some stuff (not when the page is served, but when flash
connects, which is always some time later), but this could be alright
in most cases, especially when using :store_messages
So, I think that this could work.
But one thing that is really helpful, at least for me, is the call to
the logout_url, which is necessary to tell whether users load a
page in a new browser tab or in the same browser tab (in the first
case, they would have 2 chat windows open!)
I didn't test the case "Having no subscription_url, but a logout_url"
with so many clients, but I think the jug server will have the same
problems here.
So I looked again at the source code and the solution is oh so simple:
begin
  open(url.to_s, "User-Agent" => "Ruby/#{RUBY_VERSION}")
rescue => e
  return false
end
the url gets opened, but never closed ... THAT'S IT :-)
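A minimal sketch of the fix, assuming all we need is for the connection to
be released again: pass a block, so open-uri closes the io itself (the
method name notify is made up, the rest mirrors the snippet above):

```ruby
require 'open-uri'

# Hypothetical wrapper around the call above: the block form makes
# open-uri close the io in its ensure clause once the block returns.
def notify(url)
  open(url.to_s, "User-Agent" => "Ruby/#{RUBY_VERSION}") { |io| io.read }
  true
rescue => e
  false
end
```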
If you take a look at open_uri.rb, you can see:
def OpenURI.open_uri(name, *rest)
  ...
  if block_given?
    begin
      yield io
    ensure
      io.close
    end
  else
    io
  end
end
And there was no block given, so the io didn't close. But even when I
gave a block, it didn't close: with "netstat" I could still see all the
connections. Well, they were not OPEN, but in the TIME_WAIT state, and
there were about 1000 sockets in this state, and ruby won't allocate
more than 1000 sockets.
If you look at this:
http://www.softlab.ntua.gr/facilities/documentation/unix/unix-socket-faq/unix-socket-faq-2.html#time_wait
you can see that this is normal from the TCP/IP stack's point of view...
What do you think, should we play around with the SO_LINGER option to
make these sockets leave the TIME_WAIT state faster?
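Something like this is what I had in mind (host and port are placeholders;
the 'ii' packing assumes the usual two-int struct linger, as on Linux):

```ruby
require 'socket'

# Sketch: with l_onoff = 1 and l_linger = 0, close sends an RST
# instead of the normal FIN handshake, so the socket skips the
# TIME_WAIT state entirely. That also means any unsent data is
# dropped, which is why this is usually discouraged.
def open_with_linger(host, port)
  sock = TCPSocket.new(host, port)
  linger = [1, 0].pack('ii') # l_onoff = 1, l_linger = 0 seconds
  sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_LINGER, linger)
  sock
end
```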
thanks, Heiko