SSE using Sinatra: How to improve concurrency?


Andrew Havens

Aug 5, 2013, 1:36:32 PM8/5/13
to pdx...@googlegroups.com
Greetings! I bet a few of you here have dealt with (or have the knowledge to deal with) the issue I'm running into, so I thought this would be a good place to ask.

I'm working on building a Rack middleware that subscribes to a Redis channel and pushes the messages out to clients using Server-Sent Events. Sinatra provides a nice DSL for doing this. I have a working example; however, performance degrades substantially once I get to 7 or 8 clients. I have also run into issues with "dead-locking" the server when trying to reuse a Redis connection between requests.

I'm currently using Thin to serve the app (which uses EventMachine under the hood). I thought that the Sinatra DSL already handled the concurrency with EventMachine, but maybe this is something that I need to implement myself? I don't want to restrict myself to only EventMachine based servers (Thin, Rainbows!) in case someone wants to use a multi-threaded server like Puma. What should I do to increase concurrency in my code?

Here's a gist of the working code sample: https://gist.github.com/andrewhavens/6157725
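For context on what the middleware writes to each client, the SSE wire format is just framed text on a held-open response. Here is a tiny illustrative helper (my own names, not taken from the gist):

```ruby
# Format one Server-Sent Event frame: optional "id:" and "event:" fields,
# one "data:" line per line of payload, terminated by a blank line.
def sse_event(data, event: nil, id: nil)
  out = +""
  out << "id: #{id}\n" if id
  out << "event: #{event}\n" if event
  data.to_s.each_line { |line| out << "data: #{line.chomp}\n" }
  out << "\n"
end

print sse_event("hello", event: "message", id: 1)
# id: 1
# event: message
# data: hello
# (a blank line terminates the event)
```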

Thanks for your help,

Andrew Havens

Jesse Cooke

Aug 6, 2013, 1:02:31 AM8/6/13
to pdx...@googlegroups.com
Can you improve your example to show how you saw the performance degrade?
Did you use curl? Did you open up tabs in your browser? Different channels per tab?

Also, is there a way you can distill the issue so we can remove Redis for now, or do you only see it with Redis?


--
You received this message because you are subscribed to the Google Groups "pdxruby" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pdxruby+u...@googlegroups.com.
To post to this group, send email to pdx...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/pdxruby/DB35A812-4874-4727-9067-BD30E9BC613E%40gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.

markus

Aug 6, 2013, 2:00:13 AM8/6/13
to pdx...@googlegroups.com
Another thought: what version of ruby are you using? I've seen
something very much like this on 1.8.7, and moving to a more
current/supported version of ruby fixed it.



Ragav Satish

Aug 6, 2013, 5:00:24 AM8/6/13
to pdxruby

As Jesse said, it might help to know how you're testing this. If you are doing browser-based testing, you might just be hitting the browser's limit on persistent connections: your 7-8 clients smacks suspiciously of most browsers' maximum of 6 per hostname (http://www.browserscope.org/?category=network&v=1).

Did the deadlock spit out a trace? It's unclear what you mean by "when trying to reuse a Redis connection between requests". Your version of Thin might not be ready for SSE yet; this patch is just a few months old: https://github.com/macournoyer/thin/commit/06cdd8777d5a5dc46adf9a31f8152effea50be78

--Ragav



Sean McCleary

Aug 6, 2013, 1:16:18 PM8/6/13
to pdx...@googlegroups.com
Andrew,

Your question reminds me of something Kyle Drake, a fellow PDXRB member, wrote a while ago: https://github.com/kyledrake/sinatra-synchrony. I believe this project is kind of related to what Ilya Grigorik did with em-synchrony, but for Sinatra. I have used https://github.com/igrigorik/em-synchrony and it handled concurrency well.

- Sean McCleary


Mike Perham

Aug 6, 2013, 1:21:51 PM8/6/13
to pdx...@googlegroups.com
Use Reel, which uses real threads under the covers.  Using synchrony and EventMachine is not best practice these days.



Kyle Drake

Aug 6, 2013, 2:04:45 PM8/6/13
to pdx...@googlegroups.com
Thin uses a thread pool by default with the newest versions of Sinatra. I just use Puma in that situation because it's explicitly designed for it and doesn't have an EM dep, but it's still using threads unless you turn the EM stuff back on. With threads, Ruby will not block on I/O. Parts of my older slides were quite wrong.

Throw your Redis connections into a thread pool (mperham's connection_pool if the redis gem doesn't give you one), because you need one per thread. You can also use Rainbows! and have it spawn worker processes for you, each with a thread pool, so you can run one process per CPU and utilize all the cores, though your app is probably going to be mostly I/O bound.
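A minimal sketch of the "one connection per thread" pool idea, using only the Ruby standard library. In a real app the block would build Redis.new clients (or you'd just use mperham's connection_pool gem); FakeConn and TinyPool are illustrative names so the sketch runs anywhere:

```ruby
# Stand-in for a Redis client so the sketch is self-contained.
class FakeConn
  def publish(channel, msg)
    "#{channel}:#{msg}"
  end
end

class TinyPool
  def initialize(size, &block)
    @pool = SizedQueue.new(size)
    size.times { @pool.push(block.call) }
  end

  # Check a connection out, yield it to the caller, always return it.
  def with
    conn = @pool.pop
    begin
      yield conn
    ensure
      @pool.push(conn)
    end
  end
end

pool = TinyPool.new(5) { FakeConn.new }

# 20 threads share at most 5 connections; excess threads block in #with.
results = 20.times.map do |i|
  Thread.new { pool.with { |conn| conn.publish("events", "msg-#{i}") } }
end.map(&:value)

puts results.size  # 20
```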

If you want to squeeze some performance out of it, you could drop Sinatra completely and make a simple rack app to take in the requests. Rainbows/Puma will handle the threading crap for you, all you need to do is make a redis connection pool and you're pretty much done.

Then, write some testing and beat the hell out of it.


Kyle Drake

Aug 6, 2013, 2:08:50 PM8/6/13
to pdx...@googlegroups.com
Oh, you're using the streaming. That's a different situation.

Yeah, Rack still kind of sucks at streaming, unfortunately. You can get the IO object directly from Puma so that you can write to the stream, but I'm not sure what the status is of Sinatra streaming with threading. Reel is more specifically designed for that, so it's worth investigating.

Andrew Havens

Aug 6, 2013, 2:58:01 PM8/6/13
to pdx...@googlegroups.com
Jesse - Yes, my rudimentary testing methods were to open tabs in the browser, one at a time, and see the messages stop appearing. I'm curious to hear of any tools that would work well for this type of testing. Redis is an important part of this puzzle but I'm confused about which Redis gem I should be using: https://rubygems.org/search?query=redis

Markus - I'm using the latest MRI 1.9.3. I suppose MRI 2 may provide some better performance, but I don't think that's the issue here.

Ragav - I did not know about the limit on persistent connections per domain! You were totally right. I opened only 5 tabs in 3 different browsers and my performance improved to 15 clients! Imagine that! I'm using the latest Thin server, so that's not an issue. By "reusing a Redis connection" I mean setting $redis = Redis.new() or memoizing the variable in a class method. Somehow I made it so I couldn't stop the server with Ctrl+C; I had to kill -9 it.

Sean - I remember hearing Kyle talk about sinatra-synchrony. I've also seen a lot of Sinatra SSE examples using em-synchrony. However, there's not much explanation as to why and I'd rather not introduce EM as a dependency if I don't need to. Also it's my understanding that em-synchrony is so you can avoid callbacks. I also remember reading about how Kyle changed his mind about using sinatra-synchrony.

Mike - Thanks for the link to Reel. There's even an example of using Reel for SSE. It's got a pretty similar DSL to Sinatra. I'll have to try it out.

Kyle - Thanks for the advice. I'm not familiar with how to create a Redis connection thread pool. I haven't worked much with Redis in general. I noticed there's even a celluloid-redis gem. So I'm not sure what to do.

Thanks everyone! To summarize my remaining questions:

- How to load test Server Sent Events?

- What Redis gem should I be using?

- Is creating a new Redis subscribe block for each connection a good way to go? I'm making an assumption that the closed connections also close the Redis subscription. Or should I try to initialize a single/shared Redis subscription that pushes updates to an array of connections and manually remove them when they have been closed?
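The shared-subscription variant in that last question can be sketched with plain threads and queues. Hub is an illustrative name, and the broadcast call stands in for the message handler inside a single Redis SUBSCRIBE loop:

```ruby
# One shared subscription fanning out to many connections. Each open
# SSE connection registers a Queue; a single subscriber thread calls
# #broadcast for every incoming message, and closed connections are
# unregistered so the list doesn't grow forever. Illustrative names only.
class Hub
  def initialize
    @mutex   = Mutex.new
    @clients = []
  end

  # Called when a new SSE connection opens.
  def register
    q = Queue.new
    @mutex.synchronize { @clients << q }
    q
  end

  # Called when a connection closes (e.g. from the stream's close callback).
  def unregister(q)
    @mutex.synchronize { @clients.delete(q) }
  end

  # Called from the single Redis SUBSCRIBE loop for each message.
  def broadcast(msg)
    @mutex.synchronize { @clients.each { |q| q << msg } }
  end
end

hub = Hub.new
a = hub.register
b = hub.register
hub.broadcast("tick")
hub.unregister(b)
hub.broadcast("tock")
puts a.pop, a.pop  # a sees both messages: tick, then tock
puts b.size        # b closed after the first message: 1
```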

Thanks!

--Andrew Havens

Jesse Cooke

Aug 6, 2013, 3:18:03 PM8/6/13
to pdx...@googlegroups.com
On Tue, Aug 6, 2013 at 11:58 AM, Andrew Havens <misbe...@gmail.com> wrote:

> - How to load test Server Sent Events?

Is there anything in ab or tools like it that'll help with testing out the concurrency? I suppose you could fire up some threads and check it yourself by making net/http calls, or something of the sort.

> - What Redis gem should I be using?

The redis-rb gem is the best one to use. You can also use the hiredis driver with it.

> - Is creating a new Redis subscribe block for each connection a good way to go? I'm making an assumption that the closed connections also close the Redis subscription. Or should I try to initialize a single/shared Redis subscription that pushes updates to an array of connections and manually remove them when they have been closed?

I just received some advice from RedisGreen the other day that a single global $redis is usually fine, since redis-rb is threadsafe. That being said, check out Mike's https://github.com/mperham/connection_pool gem too.


Ragav Satish

Aug 6, 2013, 5:32:35 PM8/6/13
to pdxruby
Andrew: I know you said you are using the latest Thin but, without belabouring this further, the Thin gemspec is a year old and as far as I know the streaming patch is only on the v2 branch (I could be wrong).
For load testing you could just spin up 1000+ curl requests. For a rubyish solution (I haven't worked with this gem at all), https://github.com/typhoeus/typhoeus is probably an interesting choice. Look at "making parallel requests" in the docs.
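The thread-based load test Jesse mentioned might look something like this. To keep the sketch self-contained it starts a trivial HTTP stub on a random local port; in practice you'd point Net::HTTP at your real /stream endpoint and hold the connections open instead:

```ruby
require "socket"
require "net/http"

# Trivial HTTP stub so the sketch runs without a real SSE server.
server = TCPServer.new("127.0.0.1", 0)
server.listen(128)                       # room for a burst of connections
port = server.addr[1]

Thread.new do
  loop do
    client = server.accept
    client.readpartial(4096) rescue nil  # consume the request
    client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    client.close
  end
end

# Hit the endpoint from 20 threads at once and collect the status codes.
codes = 20.times.map do
  Thread.new { Net::HTTP.get_response("127.0.0.1", "/stream", port).code }
end.map(&:value)

puts codes.tally  # every request should come back as "200"
```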

For a production server with SSE, Phusion Passenger is a good choice as well. For $100 you get their multi-threaded server, and it's well worth supporting.

--Ragav



Andrew Havens

Aug 6, 2013, 6:52:17 PM8/6/13
to pdx...@googlegroups.com
> Mike - Thanks for the link to Reel. There's even an example of using Reel for SSE. It's got a pretty similar DSL to Sinatra. I'll have to try it out.

It looks like Reel is a replacement for Thin, Rainbows!, Puma, etc. Is that true? I'm trying to create a Rack App that can be used as middleware in an existing stack (able to work with Thin, Unicorn, Rainbows!, or Puma).

> I know you said you are using the latest thin but without belabouring this further - The thin gemspec is a year old and as far as I know the streaming patch is only on the V2 branch (I could be wrong).

Ragav, I looked into this a little further. It looks like Thin is undergoing a big rewrite, including changing how it handles streaming, but I'm pretty sure the current version (1.5.1 Straight Razor) still supports streaming. It's mentioned as a supported server in the Sinatra example, and Sinatra switches its streaming implementation based on the server that's being used.

After seeing this in the Sinatra codebase, I'm wondering if I even need to worry about trying to increase concurrency. I've already determined that my original bottleneck was opening too many tabs in the same browser. If it's up to the server to implement the streaming method, then maybe that's better since they can implement it in a way that's best for their architecture. I guess until I've come up with a way to load test it, I won't be able to determine if concurrency is an issue.

I think I found the reason for the deadlocks. In a Google Groups discussion, someone mentioned that Rack::Lock conflicts with streaming, causing deadlocks. I think I experienced this while trying different mounting methods, and solved it by mounting my app in front of Rack::Lock. I had incorrectly assumed that I was creating a blocking Redis connection that couldn't be closed, since I was testing both things at the same time.
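That ordering fix can be sketched as a config.ru fragment; MySSEMiddleware and MyApp are placeholder names, not real classes:

```ruby
# config.ru -- illustrative ordering only. Mounting the streaming
# middleware above Rack::Lock means the long-lived SSE responses
# never acquire (and never hold) the lock.
use MySSEMiddleware   # handles /stream, passes everything else down
use Rack::Lock        # the rest of the app runs serialized as before
run MyApp
```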

--Andrew