Custom web server? (was Re: IO.pipe in Maglev)


Jesse Cooke

Jun 4, 2013, 0:37:34
to maglev-d...@googlegroups.com
I've been thinking lately about a good Ruby web server for MagLev. While it would be cool to use Puma (I'm a huge fan), is a threaded server really the best idea? MagLev has a GIL, but the worker model from Puma might be nice.

Thoughts? Webrick is actually a good server, but I wonder if something more tailored to MagLev would be better.


On Mon, Jun 3, 2013 at 4:44 PM, Jonathan Mason <jonathan...@gmail.com> wrote:
Hey all,

So I'm working on getting Puma up and running under Maglev. I ran into an issue where they are using IO.pipe for some inter-thread communication. 

IO.pipe doesn't seem to exist in Maglev. Anyone have any insight into why that is or how hard it would be to add?
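(For context on why Puma wants it: the pipe is used as a "self-pipe" wakeup, so one thread can interrupt another that is blocked in IO.select. A minimal sketch of that pattern in plain Ruby, not Puma's actual code, looks like this:)

    reader, writer = IO.pipe

    waker = Thread.new do
      sleep 1
      writer.write "!"        # wake the select loop from another thread
    end

    # Block until the pipe (or a real socket) becomes readable.
    ready, _, _ = IO.select([reader], nil, nil, 5)
    if ready && ready.include?(reader)
      reader.read(1)          # drain the wakeup byte
      puts "woken by the other thread"
    end
    waker.join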

cheers,
jem


Conrad Taylor

Jun 5, 2013, 15:06:02
to maglev-d...@googlegroups.com
Jesse, I have seen some great performance using JRuby and Rubinius because they were able to leverage all the cores of the underlying architecture. Thus, Puma was designed for, and is best used with, Ruby implementations that provide true parallelism. Next, Webrick is great for development but not really meant to be used as a production web server. Having used Celluloid in several projects, I think it could be a great idea to leverage this actor-based concurrency library to build a nice threaded web server. In fact, the creator of Celluloid, Tony Arcieri, has done just that with his project Reel, which can be found here: https://github.com/celluloid/reel
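For reference, a hello-world Reel server is only a few lines. Roughly like the README example, though the class names have shifted between Reel versions, so treat this as a sketch:

    require 'reel'

    # Each connection is handled inside a Celluloid actor.
    Reel::Server::HTTP.supervise("0.0.0.0", 3000) do |connection|
      connection.each_request do |request|
        request.respond :ok, "Hello, world!"
      end
    end

    sleep  # keep the main thread alive while the supervised actor serves requests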

Good luck,

-Conrad
--

Think different and code well,

-Conrad

Jesse Cooke

Jun 5, 2013, 17:10:22
to maglev-d...@googlegroups.com
Thanks Conrad. I'm aware of Reel and of the perf gains for Puma on RBX and JRuby, but those are threaded implementations without GILs and MagLev is not, so it doesn't necessarily make sense to me to use the same app server.

It's funny people say that Webrick isn't meant as a production server. Where does that come from? It's reasonably fast, multi-threaded, and it's been around for a long time. I believe it's actually the most deployed Ruby app server, since it's Rack's default and Heroku hosts a whole bunch of apps with it.

I don't think Celluloid would be a good fit for a MagLev web server.

Thoughts?

Conrad Taylor

Jun 6, 2013, 6:31:18
to maglev-d...@googlegroups.com
Jesse, I agree with you that Webrick is one of the most deployed app servers for any language, because it is actually part of the standard distribution. However, Celluloid and Celluloid::IO would actually be a great fit, given that they use fibers (i.e. lightweight threads). Thus, we're talking about capabilities that are native to Ruby 1.9, and the performance is even better in Ruby 2.0/2.1. In your initial post, you posed the question, '... is a threaded server really the best idea?' I would say that if you're hoping to service many requests, then you would have to use a multi-threaded web server, whether it be Puma, Webrick, or another. BTW, I definitely noticed a speed boost using Puma with Ruby 1.9.3 and 2.0.0 even though these implementations have a GIL. In short, you'll have to define the goals for this new app server as well as evaluate what's already available that may meet those goals before coming to a build-versus-reuse conclusion.

Jesse Cooke

Jun 6, 2013, 11:13:19
to maglev-d...@googlegroups.com
Maybe I'm not being clear, but I'm not talking about MRI at all, I'm talking about MagLev. Fibers in MagLev are continuations, something you don't often see used in Ruby. So I'm not convinced Celluloid is necessarily a good fit for a web server on MagLev.

One could service many requests with the process model on MagLev, which seems to be its strong point over a threaded model. That's why I'm wondering if a threaded server like Puma is *really* the best idea, or if something else would be better suited for the MagLev ecosystem. I suppose `puma -t 1:1 -w n` might be sufficient, but I'd like to hear opinions.
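To make that concrete, the equivalent as a Puma config would be something like the sketch below (the worker count is just an example, and `workers`/`threads` are Puma's standard config DSL; I haven't verified how well cluster mode behaves on MagLev):

    # config/puma.rb -- lean on processes, not threads, for concurrency
    workers 4      # e.g. one worker process per core
    threads 1, 1   # min and max threads per worker

Started with `puma -C config/puma.rb`, which should match the `-t 1:1 -w 4` flags above.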

So, to make sure I'm framing the question correctly, for this discussion I don't care about any other Ruby implementation, just MagLev ;)

Michael Ries

Feb 21, 2014, 21:43:55
to maglev-d...@googlegroups.com
I just made a few attempts on this front. Puma compiles just fine right now under maglev, but when you try to run it you get "ERROR 2702 , arg to listen must be >= 1 and <= 1000 (ArgumentError)" (which leads back here: https://github.com/MagLev/maglev/blob/master-1.9/src/kernel/bootstrap/Socket.rb#L361). That might be a pretty easy fix, but I haven't dug into it yet.

Reel/celluloid won't compile under maglev so I wrote that one off for now.

As far as Webrick goes, using a "Hello World" Rack app and 10 concurrent requests I get:
Ruby 2.1.0 => 26ms mean
Maglev-1.2Alpha4 => 49ms mean
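A "Hello World" Rack app for this kind of test is typically nothing more than:

    # config.ru -- minimal "Hello World" Rack app
    run lambda { |env|
      [200, { 'Content-Type' => 'text/plain' }, ['Hello World']]
    }

which can be served with `rackup -s webrick` under each implementation and hit with something like `ab -c 10` (this is the generic setup, not necessarily the exact files used for the numbers above).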

Perhaps I can dig into puma and see if I can get that working. That would give us options for both threaded and process-based concurrency.

Jesse Cooke

Feb 21, 2014, 21:55:31
to maglev-d...@googlegroups.com
I'm not sure if you'll get much/any benefit from threads in Maglev. They're green, like MRI 1.8.7, and the process model has always been the preferred form of concurrency with Maglev and Gemstone/S.

But having a server like Puma and its worker model would be fantastic!

ps: master was archived as master-1.8 and master-1.9 was promoted to master. The old master-1.9 branch is only around for historical purposes.


Jesse Cooke

Feb 22, 2014, 4:35:24
to maglev-d...@googlegroups.com


On Feb 21, 2014 6:55 PM, "Jesse Cooke" <je...@jc00ke.com> wrote:
>
> I'm not sure if you'll get much/any benefit from threads in Maglev. They're green, like MRI 1.8.7, and the process model has always been the preferred form of concurrency with Maglev and Gemstone/S.

Ugh, I forgot I said all that before. Sorry for unintentionally sounding like a broken record. I didn't bother to look through the previous emails in this thread. My bad!

Tim Felgentreff

Feb 22, 2014, 5:05:33
to Jesse Cooke, maglev-d...@googlegroups.com
I'll have to check why that limit was put there. But could you patch Puma to use shorter queues? Or would that break something else?

cheers
-Tim

From: Jesse Cooke
Sent: 22/02/2014 03:55
To: maglev-d...@googlegroups.com
Subject: Re: [maglev-discussion] Re: Custom web server? (was Re: IO.pipe in Maglev)

I'm not sure if you'll get much/any benefit from threads in Maglev. They're green, like MRI 1.8.7, and the process model has always been the preferred form of concurrency with Maglev and Gemstone/S.

Michael Ries

Feb 22, 2014, 11:17:37
to maglev-d...@googlegroups.com, Jesse Cooke
I was able to change puma's default queue length (it was 1024 vs the 1000 limit) and that got closer to running, but now I am getting some errors with the way IO.select works in maglev (https://github.com/MagLev/maglev/issues/313), so I'll just play around with a fork of puma for a while and see how much we would have to patch to get puma working. Let's find out how deep this rabbit hole is...
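An alternative to patching, if it works under MagLev (I haven't verified that it does), is Puma's per-bind backlog option, which keeps the change in config rather than in Puma's source:

    # config/puma.rb -- request a listen backlog under MagLev's 1000 limit
    bind 'tcp://127.0.0.1:9292?backlog=512'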

JohnnyT

Feb 23, 2014, 23:39:32
to maglev-d...@googlegroups.com
Hi all -

I've tried to get nio4r working under Maglev, but it dies due to the Kernel.select problem.
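Even the smallest nio4r program exercises the selector, so a repro doesn't need much. A hypothetical minimal example (not the exact code that failed) would be:

    require 'nio'
    require 'socket'

    selector = NIO::Selector.new

    # A local socket pair, so there is something selectable to register.
    server = TCPServer.new('127.0.0.1', 0)
    client = TCPSocket.new('127.0.0.1', server.addr[1])
    peer   = server.accept

    monitor = selector.register(client, :r)
    peer.write 'x'

    ready = selector.select(1)   # this is where the select machinery gets exercised
    puts(ready && ready.include?(monitor))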


I'd love to get that figured out.

JohnnyT 


Michael Ries

Feb 26, 2014, 23:18:56
to maglev-d...@googlegroups.com
Maglev can run Puma!! This thing is working! Thanks to Tim Felgentreff (https://github.com/MagLev/maglev/issues/313) there is a branch which fixes the Kernel.select issue. That fix, plus patching puma to use a smaller queue length of 512, allows puma to run correctly.

I did a quick benchmark like this: ab -c 10 -n 5000 'http://127.0.0.1:9292/' (ran puma in quiet mode with 10 threads)

Ruby 2.1.0: mean 3ms, standard deviation 4.5ms
Maglev: mean 3ms, standard deviation 0.7ms

If I bump the number of workers to 2 (the number of _actual_ cores on my machine) both of them go down to a mean of 2ms with a standard deviation of 0.7ms.

This is definitely a micro-benchmark, but it gives us a production-strength web server where you are not paying a speed/scale penalty for switching to Maglev.

Big thanks to Johnny T and Tim for helping to find and fix this issue.



Richard Sargent

Feb 27, 2014, 0:41:34
to maglev-d...@googlegroups.com

Sweet!
Great work!

Jonathan Mason

Feb 27, 2014, 2:30:35
to MagLev Discussion
This is fantastic. Sadly I'm not working on that project anymore, but I'm still really happy to see Maglev running Puma. Hopefully I'll get a chance to make use of that in the future.

cheers,
Jon 

Michael Ries

Feb 27, 2014, 2:35:06
to maglev-d...@googlegroups.com
:'( Sorry, I had a PATH issue and puma was actually using MRI, so it's not working quite yet.

