This makes me wonder if a thread pool would be "a good thing" for
Rails. Since it would be a pool, it wouldn't have the thread creation
overhead, and it wouldn't have all the extra threads sleeping and
slowing things down. But in reality fibers should still be slightly
faster than a thread pool, no matter what. Fibers would be intensely
useful if you needed, say, 500 concurrent requests all hitting the DB
at the same time, but in reality I think people would only really
'want' about 20 concurrent DB requests, so...a thread pool of 20 might
be about as fast.
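For concreteness, a pool of ~20 like that could be sketched roughly as
follows. This is a toy model, not anything in mysqlplus or Rails - the
class and method names here are made up:

```ruby
require 'thread'

# Hypothetical fixed-size worker pool: N threads are created once up
# front and reused for every job, so there's no per-request thread
# creation overhead and no unbounded pile of sleeping threads.
class QueryPool
  def initialize(size = 20)
    @jobs = Queue.new
    @workers = Array.new(size) do
      Thread.new do
        # pop blocks until a job arrives; a nil sentinel shuts the worker down
        while (job = @jobs.pop)
          job.call
        end
      end
    end
  end

  # Enqueue a block (e.g. one that runs conn.async_query(sql)); returns
  # a Queue the caller can pop to block until the result is ready.
  def run(&block)
    result = Queue.new
    @jobs << lambda { result << block.call }
    result
  end

  def shutdown
    @workers.size.times { @jobs << nil }
    @workers.each(&:join)
  end
end
```

The caller would do something like `handle = pool.run { conn.async_query(sql) }`
and then `handle.pop` when it actually needs the rows.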
Also do we want to rename the .so file so that it doesn't conflict
with previously installed mysql gems?
>> Also do we want to rename the .so file so that it doesn't conflict
>> with previously installed mysql gems?
>
> Yea, probably a good idea.
> Aman

Interestingly, renaming it has a side effect: if I

require 'mysqlplus'
require 'thin_attributes'

thin attributes then runs

require 'mysql'

within itself, which effectively overrides mysqlplus :)
I'd assume the pain is worth it, though, so that we don't get
confused as to which one we're using.
Thoughts?
-=R
Also do we want to create a plugin which is 'mysqlplus+async instead
of mysql' for people to use with rails [like a drop in replacement--
all you do is require one file and you're good to go] or just require
them to use neverblock?
They just need to install the mysqlplus gem (I need to move that to
RubyForge) - that is, if AR provides a thread pool for the mysql
adapter that uses the async_query method.
Played with the thread pool in Edge and mysqlplus for a bit earlier
today.
The difference with current head being :
Found the following trip up the query sequence checks:

- SET *
- BEGIN
- ROLLBACK

etc.
Another way to guard against this (exclusively via #async_query) is
to clear any previous results before firing #send_query, scheduling
and #get_result ... like the Postgres client does:
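In spirit, that guard looks something like the following. This is a
toy model - StubConnection stands in for the real client, and
safe_async_query is a made-up name, not the actual mysqlplus code:

```ruby
# Toy model of the guard: a stub connection that remembers whether a
# result from a previous send_query is still unread, plus a wrapper
# that drains any stale result before firing the next query, the way
# the Postgres client consumes pending results first.
class StubConnection
  def initialize
    @pending = nil
  end

  def send_query(sql)
    @pending = "result of #{sql}"
  end

  def get_result
    r, @pending = @pending, nil
    r
  end

  def pending?
    !@pending.nil?
  end
end

# Clear any previous result, then send and fetch; mirrors "clear any
# previous results before firing #send_query ... and #get_result".
def safe_async_query(conn, sql)
  conn.get_result while conn.pending?
  conn.send_query(sql)
  conn.get_result
end
```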
When using #async_query with a threaded connection pool, that
shouldn't be an issue - one's more likely to mess up the sequence
with the evented model.
Included the following as an initializer in /config :
Thoughts ?
Yeah--I like our way [?] where we raise if there's a query already in
progress. If we do, that is :P
> When using #async_query with a threaded connection pool, that
> shouldn't be an issue - one's more likely to mess up the sequence
> with the evented model.
>
> Included the following as an initializer in /config :
>
> http://gist.github.com/11681
Looks good. I guess we have two options--either do as NeverBlock does
[basically poll incoming queries, check if they're SET *'s, and if
they are, pin the connection to the fiber] or piggyback on the
existing Rails pool system, which checks out a single connection per
request and then checks it back in.
There might be some nicety in just following Rails' [somewhat weird]
pool system--if only because it would merge more easily with what
Rails does, so it might be friendlier to the community.
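The Rails-style option could be modeled roughly like this - a sketch
with made-up names, not the actual ActiveRecord pool. Each request
checks out one connection, uses it exclusively, and checks it back in,
which sidesteps the SET-pinning problem because no other thread can
interleave queries on that connection mid-request:

```ruby
require 'thread'

# Minimal checkout/checkin pool in the spirit of the Rails system:
# a connection belongs to exactly one request between checkout and
# checkin, so SET/BEGIN/ROLLBACK can't interleave with another
# thread's queries on the same connection.
class MiniPool
  def initialize(connections)
    @available = Queue.new
    connections.each { |c| @available << c }
  end

  def checkout
    @available.pop  # blocks if all connections are in use
  end

  def checkin(conn)
    @available << conn
  end

  # Convenience: run a block with an exclusively held connection,
  # returning it to the pool even if the block raises.
  def with_connection
    conn = checkout
    begin
      yield conn
    ensure
      checkin(conn)
    end
  end
end
```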
Thoughts?
-=R
Also exposes the following:

Mysql#async_in_progress    # compares to the current connection identifier
Mysql#async_in_progress = ( true | false | nil )
Muhammad mentioned that MySQL 6 would feature a hybrid threaded and
evented client, so #connection_identifier was extracted to handle any
logic required for that use case - currently it uses #mysql_thread_id.
It may also be useful to expose that as Mysql#connection_identifier
( same as Mysql#thread_id currently, but perhaps a bit more versatile ).
When handling cases such as SET, one can then still play well with the
expected send, get_result order, e.g.
connection.send_query( "SET something" )
connection.async_in_progress = false
or maybe even sugar that use case :
connection.send_query!( "SET something" )
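In Ruby terms, that sugar could be sketched like this. It's only a
model of the idea - the real flag lives in the C struct, and
FakeConnection below is a stand-in, not the actual Mysql object:

```ruby
# Sketch of the proposed sugar: fire a query whose result we never
# intend to read (e.g. SET) and immediately clear the in-progress
# flag, so the next #send_query passes the sequence check.
module AsyncSugar
  def send_query!(sql)
    send_query(sql)
    self.async_in_progress = false
    nil
  end
end

# Hypothetical stand-in for the real connection, just to show the
# flag's lifecycle: send_query raises the flag, send_query! clears it.
class FakeConnection
  include AsyncSugar
  attr_accessor :async_in_progress, :last_sql

  def send_query(sql)
    @last_sql = sql
    @async_in_progress = true
  end
end
```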
Not sure if it's safe, or even sane, to commit at present - it WILL
blow up for any successive #send_query without a #get_result, which
shouldn't affect neverblock or em-mysql @ present.
Mysql#c_async_query clears Mysql#async_in_progress on consecutive
calls and plays well with typical ActiveRecord use.
Thoughts ?
As per http://faemalia.net/mysqlUtils/mysql-internals.pdf :
"Avoid using malloc(), which is very slow. For memory allocations that
only need to live for the lifetime of one thread, use sql_alloc()
instead."
Not sure about libmysqld support or how that affects Ruby's GC
requirements, but may be most useful for resultset retrieval ...
Thoughts ?
On 2008/09/20, at 20:33, Roger Pack wrote:
What does this do exactly, then? Does this somehow bind the
connection to the thread then?
> Not sure if it's safe, or even sane, to commit at present - it WILL
> blow up for any successive #send_query without a #get_result, which
> shouldn't
> affect neverblock or em-mysql @ present.
I'm definitely in favor of raising if people do successive
send_queries. Make them pay :)
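Raising on a doubled #send_query would look like this in spirit - a
toy Ruby model with invented names, since the real check sits in the
C extension:

```ruby
# Toy model of the sequence check: a second send_query before
# get_result raises instead of silently corrupting the stream.
class SequenceError < StandardError; end

class CheckedConnection
  def initialize
    @in_progress = false
  end

  def send_query(sql)
    raise SequenceError, "previous query's result not yet read" if @in_progress
    @in_progress = true
    @pending = "result of #{sql}"
  end

  def get_result
    @in_progress = false
    @pending
  end
end
```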
re: non-malloc use: it appears we mostly use rb_hash_new() and such,
though I do notice that slim-attributes uses malloc. I guess in Ruby
land it's tough to tell if your memory will need to span multiple
threads or not, since it could be stored away and re-used later.
Not sure.
-=R
>
>> connection.send_query( "SET something" )
>> connection.async_in_progress = false
>
> What does this do exactly, then? Does this somehow bind the
> connection to the thread then?
>
Just sets mysql_struct->async_in_progress to 0 ... so the next call to
#send_query won't fail the async sequence check.
Yeah, I guess I kind of like it named the same as the mysql gem,
since it's more of a drop-in replacement. Which is nice.
I think I may still add the all_pseudo_hashes method just so that
users have the flexibility to tune their DB "to their pleasure" if
they want to try and eke out more speed. It would be convenient.
-=R