Thanks Jerry.
I run my tasks wrapped in Concurrent::Futures. A dependency graph of
these futures is implicitly defined using Concurrent.dataflow.
I did manually check connections out of the ActiveRecord connection
pool using the with_connection method. It didn't seem to make any
significant difference.
I then removed some database-level foreign key constraints, enforcing
referential integrity with validations and application logic instead.
That does seem to have made a difference. My guess is that inserts,
updates, and deletes (DML statements) acquire some kind of multi-table
lock, with coarse-grained lock escalation in the presence of the
foreign key constraints, and the concurrent statements simply hit the
lock timeout.
The debug logs I was referring to were in the Rails log. What I wanted
to point out was that they offered little to go on - no obvious error
messages, just failure.
Finally, there was a reason I asked about the interpreters. I suspect
that if we were running on Rubinius or JRuby instead of MRI,
concurrent-ruby would use all available cores. Two futures - one
holding a lock and another waiting on it - could then run in parallel
on separate cores. On a single core, the future holding the lock can
be scheduled out while it still holds it, and the one waiting on it
can be scheduled in when it has no hope of acquiring it. That would
raise the odds of a lock timeout.
Regards,
Arindam