to mon...@googlegroups.com
I just started getting this, too. Dozens of my Sidekiq workers got "stuck", and when I printed the backtraces of their threads, every one of them was sitting inside Moped. The processes do nothing for hours until I kill -9 them.
Has anyone else seen this? What workarounds have you used? It happens at various Mongoid calls in my code. Here is a sample of the stuck frames from my worker logs:
worker-prd-iad-ded-06production.log:/var/www/dashboard/shared/bundle/ruby/2.0.0/bundler/gems/moped-6af3788a1f35/lib/moped/sockets/connectable.rb:46:in `block in read'
worker-prd-iad-ded-06production.log:/var/www/dashboard/shared/bundle/ruby/2.0.0/bundler/gems/moped-6af3788a1f35/lib/moped/connection.rb:99:in `block in read'
worker-prd-iad-ded-06production.log:/var/www/dashboard/shared/bundle/ruby/2.0.0/bundler/gems/moped-6af3788a1f35/lib/moped/connection.rb:135:in `block in receive_replies'
worker-prd-iad-ded-06production.log:/var/www/dashboard/shared/bundle/ruby/2.0.0/bundler/gems/moped-6af3788a1f35/lib/moped/node.rb:615:in `block in flush'
worker-prd-iad-ded-06production.log:/var/www/dashboard/shared/bundle/ruby/2.0.0/gems/newrelic_moped-0.0.8/lib/newrelic_moped/instrumentation.rb:40:in `block in logging_with_newrelic_trace'
worker-prd-iad-ded-06production.log:/var/www/dashboard/shared/bundle/ruby/2.0.0/bundler/gems/moped-6af3788a1f35/lib/moped/cluster.rb:171:in `block in refresh'
worker-prd-iad-ded-06production.log:/var/www/dashboard/shared/bundle/ruby/2.0.0/gems/mongoid-3.1.6/lib/mongoid/contextual/mongo.rb:122:in `block in each'
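
For anyone wanting to grab the same kind of dump from a live worker: newer Sidekiq versions print thread backtraces when they receive the TTIN signal (worth checking for your version), and a plain-Ruby equivalent is easy to wire up. A minimal sketch, nothing Sidekiq- or Moped-specific, just standard Ruby:

# Dump every live thread's backtrace to stderr when the process gets TTIN,
# so a "stuck" worker can be inspected without killing it.
Signal.trap("TTIN") do
  Thread.list.each do |thread|
    warn "--- thread #{thread.object_id} (#{thread.status}) ---"
    # Thread#backtrace can be nil for threads that have not run yet.
    (thread.backtrace || ["<no backtrace>"]).each { |line| warn line }
  end
end

Then kill -TTIN <pid> against the stuck process writes the traces to the worker's stderr/log.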
to mon...@googlegroups.com
It seems this may have been caused by upstream firewall changes that were timing out outbound connections on idle workers. I'm not sure why Moped itself never timed out the blocked read, but in my specific case I think we figured it out.
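
In case it helps anyone else seeing the same pattern: one general mitigation when a firewall silently drops idle connections is OS-level TCP keepalive, so the kernel eventually notices the dead connection and the blocked read fails instead of hanging for hours. A rough, untested sketch in plain Ruby (the enable_keepalive helper is hypothetical, and you would still have to hook it into wherever Moped opens its sockets, e.g. with a patch):

require "socket"

# Turn on TCP keepalives for a socket so a connection silently dropped by an
# idle-timeout firewall gets detected by the kernel instead of blocking reads.
def enable_keepalive(sock, idle: 60, interval: 10, probes: 5)
  sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, true)
  # The fine-grained timers are Linux-specific; skip them where unavailable.
  if defined?(Socket::TCP_KEEPIDLE)
    sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPIDLE, idle)      # idle seconds before the first probe
    sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPINTVL, interval) # seconds between probes
    sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPCNT, probes)     # failed probes before the kernel resets the connection
  end
  sock
end

Lowering the host-wide defaults via sysctl (net.ipv4.tcp_keepalive_time and friends) is the complementary knob, but it only affects sockets that have SO_KEEPALIVE enabled in the first place.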