Sphinx indexing rotation during deployment has issues


Ngan

Aug 9, 2013, 8:45:55 AM
to thinkin...@googlegroups.com
Hi,

We reindex our entire index quite often (once every 3 minutes) because our data collection is fairly small and we don't want to use delayed deltas. I've noticed, however, that every once in a while, when a deploy of our application happens to coincide with the reindex rotation, we get the error "no enabled local indexes to search" on every Sphinx query from then on. When this happens, we have to restart our app so that it picks up the new indexes. We're reindexing with rotation, so the existing index should still be there and the rotation should be seamless. Any ideas on why this happens, and whether there's anything we can do about it?
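
(For context, a sketch of that kind of schedule, assuming the `whenever` gem and the TS 2.x rake task name - both are assumptions, not part of the original setup:)

# config/schedule.rb -- hypothetical `whenever` schedule
every 3.minutes do
  # Sphinx's indexer runs with --rotate while searchd is up, building
  # .new.* index files and signalling searchd to swap them in
  rake "thinking_sphinx:index"
end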

Thanks,
Ngan

Ngan

Aug 9, 2013, 8:46:53 AM
to thinkin...@googlegroups.com
Forgot to add the trace:

gems/thinking-sphinx-2.0.14/lib/thinking_sphinx/search.rb:438:in `block in populate' 
gems/thinking-sphinx-2.0.14/lib/thinking_sphinx/search.rb:606:in `call' 
gems/thinking-sphinx-2.0.14/lib/thinking_sphinx/search.rb:606:in `retry_on_stale_index' 
gems/thinking-sphinx-2.0.14/lib/thinking_sphinx/search.rb:426:in `populate' 
gems/thinking-sphinx-2.0.14/lib/thinking_sphinx/search.rb:104:in `to_a' 

Pat Allan

Aug 9, 2013, 10:57:51 AM
to thinkin...@googlegroups.com
Hi Ngan

In 2.1.0 there have been some patches that deal with these kinds of errors - TS will now retry searches if an error crops up on the client connection (which is also persisted per thread in a connection pool, saving socket setup/teardown time). When an error occurs, a new connection is made; if the error persists, it'll still get raised…
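
A minimal sketch of that retry pattern (not TS's actual implementation; host and port are illustrative): the client is built inside the method body, so `retry` gets a fresh connection, and the error is re-raised once retries are exhausted.

require 'riddle'

def query_with_retry(search, index = '*', retries = 1)
  client = Riddle::Client.new('localhost', 9312) # fresh socket per attempt
  client.query(search, index)
rescue Riddle::ConnectionError, Riddle::ResponseError => error
  raise error if retries.zero? # the error persisted across a new connection
  retries -= 1
  retry
end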

Give 2.1.0 a spin, see if that helps matters.

--
Pat

Ngan

Aug 11, 2013, 12:39:47 AM
to thinkin...@googlegroups.com
Hi Pat,

Thanks for the help.  I tried upgrading to 2.1.0... and I'm noticing multiple instances of searchd running now.  Is that normal?  Could you point me to documentation of the major changes in 2.1.0?

Thanks,
Ngan

Pat Allan

Aug 11, 2013, 1:11:20 AM
to thinkin...@googlegroups.com
Hi Ngan

There's not really any documentation around the changes, I'm afraid… but what you're seeing is one searchd process per thread of your app, plus the master daemon process - a consequence of the persistent connection pool.

All the logic for this connection pool can be found in ThinkingSphinx::Connection:
https://github.com/pat/thinking-sphinx/blob/v2/lib/thinking_sphinx/connection.rb
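
For reference, the pool is built on the innertube gem (it appears in the stack trace later in this thread). A rough sketch of the pattern, with illustrative connection details:

require 'innertube'
require 'riddle'

# One pool per process; each thread checks a client out and reuses
# its socket instead of reconnecting on every search.
pool = Innertube::Pool.new(
  proc { Riddle::Client.new('localhost', 9312) }, # build a connection
  proc { |client| client.close rescue nil }       # tear one down
)

pool.take do |client|
  client.query('foo', '*')
end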

The HISTORY file does have a list of all the relevant changes, though - here's what's changed between 2.0.14 and 2.1.0. The connection pool and some delta refactoring are the biggest items:

* Removed plugin support - Thinking Sphinx is now gem-only across all branches.
* ThinkingSphinx::Version and the thinking_sphinx:version task have been removed - it's a gem, it has a version number.
* Updating Riddle to 1.5.6 or newer.
* Requires ActiveRecord ~> 2.1 for TS 1.x releases (earlier versions were considered unsupported a few releases ago).
* Allow custom Riddle controllers - useful for Flying Sphinx to take over management of Sphinx daemon/indexing actions.
* Rejigged delta support to be generic, with local job classes that provide a clean, simple interface for third-party libraries.
* Add hooks for anything that needs to happen before indexing (such as clearing out existing delta jobs).
* Connection pool for all Sphinx client communication, with new connections built if there's any connection-related (as opposed to syntax) issues.
* Multiple-field search conditions can be done with arrays of field names as keys in the :conditions hash (Alex Dowad).
* Removed named capture in regular expressions to maintain MRI 1.8 support (Michael Wintrant).
* Support new JDBC configuration style (Kyle Stevens).

--
Pat

Ngan Pham

Aug 11, 2013, 1:15:28 AM
to thinkin...@googlegroups.com
Hm…I was afraid of that…
So, we have 9 application servers with 10-20 processes each.  At high-traffic times, we'd be seeing up to 181 searchd processes (roughly 9 × 20 worker connections, plus the master)?  Won't that blow up memory like crazy?  Is this something everyone normally deals with?

Just curious…what are the benefits of persisted connection pools?

As always, thanks for the quick response Pat!

- Ngan

Pat Allan

Aug 11, 2013, 1:21:53 AM
to thinkin...@googlegroups.com
I've not dealt with anything at that scale… I would presume Sphinx processes share common resources between each process, but you're probably in a better position to verify that.

Certainly, if you're seeing performance issues, then I'm happy to look at patching TS to turn off persistent connections based on a configuration option. It shouldn't be too hard to make that change.

--
Pat

Pat Allan

Aug 11, 2013, 1:26:09 AM
to thinkin...@googlegroups.com
And the benefit is faster query times - the socket isn't being set up and torn down on every single search request. From the quick testing I did back when I made this change, it made a noticeable difference to Sphinx query times (no numbers at hand, but I think it was at least a 50% improvement, if not closer to 80%).
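
A rough way to measure that difference yourself (illustrative host/port, and assuming Riddle's persistent-connection open/close calls; numbers will vary by setup):

require 'benchmark'
require 'riddle'

n = 100

fresh = Benchmark.realtime do
  n.times { Riddle::Client.new('localhost', 9312).query('foo', '*') }
end

client = Riddle::Client.new('localhost', 9312)
client.open # hold one socket open across queries
reused = Benchmark.realtime do
  n.times { client.query('foo', '*') }
end
client.close

puts format('fresh: %.3fs, reused: %.3fs', fresh, reused)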

--
Pat


Ngan Pham

Aug 11, 2013, 10:19:24 AM
to thinkin...@googlegroups.com
I see...

So I've tested it out, and it's a bit of a disaster for my situation. Some search requests are getting dropped, and the server that runs Sphinx is hitting memory limits. I could, in theory, solve the problem by increasing RAM; however, we have fallback servers... and it would suck to have to upgrade those as well. I'll do some research on how people with large Sphinx workloads - like Airbnb - handle this.

As for my two cents: I never felt the cost of opening a connection to Sphinx on every request was much of an issue. It's very fast for us currently, and shaving 50-80% off something that already takes only milliseconds, in exchange for more memory usage and less scalability, isn't worth it for me specifically. It would be awesome if you added a configuration option to disable persistent connections! I will buy you a beer! :)

Pat Allan

Aug 12, 2013, 8:08:54 AM
to thinkin...@googlegroups.com
The change was pretty simple:
https://github.com/pat/thinking-sphinx/commit/d5249ea0215128d5d1f916ee42882932ddb86ba7

To use it (for the 2.x releases of Thinking Sphinx):

gem 'thinking-sphinx', '~> 2.1.0',
  :git    => 'git://github.com/pat/thinking-sphinx.git',
  :branch => 'v2',
  :ref    => '64da4dc7ff'

And then in an initialiser:

ThinkingSphinx.persistence_enabled = false
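
If it should only be disabled in certain environments, the same setting can be wrapped in a condition (the environment check here is just an example):

# config/initializers/thinking_sphinx.rb
ThinkingSphinx.persistence_enabled = false if Rails.env.production?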

Will be keen to hear if everything returns to normal for you!

Ngan

Aug 12, 2013, 12:24:01 PM
to thinkin...@googlegroups.com
You are awesome!  Thanks!

I've deployed to production and everything works great.  I'll be keeping an eye out for my original problem of errors on deployment.

Can't wait for 2.1.1 :-)

Ngan

Sep 12, 2013, 2:44:46 AM
to thinkin...@googlegroups.com
Hey Pat,

Just had the "no enabled local indexes to search" error come back again (after a while of not seeing it post-deploy).  Any thoughts on why this is still happening?

Thanks,
Ngan

Pat Allan

Sep 12, 2013, 8:36:13 AM
to thinkin...@googlegroups.com
Hi Ngan

How often is it recurring now that it's returned?

-- 
Pat

Ngan Pham

Sep 12, 2013, 10:18:16 AM
to thinkin...@googlegroups.com
Once, out of 10 deploys.

Pat Allan

Sep 16, 2013, 6:04:40 AM
to thinkin...@googlegroups.com
Hi Ngan

I'm not sure why it's cropped up again - but I'm thinking it's not worth worrying about unless it reoccurs a bit more regularly. The error isn't lasting for more than half a minute or so, right? Or did it persist for a while?

-- 
Pat

Ngan Pham

Sep 16, 2013, 10:21:23 AM
to thinkin...@googlegroups.com
When it happens, the only way to fix is to restart the app ASAP.

Pat Allan

Sep 22, 2013, 5:36:19 AM
to thinkin...@googlegroups.com
Just trying to think through why the connection pool isn't resetting when it hits this error. Don't suppose you have a stack trace on hand that I could have a look at?

Ngan

Sep 30, 2013, 12:37:36 PM
to thinkin...@googlegroups.com
Riddle::ResponseError | Riddle::ResponseError in #search [No response from searchd (status: , version: )]

app/shared/bundle/ruby/1.9.1/gems/riddle-1.5.7/lib/riddle/client.rb:300:in `rescue in run'
app/shared/bundle/ruby/1.9.1/gems/riddle-1.5.7/lib/riddle/client.rb:228:in `run'
app/shared/bundle/ruby/1.9.1/gems/riddle-1.5.7/lib/riddle/client.rb:347:in `query'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/connection.rb:66:in `method_missing'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:439:in `block (3 levels) in populate'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:403:in `block in take_client'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/connection.rb:16:in `block in take'
app/shared/bundle/ruby/1.9.1/gems/innertube-1.1.0/lib/innertube.rb:138:in `take'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/connection.rb:13:in `take'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:401:in `take_client'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:438:in `block (2 levels) in populate'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/notifications.rb:123:in `block in instrument'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/notifications/instrumenter.rb:20:in `instrument'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/notifications.rb:123:in `instrument'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:566:in `log'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:575:in `log'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:437:in `block in populate'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:616:in `call'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:616:in `retry_on_stale_index'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:434:in `populate'
app/shared/bundle/ruby/1.9.1/bundler/gems/thinking-sphinx-64da4dc7ffe8/lib/thinking_sphinx/search.rb:276:in `total_pages'
app/releases/20130926165735/app/controllers/api/v0/auction_controller.rb:45:in `search'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal/implicit_render.rb:4:in `send_action'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/abstract_controller/base.rb:167:in `process_action'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal/rendering.rb:10:in `process_action'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/abstract_controller/callbacks.rb:18:in `block in process_action'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:470:in `block in _run__1248331046949264328__process_action__3007344294528156339__callbacks'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:215:in `block in _conditional_callback_around_6184'
app/releases/20130926165735/app/concerns/origin_logging.rb:40:in `set_registrar_request'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:214:in `_conditional_callback_around_6184'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:414:in `_run__1248331046949264328__process_action__3007344294528156339__callbacks'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:405:in `__run_callback'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:385:in `_run_process_action_callbacks'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:81:in `run_callbacks'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/abstract_controller/callbacks.rb:17:in `process_action'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal/rescue.rb:29:in `process_action'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal/instrumentation.rb:30:in `block in process_action'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/notifications.rb:123:in `block in instrument'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/notifications/instrumenter.rb:20:in `instrument'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/notifications.rb:123:in `instrument'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal/instrumentation.rb:29:in `process_action'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal/params_wrapper.rb:207:in `process_action'
app/shared/bundle/ruby/1.9.1/gems/activerecord-3.2.11/lib/active_record/railties/controller_runtime.rb:18:in `process_action'
app/shared/bundle/ruby/1.9.1/gems/newrelic_rpm-3.6.7.159/lib/new_relic/agent/instrumentation/rails3/action_controller.rb:38:in `block in process_action'
app/shared/bundle/ruby/1.9.1/gems/newrelic_rpm-3.6.7.159/lib/new_relic/agent/instrumentation/controller_instrumentation.rb:324:in `perform_action_with_newrelic_trace'
app/shared/bundle/ruby/1.9.1/gems/newrelic_rpm-3.6.7.159/lib/new_relic/agent/instrumentation/rails3/action_controller.rb:37:in `process_action'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/abstract_controller/base.rb:121:in `process'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/abstract_controller/rendering.rb:45:in `process'
app/releases/20130926165735/lib/error_logging.rb:16:in `process'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal.rb:203:in `dispatch'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal/rack_delegation.rb:14:in `dispatch'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_controller/metal.rb:246:in `block in action'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/routing/route_set.rb:73:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/routing/route_set.rb:73:in `dispatch'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/routing/route_set.rb:36:in `call'
app/shared/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:68:in `block in call'
app/shared/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:56:in `each'
app/shared/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:56:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/routing/route_set.rb:601:in `call'
app/shared/bundle/ruby/1.9.1/gems/omnicontacts-0.2.5/lib/omnicontacts/middleware/base_oauth.rb:41:in `call'
app/shared/bundle/ruby/1.9.1/gems/omnicontacts-0.2.5/lib/omnicontacts/middleware/base_oauth.rb:41:in `call'
app/shared/bundle/ruby/1.9.1/gems/omnicontacts-0.2.5/lib/omnicontacts/middleware/base_oauth.rb:41:in `call'
app/shared/bundle/ruby/1.9.1/gems/omnicontacts-0.2.5/lib/omnicontacts/builder.rb:27:in `call'
app/shared/bundle/ruby/1.9.1/gems/omniauth-1.1.4/lib/omniauth/strategy.rb:184:in `call!'
app/shared/bundle/ruby/1.9.1/gems/omniauth-1.1.4/lib/omniauth/strategy.rb:164:in `call'
app/shared/bundle/ruby/1.9.1/gems/omniauth-1.1.4/lib/omniauth/builder.rb:49:in `call'
app/shared/bundle/ruby/1.9.1/gems/newrelic_rpm-3.6.7.159/lib/new_relic/rack/error_collector.rb:43:in `call'
app/shared/bundle/ruby/1.9.1/gems/newrelic_rpm-3.6.7.159/lib/new_relic/rack/agent_hooks.rb:22:in `call'
app/shared/bundle/ruby/1.9.1/gems/newrelic_rpm-3.6.7.159/lib/new_relic/rack/browser_monitoring.rb:16:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/best_standards_support.rb:17:in `call'
app/shared/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/etag.rb:23:in `call'
app/shared/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/conditionalget.rb:25:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/head.rb:14:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/params_parser.rb:21:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/flash.rb:242:in `call'
app/shared/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/session/abstract/id.rb:210:in `context'
app/shared/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/session/abstract/id.rb:205:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/cookies.rb:341:in `call'
app/shared/bundle/ruby/1.9.1/gems/activerecord-3.2.11/lib/active_record/query_cache.rb:64:in `call'
app/shared/bundle/ruby/1.9.1/gems/activerecord-3.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:479:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/callbacks.rb:28:in `block in call'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:405:in `_run__3425041011820436873__call__2237876334082526967__callbacks'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:405:in `__run_callback'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:385:in `_run_call_callbacks'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/callbacks.rb:81:in `run_callbacks'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/callbacks.rb:27:in `call'
app/shared/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/sendfile.rb:102:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/remote_ip.rb:31:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/debug_exceptions.rb:16:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/show_exceptions.rb:56:in `call'
app/shared/bundle/ruby/1.9.1/gems/railties-3.2.11/lib/rails/rack/logger.rb:32:in `call_app'
app/shared/bundle/ruby/1.9.1/gems/railties-3.2.11/lib/rails/rack/logger.rb:16:in `block in call'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/tagged_logging.rb:22:in `tagged'
app/shared/bundle/ruby/1.9.1/gems/railties-3.2.11/lib/rails/rack/logger.rb:16:in `call'
app/shared/bundle/ruby/1.9.1/gems/actionpack-3.2.11/lib/action_dispatch/middleware/request_id.rb:22:in `call'
app/shared/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/methodoverride.rb:21:in `call'
app/shared/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/runtime.rb:17:in `call'
app/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.11/lib/active_support/cache/strategy/local_cache.rb:72:in `call'
app/shared/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/lock.rb:15:in `call'
app/shared/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:136:in `forward'
app/shared/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:245:in `fetch'
app/shared/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:185:in `lookup'
app/shared/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:66:in `call!'
app/shared/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:51:in `call'
app/shared/bundle/ruby/1.9.1/gems/railties-3.2.11/lib/rails/engine.rb:479:in `call'
app/shared/bundle/ruby/1.9.1/gems/railties-3.2.11/lib/rails/application.rb:223:in `call'
app/shared/bundle/ruby/1.9.1/gems/railties-3.2.11/lib/rails/railtie/configurable.rb:30:in `method_missing'
.rvm/gems/ruby-1.9.3-p327/gems/passenger-4.0.5/lib/phusion_passenger/rack/thread_handler_extension.rb:77:in `process_request'
.rvm/gems/ruby-1.9.3-p327/gems/passenger-4.0.5/lib/phusion_passenger/request_handler/thread_handler.rb:140:in `accept_and_process_next_request'
.rvm/gems/ruby-1.9.3-p327/gems/passenger-4.0.5/lib/phusion_passenger/request_handler/thread_handler.rb:108:in `main_loop'
.rvm/gems/ruby-1.9.3-p327/gems/passenger-4.0.5/lib/phusion_passenger/request_handler.rb:441:in `block (3 levels) in start_threads'

Ngan

Mar 24, 2014, 5:10:01 PM
to thinkin...@googlegroups.com
Hi Pat,

I know this thread is quite old... but this problem is still happening for us - even more so now that we're deploying a lot more often.  Any insights?

Thanks,
Ngan

Pat Allan

Mar 26, 2014, 7:02:20 AM
to thinkin...@googlegroups.com
I’m afraid I still have no idea why this is happening. There is code in place with the connection pooling that should clear out a dodgy connection from the pool and get a new one, so the fact that this is persisting between connections is frustrating.

It’s all the more frustrating given you’re not using persistent connections *between* requests, so each request is truly separate.

Although, just to confirm: the earlier error you reported was 'no enabled local indexes to search', but the stack trace supplied is for 'No response from searchd'. Is there any difference in the stack trace? Is it still a Riddle::ResponseError class being raised either way?


Ngan

Mar 27, 2014, 11:19:58 AM
to thinkin...@googlegroups.com
I think we're suffering this bug:

We reindex every 2-3 minutes.  The strange thing is that we actually have two Sphinx servers.  When Riddle gets this error from the main server, it doesn't fail over to the second server for some reason.  Is it because we're only failing over on some errors and not others?
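
For context on the "some errors and not others" question, a hedged sketch (hypothetical host names, and a hand-rolled failover rather than anything built into TS): Riddle raises Riddle::ConnectionError when the socket can't be established, but Riddle::ResponseError when searchd answers and reports a problem, so a wrapper that only rescues the former would never fail over on errors like "no enabled local indexes to search".

require 'riddle'

# Hypothetical failover wrapper: only socket-level failures move us to
# the next server; searchd-reported errors are surfaced as-is.
def query_with_failover(search, hosts = ['sphinx-main', 'sphinx-failover'])
  Riddle::Client.new(hosts.first, 9312).query(search, '*')
rescue Riddle::ConnectionError
  raise if hosts.length == 1
  hosts = hosts.drop(1) # fall over to the next server and retry
  retry
end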

To answer your question, yea it's a SphinxError either way:
ThinkingSphinx::SphinxError: no enabled local indexes to search
...

Ngan

Mar 27, 2014, 11:30:48 AM
to thinkin...@googlegroups.com
Actually, looking at our searchd logs, that bug has only hit us once, ever.

Most of the time it's this:
[Mon Mar 24 03:31:46.984 2014] [19547] WARNING: rotating index 'auction_core': cur to old rename failed: rename /.../app/releases/20140324030521/db/sphinx/production/auction_core.spl to /.../app/releases/20140324030521/db/sphinx/production/auction_core.old.spl failed: No such file or directory

I think this is because I'm rsyncing the index files from the main Sphinx server to the failover and NOT excluding the ".spl" lock files when I should be.  I've changed the rsync command to this:

rsync -rpt --delete --exclude="*.spl" --exclude="*.old.*" --exclude="*.tmp.*" --exclude="*.new.*" #{database_path}/ #{address}:#{database_path} 2>&1

Hopefully that's what was causing the issue, and this solves it... Let me know if you have any feedback on the rsync excludes - whether I'm missing anything else.  Thanks for your help, Pat; as always, you're the best!

Pat Allan

Mar 27, 2014, 7:14:03 PM
to thinkin...@googlegroups.com
Ah, great to know that you have found a possible fix :) Hopefully it does the trick!

