strange timeouts

snacktime

Aug 5, 2009, 9:54:14 PM
to Typhoeus
I have Typhoeus built with --enable-ares and --enable-nonblocking.
Every 70-100 requests one times out (response code 0). The timeout is
set to 500 ms. My code will try up to two different servers before
failing, and the failures always come in pairs: if the first attempt
times out, so does the second. All the other requests finish in under
50 ms, with 10 ms being the average. It's really strange, and I don't
think it's our network; the connection is never made and the server
never gets the request.
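
A stripped-down sketch of the kind of call that's failing, following the
same include-Typhoeus pattern as my real class (the endpoint and payload
here are just placeholders):

require 'rubygems'
require 'typhoeus'
require 'json'

class Probe
  include Typhoeus
end

# Placeholder endpoint and payload; the timeout is in milliseconds,
# matching the 500 ms mentioned above.
resp = Probe.post('http://worker1.example.com/jobs',
                  :body    => { :id => 1 }.to_json,
                  :headers => { 'Content-Type' => 'application/json' },
                  :timeout => 500)

# When it times out, the response comes back with code 0 instead of
# raising, which is how the failures show up in my logs.
puts "timed out (time = #{resp.time})" if resp.code.to_i == 0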

Using the same logic but with Net::HTTP I get no timeouts at all; the
only difference is that I wrap both attempts in a one-second timer.
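
Roughly, the Net::HTTP version looks like this (the URL and payload are
placeholders; the one-second timer is just a Timeout wrapper around the
whole attempt):

require 'net/http'
require 'timeout'
require 'json'
require 'uri'

# Post the job with a one-second ceiling on the whole attempt.
# Returns the response, or nil if the timer fires.
def post_with_timer(url, payload)
  Timeout.timeout(1) do
    uri = URI.parse(url)
    Net::HTTP.start(uri.host, uri.port) do |http|
      http.post(uri.path, payload.to_json,
                'Content-Type' => 'application/json')
    end
  end
rescue Timeout::Error
  nil
end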

Paul Dix

Aug 6, 2009, 8:52:10 AM
to typh...@googlegroups.com
Can you provide a gist of the code that is making the calls? I'm
wondering if it's an issue with the underlying libcurl easy objects.

Thanks,
Paul

snacktime

Aug 6, 2009, 3:42:30 PM
to Typhoeus
Sure, here is the class.

class Client
  include Typhoeus

  def initialize
    @logger = WebHook::Logger.new
  end

  # Tries each server in turn; returns true on the first 200 response,
  # false if every server fails or an unexpected error is raised.
  def enqueue(job, servers, timeout = Config.client_timeout)
    servers.each do |server|
      begin
        url  = Config.server_to_uri(server)
        resp = self.class.post(url,
                               :body    => job.to_json,
                               :headers => { 'Content-Type' => 'application/json' },
                               :timeout => timeout)
        if resp.code.to_i == 200
          @logger.debug("client enqueue: timeout = #{timeout}, time = #{resp.time}, url = #{url} code = #{resp.code} body = #{resp.body}")
          return true
        else
          @logger.info("client enqueue error: timeout = #{timeout}, time = #{resp.time}, url = #{url} code = #{resp.code} body = #{resp.body}")
          next
        end
      rescue Errno::ECONNREFUSED
        @logger.info("client enqueue error: connection refused #{url}")
        next
      rescue Exception => e
        @logger.info("client enqueue error: #{url} #{e}")
        return false
      end
    end
    return false
  end
end
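
It gets called roughly like this (the job object and server names here
are placeholders; the real job is whatever WebHook object responds to
to_json):

# Placeholder invocation showing the two-server fallback described above.
client = Client.new
client.enqueue(job, ['worker1', 'worker2'])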