patch to limit maximum parallel connections

anaconda

Jan 28, 2008, 7:05:50 AM1/28/08
to Capistrano
When using Capistrano with a large number of servers you run into
trouble: Capistrano consumes all your memory and CPU resources, and
starting hundreds of threads isn't very effective anyway.
I'm attaching a simple patch for connections.rb which allows you to
limit the maximum number of parallel sessions by setting the task
option :maxsessions or by setting the environment variable
CAPMAXSESSIONS. Setting :maxsessions to 0 disables the limit.
Without any settings the number of parallel sessions is limited to
100, which typical server hardware can at least handle.
For me this patch works without any negative side effects, but I
haven't tested all of Capistrano's features!
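
For anyone reading the patch below, here is a standalone sketch of
the option-resolution logic it adds. The resolve_max_sessions helper
is my own name for illustration (the patch does this inline); note
that ENV values are strings, hence the to_i conversion.

```ruby
# Sketch of the :maxsessions resolution rules from the patch:
# fall back to CAPMAXSESSIONS, then to 100; 0 means "no limit",
# i.e. the batch size becomes the full server count.
def resolve_max_sessions(options, env, server_count)
  max = options[:maxsessions]
  max = (env['CAPMAXSESSIONS'] || 100).to_i if max.nil?
  max = server_count if max == 0
  max
end

resolve_max_sessions({}, {}, 1200)                            # => 100
resolve_max_sessions({}, { 'CAPMAXSESSIONS' => '50' }, 1200)  # => 50
resolve_max_sessions({ :maxsessions => 0 }, {}, 1200)         # => 1200
```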

--- connections.rb 2008-01-28 12:39:10.000000000 +0100
+++ connections-maxsessions.rb 2008-01-28 12:45:17.000000000 +0100
@@ -1,5 +1,6 @@
require 'capistrano/gateway'
require 'capistrano/ssh'
+require 'enumerator'

module Capistrano
class Configuration
@@ -121,22 +122,30 @@
logger.trace "servers: #{servers.map { |s| s.host }.inspect}"

# establish connections to those servers, as necessary
- begin
- establish_connections_to(servers)
- rescue ConnectionError => error
- raise error unless task && task.continue_on_error?
- error.hosts.each do |h|
- servers.delete(h)
- failed!(h)
+      task.options[:maxsessions] = (ENV['CAPMAXSESSIONS'] || 100).to_i if task.options[:maxsessions].nil?
+      task.options[:maxsessions] = servers.length if task.options[:maxsessions] == 0
+ servers.each_slice(task.options[:maxsessions]) do |servers|
+ begin
+ establish_connections_to(servers)
+ rescue ConnectionError => error
+ raise error unless task && task.continue_on_error?
+ error.hosts.each do |h|
+ servers.delete(h)
+ failed!(h)
+ end
end
- end

- begin
- yield servers
- rescue RemoteError => error
- raise error unless task && task.continue_on_error?
- error.hosts.each { |h| failed!(h) }
- end
+ begin
+ yield servers
+ rescue RemoteError => error
+ raise error unless task && task.continue_on_error?
+ error.hosts.each { |h| failed!(h) }
+ end
+
+ # clear all sessions for this slice of servers
+ sessions.clear
+
+ end
end

private
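
To illustrate the batching behaviour the patch relies on, a small
standalone example (host names are made up): each_slice is what
bounds the number of concurrent connections per batch.

```ruby
require 'enumerator'  # needed on Ruby 1.8, where the patch was written

# With a batch size of 3, ten "servers" are processed in four groups
# of at most 3, instead of all ten connections being opened at once.
servers = (1..10).map { |i| "host#{i}" }
batches = []
servers.each_slice(3) { |batch| batches << batch }

batches.length        # => 4
batches.first.length  # => 3
batches.last          # => ["host10"]
```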

Jamis Buck

Jan 28, 2008, 10:50:19 AM1/28/08
to capis...@googlegroups.com
Thanks for the patch! However, it looks like it resets the sessions on
every command, which drastically reduces the efficiency of Capistrano
in general. I can see your point about large numbers of servers,
though, so if you can write up a patch that keeps the sessions handy
between commands, I would consider it.

In the future, though, please post all patches to the Rails trac:
http://dev.rubyonrails.org. That way they won't get lost in the
shuffle. Thanks!

- Jamis


anaconda

Jan 29, 2008, 5:57:12 AM1/29/08
to Capistrano
Do you see any chance to keep the sessions handy without consuming
too much memory? I use Capistrano to manage up to 1200 hosts within
one 'run' statement, normally with no complex tasks. Before clearing
out the sessions for every slice of servers, Capistrano consumed over
2 GB of memory and was getting really slow for the last servers. The
only thing I can think of at the moment is to deactivate the session
clearing when the number of servers is smaller than :maxsessions or
when :maxsessions is set to 0.
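
That conditional clearing could look something like this sketch (the
helper name and simplified signature are my own; this is an
illustration of the idea, not a tested change to Capistrano):

```ruby
# Only tear down sessions when batching is actually in effect, so
# small runs keep their connections cached between commands.
# `sessions`, `servers`, and `max` mirror names from the patch.
def clear_batch_sessions(sessions, servers, max)
  sessions.clear if max > 0 && servers.length > max
  sessions
end

clear_batch_sessions({ :host1 => :conn }, [1, 2, 3], 2)  # => {} (cleared)
clear_batch_sessions({ :host1 => :conn }, [1, 2], 2)     # kept as-is
clear_batch_sessions({ :host1 => :conn }, [1, 2, 3], 0)  # kept (limit off)
```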

I'll take a look at it and post a revised version of this patch at
http://dev.rubyonrails.org

- Andi

Jamis Buck

Jan 29, 2008, 9:53:13 AM1/29/08
to capis...@googlegroups.com
Net::SSH v2 is almost ready for plugging into a beta version of
Capistrano. I'll be really curious to know whether it cuts down on the
memory requirements in your case.

- Jamis
