question about limits


orekdm

Jul 15, 2011, 5:40:20 PM
to taskspooler
I just kicked off a process to feed ~2200 scripts to ts and it seems
to have frozen around job 1057. The processes are running, but when I
run ts to view the queue, it just freezes.

Is there a way to tune this to accommodate large queues? Otherwise, are
there recommended limits I should stay within?

Thanks!

Keith

Lluís Batlle i Rossell

Jul 17, 2011, 2:05:31 PM
to tasks...@googlegroups.com

Hello,

thank you for pushing ts to its limits. :)

As you may know, 'ts' works by creating a process for each enqueued job,
each connected to the server through a Unix socket. Since systems may have
trouble with large numbers of processes (even idle, waiting ones) and large
numbers of open file descriptors, I set a limit of around 1000 connections,
disallowing any new connection beyond that. Past that point, any new ts
connection (either enqueuing or listing) blocks.
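
A rough way to observe this from the outside (a sketch; it assumes the
default TS_SOCKET path under /tmp and a pgrep that supports -x/-c):

    pgrep -xc ts                     # roughly one waiting client process per enqueued job
    ls -l /tmp/socket-ts.$(id -u)    # the server's Unix socket at its default path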

I have not come up with a good solution to this yet... maybe I should
implement a protocol that allows clients to disconnect from the server
while always welcoming 'listing' connections. Something that would be
transparent to the user.

That's the best idea I have for overcoming the problem. If you have any
suggestions, they're welcome!

Regards,
Lluís

mark meissonnier

Jul 17, 2011, 8:26:00 PM
to tasks...@googlegroups.com
I've run into that problem myself, and in some cases I've had to write scripts that count the size of the list and avoid sending new tasks when it's close to the limit.
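
A minimal sketch of such a guard, assuming plain sh, that 'ts' with no arguments prints a header line plus one line per job, and an arbitrary margin of 900 below the ~1000 cap:

    LIMIT=900
    # Wait while the number of jobs known to the server is at the margin.
    while [ "$(ts | tail -n +2 | wc -l)" -ge "$LIMIT" ]; do
        sleep 10
    done
    ts newtask.sh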

It would at least be nice to have a standard error message saying it is not taking any new tasks, without hanging...
I mean, it's easy to handle in a managing script:

    if ! ts newtask.sh; then
        sleep 100
        # ... or something like it
    fi

Because for now, once it's hanging, it messes up everything... (1000 does seem small, but whatever the fixed limit is, you can always hit it, so I think dealing with the limit gracefully is also part of the problem.)

Anyway, my quick 2 cents.
Thanks
Mark

Lluís Batlle i Rossell

Jul 18, 2011, 3:04:12 AM
to tasks...@googlegroups.com
On Sun, Jul 17, 2011 at 05:26:00PM -0700, mark meissonnier wrote:
> I've run into that problem myself, and in some cases I've had to write
> scripts that count the size of the list and avoid sending new tasks when
> it's close to the limit.

That's something I hit too rarely myself; maybe that's why ts doesn't take
much care of this case. :)

>
> Because for now, once it's hanging, it messes up everything... (1000 does
> seem small, but whatever the fixed limit is, you can always hit it, so I
> think dealing with the limit gracefully is also part of the problem.)

I agree.
It should be easy to implement; I'll try to write it tonight.

Nevertheless, I'd head towards some kind of blocking of the enqueuing, without
blocking other ts operations. After that, I'd make failing on enqueue (with a
returned error code) a command-line option.

Does that sound fine?

Thank you Mark!


mark meissonnier

Jul 18, 2011, 8:44:52 AM
to tasks...@googlegroups.com

> Nevertheless, I'd head towards some kind of blocking of the enqueuing, without
> blocking other ts operations. After that, I'd make failing on enqueue (with a
> returned error code) a command-line option.
>
> Does that sound fine?

Lluís, that sounds great. Basically, avoiding a jam/freeze is what it's about, whatever the implementation.
Thanks again a million.

Mark

Lluís Batlle i Rossell

Jul 18, 2011, 4:53:46 PM
to tasks...@googlegroups.com
On Mon, Jul 18, 2011 at 05:44:52AM -0700, mark meissonnier wrote:
>
> Lluís, that sounds great. Basically, avoiding a jam/freeze is what it's
> about, whatever the implementation.
> Thanks again a million.

OK, time for you to test the blocking behaviour I implemented!

It's on the hg default branch.
hg clone http://vicerveza.homeunix.net/~mercurial/cgi-bin/hgwebdir.cgi/ts#default

Can you try it?

Regards,
Lluís.

mark meissonnier

Jul 19, 2011, 7:55:27 AM
to tasks...@googlegroups.com
I do not have any bandwidth for the coming 48h... Will take a look at it at the end of the week.
thx
Mark

Lluís Batlle i Rossell

Jul 19, 2011, 1:33:52 PM
to tasks...@googlegroups.com
On Tue, Jul 19, 2011 at 04:55:27AM -0700, mark meissonnier wrote:
> I do not have any bandwidth for the coming 48h... Will take a look at it at the
> end of the week.

Don't worry, Mark!

Keith, could you try it? Or anyone else?
Let me note that this tarball link should also work for you:
http://vicerveza.homeunix.net/~mercurial/cgi-bin/hgwebdir.cgi/ts/archive/80751242a508.tar.gz

Not needing Mercurial may make this easier to test. :)

Lluís Batlle i Rossell

Jul 22, 2011, 5:09:59 PM
to tasks...@googlegroups.com
On Tue, Jul 19, 2011 at 04:55:27AM -0700, mark meissonnier wrote:
> I do not have any bandwidth for the coming 48h... Will take a look at it at the
> end of the week.
> thx
> Mark

The changes worked well for me, even on Cygwin. Cygwin allowed an impressively
low number of simultaneous jobs... due to open fd limits, I imagine. But I could
run 'ts' to list the queue while the enqueuing script was blocked on a ts
enqueue command.

mark meissonnier

Jul 23, 2011, 11:05:14 AM
to tasks...@googlegroups.com
So what I'm observing is that when you reach the limit and insert a job above
it, ts hangs; when you kill it (Ctrl-C), you can see the job failed (was
killed)...

0    finished   /tmp/ts-out.UreE8G   0        20.00/0.00/0.00 sleep 20
995  finished   (...)                -1       0.00/0.00/0.00 sleep 20

mark@jack:~/Downloads/ts-80751242a508$ ./ts -i 995
Exit status: killed by signal 9
Command: sleep 20

Is that the expected behavior?
The good news is that the task spooler is still responsive for previously inserted tasks once the excess job is inserted (which wasn't the case before; it used to break).
The bad news is that the insertion of the "excess job" hangs, which means the script handling job insertion will hang as well...
Is there any way for ts to return an "error message" as you insert the straw that breaks the camel's back?

Thanks for the good work.
Much appreciated...
Cheers

Mark

Lluís Batlle i Rossell

Jul 23, 2011, 3:39:45 PM
to tasks...@googlegroups.com
On Sat, Jul 23, 2011 at 08:05:14AM -0700, mark meissonnier wrote:
> Is that the expected behavior?
> The good news is that the task spooler is still responsive for previously
> inserted tasks once the excess job is inserted (which wasn't the case
> before; it used to break).
> The bad news is that the insertion of the "excess job" hangs, which means
> the script handling job insertion will hang as well...

Yes, this is what I meant it to do. That way the enqueuing script does not need
to handle anything specially; it works the same way as writing to a full pipe
on Unix.
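
For instance, a naive feeding loop like this needs no special handling; each
enqueue simply blocks until the server has room (the jobs/*.sh paths are
placeholders):

    for f in jobs/*.sh; do
        ts "$f"    # blocks while the server queue is full
    done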

> Is there any way for ts to return an "error message" as you insert the
> straw that breaks the camel's back?

I could add a command-line option for that. I'll try to get it working.

Thank you!

Lluís Batlle i Rossell

Jul 28, 2011, 1:52:19 PM
to tasks...@googlegroups.com
On Sat, Jul 23, 2011 at 08:05:14AM -0700, mark meissonnier wrote:
> Is there any way for ts to return an "error message" as you insert the
> straw that breaks the camel's back?

There it is:
http://vicerveza.homeunix.net/~mercurial/cgi-bin/hgwebdir.cgi/ts/archive/dc46c806c6f4.tar.gz

The parameter '-B' makes ts exit with return code 2 when the server queue is
full, instead of waiting. Without that parameter, the client waits until the
server can allocate the job.

The server will have room for a new job once a running job finishes or an
enqueued job is removed.
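
A usage sketch based on that description; the back-off interval is arbitrary,
and for simplicity the loop retries on any non-zero exit, not only code 2:

    # With -B, a full server queue makes the enqueue fail fast with
    # exit code 2 instead of blocking.
    until ts -B ./newtask.sh; do
        sleep 60    # arbitrary back-off before retrying
    done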

I chose 'B' quite at random. If anyone has a suggestion for a better letter,
I'll update it.

mark meissonnier

Jul 28, 2011, 4:32:30 PM
to tasks...@googlegroups.com
great!!