client appears to constantly reserve jobs.

Ian Docherty

Oct 22, 2012, 6:54:57 AM
to beansta...@googlegroups.com
We have a situation that only occurs in our production environment, and we can't reproduce it on our test/UAT systems.

What appears to happen is that we put a job on a queue, and the client reserves the job ten times (the maximum number of attempts we allow), at which point we bury the job.

Our client code is basically as follows (some detail omitted).

use Carp;

sub consume {
    my $self = shift;

    my $job;
    RESERVE: {
        my $bs = $self->beanstalk;       # Beanstalk::Client connection
        $bs->watch('my_queue');
        $job = $bs->reserve;             # blocks until a job is available
        my $stats = $job->stats;
        if ($stats->reserves > 10) {     # job has exceeded its reserve limit
            carp "Job failed max reserves. Burying.";
            $job->bury;
            redo RESERVE;                # go back and wait for the next job
        }
    }
    return $job;
}

The fact that the job is being buried (we see the error in the log) means that the client is at least calling 'consume'.

We don't get any other debug messages in the log file (not even a log message emitted immediately after the call to consume).

We don't think we are getting any client timeout errors (that is checked in the consume method, but not shown in the code above), and in any case the job is failing very quickly: we are using the default timeout of 120 seconds (we will pass an explicit timeout in the next release), and the max-reserves error occurs within seconds.
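
For what it's worth, passing an explicit timeout is a small change. A minimal sketch, assuming a Beanstalk::Client connection in $bs and that reserve() accepts an optional timeout in seconds (the reserve-with-timeout form of the protocol); the 5-second value here is just illustrative:

    # Sketch only: give up after 5 seconds instead of blocking indefinitely.
    my $job = $bs->reserve(5);
    unless (defined $job) {
        warn "no job available within 5s: " . $bs->error;
    }

With a bounded reserve, a consumer that is silently losing its connection should at least surface an error here rather than appearing to hang.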

So, the hypothesis is that the client is crashing (even though we don't get any error messages in the log file).

I note from a separate question in this mailing list that a possible cause is 'the client connection to the server closes'. Is there anything I would expect to see in a log file that I could check for this happening?

Kind Regards
Ian


Ian Docherty

Oct 24, 2012, 3:52:29 AM
to beansta...@googlegroups.com
Apologies to the list for the 'noise'.

The problem turned out to be another instance of the program, running in a test environment, connecting to the production environment (yes, I know: firewalls; I was just as surprised). When that instance reserved jobs before the production machine could process them, the number of retries would be exceeded.

Sorry.