Re: delete a job following reboot


Chad Kouse

Aug 29, 2012, 5:13:46 PM
to beansta...@googlegroups.com
Yeah. That's a hard problem, depending on where in the process your consumer died. You can only delete a job you have reserved, and if you close/reopen the connection to beanstalkd then you lose the client reservation that delete relies on. You will need to code around this in your consumer, just as you would with a transactional DB that dies after committing a transaction but before telling the client it finished.
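The connection-bound reservation is visible at the protocol level (protocol.txt): a `delete <id>` only succeeds for a ready or buried job, or for a job reserved by *this* connection; sent from any other connection while the job is reserved, it gets NOT_FOUND. A minimal sketch of the raw commands involved (helper names are illustrative, not part of any client library):

```python
# Helpers that format raw beanstalkd protocol commands (per protocol.txt).
# A reservation belongs to the TCP connection that issued "reserve", so a
# "delete <id>" sent over a *different* connection gets NOT_FOUND while
# the job is reserved elsewhere: there is no way to hand a reservation over.

def cmd_reserve():
    """Ask the server to reserve the next ready job for this connection."""
    return b"reserve\r\n"

def cmd_delete(job_id):
    """Delete a job; valid only for ready/buried jobs or our own reservation."""
    return b"delete %d\r\n" % job_id
```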

--
Chad Kouse

On Wednesday, August 29, 2012 at 6:59 AM, Ben Nagy wrote:

Hi,

What's the best way to handle this: a client reserves a job, the client has to reboot, after the reboot the client recovers, does its work, and deletes the job (still within the TTR). It looks like delete won't work if beanstalkd thinks you're a different connection from the one that reserved the job...

Thanks!

ben

--
You received this message because you are subscribed to the Google Groups "beanstalk-talk" group.
To view this discussion on the web visit https://groups.google.com/d/msg/beanstalk-talk/-/V_VbBVjyXJsJ.
To post to this group, send email to beansta...@googlegroups.com.
To unsubscribe from this group, send email to beanstalk-tal...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/beanstalk-talk?hl=en.

Ben Nagy

Sep 1, 2012, 1:19:51 AM
to beansta...@googlegroups.com
On Thursday, August 30, 2012 2:58:52 AM UTC+5:45, chadkouse wrote:
Yeah. That's a hard problem, depending on where in the process your consumer died. You can only delete a job you have reserved, and if you close/reopen the connection to beanstalkd then you lose the client reservation that delete relies on. You will need to code around this in your consumer, just as you would with a transactional DB that dies after committing a transaction but before telling the client it finished.

The only other way I can think of to handle this is to reserve and delete the job immediately, but then the client has to absolutely guarantee it will be correctly processed, which is tricky, and avoiding that kind of workflow is why I chose beanstalkd in the first place!
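That "delete immediately" pattern looks roughly like this as code, with a toy in-memory client standing in for a real beanstalkd connection (the client and processor names are hypothetical, not a real library API):

```python
def consume_delete_first(client, process):
    """Reserve a job, delete it from the queue immediately, then do the
    work. Once the delete succeeds the queue can no longer redeliver the
    job, so any failure inside process() loses it: the worker itself must
    guarantee completion, which is the trade-off described above."""
    job = client.reserve()
    client.delete(job["id"])     # job is gone from the server now
    process(job["body"])         # a crash here means the job is lost

class FakeClient:
    """Tiny in-memory stand-in for a beanstalkd connection (illustration only)."""
    def __init__(self, jobs):
        self.jobs = list(jobs)
        self.deleted = []
    def reserve(self):
        return self.jobs[0]
    def delete(self, job_id):
        self.deleted.append(job_id)
        self.jobs = [j for j in self.jobs if j["id"] != job_id]
```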

I guess I was hoping that there was some way I could emulate this behaviour with the existing API; otherwise it seems like a possible solution would be to patch the protocol to allow an 'override' for DELETE, even if the job appears to be reserved by someone else. Even if used incorrectly it shouldn't cause too many issues, since clients already need to handle the chance that they delete too late and find the job has already been put back in the ready state.

Are there any horrible consequences to that approach that I have missed?

Cheers,

ben 

jab_doa

Sep 2, 2012, 1:46:47 PM
to beansta...@googlegroups.com
Hi,

we had the same problem. Basically, what we needed is behaviour like this:
* worker reserves job -> job gets reserved
* worker dies -> job gets buried

This would be handy for a lot of jobs where the error should be reviewed. In other cases it's OK to just release the job and run it again.
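A worker that is still alive when the failure happens can get close to this behaviour today by burying on exception (sketch with a hypothetical client exposing reserve/bury/delete; an outright crash or reboot still falls back to TTR release, which is the gap this thread is about):

```python
class ToyClient:
    """Minimal in-memory stand-in for a beanstalkd connection (illustration)."""
    def __init__(self, jobs):
        self.jobs = list(jobs)
        self.buried = []
        self.deleted = []
    def reserve(self):
        return self.jobs.pop(0)
    def bury(self, job_id):
        self.buried.append(job_id)
    def delete(self, job_id):
        self.deleted.append(job_id)

def run_and_bury_on_error(client, process):
    """Reserve a job; delete it on success, bury it on a caught failure
    so the error can be reviewed later from the buried queue."""
    job = client.reserve()
    try:
        process(job["body"])
    except Exception:
        client.bury(job["id"])   # keep the job around for inspection
        raise
    client.delete(job["id"])
```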


Jan

Ben Nagy

Sep 3, 2012, 12:36:55 AM
to beansta...@googlegroups.com
Hi Jan,

This is not the same problem at all. I'm not concerned with a worker dying unexpectedly; I have a pattern where a worker MUST reboot in order to successfully complete the work unit. Once the reboot is done, the worker finishes the work and wants to delete the job as successfully completed. If anything goes wrong along the way, I still want the TTR to kick in and the job to be released.

Cheers,

ben

Chad Kouse

Sep 3, 2012, 9:16:32 AM
to beansta...@googlegroups.com
Would it be possible to have your consumer remote-control a different server to process the job and remotely issue the reboot command, then poll the remote server to see when it comes back up? In other words, the consumer wouldn't actually do the work or reboot, but would be responsible for telling a third-party server what to do.

This way you wouldn't lose the reference to the reserved job; however, it would add some complexity and additional components to your process.
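That split looks roughly like this as code: a supervisor process keeps the beanstalkd connection (and so the reservation) open while a separate machine does the work and reboots. All the remote-control names here (`remote.start`, `remote.is_done`) are hypothetical stand-ins:

```python
import time

def supervise(client, remote, poll_interval=0.5):
    """Hold the reservation on this connection while a remote box does
    the actual work (including its reboot), then delete on completion."""
    job = client.reserve()
    remote.start(job["body"])        # kick off work + reboot elsewhere
    while not remote.is_done():      # poll until the remote reports done
        time.sleep(poll_interval)
    client.delete(job["id"])         # this connection was never dropped
```

The supervisor still has to finish within the job's TTR, or beanstalkd will release the job anyway.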

--
Chad Kouse


Ben Nagy

Sep 3, 2012, 11:39:33 AM
to beansta...@googlegroups.com
On Monday, September 3, 2012 7:01:42 PM UTC+5:45, chadkouse wrote:
Would it be possible to have your consumer remote-control a different server to process the job and remotely issue the reboot command, then poll the remote server to see when it comes back up? In other words, the consumer wouldn't actually do the work or reboot, but would be responsible for telling a third-party server what to do.

This way you wouldn't lose the reference to the reserved job; however, it would add some complexity and additional components to your process.

I think that would just leave me with the same level of complexity, but at one remove. I went over the existing protocol again, and it seems like I can't do this with beanstalkd, which is extremely vexing. Even if I hacked in a delete command that could delete jobs in other clients' reserved queues, there's still every chance that the connection would error while the client was rebooting, and the job would then get released anyway. Bury would work, if you could bury with a timeout :/ Maybe I could use a new tube, one per client, and use that as a 'shelf' of sorts.
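The per-client 'shelf' tube idea could be driven with plain protocol commands (protocol.txt): before rebooting, the worker copies its reserved job into a tube nobody else watches, then deletes the original; after the reboot it watches that tube and reserves the copy. Putting the copy before deleting the original keeps at-least-once semantics. The tube naming below is purely illustrative:

```python
def shelve_commands(worker_id, job_id, body, pri=1024, delay=0, ttr=3600):
    """Raw beanstalkd protocol commands to move a reserved job onto a
    private 'shelf' tube before a planned reboot. The copy is put first,
    so a crash between the two commands duplicates rather than loses
    the job."""
    tube = "shelf-%s" % worker_id
    put = "put %d %d %d %d\r\n%s\r\n" % (pri, delay, ttr, len(body), body)
    return [
        "use %s\r\n" % tube,        # subsequent puts go to the shelf tube
        put,                        # store a copy of the job body there
        "delete %d\r\n" % job_id,   # drop the original we have reserved
    ]
```

After the reboot, the worker would issue `watch shelf-<worker_id>` plus `reserve` to pick the copy back up, then `delete` it on success.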

Anyway, thanks for the ideas.

Cheers,

ben