Hi,
I'm looking at the new Streams API and trying to work out whether it's a good fit for my use case. The problem I'm having is with acknowledgement.
I have an external system placing requests onto a queue (Beanstalk in this case). Once the external system receives the OK from beanstalk, much like an event during replay, I'm committed to performing that job.
My plan was to build an ActorPublisher-based Beanstalk client on top of the existing Akka IO code, and have it reserve jobs on the beanstalk server. Reserving starts a time-to-run (TTR) countdown on the server: if I do not delete or release the job within that window, the server assumes I'm dead and puts the job back onto the queue (https://github.com/kr/beanstalkd/wiki/faq#how-does-ttr-work). This is perfect in an HA system, because if my Akka node dies, beanstalk will simply release the job for another node to process.
My problem is that once my ActorPublisher has handed the job to "onNext", it has no way to track if and when the job is completed, so the only point at which it can delete the job from beanstalk is straight away, and then I've lost the whole advantage of the TTR window.
I'm leaning towards a Sink such that the stream has to be `BeanstalkPublisher ~> F1 ~> F2 ~> BeanstalkDeleter`, but this means the data type flowing through F1 and F2 has to include the job id. That makes Flow composition a little more annoying, as I had planned to reuse F1 and F2 in pipelines without a Beanstalk source.
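For what it's worth, the least ugly version of this I've come up with so far is to write F1 and F2 against the payload type only and lift them into an id-carrying shape when wiring the Beanstalk pipeline. Rough sketch, all names made up, not the actual client code:

```scala
// A job handle from Beanstalk: the reserved job's id (hypothetical type).
final case class JobId(value: Long)

// F1 and F2 written against the payload type only, so they stay reusable
// in pipelines that have no Beanstalk source (stand-in implementations).
val f1: String => String = _.trim
val f2: String => Int    = _.length

// Lift a payload function into one that carries the job id along unchanged,
// so the final deleter stage can read the JobId and ack the job.
def carryId[A, B](f: A => B): ((JobId, A)) => (JobId, B) = {
  case (id, payload) => (id, f(payload))
}

// The Beanstalk pipeline would then be built from the lifted stages:
// BeanstalkPublisher ~> map(carryId(f1)) ~> map(carryId(f2)) ~> BeanstalkDeleter
```

It works, but every stage pays the tupling tax, which is exactly the annoyance I'd like to avoid.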
Am I missing something obvious, or is this the only way to achieve an ACK back to the upstream publisher once F2 has completed processing? (Note: in this example F2 is the persistence point.)
Cheers,
Dom