Kafka Streams currently supports at-least-once processing guarantees in the presence of failure. This means that if your stream processing application fails, no data records are lost or left unprocessed, but some data records may be re-read and therefore reprocessed.
- recordCollector.flush() calls producer.flush(), which is fine: at this point all pending records have been sent, and we block until every future is done (completed successfully or with an error)
- only then are the offsets committed
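The flush-then-commit ordering above can be sketched with a simplified model (no real Kafka clients; the class and method names here are illustrative, not from the Kafka Streams code base): each send is represented by a future, flush blocks until all of them are done, and only afterwards are offsets committed.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Simplified model of the flush-then-commit sequence: offsets are committed
// only after every in-flight send future has completed.
public class FlushThenCommit {
    // Waits for all sends (the producer.flush() step), then returns the
    // offsets that would be committed: one past the highest acked offset.
    public static Map<String, Long> flushAndCommit(List<CompletableFuture<Long>> inFlight) {
        // Equivalent of producer.flush(): block until all futures are done.
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        long maxOffset = -1L;
        for (CompletableFuture<Long> f : inFlight) {
            maxOffset = Math.max(maxOffset, f.join());
        }
        Map<String, Long> committed = new HashMap<>();
        committed.put("topic-0", maxOffset + 1); // commit the next offset to read
        return committed;
    }
}
```

If the application crashes after flush but before the commit step, the offsets are not advanced and the records are re-read on restart, which is exactly the at-least-once behavior described above.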
If a future from the recordCollector completes with an error, we end up in this callback:
private final Callback callback = new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception == null) {
            TopicPartition tp = new TopicPartition(metadata.topic(), metadata.partition());
            offsets.put(tp, metadata.offset());
        } else {
            // The error is only logged; nothing here stops the subsequent offset commit.
            log.error("Error sending record: " + metadata, exception);
        }
    }
};
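Since the callback only logs the exception, the offset commit can still go ahead even though a record failed to be sent. One common pattern (this is a hypothetical sketch, not the actual Kafka Streams implementation; the class and method names are made up) is to remember the first send error and check it before committing:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical variant of the collector: instead of only logging a send
// failure, remember the first error so the commit path can abort.
public class ErrorAwareCollector {
    private final AtomicReference<Exception> firstSendError = new AtomicReference<>();

    // Invoked from the producer callback when each send completes.
    public void onCompletion(long offset, Exception exception) {
        if (exception != null) {
            // Keep only the first error; later ones are usually consequences of it.
            firstSendError.compareAndSet(null, exception);
        }
    }

    // Called before committing offsets; throws if any send failed,
    // so offsets for failed records are never committed.
    public void checkForSendError() {
        Exception e = firstSendError.get();
        if (e != null) {
            throw new IllegalStateException("record send failed; aborting offset commit", e);
        }
    }
}
```

With this shape, a failed send surfaces as an exception on the commit path rather than being silently swallowed, preserving the at-least-once guarantee.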