Hi,
We're constantly facing issues with StreamEventJournalP and jobs based on it. The main problem is that the event journal capacity is fixed and divided evenly across partitions. During peak load it can easily happen in our case that one partition receives too many events and its journal slice overflows, silently losing events. No capacity configuration is ideal then: it is either too small for such spikes or too large in total. Besides, there seems to be no way to handle the overflowed events and notify users.
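To put rough numbers on the even split: with Hazelcast's default of 271 partitions, each partition gets only a small fixed slice of whatever total capacity you configure. A minimal sketch of that arithmetic (the 1,000,000 total is an illustrative figure, not our actual config):

```java
public class JournalCapacityMath {
    // Journal capacity is split evenly across partitions, so each
    // partition's slice is total / partitionCount.
    static int perPartitionCapacity(int totalCapacity, int partitionCount) {
        return totalCapacity / partitionCount;
    }

    public static void main(String[] args) {
        // 271 is Hazelcast's default partition count; the total is an
        // example value chosen for illustration only.
        int perPartition = perPartitionCapacity(1_000_000, 271);
        System.out.println(perPartition); // prints 3690

        // A skewed key routing 10x the average traffic to one partition
        // overflows that ~3690-slot slice while the other slices stay
        // mostly empty.
    }
}
```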
I've now written my own source processor which listens to map events, buffers items in an MPSCQueue and then emits them. I'd like to know your opinion: how could this solution be worse than StreamEventJournalP? I understand that the in-memory buffer won't be backed up by other nodes, but this will be covered by an extra process that finds forgotten messages (a scheduled job calling IMap.localKeySet with a predicate).
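The selection logic of that "forgotten messages" sweep can be sketched with a plain Map standing in for the IMap; the real job would pass a Hazelcast Predicate to IMap.localKeySet so the filter runs member-locally. The entry shape and the emittedAt field are made-up placeholders for illustration:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class ForgottenSweep {
    // Stand-in for the map value; emittedAt marks when the entry was last
    // pushed downstream (0 = never). Field names are hypothetical.
    record Entry(String payload, long emittedAt) {}

    // In the real job this filter would be a Hazelcast Predicate given to
    // IMap.localKeySet(predicate); here an in-process Map stands in.
    static Set<String> findForgotten(Map<String, Entry> map) {
        return map.entrySet().stream()
                .filter(e -> e.getValue().emittedAt() == 0L)
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Map<String, Entry> map = Map.of(
                "a", new Entry("already emitted", System.currentTimeMillis()),
                "b", new Entry("forgotten", 0L));
        System.out.println(findForgotten(map)); // prints [b]
    }
}
```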
Source processor:
private transient IMap<Object, T> map;
private transient MPSCQueue<T> resultsQueue;
private transient Traverser<T> traverser;
...
@Override
protected void init(@Nonnull Context context) {
    // Create the queue before registering the listener; otherwise an
    // event arriving between the two calls hits a null queue.
    resultsQueue = new MPSCQueue<>(null);
    map = context.jetInstance().getMap(mapName);
    map.addLocalEntryListener((EntryAddedListener<Object, T>) added ->
            resultsQueue.offer(added.getValue()));
    traverser = resultsQueue::poll;
}

@Override
public boolean complete() {
    // Drain whatever is buffered right now; returning false keeps this
    // unbounded source running.
    emitFromTraverser(traverser);
    return false;
}