If the appender crashes during a write, the incomplete message is truncated on the next write.
The index and the data can be out of sync on disk, so in the event of a power failure the file could be corrupted. If the index is more up to date than the data, the entries will appear to be full of null bytes; if the data is more up to date, those entries will be lost.
The best way to prevent loss is to use replication.
If a reader updates a record, you should use the thread-safe operations. The update will be visible to other readers; however, there is no event-driven way for another reader to know it happened (without re-reading the same data).
You could use CRC32, but replication is better IMHO. I suggest adding the checksum to the end of the message.
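A minimal sketch of the append-a-checksum idea, using java.util.zip.CRC32. The class and method names here are just for illustration, not Chronicle API:

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class ChecksumDemo {
    // Append a CRC32 of the payload to the end of the message.
    static ByteBuffer writeWithChecksum(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        ByteBuffer buf = ByteBuffer.allocate(payload.length + 8);
        buf.put(payload);
        buf.putLong(crc.getValue()); // checksum goes at the end
        buf.flip();
        return buf;
    }

    // On read, recompute the CRC over the payload and compare.
    static boolean verify(ByteBuffer buf) {
        byte[] payload = new byte[buf.remaining() - 8];
        buf.get(payload);
        long stored = buf.getLong();
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue() == stored;
    }

    public static void main(String[] args) {
        ByteBuffer msg = writeWithChecksum("hello".getBytes());
        System.out.println(verify(msg)); // prints "true"
    }
}
```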
Vanilla Chronicle and Chronicle Queue version 4 don't need the length as well; the record will only be as long as what you wrote.
Instead of a status field I would store the timestamp at which the record was processed. This gives the field a dual purpose. For extra monitoring you might want:
- The time the record was written
- The time the record was first read
- The time the record was processed
By performing the "first read" update as a compare-and-swap from 0, you can assign the record to exactly one reader. If the CAS fails, the record was already read (possibly by another worker).
You might like to record which worker it was assigned to.
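A sketch of the claim-by-CAS idea. An AtomicLongArray stands in here for the memory-mapped record slots Chronicle would give you; a slot value of 0 means "not yet read":

```java
import java.util.concurrent.atomic.AtomicLongArray;

public class ClaimDemo {
    // One "first read" timestamp slot per record; 0 means unclaimed.
    // In Chronicle this slot would live in the record itself, off-heap.
    final AtomicLongArray firstRead;

    ClaimDemo(int records) {
        firstRead = new AtomicLongArray(records);
    }

    // Claim record `index` for this worker. The CAS from 0 succeeds for
    // exactly one caller; everyone else sees false (already claimed).
    boolean tryClaim(int index) {
        return firstRead.compareAndSet(index, 0, System.currentTimeMillis());
    }

    public static void main(String[] args) {
        ClaimDemo demo = new ClaimDemo(4);
        System.out.println(demo.tryClaim(0)); // prints "true": first claim wins
        System.out.println(demo.tryClaim(0)); // prints "false": already claimed
    }
}
```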
Lastly, you can have another process watching for records that are taking a long time to be read or a long time to be processed. If a record is taking too long to process, you can trigger a stack trace to start to see why.
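A sketch of the stall check using the timestamps above. The 1-second threshold and the method names are made up for illustration; the stack dump here uses the in-process Thread.getAllStackTraces(), whereas a separate monitor would use jstack or similar:

```java
import java.util.Map;

public class StallMonitor {
    static final long LIMIT_MS = 1_000; // hypothetical threshold

    // Timestamps are epoch millis; processedMs == 0 means "not yet processed".
    static boolean isStalled(long writtenMs, long processedMs, long nowMs) {
        return processedMs == 0 && nowMs - writtenMs > LIMIT_MS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        if (isStalled(now - 5_000, 0, now)) {
            // Dump all thread stacks to see where the consumer is stuck.
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                System.out.println(e.getKey());
                for (StackTraceElement ste : e.getValue())
                    System.out.println("\tat " + ste);
            }
        }
    }
}
```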
--
You received this message because you are subscribed to the Google Groups "Chronicle" group.
At the moment it is not possible to sync without writing a record. You could add a dummy message periodically.
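A sketch of the periodic dummy write, using a plain ScheduledExecutorService with a countdown standing in for the real appender call:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatDemo {
    // Schedule a dummy write every `periodMs` and wait for `count` of them.
    // The countDown() call stands in for appending a small dummy record,
    // which forces the queue to sync even when there is no real traffic.
    static boolean awaitHeartbeats(int count, long periodMs, long timeoutMs)
            throws InterruptedException {
        CountDownLatch written = new CountDownLatch(count);
        ScheduledExecutorService ses =
                Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(written::countDown, 0, periodMs,
                TimeUnit.MILLISECONDS);
        try {
            return written.await(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            ses.shutdownNow();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(awaitHeartbeats(3, 50, 2_000)); // prints "true"
    }
}
```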