Hello,
I suggest using Chronicle Queue v4.x. I have some examples on my blog of how to ensure a reader does not reprocess messages it has already processed. See Microservices parts 2 & 3 in particular.
https://vanilla-java.github.io/
You can also store the index in Chronicle Map: save the index, and on restart reset the Tailer to replay from that point. When the index rolls over to a new cycle, you could delete the older files.
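A minimal sketch of this checkpoint/resume pattern; for illustration it persists the index to a plain properties file rather than a Chronicle Map (with Chronicle you would put the index in a persisted map and call `tailer.moveToIndex(savedIndex)` on restart). All class and key names here are illustrative:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Illustrative checkpoint store: persists the last-processed index so a
// restarted reader can resume where it left off. With Chronicle you would
// keep the index in a persisted ChronicleMap instead and restore the
// Tailer with tailer.moveToIndex(savedIndex).
public class IndexCheckpoint {
    private final Path file;

    public IndexCheckpoint(Path file) {
        this.file = file;
    }

    // Save the index durably after each processed message (or batch).
    public void save(long index) throws IOException {
        Properties p = new Properties();
        p.setProperty("readerIndex", Long.toString(index));
        try (OutputStream out = Files.newOutputStream(file)) {
            p.store(out, "reader checkpoint");
        }
    }

    // Load the saved index, or -1 if no checkpoint exists yet.
    public long load() throws IOException {
        if (!Files.exists(file)) return -1L;
        Properties p = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            p.load(in);
        }
        return Long.parseLong(p.getProperty("readerIndex", "-1"));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("checkpoint", ".properties");
        IndexCheckpoint cp = new IndexCheckpoint(tmp);
        cp.save(12345L);               // after processing a message
        System.out.println(cp.load()); // prints 12345 on restart
        Files.deleteIfExists(tmp);
    }
}
```

The important property is that the save happens only after the message has been fully processed, so a crash between processing and saving causes at most a reprocessed message, never a lost one.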
Regards, Peter.
--
You received this message because you are subscribed to the Google Groups "Chronicle" group.
To unsubscribe from this group and stop receiving emails from it, send an email to java-chronicle+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Hello Sam,
Are you using Vanilla Chronicle? If so, you can determine from the index when the cycle has changed, and at that point you can delete the old directory. We haven't supported V3 for quite some time, so I don't remember all the details.
Best regards, Peter.
I've managed to get hold of Chronicle v4.5.15.
I am using a persistent map to keep the index, and it is working well.
The only issue I have is with deleting old files.
Is there a simple way to get the file path for a given cycle? Right now I am using a StoreFileListener to add the file path to a map keyed by cycle. When I am retrieving elements and see the cycle has changed, I look up the map and delete that file. However, the delete fails because the file still appears to be open.
Any advice?
ChronicleQueue q = ….
final File file = q.file();
q.close();
if (file.isDirectory())
    IOTools.shallowDeleteDirWithFiles(file);

IOTools is in Chronicle Core.
This will delete all of the queue files, meaning unprocessed elements will be lost.
I want to delete only the files that have been completely read, which is why I keep track of the cycle numbers.
Is there any way of getting the path of the file for a given cycle?
Thanks
You can get the cycle for an index with:

int cycle = ((SingleChronicleQueue) queue).rollCycle().toCycle(<index>);

and list the cycles with:

((SingleChronicleQueue) queue).listCyclesBetween(lowerCycle, upperCycle);

From the cycle you can work out the filename. As an example, this code shows how you can create a tree map keyed on cycle with the filename as value; see net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.StoreSupplier#cycleTree:

private NavigableMap<Long, File> cycleTree() {
    final File parentFile = path;
    if (!parentFile.exists())
        throw new AssertionError("parentFile=" + parentFile.getName() + " does not exist");
    final RollingResourcesCache dateCache = SingleChronicleQueue.this.dateCache;
    final NavigableMap<Long, File> tree = new TreeMap<>();
    final File[] files = parentFile.listFiles((File file) -> file.getName().endsWith(SUFFIX));
    for (File file : files) {
        tree.put(dateCache.toLong(file), file);
    }
    return tree;
}

Rob
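Stripped of the Chronicle internals, the same cycle-to-file mapping can be sketched with the standard library alone. The `.cq4` suffix and `yyyyMMdd` daily naming are assumptions of this illustration (they match the DAILY roll cycle's file format):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.NavigableMap;
import java.util.TreeMap;

// Builds a cycle -> file map for daily-rolled queue files named yyyyMMdd.cq4.
// The cycle number here is days since the epoch, mirroring a DAILY roll cycle.
public class CycleTreeDemo {
    static final String SUFFIX = ".cq4";
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMdd");

    static NavigableMap<Long, File> cycleTree(File dir) {
        NavigableMap<Long, File> tree = new TreeMap<>();
        File[] files = dir.listFiles(f -> f.getName().endsWith(SUFFIX));
        if (files == null) return tree;
        for (File f : files) {
            String date = f.getName().substring(0, f.getName().length() - SUFFIX.length());
            long cycle = LocalDate.parse(date, FMT).toEpochDay(); // days since epoch
            tree.put(cycle, f);
        }
        return tree;
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("queue").toFile();
        new File(dir, "20230101.cq4").createNewFile();
        new File(dir, "20230102.cq4").createNewFile();
        NavigableMap<Long, File> tree = cycleTree(dir);
        // Delete the files for every cycle strictly before the latest one.
        tree.headMap(tree.lastKey(), false).values().forEach(File::delete);
        System.out.println(tree.lastEntry().getValue().getName()); // prints 20230102.cq4
    }
}
```

The NavigableMap makes "all cycles older than the current one" a single `headMap` call, which is exactly the set of files that are safe to delete once the reader has moved past them.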
When I call File.delete() it fails; the file still seems to be held by Chronicle.
When does Chronicle release the file so that it can be deleted? When I debug, I see that the listener's onRelease method isn't called even when the queue has moved on to a new cycle.
I have tested on Linux and the files are being deleted as expected.
I do see some messages saying "Rolled n times to find the next cycle file. This can occur if your appenders have not written anything for a while ...".
Currently I have a TEST_SECONDLY roll cycle, so I'm guessing the above message is due to no data being written for gaps longer than one second. Is this safe?
Thanks
One last thing: if I leave out '.rollCycle' when constructing the queue, what is the default behaviour?
I'd prefer to have the file roll on a weekly basis.
public AbstractChronicleQueueBuilder(File path) {
    this.rollCycle = RollCycles.DAILY;
    this.blockSize = 64L << 20;
    this.path = path;
    this.wireType = WireType.BINARY_LIGHT;
    this.epoch = 0;
    this.bufferCapacity = 2 << 20;
    this.indexSpacing = -1;
    this.indexCount = -1;
}
TEST_SECONDLY("yyyyMMdd-HHmmss", 1000, 1 << 15, 4), // only good for testing
MINUTELY("yyyyMMdd-HHmm", 60 * 1000, 2 << 10, 16), // 64 million entries per minute
TEST_HOURLY("yyyyMMdd-HH", 60 * 60 * 1000, 16, 4), // 512 entries per hour.
HOURLY("yyyyMMdd-HH", 60 * 60 * 1000, 4 << 10, 16), // 256 million entries per hour.
TEST_DAILY("yyyyMMdd", 24 * 60 * 60 * 1000, 8, 1), // Only good for testing - 63 entries per day
TEST2_DAILY("yyyyMMdd", 24 * 60 * 60 * 1000, 16, 2), // Only good for testing
TEST4_DAILY("yyyyMMdd", 24 * 60 * 60 * 1000, 32, 4), // Only good for testing
SMALL_DAILY("yyyyMMdd", 24 * 60 * 60 * 1000, 8 << 10, 8), // 512 million entries per day
DAILY("yyyyMMdd", 24 * 60 * 60 * 1000, 16 << 10, 16), // 4 billion entries per day
LARGE_DAILY("yyyyMMdd", 24 * 60 * 60 * 1000, 32 << 10, 32), // 32 billion entries per day
XLARGE_DAILY("yyyyMMdd", 24 * 60 * 60 * 1000, 128 << 10, 256), // 2 trillion entries per day
HUGE_DAILY("yyyyMMdd", 24 * 60 * 60 * 1000, 512 << 10, 1024), // 256 trillion entries per day
;
Thanks a lot for all your help!
For deleting rolled files when running on Windows (detected via the os.name system property), I register the file with a scheduled task that periodically attempts to delete registered files. Eventually the file handle is released and the deletion succeeds. Works OK.
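A sketch of that deferred-deletion approach; the class and method names are illustrative, not from any Chronicle API:

```java
import java.io.File;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Retries deletion of files whose handles may still be held. This matters
// mainly on Windows, where memory-mapped queue files cannot be deleted
// while any process still has them open.
public class DeferredDeleter {
    private final Queue<File> pending = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public DeferredDeleter(long periodMillis) {
        scheduler.scheduleAtFixedRate(this::sweep, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    // Register a rolled file; it will be deleted once the OS releases it.
    public void register(File file) {
        pending.add(file);
    }

    // Drop entries that are gone or that we managed to delete this sweep.
    private void sweep() {
        pending.removeIf(f -> !f.exists() || f.delete());
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

In practice you would gate the deferred path behind a check such as `System.getProperty("os.name").toLowerCase().contains("win")` and fall through to an immediate delete on other platforms, where the delete usually succeeds first time.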