Hey All,
I'm running compaction with a filter to remove some bad data from a datasource. I've found that if the filter removes all the rows in a time chunk, the compaction task publishes 0 segments and the original segment, bad data and all, remains in place. I'd expect there to be a flag or setting that drops the original segment when compaction filters out every row, but I haven't found one in the documentation yet. Does anyone have any thoughts?
I did find the `dropExisting` flag, but it's still marked as beta, and it normally isn't needed for compaction even with filtering. Everything works as intended as long as some data remains in the time chunk after the filter runs: the old segment is overshadowed and dropped, and the newly compacted segment takes its place.
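For reference, here's roughly the shape of the compaction task I'm submitting (the datasource name, interval, and filter dimension are placeholders; the real filter is more involved):

```json
{
  "type": "compact",
  "dataSource": "my_datasource",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2023-01-01/2023-02-01"
    },
    "dropExisting": false
  },
  "transformSpec": {
    "filter": {
      "type": "not",
      "field": {
        "type": "selector",
        "dimension": "is_bad_row",
        "value": "true"
      }
    }
  }
}
```

Flipping `dropExisting` to `true` seems like it might cover the empty-time-chunk case, but given its beta status I wanted to ask before relying on it.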
Thanks,
Dan