Hi,
I asked the same question in the Druid user forum but did not receive a reply, so I'm trying here.
The data I upload keeps changing, so I need to reload it while avoiding duplicates and collisions with data that was already loaded. But all the information I have found so far covers Hadoop batch ingestion and lookups, not streaming.
Is it possible to update existing Druid data while streaming from Kafka?
In other words, I need to overwrite old values with new ones using the Kafka indexing service (streaming from Kafka).
Maybe there is some setting that makes it overwrite duplicates?
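For context, here is roughly the Kafka supervisor spec I am submitting (a minimal sketch; the datasource, topic, column, and broker names are placeholders, not my real config):

```json
{
  "type": "kafka",
  "spec": {
    "dataSchema": {
      "dataSource": "my-datasource",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["host", "service"] },
      "granularitySpec": {
        "segmentGranularity": "HOUR",
        "queryGranularity": "NONE",
        "rollup": false
      }
    },
    "ioConfig": {
      "topic": "my-topic",
      "consumerProperties": { "bootstrap.servers": "kafka:9092" },
      "useEarliestOffset": false
    },
    "tuningConfig": { "type": "kafka" }
  }
}
```

I don't see anything in `ioConfig` or `tuningConfig` that would deduplicate or replace rows that were already ingested, which is why I'm asking whether such a setting exists.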
Thanks!