Hello there,
I'm totally new to Druid, and I'd like to know if it could fit our needs.
We have a Kafka Streams app producing time-series data that we currently store in a SQL database (TimescaleDB, specifically). We are considering replacing this storage with, possibly, Druid.
This Kafka Streams app produces its output in the form of KTables, in Kafka jargon, where each record represents an update on a given key. This means the same key can appear several times.
To store this data, we thus need to work in UPSERT mode. From what I read in the Druid docs, it does not seem that Druid can consume a Kafka topic with these UPSERT semantics.
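To make the requirement concrete, here is a minimal sketch (plain Python, not Druid or Kafka Streams code; the keys and values are made up) of what our changelog topic looks like and what UPSERT semantics mean for it: a later record for the same key replaces the earlier one, and queries should only see the latest value per key.

```python
# Hypothetical changelog records as (key, value) pairs, in topic order.
# A later record for the same key is an update, not a new entry.
changelog = [
    ("sensor-1", {"temp": 20.5}),
    ("sensor-2", {"temp": 18.0}),
    ("sensor-1", {"temp": 21.0}),  # same key again: this should win
]

def latest_per_key(records):
    """Last write wins: keep only the most recent value for each key."""
    state = {}
    for key, value in records:
        state[key] = value  # overwrite any earlier value for this key
    return state

print(latest_per_key(changelog))
```

A plain append-only ingestion would keep all three rows; what we need is storage (or querying) that resolves "sensor-1" to its latest update only.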
What's the proper way of doing this in Druid?
Folks using Kafka Streams here, how do you handle this?
Thanks in advance
Mathieu