Thank you for trying it out, Dongkyu.
Yes, the metrics are definitely on my TODO list. I will see if I can generate an .mdl (metric descriptor language) file that consumes the Druid metrics via the Druid emitters.
I haven't really explored CDH's metrics capabilities yet, but judging from the Kafka CSD example it shouldn't be difficult.
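For reference, Druid can already push its metrics out through the HTTP emitter; a minimal sketch of the relevant runtime properties is below. The recipient URL and the monitor class name are assumptions (the monitor package has changed across Druid versions), so treat this as illustrative only:

```
# Hypothetical runtime.properties fragment: emit Druid metrics over HTTP
druid.emitter=http
# assumed collector endpoint, replace with your own
druid.emitter.http.recipientBaseUrl=http://metrics-collector:8080/
# monitor class name varies by Druid version; verify against your release
druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
```

An .mdl file for the CSD would then describe these emitted metrics so Cloudera Manager can chart them.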
Other TODO items are:
- expose more configuration properties
- add configuration validators and safeguards for memory and thread configuration
- add a pull-deps command for plugins
- add rolling restart
- test Druid upgrades
- clean up the CSD and parcel build tools for production use
- set up a repo
Some configuration values are not user-serviceable for now, and deep storage support is currently HDFS only.
Nonetheless, I was able to ingest via the Kafka Indexing Service as well as run batch ingestion using Hadoop.
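For anyone who wants to try the same path, Kafka ingestion is driven by a supervisor spec POSTed to the Overlord. The sketch below shows the overall shape only; the datasource name, topic, and broker address are placeholders, and the full spec needs a complete dataSchema for your data:

```
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "example-datasource",
    "granularitySpec": { "segmentGranularity": "HOUR", "queryGranularity": "NONE" }
  },
  "ioConfig": {
    "topic": "example-topic",
    "consumerProperties": { "bootstrap.servers": "kafka-broker:9092" }
  }
}
```

The spec is submitted to the Overlord's supervisor endpoint (`/druid/indexer/v1/supervisor`), after which the Kafka Indexing Service manages the ingestion tasks.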
At the moment I'm working on an ETL extension; I will improve the CSD after that.
Kamsahamnida (thank you),
Kenji Noguchi