One follow-up question on this: when we say it can scale to petabytes of data, do we need that much space on the historical nodes? Does this mean that if I have 1 petabyte of data to analyze, I will need at least 1 petabyte (without replication) on HDFS (deep storage) and 1 petabyte of total disk space across the historical nodes?
Finally, my understanding is:
Deep storage is not for query scalability; it is only for segment backup and recovery.
All segments must be loaded onto historical nodes before they can be queried.
So if you have 1 TB of segments to query, your historical tier needs more than 1 TB of disk.
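As a rough sketch of how that sizing shows up in practice (druid.server.maxSize and druid.segmentCache.locations are standard historical-node settings; the path and byte counts below are made-up illustration values, not recommendations):

    # historical runtime.properties (sketch; values are hypothetical)
    # Total segment bytes this historical will serve; the coordinator
    # will not assign more than this to the node.
    druid.server.maxSize=700000000000
    # Local disk location(s) used to cache segments pulled down from
    # deep storage; maxSize is in bytes and must fit the free disk
    # actually available at the path.
    druid.segmentCache.locations=[{"path":"/mnt/druid/segment-cache","maxSize":700000000000}]

So for 1 TB of unique segments with a Druid replication factor of 2, the historical tier as a whole needs roughly 2 TB of segment-cache disk, while deep storage holds one permanent copy on top of whatever replication HDFS itself applies.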