How many classes? How many producers/owners?
What type of GTS selectors?
We have combined PR #141 and #143. Query performance is much better than before (around 5x better) :D
We still have issues with datapoint ingestion :/ We are stuck around 80k dps | 0.4 req/s | 90 Mb/s, and the load is pretty low (7/56). Is there room for improvement regarding this issue?
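As a quick sanity check, here is a back-of-envelope sketch relating those figures. Only the numbers quoted above come from the thread; treating "90 Mbs" as megabits per second, and the class and variable names, are assumptions for illustration.

// Hypothetical back-of-envelope for the ingestion figures reported above.
// "90 Mbs" is assumed to mean megabits per second; adjust if it was megabytes.
public class IngestRateEstimate {
    public static void main(String[] args) {
        double dps = 80_000;        // datapoints per second
        double reqPerSec = 0.4;     // update requests per second
        double mbitPerSec = 90;     // reported throughput (assumed megabits/s)

        double pointsPerRequest = dps / reqPerSec;       // ~200k datapoints per request
        double bytesPerSec = mbitPerSec * 1_000_000 / 8; // ~11.25 MB/s
        double bytesPerPoint = bytesPerSec / dps;        // ~140 bytes per datapoint on the wire

        System.out.printf("~%.0f datapoints per request%n", pointsPerRequest);
        System.out.printf("~%.0f bytes per datapoint on the wire%n", bytesPerPoint);
    }
}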
I have 6 acceptors and 8 selectors. Batch size is between 5 and 300 Mb. The GTS count is constant, so no new GTS at all. 36 chunks of 1h. The ingestion rate is slower now, around 50k dps, and the Java GC is under heavy pressure :/
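For reference, a tiny sketch of what that chunking implies for the retained history, assuming the in-memory store keeps chunk count times chunk length of data (the class and variable names are illustrative only):

// Hypothetical illustration: retained window of a chunked in-memory store,
// assuming the window is simply chunk count * chunk length.
public class ChunkWindowEstimate {
    public static void main(String[] args) {
        int chunkCount = 36;          // "36 chunks" from the message above
        int chunkLengthMinutes = 60;  // "of 1h"
        int windowHours = chunkCount * chunkLengthMinutes / 60;
        System.out.println("Retained window: ~" + windowHours + " h"); // -> 36 h of history
    }
}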
opt/java8/bin/java \
  -javaagent:/opt/warp/bin/jmx_prometheus_javaagent-0.7.jar=127.0.0.1:9101:/opt/warp/etc/jmx_prometheus.yml \
  -Djava.net.preferIPv4Stack=true \
  -Djava.security.egd=file:/dev/./urandom \
  -Djava.awt.headless=true \
  -Dlog4j.configuration=file:/opt/warp/etc/log4j.properties \
  -Dsensision.server.port=9100 \
  -Dsensision.events.dir=/opt/sensision/data/metrics/ \
  -Dsensision.default.labels=cell=inmemory \
  -Xms64g -Xmx300g -XX:+UseG1GC \
  -cp etc:/opt/warp/bin/warp10-1.2.5-rc6-16-g4abacc1.jar \
  io.warp10.standalone.Warp /opt/warp/etc/warp.conf >> /opt/warp/nohup.out 2>&1
# netstat -na | grep 8080 | grep ESTABLISHED | wc -l
33
Is your dataset supposed to fit in memory? Are you executing some WarpScript which fiddles with lots of data?
Thanks for your gc.log.

The memory consumption of the actual datapoints (the memory reported via Sensision) does not take into account the memory overhead of the various objects. The memory footprint of a GTSEncoder is a little less than 200 bytes, so assuming you have 36 chunks per GTS and 25M GTS as you mentioned in a previous message, the GTSEncoders alone account for a total just south of 200 * 36 * 25M = 180 GB, not counting the actual data. Once your number of GTS has stabilized, though, this won't grow any further.

If you reduce your chunk count to 18 chunks of 12 minutes, you will save ~90 GB of memory.

During the period that your gc log covers, how did the number of GTS evolve?
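As an illustration only, that overhead arithmetic can be written out as below. The 200-byte encoder footprint, the 36 chunks and the 25M GTS come from this thread; the class and variable names are just a sketch.

// Rough sketch of the GTSEncoder overhead estimate discussed above.
public class EncoderOverheadEstimate {
    public static void main(String[] args) {
        long bytesPerEncoder = 200L;        // ~200 bytes per GTSEncoder (upper bound)
        long chunksPerGts    = 36L;         // one encoder per chunk per GTS
        long gtsCount        = 25_000_000L; // ~25M GTS

        long overheadBytes = bytesPerEncoder * chunksPerGts * gtsCount;
        System.out.printf("Encoder overhead: ~%d GB%n", overheadBytes / 1_000_000_000L);
        // -> ~180 GB, not counting the actual datapoints

        // Halving the chunk count (e.g. 18 chunks) halves this overhead:
        long halvedOverhead = bytesPerEncoder * (chunksPerGts / 2) * gtsCount;
        System.out.printf("Saved by halving the chunk count: ~%d GB%n",
                (overheadBytes - halvedOverhead) / 1_000_000_000L);
        // -> ~90 GB saved
    }
}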
On Thu, Jan 19, 2017 at 8:49 PM, Kevin GEORGES <k4ge...@gmail.com> wrote:
Please find attached my gc logs.
Yes, assuming a storage cost of seven bytes per datapoint -> 36 buckets * 6 min * 400k dps * 7 bytes = ~37 GB.
Same behaviour with or without queries.
On Tuesday, January 17, 2017 at 14:56:05 UTC+1, Mathias Herberts wrote:
Could you share your GC logs?
Is your dataset supposed to fit in memory? Are you executing some WarpScript which fiddles with lots of data?