Not able to send multiple data points (more than 64) at a time on HTTP API


Veerendra

Mar 8, 2017, 7:41:13 AM
to OpenTSDB
Hello,
While checking tcollector, I found that if there are many data points, say around 160, the TSD does not accept them: it returns HTTP response code 400. Then I found this issue --> https://github.com/OpenTSDB/opentsdb/issues/324.

I set tsd.http.request.enable_chunked = true and increased tsd.http.request.max_chunk_size to 14000; after that I was able to send 70 data points, but not more than that. Why is tcollector trying to send so many data points at a time? If this is a known issue, why was it implemented like this? Am I missing anything, or should I break the JSON array down into multiple requests?
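
For reference, these are the two TSD settings I changed in opentsdb.conf; a minimal sketch (the 14000 value is just what I tried, not a recommendation):

# opentsdb.conf (TSD side, Java properties format)
# Allow chunked HTTP requests so larger /api/put payloads are accepted
tsd.http.request.enable_chunked = true
# Maximum supported request body size in bytes when chunking is enabled;
# 14000 only got me to about 70 data points per request
tsd.http.request.max_chunk_size = 14000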

Data points example

[
   {
      "timestamp":1488964600,
      "metric":"net.sockstat.num_sockets",
      "value":102.0,
      "tags":{
         "host":"ultron",
         "type":"tcp"
      }
   },
   {
      "timestamp":1488964600,
      "metric":"net.sockstat.num_timewait",
      "value":17.0,
      "tags":{
         "host":"ultron"
      }
   },
   {
      "timestamp":1488964600,
      "metric":"net.sockstat.sockets_inuse",
      "value":79.0,
      "tags":{
         "host":"ultron",
         "type":"tcp"
      }
   }
]
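
If splitting the payload turns out to be the way to go, here is a rough Python sketch of what I mean by breaking the JSON array into multiple smaller requests; the TSD address, port 4242, and the batch size of 50 are assumptions for illustration, not anything tcollector actually does:

# batch_put.py - illustrative sketch only, not tcollector code
import json
import urllib.request

TSD_URL = "http://localhost:4242/api/put"  # assumed TSD location
BATCH_SIZE = 50                            # assumed safe batch size

def send_in_batches(datapoints, batch_size=BATCH_SIZE):
    # POST the data points to /api/put in small batches instead of one big request.
    for i in range(0, len(datapoints), batch_size):
        batch = datapoints[i:i + batch_size]
        body = json.dumps(batch).encode("utf-8")
        req = urllib.request.Request(
            TSD_URL, data=body,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            # /api/put replies 204 No Content on success by default
            if resp.status not in (200, 204):
                raise RuntimeError("TSD rejected batch: HTTP %d" % resp.status)

datapoints = [
    {"timestamp": 1488964600, "metric": "net.sockstat.num_sockets",
     "value": 102.0, "tags": {"host": "ultron", "type": "tcp"}},
    # ... the rest of the points from the example above ...
]
send_in_batches(datapoints)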

Thanks,
Veerendra

ManOLamancha

Apr 25, 2017, 4:30:40 PM
to OpenTSDB
On Wednesday, March 8, 2017 at 4:41:13 AM UTC-8, Veerendra wrote:
Hello,
While checking tcollector, I found that if there are many data points, say around 160, the TSD does not accept them: it returns HTTP response code 400. Then I found this issue --> https://github.com/OpenTSDB/opentsdb/issues/324.

I set tsd.http.request.enable_chunked = true and increased tsd.http.request.max_chunk_size to 14000; after that I was able to send 70 data points, but not more than that. Why is tcollector trying to send so many data points at a time? If this is a known issue, why was it implemented like this? Am I missing anything, or should I break the JSON array down into multiple requests?

There is a disconnect between tcollector and the TSD. You can bump that max request size up pretty high and you won't suffer any real trouble. I'd like to default chunked support to true in the future.

Jonathan Creasy

Apr 25, 2017, 4:58:48 PM
to ManOLamancha, OpenTSDB
If you file a tcollector bug, I can probably resolve the disconnect.

Veeru

Apr 26, 2017, 6:58:34 AM
to Jonathan Creasy, clars...@gmail.com, open...@googlegroups.com
I already opened an issue -> https://github.com/OpenTSDB/tcollector/issues/365. @ManOLamancha, I already did that and increased tsd.http.request.max_chunk_size to some very large number, but the TSD crashed!

Regards,
Veerendra. K

ManOLamancha

May 27, 2017, 5:57:43 PM
to OpenTSDB, jona...@ghostlab.net, clars...@gmail.com
On Wednesday, April 26, 2017 at 3:58:34 AM UTC-7, Veerendra wrote:
I already opened an issue -> https://github.com/OpenTSDB/tcollector/issues/365. @ManOLamancha, I already did that and increased tsd.http.request.max_chunk_size to some very large number, but the TSD crashed!

Hi, then it's possible there was so much data that it OOM'd the TSD. Did you see anything in stderr or stdout from the server?