tickerplant : slow consumer impact on other subscriber/consumer

Rajkumar

Apr 17, 2020, 11:52:25 AM
to AquaQ kdb+/TorQ
Hi,

Several times we have noticed that one slow tickerplant subscriber/consumer impacted the other consumers.

So we would like to understand:
  • How does the tickerplant maintain consumer buffers — is there a dedicated buffer for each consumer, or just one shared buffer with a pointer reference per consumer?
  • What is the default buffer size, and is it configurable?
  • What is the best practice for isolating an individual consumer's impact on the health of the tickerplant and the other consumers?
  • Also, any advice for a latency-sensitive tickerplant setup where the app/feed publishes data as it comes (without any batching/queueing), i.e. one message at a time, at a frequency anywhere from 1 to 36k messages per second?

mbrown

Apr 21, 2020, 6:04:28 AM
to AquaQ kdb+/TorQ
Hi Rajkumar

Subscribers that are slower than your tick rate can't consume every message as it arrives, so their messages wait in an output queue on the tickerplant. You can see which handles have messages queued by inspecting the system variable .z.W. Queued messages don't use a dedicated buffer, but the memory space of the tickerplant process itself. The process will fail if memory is exhausted on the host or if it hits a -w limit.
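For example, inspecting the queues from a q session on the tickerplant might look like this (the handle numbers and byte counts are purely illustrative):

```q
/ .z.W maps each open handle to the sizes of the messages queued
/ on it; summing each entry gives the total backlog per handle
q)sum each .z.W
8| 0          / healthy subscriber: queue drains every tick
9| 52428800   / slow subscriber: ~50MB of updates backed up
```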

The slower consumers could be set up to subscribe to a chained tickerplant, which subscribes to the main tickerplant and then publishes batches of updates on a timer. This way, slow subscribers receive batched updates at a rate they can handle, and faster consumers still benefit from the minimum-latency main tickerplant without being impacted by the slower subscribers.
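A minimal chained tickerplant can be sketched in q roughly as follows. This is illustrative only — the real implementations in kdb+ tick and TorQ are more complete, and the port number and timer interval here are assumptions:

```q
/ Illustrative chained-tickerplant sketch: subscribe to the main TP,
/ buffer incoming ticks, and republish them in batches on a timer.
h:hopen `::5010                        / assumed main tickerplant port
subs:`int$()                           / handles of downstream subscribers
sub:{subs::distinct subs,.z.w}         / downstream consumers call this over IPC
.z.pc:{subs::subs except x}            / forget handles that disconnect
buf:(`symbol$())!()                    / table name -> buffered rows
upd:{[t;x]buf[t],:x}                   / the main TP pushes each tick here
h(".u.sub";`;`)                        / subscribe upstream to all tables/syms
.z.ts:{{[t]neg[subs]@\:(`upd;t;buf t)}each key buf;buf::(`symbol$())!()}
\t 1000                                / flush batches once a second
```

Slow consumers connect to this process and call `sub` instead of subscribing to the main tickerplant, so any backlog they build up queues here rather than on the latency-critical path.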

There are examples of chained tickerplants within KX's kdb+ tick and AquaQ's TorQ.

It could also be helpful to have some form of monitoring of the message queues, periodically checking if subscribers need to be cut off in order to minimise impact on other subscribers.

TorQ provides a utility to check for slow subscribers and cut them off by closing the connection handle with hclose, which wipes the output queue.
This functionality is enabled by default on TorQ's chained tickerplant, but not on the main tickerplant.
The script periodically checks the byte size of the output queue for every handle on the process to see whether it has exceeded a set cut-off threshold. To be cut off, a handle must exceed the threshold a set number of times in a row; this avoids premature cut-offs.
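As a rough sketch of that kind of check in q (the thresholds, counter logic, and names below are hypothetical, not TorQ's actual implementation):

```q
maxbytes:104857600                     / hypothetical cut-off threshold: 100MB
maxbreaches:3                          / consecutive breaches before cut-off
breaches:(`int$())!`long$()            / handle -> consecutive breach count
checkqueues:{[]
  q:sum each .z.W;                     / total queued bytes per open handle
  slow:where q>maxbytes;
  breaches::slow!1+0^breaches slow;    / recovered handles drop out, resetting their count
  hclose each where breaches>=maxbreaches}  / closing the handle wipes its queue
.z.ts:{checkqueues[]}
\t 5000                                / run the check every 5 seconds
```

Rebuilding `breaches` from only the currently slow handles each run means a subscriber that catches up is forgiven automatically, so only persistently slow consumers get cut off.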

Hope this helps!
M