There are probably too many variables and unknowns here for anyone to say definitively what's going on, but some lines of inquiry:
1. Where are you observing the message counts? Are you observing them in a kdb+ subscriber? Is this subscriber an RDB (i.e. is it storing all of the messages in memory)? If so, it's possible the RDB is consuming more and more memory and periodically has to request more from the OS once it hits a threshold (heap size in kdb+ grows as a step function). That could explain your blips, since they seem to occur at a fairly regular interval; see the first sketch after this list for a way to check.
2. Or are you observing the message counts in the C# subscriber? If so, which C# interface are you using, and is it the most recent version? Are you doing any heavy processing on the incoming messages?
3. Are your processes pinned to specific CPU cores, or are they sharing cores with other processes on the machine? There could be contention there; see the second sketch after this list for how to check affinity.
4. Is your tickerplant logging to disk? Is that a local disk or a network-mounted volume, and how performant is it? Slow log writes can stall the tickerplant.
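On point 1, a way to confirm the heap step-function theory is to poll .Q.w[] on a timer and look for jumps in heap that line up with your blips. A minimal sketch to run inside the RDB (the table name heaplog is just illustrative; if the process already uses .z.ts, fold this into the existing handler):

    / sample memory stats once a second into an in-memory log
    / x is the timestamp passed to .z.ts by the timer
    heaplog:([] time:`timestamp$(); used:`long$(); heap:`long$())
    .z.ts:{`heaplog insert (x;.Q.w[]`used;.Q.w[]`heap)}
    \t 1000

    / later: rows where the heap stepped up
    select from heaplog where heap>prev heap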
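On point 3, if you're on Linux you can check (or set) a q process's CPU affinity with the standard taskset utility, either from the shell or from inside the process itself (.z.i is the process's PID):

    / from within q: show which cores this process may run on
    system "taskset -cp ",string .z.i
    / from the shell, pin a process to a core at startup, e.g.:
    / taskset -c 2 q tick.q sym . -p 5010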
To answer your question "Is there a chance that the socket connection from the KDB+ side is queuing up on purpose?" - no, kdb+ does not hold messages back on purpose. Messages only queue when something impedes them: resource contention, a slow consumer, non-performant hardware, or custom logic in the tickerplant stack.
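You can see that queuing directly: in the tickerplant, .z.W maps each open connection handle to its pending output queue, so a quick health check is the bytes backed up per subscriber (the output below is illustrative, not from your system):

    q)sum each .z.W    / bytes queued per connection handle
    5| 0
    7| 8388608         / handle 7 is a slow consumer backing up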
Your best bet for profiling tickerplant latency is to timestamp the data before and after every hop:
a. Timestamp when feed sent
b. Timestamp when TP received, timestamp before TP sent
c. Timestamp when CTP (chained tickerplant) received, timestamp before CTP sent
d. Timestamp when subscriber received
e. You could also set up monitoring of the outgoing queues in each tickerplant (.z.W, as shown above)
Using all of that info, you should be able to narrow down where any delay/queueing is happening.
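A minimal sketch of (a)-(d), assuming a vanilla tick.q-style setup where upd is called as upd[table;data] and the payload is table-shaped (in stock tick.q the TP receives columnar lists, so adapt accordingly). The column names tsFeed/tsTpIn/tsTpOut/tsSub, the handle h, and the trade table are all illustrative; a CTP hop would be stamped the same way:

    / feedhandler side (if your feed is the C# process, add the
    / equivalent stamp there); h is an open handle to the TP
    publish:{[t;x] neg[h](`upd;t;update tsFeed:.z.p from x)}

    / tickerplant: wrap upd to stamp arrival and departure
    upd:{[t;x]
      x:update tsTpIn:.z.p from x;            / b: TP received
      / ... existing TP logic (e.g. append to the log file) ...
      .u.pub[t;update tsTpOut:.z.p from x]}   / b: just before TP sends

    / subscriber: stamp receipt and keep the row
    upd:{[t;x] t insert update tsSub:.z.p from x}

    / then, in the subscriber, per-hop latency:
    select avg tsTpIn-tsFeed, avg tsTpOut-tsTpIn, avg tsSub-tsTpOut from trade

One caveat: timestamps taken on different hosts are only comparable if the machines' clocks are synced (e.g. via NTP/PTP), so if the feed, TP and subscriber run on separate boxes, check that first.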