Why would SocketChannel be slower when sending a single msg instead of 1k msgs after proper warmup?


J Crawford

Apr 12, 2017, 3:56:22 PM
to mechanical-sympathy
The SO question has the source code of a simple server and client that demonstrate and isolate the problem. Basically I'm timing the latency of a ping-pong (client-server-client) message. I start by sending one message every 1 millisecond. I wait for 200k messages to be sent so that HotSpot has a chance to optimize the code. Then I change my pause time from 1 millisecond to 30 seconds. To my surprise, my write and read operations become considerably slower.

I don't think it is a JIT/HotSpot problem. I was able to pinpoint the slowdown to the native JNI calls to write (write0) and read. Even if I change the pause from 1 millisecond to 1 second, the problem persists.

I was able to observe this on MacOS and Linux.

Does anyone here have a clue about what might be happening?

Note that I'm disabling Nagle's Algorithm with setTcpNoDelay(true).
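
To give a sense of the shape of the test, here is a minimal sketch of the client-side loop (the real code is in the SO question; the port, message size, and constants here are placeholders):

    import java.net.InetSocketAddress;
    import java.net.StandardSocketOptions;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    public class PingClientSketch {

        static final int WARMUP = 200_000;                       // messages sent before switching pauses
        static final long SHORT_PAUSE_NANOS = 1_000_000;         // 1 ms during warmup
        static final long LONG_PAUSE_NANOS = 30_000_000_000L;    // 30 s after warmup

        public static void main(String[] args) throws Exception {
            SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 9999));
            ch.setOption(StandardSocketOptions.TCP_NODELAY, true);   // disable Nagle's algorithm

            ByteBuffer buf = ByteBuffer.allocateDirect(64);
            long totalMessagesSent = 0;

            while (true) {
                buf.clear();
                buf.putLong(System.nanoTime()).flip();

                long start = System.nanoTime();
                while (buf.hasRemaining()) ch.write(buf);             // ping
                buf.clear();
                while (buf.position() < Long.BYTES) ch.read(buf);     // pong: assume the server echoes the 8 bytes back
                long latency = System.nanoTime() - start;

                totalMessagesSent++;
                if (totalMessagesSent >= WARMUP) {
                    System.out.println("totalMessagesSent=" + totalMessagesSent + " latency=" + latency);
                }

                // busy-spin pause rather than sleep, as in the original test
                long pause = totalMessagesSent < WARMUP ? SHORT_PAUSE_NANOS : LONG_PAUSE_NANOS;
                long deadline = System.nanoTime() + pause;
                while (System.nanoTime() < deadline) { /* spin */ }
            }
        }
    }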


Thanks!

-JC

Michael Barker

Apr 12, 2017, 5:41:18 PM
to mechanica...@googlegroups.com
I'm not sure what is going on, but you could try eliminating a few things.

1) Is it a TCP protocol behaviour (e.g. slow start)?  Test this by implementing the same thing using UDP.
2) Are your CPUs going to sleep?  Test by locking the CPU frequency scaling and/or setting the power management for the CPU to performance mode.
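
For 1), a minimal sketch of what the UDP variant of the ping-pong could look like using DatagramChannel (host, port, and pause are placeholders, not taken from the SO code):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.DatagramChannel;

    public class UdpPingClientSketch {
        public static void main(String[] args) throws Exception {
            // connect() fixes the remote peer so read/write can be used instead of send/receive
            DatagramChannel ch = DatagramChannel.open()
                    .connect(new InetSocketAddress("localhost", 9999));

            ByteBuffer buf = ByteBuffer.allocateDirect(64);
            for (int i = 0; i < 10; i++) {
                buf.clear();
                buf.putLong(System.nanoTime()).flip();

                long start = System.nanoTime();
                ch.write(buf);       // ping: one datagram out
                buf.clear();
                ch.read(buf);        // pong: blocks until the echo datagram arrives
                System.out.println("latencyNanos=" + (System.nanoTime() - start));

                Thread.sleep(1_000); // idle between sends; the original test busy-spins instead of sleeping
            }
        }
    }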

Mike.


Greg Young

Apr 12, 2017, 6:27:34 PM
to mechanica...@googlegroups.com
You are likely measuring wrong and just have not figured out how yet.



--
Studying for the Turing test

Todd Montgomery

Apr 12, 2017, 6:38:40 PM
to mechanical-sympathy
Mike has the best point, I think. 30 seconds between sends will cause the congestion window to close. Depending on what is in use (CUBIC vs. Reno), this will change behavior.

-- Todd



J Crawford

Apr 12, 2017, 7:48:23 PM
to mechanical-sympathy
Thanks, Mike. See comments below:

1) Will do. Good idea!
2) Good idea too. Not sure how to do that but I'll google and find out.

I'll get back with my findings. Thanks again!

-JC

J Crawford

Apr 12, 2017, 9:54:16 PM
to mechanical-sympathy
Hi Todd,

I'm trying several TCP congestion algorithms here: westwood, highspeed, veno, etc.

No luck so far, but there are many more I haven't tried. I'm using this answer to change the TCP congestion algo: http://unix.stackexchange.com/a/278217

Does anyone know which TCP congestion algorithm is best for low latency? Or best for the single-message scenario I've described? This looks like an important configuration for trading, where a single order needs to go out after some idle time and you don't want it to go out at a slower speed.

Thanks!

-JC

Todd Montgomery

Apr 12, 2017, 11:23:17 PM
to mechanical-sympathy
The short answer is that no congestion control algorithm is suited for low latency trading and in all cases, using raw UDP will be better for latency. Congestion control is about fairness. Latency in trading has nothing to do with fairness.

The long answer is that to varying degrees, all congestion control must operate at high or complete utilization to probe. Those based on loss (all variants of CUBIC, Reno, etc.) must be operating in congestion avoidance or be in slow start. Those based on RTT (Vegas) or RTT/Bottleneck Bandwidth (BBR) must be probing for more bandwidth to determine change in RTT (as a "replacement" for loss).

So, the case of sending only periodically is somewhat antithetical to the operating point that all congestion control must maintain while probing. And that is why all appropriate congestion control algorithms I know of reset when they are not operating at high utilization.

You can think of it this way: the network can only sustain X msgs/sec, but X is a (seemingly random) nonlinear function of time. How do you determine X at any given time without operating at that point? You cannot, as far as I know, predict X without operating at X.



J Crawford

Apr 13, 2017, 12:31:13 AM
to mechanical-sympathy
Ok, this is a total mystery. Tried a bunch of strategies with no luck:

1. Checked the cpu frequency with i7z_64bit. No variance in the frequency.

2. Disabled all power management. No luck.

3. Changed TCP Congestion Control Algorithm. No luck.

4. Set net.ipv4.tcp_slow_start_after_idle to false. No luck.

5. Tested with UDP implementation. No luck.

6. Placed all the sockets in blocking mode just for the heck of it. No luck, same problem.

I'm out of ideas now and don't know where to turn. This is an important latency problem that I must understand, as it affects my trading system.

If anyone has any clue about what might be going on, please shed some light. Also, if you run the provided Server and Client code in your own environment/machine (over localhost/loopback), you will see that it does happen.

Thanks!

-JC

Michael Barker

Apr 13, 2017, 12:37:28 AM
to mechanica...@googlegroups.com
Rewrite the test in C to eliminate the JVM as the cause of the slowdown?



J Crawford

Apr 13, 2017, 12:45:57 AM
to mechanical-sympathy
Very good idea, Mike. If only I knew C :) I'll try to hire a C coder on UpWork.com or Elance.com to do that. It shouldn't be hard for someone who knows C network programming. I hope...

Thanks!

-JC

singh.janmejay

Apr 13, 2017, 2:56:22 AM
to mechanica...@googlegroups.com
Congestion-control and other TCP param tuning shouldn't change the latency
of send/write. They can affect read for sure.

I'd check L1 and LLC cache misses, branch prediction stats, TLB misses
etc (check 'perf list' for details). If this doesn't show a very sharp
difference, I'd trace the impl.

If you have verified that the latency comes from the system call, I'd
trace from the syscall downwards (use perf probe or ftrace (tracefs) directly).

With uprobe, you can trace the code between write0 and sys_write.



--
Regards,
Janmejay
http://codehunk.wordpress.com

Kirk Pepperdine

Apr 13, 2017, 2:59:48 AM
to mechanica...@googlegroups.com

Normally when I run into “can’t scale down” problems in Java you have to be concerned about methods on the critical path not being hot enough to be compiled. However, I’d give this one a low probability because the knock-on latency is typically 2-3x what you’d see under load. So, this seems somehow connected to a buffer with a timer. Under load you get fill-and-fire, and of course on scale-down it is fire-on-timeout because you rarely, if ever, fill.

Have you looked at this problem using Wireshark or a packet sniffer on your network? Another trick is to directly instrument the Socket read and write methods. You can do that with BCI (bytecode instrumentation) or simply hand-modify the code and preload it on the bootstrap class path.
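
A minimal sketch of that kind of instrumentation done at the call site instead of via BCI (the class name is made up, and in a real test you would record timings into a pre-allocated array rather than print from the hot path):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    // Hypothetical wrapper that times every read/write on a SocketChannel.
    final class TimedChannel {
        private final SocketChannel ch;

        TimedChannel(SocketChannel ch) {
            this.ch = ch;
        }

        int write(ByteBuffer buf) throws IOException {
            long t0 = System.nanoTime();
            int n = ch.write(buf);
            // printing here perturbs the measurement; store into an array for real runs
            System.out.println("write bytes=" + n + " nanos=" + (System.nanoTime() - t0));
            return n;
        }

        int read(ByteBuffer buf) throws IOException {
            long t0 = System.nanoTime();
            int n = ch.read(buf);
            System.out.println("read bytes=" + n + " nanos=" + (System.nanoTime() - t0));
            return n;
        }
    }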

I have some skeletal client/server code in C. It just needs to be morphed to your test case. I can’t see me getting that done today unless I get blocked on what I need to get done.

Kind regards,
Kirk


Gil Tene

Apr 13, 2017, 3:36:08 AM
to mechanica...@googlegroups.com
If I read this right, you are running this on localhost (according to the SO code). If that's the case, there is no actual network, and no actual TCP stack... UDP or TCP won't make a difference then, and neither will any TCP tweaking. I think this rules out the network, the switch, the NICs, and most of the OS's network stack.

Now you're looking at the JVM, the OS scheduling, power management, cache behavior, etc.

Some more things to play with to rule out or find some insight:

- Rule out de-optimization (you may be de-optimizing when the if (totalMessagesSent == WARMUP) triggers). Do this by examining the -XX:+PrintCompilation output

- Rule out scheduling and cpu migration effects: use isolcpus and pin your processes to specific cores

- How do you know that you actually disabled all power management? I'd monitor cstate and pstate to see what they actually are over time.
  Cool anecdote: We once had a case where something in the system was mysteriously elevating cstate away from 0 after we set it to 0. We never did find out what it was. The case was "resolved" with a cron job that set cstate to 0 every minute (yuck. I know).

- Sweep different interval times in your tests to find out how long the interval needs to be before you see the perf drop. The value at which this effect starts may be an interesting hint.

Wojciech Kudla

Apr 13, 2017, 3:44:49 AM
to mechanical-sympathy

I'd also monitor /proc/interrupts and /proc/softirqs for your target CPU.



J Crawford

Apr 13, 2017, 11:01:49 AM
to mechanical-sympathy
Thanks to everyone who threw out some ideas. I was able to prove that it is *not* a JIT/HotSpot de-optimization.

First I got the following output when I used "-XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining":

    Thu Apr 13 10:21:16 EDT 2017 Results: totalMessagesSent=100000 currInterval=1 latency=4210 timeToWrite=2514 timeToRead=1680 realRead=831 zeroReads=2 partialReads=0
      77543  560 % !   4       Client::run @ -2 (270 bytes)   made not entrant
    Thu Apr 13 10:21:39 EDT 2017 Results: totalMessagesSent=100001 currInterval=30000 latency=11722 timeToWrite=5645 timeToRead=4531 realRead=2363 zeroReads=1 partialReads=0

Even a single branch like "this.interval = totalMessagesSent >= WARMUP ? 30000 : 1;" will trigger the "made not entrant". I even tried extracting it into a method, but still got "made not entrant".

So I thought: THAT'S IT!

Nope, that was *not* it :)

I got rid of the branch (i.e. IF) by replacing "this.interval = totalMessagesSent >= WARMUP ? 30000 : 1;" with:

    // not mathematical equivalent but close enough for our purposes
    // for totalMessagesSent >= WARMUP it returns 30001 instead of 30000
    this.interval = (totalMessagesSent / WARMUP * 30000) + 1;

Then I ran it again with "-XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining" and got:

    Thu Apr 13 10:36:00 EDT 2017 Results: totalMessagesSent=99998 currInterval=1 latency=4122 timeToWrite=2476 timeToRead=1626 realRead=756 zeroReads=2 partialReads=0
    Thu Apr 13 10:36:00 EDT 2017 Results: totalMessagesSent=99999 currInterval=1 latency=4041 timeToWrite=2387 timeToRead=1630 realRead=760 zeroReads=2 partialReads=0
    Thu Apr 13 10:36:00 EDT 2017 Results: totalMessagesSent=100000 currInterval=1 latency=4223 timeToWrite=2504 timeToRead=1690 realRead=739 zeroReads=2 partialReads=0
    Thu Apr 13 10:36:22 EDT 2017 Results: totalMessagesSent=100001 currInterval=30001 latency=10245 timeToWrite=5457 timeToRead=4755 realRead=2223 zeroReads=6 partialReads=0
    Thu Apr 13 10:36:45 EDT 2017 Results: totalMessagesSent=100002 currInterval=30001 latency=10908 timeToWrite=4648 timeToRead=6237 realRead=719 zeroReads=7 partialReads=0
    Thu Apr 13 10:37:08 EDT 2017 Results: totalMessagesSent=100003 currInterval=30001 latency=10126 timeToWrite=4088 timeToRead=6005 realRead=2077 zeroReads=11 partialReads=0

So no JIT/HotSpot de-optimization.

I've also ruled out thread issues by pinning my thread to an isolated CPU core through affinity and isolcpus.

This behavior happens on MacOS and on Linux. I wonder if it also happens on Windows, and in C. That would give us more clues.

Thanks,

-JC

Kirk Pepperdine

Apr 13, 2017, 11:39:48 AM
to mechanica...@googlegroups.com
On Apr 13, 2017, at 5:01 PM, J Crawford <latency...@mail.com> wrote:

> Thanks to everyone who threw out some ideas. I was able to prove that it is *not* a JIT/HotSpot de-optimization.

Confirms my inkling and fits with my experience with HotSpot issues. The inflation in latency is just way too large for that.

Interesting that the deopt was an OSR event.

Try making the messages that you send a size closer to the MTU.

Kind regards,
Kirk


Martin Thompson

Apr 13, 2017, 12:17:38 PM
to mechanical-sympathy
OSR can be avoided if you put the body of your loops in their own methods so they get normal JIT compilation, but this is unlikely to explain such a significant step in latency.
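
A minimal sketch of that restructuring, with a trivial stand-in for the real loop body:

    public class LoopShapeSketch {

        // Body inlined in the loop: the enclosing method is only ever compiled
        // via on-stack replacement (OSR) while the loop is running.
        long runInline(int iterations) {
            long acc = 0;
            for (int i = 0; i < iterations; i++) {
                acc += System.nanoTime() & 1;   // stand-in for "send, receive, measure"
            }
            return acc;
        }

        // Body hoisted into its own method: oneIteration() becomes hot in its own
        // right and gets a regular (non-OSR) JIT compile after enough invocations.
        long runExtracted(int iterations) {
            long acc = 0;
            for (int i = 0; i < iterations; i++) {
                acc += oneIteration();
            }
            return acc;
        }

        private long oneIteration() {
            return System.nanoTime() & 1;       // stand-in for "send, receive, measure"
        }
    }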

As Gil mentions, using loopback will give very different results from a real network. The Linux kernel bypasses OSI layer 2 for loopback, so there are no qdiscs. For example, not only does Nagle not apply on loopback, disabling it WILL actually increase latency a little, really!

Have you measured L1 and L2 cache hit and miss rates in each case? Even with isolcpus, the Intel private caches (L1 & L2) are inclusive with the shared L3, so if the L3 has to evict lines, they must also be evicted from the corresponding L1/L2 caches. You can use CAT (Cache Allocation Technology), CoD (Cluster on Die), or separate sockets to help avoid this.

J Crawford

Apr 13, 2017, 12:22:19 PM
to mechanical-sympathy
Hi Martin! Thanks for trying to help out. I'm indeed testing this on loopback. Can you give me pointers on how to measure L1 and L2 cache hit/miss? I've never done that before. I was able to confirm that it also happens on Windows. We are getting close to understanding this mystery.

Thanks!

-JC

Martin Thompson

Apr 13, 2017, 12:31:48 PM
to mechanica...@googlegroups.com
Try the various Linux "perf events" tools, e.g. $ perf record ..., or some of the following to drill down further:

https://github.com/RRZE-HPC/likwid




J Crawford

Apr 17, 2017, 11:28:32 AM
to mechanical-sympathy

> I have some skeletal client/server code in C. It just needs to be morphed to your test case. I can’t see me getting that done today unless I get blocked on what I need to get done.

Hello Kirk, I'm still banging my head trying to understand this latency issue. Did you have time to use your C code to try to reproduce this problem? I'm not a C programmer, but if you are busy I can try to adapt your skeletal client/server C code to the use-case in question.

I'm currently clueless and unable to make progress. It happens on MacOS, Linux and Windows, so it does not look like an OS-related issue. It looks more like a JVM or CPU issue.

Thanks!

-JC

Kirk Pepperdine

Apr 18, 2017, 3:10:26 AM
to mechanica...@googlegroups.com
Some code is written; I’ll take this offline.



Nikolay Tsankov

Apr 19, 2017, 2:15:24 AM
to mechanica...@googlegroups.com
Hi,

Could it be caused by speculative execution/the tight wait loop? You can probably test in C with a pause instruction in the loop...

Best,
Nikolay  



J Crawford

Apr 20, 2017, 1:22:13 AM
to mechanical-sympathy
Hi Nikolay,

Thanks for trying to help. Can you elaborate on "speculative execution" and how you think it could be affecting the socket latency?

My tight loop for pausing is indeed working (the program actually "pauses" as expected), so I'm not sure what you mean.

Thanks again!

-JC

Nikolay Tsankov

Apr 20, 2017, 7:30:05 AM
to mechanica...@googlegroups.com
Hi,

I was talking about your server spin-waiting. Not sure it applies in your case, but from http://x86.renejeschke.de/html/file_module_x86_id_232.html

When executing a "spin-wait loop," a Pentium 4 or Intel Xeon processor suffers a severe performance penalty when exiting the loop because it detects a possible memory order violation. The PAUSE instruction provides a hint to the processor that the code sequence is a spin-wait loop. The processor uses this hint to avoid the memory order violation in most situations, which greatly improves processor performance. For this reason, it is recommended that a PAUSE instruction be placed in all spin-wait loops.

On second thought, this is probably far less impactful than the latency spike you observe.
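
For reference, on JDK 9 and later the same hint is available from Java as Thread.onSpinWait(), which maps to PAUSE on x86. A minimal sketch of a busy-wait pause that uses it (the method does not exist on JDK 8 and earlier):

    public final class SpinPauseSketch {

        // Busy-wait for roughly the given number of nanoseconds, hinting the CPU
        // that this is a spin-wait loop (Thread.onSpinWait() maps to PAUSE on x86).
        static void spinPause(long nanos) {
            final long deadline = System.nanoTime() + nanos;
            while (System.nanoTime() < deadline) {
                Thread.onSpinWait();   // JDK 9+ only
            }
        }

        public static void main(String[] args) {
            long t0 = System.nanoTime();
            spinPause(1_000_000);      // ~1 ms
            System.out.println("spun for " + (System.nanoTime() - t0) + " ns");
        }
    }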



Kirk Pepperdine

Apr 20, 2017, 10:22:08 AM
to mechanica...@googlegroups.com
Hi,

I think this is an interesting thought, but you should be able to see this easily by taking a native thread dump of the JVM when it’s stalled. The reason I don’t think it’s something like this is that thread dumps are good at telling you what is happening, but less useful for telling you what isn’t happening but should be. I’m looking for a timer or load condition that is the root cause and that will not show up using any profiling technique.

Kind regards,
Kirk
