You're presizing your hash tables by performing 10% of the inserts in the setup method, so you're not benchmarking much of the 'pauselessness' - only 3 method calls in the java.util.HashMap test will involve a resize (10M inserts presize the table to ~16M slots; three doublings reach ~128M, which can accommodate 100M elements at the default load factor), so it is not surprising you only see it in the very long tail.
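For reference, the doubling arithmetic above can be sketched out - this is a back-of-envelope check, assuming java.util.HashMap's default 0.75 load factor and power-of-two table sizes, not a dump of its internals:

```java
public class ResizeMath {
    // Smallest power of two >= n (mirrors what HashMap does for an
    // initial-capacity hint).
    static long tableSizeFor(long n) {
        long c = 1;
        while (c < n) c <<= 1;
        return c;
    }

    public static void main(String[] args) {
        final double loadFactor = 0.75;
        // Presized for 10M entries: next power of two that holds them
        // at the default load factor.
        long presized = tableSizeFor((long) Math.ceil(10_000_000 / loadFactor));
        // Count doublings needed before 100M entries fit.
        long capacity = presized;
        int resizes = 0;
        while (capacity * loadFactor < 100_000_000) {
            capacity <<= 1;
            resizes++;
        }
        System.out.println(presized + " " + capacity + " " + resizes);
        // Prints: 16777216 134217728 3
    }
}
```

That is where the "only 3 resizes" figure comes from: 2^24 (~16M) slots doubled three times is 2^27 (~134M), which holds 100M entries at a 0.75 load factor.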
To unsubscribe from this group and stop receiving emails from it, send an email to mechanical-sympathy+unsub...@googlegroups.com.
Hi
If the number of entries is large then yes, only the higher percentiles are affected, but the later the resize happens, the longer it takes - it could jump into the seconds range. Quite a long pause for processes where latency is critical.
The other aspect to take into account is that if you have multiple maps, you multiply the occurrence of these pauses, pushing them up to a bigger percentile.
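To put rough, hypothetical numbers on that (not taken from the benchmark): if a single map's operation pauses with probability p, a request that touches k independent maps hits at least one pause with probability 1 - (1-p)^k:

```java
public class PauseOdds {
    // Probability that a request touching k independent maps, each
    // pausing with probability p per operation, sees at least one pause.
    static double atLeastOnePause(double p, int k) {
        return 1.0 - Math.pow(1.0 - p, k);
    }

    public static void main(String[] args) {
        // A p99.9 pause on one map (p = 0.001) across 10 maps becomes
        // roughly a p99 event for the request as a whole.
        System.out.println(atLeastOnePause(0.001, 10)); // roughly 0.00996
    }
}
```

So a pause that only shows at the 99.9th percentile of one map can dominate the 99th percentile of a request that touches ten of them.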
My 2 cents
GG
# Warmup Iteration 1: n = 21133, mean = 1 us/op, p{0.00, 0.50, 0.90, 0.95, 0.99, 0.999, 0.9999, 1.00} = 0, 1, 1, 1, 4, 21, 109, 120 us/op
# Warmup Iteration 1: n = 33701, mean = 1 us/op, p{0.00, 0.50, 0.90, 0.95, 0.99, 0.999, 0.9999, 1.00} = 0, 0, 1, 1, 2, 4, 61, 74 us/op...
# Warmup Iteration 1: n = 26027, mean = 1 us/op, p{0.00, 0.50, 0.90, 0.95, 0.99, 0.999, 0.9999, 1.00} = 0, 1, 1, 1, 3, 17, 71, 11534 us/op
...and:
# Warmup Iteration 1: n = 31790, mean = 1 us/op, p{0.00, 0.50, 0.90, 0.95, 0.99, 0.999, 0.9999, 1.00} = 0, 0, 1, 1, 5, 17, 61, 12272 us/op...
--
You received this message because you are subscribed to a topic in the Google Groups "mechanical-sympathy" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/mechanical-sympathy/g-iCw1HbZ-o/unsubscribe.
To unsubscribe from this group and all its topics, send an email to mechanical-symp...@googlegroups.com.
long expectedStartTimeNsecDeltaPerOp = 1000000000L / requestedOpsPerSec;
long expectedStartTimeNsec = System.nanoTime();
while (testNotComplete()) {
    long actualStartTimeNsec;
    while (expectedStartTimeNsec <= (actualStartTimeNsec = System.nanoTime())) {
        doOperation();
        long measuredEndTimeNsec = System.nanoTime();
        long coordinatedOperationLatencyNsec = measuredEndTimeNsec - actualStartTimeNsec;
        long operationLatencyNsec = measuredEndTimeNsec - expectedStartTimeNsec;
        coordinatedLatencyHistogram.recordValue(coordinatedOperationLatencyNsec);
        latencyHistogram.recordValue(operationLatencyNsec);
        expectedStartTimeNsec += expectedStartTimeNsecDeltaPerOp;
    }
    TimeUnit.NANOSECONDS.sleep(someSmallIntervalNsec); // Optional. Can do nothing here, or spin, instead.
}
long expectedStartTimeNsecDeltaPerOp = 1000000000L / requestedOpsPerSec;
long testStartTime = System.nanoTime();
long elapsedTime = 0;
while (testNotComplete()) {
    long expectedStartTimeNsec = System.nanoTime(); // Avoid counting sleep time against ops.
    elapsedTime = expectedStartTimeNsec - testStartTime;
    while (nOps < (requestedOpsPerSec * elapsedTime) / 1000000000L) { // elapsedTime is in nsec
        long actualStartTimeNsec = System.nanoTime();
        doOperation();
        long measuredEndTimeNsec = System.nanoTime();
        long coordinatedOperationLatencyNsec = measuredEndTimeNsec - actualStartTimeNsec;
        long operationLatencyNsec = measuredEndTimeNsec - expectedStartTimeNsec;
        coordinatedLatencyHistogram.recordValue(coordinatedOperationLatencyNsec);
        latencyHistogram.recordValue(operationLatencyNsec);
        expectedStartTimeNsec += expectedStartTimeNsecDeltaPerOp;
        expectedStartTimeNsec = Math.min(expectedStartTimeNsec, measuredEndTimeNsec); // needed to avoid accumulating gifts
        elapsedTime = measuredEndTimeNsec - testStartTime;
        nOps++;
    }
    TimeUnit.NANOSECONDS.sleep(someSmallIntervalNsec); // Optional. Can spin instead.
}
Hi, a minor note: Poisson processes have exponentially distributed inter-arrival times. There's some discussion of candidate inter-arrival time distributions here: http://perfdynamics.blogspot.co.uk/2010/05/load-testing-think-time-distributions.html?m=1
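For what it's worth, swapping the fixed expectedStartTimeNsecDeltaPerOp in the pacing loops above for exponentially distributed inter-arrival times is a one-liner via inverse-transform sampling. A sketch, with meanIntervalNsec standing in for the fixed delta:

```java
import java.util.Random;

public class PoissonArrivals {
    // Inverse-transform sampling: exponentially distributed
    // inter-arrival times with the given mean yield a Poisson
    // arrival process.
    static long nextIntervalNsec(Random rng, long meanIntervalNsec) {
        return (long) (-Math.log(1.0 - rng.nextDouble()) * meanIntervalNsec);
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        long meanIntervalNsec = 1_000_000; // 1 ms between ops on average
        long sum = 0;
        int n = 100_000;
        for (int i = 0; i < n; i++) {
            sum += nextIntervalNsec(rng, meanIntervalNsec);
        }
        // The sample mean should land close to the configured mean.
        System.out.println(sum / n);
    }
}
```

Everything else in the loop stays the same; expectedStartTimeNsec just advances by a fresh sample each iteration instead of a constant.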
Given I have an interest in open workload simulations I would be interested if you could send the gory details of where to start hacking from?
Like most people my free time is variable but I would have thought Gil's method should be achievable to start with.
Presumably there is also a difference between one thread driving the load and calculating adjustments, and requests overlapping once the mean latency starts to exceed the mean inter-arrival time? Open workloads can have worse latency due to the lack of back-off - though I haven't thought about whether that is possible to implement here.
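A toy single-server model (hypothetical numbers, deterministic arrivals and service for simplicity) shows the effect: in an open workload with no back-off, once service time exceeds the inter-arrival time the queue never drains, so the n-th operation's latency grows linearly with n:

```java
public class OpenWorkloadToy {
    // Toy open-workload model: one arrival every arrivalNsec, a fixed
    // serviceNsec per op, no back-off. Assumes serviceNsec >= arrivalNsec,
    // so the server is continuously busy and op i starts when op i-1
    // finishes, i.e. at i * serviceNsec. Returns the i-th op's latency
    // (queueing wait plus service time).
    static long latencyOfOp(int i, long arrivalNsec, long serviceNsec) {
        long arrival = (long) i * arrivalNsec;
        long start = Math.max(arrival, (long) i * serviceNsec);
        return start + serviceNsec - arrival;
    }

    public static void main(String[] args) {
        // Service (1.1 ms) slower than arrivals (1.0 ms): latency of the
        // n-th op grows without bound as the backlog accumulates.
        System.out.println(latencyOfOp(0, 1_000_000, 1_100_000));   // 1100000
        System.out.println(latencyOfOp(100, 1_000_000, 1_100_000)); // 11100000
    }
}
```

A closed workload never shows this, because each client waits for its response before issuing the next request, which is exactly the back-off (and the coordinated omission) being discussed.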
Thanks
Alex