Using JMH to benchmark collections


awei...@voltdb.com

Mar 6, 2014, 1:32:02 PM
Hi all,

According to a Flight Recorder profile, my application spends 6% of its time in ImmutableMap or HashMap get. At most of the call sites the maps are lookup tables that I would expect to cache fairly well, since the cardinality of the values being looked up is small, as is the cardinality of the entire map. I have done what I can to reduce the number of lookups, and now I am trying to see if I can improve the performance and locality of the maps. I compared ImmutableHashMap, HashMap, and Trove's Long to object hash map.


Benchmark                                            Mode   Samples         Mean   Mean error    Units
JMH3.MyBenchmark.testCowMap                          avgt        10        6.467        0.051    ns/op
JMH3.MyBenchmark.testCowMapCounter                   avgt        10        8.160        0.146    ns/op
JMH3.MyBenchmark.testHashMap                         avgt        10        7.516        0.029    ns/op
JMH3.MyBenchmark.testHashMapCounter                  avgt        10        8.840        0.145    ns/op
JMH3.MyBenchmark.testImmutableMap                    avgt        10        5.699        0.109    ns/op
JMH3.MyBenchmark.testImmutableMapCounter             avgt        10        7.328        0.050    ns/op
JMH3.MyBenchmark.testTMap                            avgt        10        7.795        0.053    ns/op
JMH3.MyBenchmark.testTMapCounter                     avgt        10        8.325        0.102    ns/op
JMH3.MyBenchmark.testVolatileImmutableMap            avgt        10        5.745        0.045    ns/op
JMH3.MyBenchmark.testVolatileImmutableMapCounter     avgt        10        7.297        0.172    ns/op

For starters, how is my benchmark hygiene?

The maps are all pretty close, and I can't help but wonder if the results would differ if the benchmark accurately represented a real-world use case where things aren't always in L1 cache. How can I compare these data structures so that the costs accurately represent how they would perform in my application?

Thanks,
Ariel

Aleksey Shipilev

Mar 6, 2014, 1:39:20 PM
to mechanica...@googlegroups.com
On 03/06/2014 10:15 PM, awei...@voltdb.com wrote:
> For starters how is my benchmark hygiene?

Looks good, modulo two things:
* I would be afraid of using division ("%") in nanobenchmarks;
* In some cases you do the auto-boxing on $value -- is that the intent?

> The maps are all pretty close and I can't help but wonder if the
> performance results would be different if the benchmark accurately
> represented a real world use case where things aren't always in L1
> cache. How can I compare these datastructures so that the costs
> accurately represent how they would perform in my application.

IMO, this is one of the cases where you probably need a shuffled list of keys: walk through it, poll each element from the map, and sink the result into the blackhole, as in:

@GenerateMicroBenchmark
public void testCowMap(BlackHole bh) {
    for (String k : keys) {
        bh.consume(m_cowMap.get(k)); // look up the current shuffled key
    }
}

If you then reshuffle $keys in @Setup(Level.Iteration), you can get the
average random-access cost for the collection. That should nicely
contrast the locality wins of the different map implementations.

Thanks,
-Aleksey.

awei...@voltdb.com

Mar 6, 2014, 3:12:54 PM
to mechanica...@googlegroups.com
Hi,

Thanks, now I think I am getting somewhere. I think I need to run a lot more test sizes and graph them to really see what is going on, but ImmutableMap starts to get quite slow, while Trove gets faster pretty quickly and is never much slower than HashMap.


Benchmark                                   (m_count)   Mode   Samples         Mean   Mean error    Units
JMH3.MyBenchmark.testCowMap                      1024   avgt         5    18582.845      234.249    ns/op
JMH3.MyBenchmark.testCowMap                       128   avgt         5      996.411        3.943    ns/op
JMH3.MyBenchmark.testCowMap                        16   avgt         5      131.175        9.319    ns/op
JMH3.MyBenchmark.testCowMap                       256   avgt         5     2393.750       88.229    ns/op
JMH3.MyBenchmark.testCowMap                        32   avgt         5      278.281       16.156    ns/op
JMH3.MyBenchmark.testCowMap                       512   avgt         5     6283.906      315.368    ns/op
JMH3.MyBenchmark.testCowMap                        64   avgt         5      545.406        8.250    ns/op
JMH3.MyBenchmark.testHashMap                     1024   avgt         5    11116.434      441.358    ns/op
JMH3.MyBenchmark.testHashMap                      128   avgt         5     1192.682       38.543    ns/op
JMH3.MyBenchmark.testHashMap                       16   avgt         5      131.985        1.475    ns/op
JMH3.MyBenchmark.testHashMap                      256   avgt         5     2578.242       84.384    ns/op
JMH3.MyBenchmark.testHashMap                       32   avgt         5      269.071        0.170    ns/op
JMH3.MyBenchmark.testHashMap                      512   avgt         5     5467.981      129.064    ns/op
JMH3.MyBenchmark.testHashMap                       64   avgt         5      557.057       27.698    ns/op
JMH3.MyBenchmark.testImmutableMap                1024   avgt         5    18583.994      707.223    ns/op
JMH3.MyBenchmark.testImmutableMap                 128   avgt         5      913.665        7.975    ns/op
JMH3.MyBenchmark.testImmutableMap                  16   avgt         5      109.837        2.221    ns/op
JMH3.MyBenchmark.testImmutableMap                 256   avgt         5     2163.555       59.064    ns/op
JMH3.MyBenchmark.testImmutableMap                  32   avgt         5      242.334        2.021    ns/op
JMH3.MyBenchmark.testImmutableMap                 512   avgt         5     5839.833       32.638    ns/op
JMH3.MyBenchmark.testImmutableMap                  64   avgt         5      495.163        2.683    ns/op
JMH3.MyBenchmark.testTMap                        1024   avgt         5     8572.425      130.532    ns/op
JMH3.MyBenchmark.testTMap                         128   avgt         5     1130.815       25.054    ns/op
JMH3.MyBenchmark.testTMap                          16   avgt         5      141.371        2.886    ns/op
JMH3.MyBenchmark.testTMap                         256   avgt         5     2252.436       26.358    ns/op
JMH3.MyBenchmark.testTMap                          32   avgt         5      281.937        2.609    ns/op
JMH3.MyBenchmark.testTMap                         512   avgt         5     4381.460        7.444    ns/op
JMH3.MyBenchmark.testTMap                          64   avgt         5      568.282        4.086    ns/op
JMH3.MyBenchmark.testVolatileImmutableMap        1024   avgt         5    18136.249      244.302    ns/op
JMH3.MyBenchmark.testVolatileImmutableMap         128   avgt         5      966.954      190.930    ns/op
JMH3.MyBenchmark.testVolatileImmutableMap          16   avgt         5      115.455        3.071    ns/op
JMH3.MyBenchmark.testVolatileImmutableMap         256   avgt         5     2244.283       69.049    ns/op
JMH3.MyBenchmark.testVolatileImmutableMap          32   avgt         5      254.552        4.415    ns/op
JMH3.MyBenchmark.testVolatileImmutableMap         512   avgt         5     5990.018      123.352    ns/op
JMH3.MyBenchmark.testVolatileImmutableMap          64   avgt         5      520.538        3.223    ns/op
 
Is there a way to group the runs with the same parameter together? It would make comparing the results easier.

The auto-boxing is intentional: I want to pay that overhead and penalize collections that require it. That made me wonder whether the small-value results are skewed by the autobox cache. I modified the code to use values starting above 512, and that did not change the result with 16 entries.
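The autobox cache in question is easy to see directly: autoboxing goes through Integer.valueOf, which by default caches the values -128..127, so small boxed keys are reference-identical and allocation-free while larger ones are fresh objects. A quick check:

```java
public class BoxCache {
    public static void main(String[] args) {
        // Integer.valueOf (used by autoboxing) caches -128..127 by default,
        // so small boxed values are reference-identical...
        Integer a = 100, b = 100;   // inside the cache
        // ...while values outside the cache are freshly allocated objects.
        Integer c = 1000, d = 1000; // outside the cache
        System.out.println((a == b) + " " + (c == d));
    }
}
```

This prints `true false`, which is why moving the benchmark's key values above 512 is a sensible way to rule the cache out.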

Thanks,
Ariel

Aleksey Shipilev

Mar 6, 2014, 3:21:08 PM
to mechanica...@googlegroups.com
On 03/07/2014 12:12 AM, awei...@voltdb.com wrote:
> Is there a way to group the runs with the same parameter together? It
> would make comparing the results easier.

Not sure about that; grouping by benchmark name makes more sense to us.
You might notice the parameter values are sorted
lexicographically-as-strings: this is a presentation glitch, already
fixed in HEAD, and the visual comparison is much simpler when param
values are monotonically increasing. 0.5 hopefully promotes tomorrow,
with that fix on board.

Or, dump the data in CSV and analyze with 3rd party tool :)

-Aleksey.

Justin Mason

Mar 7, 2014, 5:17:32 AM
to mechanica...@googlegroups.com

On Thu, Mar 6, 2014 at 6:15 PM, <awei...@voltdb.com> wrote:
> [...] I compared ImmutableHashMap, HashMap, and Trove's Long to object hash map.

In a previous optimization scenario, I realised that my usage of ImmutableMap/HashMap was mostly on maps with very small cardinality (approximately less than 10, IIRC). Replacing those with ArrayLists of [key, value] pairs gained some surprisingly large speedups -- it turns out a fast O(n) can outperform a slow O(1). ;) Might be worth benchmarking?

--j.

Alan Stange

Mar 7, 2014, 9:53:26 AM
to mechanica...@googlegroups.com, j...@jmason.org


Can't one do better than O(n) with ArrayLists, especially if the data is immutable? Presumably you can sort the list on construction and then do a binary search on the key list for all subsequent lookups. Fewer than 10 elements might be a case where a linear search is still a win.
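A minimal sketch of that sorted-array idea (the class and method names here are hypothetical):

```java
import java.util.Arrays;

public class SmallTable {
    final long[] keys;   // sorted on construction
    final String[] vals; // parallel to keys

    SmallTable(long[] ks, String[] vs) {
        // assumes the caller passes entries already sorted by key
        keys = ks;
        vals = vs;
    }

    // O(log n) lookup via binary search on the sorted key array.
    String getBinary(long k) {
        int i = Arrays.binarySearch(keys, k);
        return i >= 0 ? vals[i] : null;
    }

    // O(n) lookup; for fewer than ~10 keys this can beat both the binary
    // search (fewer mispredicted branches) and a hash map (no hashing,
    // no pointer chase through Entry objects).
    String getScan(long k) {
        for (int i = 0; i < keys.length; i++) {
            if (keys[i] == k) return vals[i];
        }
        return null;
    }

    public static void main(String[] args) {
        SmallTable t = new SmallTable(new long[]{1, 3, 7},
                                      new String[]{"a", "b", "c"});
        System.out.println(t.getBinary(3) + " " + t.getScan(7) + " " + t.getScan(2));
    }
}
```

Both lookups walk a flat primitive array, which is also where the locality advantage over a node-based map comes from.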

-- Alan

awei...@voltdb.com

Mar 7, 2014, 11:18:26 AM
Hi,

I am so unbelievably sick of hitting "reply to author" in Google Groups and having my messages disappear without a trace. Or maybe that isn't what I am doing, and Google Groups just drops my messages.

I tested a few things: code at http://pastebin.com/AAR2mn61 and results at http://pastebin.com/7Py3yyvT
Benchmark                                 (m_count)   Mode   Samples         Mean   Mean error    Units
JMH3.MyBenchmark.testArrayBinary                 16   avgt         5      198.829       23.230    ns/op
JMH3.MyBenchmark.testArrayBinary                 24   avgt         5      320.971       13.531    ns/op
JMH3.MyBenchmark.testArrayBinary                 32   avgt         5      468.476        6.551    ns/op
JMH3.MyBenchmark.testArrayBinary                  8   avgt         5       74.012        0.885    ns/op
JMH3.MyBenchmark.testArrayScan                   16   avgt         5      129.399        0.862    ns/op
JMH3.MyBenchmark.testArrayScan                   24   avgt         5      231.715        5.599    ns/op
JMH3.MyBenchmark.testArrayScan                   32   avgt         5      359.104       18.624    ns/op
JMH3.MyBenchmark.testArrayScan                    8   avgt         5       48.804        0.495    ns/op
JMH3.MyBenchmark.testHashMap                     16   avgt         5      158.899        2.509    ns/op
JMH3.MyBenchmark.testHashMap                     24   avgt         5      240.234       20.686    ns/op
JMH3.MyBenchmark.testHashMap                     32   avgt         5      316.633        7.020    ns/op
JMH3.MyBenchmark.testHashMap                      8   avgt         5       81.146        9.180    ns/op
JMH3.MyBenchmark.testImmutableMap                16   avgt         5      111.630        2.572    ns/op
JMH3.MyBenchmark.testImmutableMap                24   avgt         5      173.245        6.796    ns/op
JMH3.MyBenchmark.testImmutableMap                32   avgt         5      240.638        5.967    ns/op
JMH3.MyBenchmark.testImmutableMap                 8   avgt         5       54.846        1.372    ns/op
JMH3.MyBenchmark.testImmutableSortedMap          16   avgt         5      436.898       10.064    ns/op
JMH3.MyBenchmark.testImmutableSortedMap          24   avgt         5      760.812      261.168    ns/op
JMH3.MyBenchmark.testImmutableSortedMap          32   avgt         5      975.969       29.037    ns/op
JMH3.MyBenchmark.testImmutableSortedMap           8   avgt         5      175.758        8.764    ns/op

I do wonder if my binary search and array scan are as efficient as they can be. I should probably test this with my actual use case, which has 23 keys that are Class objects.

Thanks,
Ariel

Rüdiger Möller

Mar 8, 2014, 12:32:27 PM
to mechanica...@googlegroups.com
Actually, Trove isn't that good (slow resizing, especially as collections grow).
I had better results using these


What I can say is that you should eliminate boxed numerics. Micro benchmarks frequently do not tell the full story here; in particular, cache eviction caused by (temporary) object allocation affects real systems a lot more than it affects benchmarks.
Also note that many of the open-addressing hash maps out there are not that cache friendly. With current CPU and cache architectures, double hashing (AFAIK Trove does this) is much worse than linear probing.
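The locality point is easy to see in a toy long-to-Object open-addressing map with linear probing (illustration only: no resize and no remove; the bit mixer is borrowed from MurmurHash3's finalizer):

```java
public class LinearProbeMap {
    // Open addressing with linear probing: a collision scans the *next*
    // array slot, so successive probes usually stay on the same cache
    // line, unlike double hashing, which jumps to an unrelated slot on
    // each probe.
    private final long[] keys;
    private final Object[] vals;
    private final boolean[] used;
    private final int mask;

    LinearProbeMap(int capacityPow2) {
        keys = new long[capacityPow2];
        vals = new Object[capacityPow2];
        used = new boolean[capacityPow2];
        mask = capacityPow2 - 1;
    }

    void put(long k, Object v) {
        int i = mix(k) & mask;
        while (used[i] && keys[i] != k) i = (i + 1) & mask; // linear step
        used[i] = true; keys[i] = k; vals[i] = v;
    }

    Object get(long k) {
        int i = mix(k) & mask;
        while (used[i]) {
            if (keys[i] == k) return vals[i];
            i = (i + 1) & mask; // next slot: likely the same cache line
        }
        return null; // hit an empty slot: key is absent
    }

    // MurmurHash3 finalizer: spreads the key bits before masking.
    private static int mix(long k) {
        k ^= k >>> 33; k *= 0xff51afd7ed558ccdL; k ^= k >>> 33;
        return (int) k;
    }

    public static void main(String[] args) {
        LinearProbeMap m = new LinearProbeMap(64);
        for (long i = 0; i < 32; i++) m.put(i, "v" + i);
        System.out.println(m.get(5) + " " + m.get(31) + " " + m.get(99));
    }
}
```

Keys, occupancy flags, and probe steps all live in flat arrays, which is exactly what keeps a miss-heavy workload cache friendly.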

What's also notable is that the JIT is not reliable at inlining. It will usually do a good job, but if you want to be sure, inline manually in very hot routines. The JIT might decide *not* to inline in your real application even though it does in the micro benchmark. I also had some adventures where adding a statement suddenly decreased performance because HotSpot stopped inlining.

For a really ugly example (significantly faster than all other maps for my specific use case), see line 181.


Martin Thompson

Mar 10, 2014, 7:41:13 AM
to mechanica...@googlegroups.com
Fastutil can be a good option. However, be aware that in the open-addressing map implementations, removes can cause long cluster chains to form, with really nasty performance consequences. You often need to rehash at appropriate times if you remove items. For this reason I implemented my own maps, which on remove do a sliding compaction of the cluster to keep lookups optimal.

awei...@voltdb.com

Mar 10, 2014, 3:34:20 PM
to mechanica...@googlegroups.com
Does the issue with remove exist when items are regularly removed from the map? For instance, if you are tracking in-flight processes and they only last a few milliseconds, will you still suffer from long chains?

Martin Thompson

Mar 10, 2014, 3:41:05 PM
to mechanica...@googlegroups.com
The issue comes about with open-addressing maps that use a tombstone to mark a removed item without compacting the chain. This is fine if the collection does not see a significant number of removes; when it does, I've seen cases where performance completely tanks.

awei...@voltdb.com

Mar 10, 2014, 4:23:06 PM
to mechanica...@googlegroups.com
I still don't follow when I will run into this situation and with which map implementations. I found that NonBlockingHashMap would reliably OOM if I added and removed incrementing integers. Is that something that can happen with open addressing as implemented by Trove and Fastutil?

Generally speaking, my map usages are either fixed lookup tables that are completely rewritten, or in-flight state tracking with IDs that will never be reused, will only be in the map for a few milliseconds, and will be removed FIFO. If there is a hiccup, the maps will have to grow to accommodate a backlog of several thousand things until backpressure kicks in at the producers, but that is a failure state where all they have to do is not break.

In this situation, would consistently removing every entry prevent long chains from forming? I can afford pretty low load factors if that will keep the chain length and frequency down. I am not clear on how tombstones get removed in open-addressing hash maps.

Resize performance isn't important for me because the maps grow to a steady state that isn't very large.

Thanks,
Ariel

Martin Thompson

Mar 10, 2014, 4:42:55 PM
to mechanica...@googlegroups.com
An example of the issue I've seen is with fastutil, for the following class.


However, reading the release notes, things have changed:


Faster Hash Tables

fastutil 6.1.0 changes significantly the implementation of hash-based classes. Instead of double hashing, we use linear probing. This has some consequences:

    • the classes are now about two times faster;
    • deletions are effective—there is no “marking” of deleted entries (the claim that this was impossible with open addressing was, of course, wrong);
    • given a size and a load factor, the backing array of a table will be in general larger (in the worst case about two times larger);
    • it is no longer possible to set the growth factor of the table, which is fixed at 2 (the old methods to control the growth factor are now no-ops—they are kept just for backward compatibility);
    • there are efficient implementations of big sets.

I need to look into the code, but they seem to have addressed the issue. The "marking" is, I believe, the old tombstone implementation. Since writing my own implementations I've not revisited it.

If you imagine what happens when removed items are only marked, the chains grow and cluster. Maps that reach a "natural" size and then churn entries are the worst-case scenario unless you compact the chains on remove.
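The compacting remove Martin describes is essentially Knuth's deletion algorithm for linear probing, and can be sketched as below. This is a toy, not a production implementation: it uses an identity hash so the collisions in main are predictable, and it has no resize.

```java
public class CompactingMap {
    private final long[] keys;
    private final Object[] vals;
    private final boolean[] used;
    private final int mask;

    CompactingMap(int capacityPow2) {
        keys = new long[capacityPow2];
        vals = new Object[capacityPow2];
        used = new boolean[capacityPow2];
        mask = capacityPow2 - 1;
    }

    // Identity hash keeps the demo's collisions predictable; a real map
    // would mix the bits first.
    private int home(long k) { return (int) k & mask; }

    void put(long k, Object v) {
        int i = home(k);
        while (used[i] && keys[i] != k) i = (i + 1) & mask;
        used[i] = true; keys[i] = k; vals[i] = v;
    }

    Object get(long k) {
        int i = home(k);
        while (used[i]) {
            if (keys[i] == k) return vals[i];
            i = (i + 1) & mask;
        }
        return null;
    }

    // Knuth-style deletion for linear probing: instead of leaving a
    // tombstone, slide later entries of the cluster back into the gap,
    // so probe chains never accumulate dead slots.
    void remove(long k) {
        int i = home(k);
        while (used[i] && keys[i] != k) i = (i + 1) & mask;
        if (!used[i]) return; // key not present
        int free = i, j = i;
        while (true) {
            j = (j + 1) & mask;
            if (!used[j]) break; // end of the cluster
            int h = home(keys[j]);
            // Entry j may shift into the gap only if its home slot is
            // cyclically outside (free, j]; otherwise a probe starting
            // at its home would never reach the gap slot.
            boolean homeInGap = free < j ? (h > free && h <= j)
                                         : (h > free || h <= j);
            if (!homeInGap) {
                keys[free] = keys[j];
                vals[free] = vals[j];
                free = j; // the gap moves forward
            }
        }
        used[free] = false;
        vals[free] = null;
    }

    public static void main(String[] args) {
        CompactingMap m = new CompactingMap(16);
        m.put(1, "a"); m.put(17, "b"); m.put(33, "c"); // all home to slot 1
        m.remove(17);                                  // compacts the cluster
        System.out.println(m.get(1) + " " + m.get(33) + " " + m.get(17));
    }
}
```

After removing 17, the entry for 33 slides back into the vacated slot, so the probe chain stays dense with no tombstone left behind.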

To answer specific questions see below:

On Monday, 10 March 2014 20:23:06 UTC, awei...@voltdb.com wrote:
I still don't follow when I will run into this situation and with which map implementations. I found that NonBlockingHashMap would reliably OOM if I added and removed incrementing integers. Is that something that can happen with open addressing as implemented by Trove and Fastutil?

This is exactly the problem I witnessed with fastutil, and it drove me to write my own. However, "my own" is just a variation on the great work of Donald Knuth.
 
Generally speaking my map usages are either fixed lookup tables that are completely rewritten or in flight state tracking with IDs that will never be reused, will only be in the map for a few milliseconds, and will be removed FIFO. If there is a hiccup the maps will have to grow accommodate a backlog of several thousand things until backpressure kicks in at the producers, but that is a failure state where all it has to do is not break.

With open addressing, that hiccup could get particularly nasty if combined with a large resize and the resulting rehash.
 
In this situation would consistently removing every entry prevent long chains from forming? I can afford to go with pretty low load factors if that will keep the chain length and frequency down. I am not clear on how tombstones get removed in open address hash maps.

No. See above.

Resize performance isn't important for me because the maps grow to a steady state that isn't very large.

That is what we all hope :-)

Rüdiger Möller

Mar 10, 2014, 8:14:14 PM
to mechanica...@googlegroups.com
Yeah, removing is often an issue .. now that I think about it, I don't even implement it in my maps, and I got into the habit of using open-addressed maps for mostly-static lookup tables only.
I did not run into the issue with fastutil 6, but my primary use case is fast update+lookup. Worth testing how well their solution works :-)

Ariel Weisberg

Mar 13, 2014, 2:13:33 PM
to mechanica...@googlegroups.com
Hi,

Thank you Martin, Rüdiger, and Aleksey this has been very helpful.

I am wondering if I should be benchmarking with my full application and a profiler. It's difficult to take a small change like the map type and measure an end-to-end difference in performance, but the profiler I am using makes the impact of changes obvious: you can see the thing you optimized disappear and all the other things grow a little.

The problem with using end-to-end measurements is that I think I may still be suffering from lock contention around Selector.wakeup(), which is the only lock I haven't been able to remove and can't split (at least not for a single socket). I think most performance improvements don't yield much other than a little more contention. Performance is also so variable between runs that it would be time-consuming to collect enough data to see the impact. Contention is tough to measure because the profiler only reports periods of contention > 1 millisecond, so many sub-millisecond contentions may go unreported.

I had one instance where I replaced a hash map with an immutable map and the lookup disappeared from the profile. Is the profiler inaccurate, or did the work really "go away"? I am using Flight Recorder, which doesn't sample at safepoints, and it produces a much more believable profile than most other profilers in terms of matching my mental cost model of how fast things should be relative to each other.

I have to deal with CRC32C performance first before I can come back to the rest of the profile, which is dominated by map lookups, concurrent queues, and serialization. That means I won't get to testing the different open-addressing implementations until I am done with the checksum overhead.

Thanks,
Ariel

Rüdiger Möller

Mar 13, 2014, 3:22:18 PM
Yes, optimization is a black art :-). Profilers help in finding the "big ones", but at a certain point they might even mislead you.

First you need to split your measurements/optimizations into single-threaded and concurrency-related ones. Since I spent the last week optimizing my serialization (hunting two-digit-nanosecond gains), here is my list of surprising candidates in single-threaded optimization:

  • Avoid temporary allocation. Profilers rarely show this, but the effects on locality frequently matter a lot. Just check what your program is allocating in a typical test run. The effects show up in throughput, not in a profiler.
  • HashMap lookups aren't that cheap. Don't use collections with boxed types. Remove any redundant hash lookups (they creep in accidentally as code evolves). I had significant gains by caching hash-lookup results: if the same lookup is likely to happen several times in a row, caching the last result and comparing before doing another hash lookup can help.
  • instanceof can be expensive! I had an "instanceof Serializable"; removing it sped things up from 1400ns to 1200ns, which is a lot given that my program had been optimized to death. If possible, use x.getClass() == Other.class; even better, define constants and compare them instead (like "getTypeId() { return MY_ID; }" -- this only works if the classes to be checked are a limited set, of course).
  • Because of branch prediction, highly predictable checks (like "if (DEBUG)" or "if (fastMode)") are very cheap.
  • switch() can be slower than if-else chains when some paths are frequent. I don't know why, and maybe I am wrong here, but I saw this in my code.
  • Comparing references is more expensive than comparing ints (maybe related to 64-bit with compressed refs).
  • Locality: try to reduce pointer chasing. E.g., if you have a configuration object with some flags and some processing reads from it, you can improve locality by extracting the relevant parameters at init time and storing them redundantly.
Note this list only becomes relevant at the very end of optimization, once the usual work has been done and there is no possibility left to improve algorithmic behaviour and memory management.
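The "cache the last hash lookup" item from the list can be sketched as follows (the class, field, and key names are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class CachedLookup {
    private final Map<String, Integer> table = new HashMap<>();
    // Remember the last key/value pair: repeated lookups of the same key
    // (which creep in as code evolves) skip the hash + probe entirely.
    private String lastKey;
    private Integer lastVal;

    CachedLookup() {
        table.put("alpha", 1);
        table.put("beta", 2);
    }

    Integer get(String k) {
        if (k.equals(lastKey)) return lastVal; // one compare beats a hash lookup
        Integer v = table.get(k);
        lastKey = k;
        lastVal = v;
        return v;
    }

    public static void main(String[] args) {
        CachedLookup c = new CachedLookup();
        // second "alpha" lookup is served from the one-entry cache
        System.out.println(c.get("alpha") + " " + c.get("alpha") + " " + c.get("beta"));
    }
}
```

This only pays off when repeated same-key lookups are genuinely common; for random key streams the extra compare is pure overhead.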

Peter Lawrey

Mar 13, 2014, 5:02:12 PM
to mechanica...@googlegroups.com
Comparing compressed refs should be the same as comparing two ints, as that is basically what they are. I say "should", but you might be right that they are not optimised as heavily.



Rüdiger Möller

Mar 14, 2014, 4:59:01 AM
to mechanica...@googlegroups.com
Of course, it's dangerous to jump to conclusions. Frequently it really depends on the surrounding code whether something makes a difference, and frequently the "theory" behind an observed improvement is completely wrong :-). It's just a list of things one might have a look at.