Superscalar CPUs, byte endianness, perf, and benchmarks.


Kevin Burton

Sep 1, 2013, 11:58:09 PM
to mechanica...@googlegroups.com
A few of my benchmarking adventures in the last few months didn't make a ton of sense and I'm still trying to track down exactly why... 

One is using a big endian vs little endian ByteBuffer.  They seem to be somewhat on par in terms of performance.  Big endian on x86 in my benchmarks was slower, but not SHOCKINGLY slower.

I'm wondering how much of this might have to do with superscalar CPUs' instruction-level parallelism.  It might be that two benchmarks execute at the same rate, but one uses far more instructions (execution resources that could otherwise be used for other work).

Tools like caliper might need to be augmented to support perf and the modern processor tooling to expose this utilization.


Peter Lawrey

Sep 2, 2013, 2:29:19 AM
to mechanica...@googlegroups.com

My understanding is that there is a single machine-code instruction (bswap on x86) to swap the bytes around, and this is done in the CPU. Compared to the cost of cache access and bounds checks, swapping the bytes doesn't make much difference.


Kevin Burton

Sep 2, 2013, 3:02:29 AM
to mechanica...@googlegroups.com
I mean the JIT compiler would have to be VERY smart, because ByteBuffer is doing all this byte shuffling manually...

The following code is essentially how ByteBuffer does putLong and getLong... I was going to adopt this code but instead decided to just let Unsafe do everything natively.


    private static byte long7(long x) { return (byte)(x >> 56); }
    private static byte long6(long x) { return (byte)(x >> 48); }
    private static byte long5(long x) { return (byte)(x >> 40); }
    private static byte long4(long x) { return (byte)(x >> 32); }
    private static byte long3(long x) { return (byte)(x >> 24); }
    private static byte long2(long x) { return (byte)(x >> 16); }
    private static byte long1(long x) { return (byte)(x >>  8); }
    private static byte long0(long x) { return (byte)(x      ); }

    private long makeLong(byte b7, byte b6, byte b5, byte b4,
                          byte b3, byte b2, byte b1, byte b0) {

        return ((((long)b7         ) << 56) |
                  (((long)b6 & 0xff) << 48) |
                  (((long)b5 & 0xff) << 40) |
                  (((long)b4 & 0xff) << 32) |
                  (((long)b3 & 0xff) << 24) |
                  (((long)b2 & 0xff) << 16) |
                  (((long)b1 & 0xff) <<  8) |
                  (((long)b0 & 0xff)      ));

    }

    public void setLong(long index , long x) {
        byteSlab.setByte(index    , long7(x));
        byteSlab.setByte(index + 1, long6(x));
        byteSlab.setByte(index + 2, long5(x));
        byteSlab.setByte(index + 3, long4(x));
        byteSlab.setByte(index + 4, long3(x));
        byteSlab.setByte(index + 5, long2(x));
        byteSlab.setByte(index + 6, long1(x));
        byteSlab.setByte(index + 7, long0(x));
    }

    public long getLong(long index) {
        return makeLong(byteSlab.getByte(index    ),
                        byteSlab.getByte(index + 1),
                        byteSlab.getByte(index + 2),
                        byteSlab.getByte(index + 3),
                        byteSlab.getByte(index + 4),
                        byteSlab.getByte(index + 5),
                        byteSlab.getByte(index + 6),
                        byteSlab.getByte(index + 7));
    }

Martin Thompson

Sep 2, 2013, 3:34:25 AM
to mechanica...@googlegroups.com
If unaligned access is supported for DirectByteBuffer, as it is on x86, the code can call Long.reverseBytes(), which has an intrinsic applied. There are lots of tricks for this, like rotates and table lookups, or using bswap.
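For illustration, here is a minimal sketch of that approach (the Unsafe bootstrap and the address parameter are assumptions for the example, not code from this thread): on little-endian x86 the JIT intrinsifies Long.reverseBytes() into a single bswap, so a big-endian access becomes one swap plus one plain 8-byte load or store.

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    public class BigEndianAccess {
        private static final Unsafe UNSAFE = loadUnsafe();

        // Write 'value' at 'address' in big-endian order on a little-endian CPU:
        // Long.reverseBytes() is intrinsified to bswap, followed by one 8-byte store.
        static void putLongBigEndian(long address, long value) {
            UNSAFE.putLong(address, Long.reverseBytes(value));
        }

        // Read it back: one 8-byte load followed by bswap.
        static long getLongBigEndian(long address) {
            return Long.reverseBytes(UNSAFE.getLong(address));
        }

        // Reflectively grab sun.misc.Unsafe (the pre-Java-9 idiom used in this era).
        private static Unsafe loadUnsafe() {
            try {
                Field f = Unsafe.class.getDeclaredField("theUnsafe");
                f.setAccessible(true);
                return (Unsafe) f.get(null);
            } catch (Exception e) {
                throw new AssertionError(e);
            }
        }
    }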

Michael Barker

Sep 2, 2013, 3:37:14 AM
to mechanica...@googlegroups.com
I benchmarked something similar for our marshalling code and I found that Long.reverseBytes() + Unsafe.putLong() was about 80% of the performance of Unsafe.putLong() on an Intel Westmere.

Mike.



Michael Barker

Sep 2, 2013, 3:45:19 AM
to mechanica...@googlegroups.com
I would recommend having a look at the assembly for both cases; it will probably give you a better idea of what is going on.

Mike.

Rüdiger Möller

Sep 2, 2013, 9:46:48 AM
to mechanica...@googlegroups.com
Using non-native-ordered buffers has even worse effects when it comes to arrays. You can't copy a primitive array using copyMemory(); you have to loop with getXX/putXX to get the byte order corrected. I worked around this by using the native byte order of my primary target platform (x86) and applying reordering only on platforms with a different byte order. This is an issue when doing copyMemory from/to on-heap arrays and unsafe memory.

Wojciech Kudla

Sep 2, 2013, 4:24:07 PM
to mechanica...@googlegroups.com
> This is an issue when doing copyMemory from/to on-heap arrays and unsafe memory.

Rudiger,

I'm very interested in learning how you manage to guarantee consistency when conducting unsafe access on on-heap objects.

 



Peter Lawrey

Sep 2, 2013, 5:09:19 PM
to mechanica...@googlegroups.com
Whether you are working on heap or off heap, the operations are the same ones that AtomicReference, AtomicInteger and AtomicLong use: basically getVolatileInt/Long, putOrderedInt/Long, compareAndSwapInt/Long and, if needed, putVolatileInt/Long.

Wojciech Kudla

Sep 2, 2013, 5:31:48 PM
to mechanica...@googlegroups.com
Peter, 

I fully agree, but Rudiger mentioned looping with getXX, putXX over a memory region that's supposed to contain array entries. Maybe I'm missing something here, but on-heap stuff may change its memory location over time, hence my question about preserving consistency.



Peter Lawrey

Sep 3, 2013, 1:06:03 AM
to mechanica...@googlegroups.com
Your question is valid, I missed that point.

You can use getXxx provided you have done a getVolatile first, and a putXxx provided you do a putVolatile/putOrdered last.  Java 8 will have explicit memory barriers in Unsafe.

If you repeatedly do getXxx, or just do a putXxx, you will eventually get the right value.  The JVM doesn't yet optimise these accesses the way it can a normal field.
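A hedged sketch of the pattern Peter describes, applied to an on-heap long[] (the guard-slot layout and all names here are mine, purely illustrative): the writer uses plain puts for the payload and putOrderedLong on a guard word last; the reader does a volatile get of the guard first and then plain gets.

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    public class GuardedSlab {
        private static final Unsafe UNSAFE = loadUnsafe();
        private static final long BASE  = UNSAFE.arrayBaseOffset(long[].class);
        private static final long SCALE = UNSAFE.arrayIndexScale(long[].class);

        private final long[] slab = new long[16];   // slot 0 is the guard/sequence word

        // Writer: plain puts for the payload, ordered put on the guard last so the
        // payload stores cannot be reordered past the publication.
        void publish(long a, long b, long seq) {
            UNSAFE.putLong(slab, offset(1), a);
            UNSAFE.putLong(slab, offset(2), b);
            UNSAFE.putOrderedLong(slab, offset(0), seq);
        }

        // Reader: volatile read of the guard first, then plain gets of the payload.
        long readIfPublished(long expectedSeq) {
            if (UNSAFE.getLongVolatile(slab, offset(0)) != expectedSeq) {
                return -1;  // not yet published (illustrative sentinel)
            }
            return UNSAFE.getLong(slab, offset(1)) + UNSAFE.getLong(slab, offset(2));
        }

        // Always pass the array object plus an offset; never hold a raw address.
        private static long offset(int index) {
            return BASE + index * SCALE;
        }

        private static Unsafe loadUnsafe() {
            try {
                Field f = Unsafe.class.getDeclaredField("theUnsafe");
                f.setAccessible(true);
                return (Unsafe) f.get(null);
            } catch (Exception e) {
                throw new AssertionError(e);
            }
        }
    }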

Rüdiger Möller

Sep 3, 2013, 7:48:34 AM
to mechanica...@googlegroups.com
I am talking about

copyMemory(java.lang.Object p1, long p2, java.lang.Object p3, long p4, long p5);

Since one passes the object handle, you don't have to deal with address resolution. You can obtain the array base offset (and element size) using

public native int arrayBaseOffset(java.lang.Class p1);

This way you can copy int or long arrays to byte arrays and then put those using Buffer.putByte( byte[] ) if needed. Somewhat special, but there are cases where this comes in handy. Of course it does not make sense to do an extra copy in general, but there are cases where this speeds things up a lot. E.g. if one serializes an int array to an output, one can use a single copyMemory to write/read the array at once (in native byte order).
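A hedged sketch of that bulk copy (the class and helper names are mine, not Rüdiger's): one copyMemory call moves a whole int[] into a byte[] in native byte order, passing the array objects plus their base offsets so no raw address is ever held across a call.

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    public class ArrayBulkCopy {
        private static final Unsafe UNSAFE = loadUnsafe();
        private static final long INT_BASE  = UNSAFE.arrayBaseOffset(int[].class);
        private static final long BYTE_BASE = UNSAFE.arrayBaseOffset(byte[].class);

        // Copy the whole int[] into a byte[] in one shot, in native byte order.
        // The object handles are passed to copyMemory, so the addresses are resolved
        // inside the call and a moving GC cannot invalidate them mid-copy.
        static byte[] toBytes(int[] src) {
            byte[] dst = new byte[src.length * 4];
            UNSAFE.copyMemory(src, INT_BASE, dst, BYTE_BASE, dst.length);
            return dst;
        }

        static int[] toInts(byte[] src) {
            int[] dst = new int[src.length / 4];
            UNSAFE.copyMemory(src, BYTE_BASE, dst, INT_BASE, (long) dst.length * 4);
            return dst;
        }

        private static Unsafe loadUnsafe() {
            try {
                Field f = Unsafe.class.getDeclaredField("theUnsafe");
                f.setAccessible(true);
                return (Unsafe) f.get(null);
            } catch (Exception e) {
                throw new AssertionError(e);
            }
        }
    }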





Wojciech Kudla

Sep 3, 2013, 4:25:06 PM
to mechanica...@googlegroups.com
Peter,

My experience shows there is no such thing as consistent sequential unsafe access/manipulation for on-heap objects. A series of volatile loads or stores would still remain a set of atomic but completely independent operations. Let's say the array gets moved by the GC when I'm half-way through looping over that array. How do I guarantee consistency throughout the whole loop?
And an additional question: I agree that repeatedly doing getXxx will most likely yield the right value in the end, but how do I know when to stop? I can't CAS on it, as I don't know what value to expect in the first place.




Peter Lawrey

Sep 3, 2013, 4:40:26 PM
to mechanica...@googlegroups.com
Before the GC starts, it brings all the threads to a safepoint, and interestingly Unsafe.copyMemory has no safepoints within it.  This has two impacts:

- a GC will not occur half way through a copyMemory;
- if you copyMemory enough data you can prevent a GC from starting.

When you are copying, you don't need to know what the previous value was.  Either you are reading after a read barrier, in which case you have only the current value, or you have a write barrier after performing writes and you don't need the previous value (though the CPU cache still loads it, unfortunately).

Where you have a problem is if two threads attempt to write to the same region concurrently, because you can end up with part of one copy and part of the other; but you shouldn't be doing that, IMHO.

Peter.

Wojciech Kudla

Sep 3, 2013, 4:48:59 PM
to mechanica...@googlegroups.com
Peter,

Since you were referring to getXXX and putXXX, my impression was we were discussing operations completely different from Unsafe.copyMemory(), which most of us are familiar with. Maybe I misinterpreted your post. Thanks for the clarification.



Kevin Burton

Sep 3, 2013, 6:27:05 PM
to mechanica...@googlegroups.com
I assume the getXXX and putXXX  version first obtained the pointer to the byte[] and then called getXXX and putXXX  based on that pointer?

This would make sense (regarding the comment about this being problematic), considering a GC could relocate the byte[].

My version is just using Unsafe for everything, including copying the byte[]... I have a 'direct buffer' equivalent now, but I need to work on a heap version next.

Kevin



Peter Lawrey

Sep 4, 2013, 1:46:07 AM
to mechanica...@googlegroups.com
That is my understanding, however it would do it without a safepoint in between, i.e. a GC cannot be performed just anywhere. (Though this is how it appears at the Java level.)
The downside of these safepoint checks is that they can really slow down your system in some cases and make it behave badly, but that is a different issue. At least you can be sure a GC cannot be started until every thread has reached one of these safepoints.



Martin Thompson

Sep 4, 2013, 3:20:22 AM
to mechanica...@googlegroups.com
The on-heap data will not change during the duration of any single call to unsafe because the thread is not at safepoint then. You can consider each call atomic and only carry values between calls, never an address. The address is resolved on each call by passing it to the Unsafe method.

Unsafe.copyMemory is safe while executing because the thread is not at safepoint.



Kirk Pepperdine

Sep 4, 2013, 4:17:51 AM
to mechanica...@googlegroups.com
So what you are saying is that there is potentially a whole class of problems when using Unsafe that are related to safepointing... nice!

Martin Thompson

Sep 4, 2013, 4:35:20 AM
to mechanica...@googlegroups.com
Not quite. Unsafe ops are intrinsic and therefore have *no* safepoint check. The thing to watch out for is using Unsafe.copyMemory() or Unsafe.setMemory() for a very large region, because this has the potential to impact TTSP (time to safepoint).

On JRockit you do have a safepoint check, because there Unsafe is JNI and not intrinsics.
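A hedged sketch of one way to bound that effect (the 1 MB chunk size and all names are assumptions, not anything Martin prescribed): split a very large off-heap copy into fixed-size chunks so the thread returns to regular JIT-compiled code, with its safepoint polls, between chunks.

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    public class ChunkedCopy {
        private static final Unsafe UNSAFE = loadUnsafe();
        // 1 MB per call keeps the time spent inside any single copyMemory small,
        // which bounds this thread's contribution to time-to-safepoint.
        private static final long CHUNK_BYTES = 1L << 20;

        static void copyLarge(long srcAddress, long dstAddress, long bytes) {
            long copied = 0;
            while (copied < bytes) {
                long n = Math.min(CHUNK_BYTES, bytes - copied);
                UNSAFE.copyMemory(srcAddress + copied, dstAddress + copied, n);
                copied += n;
                // Between calls the thread is back in normal compiled code, where
                // safepoint polls (e.g. at the loop back edge) can be taken as usual.
            }
        }

        private static Unsafe loadUnsafe() {
            try {
                Field f = Unsafe.class.getDeclaredField("theUnsafe");
                f.setAccessible(true);
                return (Unsafe) f.get(null);
            } catch (Exception e) {
                throw new AssertionError(e);
            }
        }
    }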




Kirk Pepperdine

Sep 4, 2013, 4:41:38 AM
to mechanica...@googlegroups.com
On 2013-09-04, at 7:46 AM, Peter Lawrey <peter....@gmail.com> wrote:

> That is my understanding, however it would do it without a safepoint in between, i.e. a GC cannot be performed just anywhere. (Though this is how it appears at the Java level.)
> The downside of these safepoint checks is that they can really slow down your system in some cases and make it behave badly, but that is a different issue. At least you can be sure a GC cannot be started until every thread has reached one of these safepoints.

I believe in this case the thread will pass through a safepoint on the way into C and on the way back to Java. But this isn't always the case with every call, if I understand things correctly. For example, I believe a thread sitting on a socket is considered to be at a safepoint. Otherwise GC in servers wouldn't work.

-- Kirk

Kirk Pepperdine

Sep 4, 2013, 4:44:23 AM
to mechanica...@googlegroups.com


> The on-heap data will not change during the duration of any single call to unsafe because the thread is not at safepoint then. You can consider each call atomic and only carry values between calls, never an address. The address is resolved on each call by passing it to the Unsafe method.
>
> Unsafe.copyMemory is safe while executing because the thread is not at safepoint.

Oops, my last email may sound like it's in contradiction to this point... to be clear, the safepoint check will be made on entry and on exit, or on any reference to an item in the heap, but in the meantime the thread is not at a safepoint, just as Martin has stated here.

-- Kirk

Peter Lawrey

Sep 4, 2013, 4:44:41 AM
to mechanica...@googlegroups.com
I admit that what is considered a safepoint isn't clear to me. I imagine you are right, along with any blocking, waiting, yielding or sleeping operation.

AFAIK some operations are not considered safepointed by HotSpot.

Martin Thompson

Sep 4, 2013, 4:45:04 AM
to mechanica...@googlegroups.com
Once in JNI code the thread is at a safepoint. So if you block reading on a socket, you are in native code at that point and therefore at a safepoint for that thread.

Kirk Pepperdine

Sep 4, 2013, 4:49:11 AM
to mechanica...@googlegroups.com

On 2013-09-04, at 10:45 AM, Martin Thompson <mjp...@gmail.com> wrote:

> Once in JNI code the thread is at safepoint. So if you block reading on a socket you are in the native code at this point and therefore at safepoint for that thread.

How does one handle the case where the C code accesses data in the Java heap?


Peter Lawrey

Sep 4, 2013, 4:51:16 AM
to mechanica...@googlegroups.com
I assume that if you stick to the ugly "helper" methods the JVM provides, you should be OK.  I would assume that if you bypass these you are asking for trouble.


Martin Thompson

Sep 4, 2013, 5:01:08 AM
to mechanica...@googlegroups.com
This is why you have to call back into the heap via the handles, which can do the safepoint check.  Never access heap objects directly from the native side.

Most safepoint checks are performed on Java method return, on the back edge of counted loops, and at JNI transitions.  For all of Peter's examples of sleeping, yielding, and blocking, you have transitioned to native code and done the safepoint check on the way.

Martin...


Gil Tene

Sep 4, 2013, 5:01:47 PM
to mechanica...@googlegroups.com
Here is a collection of statements about "what is a safepoint" that attempts to be both correct and somewhat precise:

1. A thread can be at a safepoint or not be at a safepoint. When at a safepoint, the thread's representation of its Java machine state is well described, and can be safely manipulated and observed by other threads in the JVM. When not at a safepoint, the thread's representation of the Java machine state will NOT be manipulated by other threads in the JVM. [Note that other threads do not manipulate a thread's actual logical machine state, just its representation of that state. A simple example of changing the representation of machine state is changing the virtual address that a Java reference stack variable points to as a result of relocating that object. The logical state of the reference variable is not affected by this change, as the reference still refers to the same object, and two reference variables referring to the same object will still be logically equal to each other even if they temporarily point to different virtual addresses.]

2. "Being at a safepoint" does not mean "being blocked" (e.g. JNI code runs at a safepoint), but "being blocked" always happens at a safepoint.

3. The JVM may choose to reach a global safepoint (aka Stop-The-World), where all threads are at a safepoint and can't leave the safepoint until the JVM decides to let them do so. This is useful for doing all sorts of work (like certain GC operations, deoptimization during class loading, etc.) that requires ALL threads to be at a well-described state.

4. Some JVMs can bring individual threads to safepoints without requiring a global safepoint. E.g. Zing uses the term Checkpoint (first published in [1]) to describe a JVM mechanism that individually passes threads through thread-specific safepoints to perform certain very short operations on individual thread state without requiring a Stop-The-World pause.

5. When you write Unsafe java code, you must assume that a safepoint MAY occur between any two bytecodes. 

6. Unsafe calls are not required to have safepoints within them (and many/most don't), but they MAY include one or more safepoints. E.g. an Unsafe copyMemory MAY include periodic safepoint opportunities (e.g. take one every 16KB). Zing sure does, as we do a lot of under-the-hood work to keep TTSP in check.

7. All [practical] JVMs apply some highly efficient mechanism for frequently crossing safepoint opportunities, where the thread does not actually enter a safepoint unless someone else indicates the need to do so. E.g. most call sites and loop back edges in generated code will include some sort of safepoint polling sequence that amounts to "do I need to go to a safepoint now?". Many HotSpot variants (OpenJDK and Oracle JDK) currently use a simple global "go to safepoint" indicator in the form of a page that is protected when a safepoint is needed, and unprotected otherwise. The safepoint polling for this mechanism amounts to a load from a fixed address in that page. If the load traps with a SEGV, the thread knows it needs to enter a safepoint. Zing uses a different, per-thread go-to-safepoint indicator of similar efficiency.

8. All JNI code executes at a safepoint. No Java machine state of the executing thread can be changed or observed by its JNI code while at a safepoint. Any manipulation or observation of Java state done by JNI code is achieved via JNI API calls that leave the safepoint for the duration of the API call, and then enter a safepoint again before returning to the calling JNI code. This "crossing of the JNI line" is where most of the JNI call and API overhead lies, but it's fairly quick (entering and leaving a safepoint normally amounts to some CAS operations).




 

[1] Click, Tene, Wolf "The Pauseless GC Algorithm" 2005 at https://www.usenix.org/legacy/events/vee05/full_papers/p46-click.pdf




Rüdiger Möller

Sep 4, 2013, 7:09:34 PM
to mechanica...@googlegroups.com
Very insightful post, Gil. Thanks for sharing.