I considered posting on Stack Overflow, but the question strikes me as far too subjective, since I can't think of a reasonable technical explanation for Microsoft's choice in this matter. But this question has bugged me for a long time, the issue keeps coming up in one of my projects, and I have never actually seen an attempt at explaining it:
When they had to choose, they didn't know of the standard, chose the 'other' system, and everyone else just went along for the ride. No shady business; just an unfortunate design decision that was carried along because backward compatibility is the name of Microsoft's game.
Once you throw out the fixed-function pipeline, you deal directly with "clip-space". The OpenGL Specification defines clip-space as a 4D homogeneous coordinate system. When you follow the transforms through normalized device coordinates, and down to window space, you find this.
Window space is in the space of a window's pixels. The origin is in the lower-left corner, with +Y going up and +X going right. That sounds very much like a right-handed coordinate system. But what about Z?
By default, glDepthRange maps depth to [0, 1]: +X goes right, +Y goes up, and +Z goes into the screen. That's a left-handed coordinate system. Yes, you can change the depth test from GL_LESS to GL_GREATER and change the glDepthRange from [0, 1] to [1, 0]. But the default state of OpenGL is to work in a left-handed coordinate system. And none of the transforms necessary to get to window space from clip-space negate the Z. So clip-space, the output of the vertex (or geometry) shader, is a left-handed space (kinda; it's a 4D homogeneous space, so it's hard to pin down its handedness).
In the fixed-function pipeline, the standard projection matrices (produced by glOrtho, glFrustum and the like) all transform from a right-handed space to a left-handed one. They flip the meaning of Z; just check the matrices they generate. In eye space, +Z moves towards the viewer; in post-projection space, it moves away.
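That Z flip is visible directly in the numbers. Below is a minimal sketch in pure Python that builds a perspective matrix following the glFrustum formula from the OpenGL reference pages (the helper names `frustum` and `transform` are mine, not part of any API) and shows that a point in front of a right-handed camera ends up with positive normalized-device Z:

```python
# glFrustum-style perspective matrix (row-major), per the OpenGL reference pages.
def frustum(l, r, b, t, n, f):
    return [
        [2*n/(r-l), 0,         (r+l)/(r-l),   0],
        [0,         2*n/(t-b), (t+b)/(t-b),   0],
        [0,         0,        -(f+n)/(f-n),  -2*f*n/(f-n)],
        [0,         0,        -1,             0],
    ]

def transform(m, v):
    # Multiply a 4x4 matrix by a 4-component column vector.
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

M = frustum(-1, 1, -1, 1, 1, 100)

# In right-handed eye space the camera looks down -Z, so a visible point
# has negative eye-space Z...
eye = [0.0, 0.0, -2.0, 1.0]
clip = transform(M, eye)
ndc_z = clip[2] / clip[3]

# ...but after the perspective divide its Z is positive and grows with
# distance: the projection flipped the meaning of Z.
print(ndc_z > 0)  # True
```

Moving the point farther from the camera (more negative eye-space Z) only increases its normalized-device Z, which is exactly the "+Z moves away" behavior described above.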
I [...] was asked to choose a handedness for the Direct3D API. I chose a left handed coordinate system, in part out of personal preference. [...] it was an arbitrary choice.
They are both essentially equivalent, as one can be easily transformed into the other. The only advantage I can find for the left-handed system is: as objects are farther away from the observer, in any direction (x, y, or z), the distance is a higher value. But I have no idea if this is why Microsoft chose one over the other.
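The "easily transformed into the other" part is literally a single sign change: negate one axis (conventionally Z) and you have switched handedness. A tiny sketch, with a helper name of my own invention:

```python
def flip_z(v):
    """Convert a point between left- and right-handed conventions by
    negating Z. Applying it twice returns the original point."""
    x, y, z = v
    return (x, y, -z)

p_rh = (1.0, 2.0, 3.0)       # right-handed: +Z toward the viewer
p_lh = flip_z(p_rh)          # left-handed: +Z into the screen
print(p_lh)                  # (1.0, 2.0, -3.0)
print(flip_z(p_lh) == p_rh)  # True: the conversion is its own inverse
```

In matrix terms this is just a scale by (1, 1, -1), which is why the choice has no performance cost either way.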
It's pure history. In ancient days the early cave-graphics programmers thought of the monitor (teletype? stonetype?) viewing surface as two-dimensional graph paper. In math and engineering the usual convention for plotting data points on graph paper is: x=right, y=up. Then one day, about a week after the invention of the silicon wheel, someone thought of 3D graphics. When the candle-bulb of this idea blinked on above their head, for whatever reason, they chose to add Z = away from viewer. (Ouch, my right hand hurts just imagining that.)
They had no idea that someday their far descendants would become engineers, scientists, fine artists, commercial artists, animators, product designers, etc., and find 3D graphics useful. All these fine modern people use right-handed coordinate systems to be consistent with each other and with the more established math and physics conventions.
It is foolish to base the 3D coordinate system on the display surface. It's the model that counts - the triangles and polygons and planes describing a house, chair, overweight green ogre, or galaxy. Nowadays we all design and model stuff in right-handed XYZ systems, and do so in terms of the model's world, even before thinking about how it'll be rendered. The camera is added at some point, possibly made to fly around in crazy ways, and it's the invisible infrastructructure converting the model to pixels that, within its bowels, must tinker with coordinate-system transforms.
Just to add to the confusion, some graphics libraries recognize that CRTs scan the image from top to bottom, and so have Y=down. This is used even today in all windowing systems and window managers - X11, fvwm, gtk+, the Win32 API, etc. How new-fangled 3D GUI systems like Clutter, Beryl, etc. deal with Z is a separate issue from 3D graphics modeling. It need concern only applications programmers and GUI designers.
The thing to understand is that a HUGE amount of programmer time has been wasted converting between left-handed and right-handed coordinate systems, and even more programmer time has been wasted remembering which system was needed at any particular instant.
There are enough coordinate systems in common use already, without doubling the number by introducing a handedness question. See Minkler & Minkler, "Aerospace Coordinate Systems and Transformations". If you are in the aerospace coordinate business, doing e.g. flight simulation, you NEED that book.
The other possibility, that they knew right-handed systems were the industry standard, and they deliberately made DirectX left-handed, so as to make it HARD for people to convert code that used DirectX to use OpenGL instead, does not bear consideration. Were I to discover that this was indeed the case, I would find it necessary to embark on a new and presumably short-lived career as an axe-murderer in Redmond.
At that time the standard graphics text used in the RenderMorphics offices was "Principles of Interactive Computer Graphics" by Newman and Sproull, which does everything using left-handed coordinates. It's the same book that I used in college. Looking at the D3D code you could even see them using variable names that matched the equations in the book.
To all those who think there is no advantage to right- or left-handedness: you are absolutely wrong. The right-handedness of Cartesian coordinate systems comes from the definition of the vector cross product. For XYZ basis vectors u, v, and w: w = u X v.
This definition is fundamental to vector mathematics, vector calculus, and much of physics. Suppose you are trying to simulate electromagnetic interactions. You have a magnetic field vector, M, and a charged particle moving through that field with velocity v. Which way does the charge accelerate? Easy: in the direction of v X M. Except some idiot thought it would be fun to make your display system left-handed, so now it accelerates in the direction of -(v X M).
Basically any physical interaction between two vector quantities is defined as right-handed, and so any time you need to simulate that interaction, you'd better hope your graphics system is right-handed or you will need to remember to cart around negative signs in all your math.
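The sign headache is easy to demonstrate: mirror one axis and the cross product of the mirrored vectors is not the mirror of the cross product - it comes out negated. A small self-contained sketch (pure Python; the helper names are mine):

```python
def cross(a, b):
    # Standard right-handed cross product: w = u x v.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def to_lh(v):
    # Re-express a vector in a left-handed frame by negating Z.
    return (v[0], v[1], -v[2])

u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
w = cross(u, v)                   # (0, 0, 1): x cross y = z, right-hand rule

w_lh = cross(to_lh(u), to_lh(v))  # same formula applied to mirrored inputs
print(w_lh == to_lh(w))           # False: not the mirror of w...
print(w_lh == to_lh((-w[0], -w[1], -w[2])))  # True: ...but the mirror of -w
```

This is the "cart around negative signs" problem in miniature: every cross product in a left-handed frame silently picks up a factor of -1 relative to the physics textbooks.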
Neither is ultimately better than the other - you're mapping 3D coordinates to a 2D surface, so the third dimension (which actually makes things 3D) can be chosen arbitrarily, pointing at the viewer or into the screen. You're going to put things through a 4x4 matrix anyway, so there is no technical reason to choose one over the other. Functionally, arguments can be made for either direction of each axis.
Conclusion: There is some consensus for the X axis, but for the other two, both directions can be argued for, yielding two right-handed and two left-handed configurations, and they all make sense.
It's perfectly capable of supporting both RH and LH systems. If you look in the SDK documentation you'll see functions such as D3DXMatrixPerspectiveFovLH and D3DXMatrixPerspectiveFovRH. Both work, and both produce a projection matrix that can be used successfully with your co-ordinate system of choice. Hell, you can even use column-major in Direct3D if you wish; it's just a software matrix library and you are not required to use it. And if you want to use it with OpenGL, you'll find that it works perfectly well there too (which is perhaps the definitive proof of the matrix library's independence from Direct3D itself).
So if you want to use an RH system in your program, just use the -RH version of the function. If you want to use an LH system, use the -LH version. Direct3D doesn't care. Likewise, if you want to use an LH system in OpenGL - just glLoadMatrix an LH projection matrix. None of this stuff is important and it's nowhere near the huge issue that you sometimes see it made out to be.
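To see how little separates the two variants, here is a sketch that builds both matrices following the formulas given in the D3DX documentation for D3DXMatrixPerspectiveFovLH/RH (row-major, row-vector convention; the function name and parameterization here are my own, so treat the exact layout as an assumption of this sketch):

```python
import math

def persp_fov(fovy, aspect, zn, zf, lh=True):
    """Perspective matrix in the style of D3DXMatrixPerspectiveFovLH/RH,
    per the formulas in the D3DX docs. The LH and RH variants differ
    only in the signs of two Z-related entries."""
    y_scale = 1.0 / math.tan(fovy / 2.0)
    x_scale = y_scale / aspect
    s = 1.0 if lh else -1.0  # the entire handedness difference
    return [
        [x_scale, 0,       0,                      0],
        [0,       y_scale, 0,                      0],
        [0,       0,       s * zf / (zf - zn),     s],
        [0,       0,       -zn * zf / (zf - zn),   0],
    ]

lh = persp_fov(math.pi / 2, 1.0, 1.0, 100.0, lh=True)
rh = persp_fov(math.pi / 2, 1.0, 1.0, 100.0, lh=False)

# Entries that differ between the two matrices:
diff = [(i, j) for i in range(4) for j in range(4) if lh[i][j] != rh[i][j]]
print(diff)  # [(2, 2), (2, 3)]
```

Two sign flips in one column - that's the whole "huge issue."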
I hate to tell ya, but the 'hand' of a set of coordinates is actually subjective -- it depends on which way you imagine you are looking at things. An example, from the OpenGL cube map spec: if you imagine being inside the cube looking out, the coordinate system is left-handed; if you think of looking in from the outside, it is right-handed. This simple fact causes endless grief, because to get a numerically correct answer all of your assumptions have to be consistent, even though locally it doesn't actually matter which assumption you make.
The OpenGL spec is officially "handedness neutral" and goes to great lengths to avoid this issue -- that is, to put it off onto the programmer. However there is a "hand change" between eye coordinates and clip coordinates: in model and eye space we are looking in, in clip and device space we are looking out. I'm always struggling with the consequences of this no doubt accidental complication.
The cross-product is our friend, because it lets you generate a definitely right handed or left handed 3D coordinate system from a pair of vectors. But which hand it is depends on which of those vectors you call "X".
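To make that concrete, here is a minimal sketch of deriving a third axis from a forward and an up vector; swap which input you treat as first and the resulting axis flips, which is exactly the "which of those vectors you call X" ambiguity (pure Python; the names are mine):

```python
def cross(a, b):
    # Standard right-handed cross product.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

forward = (0.0, 0.0, 1.0)
up      = (0.0, 1.0, 0.0)

right_a = cross(up, forward)  # (1, 0, 0): one handedness convention...
right_b = cross(forward, up)  # (-1, 0, 0): ...the other, just by swapping
print(right_a, right_b)
```

Same two input vectors, opposite "right" axes - the formula is deterministic, but the labeling of its arguments is a convention you must pick and then apply consistently.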
The industry-standard coordinate system actually came from the drafting desk. When people drew everything on a desk - a horizontal plane - X was left/right and Y was front/back, so the natural extension was to make Z point upward. Thus the right-handed coordinate system was born.
But what does a right-handed coordinate system give you on screen? It makes +Z point backward and -Z point forward. What the hell is that? Up is + and down is -; right is + and left is -; but forward is - and backward is +? It's absurd that when we make a game where a character runs through a 3D world, we need negative Z to go forward. That's why many game programmers appreciate DirectX.