Imagine that you are doing some form of texturing that's based on 3D position. For simplicity, let's assume we want something like
color C = noise(P);
but of course, it could be a more complex procedural pattern, or a projected texture map, or whatever. So in this case, you'd see a noise pattern on the model and it would look fine for one still image.
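For concreteness, here is roughly what that looks like as a complete (if trivial) OSL shader -- the shader name and the output parameter name are just placeholders for illustration:

shader pattern_from_P (
    output color Cout = 0
)
{
    // P is the shading position, expressed in "common" space
    // (typically camera or world space).
    color C = noise(P);
    Cout = C;
}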
Now, what happens in an animation when the camera moves, or when the model itself moves with respect to world space? The numeric coordinates of P for a particular spot on the model, expressed in camera or world space, will be different from frame to frame. Therefore, the texture will appear to flow or "swim" over the model instead of sticking to it.
So the solution is to define another coordinate system (not world or camera) that moves rigidly with the model. For some models, "object" space will be fine, but for others it is inadequate. Instead, you may wish to associate another coordinate system with the shader itself (which you may move rigidly with the model, or move independently of the model to help you place the pattern as you want it), and shade based on that coordinate system. THAT is shader space, and you'd use it like this:
point Pshad = transform ("common", "shader", P);
color C = noise(Pshad);
Now, as long as the coordinate system in effect at the shader's declaration is moved rigidly along with the model, the pattern will "stick" to the model as it moves (or as the camera moves).
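Put together as a full shader, that might look something like this (again, the names and the frequency parameter are purely illustrative, not part of any standard):

shader pattern_in_shader_space (
    float frequency = 1.0,   // hypothetical scale control, for illustration
    output color Cout = 0
)
{
    // Re-express P in the shader's own coordinate system.  If that
    // coordinate system is carried rigidly along with the model, Pshad
    // is the same from frame to frame and the pattern sticks.
    point Pshad = transform ("common", "shader", P);
    color C = noise(Pshad * frequency);
    Cout = C;
}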
(If the model is not only moving rigidly, but also deforming, then you will in addition want to use a "reference mesh" or "Pref", rather than raw P, but that's a whole other set of issues to explain.)
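If you do go down the Pref road, one common approach (the details are renderer-dependent, so treat this as a sketch) is to declare a point parameter and let the renderer override it per-geometry with a reference-position primvar:

shader pattern_from_Pref (
    // Assumed to be overridden per-geometry with a "Pref" primvar holding
    // the undeformed reference positions; lockgeom=0 permits that override.
    point Pref = point(0) [[ int lockgeom = 0 ]],
    output color Cout = 0
)
{
    // Shade from the undeformed reference positions, so the pattern
    // stays put even while the mesh deforms.
    color C = noise(Pref);
    Cout = C;
}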