This is a long and complex topic, and my thoughts are still fluid here. I want to sit down and prototype, but there's a bunch of things going on that keep distracting me. Effectively, I think we have conflated several concepts with RoutedCommands:
1) The ICommand part with its Execute/CanExecute methods ... basically this is just a form of method dispatch that lends itself to declarative systems
2) Common Verbs or Actions -- these are what we now call built-in commands
3) The mapping of basic InputGestures (Ctrl-C) to these Commands, via the InputGestures collection on RoutedCommand
ICommand is the important part for M-V-VM, and these commands should be exposed off the ViewModels. I find most scenarios are handled completely by this.
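Concretely, here is a minimal sketch of the pattern (DelegateCommand is the common hand-rolled helper I refer to again below, not a framework type; the names are illustrative):

using System;
using System.Windows.Input;

public class DelegateCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public DelegateCommand(Action<object> execute, Predicate<object> canExecute)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // Call this when the answer to CanExecute may have changed.
    public void RaiseCanExecuteChanged()
    {
        EventHandler handler = CanExecuteChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

A ViewModel then exposes, say, a PasteCommand property of type ICommand built from its own methods, and the View binds to it.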
To understand 2) and 3) it is useful to not overload the term Command, so imagine for the moment that we had a slightly different design.
First, we define something called Verbs. A Verb is just a thing with a name, like "Paste"; it could almost be a string, but type-safety is nice. In this design we would have ApplicationVerbs.Paste instead of ApplicationCommands.Paste. These could also be called Actions or Intents (and I have another name I'll reveal below).
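To make that concrete, a hypothetical sketch (neither type exists in WPF; I'm inventing them for the discussion):

public class Verb
{
    private readonly string _name;

    public Verb(string name) { _name = name; }

    public string Name { get { return _name; } }

    public override string ToString() { return _name; }
}

public static class ApplicationVerbs
{
    public static readonly Verb Cut = new Verb("Cut");
    public static readonly Verb Copy = new Verb("Copy");
    public static readonly Verb Paste = new Verb("Paste");
}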
Second, we define a way to map Verbs to ICommands.
class VerbBinding
{
    public ICommand Command { get; set; }
    public Verb Verb { get; set; }
}
Any UIElement could define VerbBindings, just as it can define CommandBindings and InputBindings today.
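For illustration, assuming UIElement grew a VerbBindings collection analogous to today's CommandBindings (hypothetical API again):

// Wire the Paste Verb on some container to an ICommand on the ViewModel.
panel.VerbBindings.Add(new VerbBinding
{
    Verb = ApplicationVerbs.Paste,
    Command = viewModel.PasteCommand
});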
Third, we have ways to map input to Verbs.
class InputToVerbBinding
{
    public InputGesture InputGesture { get; set; }
    public Verb Verb { get; set; }
}
These could be defined "globally" in the input system, or scoped to tree elements.
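A global registration might look something like this (KeyGesture is the real WPF type; the InputToVerbBindings table is invented for the sketch):

// Hypothetical global table in the input system: Ctrl-V anywhere
// becomes the Paste Verb.
InputToVerbBindings.AddGlobal(new InputToVerbBinding
{
    InputGesture = new KeyGesture(Key.V, ModifierKeys.Control),
    Verb = ApplicationVerbs.Paste
});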
In this design, the View maps basic input like keystrokes and mouse and touch gestures (all InputGestures) either directly to ICommands on the ViewModel or to generic Verbs like Copy and Paste. Verbs in turn act like input and route through the visual tree until they find a binding that maps them to an ICommand on the ViewModel. Imagine we had a VerbBinding which took a Verb and an ICommand and called Execute on the ICommand whenever the Verb reached it.

So for example, a menu might contain Verb="ApplicationVerbs.Paste", there would also be a default key binding that mapped Ctrl-V to ApplicationVerbs.Paste, and the developer might also decide to map TwoFingerTouch to ApplicationVerbs.Paste. Whenever the menu was hit or the Ctrl-V key was pressed, the Paste Verb would be fired and would route just like input until it was handled by a VerbBinding and directed to the ViewModel. (One nuance is that TextBox and other controls may also handle common Verbs like Paste...but let's set that aside for a moment.)
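To make the routing concrete, here is a rough sketch of the dispatch loop (every name here is hypothetical, and a real implementation would ride the existing input-routing machinery rather than walk the tree by hand):

using System.Collections.Generic;
using System.Windows;

public static class VerbDispatcher
{
    // Stand-in for the per-element VerbBindings collection sketched above.
    private static readonly Dictionary<UIElement, List<VerbBinding>> Bindings =
        new Dictionary<UIElement, List<VerbBinding>>();

    public static void AddBinding(UIElement element, VerbBinding binding)
    {
        List<VerbBinding> list;
        if (!Bindings.TryGetValue(element, out list))
        {
            list = new List<VerbBinding>();
            Bindings[element] = list;
        }
        list.Add(binding);
    }

    // Bubble the Verb up the tree, like input, until a binding handles it.
    public static bool Raise(Verb verb, DependencyObject origin)
    {
        for (DependencyObject node = origin; node != null;
             node = LogicalTreeHelper.GetParent(node))
        {
            UIElement element = node as UIElement;
            List<VerbBinding> list;
            if (element == null || !Bindings.TryGetValue(element, out list))
                continue;

            foreach (VerbBinding binding in list)
            {
                if (binding.Verb == verb && binding.Command != null &&
                    binding.Command.CanExecute(null))
                {
                    binding.Command.Execute(null);
                    return true; // handled: stop routing
                }
            }
        }
        return false; // fell off the root unhandled
    }
}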
If you squint at this design, you start to realize that Verbs act just like InputGestures. And funnily enough, if you look in the input system you find we already have precedent for taking one input event and turning it into another: we turn Stylus input gestures into Mouse gestures so that applications that are not specifically programmed to handle a Stylus will still work. Similarly, in the future we will make dragging a finger across a touch screen fire not only TouchMove but also MouseMove events, so that apps written before Touch was supported will still work (with limitations). So InputToVerbBinding could just be a way to extend the input system to map one set of InputGestures to another generically. More abstractly, if we introduce a Touch gesture that means Paste, and the system just adds a global InputToVerbBinding for it, then any app that handles the Paste Verb is future-proofed.
Hmmm...does that mean Verbs are just InputGestures? I mentioned I had another name for Verb in mind. How about "AbstractGesture"? AbstractGesture would just be a peer to KeyGesture and MouseGesture (lousy name though...VerbGesture?). If Verbs are InputGestures, then we no longer need a special VerbBinding; InputBinding is sufficient. I also mentioned there was a nuance: controls need to handle common Verbs. Well, controls can handle InputGestures, and if Verbs are a type of InputGesture...we're done. Alternatively and more abstractly, TextBox can be thought of as a ViewModel for a string property on your model...but I don't blame you if your head starts spinning now.
In the final design, we get rid of RoutedCommand and add a new subclass of InputGesture called Verb. CommandBinding goes away in favor of reusing InputBinding. The InputGestures collection on RoutedCommand is replaced by a new input extensibility point that allows us to map one InputGesture to another. ApplicationCommands, EditingCommands, etc. become collections of common Verbs and their default mappings from other InputGestures. I'd probably invent a new thing like the InputToVerbBinding I mentioned, but I don't have a good name for it.
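Sketching that final shape (InputGesture, InputBinding, InputEventArgs, and InputDevice are the real WPF types; Verb and VerbEventArgs rework my earlier sketch and remain hypothetical):

using System.Windows.Input;

public class Verb : InputGesture
{
    private readonly string _name;

    public Verb(string name) { _name = name; }

    public string Name { get { return _name; } }

    // An InputBinding whose Gesture is this Verb matches whenever a
    // (hypothetical) verb event carrying the same Verb routes past it.
    public override bool Matches(object targetElement, InputEventArgs inputEventArgs)
    {
        VerbEventArgs args = inputEventArgs as VerbEventArgs;
        return args != null && args.Verb == this;
    }
}

public class VerbEventArgs : InputEventArgs
{
    private readonly Verb _verb;

    public VerbEventArgs(Verb verb, InputDevice device, int timestamp)
        : base(device, timestamp) { _verb = verb; }

    public Verb Verb { get { return _verb; } }
}

With that in place, plain InputBinding does the job the special VerbBinding did:

// Real API: InputBinding(ICommand, InputGesture).
myElement.InputBindings.Add(
    new InputBinding(viewModel.PasteCommand, ApplicationVerbs.Paste));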
Now with that background, let me answer your question. Commands should be defined and executed within the ViewModel using ICommand (and see my other remarks about DelegateCommand). RoutedCommands should be thought of like Verbs in the design above; they really have nothing to do with ICommand. Avoid them if you can...this will also make it easier to port to Silverlight. Don't define new RoutedCommands. I find SaveCommand and the like to be useless...any real application has its own way of routing Save through its internal model and shouldn't be using the UI framework for that. This leaves only the scenario where you want part of your UI to trigger editing operations against another part, the canonical example being Cut, Copy, Paste on the main menu working against TextBoxes in your forms. For this and similar scenarios, I have nothing better today than to use the built-in RoutedCommands, but mentally think "Verb".
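For that last scenario, here is what it looks like in today's WPF, using only real APIs (read ApplicationCommands.Paste and mentally substitute "Verb"):

using System.Windows.Controls;
using System.Windows.Input;

// The menu item raises the built-in command, which routes like input
// to the focused element. TextBox registers class-level CommandBindings
// for Paste, so no application code is needed to enable or execute it.
MenuItem pasteItem = new MenuItem();
pasteItem.Header = "_Paste";
pasteItem.Command = ApplicationCommands.Paste;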
I would really appreciate the feedback of the Disciples on this. I'm not as sure of this thinking as the write-up above may make it seem...