How do you bind Commands in MVVM?

Corrado Cavalli
Nov 1, 2008, 10:44:08 AM
to wpf-di...@googlegroups.com

Hi Disciples,

Glad to see that you’ve all survived PDC (BTW: just finished downloading a ton of sessions, now I have to go buy the time to watch them… :-))

I’ve (another) question for you about commands: what’s your favorite way to bind a Command to its handler in M-V-VM? As you know, there are many alternatives, from Dan’s CommandModel to Josh’s CommandSink.

I’ve seen that Josh, in his Bootcamp demo, chose to add them to CommandBindings via XAML. What’s your favorite way (if any), and why?

Both have pros and cons:

 

Exposing ICommand: no need for CommandBindings, so you can easily unit test your ViewModel, but no Text and no InputBindings support.

Exposing RoutedCommands: Text and InputBindings support, but it requires wiring via CommandBindings, and the ViewModel can’t easily be unit tested since the commands aren’t wired to their handlers.

Any hint?

Thanks

Corrado

Josh Smith
Nov 1, 2008, 11:44:03 AM
to wpf-di...@googlegroups.com
I prefer exposing properties of type ICommand on my ViewModel objects.  I used to think that routed commands were the way to go, but eventually decided that they are unnecessarily complicated for most scenarios.  You should ask yourself: what benefit do I get by using a routed command?  If it's just the ability to have display text, why not create your own interface that extends ICommand and adds a DisplayText property?  Then your UI can bind to that DisplayText property, just as it would if it were binding to a RoutedUICommand.
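For instance, a minimal sketch of such an interface might look like the following (IDisplayCommand and SaveCommand are hypothetical names, not an established API):

using System.Windows.Input;

// Hypothetical sketch: ICommand plus the display text described above.
public interface IDisplayCommand : ICommand
{
    string DisplayText { get; }
}

A ViewModel could then expose, say, a SaveCommand of this type, and the XAML binds to both the command and its text:

<Button Command="{Binding SaveCommand}"
        Content="{Binding SaveCommand.DisplayText}" />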

Josh

Corrado Cavalli
Nov 1, 2008, 12:33:33 PM
to wpf-di...@googlegroups.com

I agree, and that’s exactly what I normally do, but I had never considered the Text property detail; your solution sounds good.

What about InputGestures?

I’m really curious about how commands are handled in Blend; maybe John can share some insider tricks… ;-)

Thanks again

Corrado

Mike Brown
Nov 1, 2008, 12:53:30 PM
to wpf-di...@googlegroups.com
Funny thing...you can do command bindings in XAML, so you can expose ICommand and have it fired by input gestures. I created a class that I called DelegatingCommand. The design made it into the Prism framework (and they even go beyond that by providing a full Event Broker implementation).
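Roughly, the idea looks like this (a simplified sketch of such a delegating command, not the actual Prism DelegateCommand source):

using System;
using System.Windows.Input;

// Simplified sketch: an ICommand that forwards Execute/CanExecute to
// delegates supplied by the ViewModel, so no CommandBinding is needed.
public class DelegateCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public DelegateCommand(Action<object> execute, Predicate<object> canExecute)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // Piggyback on WPF's requery mechanism so CanExecute gets re-evaluated.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

The ViewModel then exposes an ICommand property initialized with, for example, new DelegateCommand(o => Save(), o => CanSave), and the View either binds a Button to it or fires it from an InputBinding in XAML.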

John Gossman
Nov 1, 2008, 4:37:45 PM
to wpf-di...@googlegroups.com
In general Blend uses ICommand and data binding, almost always to DelegateCommands (Bea's husband Eric implemented this in 2004; if he got royalties he'd be a rich man today).  Over time we stopped using CanExecute in most cases and instead tend to bind IsEnabled to some property on the ViewModel.  This leads me to believe the eventual solution is to have an easy syntax for just pointing an Event at a delegate on the ViewModel...though the presence of ICommands does give one a hint of which methods are designed to be called from the View.
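For example, something along these lines (SaveCommand and CanSave are illustrative ViewModel members):

<!-- Illustrative: skip CanExecute and drive enablement from the ViewModel.
     Assumes SaveCommand's CanExecute simply returns true. -->
<Button Content="Save"
        Command="{Binding SaveCommand}"
        IsEnabled="{Binding CanSave}" />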

CommandBindings are evil.

InputGestures are oh so close, but we made a few mistakes that make them harder to use than they should be.  I'll dig up an attached behavior I wrote for at least making Menus easier...something I eventually want in the runtimes.  Blend has a much more sophisticated system for menus that handles dynamic menus (MRU lists etc.) and properties like Checked. 

Mark Smith
Nov 1, 2008, 5:04:44 PM
to wpf-di...@googlegroups.com
> CommandBindings are evil.

I'd be interested to hear you expound on this statement, John -- would you advocate that all commands be defined and executed within the view model, then? How about the built-in commands, especially those involving editing, navigation and media?

thanks,
mark

John Gossman
Nov 1, 2008, 8:50:12 PM
to wpf-di...@googlegroups.com
This is a long and complex topic, and my thoughts are still fluid here.  I want to sit down and prototype, but there's a bunch of things going on that keep distracting me.  Effectively, I think we have conflated several concepts with RoutedCommands:

1)  The ICommand part with its Execute/CanExecute methods ... basically this is just a form of method dispatch that lends itself to declarative systems
2)  Common Verbs or Actions -- these are what we now call built-in commands
3)  The mapping of basic InputGestures (Ctrl-C) to these Commands, the InputGesture collection on RoutedCommand


ICommand is the important part for M-V-VM and these should be exposed off the ViewModels.  I find most scenarios are handled completely by this. 

To understand 2) and 3) it is useful to not overload the term Command, so imagine for the moment that we had a slightly different design.

First, we define something called Verbs.  A Verb is just a thing with a name, like "Paste", it almost could be a string, but type-safety is nice.  In this design we would have ApplicationVerbs.Paste instead of ApplicationCommands.PasteCommand.  These could also be called Actions or Intents (and I have another name I'll reveal below).

Second, we define a way to map Verbs to ICommands.

class VerbBinding
{
    public ICommand Command { get; set; }
    public Verb Verb { get; set; }
}

Any UIElement could define VerbBindings, just like it can define CommandBindings and InputBindings today.

Third, we have ways to map input to Verbs. 

class InputToVerbBinding
{
    public InputGesture InputGesture { get; set; }
    public Verb Verb { get; set; }
}

These could be defined "globally" in the input system, or scoped to tree elements.

In this design, the View maps basic input like keystrokes and mouse and touch gestures (all InputGestures) either directly to ICommands on the ViewModel or to generic Verbs like Copy and Paste.  Verbs in turn act like input and route through the visual tree until they find a binding that maps them to an ICommand on the ViewModel.  Imagine we had a VerbBinding which took a Verb and an ICommand and called Execute on the ICommand whenever the Verb was handled.  So, for example, a menu might contain Verb="ApplicationVerbs.Paste", there would also be a default key binding mapping Ctrl-V to ApplicationVerbs.Paste, and the developer might also decide to map TwoFingerTouch to ApplicationVerbs.Paste.  Whenever the menu was hit or the Ctrl-V key was pressed, the Paste Verb would be fired and would route just like input until it was handled by a VerbBinding and directed to the ViewModel.  (One nuance is that TextBox and other controls may also handle common Verbs like Paste...but let's set that aside for a moment.)

If you squint at this design, you start to realize that Verbs act just like InputGestures.  And funnily enough, if you look in the input system you find we already have precedent for taking one input event and turning it into another one:  we turn Stylus input gestures into Mouse gestures so that applications that are not specifically programmed to handle a Stylus will still work.  Similarly, in the future we will make dragging a finger across a touch screen fire not only TouchMove but MouseMove gestures, so that apps written before Touch was supported will still work (with limitations).  So InputToVerbBinding could just be a way to extend the input system to map one set of InputGestures to another generically.  More abstractly, if we introduce a Touch gesture that means Paste and the system just adds a global InputToVerbBinding, then any app that handles the Paste Verb will be future-proofed.

Hmmm...does that mean Verbs are just InputGestures?  I mentioned I had another name for Verb in mind.  How about "AbstractGesture"? AbstractGesture would just be a peer to KeyGesture and MouseGesture (lousy name though...VerbGesture?).  If Verbs are InputGestures, then we no longer need a special VerbBinding, InputBinding is sufficient.  I also mentioned that there was a nuance that controls need to handle common Verbs.  Well, controls can handle InputGestures and if Verbs are a type of InputGesture...so we're done.  Alternatively and more abstractly, TextBox can be thought of as a ViewModel for a string property on your model...but I don't blame you if your head starts spinning now.

In the final design, we get rid of RoutedCommand and add a new sub-class of InputGesture called Verb.  CommandBinding goes away in favor of reusing InputBinding.  The InputGesture property of RoutedCommand is replaced by a new input extensibility that allows us to map one InputGesture to another.  ApplicationCommands, EditingCommands etc. become collections of common verbs and their default mappings from other InputGestures.  I'd probably invent a new thing like the InputToVerbBinding I mentioned, but I don't have a good name for it.
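To make that concrete, markup for such a design might look something like this; it is purely hypothetical, since Verb, ApplicationVerbs and the gesture-to-gesture mapping do not exist in WPF:

<!-- Hypothetical: ApplicationVerbs.Paste would be a Verb, i.e. just another
     InputGesture, so the existing InputBinding is enough to route it to an
     ICommand on the ViewModel. A global mapping (the InputToVerbBinding idea)
     would turn Ctrl-V, a menu click, or a touch gesture into that Verb
     before it routes. -->
<Window.InputBindings>
  <InputBinding Gesture="{x:Static local:ApplicationVerbs.Paste}"
                Command="{Binding PasteCommand}" />
</Window.InputBindings>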

Now, with that background, let me answer your question.  Commands should be defined and executed within the view model using ICommand (and see my other remarks about DelegateCommand).  RoutedCommands should be thought of like the Verbs in the design above, but they really have nothing to do with ICommand.  Avoid them if you can...this will also make it easier to port to Silverlight.  Don't define new RoutedCommands.  I find SaveCommand and the like to be useless...any real application has its own way of routing Save through its internal model and shouldn't be using the UI framework for that.  This leaves only the scenario where you want part of your UI to trigger editing operations against another part...the canonical example being Cut, Copy, Paste on the main menu working against TextBoxes in your forms.  For this and similar scenarios, I have nothing better today than to use the built-in RoutedCommands, but mentally think "Verb".

I would really appreciate the feedback of the Disciples on this.  I'm not as sure of this thinking as the write up above may make it seem...

Josh Smith
Nov 1, 2008, 9:19:25 PM
to wpf-di...@googlegroups.com
Perhaps I'm oversimplifying this, but I think most people would be happy if they could do this:

<Window.InputBindings>
  <KeyBinding Command="{Binding Path=MyCommand}" Key="F3" />
</Window.InputBindings>

Basically, folks need a way to map a command on the ViewModel to a key or mouse gesture.  Unfortunately, the InputBinding's Command property cannot be bound since it isn't a DP, so this won't work.  Even if it were a DP, I don't think it would have an inheritance context, so it wouldn't pick up the ViewModel object referenced by the Window's DataContext.  Regardless, that's what I'd like to have.
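One workaround in the meantime (a sketch in the spirit of the CommandReference helpers floating around the community, not any particular implementation) is a Freezable that does have a Command dependency property. Declared as a resource it picks up the inheritance context, so the binding to the ViewModel resolves, and the KeyBinding then points at it through StaticResource:

using System;
using System.Windows;
using System.Windows.Input;

// Sketch: a Freezable wrapper whose Command DP can be data-bound. As a
// resource it participates in the inheritance context, so {Binding MyCommand}
// resolves against the Window's DataContext even though KeyBinding.Command
// itself cannot be bound.
public class CommandReference : Freezable, ICommand
{
    public static readonly DependencyProperty CommandProperty =
        DependencyProperty.Register("Command", typeof(ICommand), typeof(CommandReference));

    public ICommand Command
    {
        get { return (ICommand)GetValue(CommandProperty); }
        set { SetValue(CommandProperty, value); }
    }

    public bool CanExecute(object parameter)
    {
        return Command != null && Command.CanExecute(parameter);
    }

    public void Execute(object parameter)
    {
        Command.Execute(parameter);
    }

    // Simplified: reuse WPF's requery mechanism rather than re-wiring the
    // inner command's CanExecuteChanged when the Command property changes.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }

    protected override Freezable CreateInstanceCore()
    {
        return new CommandReference();
    }
}

And in XAML (local is whatever namespace prefix maps to the class above):

<Window.Resources>
  <local:CommandReference x:Key="MyCommandReference" Command="{Binding MyCommand}" />
</Window.Resources>
<Window.InputBindings>
  <KeyBinding Key="F3" Command="{StaticResource MyCommandReference}" />
</Window.InputBindings>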

Josh

John Gossman
Nov 1, 2008, 9:29:09 PM
to wpf-di...@googlegroups.com
Yup...it sucks.  One of the things I want to fix is making data binding work in this situation...and you're right, we need a DP and the inheritance context.

Corrado Cavalli
Nov 2, 2008, 1:58:56 AM
to wpf-di...@googlegroups.com

I’d really like to have something between ICommand and RoutedUICommand that doesn’t require CommandBinding, so that I can have all the features (Text, InputGesture) but can easily use it in an MVVM scenario.

I like the idea of having InputBindings/Text defined by the ViewModel, but I want to avoid using CommandBindings so that I can easily test my ViewModel.

 

Corrado

 


Mike Brown
Nov 2, 2008, 4:17:10 AM
to wpf-di...@googlegroups.com
John,
   A lot of what you mentioned reminds me of the ComponentModel Verbs (and the Cider extensibility concepts as well), so I was able to follow along. I think this could be tremendously powerful because, theoretically, you could create Verbs declaratively. At that point, you'd be a step closer to being able to define your ViewModel in a repository (like, say, Oslo's)...hell, forget a step closer, you'd be there.
 
Recently, our office paid David Anderson (of Agile fame) to come and deliver a series of training sessions. One of the sessions was on color-based modeling. In that session, he mentioned a book by David C. Hay called Data Model Patterns (I've read the first two chapters and it has already improved my view of data modeling...it's DDD before DDD was a buzzword; there's also a link to a later release of his that I've got queued to read after the first...the foreword alone has me excited). Anyway, one of the things David provides early in the book's introduction is a text syntax for defining an entity-relationship model. The syntax is well defined enough to be the basis for a textual DSL for defining your domain model. I'm diving into Azure for now, but I'd definitely like to explore defining an "M" DSL based on this syntax if no one else picks it up by the time I come back up for air.
 
Paul Stovell and I both did a brief series of posts on our blogs about a concept called the Domain Tree (which would be congruent to the Visual Tree, only composed of your domain entities rather than visual elements). I think we saw a glimpse of this with the Acropolis preview, and I definitely feel that the "Oslo" repository can be leveraged to do something similar. In fact, I think Oslo could be used to create a fully metadata-driven application, as I've discussed a few times in here before...I'm not sure why I'm so enamored with this concept, but I am.
 
Anyway, if John didn't get your head spinning around...I'm sure my flights of fancy probably put it into a blender. But if you're still following me, let me know what you think.
