Hey guys,
Does anyone have experience using the command pattern? I’m looking for a few things that this pattern seems to help with.
Undo/redo (preferably multi-level and persistent)
Action logging (what the user has done, in which order, and where things went wrong using which arguments)
I took a whack at it and found it rather straightforward to make a scripting language out of it.
There’s an example run at the bottom, but the ‘gist’ of it is this:
 ______________________________________________________
|                                                      |
|           Command Pattern - Demonstration            |
| Author: Marcus Ottosson <mar...@abstractfactory.io>  |
|______________________________________________________|
* Available commands
cls
create
data
delete
exit
help
history
redo
undo
update
verbosity
command> create key value
command> create age 5
command> create length 1.57
command> data
age=5
length=1.57
key=value
command> undo
command> redo
command> help update
Update existing value in DATASTORE
Args:
key: Identifier for value
value: Value for identifier
Precondition:
`key` must already exist
Example:
command> update age 5
The main questions are about two aspects of its design:
History is stored as class attributes
Which means that commands add themselves to history, which wouldn’t work too well if they were accessed from separate threads.
History is stored as Python objects, as opposed to simple strings
Which means that history would be tricky to serialise and persist on disk or across a network.
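For what it’s worth, one way around the serialisation concern would be to record history as plain-old-data rather than as command objects. A minimal sketch (the `record` helper and dict layout are hypothetical, not from the demo):

```python
import json

# Hypothetical sketch: keep history as plain dicts rather than command
# objects, so it can be serialised with the standard json module.
history = []

def record(command, *args):
    """Append a plain-old-data record of an executed command."""
    history.append({"command": command, "args": list(args)})

record("create", "age", 5)
record("update", "age", 6)

serialised = json.dumps(history)   # persist to disk, or send over a network
restored = json.loads(serialised)  # rebuild the history elsewhere
assert restored == history
```

The trade-off is that turning such a record back into a usable command object needs a lookup table from name to class.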
Thoughts?
Best,
Marcus
--
You received this message because you are subscribed to the Google Groups "Python Programming for Autodesk Maya" group.
To unsubscribe from this group and stop receiving emails from it, send an email to python_inside_m...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/python_inside_maya/CAFRtmOAc6RTpxT%3Dtppa4GLrvWKZvN_A9qsC0riZQN27NJPcAhQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
Hey Tony,
Thanks, that makes sense.
Part of me feels that it might be a bit heavy-handed to pickle, as the commands are little more than a name and an argument, arguments being plain-old-data in this case; something I think could potentially be enforced if I continue down this path.
A data-interchange format or protocol sounds reasonable and something I’ll look into. You’re thinking in terms of a pre-defined dictionary via something like JSON?
{
'command': 'create',
'args': ['age', 5]
}
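For completeness, a round-trip of that layout through the standard json module (the keys are just the ones from the dict above):

```python
import json

# Round-trip the command dict through JSON; both ends only need to
# agree on the key names, not share any classes.
message = {
    'command': 'create',
    'args': ['age', 5],
}

wire = json.dumps(message)   # what actually travels across the network
received = json.loads(wire)  # the other side gets an equivalent dict
assert received == message
```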
Best,
Marcus
Hey guys, thanks for your suggestions.
Here’s the implementation of the pattern in an asynchronous manner using ZeroMQ.
https://github.com/mottosso/patterns/tree/master/python/zerocommand
The goal of this implementation is to discuss, document and understand some of the higher-level issues with asynchronism in use with the Command Pattern. I’ve attempted to document the source code in such a way that it should be digestible from top to bottom, like a book, starting with server.py. If anything is unclear, let me know so I can patch it up.
To run it, you’ll need PyZMQ, installable via pip:
$ pip install pyzmq
The implementation then uses two shells that communicate with each other, which applies equally well to shells running on different computers (or continents).

shell A
$ python server.py
shell B
$ python client.py
The implementation I posted earlier had a logical flaw; a misinterpretation of the Command Pattern. This description helped me figure out how to approach it:
The waiter (Invoker) takes the order from the customer on his pad. The order is then queued for the order cook and gets to the cook (Receiver) where it is processed. In this case the actors in the scenario are the following: The Client is the customer. He sends his request to the receiver through the waiter, who is the Invoker. The waiter encapsulates the command (the order in this case) by writing it on the check and then places it, creating the ConcreteCommand object which is the command itself. The Receiver will be the cook that, after completing work on all the orders that were sent to him before the command in question, starts work on it. Another noticeable aspect of the example is the fact that the pad for the orders does not support only orders from the menu, so it can support commands to cook many different items. - http://www.oodesign.com/command-pattern.html
In the previous implementation, each command was capable of performing its own actions; in this scenario, that would mean each note on the waiter’s pad was capable of cooking the food for the customer. Which is nonsense. Only the cook (Receiver) knows how to perform those commands; the waiter simply holds onto them.
In the current implementation, the Datastore is the Receiver and is what ultimately performs the commands requested by the client. (You, via the shell, in this case).
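To make the analogy concrete, here is a minimal, hypothetical sketch of the three roles (illustrative only, not the code from the repository):

```python
# A hypothetical sketch of the roles in the analogy: the Command only
# names what to do; the Receiver knows how to actually do it.
class Datastore:
    """Receiver: the 'cook' that actually performs the work."""
    def __init__(self):
        self.data = {}

    def create(self, key, value):
        self.data[key] = value


class Command:
    """ConcreteCommand: the 'order on the pad'; just a name and arguments."""
    def __init__(self, name, args):
        self.name = name
        self.args = args


class Invoker:
    """Invoker: the 'waiter' that records orders and hands them to the cook."""
    def __init__(self, receiver):
        self.receiver = receiver
        self.history = []

    def execute(self, command):
        getattr(self.receiver, command.name)(*command.args)
        self.history.append(command)


store = Datastore()
waiter = Invoker(store)
waiter.execute(Command("create", ["age", 5]))
assert store.data == {"age": 5}
```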
Another important thing to note is that there are three different targets for commands, previously inseparable and intertwined:
Server commands consist of the main create, update and delete commands that communicate with the datastore.
Client-only commands don’t operate on the server, and include things such as cls and history, to clear the shell and visualise history respectively.
Server-only commands don’t communicate with the datastore, but merely return information about the server; in this case, a list of available commands.
There were numerous questions raised and design decisions made in this example, but to try and keep this thread linear (i.e. synchronous) I figured I’d throw out one question at a time, and since we’re on the topic of protobuf, let’s start with that.
It’s funny you should mention protobuf. Commands, like RPCs, are generally sent with arbitrarily long argument lists, and protobuf, being a “weakest-link” protocol, doesn’t support them; in practice this means that each command would require its own protobuf message definition.
In the current implementation, JSON is used as a means of encapsulating and transferring commands across shells. What would you consider some of the benefits of using (something like) protobuf in relation to the Command Pattern in general?
Best,
Marcus
When you are defining a function signature for cross-language compatibility, you can't really do variadic functions, because some languages don't support them. So if you need variadic support, you accept something like a list argument that can be arbitrarily long, or a dict for arbitrary keyword args.
Thrift is the same way.
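In Python terms, the compromise might look something like this (hypothetical names; the point is the fixed two-parameter signature):

```python
# Instead of a variadic signature (which an IDL can't express portably),
# the interface takes one arbitrarily long list for the variable part.
def call_command(command, args):
    """Fixed signature: one command name, one list of arguments."""
    return {"command": command, "args": list(args)}

request = call_command("create", ["age", 5])
assert request == {"command": "create", "args": ["age", 5]}
```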
Some languages don’t support it.
Mm, this is what I meant by it being a “weakest-link” approach. An arbitrarily long list seems like a good compromise though, and makes sense. Thanks.
Since you are using something like this, what do you feel is beneficial/problematic about it? Off the top of my head, it seems like there will be quite a lot more code and re-compiling of protocols during initial development. Would you agree that protobuf and co. are better suited for a mature API than during the development of one? I can imagine that the explicitness of protobuf can help avoid mistakenly altering protocols or misinterpreting them.
Best,
Marcus
Well, it has the goal of being consistent and compatible. It also enforces validation of the protocol. I don't really see it as something you swap in later when you deem your API to be mature. Early on in dev you can freely break compatibility and alter the structures entirely. Once you deploy a stable release you can then start adhering to the rules to keep it compatible.
I see it more as a nice tool for defining your interface contract, and not so much of something that is a burden during dev. Thrift, for instance, takes almost no time to regen the bindings. I have also integrated the process into my build process so that it regens automatically when the mtime of the spec file is newer than the generated bindings.
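The mtime check described above can be sketched in a few lines (the paths and the `needs_regen` helper are hypothetical; a real build step would then invoke the Thrift or protoc compiler whenever it returns True):

```python
import os

# Hypothetical helper mirroring the mtime check described above: only
# regenerate bindings when the spec file is newer than the generated file.
def needs_regen(spec_path, generated_path):
    if not os.path.exists(generated_path):
        return True  # nothing generated yet
    return os.path.getmtime(spec_path) > os.path.getmtime(generated_path)
```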
Actually yes, though I haven't done it. They designed Thrift in a modular way where you have:
* transports
* protocols
* server/socket
So you can mix and match different protocols with different transports and a different server type.
Transport is how to send and receive the data between a server and a client, like HTTP or TCP sockets.
Protocol is how to encode and decode the messages, like binary, JSON or some text protocol.
Servers come in flavours like single-threaded, thread pool, thread-per-connection, and async.
So you could implement your own protocol. You could even use protobuf over thrift RPC. I think they have zmq protocols for thrift.
Because it still enforces the interface and types all the same. You are just defining how to serialize the object onto the wire and turn it back into an object on the other side.
Because it still enforces the interface and types all the same.
Yes, but your example isn’t dependent on enforcing types and interfaces and is equally valid (and cool) without.
You mentioned that you could use Thrift without protobuf (sorry to keep calling it protobuf, what is it called?). Could you try and illustrate a scenario in which having protobuf is especially beneficial, and perhaps one in which it shows its weakness?
They have client/server implementations to run Thrift RPC over ZMQ as the transport:
That’s cool, good to know. Maybe we could send each other messages one day. :)
Best,
Marcus
Yes you are probably right.
This topic is about JSON versus Protobuf for transmitting commands across processes; let’s refer to these as unstructured versus structured respectively. To be explicit, and for those not familiar with protobuf or schemas and what not, here’s what we’re comparing:
Unstructured is fast, more prone to error
Structured is slow, less prone to error
unstructured - client
>>> message = {
'command': 'create',
'args': ['age', '5'],
'id': 'uuid' # Unique id for client
}
>>> packed_message = json.dumps(message)
>>> socket.send(packed_message)
>>> return_value = socket.recv()
# Blocks until server responds
>>> # No post-processing required
unstructured - server
>>> packed_message = socket.recv()
>>> message = json.loads(packed_message)
>>> command = message['command']
>>> command_class = commands.get(command) # Get ConcreteCommand
>>> command_instance = command_class(receiver)
>>> return_value = command_instance.do()
>>> message = {
'status': 'ok',
'message': return_value,
'id': 'uuid' # Unique id for command
}
>>> packed_message = json.dumps(message)
>>> socket.send(packed_message)
Compared to:
structured - client
>>> import CommandMessage # protobuf protocol
>>> import ReplyMessage
>>> message = CommandMessage()
>>> message.command = 'create'
>>> message.args = ['age', '5']
>>> message.id = 'uuid'
>>> packed_message = message.dump()
>>> socket.send(packed_message)
>>> reply = socket.recv()
# Blocks until server responds
>>> return_value = ReplyMessage.load(reply)
# Done
structured - server
>>> import CommandMessage # Note dependency on both sides
>>> import ReplyMessage
>>> request = socket.recv()
>>> message = CommandMessage.load(request)
>>> command = message.command
>>> command_class = commands.get(command) # Get ConcreteCommand
>>> command_instance = command_class(receiver)
>>> return_value = command_instance.do()
>>> message = ReplyMessage()
>>> message.status = 'ok'
>>> message.message = return_value
>>> message.id = 'uuid'
>>> packed_message = message.dump()
>>> socket.send(packed_message)
# Done
structured - build
message CommandMessage {
    required string command = 1;
    repeated string args = 2;
    required string id = 3;
}
Note that structured in this case requires an extra step: build.
Here are some pros and cons of each.
Unstructured is quicker to write and needs no extra tooling or build step.
However, unstructured also means that a typo passes silently:
>>> message = {
'comand': 'create'
...
}
# With a structured message, errors hit you square in the face
>>> message.comand = 'create'
Error: "command" isn't defined in the protocol
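The structured-side error in the snippet above can be emulated for unstructured messages with a tiny, hand-rolled validator (hypothetical sketch, not part of the demo):

```python
# A minimal validator that gives unstructured (dict-based) messages one
# of the benefits of a schema: unknown keys fail fast instead of silently.
SCHEMA = {"command", "args", "id"}

def validate(message):
    unknown = set(message) - SCHEMA
    if unknown:
        raise ValueError("not defined in the protocol: %s"
                         % ", ".join(sorted(unknown)))
    return message

validate({"command": "create", "args": ["age", 5]})  # passes
try:
    validate({"comand": "create"})                   # the typo surfaces at once
except ValueError as error:
    caught = error
```

Of course, this only checks key names; protobuf additionally validates types and required fields.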
So the question is: in which scenarios does the extra work involved in working with structured messages pay off? If anyone has or can think of examples, that’d be really great.
Thanks.
Best,
Marcus
Unstructured is fast, more prone to error
Structured is slow, less prone to error
I should say, “..prone to human error”. The computer doesn’t care either way.
I think you have made a completely inaccurate statement here. Or at least one that is ambiguous. You referred to a structured message as slower, and an unstructured message as faster because "it requires no parsing".
When you send the json message across the wire, it will be a single message that will be read entirely and then it has to go through the json load process to be parsed and turned into an object on the other end. It also can only represent a few types in its standard spec.
Protobuf has a binary format which can be smaller in size and faster to read because as it reads each index it can lookup the type from the proto and know what to read for the value.
Have you done tests that suggest this fast/slow categorisation is valid? Depending on your message size, it could be true, false, or negligible.
https://code.google.com/p/thrift-protobuf-compare/wiki/Benchmarking
In that test, protobuf ended up being smaller and faster overall and was only slowest in the object creation component of the test.
I'm just mentioning this because it immediately struck me as strange that you would represent the options by this criteria (fast/error-prone, slow/less error-prone). It usually depends on your message structure and size, and requires profiling.
Hey Justin,
I hope this isn’t coming across as criticism of how you do things personally (as I take it you are working with a similar approach), but more as exploring the various approaches in which commands can be sent across processes, with the intent of weighing the benefits against the risks.
Why do you care if a command message is human readable? The wire protocol should not matter to the user creating a message. They would go through an interface either way that produces the message transparently to them.
If I understand you correctly, you are referring to the end-user of structured messages; I’m referring to the developer of them. Someone has to read and write the code that makes up the protocol, and for that person there is obviously more work involved up-front.
E.g. it takes more research and more typing to write a protobuf schema, a message using protobuf, and a build script to automatically build the protobuf, than it does to simply write an in-line dict and send it across.
Again, the goal is not to prove you or anyone else right or wrong. I’m looking for concrete examples of where one is more suitable over the other.
from addressbook_pb2 import *
me = Person(name='Justin', id=1, phone=[Person.PhoneNumber(number='555-1212')])
import json
HOME=1
me = {
"name": "Justin",
"id": 1,
"email": "",
"phone": [ {"number": "555-1212", "type": HOME } ]
}
me_msg = json.dumps(me)
Thanks, Justin. That is some good info.
I can see two branches of our discussion; one in which clients and servers are tightly connected, say in a corporate environment where everyone shares libraries and a network, which is where something like protobuf fits. The other is communicating with an unknown server (e.g. across the internet), one which provides a service we know little about.
I think our current topic lies in the former arena; the corporate one. And I think this is where I’m mostly interested in investigating too, so I’ll skip the cons of working with an unknown server in regards to your example.
The only con I can think of at this point is the additional dependency, both technically and in terms of learning. I suspect that as I get deeper into messaging, problems will arise; problems that libraries such as protobuf are designed to solve. But at this point, the extra work involved is simply overwhelming compared to the few benefits gained.
For clarity, I’ll list the ones I gathered here.
Protobuf:
Adds a dependency
Requires an extra build step
And what you get is validation of the protocol, an explicit interface contract, and a compact binary format.
Each of which fades away when compared to bigger beasts such as figuring out the overall architecture of an application and its communication channels across a network, or whether to use REQ/REP, PUB/SUB, PUSH/PULL etc. Before handling such issues, it seems unreasonable to also drag around extra build steps, learning curves and dependencies, when the benefits of doing so don’t blossom until later in the game.
I think that is what I’m getting at with a mature versus immature API. Not just the syntax of the commands, but which commands it should include, and whether or not the API is solving a real issue to begin with. I think libraries such as protobuf can only really show its true colors once you’ve already gained traction and can start thinking about optimisations and maintenance.
Thoughts?
Best,
Marcus
Thanks for the kind words, Justin.
Let’s move on to another interesting topic. The next one is about the Command Pattern in an asynchronous environment, and why Undo/Redo probably can’t be asynchronous.
Before getting into higher-level questions, I’ll go through the demo program I posted above and how it helps in visualising the pros and cons of asynchronous use of the command pattern.
Here’s how Undo/Redo works and why it was left synchronous.
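Since the walkthrough itself lives in the repository, here is only a generic sketch of the usual two-stack approach (illustrative, not the repository’s code); the key point is that both stacks assume one well-defined ordering of executed commands:

```python
# Illustrative only: a classic two-stack undo/redo, with commands as
# dicts of 'do'/'undo' callables.
undo_stack = []
redo_stack = []

def do(command):
    command["do"]()
    undo_stack.append(command)
    redo_stack.clear()  # a fresh action invalidates the redo branch

def undo():
    command = undo_stack.pop()
    command["undo"]()
    redo_stack.append(command)

def redo():
    command = redo_stack.pop()
    command["do"]()
    undo_stack.append(command)

data = {}
create_age = {"do": lambda: data.update(age=5),
              "undo": lambda: data.pop("age")}

do(create_age)
undo()
assert data == {}
redo()
assert data == {"age": 5}
```

If commands complete out of order, “the last command” stops being well-defined, which is why undo/redo resists being made asynchronous.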
The question is, since anything synchronous means a potential bottleneck, how can undo/redo be asynchronous, without causing headaches for the user (or developer)?
Best,
Marcus