[Maya-Python] Command Pattern


Marcus Ottosson

May 16, 2014, 10:02:05 AM
to python_in...@googlegroups.com

Hey guys,

Does anyone have experience using the command pattern? I’m looking for a few things that this pattern seems to help with.

  • Undo/redo (preferably multi-level and persistent)

  • Action logging (what the user has done, in which order, and where things went wrong using which arguments)

  • Distribution of commands via a network (serialisation, asynchronous execution)

I took a whack at it and found it rather straightforward to make a scripting language out of it.

The Command Pattern

There’s an example run at the bottom, but the ‘gist’ of it is this:

     ______________________________________________________
    |                                                      |
    | Command Pattern - Demonstration                      |
    | Author: Marcus Ottosson <mar...@abstractfactory.io>  |
    |______________________________________________________|

* Available commands
    cls
    create
    data
    delete
    exit
    help
    history
    redo
    undo
    update
    verbosity

command> create key value
command> create age 5
command> create length 1.57
command> data
    age=5
    length=1.57
    key=value

command> undo
command> redo
command> help update
Update existing value in DATASTORE

    Args:
        key: Identifier for value
        value: Value for identifier

    Precondition:
        `key` must already exist

    Example:
        command> update age 5

The main questions are about two aspects of its design:

History is stored as class attributes

Which means that commands add themselves to history, which wouldn’t work well if they are accessed from separate threads.

History is stored as Python objects, as opposed to simple strings

Which means that history would be tricky to serialise and persist on disk or across a network.
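For illustration, here’s a minimal sketch (names are my own, not from the implementation) of what keeping history as plain data rather than command objects could look like; it sidesteps the serialisation problem entirely:

```python
import json

class Invoker(object):
    """Hypothetical invoker that records history as plain dicts."""

    def __init__(self, commands):
        self.commands = commands  # name -> callable
        self.history = []         # plain data, JSON-ready

    def execute(self, name, *args):
        result = self.commands[name](*args)
        self.history.append({"command": name, "args": list(args)})
        return result

    def serialise(self):
        # Plain data round-trips losslessly through JSON
        return json.dumps(self.history)

invoker = Invoker({"create": lambda key, value: (key, value)})
invoker.execute("create", "age", 5)
serialised = invoker.serialise()
```

The instance variant above also avoids the class-attribute problem, since each invoker owns its own history.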

Thoughts?

Best,
Marcus

--
Marcus Ottosson
konstr...@gmail.com

Tony Barbieri

May 16, 2014, 10:07:15 AM
to python_in...@googlegroups.com
If you do decide to use Python objects you could use pickling, as long as you only plan on reusing the history in Python.


--
You received this message because you are subscribed to the Google Groups "Python Programming for Autodesk Maya" group.
To unsubscribe from this group and stop receiving emails from it, send an email to python_inside_m...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/python_inside_maya/CAFRtmOAc6RTpxT%3Dtppa4GLrvWKZvN_A9qsC0riZQN27NJPcAhQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.



--
-tony

Tony Barbieri

May 16, 2014, 10:08:49 AM
to python_in...@googlegroups.com
You could also use JSON or another data-interchange format and write a simple serializer to reinstantiate your Python objects. Or abstract it to support multiple serialization backends.
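A simple serializer along those lines might look like this (the class and registry names are hypothetical):

```python
import json

class CreateCommand(object):
    """Hypothetical ConcreteCommand, reduced to a name plus plain-data args."""
    name = "create"

    def __init__(self, key, value):
        self.key, self.value = key, value

    def to_dict(self):
        return {"command": self.name, "args": [self.key, self.value]}

# Registry mapping command names back to their classes
REGISTRY = {CreateCommand.name: CreateCommand}

def dumps(command):
    return json.dumps(command.to_dict())

def loads(payload):
    data = json.loads(payload)
    return REGISTRY[data["command"]](*data["args"])

restored = loads(dumps(CreateCommand("age", 5)))
```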
--
-tony

Marcus Ottosson

May 16, 2014, 10:20:35 AM
to python_in...@googlegroups.com

Hey Tony,

Thanks, that makes sense.

Part of me feels that it might be a bit heavy-handed to pickle, as the commands are little more than a name and an argument, arguments being plain-old-data in this case. Something I think could potentially be enforced if I continue down this path.

A data-interchange format or protocol sounds reasonable and something I’ll look into. You’re thinking in terms of a pre-defined dictionary via something like JSON?

{
    'command': 'create',
    'args': ['age', 5]
}

Best,
Marcus







--
Marcus Ottosson
konstr...@gmail.com

Tony Barbieri

May 16, 2014, 10:55:56 AM
to python_in...@googlegroups.com
Yep, if that's all the data you need then it should be as simple as that. It gets trickier if you would ever be passing other Python objects as args/kwargs, but if it's command-line only then that sounds unlikely.







--
-tony

Marcus Ottosson

May 16, 2014, 11:44:50 AM
to python_in...@googlegroups.com
It isn't command-line only, but the command-line enforces that commands maintain plain argument signatures, so I figured it would make a good example.

The goal, however, is to adopt this pattern for GUIs and inter-process communication overall.

More specifically, I'm looking for a method of executing commands in Maya et al. through something like Maya's commandPort. Something fit for a Service-Oriented Architecture, where commands would be advertised remotely for any given application, such as Maya, and executed within it when triggered from the outside. But that's another topic.


--
Marcus Ottosson
konstr...@gmail.com


Justin Israel

May 16, 2014, 5:02:49 PM
to python_in...@googlegroups.com
If you end up wanting to expand to that point of communicating between different systems, you could maybe look at Google Protocol Buffers. It's a serialization format that is structured and can be versioned and validated, so that you can change the format without breaking existing clients using the old format. They are meant for both wire protocols, and for storing your data in a way that you can be sure it can be retrieved even by newer clients down the line. It is language independent, so it doesn't matter if you were communicating between a server written in Java, or C++, or Python, ...
I haven't used them directly (my colleagues have), but I use Thrift, which is the exact same concept but also handles all of the RPC functionality for you. It was written by someone who went to Facebook and missed Protocol Buffers.

For the part about the Undo stack in a threaded environment, you can mutex guard the undo stack for the entire application, but that still doesn't really guarantee that the undo command from thread A and thread B will be valid unless they too are considerate of the locking. Thread A could start its procedure, maintaining its undo steps, and then get preempted by thread B. Thread B makes some changes to the same area. Then control returns to Thread A, which might have state that doesn't really undo what B might have done. So really it might only work if Thread A has to acquire a lock for its entire operation. This is similar to the reason that Maya makes you defer commands code to the main thread, as it would be really really hard to maintain correct state if multiple threads are changing the graph. I don't know... just brainstorming that one.
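As a rough sketch of that locking point (not from any real implementation; names are illustrative), the lock has to span the whole do-then-record operation, not just the stack mutation:

```python
import threading

class UndoStack(object):
    """Sketch: the lock covers the entire operation, so the recorded
    undo step stays consistent with the state it undoes."""

    def __init__(self):
        self._lock = threading.Lock()
        self._undo = []

    def do(self, do_fn, undo_fn):
        # If the lock only guarded the append, another thread could
        # change state between do_fn() and the push, invalidating undo_fn.
        with self._lock:
            do_fn()
            self._undo.append(undo_fn)

    def undo(self):
        with self._lock:
            if self._undo:
                self._undo.pop()()

state = {}
stack = UndoStack()
stack.do(lambda: state.update(age=5), lambda: state.pop("age", None))
```

Even this only works if every mutation of the shared state goes through the stack, which echoes Maya's approach of deferring everything to the main thread.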





--
Marcus Ottosson
konstr...@gmail.com



Tony Barbieri

May 16, 2014, 5:14:41 PM
to python_in...@googlegroups.com
Ah right! I forgot about that project. That seems like the better way to go if you need to grow the idea (as it sounds like you want to).






--
-tony

Marcus Ottosson

May 20, 2014, 4:23:05 AM
to python_in...@googlegroups.com

Hey guys, thanks for your suggestions.

Here’s the implementation of the pattern in an asynchronous manner using ZeroMQ.
https://github.com/mottosso/patterns/tree/master/python/zerocommand

The goal of this implementation is to discuss, document and understand some of the higher-level issues with asynchronism in use with the Command Pattern. I’ve attempted to document the source code in such a way that it should be digestible from top to bottom, like a book, starting with server.py. If anything is unclear, let me know so I can patch it up.

Usage

To run it, you’ll need PyZMQ, installable via pip:

$ pip install pyzmq

The implementation then uses two shells that communicate with each other, which applies equally well to shells running on different computers/continents.


shell A

$ python server.py

shell B

$ python client.py
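For a feel of the round-trip without two shells, here’s a single-process sketch of the same REQ/REP exchange over ZeroMQ’s inproc transport (illustrative only; the actual message layout lives in the repository):

```python
import zmq  # PyZMQ, pip install pyzmq

ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://commands")
req = ctx.socket(zmq.REQ)
req.connect("inproc://commands")

# Client side: encapsulate a command as JSON and send it
req.send_json({"command": "create", "args": ["age", 5]})

# Server side: unpack, dispatch to a ConcreteCommand (stubbed here), reply
message = rep.recv_json()
reply = {"status": "ok", "command": message["command"]}
rep.send_json(reply)

# Client receives the reply
result = req.recv_json()
```

Swapping `inproc://` for `tcp://` is what turns this into the two-shell (or two-continent) setup.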

Previous implementation

The implementation I posted earlier had a logical flaw: a misinterpretation of the Command Pattern. This description helped me figure out how to approach it:

The waiter (Invoker) takes the order from the customer on his pad. The order is then queued for the order cook and gets to the cook (Receiver) where it is processed. In this case the actors in the scenario are the following: The Client is the customer. He sends his request to the receiver through the waiter, who is the Invoker. The waiter encapsulates the command (the order in this case) by writing it on the check and then places it, creating the ConcreteCommand object which is the command itself. The Receiver will be the cook that, after completing work on all the orders that were sent to him before the command in question, starts work on it. Another noticeable aspect of the example is the fact that the pad for the orders does not support only orders from the menu, so it can support commands to cook many different items. - http://www.oodesign.com/command-pattern.html

In the previous implementation, each command was capable of performing its own actions; which in this scenario means that each note on the waiter’s checklist was capable of cooking the food for the customer. Which is nonsense. Only the chef (Receiver) knows how to perform those commands; the waiter simply holds onto them.

In the current implementation, the Datastore is the Receiver and is what ultimately performs the commands requested by the client. (You, via the shell, in this case).

Another important thing to note is the three different targets for commands; previously inseparable and intertwined:

  • Server
  • Client only
  • Server only

Server commands consist of the main create, update and delete commands that communicate with the datastore. Client-only commands don’t operate on the server, and include things such as cls and history to clear the shell and visualise history respectively. Server-only commands don’t communicate with the datastore, but merely return information about the server; in this case, a list of available commands.
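The three targets could be sketched as a simple routing table (hypothetical names, not from the repository):

```python
# Hypothetical routing table for the three command targets
COMMANDS = {
    "create":  "server",       # operates on the datastore
    "update":  "server",
    "delete":  "server",
    "cls":     "client",       # never leaves the shell
    "history": "client",
    "help":    "server-only",  # queries the server, not the datastore
}

def route(name):
    """Return where a command should execute: locally or remotely."""
    return "local" if COMMANDS[name] == "client" else "remote"
```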

Questions

There were numerous questions raised and design decisions made in this example, but to try and keep this thread linear (i.e. synchronous) I figured I’d throw out one question at a time; and since we’re on the topic of protobuf, let’s start with that.

It’s funny you should mention protobuf. Commands, like RPCs, are generally sent with arbitrarily long argument lists, and protobuf, being a “weakest-link” protocol, doesn’t support them; in practice this means that each command would require its own protobuf protocol.

In the current implementation, JSON is used as a means of encapsulating and transferring commands across shells. What would you consider some of the benefits of using (something like) protobuf in relation to the Command Pattern in general?

Best,
Marcus







--
Marcus Ottosson
konstr...@gmail.com

Justin Israel

May 20, 2014, 4:42:02 AM
to python_in...@googlegroups.com

When you are defining a function signature for cross-language compatibility, you can't really do variadic functions. Some languages don't support it. So if you need variadic support then you accept something like a list argument that can be arbitrarily long, or a dict for keyword args that are arbitrary.

Thrift is the same way.
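In other words (a sketch in Python rather than an actual IDL), the service pins the signature down to one list and one dict:

```python
def execute(command, args, kwargs):
    """Fixed-arity entry point every language can express:
    variadic calls collapse into one list and one dict."""
    if command == "create":
        key, value = args  # positional args arrive as a plain list
        return {key: value}
    raise ValueError("unknown command: %r" % command)

result = execute("create", ["age", 5], {})
```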

Marcus Ottosson

May 20, 2014, 4:46:52 AM
to python_in...@googlegroups.com

Some languages don’t support it.

Mm, this is what I meant by it being a “weakest-link” approach. An arbitrarily long list seems like a good compromise though, and makes sense. Thanks.

Since you are using something like this, what do you feel is beneficial/problematic about it? Off the top of my head, it seems like there will be quite a lot more code and re-compiling of protocols during initial development. Would you agree that protobuf and c/o are better suited for a mature API than during the development of one? I can imagine that the explicitness of protobuf can help avoid mistakenly altering protocols or misinterpreting them.

Best,
Marcus







--
Marcus Ottosson
konstr...@gmail.com

Justin Israel

May 20, 2014, 4:53:50 AM
to python_in...@googlegroups.com

Well it has the goal of being consistent and compatible. It also enforces validation of the protocol. I don't really see it as something you swap in later when you deem your API to be mature. Early on in dev you can freely break compatibility and alter the structures entirely. Once you deploy a stable release you can then start adhering to the rules to keep it compatible.
I see it more as a nice tool for defining your interface contract, and not so much as something that is a burden during dev. Thrift, for instance, takes almost no time to regen the bindings. I have also integrated the process into my build process so that it regens automatically when the mtime of the spec file is newer than the generated bindings.

Marcus Ottosson

May 20, 2014, 5:01:14 AM
to python_in...@googlegroups.com
I like it, thanks. I'll keep it in mind.

Is there a way to work with Thrift by not using their "protobuf"? I mean, do you have the ability to send commands via JSON and c/o if you wanted to?






--
Marcus Ottosson
konstr...@gmail.com

Justin Israel

May 20, 2014, 5:08:22 AM
to python_in...@googlegroups.com

Actually yes, though I haven't done it. They designed Thrift in a modular way where you have:

* transports
* protocols
* server/socket

So you can mix and match different protocols with different transports and a different server type.

Transport is how to send and receive the data between a server and a client. Like HTTP or TCP sockets.

Protocol is how to encode and decode the messages. Like binary or json or some text protocol.

Servers are like single threaded, thread pool, thread per connection, async.

So you could implement your own protocol. You could even use protobuf over thrift RPC. I think they have zmq protocols for thrift.

Justin Israel

unread,
May 20, 2014, 5:12:43 AM5/20/14
to python_in...@googlegroups.com
I forgot to add a bit about the transport thing. A cool example: let's say you have this protocol/transport combination going and you have all these desktop app clients connecting to your server over TCP sockets. But then someone says "Hey, I want to write a web page that shows stats from your server". So you generate jQuery bindings in Thrift, and spin up another thread in your server now using the HTTP transport. That jQuery client can now connect from the web page and use the exact same entry points into the exact same server. No code changes.


Marcus Ottosson

May 20, 2014, 5:36:02 AM
to python_in...@googlegroups.com
That sounds cool, but how is that any different from sending data using JSON from any networking library?






--
Marcus Ottosson
konstr...@gmail.com

Justin Israel

May 20, 2014, 5:49:18 AM
to python_in...@googlegroups.com

Because it still enforces the interface and types all the same. You are just defining how to serialize the object onto the wire and turn it back into an object on the other side.

Justin Israel

May 20, 2014, 5:59:06 AM
to python_in...@googlegroups.com
Sorry, I misspoke earlier about ZMQ. They have client/server implementations to run Thrift RPC over ZMQ as the transport:

Marcus Ottosson

May 20, 2014, 6:03:17 AM
to python_in...@googlegroups.com

Because it still enforces the interface and types all the same.

Yes, but your example isn’t dependent on enforcing types and interfaces and is equally valid (and cool) without.

You mentioned that you could use Thrift without protobuf (sorry to keep calling it protobuf, what is it called?). Could you try and illustrate a scenario in which having protobuf is especially beneficial, and perhaps one in which it shows its weakness?

They have client/server implementations to run Thrift RPC over ZMQ as the transport:

That’s cool, good to know. Maybe we could send each other messages one day. :)

Best,
Marcus







--
Marcus Ottosson
konstr...@gmail.com

Justin Israel

May 20, 2014, 6:58:37 AM
to python_in...@googlegroups.com
Sorry, I think we are miscommunicating on this particular aspect...
You can't remove the aspect that deals with the interface, versioning, consistency, typing, etc.
You can replace the protocol, which is the wire format. That means something like a web interface, which can pretty much only deal in JSON or some other built-in encodings, can still serialize its data and receive it.

Thrift and Protocol Buffers are about having a cross-language versioned interchange format. If that is not what you want, then you probably want another system.



Marcus Ottosson

May 20, 2014, 11:57:51 AM
to python_in...@googlegroups.com

Yes you are probably right.

This topic is about JSON versus Protobuf for transmitting commands across processes; let’s refer to these as unstructured versus structured, respectively. To be explicit, and for those not familiar with protobuf or schemas and what not, here’s what we’re comparing:

  • Unstructured is fast, more prone to error

  • Structured is slow, less prone to error

Example

unstructured - client

>>> message = {
    'command': 'create',
    'args': ['age', '5'],
    'id': 'uuid'  # Unique id for client
}
>>> packed_message = json.dumps(message)
>>> socket.send(packed_message)
>>> return_value = socket.recv()
# Blocks until server responds
>>> # No post-processing required

unstructured - server

>>> packed_message = socket.recv()
>>> message = json.loads(packed_message)
>>> command = message['command']
>>> command_class = commands.get(command)  # Get ConcreteCommand
>>> command_instance = command_class(receiver)
>>> return_value = command_instance.do()
>>> message = {
    'status': 'ok',
    'message': return_value,
    'id': 'uuid'  # Unique id for command
}
>>> packed_message = json.dumps(message)
>>> socket.send(packed_message)

Compared to:

structured - client

>>> import CommandMessage # protobuf protocol
>>> import ReplyMessage
>>> message = CommandMessage()
>>> message.command = 'create'
>>> message.args = ['age', '5']
>>> message.id = 'uuid'
>>> packed_message = message.dump()
>>> socket.send(packed_message)
>>> reply = socket.recv()
# Blocks until server responds
>>> return_value = ReplyMessage.load(reply)
# Done

structured - server

>>> import CommandMessage  # Note dependency on both sides
>>> import ReplyMessage
>>> request = socket.recv()
>>> message = CommandMessage.load(request)
>>> command = message.command
>>> command_class = commands.get(command)  # Get ConcreteCommand
>>> command_instance = command_class(receiver)
>>> return_value = command_instance.do()
>>> message = ReplyMessage()
>>> message.status = 'ok'
>>> message.message = return_value
>>> message.id = 'uuid'
>>> packed_message = message.dump()
>>> socket.send(packed_message)
# Done

structured - build

message Person {
    required int32 id = 1;
    required string name = 2;
    optional string email = 3;
}

Note that structured in this case requires an extra step: build.

Here are some pros and cons of each.

Unstructured is:

  • UNS1: Better performance (e.g. no parsing or resolving)
  • UNS2: No learning curve (no reading up on third-party vendor solutions) (not talking about learning the protocol itself, that’s separate)
  • UNS3: Less code (no build scripts)
  • UNS4: Without dependencies (JSON is native in quite a few languages)
  • UNS5: Responds well to change (no re-compiling)

However, unstructured also means:

  • UNS6: More room for error (e.g. mis-types in code are harder to find)
>>> message = {
    'comand': 'create'
    ...
}
  • UNS7: Less room for convention.
# With structured message, errors hit you square in the face
>>> message.comand = 'create'
Error: "command" isn't defined in the protocol

So the question is: in which scenarios does the extra work involved in working with structured messages pay off? If anyone has or can think of examples, that’d be really great.

Thanks.

Best,
Marcus







--
Marcus Ottosson
konstr...@gmail.com

Marcus Ottosson

May 20, 2014, 11:59:26 AM
to python_in...@googlegroups.com
  • Unstructured is fast, more prone to error

  • Structured is slow, less prone to error

I should say, “…prone to human error”. The computer doesn’t care either way.

--
Marcus Ottosson
konstr...@gmail.com

Justin Israel

May 20, 2014, 3:52:43 PM
to python_in...@googlegroups.com

I think you have made a completely inaccurate statement here, or at least one that is ambiguous. You referred to a structured message as slower, and an unstructured message as faster because "it requires no parsing".

When you send the json message across the wire, it will be a single message that will be read entirely and then it has to go through the json load process to be parsed and turned into an object on the other end. It also can only represent a few types in its standard spec.

Protobuf has a binary format which can be smaller in size and faster to read because as it reads each index it can lookup the type from the proto and know what to read for the value.

Have you done tests that suggest this fast/slow categorization is valid? Depending on your message size it could either be true, false, or negligible:
https://code.google.com/p/thrift-protobuf-compare/wiki/Benchmarking

In that test, protobuf ended up being smaller and faster overall and was only slowest in the object creation component of the test.

I'm just mentioning this because it immediately struck me as strange that you would represent the options by this criteria (fast/error prone, slow/less error prone). It usually depends on your message structure and size, and requires profiling.

Marcus Ottosson

May 20, 2014, 4:30:36 PM
to python_in...@googlegroups.com
Ok, now we're talkin'. :)

The "faster" statement is indeed ambiguous and could mean one of three things:

1. It is fast to ship over the wire
2. It is fast to marshall
3. It is fast to work with, as a human

I am referring to number 3, because in command-land, messages are neither marshalled nor transferred often enough to make any significant difference in terms of performance. Let's say a command is requested around once every second, on average, by multiple clients on a daily basis, and that, for the sake of familiarity, the commands are similar to "render me this" or "render me that".

For 1 and 2 to be relevant, messages would probably have to be sent by the thousands or even millions; which is of course often the case, but possibly not in the context of the Command Pattern.

So, in terms of human performance, I stand by my statement that working with a structured message is slower, yet less prone to (again, human) error.

Thoughts?



--
Marcus Ottosson
konstr...@gmail.com


Justin Israel

May 20, 2014, 5:06:34 PM
to python_in...@googlegroups.com
Ok then, so we just focus on the third point. 

Why do you care if a command message is human readable? The wire protocol should not matter to the user creating a message. They would go through an interface either way that produces the message transparently to them. 


Marcus Ottosson

May 21, 2014, 1:42:14 AM
to python_in...@googlegroups.com

Hey Justin,

I hope this isn’t coming across as criticism of how you do things personally (as I take it you are working with something similar), but more as exploring the various approaches in which commands can be sent across processes, with the intent of weighing the benefits against the risks.

Why do you care if a command message is human readable? The wire protocol should not matter to the user creating a message. They would go through an interface either way that produces the message transparently to them.

If I understand you correctly, you are referring to the end-user of structured messages; I’m referring to the developer of them. Someone will have to read and write the code that makes up the protocol, and for that person, there is obviously more work involved up-front.

E.g. it takes more research and more typing to write a protobuf, a message using protobuf and a build script to automatically build the protobuf, than it does to simply write an in-line dict and send it across.

Again, the goal is not to prove you or anyone else right or wrong. I’m looking for concrete examples of where one is more suitable over the other.







--
Marcus Ottosson
konstr...@gmail.com

Justin Israel

May 21, 2014, 4:56:23 AM
to python_in...@googlegroups.com
Hey,

Don't worry, this isn't really about how I do things personally. I totally agree that there is some extra overhead in maintaining the Protobuf/Thrift approach over just passing around json. Obviously one should pick the best format for the job and json could easily be the right choice for an application. But the same things that could be looked at as negatives could also be looked at as positives. Typing out the proto file acts as your 'template', and could be used to auto-document your structure. And it could actually save you typing if you are dealing with structures that have default values.

Consider the example from the protobuf tutorial: https://developers.google.com/protocol-buffers/docs/pythontutorial

Now you could distribute your packages with the python code already generated and not distribute the proto file. That can be just part of your source distribution. Building the proto file again only happens if you change things in your spec.

So once you have that example built, it could look like this:
from addressbook_pb2 import *

me = Person(name='Justin', id=1, phone=[Person.PhoneNumber(number='555-1212')])
And because the spec already knows about default values, it has already set my phone number type to HOME, and my email field is an empty string.
The equivalent pure json approach would require something like this:
import json

HOME=1

me = {
    "name": "Justin",
    "id": 1,
    "email": "",
    "phone": [ {"number": "555-1212", "type": HOME } ]
}

me_msg = json.dumps(me)
So I would say in this case, at least to my eyes, it is slightly less work and more descriptive to define a Person protobuf vs the python dict 
And on top of that, if I were to try and pass a number as my name, or a string for my id, the Person object would immediately raise an exception.

If I were using this json approach, I would probably end up writing a function or class that abstracts the format anyways, so anyone using my library would have something easier and more consistent. Writing the proto file then pretty much has the same effort as writing abstraction classes or functions to produce your message format in json.

Like I said, both formats can work for different needs and they both have benefits. 




Marcus Ottosson

May 21, 2014, 6:32:55 AM
to python_in...@googlegroups.com

Thanks, Justin. That is some good info.

I can see two branches of our discussion. One is clients and servers that are tightly connected, say in a corporate environment where everyone shares libraries and a network; this is where something like protobuf fits. The other is communicating with an unknown server (e.g. across the internet), one which provides a service of which we know little.

I think our current topic lies in the former arena; the corporate one. And I think this is where I’m mostly interested in investigating too, so I’ll skip the cons of working with an unknown server in regards to your example.

The only con I can think of at this point is the additional dependency; both technically and in terms of learning. I suspect that as I get deeper into messaging, problems will arise; problems that libraries such as protobuf are designed to solve. But at this point, the extra work involved is simply overwhelming compared to the few benefits gained.

For clarity, I’ll list the ones I gathered here.

Protobuf:

  • Adds a learning curve
  • Adds a dependency

  • Requires an extra build step

And what you get is:

  • Default values
  • Type-checking, with exceptions
  • Protocol versioning, i.e. safer to change

Each of these fades away when compared to bigger beasts, such as figuring out the overall architecture of an application and its communication channels across a network, or whether or not to use REQ/REP, PUB/SUB, PUSH/PULL etc. for network communication. Before handling such issues, it seems unreasonable to also drag around extra build steps, learning curves and dependencies, when the benefits of doing so don’t blossom until later in the game.

I think that is what I’m getting at with a mature versus immature API. Not just the syntax of the commands, but which commands it should include, and whether or not the API is solving a real issue to begin with. I think libraries such as protobuf can only really show their true colors once you’ve already gained traction and can start thinking about optimisations and maintenance.

Thoughts?

Best,
Marcus







--
Marcus Ottosson
konstr...@gmail.com

Justin Israel

May 21, 2014, 6:39:20 AM
to python_in...@googlegroups.com
No thoughts really beyond this point. Sounds like you are taking the right steps in evaluating available technologies to see what suits you best. What seems reasonable to me or the guy next to me may not be reasonable to you or the guy next to you. That is, one man's junk is another man's treasure. Probably also comes down to what fits best into your preferred project workflows.

I'm sure you will end up with the right tool for the job, with the way you put so much effort into evaluating options. 



Marcus Ottosson

May 21, 2014, 8:36:13 AM
to python_in...@googlegroups.com

Thanks for the kind words, Justin.

Let’s move on to another interesting topic. The next one is about the Command Pattern in an asynchronous environment, and why Undo/Redo probably can’t be asynchronous.

Intro

Before getting into higher-level questions, I’ll go through the demo program I posted above and how it helps in visualising the pros and cons of asynchronous use of the command pattern.

  • Intro [play] [web-player] (the web player is rather low quality, but I include it for folks who can’t play directly)

Synchronous Undo/Redo

Here’s how Undo/Redo works and why it was left synchronous.

The question is: since anything synchronous means a potential bottleneck, how can undo/redo be made asynchronous without causing headaches for the user (or developer)?

Best,
Marcus







--
Marcus Ottosson
konstr...@gmail.com

Marcus Ottosson

Jun 6, 2014, 3:49:09 AM
to python_in...@googlegroups.com
Does anyone have any experience using the Qt Undo Framework? Just found this:

Seems to be doing much of what I've been trying to do here.
--
Marcus Ottosson
konstr...@gmail.com
