Hi all,
I’m developing an example program to illustrate the benefits and disadvantages of a service-oriented approach to task distribution, mimicking the instant-message approach of routing messages one-to-many via a central broker. It’s not quite at the point where the pros and cons start showing their true colours, but it’s getting there.

peer.py represents a running client capable of making requests to the server, swarm.py. Both peer.py and swarm.py handle incoming requests/replies via rather long-winded if/elif methods:
# Incoming messages
if type == 'letter':
    # do stuff
elif type == 'service':
    # do stuff
elif type == 'receipt':
    ...
What would be a better/neater approach to routing multiple inputs to a single output, in cases where there’d be a large number (50+) of branches in logic?
Best,
Marcus
You could register all your different endpoints or types in a dict, where the key is the type and the value is the handler function. Each handler function could then do the custom logic that transforms the message into a conformed message for the output.
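For illustration, a minimal sketch of that dispatch-table idea could look like this (the handler names and message fields are made up here, not taken from peer.py/swarm.py):

def handle_letter(message):
    pass  # do letter stuff

def handle_service(message):
    pass  # do service stuff

HANDLERS = {
    'letter': handle_letter,
    'service': handle_service,
}

def route(message):
    try:
        handler = HANDLERS[message['type']]
    except KeyError:
        raise ValueError("no handler for type: %r" % message['type'])
    return handler(message)

Adding a new message type is then a matter of writing a handler and adding one entry to the dict, rather than growing the if/elif chain.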
Both. Tasks are to be scheduled via a chat-like interface. Humans send tasks via a terminal whereas workers build messages programmatically.
For example, the command..
$ order coffee latte --no-milk
..should send a task to the swarm, which will in turn delegate the task to workers capable of executing it.
On the other hand, tasks can be delegated to a group of workers:
$ peer barista order coffee latte --no-milk
In that case the swarm will still distribute the task, but to a pre-determined group of workers, who also provide an interface to their available services:
$ peer barista --list-services
Barista services:
Take order (order)
I’ve had a brief look at Celery, do you have any experience with it? How come it’d be difficult to reverse-engineer? Poor docs? Complicated behaviour?
Thanks
I guess if you are writing a true “chat” client it may not be the way to go
How come?
The purpose of this experiment is to find differences in how a chat application deals with message-passing and how a cloud of workers is assigned tasks and so far I haven’t encountered any differences; only similarities.
For example, in an instant message application you’ve got:
I could go on, but I think you see my point.
I’m not sure you’ll have the control over sending a task to a specific “peer” unless you’ve set up the individual workers using specific routing
With this, I’m imagining something similar to a render-farm overview of available workers, where you could send a task to either the entire farm or a single worker.
This would involve routing tasks to groups of workers and ideally individual workers.
Are you working with a single cloud of uniform workers, each request being sent to all or any worker? Or do you have specialised groups, e.g. some dealing with image conversion, others with file-writes etc?
Best,
Marcus
I’m not sure Celery is going to give you the granularity you may require
Like what?
I am working with specialized groups and a single cloud. Some workers will process anything, while other workers have been set up to specifically listen for certain “types” of work.
Excellent, thanks for that!
Yes, I’m looking into it now and it seems RabbitMQ would be the default. It clashes somewhat with my use of ZeroMQ for messaging, which assumes you’re writing your own broker. ZeroMQ overall seems better equipped for small messages, which is my main requirement (e.g. file reads/writes and directory listings).
I’m really interested in Celery’s use of promises for return values though. How are you making use of promises in your code?
Something like this?
def func():
    promise = async_task('long_calculation')
    # do something else
    promise.join()
    # return
I’m thinking promises are good for in-process asynchronism and less so for the distributed kind, due to the overhead of making a remote request.
Are you using it mainly for RPC?
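For the purely in-process case, the stdlib’s concurrent.futures (Python 3; available on Python 2 via the “futures” backport) gives roughly the promise behaviour sketched above. This is just an illustration, not Celery’s API:

from concurrent.futures import ThreadPoolExecutor
import time

def long_calculation():
    time.sleep(1)
    return 42

with ThreadPoolExecutor(max_workers=1) as pool:
    promise = pool.submit(long_calculation)  # returns a Future immediately
    # do something else...
    result = promise.result()  # blocks until the result is available
    print(result)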
It looks like you can use ZeroMQ with celery as the transport and something else to handle the results (Redis or MongoDB?)
I think I’ll have to wrap my head around how Celery works before I can digest this one (a separate MQ to handle return values?)
you don’t think they would be performant do the remote request for the status of the task?
Sorry, could you rephrase that?
I’m thinking promises are good for in-process asynchronism and less so for the distributed kind, due to the overhead of making a remote request.
import psyapi
from psyop.api import filesystem
...
fsr = filesystem.FileSystemRequest()
fsr.add_action("folder", path=project_root_path)
fsr.add_action("create_file", path=pc_path, contents=data, overwrite=True)
fsr.add_action("folder",
path=source_path,
symlink_path=target_path)
fsr.add_action("copy",
source_path=env.get_project_branch_path(),
target_path=new_env.get_project_branch_path(),
ignore_patterns=ignore_patterns)
# get an api instance for version 1
api_instance = psyapi.get_api_version("v1")
# get a json client for version 1 of the api in the la office
json_client = psyapi.get_json_client("v1", host="lam")
# the api_instance and the client instance both have the same interface
api_instance.filesystem.execute_filesystem_request(filesystem_request=fsr.encode("json"))
json_client.filesystem.execute_filesystem_request(filesystem_request=fsr.encode("json"))
from psyop.api import filesystem
fsr = filesystem.FileSystemRequest()
fsr.add_action("folder", path=project_root_path)
fsr.execute(local=False)
you don’t think that retrieving or querying the status of a distributed task from a remote backend would be performant?
I think it’d be about one billion times slower than querying anything local. :)
What I’m referring to though is the design aspect of RPC. I’ve been reading some rather discouraging information about it lately (this one sums it up rather well, and this one goes through their differences) and have been staying clear of it for the sake of finding out exactly what can be gained by doing something else; in this case, messaging.
The argument basically boils down to the fact that making a local call is faster (by the billions) than making a remote one, and that dressing a remote call up to look local encourages bad design. I’ll try and illustrate, although I’m still looking to find exactly what those pros and cons are:
import studiox

def rpc_publish(asset):
    """Example of RPC hiding slow calls. Which are local and which are remote?"""
    path = studiox.publisher.get_path(asset)
    variant = studiox.path.dirname(path)

    # Perform quality checks
    assert studiox.qna(variant)
    assert not studiox.islocked(asset)

    studiox.commit(asset)
    studiox.push(asset)

    # Notify subscribers (database, peers)
    studiox.publish(asset)
Compared to a message-based one, where each function - or “service” - is de-coupled, including handling of errors and distributed logging:
import studiox

def soa_publish(asset):
    path = studiox.publisher.get_path(asset)  # Local
    studiox.messaging.Request(service='dirname', payload=path).send()  # Remote
    variant = studiox.messaging.recv()  # Blocking
    studiox.messaging.Push(service='asset.qna',
                           payload=asset,
                           reply_to='islocked').send()  # Asynchronous

def soa_islocked(asset):
    result = studiox.islocked(asset)
    if result is None:  # not locked, carry on
        studiox.messaging.Push(service='asset.commit',
                               payload=asset,
                               reply_to='push').send()
    else:
        studiox.messaging.Push(service='asset.error',
                               payload='%s is locked' % asset).send()

def soa_commit(asset):
    studiox.commit(asset)
    studiox.push(asset)
    studiox.messaging.Publish(service='log.published',
                              payload=asset).send()
Clearly more verbose, and this is where I suspect convenience may influence a design, potentially for the worse.
how do you manage and monitor all of your celery workers?
At first, this question struck me as odd. But from what I gather, RabbitMQ acts as a broker, in which case you’re relying on an existing implementation for features such as logging and monitoring.
Ultimately, RabbitMQ (and others) are higher-level than ZeroMQ, and in this particular example swarm.py is playing the role of RabbitMQ’s “server” application.
So, the reason I found it odd was that, having written swarm.py, logging is merely an additional call from the broker to another worker; a logging worker. Monitoring is yet another call, and so forth. At this point both of those are rudimentary, but they align with existing functionality.
Simple, unless there’s something I’m missing.
Does celery give you the control to define the scheduler?
In the case where you want to direct tasks to the best fitting workers, does celery take into account available resources on the workers vs requested resources of the task?
In situations where you want to direct to a subset of workers or a specific worker, does that equal a new queue?
Do workers have to be preconfigured with "slots" and do tasks consume N available slots?
I think it’d be about one billion times slower than querying anything local. :)
Lol, yes it is definitely slower than querying local... but when working with distributed tasks I'm not sure how else you can track progress/failures/results without using some sort of promise system. :)

When you asked me originally am I mainly using it for RPC, did you mean am I using Celery for RPC, or promises? I'm a bit confused I guess as to the original question. Celery is a messaging system much like the later example you listed above. When you send an asynchronous message, a promise is returned. You can decide what you want to do with it: ignore it, wait for a result, periodically check the status, etc. I did wrap up some of the messaging features into a simpler interface, but it's purely optional to use it that way. The core messaging features of celery are still available for the developers to use.

The JSON-RPC interface is another optional interface to using our API. This is more for interacting with additional web services and communicating with our other offices. The "direct" API interface could also be configured to work with our other offices if necessary. It's basically just changing the RabbitMQ broker URL. :)

I do see how writing code to appear as if it's running locally could lead to confusion or ignorance as to what is actually happening.
You can define how tasks are routed both by default and on the fly. There are quite a few options for dealing with this: http://celery.readthedocs.org/en/latest/userguide/routing.html#id2

In the case where you want to direct tasks to the best fitting workers, does celery take into account available resources on the workers vs requested resources of the task?

Not by default, I don't believe it does. This would most likely have to be written in the routing logic.

In situations where you want to direct to a subset of workers or a specific worker, does that equal a new queue?

Yes, I believe it does. A combination of queues, exchange types and routing keys would need to be configured to determine which workers/consumers should pick up the tasks.

Do workers have to be preconfigured with "slots" and do tasks consume N available slots?

Depends. You can let celery decide what kind of concurrency a worker should have when the worker starts up, or you can configure it in the celery "app" settings. I believe you can also communicate with the consumers after they have already started and shrink/grow their process pools.

In the end I think you would wrap your own setup around celery. I believe this extra layer would be necessary for some components. Also, the way the exchanges, queues and routes interact would have to be designed based on all of the various needs.

Celery is pretty nice to work with once you understand how it all works; it's flexible and the developer of it is very active. It would be really interesting to see how far celery could get you writing an application like this.
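As a rough, hypothetical sketch of the queue-based routing described above (the project, task and queue names are made up, not from this thread; Celery 3.x-style settings assumed):

from celery import Celery

app = Celery('swarm', broker='amqp://guest@localhost//')

# Route this task to a dedicated queue so that only workers
# consuming the 'barista' queue ever pick it up.
app.conf.CELERY_ROUTES = {
    'swarm.order_coffee': {'queue': 'barista'},
}

@app.task(name='swarm.order_coffee')
def order_coffee(kind, milk=True):
    return 'one %s coming up' % kind

# order_coffee.delay('latte')                          # uses the configured route
# order_coffee.apply_async(('latte',), {'milk': False},
#                          queue='barista')            # or route per call
#
# A worker that only consumes the 'barista' queue:
#   celery -A swarm worker -Q barista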
Well, I take it you are both familiar with working with RPC, but are you also familiar with working without it?

I think to make a fair judgement, one would have to at least try both to an equal degree. I've had a hard time finding any benefits of using it other than convenience, and I'm not quite convinced.
I think both ways work great
Aw, that’s no good. :) I’m looking for actual cases where one is more appropriate than the other, not which one is the silver bullet of computing.
It’s a discussion on distributing work via a chat-like interface, not very generalised I’d think, but if you’d like let’s throw in some numbers:
In the conversation, there’d be around:
- 500 peers in total
- 50 of them being active within any given second
- within which 2 tasks are being distributed continually
- Tasks are at the size of “hello world”, “create directory”, “list directory”, “write metadata”, “add 1 to 1” etc..
- ..each taking up a maximum of 1 second each.
Pros RPC:
- Familiar, little initial learning curve
Cons RPC:
- Hockey-stick complexity (easy at first, difficult at last (e.g. debugging when routes extend past point-to-point))
But then again, if you made an RPC function called send() which is oneway and takes some data structure, again what is the difference between the two?
Yes, precisely. What is the difference? That’s what I’m looking to find out. :)
Are you referring to the use of promises as RPC?
Not sure I understand. :S Calling a promise as a remote procedure call?
Not sure I understand. :S Calling a promise as a remote procedure call?
Ah I was referring to your question above that sparked the RPC discussion:
I’m thinking promises are good for in-process asynchronism and less so for the distributed kind, due to the overhead of making a remote request.
Are you using it mainly for RPC?
Ah I was referring to your question above that sparked the RPC discussion - Tony
Hmmmmmm. :) Ok, for this, let’s try and define what we mean with RPC. Here’s what I mean:
RPC call, where proxy represents a remote machine
# Local (what the caller writes)
>>> proxy.log.info('hello world')

# Remote (what actually runs on the other machine)
>>> log.info('hello world')
Here, log.info is the name of the function called on the other side. If the function does not exist, you get an AttributeError. There is a 1-1 correspondence between caller and receiver, just like we would expect from a local call in a traditional, imperative programming language such as Python.
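As a concrete stand-in for such a proxy (just the Python 2 stdlib XML-RPC modules, not the peer/swarm code from this thread), something along these lines gives you that 1-1 correspondence:

# server side
from SimpleXMLRPCServer import SimpleXMLRPCServer  # xmlrpc.server in Python 3

def info(message):
    print(message)
    return True

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(info, 'log.info')  # expose it under a dotted name
server.serve_forever()

# client side
import xmlrpclib  # xmlrpc.client in Python 3

proxy = xmlrpclib.ServerProxy('http://localhost:8000')
proxy.log.info('hello world')  # looks like a local call, runs on the server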
Tony, when you say you’ve worked with messaging without RPC, how does something like that look? And Justin, how does it look for you?
What do you guys think about this for differences?
http://www.inspirel.com/articles/RPC_vs_Messaging.html
Personally I would prefer not to talk in terms of “patterns” as that sounds very java-minded (command pattern, actor pattern, …), and boxing you into thinking about what you can and cannot do. - Justin
I’m not sure you’ve got the right idea here. Patterns have little to do with languages, nor with what you can or cannot do. A pattern, as far as I can tell, is a description of a scenario, coupled with pros and cons and a name, so that we can refer to it in general conversation.
Are we talking about something like this?
http://www.amazon.co.uk/Design-patterns-elements-reusable-object-oriented/dp/0201633612
Also, where did “patterns” enter into the discussion? Do you consider RPC a pattern? On the contrary, patterns can be used to implement RPCs, like the Proxy Pattern and the Abstract Factory Pattern.
On 31 May 2014 22:58, Justin Israel <justin...@gmail.com> wrote:
Personally I would prefer not to talk in terms of "patterns" as that sounds very java-minded (command pattern, actor pattern, ...), boxing you into thinking about what you can and cannot do. I see RPC as just a formalized layer of message passing. Under the hood you have a socket sending a message, and someone on the other side receiving the message and sending a reply. The difference is that RPC puts you firmly into a request-reply situation, where the reply may not even be the computed answer. The reply could just be an id which the caller could use as a promise, to then poll for the computed result at a later date.

Using a pure message passing framework like ZeroMQ, as you already know, gives you the tools to implement more communication types like push-pull and pub-sub. If those types of communication are important to your application, then RPC is probably not the single solution. It can definitely be used for a client to talk to a server, and then the server can use features of zmq to talk to workers. But in terms of the client talking to the server, I would put RPC and lower-level message passing in pretty much the same camp. Either you are directly sending the structured message, or you are using a predefined interface that will send your message based on parameters. Either you want to wait for the answer or you don't.
I've also configured celery in such a way that each time a worker picks up work, a fresh python instance is started and sets its context to the same project context that the call was originally invoked from.
I feel it is an implementation detail and not the sole definition of RPC
I think this is where we went off the rails.
It enables a system to make calls to programs such as NFS across the network transparently, enabling each system to interpret the calls as if they were local. - Definition of RPC
From now on, let’s refer to RPC as being this, ok? :)
It enables a system to make calls to programs such as NFS across the network transparently, enabling each system to interpret the calls as if they were local. - Definition of RPC
But to me, the RPC aspect is that it presents a predefined interface. A function with a signature. This signature is validated as part of the RPC implementation before it goes onto the wire.
Cool, thanks Tony. I think it’s perfectly fine to have your own definitions, but for the sake of this conversation it would be really helpful if we could all refer to the same thing.
Sounds like we’ve got two definitions going on, let’s find a more appropriate wording for them:
pre-validated
Where the signature must match the Python stdlib “json” module:

>>> proxy.json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True)

post-validated
Where the signature may differ, so as to fit multiple languages, even those without the ability to sort keys. The receiver decides what to use:

>>> proxy.send(
...     {'address': 'json',
...      'payload': {"c": 0, "b": 0, "a": 0},
...      'sortKeys': True}
... )
It’s an amazingly interesting topic, as it relates to where to put the responsibility: on the user, or on the recipient.
Do the examples make sense, is this what we’re referring to?
A few posts back, we spoke about how to simplify long (50+) if/else clauses. We had two approaches:
if/else

if something == this:
    then do that
elif something == this_here:
    then do this other thing
hashmap
As suggested by Justin (hope I understood you correctly)

map = {
    'this': then do that,
    'this_here': then do this other thing
}
map[something]()
dynamic registration
I tested a third alternative, involving metaclasses. I’m generally not a fan of metaclasses and tend to stay clear of them, but in this case the gain may outweigh the hassle.
In a nutshell, each handler is a subclass of Factory which, upon subclassing, registers said subclass and provides an interface for it. At that point there is no additional if/else statement and no hashmap to update; just subclass Factory and the message is handled, including logging, error handling and anything surrounding it. The result is leaner code, at the expense of being more difficult to understand (you’ll need to understand metaclasses, for starters).
Let me know what you think.
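For reference, a rough sketch of what that metaclass-based registration could look like (the names here are illustrative, not the actual peer/swarm code):

class HandlerMeta(type):
    def __init__(cls, name, bases, attrs):
        super(HandlerMeta, cls).__init__(name, bases, attrs)
        if not hasattr(cls, 'registry'):
            cls.registry = {}  # the base Handler class owns the shared registry
        else:
            cls.registry[name.lower()] = cls  # every subclass registers itself


class Handler(object):
    __metaclass__ = HandlerMeta  # Python 2 syntax; Python 3 uses class Handler(metaclass=HandlerMeta)


class Letter(Handler):
    def execute(self, receiver, envelope):
        print('handling %s' % envelope)


# Dispatch then becomes a single lookup, with no if/elif chain:
# Handler.registry['letter']().execute(receiver, envelope)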
Hmm. Do you include “validation of messages” when you mean “validation”?
Consider this:
>>> proxy.send({'do': 'make me a sandwich', 'with': 'mustard, and tomatoes', 'toasted?': 'yes'})
Here, a message is being sent, but not to a procedure. Could we consider this a non-RPC call?
In this case, there will be a receiver (the router) and the router MAY forward the call to a worker. But let’s say it doesn’t.
Still, the message will have to be interpreted. The message is routed based on the router’s interpretation, no? This interpretation is what I’d consider a form of validation.
A router may be able to accept any message, but it will always try and make sense of it, before discarding it.
It feels like we’re talking about the same thing, but if you’d rather not call it “validation”, what would you call it?
Ok, how about this.
>>> proxy.send({'do': 'make me a sandwich', 'with': 'mustard, and tomatoes', 'toasted?': 'yes'})
For me to know that ‘do’ is a valid key to send to my router, wouldn’t I first have to know about it? That there is a key called ‘do’? And wouldn’t I also have to know what can be stored as a value for that key?
If I misspelt ‘do’, wouldn’t my message be “incorrect”?
What I’m trying to get at, is that, regardless of how forgiving messaging is, or your router, you would at some point need to know what you can send to retrieve the results or have the actions performed that you are looking for.
At some point, you will have to type a carefully formatted message somewhere. And, like with physical mail, you can forget to put the postcode in, and you can forget to put a stamp on it. In this case, wouldn’t the message be “incorrect”? As you knew where you wanted it to go, but it won’t get there.
This, what would you call this?
class Factory(object):
    @classmethod
    def register_handler(cls, handler_cls):
        if not hasattr(cls, 'registry'):
            cls.registry = dict()
        # Prefer an explicit 'key' attribute on the handler,
        # falling back to its lowercased class name.
        key = getattr(handler_cls, 'key', handler_cls.__name__.lower())
        cls.registry[key] = handler_cls
        return handler_cls  # return the class, so the decorator doesn't replace it with None


@Factory.register_handler
class Letter(object):
    def execute(self, receiver, envelope):
        pass
If I misspelt ‘do’, wouldn’t my message be “incorrect”?
rpc_server.doo(...)
This, what would you call this?
Another option if you are supporting python 2.6 or greater would be to use class decorators. They may make it more apparent that something special is happening with that class rather than having to know the details of metaclasses.
Yes! That’s a good point. It relieves me from having to expose people to the horrors of metaclasses. Thanks.
At this point I’m not even sure what we’re talking about any longer :)
I feel the same way.. I’m trying to get some wording going so I can ask questions about certain things, but it isn’t going too well! At this point, RPC is the same as messaging, no message is invalid, and RPC has a contract while messages do not (even though they do with protobuf), etc. etc.
Let’s skip that, thanks for sticking with me anyways. :)
ps. and I like your code-theme. didn't know you could customize it like that. ;) (monokai sublime ftw)