Program advice


Gregory Iiams

Oct 24, 2017, 1:14:50 PM10/24/17
to stompest
Hi! Stompest works great for me, pretty much out of the box, connecting to an activemq server.

Here's my question:

How should I structure a Python application when I want to subscribe to a rather busy ActiveMQ topic and check each message to see if it matches something interesting?

When it does match something interesting, I want to pass that message into (I guess?) another function to do *something*. In this case, it'll turn said data into a JSON object and HTTP POST it somewhere.

So, my ActiveMQ server is pretty busy. In the topic I'm interested in, I see on the order of a couple hundred messages per minute.


Right now I run the stompest client in main(), and within that function I parse the MQ messages and post them. But how can I tell the performance characteristics of that? That can't scale, can it?

import json
import requests

if data['alert']['deviceRef']['refName'] == '<devicerefname>':
    payload = {"roomId": "somid", "text": "got some alert"}
    headers = {'Authorization': 'Bearer <token>',
               'Content-type': 'application/json; charset=utf-8'}
    rPostMessage = requests.post("https://api.com/v1/URI", data=json.dumps(payload),
                                 headers=headers, verify=True)
    message_Response = json.loads(rPostMessage.text)
    print(message_Response)
client.ack(frame)
client.disconnect()
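
For context, that all sits inside a receive loop in main(), roughly like this (simplified, connection and subscription setup omitted):

while True:
    frame = client.receiveFrame()
    data = json.loads(frame.body)
    # ... the check and POST shown above, then:
    client.ack(frame)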

If anyone can give a basic, even pseudo-code, example of something that could work, or share lessons learned, I'd appreciate it. I don't know :( I'm probably too new to even ask the right questions, which makes me avoid asking on SO.

nikipore

Oct 24, 2017, 1:31:01 PM10/24/17
to stompest
You can obtain performance characteristics very primitively by measuring timespans with time.time() and writing the results to a log. You could also use the built-in profiler.
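
For example, a minimal timing sketch around the HTTP POST from your snippet (payload, headers and the URL are just the placeholders from your message):

import logging
import time

logging.basicConfig(filename='consumer.log', level=logging.INFO)

start = time.time()
rPostMessage = requests.post("https://api.com/v1/URI", data=json.dumps(payload),
                             headers=headers, verify=True)
logging.info("HTTP POST took %.3f s", time.time() - start)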

If HTTP I/O is the bottleneck, you could try using asynchronous I/O and/or persistent/keep-alive HTTP connections (I'm wildly guessing here, I'm really no expert with HTTP connections). The design would probably be neatest if you switched to the Twisted client.
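
For the keep-alive part, a small sketch with requests.Session (reusing the URL and placeholders from your message; post_alert is just a hypothetical helper name):

import json
import requests

session = requests.Session()  # reuses the underlying TCP connection across posts

def post_alert(payload, headers):
    # hypothetical helper wrapping the POST from your snippet
    response = session.post("https://api.com/v1/URI", data=json.dumps(payload),
                            headers=headers, verify=True)
    return response.json()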

If parsing the payload is the bottleneck and you can influence the design of the message, you could move the interesting information to the headers, or change the payload such that you can find the information in the preamble or by "random access" (at a predefined position). Or use a faster serializer like Protocol Buffers.
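
For instance, if the producer could be changed to put the device reference into a custom STOMP header (the header name 'refName' here is hypothetical), you could skip JSON parsing for uninteresting messages entirely:

import json

frame = client.receiveFrame()
if frame.headers.get('refName') == '<devicerefname>':
    data = json.loads(frame.body)  # only parse the body when the header matches
    # ... build the payload and POST as before ...
client.ack(frame)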

If parsing the payload is the bottleneck and you can't influence the design of the message, you might offload the parsing to a thread pool or run multiple consumer/producer processes in parallel on the same topic. The latter approach is actually the most straightforward design.
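
A rough sketch of the thread pool variant, using the names from your snippet (the pool size is a guess, and note that the frame is acked before the worker finishes, which trades reliability for throughput):

import json
import requests
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=8)  # tune to your load

def handle(body):
    # worker thread: parse, and POST only the interesting messages
    data = json.loads(body)
    if data['alert']['deviceRef']['refName'] == '<devicerefname>':
        payload = {"roomId": "somid", "text": "got some alert"}
        headers = {'Authorization': 'Bearer <token>',
                   'Content-type': 'application/json; charset=utf-8'}
        requests.post("https://api.com/v1/URI", data=json.dumps(payload),
                      headers=headers, verify=True)

while True:
    frame = client.receiveFrame()
    pool.submit(handle, frame.body)
    client.ack(frame)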