As for the Tornado-web-socket example, I looked at the code, and couldn't figure out how this would work in a production environment...Does it spawn a separate python interpreter for Tornado?
If so, how does it meld with web2py's controllers? It is unclear how this works...
What serves the Tornado web-app in production? Apache? How?
As for the comet file - I can't find it - it seems it no longer exists in the new version of the web2py source-code....
As for running web2py via gEvent - how should one deploy this in production?
Can it work with Apache the same way the wsgi-handler does?
Does it require/suggest a "gEvent"ed uWSGI under NginX ?
This is all very bewildering...The documentation is very lacking...
Most libraries have fallbacks/polyfills/shims or whatever...
The thing is, it seems I would need some kind of centralized broker, if I want to share the messaging code across all use-cases, and I DO want the messages committed, in most cases, so I am not looking for a "direct" browser-to-browser channel:
1. Browser <-> Browser : commit all traffic to the database (pub/sub chat AND collaborative-views)
2. Browser <-> Desktop-App : commit all traffic to the database (pub/sub chat only)
3. Desktop-App <-> Server (RPC/REST) : don't commit anything to the database...
So, it seems that:
For use-case 1 - The best solution is a non-blocking web-server with SSE and a connection to the database.
For use-case 2 - The best solution is a dual-fronting (SSE + 0MQ) and a connection to the database.
For use-case 3 - I only need 0MQ for web2py, or falling back to xmlrpc/jsonrpc/REST...

As for caching, I think I would need Redis as a stand-alone "third" service... I think I read somewhere that it has some kind of messaging support by itself - acting as a proxy... I think AMQP was the protocol...
as I was saying, you're reading too much too soon, just naming buzzwords without actually **thinking** about what you need.
Sockjs is another abstraction layer on top of websockets, with exactly the same agenda as socket.io (and a bit less flexibility).
read it all carefully. The one-size-fits-all "whatever" is not there; you have to code it yourself. The world is full of "that was the right tool until I needed that extra bit" :P
A "centralized broker" is a thing that (eventually) store messages and route them where you want them to go. All of the proposed solutions, included websocket_messaging.py, take care of that.
Where you need to commit is not the central point. The point of messaging is where your messages originate and where they need to go.
Additionally, you have to check if what you want to do is feasible with the tech you choose.
Storing what needs to be stored is a layer on top.
so you say.... the solution can very well be a normal webserver serving the pages and a non-blocking one acting as the message-passer.
Don't know a single bit of what you'll use to code your desktop app. Given that you have to rely on connectivity I really won't go for a desktop client that basically does what your browser application does already.
read it again. It has pub/sub support, but no AMQP whatsoever, although you can find libraries on top of it that abstract away the difference.
Don't want to start a flame-fest, but I feel like I am under fire here, and unjustly so...
as I was saying, you're reading too much too soon, just naming buzzwords without actually **thinking** about what you need.

I admit I don't have experience with many of the things I was writing about, but I don't think I am ill-informed or have an erroneous understanding of things. I did broad-spectrum research, and went just deep enough into each component-option to get the "gist" of it and see what it's all about.
Engine is the implementation of transport-based cross-browser/cross-device
bi-directional communication layer for
Socket.IO.

Here you lost me completely... Obviously the main part of messaging is the routing-topology - I am well aware of that. But if I architect the components in a way that clients communicate among themselves with no centralized location, it would be sub-optimal for storing the messages' data from disparate places - it would mean more hops in the message route, and might even eventually mean coding the "storing" code in multiple places. If I have a centralized message-broker, it may include in its topology a filtering of which messages should be stored and where, and may have, for example, a dedicated queue on an out-going channel that stores the data - that way it may even be aggregated before the store-request is submitted, so there would be less database traffic down the line.
What I mean by that is that if the non-blocking server for the messaging, which would also do the routing-topology, were just another web2py server running via gEvent, it could do the database-commits by itself. And since it is web2py, I could reuse the DAL code I have in the model of the main one - so I would not have to learn a new ORM system, or devise a channel for talking to the main web2py just for the database-commits.
I got that impression from this: Logstash treats Redis as a message-broker - with output messaging - I don't know how exactly...
I was not trying to; I'm just noticing how much this discussion is starting to involve a lot of things that are "offtopic". It's one thing to search for answers (and expect them) on a specific topic, and another to try to follow every bit of your proposed libraries/solutions/frameworks. The more you add to the "offtopic list", the less people will answer. ... we started from sse and added websockets, then went to 0mq, which is the only implementation without a central broker. Believe me, I'm starting to lose you as well ^_^
When I recommended socket.io for full compatibility, I had implicitly discarded all other solutions that may be similar, because it's the most complete one and fits nicely with python, given that it's the only technology where you're able to leverage a wsgi app (through gevent and gevent-socketio).
That being said, if your interest is "academic" you may as well code your own new transport in C++. Here on web2py-users I tend to recommend ready-to-use-and-complete solutions involving both python and the web world as much as I can, because it's not the "let's try something new" group :P
In my POV, the most pressing argument is that you need to choose either "single endpoint for messages" or "0mq" .... the latter choice will end up trimming all the possibilities to one.
what I meant is that very few applications need "realtime interaction" on the client --> server route....
e.g. in your "calendaring" example, the times the user will receive messages that hold the information about appointments sent by other clients will far outnumber the times the client will send its own appointment to the server.....
In this usecase you could leverage sse or websockets to receive messages on the page, and let the client send its appointment "in the usual way" to a normal webserver; web2py would then send that message to all the other clients, passing through the "tornado broker".
saying "they use redis as a message broker" is not the same of saying "it has ampq support".
Again, researching on the "gist" of the features provided, if you search for "redis messaging" the first result on google leads to http://redis.io/topics/pubsub
You said that the comet thing no longer exists, as "websockets" were already included in web2py.js, which, if I remember correctly, is referenced in the main application layout. But what about SSE? I mean, sure, it's just an HTTP request at the start, but there is a different model for "responding"...
How is web2py built for doing that? Is it keeping the session afloat for that connection, if it gets the correct MIME-type? Will I just be able to reuse the same controller-action for consecutive replies?
Can I explicitly call it from another controller, from a different session? Where should a "yield" be placed? There is ZERO documentation about this in the web2py book, and there was only one thread about this in this group, which had an attached "example application" packed in a w2p file that I couldn't use for some reason...
Nope, or maybe I expressed myself badly: that implementation started out named "comet messaging" but turned into "websocket messaging" at the first iteration.
web2py.js has a usable implementation for it, and gluon/contrib/websocket_messaging.py is 200 lines of which 70 are comments; it's easy to hack on.
I didn't get what you mean by "can I explicitly call it": with either websockets or SSE, as soon as the user hits the page, a connection is established and remains open. There's no request/response cycle, just a request coming in and an (eventually) infinite response out.
As I said, I've already gone over the websocket_messaging.py file - it has dealings with WebSockets - NOT SSE (!) - and via Tornado, NOT web2py...
I didn't get what you mean by "can I explicitly call it": with either websockets or SSE, as soon as the user hits the page, a connection is established and remains open. There's no request/response cycle, just a request coming in and an (eventually) infinite response out.

What I mean is that once the connection is open, and, say, is handled by a "session", then from that moment on, my usage of this connection would be "pushing" through that connection onto the browser. The usage of the "push" would obviously be from another controller.

I mean, let's take the "chat" use-case: User "A" logs into a chat-view, and that sends a GET request to a controller-action whose job is to open an SSE connection for that user - a long-lasting session - let's call it "The SSE Action". Then user "B" logs into the same view on his side, and the same thing happens for him. Now we have 2 outgoing sessions open - one for each user - 2 "SSE Actions" are waiting to send more responses - each to their respective recipients.

Now, user "A" writes a comment and "submits" it. This sends a POST request to a different controller-action that saves the comment to the database - let's call it "The Submission Action". This controller-action is different from the SSE action, and may theoretically even belong to a different controller (say, the system may have chat-views in multiple pages...).
My question is, then: "Can a submission-action 'call' an SSE-action that belongs to a different controller, and has different session/request/response object(s)? If so, how?"
We all got that. It's an external process, but it's implemented already, it "just works", has a simple yet powerful routing algo and it's secure.
With SSE you have to do it yourself.
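The "yourself" part starts with the wire format itself: an SSE response is plain text built from "id:"/"event:"/"data:" lines, with each event terminated by a blank line. A rough sketch of that framing (the helper name `format_sse` is mine, not a web2py or standard API):

```python
# Minimal sketch of the text/event-stream framing an SSE controller has
# to emit by hand. Multi-line payloads become multiple "data:" lines.

def format_sse(data, event_id=None, event=None):
    """Frame one Server-Sent Event: optional id/event lines, one or more
    data lines, terminated by a blank line."""
    lines = []
    if event_id is not None:
        lines.append("id: %s" % event_id)
    if event is not None:
        lines.append("event: %s" % event)
    for chunk in str(data).split("\n"):
        lines.append("data: %s" % chunk)
    return "\n".join(lines) + "\n\n"

print(format_sse("hello", event_id=1))
```

Everything beyond this framing (who gets which event, auth, reconnection handling via the Last-Event-ID header) is indeed up to you.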
This is exactly the example shown on the videos about websocket_messaging.py . the user receives updates through the ws, and he sends to the default web2py installation with a simple ajax post its message. web2py then queues that message to tornado, that informs all connected users of the new message on the ws channel.
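The web2py side of that flow is small. Here is a sketch with the database insert and the push to the broker stubbed out as injected callables, so it runs stand-alone; in a real controller the push would be `websocket_send` from gluon/contrib/websocket_messaging.py, and the store would be a `db.messages.insert(...)`:

```python
# Sketch of the "submission action" side of the flow described above:
# commit the message, then hand it to the tornado broker so every
# connected client sees it on the open channel. `store` and `send` are
# stand-ins so this runs outside web2py.
import json

def submit_message(store, send, user_id, text,
                   broker_url="http://127.0.0.1:8888", group="chat"):
    """Persist the message, then push it to every subscriber of `group`."""
    row_id = store(dict(sender=user_id, body=text))   # stand-in for db.messages.insert(...)
    payload = json.dumps(dict(id=row_id, sender=user_id, body=text))
    send(broker_url, payload, group)                  # stand-in for websocket_send(...)
    return row_id

# usage with in-memory stand-ins:
saved, pushed = [], []
rid = submit_message(lambda rec: saved.append(rec) or len(saved),
                     lambda url, msg, grp: pushed.append((grp, msg)),
                     user_id=7, text="hi all")
print(rid, pushed[0][0])
```

The point of the split is exactly what the videos show: the POST handler stays an ordinary, short-lived web2py action; only the broker holds long-lived connections.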
On the SSE side, you'd have some controller that basically does:
def events():
    initialization_of_sse
    while True:
        yield send_a_message
you have to think about security, routing, etc. by yourself.
Basically in that while True loop you'd likely want to inspect your "storage" (redis, ram, dict, database, whatever) if there's a new message for the user.
You can't "exit" from there and resume it....all the logic needs to happen inside that yield(ing) loop.
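As a sketch of that yield(ing) loop, with an in-memory deque standing in for the "storage" (redis/db/whatever) - all names here are illustrative, not web2py API, and a real action would loop forever and filter per user rather than stop after `max_events`:

```python
# The generator IS the lifetime of the SSE connection: draining the
# store, formatting events, and any exit condition all live inside the
# one loop, because you cannot leave it and resume later.
import time
from collections import deque

inbox = deque()  # stand-in for the per-user message storage

def sse_events(user_id, poll_interval=0.01, max_events=3):
    # `user_id` is where a real action would filter messages per client
    sent = 0
    while sent < max_events:            # a real action would loop forever
        while inbox:                    # drain whatever has arrived
            msg = inbox.popleft()
            yield "data: %s\n\n" % msg
            sent += 1
        time.sleep(poll_interval)       # then poll again (the "polling" cost)

inbox.extend(["one", "two", "three"])
print(list(sse_events(user_id=42)))
```

This is precisely the polling pattern discussed below: the sleep/check cycle is what a broker with a blocking subscribe would replace.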
# -*- coding: utf-8 -*-
import time
from gluon.contenttype import contenttype

### required - do not delete
def user(): return dict(form=auth())
def download(): return response.download(request, db)
def call(): return service()
### end requires

def index():
    return dict()

def error():
    return dict()

def sse():
    return dict()

def buildMsg(eid, msg):
    mmsg = "id: %s\n" % eid
    mmsg += "data: {\n"
    mmsg += "data: \"msg\": \"%s\", \n" % msg
    mmsg += "data: \"id\": %s\n" % eid
    mmsg += "data: }\n\n"
    return mmsg

def sent_server_event():
    response.headers['Content-Type'] = 'text/event-stream'
    response.headers['Cache-Control'] = 'no-cache'
    def sendMsg():
        startedAt = time.time()  # http://www.epochconverter.com/
        while True:
            messaggio = buildMsg(startedAt, time.time())
            yield messaggio
            time.sleep(5)
            if (time.time() - startedAt) > 10:
                break
    return sendMsg()

def event_sender():
    response.headers['Content-Type'] = 'text/event-stream'
    response.headers['Cache-Control'] = 'no-cache'
    mtime = time.time()
    return 'data:' + str(mtime)
if (!window.DOMTokenList) {
  Element.prototype.containsClass = function(name) {
    return new RegExp("(?:^|\\s+)" + name + "(?:\\s+|$)").test(this.className);
  };
  Element.prototype.addClass = function(name) {
    if (!this.containsClass(name)) {
      var c = this.className;
      this.className = c ? [c, name].join(' ') : name;
    }
  };
  Element.prototype.removeClass = function(name) {
    if (this.containsClass(name)) {
      var c = this.className;
      this.className = c.replace(new RegExp("(?:^|\\s+)" + name + "(?:\\s+|$)", "g"), "");
    }
  };
}

// sse.php sends messages with text/event-stream mimetype.
var source = new EventSource('{{=URL("sent_server_event")}}');

function Logger(id) {
  this.el = document.getElementById(id);
}
Logger.prototype.log = function(msg, opt_class) {
  var fragment = document.createDocumentFragment();
  var p = document.createElement('p');
  p.className = opt_class || 'info';
  p.textContent = msg;
  fragment.appendChild(p);
  this.el.appendChild(fragment);
};
Logger.prototype.clear = function() {
  this.el.textContent = '';
};

var logger = new Logger('log');

function closeConnection() {
  source.close();
  logger.log('> Connection was closed');
  updateConnectionStatus('Disconnected', false);
}

function updateConnectionStatus(msg, connected) {
  var el = document.querySelector('#connection');
  if (connected) {
    if (el.classList) {
      el.classList.add('connected');
      el.classList.remove('disconnected');
    } else {
      el.addClass('connected');
      el.removeClass('disconnected');
    }
  } else {
    if (el.classList) {
      el.classList.remove('connected');
      el.classList.add('disconnected');
    } else {
      el.removeClass('connected');
      el.addClass('disconnected');
    }
  }
  el.innerHTML = msg + '<div></div>';
}

source.addEventListener('message', function(event) {
  //console.log(event.data)
  var data = JSON.parse(event.data);
  var d = new Date(data.msg * 1e3);
  var timeStr = [d.getHours(), d.getMinutes(), d.getSeconds()].join(':');
  coolclock.render(d.getHours(), d.getMinutes(), d.getSeconds());
  logger.log('lastEventID: ' + event.lastEventId + ', server time: ' + timeStr, 'msg');
}, false);

source.addEventListener('open', function(event) {
  logger.log('> Connection was opened');
  updateConnectionStatus('Connected', true);
}, false);

source.addEventListener('error', function(event) {
  if (event.eventPhase == 2) { // EventSource.CLOSED
    logger.log('> Connection was closed');
    updateConnectionStatus('Disconnected', false);
  }
}, false);

var coolclock = CoolClock.findAndCreateClocks();
Look, I appreciate you're trying to help out, but it seems you are answering the questions you know the answers to, instead of the questions I ask.

It's OK to say that you don't know the answer. You are not alone in this user-group; perhaps someone else does.
That is answering the question: "How does web2py keep a long-lasting connection?"
That is NOT answering the question: "How can a different controller-action activate this?"
msg = db(db.messages.recipient == auth.user_id).select().first()
yield msg
But it answers NONE of the questions I asked...
There is no inter-controller/action communication here; there is no way to POST something from the client to the server that will call a different action in web2py, which will then invoke another yield of the SSE action, thus intentionally spawning another response over the existing connection....
And what if there are multiple connections to multiple clients? the only way to differentiate between them would be via their sessions.
Now, the way I understand this, it's a fundamental "executional" limitation of web2py - it has no concurrency, so each invocation of web2py's wsgi-handler, is in fact a single-process-single-thread type of scenario, so that there could never exist multiple sessions that are handled at the same time....
oh my.... SSE are unidirectional, so of course the example shows you just the server --> client part and not the client-->server one.
you can do the client--> server part as usual with an ajax post.
EDIT: you don't need to have one-and-only sse capable controller.
You just need to code into a single one of them what is required by the view who will call it (i.e. you can have a page for a chat that will "call" the sse that deals with the chat, the page of the calendar that listens to the calendar sse and so on)
oh my.... SSE are unidirectional, so of course the example shows you just the server --> client part and not the client-->server one.
you can do the client--> server part as usual with an ajax post.(I would appreciate you refrain from using expressions with condescending implications such as "oh my...")
EDIT: you don't need to have one-and-only sse capable controller.
You just need to code into a single one of them what is required by the view who will call it (i.e. you can have a page for a chat that will "call" the sse that deals with the chat, the page of the calendar that listens to the calendar sse and so on)

Now you are getting closer... Of course I understand that I can have more than a single SSE-enabled controller-action, but as you said - this would mean that, say, a "chat" view may ONLY invoke a "chat" SSE-enabled controller-action, and a "calendar" view may ONLY invoke a "calendar" SSE-enabled controller-action...
What if I want 2 users to collaborate on the same data, using different views, and still get real-time updates?

Let's say we have 2 views, a calendar and a scheduling-run-chart - different views of the same (or partially-shared) data, for different use-cases. How can I have one user updating the calendar while getting live updates from another user updating the schedule (and vice-versa)? If it is not clear verbally, perhaps a picture is in order...
Thanks for clearing that up - I get it now. It is still disappointing that the only way to do that is by "polling"... It's not solving the problem, just moving it around. It's fundamentally (in terms of execution-model) no different than using "long-polling" in the client instead of SSE... In both cases you get this scenario:
The whole point of SSE is to avoid that execution model...

You alluded to Redis's "push" mechanism - I've read your link on Redis's Pub/Sub protocol, but couldn't find how the push is being done.
I'm currently looking into the Python-client implementation options there, but let's assume that there is a way to listen from Python to Redis - where do I put that? Inside the while-loop?

And how does this "generator-instance-yield in a return statement" work from an execution-model perspective? What happens when it's sleeping? Isn't the python run-time blocked? I mean, the controller-action "itself" is NOT a generator - it "returns" a generator-instance. It is returning an object. That object has a ".next()" method.. Great. Now what happens? Is web2py recognizing it as a generator-instance by its type/methods? Then it does a ".next()" call and issues the result within a response with the response headers? What happens then? It sleeps, right? What happens during that sleep? And after it finishes sleeping, it does not yield another value by itself - a generator is not a self-activating agent - it needs to be called explicitly - only then will it replay the loop and yield another result.
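Working the generator part of this out with plain python, independent of web2py: calling the action runs none of its body, and each next() call - issued by whoever iterates the response body, i.e. the wsgi server - runs it up to the following yield. A small demo (the `trace` list is just to make the execution order visible):

```python
# What "the action returns a generator" means in plain python: calling
# the function executes none of its body; each next() call runs it up
# to the following yield. The wsgi layer is the caller doing next() for
# every chunk it writes to the socket - the generator never advances
# "by itself".
trace = []

def action():
    trace.append("started")        # runs only on the first next()
    while True:
        trace.append("computing")
        yield "chunk"

gen = action()        # nothing has run yet
print(trace)          # -> []
print(next(gen))      # -> chunk   (body ran up to the first yield)
print(trace)          # -> ['started', 'computing']
print(next(gen))      # -> chunk   (the loop resumed right after the yield)
print(trace)          # -> ['started', 'computing', 'computing']
```

So nothing "self-activates": the server's write loop is the explicit caller, and a sleep inside the body simply blocks until the next yield is reached.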
When I wrote the small app "SSE_clock" I was searching for a replacement for some "long polling javascript code" that I was using in order to push db table-update notifications to clients. I abandoned the project due to lack of browser support.

Anyway, the application is a simple translation from php to python. The original demo's target is to show that SSEs reconnect automatically and that it is possible to send multiple events on a single connection. Attached you'll find the original php code to compare with the python version. However, SSE has other features not discussed in the clock example.
the subscribe part is a method that listens "blocking" and "resumes" as soon as a message is received (so, it blocks if there are no new messages until there is a new one)
so, in your SSE action you should do something like (pseudo-code)
a = pubsub.subscribe(channel)
while True:
    yield a.listen()
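Concretely - and hedging a bit, since in redis-py the shape differs slightly from the pseudo-code above (subscribe() returns nothing; you iterate pubsub.listen() on the pubsub object directly) - the non-polling loop looks like this. FakePubSub is a stand-in so the sketch runs without a redis server:

```python
# Sketch of the broker-driven (non-polling) SSE loop: listen() blocks
# until the broker delivers something, so the generator only wakes up
# when there actually is a message. FakePubSub stands in for redis-py,
# where the real calls are roughly:
#   p = redis.Redis().pubsub(); p.subscribe(channel); for m in p.listen(): ...
import queue

class FakePubSub:
    """Minimal stand-in for a blocking pub/sub subscription."""
    def __init__(self):
        self._q = queue.Queue()
    def publish(self, message):          # broker side
        self._q.put(message)
    def listen(self):                    # subscriber side: blocks until delivery
        while True:
            yield self._q.get()

def sse_action(pubsub):
    for message in pubsub.listen():      # wakes only on delivery - no polling
        yield "data: %s\n\n" % message

ps = FakePubSub()
ps.publish("hello")
stream = sse_action(ps)
print(next(stream))
```

The blocking get/listen is what replaces the sleep-and-check cycle: the thread (or greenlet) parks inside listen() until the broker hands it a message.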
In a gevent environment the coroutine "context switching" happens when you put that thread to sleep. This is done in several standard libs wherever IO is done. Additionally, if you monkey_patched web2py (as anyserver.py does), every sleep() call effectively calls gevent.sleep(), which is "coroutine-friendly": while that coroutine sleeps, the execution of other coroutines can go forward. So yes, a sleep() blocks the execution, but only of that greenlet, letting other greenlets pick up from where they were put to sleep.
As per wsgi specs if the body is an iterator the body is returned in a chunked-like manner to the client: this enables the yielding loop to "stream" pieces of information while keeping the connection open.
You can yield with the default threaded webserver, but the way it's implemented is a pool of threads with a maximum value: as soon as the number of open connections equals the number of webserver threads, no other connection can be established.
On gevent, on the other hand, a new greenlet is spawned at every request, and given that they are lighter, there's (virtually) no upper bound: that's why an evented environment is recommended (not required, but "highly appreciated" nonetheless) when doing long-standing connections.
The first yield WILL block the thread, but as you say, only the thread of that connection. So the inter-thread communication would then be solved via another "shared" process - Redis - which will act as a message broker, listening for submissions and pushing publications out to subscribers.
I guess I can live with that; for now, our user-base is small enough, I think... Apache does the same, right?
just a crazy question: what about if you wrap the eventsource in a web worker?
Well, again, Redis IS required for inter-controller communication... (the notorious "green arrows" in my picture...) Which is, to me, a trivial requirement for most production use-cases...
So, to sum up:
- For inter-controller communication, you need an external message-broker (Redis/RabbitMQ).
- To avoid "polling" the message-broker, you need concurrency (threads/processes/Eventlets).

Now we can move on to Socket.IO:
What integration for it (if any), already exists "within" web2py for a "gevent'ed-deployment story" ?
I don't know or care much for Tornado... From what I gather, it is similar to twisted in terms of asynchronous-coding requirements...
The way I understand it, unless there is some special integration code, using socket.io would usually require running an independent gEvent'ed Socket.IO server - and routing "/socket.io/*" URIs in the web-server to it... It would then deal with all the browser's "client-socket.io" interactions, and inter-operate with web2py via a message-broker (as noted above).

Am I understanding this correctly?