I took a look at the logs, and in both files (.log and .err) I found this:
Feb 17 19:41:19 xmppserver_listener error Traceback[s2s]: /usr/lib/prosody/core/usermanager.lua:79: attempt to index field '?' (a nil value): stack traceback:
/usr/lib/prosody/net/xmppserver_listener.lua:57: in function </usr/lib/prosody/net/xmppserver_listener.lua:57>
/usr/lib/prosody/core/usermanager.lua:79: in function 'user_exists'
mod_archive.lua:763: in function '?'
/usr/lib/prosody/util/events.lua:67: in function 'fire_event'
/usr/lib/prosody/core/stanza_router.lua:172: in function 'core_post_stanza'
/usr/lib/prosody/core/stanza_router.lua:120: in function </usr/lib/prosody/core/stanza_router.lua:43>
(tail call): ?
[C]: in function 'xpcall'
/usr/lib/prosody/net/xmppserver_listener.lua:64: in function 'cb_handlestanza'
/usr/lib/prosody/util/xmppstream.lua:129: in function </usr/lib/prosody/util/xmppstream.lua:115>
[C]: in function 'parse'
/usr/lib/prosody/util/xmppstream.lua:177: in function 'feed'
/usr/lib/prosody/net/xmppserver_listener.lua:130: in function 'data'
/usr/lib/prosody/net/xmppserver_listener.lua:163: in function </usr/lib/prosody/net/xmppserver_listener.lua:160>
(tail call): ?
/usr/lib/prosody/net/server_select.lua:820: in function </usr/lib/prosody/net/server_select.lua:802>
[C]: in function 'xpcall'
/usr/bin/prosody:407: in function 'loop'
/usr/bin/prosody:473: in main chunk
[C]: ?
How can I solve this?
Philipp
Hi Philipp.
The problem is the mod_archive plugin. It isn't ready for production
use yet. Disable that for now.
--
Waqas
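For anyone hitting the same traceback: disabling the module is just a matter of removing (or commenting out) its entry in prosody.cfg.lua and restarting. A minimal sketch, assuming mod_archive was enabled through the modules_enabled list (the other module names are placeholders for whatever you already load):

    -- prosody.cfg.lua
    modules_enabled = {
        "roster";     -- keep your existing modules as they are
        "saslauth";
        -- "archive"; -- commented out: disables mod_archive
    }

Then restart the server (e.g. prosodyctl restart) for the change to take effect.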
Sorry, this email slipped through the cracks in my inbox. By the way
it fills up, you wouldn't think there were any, but...
On 12 February 2011 05:44, Kandada Boggu <kandad...@gmail.com> wrote:
> I have deployed a Prosody server with custom authentication
> (centralized) and custom roster storage (centralized). All clients
> connect to the server through the BOSH module.
>
> The ideal scenario would be to deploy multiple instances of the
> Prosody server with an HTTP load balancer in front. I wasn't able to
> figure out how to share the connected user list across the servers,
> or how to address the presence and message forwarding issues.
>
Load-balancing XMPP is a lot harder than load-balancing HTTP.
Typically an HTTP request can go to any number of
identically-configured boxes, all plugged into the same storage but
otherwise not needing to share any data between them.
In XMPP there is a single long-lived session that is essentially tied
to a given server. All the servers must be able to share presence and
forward messages to each other. There is also complexity involved in
correctly handling what happens when one goes offline (all the other
servers in the cluster must know, and mark all the users on that box
as offline, etc.).
If the users are logging in anonymously and not sharing presence,
then it's quite straightforward: you just run X number of Prosody
instances, each with its own routable domain. Each user is assigned
a random JID on demand. The load-balancing can then be done
client-side or with a slightly intelligent HTTP load-balancer.
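As a rough sketch of that setup (the hostname below is a placeholder;
every node runs the same config apart from its own domain):

    -- prosody.cfg.lua on one node
    modules_enabled = {
        -- ... your usual modules ...
        "bosh"; -- lets Strophe/web clients connect over HTTP
    }

    VirtualHost "node1.example.com"
        authentication = "anonymous" -- each login gets a random JID
                                     -- on this node's domain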
If you really need full horizontal scaling of a single XMPP domain
however, you need clustering support in Prosody. This is on the
roadmap (at least basic clustering is a goal for 1.0). It's not easy
to do, and will take up a significant amount of my time. I'm currently
working on getting myself into a position where I do have the time
(and hence funding) to work on this, but I'm not able to just yet.
Hope this helps,
Matthew
Hi Matthew,

If a load-balanced cluster is not possible, how about a fail-over
cluster? For example, is it possible to have 2 Prosody instances load
a common roster and registration database from replicated MySQL
servers (or a shared NAS), and then, if the primary instance fails,
connected clients can re-connect (IP address pre-configured) to the
failover server using the same username and password?

Of course, the session will not be migrated, so clients will need to
log in to the other server all over again. If a user is created on the
primary instance (say using in-band registration), will that user
automatically be recognized on the secondary instance and vice versa?
Yes, if they are using the same data store.
The only problem with this setup is that the primary and secondary nodes are not aware of each other. If you manage to get into the state where you have some users connected to one node, but other users connected to the second node, they won't be able to communicate. The simplest solution is to restart the secondary node when the primary is back up, causing all users to reconnect to the primary (hopefully). Some people have gone so far as to automate this.
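To make "the same data store" concrete: both nodes would carry an
identical storage section in prosody.cfg.lua, pointing at the
replicated database. A sketch, with the driver details and credentials
as placeholders:

    -- identical on both the primary and the secondary node
    storage = "sql"
    sql = {
        driver   = "MySQL";
        database = "prosody";
        host     = "db.example.com"; -- your replicated MySQL endpoint
        username = "prosody";
        password = "secret";
    }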
On 7 July 2013 09:52, <joshu...@hotmail.com> wrote:
> On Sunday, July 7, 2013 1:16:18 AM UTC+8, Matthew Wild wrote:
>> Yes, if they are using the same data store.
>
> Hi Matthew, I'll probably be using 2-way MySQL replication.
>
>> The only problem with this setup is that the primary and secondary
>> nodes are not aware of each other. If you manage to get into the
>> state where you have some users connected to one node, but other
>> users connected to the second node, they won't be able to
>> communicate. The simplest solution is to restart the secondary node
>> when the primary is back up, causing all users to reconnect to the
>> primary (hopefully). Some people have gone so far as to automate
>> this.
>
> For my project, I'll be using Prosody for my application to send
> messages and receive acknowledgements from Strophe clients. I may
> program my application to log in to both Prosody servers and route
> each message to whichever Prosody server presence is detected on for
> that client.

Ah, if this is a closed system then it opens up some possibilities.
For example, if the JID isn't important, you can simply set up each
Prosody instance as an independent server with a different hostname.
Then, assuming you need routing between clients, you can simply use
s2s for that.
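A sketch of that closed-system layout (hostnames are placeholders;
node B mirrors node A with its own name):

    -- Node A's prosody.cfg.lua
    VirtualHost "node-a.example.com"

    -- Node B's prosody.cfg.lua
    VirtualHost "node-b.example.com"

With s2s enabled (it is in the default config) and DNS resolving both
hostnames, a client on node A reaches a client on node B simply by
addressing user@node-b.example.com; the servers open a
server-to-server connection and route the stanza between themselves.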