Prosody server scaling guidelines

Kandada Boggu

Feb 12, 2011, 12:44:07 AM
to Prosody IM Users
I have deployed a Prosody server with custom authentication (centralized)
and custom roster storage (centralized). All clients connect to the server
through the BOSH module.
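
For reference, the relevant part of each node's config looks roughly like this (the "central_*" names are placeholders for our in-house modules, not real plugins):

-- Rough sketch of one node's prosody.cfg.lua; "central_*" are placeholder
-- names for our in-house auth and roster-storage providers.
VirtualHost "example.com"
    authentication = "central_auth"   -- i.e. a custom mod_auth_central_auth
    storage = "central_store"         -- i.e. a custom mod_storage_central_store
    modules_enabled = {
        "roster"; "saslauth"; "tls"; "disco";
        "bosh"; -- all clients connect via http://example.com:5280/http-bind
    }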

The ideal scenario would be to deploy multiple Prosody instances with an
HTTP load balancer in front. I wasn't able to figure out how to share the
connected-user list across the servers, or how to address presence and
message forwarding between them.

How is this done in production deployments? How do you scale the
server horizontally? How do you make the server highly available?

Thanks,
-kb

Philipp Heumos

Feb 17, 2011, 2:52:54 PM
to prosod...@googlegroups.com
Hello Prosody Team,
I installed Prosody 0.7 on a fresh Debian Lenny x64 system and updated it to 0.8 RC1.
I added a user with prosodyctl adduser b...@foo.bar
Now I can log in and add friends to my roster, and it all works fine. If I write a message, the other contact receives it, but I can't receive messages from the other contact.

I took a look at the logs, and in both files (.log and .err) I found this:

Feb 17 19:41:19 xmppserver_listener error Traceback[s2s]: /usr/lib/prosody/core/usermanager.lua:79: attempt to index field '?' (a nil value): stack traceback:
/usr/lib/prosody/net/xmppserver_listener.lua:57: in function </usr/lib/prosody/net/xmppserver_listener.lua:57>
/usr/lib/prosody/core/usermanager.lua:79: in function 'user_exists'
mod_archive.lua:763: in function '?'
/usr/lib/prosody/util/events.lua:67: in function 'fire_event'
/usr/lib/prosody/core/stanza_router.lua:172: in function 'core_post_stanza'
/usr/lib/prosody/core/stanza_router.lua:120: in function </usr/lib/prosody/core/stanza_router.lua:43>
(tail call): ?
[C]: in function 'xpcall'
/usr/lib/prosody/net/xmppserver_listener.lua:64: in function 'cb_handlestanza'
/usr/lib/prosody/util/xmppstream.lua:129: in function </usr/lib/prosody/util/xmppstream.lua:115>
[C]: in function 'parse'
/usr/lib/prosody/util/xmppstream.lua:177: in function 'feed'
/usr/lib/prosody/net/xmppserver_listener.lua:130: in function 'data'
/usr/lib/prosody/net/xmppserver_listener.lua:163: in function </usr/lib/prosody/net/xmppserver_listener.lua:160>
(tail call): ?
/usr/lib/prosody/net/server_select.lua:820: in function </usr/lib/prosody/net/server_select.lua:802>
[C]: in function 'xpcall'
/usr/bin/prosody:407: in function 'loop'
/usr/bin/prosody:473: in main chunk
[C]: ?

How can I solve this?

Philipp

Waqas Hussain

Feb 18, 2011, 7:30:36 AM
to prosod...@googlegroups.com, Philipp Heumos

Hi Philipp.

The problem is the mod_archive plugin. It isn't ready for production
use yet. Disable that for now.
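
If you added it explicitly, it will be listed in modules_enabled in your prosody.cfg.lua; comment it out and restart Prosody:

-- prosody.cfg.lua: leave the core modules alone, just drop the plugin.
modules_enabled = {
    "roster"; "saslauth"; "tls"; "dialback"; "disco";
    -- "archive"; -- mod_archive disabled until it is production-ready
}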

--
Waqas

Matthew Wild

Feb 18, 2011, 2:40:12 PM
to prosod...@googlegroups.com
Hi,

Sorry, this email slipped through the cracks in my inbox. The way it
fills up, you wouldn't think there were any cracks, but...

On 12 February 2011 05:44, Kandada Boggu <kandad...@gmail.com> wrote:
> I have deployed a Prosody server with custom
> authentication(centralized) and custom roster storage(centralized).
> All clients connect to the server through the BOSH module.
>
> Ideal scenario would be to deploy multiple instances of Prosody server
> with HTTP load balancer in to front.  I wasn't able to figure out how
> to share the connected user list across the servers and how to address
> the presence and message forwarding issues.
>

Load-balancing XMPP is a lot harder than load-balancing HTTP.
Typically an HTTP request can go to any number of
identically-configured boxes, all plugged into the same storage but
otherwise not needing to share any data between them.

In XMPP there is a single long-lived session that is essentially tied
to a given server. All the servers must be able to share presence and
forward messages to each other. There is also complexity involved in
correctly handling what happens when one goes offline (all the other
servers in the cluster must know, and mark all the users on that box
as offline, etc.).

If the users are logging in anonymously and not sharing presence then
it's quite straightforward - you just run a number of Prosody instances,
each with its own routeable domain. Each user is assigned a random JID
on-demand. The load-balancing can then be done client-side or with a
slightly intelligent HTTP load-balancer.
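
A rough per-node sketch of what I mean (hostnames and the module list are just examples):

-- Node 1; nodes 2..N are identical apart from the hostname.
VirtualHost "bosh1.example.com"
    authentication = "anonymous" -- mod_auth_anonymous hands out a random JID per login
    modules_enabled = {
        "saslauth"; "disco";
        "bosh"; -- clients reach this node at http://bosh1.example.com:5280/http-bind
    }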

If you really need full horizontal scaling of a single XMPP domain,
however, you need clustering support in Prosody. This is on the
roadmap (at least basic clustering is a goal for 1.0). It's not easy
to do, and will take up a significant amount of my time. I'm currently
working on getting myself into a position where I do have the time
(and hence funding) to work on this, but I'm not able to just yet.

Hope this helps,
Matthew

joshu...@hotmail.com

Jul 6, 2013, 6:11:23 AM
to prosod...@googlegroups.com
Hi Matthew,

If a load-balanced cluster is not possible, how about a fail-over cluster?

For example, is it possible to have 2 Prosody instances load a common roster and registration database from replicated MySQL servers (or a shared NAS), and then, if the primary instance fails, have connected clients reconnect (with the IP address pre-configured) to the failover server using the same username and password?

Of course, the session will not be migrated, so clients will need to log in to the other server all over again.

If a user is created on the primary instance (say, using in-band registration), will that user automatically be recognized on the secondary instance, and vice versa?

Rgds,
Joshua

Ralph J. Mayer

Jul 6, 2013, 7:28:27 AM
to prosod...@googlegroups.com
Hi,

If you need a cluster, take a look at ejabberd.


rm

Matthew Wild

Jul 6, 2013, 1:16:18 PM
to Prosody IM Users Group
Hi Joshua,

On 6 July 2013 11:11, <joshu...@hotmail.com> wrote:
> Hi Matthew,
>
> If a load-balanced cluster is not possible, how about a fail-over cluster?
>
> For example, is it possible to have 2 Prosody instances load a common roster and registration database from replicated MySQL servers (or a shared NAS), and then, if the primary instance fails, have connected clients reconnect (with the IP address pre-configured) to the failover server using the same username and password?

Yes, this will work.

> Of course, the session will not be migrated, so clients will need to log in to the other server all over again.
>
> If a user is created on the primary instance (say, using in-band registration), will that user automatically be recognized on the secondary instance, and vice versa?

Yes, if they are using the same data store.
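
For example, both nodes could carry an identical SQL storage section (Prosody 0.8+ with LuaDBI installed; the connection details here are placeholders):

-- Identical on the primary and the secondary node.
storage = "sql" -- mod_storage_sql
sql = {
    driver = "MySQL";
    database = "prosody";
    host = "db.example.com"; -- your replicated MySQL endpoint
    username = "prosody";
    password = "secret";
}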

The only problem with this setup is that the primary and secondary nodes are not aware of each other. If you manage to get into the state where you have some users connected to one node, but other users connected to the second node, they won't be able to communicate. The simplest solution is to restart the secondary node when the primary is back up, causing all users to reconnect to the primary (hopefully). Some people have gone so far as to automate this.
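
An untested sketch of such a watchdog, using LuaSocket (hostname, port and restart command are placeholders you'd adapt):

-- Runs on the secondary node: when the primary's client port answers again
-- after an outage, bounce the local Prosody so users reconnect to the primary.
local socket = require "socket"

local PRIMARY_HOST, PRIMARY_PORT = "xmpp-primary.example.com", 5222 -- placeholders
local primary_was_down = false

while true do
    local conn = socket.tcp()
    conn:settimeout(5)
    local ok = conn:connect(PRIMARY_HOST, PRIMARY_PORT)
    conn:close()

    if not ok then
        primary_was_down = true
    elseif primary_was_down then
        os.execute("prosodyctl restart") -- force everyone off the secondary
        primary_was_down = false
    end

    socket.sleep(30) -- poll every 30 seconds
end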

Hope this helps.

Regards,
Matthew

joshu...@hotmail.com

Jul 7, 2013, 4:21:12 AM
to prosod...@googlegroups.com, rma...@nerd-residenz.de
Yes, I read about Facebook Chat using ejabberd, so that's the first thing I tried. Erlang is also very impressive, but I wasn't able to get an ejabberd+Erlang installation running on Windows. There's also an ejabberd fork, MongooseIM, which has a nice website but no Windows binary.

joshu...@hotmail.com

Jul 7, 2013, 4:52:31 AM
to prosod...@googlegroups.com
On Sunday, July 7, 2013 1:16:18 AM UTC+8, Matthew Wild wrote:

> Yes, if they are using the same data store.

Hi Matthew, I'll probably be using 2-way MySQL replication.  
 

> The only problem with this setup is that the primary and secondary nodes are not aware of each other. If you manage to get into the state where you have some users connected to one node, but other users connected to the second node, they won't be able to communicate. The simplest solution is to restart the secondary node when the primary is back up, causing all users to reconnect to the primary (hopefully). Some people have gone so far as to automate this.


For my project, I'll be using Prosody for my application to send messages to, and receive acknowledgements from, Strophe clients. I may program my application to log in to both Prosody servers and route each message via whichever Prosody server presence is detected on for that client.

Matthew Wild

Jul 7, 2013, 6:07:07 AM
to Prosody IM Users Group
Ah, if this is a closed system then it opens up some possibilities. For example if the JID isn't important, you can simply set up each Prosody instance as an independent server with a different hostname. Then, assuming you need routing between clients, you can simply use s2s for that.
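
Roughly, each node is just its own ordinary server (the hostnames here are examples):

-- Node 1; node 2 uses "xmpp2.example.com", and so on. s2s is enabled by
-- default, so user@xmpp1.example.com and user@xmpp2.example.com can exchange
-- stanzas as long as the hostnames resolve to the right machines.
VirtualHost "xmpp1.example.com"
    authentication = "internal_hashed"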

If you don't need client-to-client communication then the whole thing is very easy. You can simply set up as many identical Prosody instances as you want, and it doesn't matter which one the client connects to.

Regards,
Matthew

joshu...@hotmail.com

Jul 9, 2013, 1:09:13 PM
to prosod...@googlegroups.com


On Sunday, July 7, 2013 6:07:07 PM UTC+8, Matthew Wild wrote:
> On 7 July 2013 09:52, <joshu...@hotmail.com> wrote:
>> On Sunday, July 7, 2013 1:16:18 AM UTC+8, Matthew Wild wrote:
>>>
>>> Yes, if they are using the same data store.
>>
>> Hi Matthew, I'll probably be using 2-way MySQL replication.
>>
>>> The only problem with this setup is that the primary and secondary nodes are not aware of each other. If you manage to get into the state where you have some users connected to one node, but other users connected to the second node, they won't be able to communicate. The simplest solution is to restart the secondary node when the primary is back up, causing all users to reconnect to the primary (hopefully). Some people have gone so far as to automate this.
>>
>> For my project, I'll be using Prosody for my application to send messages to, and receive acknowledgements from, Strophe clients. I may program my application to log in to both Prosody servers and route each message via whichever Prosody server presence is detected on for that client.
>
> Ah, if this is a closed system then it opens up some possibilities. For example if the JID isn't important, you can simply set up each Prosody instance as an independent server with a different hostname. Then, assuming you need routing between clients, you can simply use s2s for that.

Yes, it's a closed system. But if we use different JIDs, would that be a problem with the shared storage?

Matthew Wild

Jul 9, 2013, 1:34:26 PM
to Prosody IM Users Group
On 9 July 2013 18:09, <joshu...@hotmail.com> wrote:
>
>
> On Sunday, July 7, 2013 6:07:07 PM UTC+8, Matthew Wild wrote:
>>
>> On 7 July 2013 09:52, <joshu...@hotmail.com> wrote:
>>>
>>> On Sunday, July 7, 2013 1:16:18 AM UTC+8, Matthew Wild wrote:
>>>>
>>>>
>>>> Yes, if they are using the same data store.
>>>
>>>
>>> Hi Matthew, I'll probably be using 2-way MySQL replication.
>>>
>>>>
>>>>
>>>> The only problem with this setup is that the primary and secondary nodes
>>>> are not aware of each other. If you manage to get into the state where you
>>>> have some users connected to one node, but other users connected to the
>>>> second node, they won't be able to communicate. The simplest solution is to
>>>> restart the secondary node when the primary is back up, causing all users to
>>>> reconnect to the primary (hopefully). Some people have gone so far as to
>>>> automate this.
>>>>
>>>
>>> For my project, I'll be using Prosody for my application to send messages
>>> and receive acknowledgements from Strophe clients. I may program my
>>> application to login to both Prosody servers and route the message whichever
>>> Prosody server that presence is detected for that client.
>>
>>
>> Ah, if this is a closed system then it opens up some possibilities. For
>> example if the JID isn't important, you can simply set up each Prosody
>> instance as an independent server with a different hostname. Then, assuming
>> you need routing between clients, you can simply use s2s for that.
>
>
> Yes, it's a closed system. But if we use different JID would that be a
> problem with the shared storage?

Users can simply authenticate as user...@example.com, while actually
connected to prosodyNNN.example.com (for example). Prosody would be
configured identically on each node to serve the virtual host
"example.com", and they would all share the same database (which can
itself be clustered).
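
In config terms, every node carries the same file, along these lines (all names here are examples):

-- Deployed unchanged on prosody001.example.com, prosody002.example.com, ...
-- Clients are pointed at a specific node's address, but authenticate against
-- the shared "example.com" host, whose data lives in the shared database.
VirtualHost "example.com"
    authentication = "internal_hashed"

storage = "sql"
sql = { driver = "MySQL"; database = "prosody"; host = "db.example.com";
        username = "prosody"; password = "secret" }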

This approach is quite easy, and presents no barrier to switching to
"true" clustering if necessary in the future.

Regards,
Matthew