How to properly coordinate a sole task between multiple processes on many servers


Tom

Oct 5, 2012, 8:04:58 AM
to nod...@googlegroups.com
I've set up a cluster of physical servers. Each server runs exactly the same code. Moreover, each server runs multiple Node processes using the built-in cluster functionality.

I use MongoDB (native) to share information between processes and servers. However, I am having some difficulty with running a special task that needs to be executed only once during initialization:
> if a special `admin` account does not yet exist in the database, it should be created

Originally I figured that each node could read from the MongoDB master server and check whether the admin account already exists. If it does not, no other node has created it yet, so this node should do so. However, this is problematic because creating the admin password hash is asynchronous and takes time. Therefore there is a delay between the moment a node decides to create the account and the moment the account becomes visible to other nodes querying the database.
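One way to sidestep this read-then-create race is to make the write itself the arbiter: put a unique index on the account name and let every node attempt the insert; MongoDB rejects all but the first attempt with duplicate-key error 11000. A minimal sketch of the idea — the in-memory collection and all names below are illustrative stand-ins, not the real driver API:

```javascript
// Sketch of insert-as-arbiter: with a unique index on `username`, only one
// insert can ever succeed; every other node gets duplicate-key error 11000
// and treats that as "another node already created the account".
// makeFakeCollection() is an in-memory stand-in for a real Mongo collection,
// just so the race is observable without a server.

function makeFakeCollection() {
  const docs = new Map();
  return {
    async insert(doc) {
      if (docs.has(doc.username)) {
        const err = new Error('duplicate key');
        err.code = 11000; // MongoDB's duplicate-key error code
        throw err;
      }
      docs.set(doc.username, doc);
    },
  };
}

async function ensureAdmin(col) {
  // The slow part (password hashing) still happens before the insert, but
  // the insert itself is now the atomic step, so the delay no longer matters.
  const passwordHash = await new Promise((resolve) =>
    setTimeout(() => resolve('fake-hash'), 10)); // stand-in for bcrypt etc.
  try {
    await col.insert({ username: 'admin', passwordHash });
    return 'created';
  } catch (err) {
    if (err.code === 11000) return 'already exists';
    throw err;
  }
}

// Two "nodes" racing: exactly one of them creates the account.
const col = makeFakeCollection();
Promise.all([ensureAdmin(col), ensureAdmin(col)]).then((results) => {
  console.log(results.sort()); // one 'already exists', one 'created'
});
```

With a real driver the same shape applies: `ensureIndex({ username: 1 }, { unique: true })` once, then insert and catch code 11000.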

The code snippet that reads from the Mongo master only and creates the account is available here: https://gist.github.com/3839429

In the future I would also like a special task to be executed every 5 minutes. That task must be executed by only one of the running servers, not by all of them.

In short: when running a cluster of servers, how do you coordinate between these servers which of them is going to execute a sole task such as the one described above?

Tom

greelgorke

Oct 5, 2012, 8:59:33 AM
to nod...@googlegroups.com
I wouldn't do it that way. When deploying your app, just run a pre-start script that ensures the existence of your desired data.
If you have periodical tasks, it's best to use a library for that, which triggers jobs apart from your main application, e.g. http://stackoverflow.com/questions/3785736/is-there-a-job-scheduler-library-for-node-js , so you just avoid the concurrency problems.

Tom

Oct 5, 2012, 11:07:20 AM
to nod...@googlegroups.com
Unfortunately I'm afraid that I don't see how a scheduler can avoid the concurrency problems. Note that the advantages (e.g. in availability) of having a cluster should be maintained here, so you cannot run the scheduler in a separate process on a single server. And if every server were running the scheduler, the same concurrency problems would arise. What were you proposing?

About the initialization, I guess that would work. It is not the way I would prefer to do it, as I would like the application to be self-contained and usable without running special tools, but I reckon it is an acceptable approach.

Tom

On Friday, October 5, 2012 at 19:59:33 UTC+7, greelgorke wrote:

Dan Milon

Oct 5, 2012, 12:21:03 PM
to nod...@googlegroups.com
greelgorke meant using a job queue where jobs are put and then handed to workers.

If you want to do it with Mongo only, you'll need to use some "lock" document that is set and unset by the first process that tries to initiate a task. All other processes that try to grab the lock while it's held by another process should assume that the job is being worked on by that process. But that's really ugly and has problems because jobs can't be acknowledged, so if a process crashes while it's performing some task, you're stuck.
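The crash problem can be softened by giving the lock an expiry, so a lock is only honoured while it is fresh and a dead holder's lock eventually times out. A sketch of that decision logic — the `Map` is an in-memory stand-in for a Mongo collection, and in real code the check-and-set below would be one atomic findAndModify; all names and the TTL are illustrative:

```javascript
// Sketch of a lock document with an expiry: a holder that crashes simply
// stops refreshing its lock, and after LOCK_TTL_MS another process may
// steal it. In real Mongo code this check-and-set must be a single
// findAndModify so it stays atomic; here it is plain code over a Map.
const LOCK_TTL_MS = 60 * 1000; // illustrative; tune to the job's runtime

function tryAcquire(locks, name, owner, now) {
  const current = locks.get(name);
  if (current && now - current.acquiredAt < LOCK_TTL_MS) {
    return false; // held and still fresh: someone else is doing the job
  }
  locks.set(name, { owner, acquiredAt: now }); // take a free or stale lock
  return true;
}

const locks = new Map();
console.log(tryAcquire(locks, 'create-admin', 'server-A', 0));     // true
console.log(tryAcquire(locks, 'create-admin', 'server-B', 1000));  // false: fresh
console.log(tryAcquire(locks, 'create-admin', 'server-B', 61001)); // true: stale
```

This still isn't acknowledgement — a job can run twice if the original holder was merely slow, not dead — but it removes the "crashed holder blocks everyone forever" failure mode.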


Mark Hahn

Oct 5, 2012, 3:31:33 PM
to nod...@googlegroups.com
I may be crazy, but I'm implementing a scheme where processes get tasks from a db record and then store their process ID in that record.  Then while they are running they periodically check to make sure that their ID is still the one in the record.  If another process has 'stolen' the task, then the process aborts and looks for another one to do.

This is the only way I could figure out to do task assignment when faced with a db that only has "eventual consistency".  The CouchDB instance I'm using offers no atomic operations, so this was the only reliable way to do it.

It works quite well.  In the usual case it just grabs the task, does it, and moves on.  Collisions are rare, but they may be more frequent as the cluster grows in size.

Evan

Oct 5, 2012, 4:14:28 PM
to nod...@googlegroups.com
You are basically proposing the schema for DelayedJob (Ruby) [ https://github.com/collectiveidea/delayed_job ].  This kind of thing gets really weird in eventually consistent databases, but Mongo (like MySQL) is always consistent, so you should be OK.  However, a lot of folks have been finding locking problems with this approach (multiple job execution, or really aggressive table locking is needed), so most folks tend to use a store which supports atomic "push" and "pop" operations, so you can ensure that one and only one worker gets the job.  Redis is the most popular of these types of stores these days.  I make use of that property in http://actionherojs.com/ for exactly this purpose, as does the very popular https://github.com/defunkt/resque and some other projects.
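The key property is that an atomic pop removes a job from the queue and hands it to exactly one worker in a single step, so no two workers can ever see the same job. A toy illustration of that exactly-once handout — the array stands in for a Redis list, where the `RPOP`/`BRPOP` command is the atomic step; none of this is the actionhero or resque API:

```javascript
// Toy sketch of atomic-pop semantics: each job can be removed from the
// queue by exactly one worker, so there is no duplicate execution and no
// lock document to clean up. The array is a stand-in for a Redis list.
const queue = ['job-1', 'job-2', 'job-3'];

function pop(q) {
  return q.shift(); // Redis makes this atomic across clients (RPOP/BRPOP)
}

const seen = {}; // how many workers received each job
const workers = ['worker-1', 'worker-2'];
let job;
let turn = 0;
while ((job = pop(queue)) !== undefined) {
  const worker = workers[turn++ % workers.length]; // alternate the workers
  seen[job] = (seen[job] || 0) + 1;
}
console.log(seen); // every job was handed out exactly once
```

In a real deployment each server would block on `BRPOP` against the shared Redis list, and whichever server pops a job runs it alone.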

Dan Milon

Oct 5, 2012, 7:14:39 PM
to nod...@googlegroups.com
Mongo is definitely NOT always consistent.

Mark Hahn

Oct 5, 2012, 7:19:31 PM
to nod...@googlegroups.com
> so most folks tend to use a store which can support atomic "push" and "pop" operations so you can ensure that one and only one worker gets the job.

This usually requires a centralized resource.  My scheme is 100% distributed.

Evan Tahler

Oct 5, 2012, 7:20:02 PM
to nod...@googlegroups.com
Sorry, I should have been more clear: Mongo in a cluster (as Dan points out) is certainly not `always consistent`. I got the impression that your implementation had many Node servers reading from a single Mongo server.  If that's the case, then that one server will be consistent with itself :D

Sorry for the confusion! 

Ryan Schmidt

Oct 5, 2012, 10:27:23 PM
to nod...@googlegroups.com
On Oct 5, 2012, at 10:07, Tom <tomm...@gmail.com> wrote:

> Unfortunately I'm afraid that I don't see how a scheduler can avoid the concurrency problems. Note that the advantages (e.g. in availability) of having a cluster should be maintained here, and so you cannot run a scheduler in a separate process on a single server.

I would say if you want a task run *once* every five minutes, then you should run it on one server.

If high availability is essential, perhaps you designate one primary server to run the task, and a backup server that monitors the primary server and takes over if the primary goes offline.
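That primary/backup idea can be reduced to one decision: the backup runs the scheduler only when the primary's heartbeat (e.g. a timestamp the primary refreshes in a shared store) has gone stale. A sketch of that decision, with an illustrative timeout and field names of my own choosing:

```javascript
// Sketch of primary/backup failover for the scheduler: the primary always
// runs it; the backup steps in only when the primary has missed its
// heartbeat window (lastPrimaryHeartbeat would come from a doc the primary
// refreshes in the shared database). Threshold and names are illustrative.
const HEARTBEAT_TIMEOUT_MS = 15 * 1000;

function shouldRunScheduler(role, lastPrimaryHeartbeat, now) {
  if (role === 'primary') return true;
  return now - lastPrimaryHeartbeat > HEARTBEAT_TIMEOUT_MS;
}

console.log(shouldRunScheduler('primary', 0, 0));      // true
console.log(shouldRunScheduler('backup', 0, 5000));    // false: primary alive
console.log(shouldRunScheduler('backup', 0, 20000));   // true: primary silent
```

The usual caveat applies: if the primary is slow rather than dead, both may briefly run the scheduler, so the scheduled task itself should still be safe to run twice.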


> If every server would be running the scheduler, the same concurrency problems would arise.

I agree they would.

What is this task you want to run every five minutes? Is there a way you could partition the work so that each server runs the script every five minutes but each server only does a specific fraction of the work? That would help distribute the workload as it increases, but wouldn't help if a server goes offline.

Tom

Oct 6, 2012, 1:07:30 AM
to nod...@googlegroups.com
I have multiple mongodb servers running in a cluster, so they are not always consistent.

@Mark, I see a problem with your suggestion: "processes get the tasks from a db record". So which server inserts the task into the database? If all servers try to insert the task, then multiple tasks will be created. If only one server creates tasks, then you have a problem when that server goes offline. If each server first checks whether the task was already created, I see two problems: 1) how do you identify two tasks as being equal? 2) what if there is a race condition in inserting the task, so that server B cannot yet read the task while server A is still in the process of writing it?

@Ryan, since the aim is to coordinate a sole task among multiple servers, simply reducing the cluster to one server does not really achieve my objective. I think it is possible to coordinate such a task. Maybe with a distributed event system, allowing servers to coordinate who is going to perform it?
 
Tom

On Saturday, October 6, 2012 at 09:27:55 UTC+7, ryandesign wrote:

Mark Hahn

Oct 6, 2012, 3:35:30 PM
to nod...@googlegroups.com
So what server inserts the task into the database?

The tasks come from user actions.  Each user is only connected to one host server, so only one server creates a given task.  When a task is finished, its results are shared by all hosts.  There is no concept of two tasks being equal and no possible race during creation.

If a task server claims a task and then crashes, the others notice a time-out condition (actually a lack of heartbeat) and the task is up for grabs again.  If two servers finish the same task before the collision is noticed, their results go into the same doc, so eventual consistency wins again.  It has taken me a long time to make this fool-proof, but it works now.
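The worker-side rule in this scheme boils down to a small decision a process makes each time it re-reads the task record. A sketch of that rule as a pure function — the field names are illustrative, not Mark's actual schema:

```javascript
// Sketch of the ownership re-check in a claim-and-heartbeat scheme: a
// worker keeps going only while the task record still names it as owner.
// If the record now names someone else (the task was 'stolen' after this
// worker's heartbeat lapsed), it aborts and looks for other work.
function nextAction(taskRecord, myId) {
  if (!taskRecord.owner) return 'claim';   // unowned: try to take it
  if (taskRecord.owner === myId) return 'continue';
  return 'abort';                          // stolen: stop working on it
}

console.log(nextAction({ owner: null }, 'p1')); // 'claim'
console.log(nextAction({ owner: 'p1' }, 'p1')); // 'continue'
console.log(nextAction({ owner: 'p2' }, 'p1')); // 'abort'
```

On an eventually consistent store like CouchDB the 'claim' step can still collide, which is exactly why the periodic re-check and the idempotent result doc are needed.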


Joshua Gross

Oct 6, 2012, 6:55:11 PM
to nod...@googlegroups.com
Mark, maybe I missed this from earlier on, but do you have any plans to document this or open-source any related code? Sounds pretty interesting!

-- Joshua Gross
Christian / SpanDeX.io / BA Candidate of Computer Science, UW-Madison 2013