How does node.js handle database transactions?


Charlie Circle

Dec 19, 2012, 10:01:50 PM
to nod...@googlegroups.com
For serious web applications, transactions are crucial.

But node's asynchronous nature doesn't play well with the way transactions work, and I wonder why nobody seems to care about it.

In synchronous applications, you start your transaction, perform database operations step by step, and commit when done.

In node, however, you have to start the transaction in one place and commit it deep inside a callback chain.
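Something like this, just as a rough sketch of what I mean (assuming a node-mysql-style conn.query(sql, cb) API; the table names and the order/done/rollback names are made up and would come from the surrounding request handler):

// `conn` is an already-connected node-mysql-style client
conn.query('START TRANSACTION', function (err) {
  if (err) return done(err)
  conn.query('INSERT INTO orders SET ?', order, function (err, result) {
    if (err) return rollback(err)
    conn.query('UPDATE stock SET qty = qty - 1 WHERE item_id = ?', [order.itemId], function (err) {
      if (err) return rollback(err)
      // the COMMIT only happens way down here, far away from the START TRANSACTION
      conn.query('COMMIT', done)
    })
  })
})

function rollback (err) {
  conn.query('ROLLBACK', function () { done(err) })
}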

And that's not even the worst part: because node shares a single connection per process, you can't be sure which operation belongs to which transaction, so everything gets mixed up.

I've seen a solution on npm that lets you execute database operations in sequence, which takes us back to the synchronous age; not very scalable, right?

Are there any new ideas?

Jake Verbaten

Dec 19, 2012, 10:37:24 PM
to nod...@googlegroups.com
Transactions are trivial when supported by your database engine.

Like levelup's .batch():

var ops = [
    { type: 'del', key: 'father' }
  , { type: 'put', key: 'name', value: 'Yuri Irsenovich Kim' }
  , { type: 'put', key: 'dob', value: '16 February 1941' }
  , { type: 'put', key: 'spouse', value: 'Kim Young-sook' }
  , { type: 'put', key: 'occupation', value: 'Clown' }
]

db.batch(ops, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('Great success dear leader!')
})

If your database driver doesn't give you a sensible, clean API, then just write one.

If your database doesn't support transactions, then you're screwed.




Mark Hahn

Dec 19, 2012, 10:44:23 PM
to nodejs
> I wonder why nobody seems to care about it.

I will assume you aren't a troll. I would think that almost all Couch users know about ACID and use Couch with an understanding of the trade-offs. CouchDB has distributed features that would not be possible with transactions. If transactions are necessary for your application, then choose a DB that has them.

Arunoda Susiripala

Dec 19, 2012, 11:11:03 PM
to nod...@googlegroups.com
It depends on the type of app.

This problem has nothing to do with node; it's entirely up to your db. Node is mostly used with NoSQL dbs, but running it with MySQL is also not uncommon.

MongoDB doesn't have transactions, but it does have atomic operations. If your app is using MySQL, I would hope the node-mysql driver supports transactions.
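For example, a single-document atomic update in mongodb looks something like this (a rough sketch using the node-mongodb-native driver; the collection and field names are only examples, and db/itemId/callback are assumed to exist in the surrounding code):

// decrement stock and bump the sales counter in one atomic operation
db.collection('items').update(
  { _id: itemId, qty: { $gt: 0 } },  // match only while stock remains
  { $inc: { qty: -1, sold: 1 } },    // both fields change together, atomically
  function (err, result) {
    if (err) return callback(err)
    // if nothing matched, someone else took the last item
    callback(null, result)
  }
)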

Read and experiment with node more. You'll understand.

On Thu, Dec 20, 2012 at 8:31 AM, Charlie Circle <xie...@gmail.com> wrote:
For serious web applications, transactions are crucial.



Ted Young

Dec 19, 2012, 11:26:13 PM
to nod...@googlegroups.com
Mark, I'm guessing he's talking about mysql, not couch.  It's true that you can't run multiple simultaneous transactions over one mysql connection.  But that's true of a mysql connection in any language, not just node (unless I've missed something).  I'm not a node-mysql expert, but your choices are:

a) create a connection per request
b) create a connection pool, for example with node-pool, and acquire and release a connection on each request (rough sketch below)
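Something along the lines of the node-pool README, adapted for mysql (a rough sketch; the exact connection options and the node-mysql connection API may differ between versions, and handleError is just a placeholder):

var poolModule = require('generic-pool')   // the npm package behind node-pool
var mysql = require('mysql')

var pool = poolModule.Pool({
  name: 'mysql',
  create: function (callback) {
    var conn = mysql.createConnection({ host: 'localhost', user: 'app', database: 'mydb' })
    callback(null, conn)
  },
  destroy: function (conn) { conn.end() },
  max: 10,                   // cap the number of open connections
  idleTimeoutMillis: 30000   // close idle connections after 30s
})

// per request: take a dedicated connection, run the transaction on it, give it back
pool.acquire(function (err, conn) {
  if (err) return handleError(err)
  conn.query('START TRANSACTION', function (err) {
    // ... the request's queries, then COMMIT or ROLLBACK ...
    pool.release(conn)
  })
})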


Ted

Charlie Circle

Dec 20, 2012, 12:50:03 AM
to nod...@googlegroups.com
Yeah, I'm talking about mysql.  Here is my point:
a) The current library only offers one connection per process, which is not suitable for transactions. And creating one connection per request is wasteful.
My first thought was to create a connection only when there is a transaction, but no module supports that right now; maybe I should write one myself.

b) Batch processing works for simple business logic, but if your logic combines a lot of DAOs in a service layer, you end up in a mess. Or maybe I should write stored procedures, but I don't like that either.

Mark Hahn

Dec 20, 2012, 1:03:35 AM
to nodejs
Duh. My bad.

I read my couch messages inline with my node messages. I didn't check
the list and I assumed this was on the couch list, where it would
appear to be trolling.

Sorry to all (and to all good night).

Jake Verbaten

Dec 20, 2012, 3:09:53 AM
to nod...@googlegroups.com
It's not hard to write a wrapper around the mysql library that implements a transaction queue: if a second transaction is started, it gets transparently queued until the first one ends.

Of course that will suck performance-wise, but transactions already suck performance-wise; if you want performance, use atomic operations or batch operations.
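The queue itself is only a few lines. Something like this, as a sketch only (error handling and ROLLBACK are left out, and the TransactionQueue/conn names are made up):

// `conn` is the single shared node-mysql-style connection
function TransactionQueue (conn) {
  this.conn = conn
  this.waiting = []
  this.busy = false
}

// fn gets the connection and a `done` callback; only one fn runs at a time
TransactionQueue.prototype.run = function (fn) {
  if (this.busy) return this.waiting.push(fn)
  this.busy = true

  var self = this
  fn(this.conn, function done () {
    self.busy = false
    if (self.waiting.length) self.run(self.waiting.shift())
  })
}

// usage: everything between START TRANSACTION and COMMIT owns the connection
var queue = new TransactionQueue(conn)
queue.run(function (conn, done) {
  conn.query('START TRANSACTION', function () {
    conn.query('UPDATE accounts SET balance = balance - 10 WHERE id = 1', function () {
      conn.query('COMMIT', done)
    })
  })
})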

Ted Young

Dec 20, 2012, 1:01:36 PM
to nod...@googlegroups.com
Yeah, Charlie, I think your best bet is to use node-pool (https://github.com/coopernurse/node-pool) to make a set of persistent connections and avoid the creation overhead.  You could then use a separate connection, not in the pool, for your non-transactional requests.

Ted

Andrey

Dec 20, 2012, 8:41:38 PM
to nod...@googlegroups.com
a) That's not true; there is no limit on the node side to the number of mysql connections (though you should keep it to a reasonable number, since the mysql server creates a thread for each one).

The MySQL protocol itself is sequential and does not let you send a new query before receiving the previous result. The common solution is to use a connection pool.

Charlie Circle

Dec 21, 2012, 12:55:20 AM
to nod...@googlegroups.com
Yeah, you guys are right, the solution is not so complicated; I just wanted to make it clear.
But putting all transactions in one queue is not a good solution, and this statement is not true:
  >> Of course that will suck performance-wise, but transactions already suck performance-wise

InnoDB is a transactional storage engine: whether you set autocommit to true or false, there is always a transaction. The only difference is having one transaction per SQL statement versus putting many statements in one transaction. The performance difference between autocommit on and off is not very noticeable; InnoDB is simply fast enough.

We also have a team developing a new storage engine that is much faster than InnoDB; in their tests, the transactional engine costs about twice as much as the non-transactional engine, which is also acceptable.