Pessimistic locking locks the entire table?


PierreW

unread,
Apr 2, 2012, 6:33:04 PM4/2/12
to Ruby on Rails: Talk
Hi guys,

I must be missing something obvious with pessimistic locking. Here is
what I do (Rails 2.3, MySQL):

In my Foo model:

def test
  Foo.transaction do
    lock = UpdateLockForFoo.find_by_foo_id(self.id, :lock => true)
    sleep 30
  end
end

For each Foo model, I have an UpdateLockForFoo. This "lock" needs to
be acquired before doing any changes to Foo.

I believe it works, except that my entire UpdateLockForFoo table is
blocked during the sleep. If I open two terminals and do:

1st terminal:
f = Foo.find(1)
f.test

2nd terminal:
f = Foo.find(2)
f.test

The 2nd terminal waits 30s before doing what it has to do. My
understanding was that :lock => true only locks a specific record, but
it seems that is not the case: it locks the entire table.

Do you know why?

Please note: it might not look like a good idea to have this separate
UpdateLockForFoo model but I am just simplifying the whole thing here
(I need this second model).

Thanks!

Scott Ribe

unread,
Apr 3, 2012, 11:42:12 AM4/3/12
to rubyonra...@googlegroups.com
On Apr 2, 2012, at 4:33 PM, PierreW wrote:

> My
> understanding was that :lock => true only locks a specific record, but
> it seems that is not the case: it locks the entire table.

That depends on what kind of locking the underlying database offers, which with MySQL depends on the storage engine you're using.

--
Scott Ribe
scott...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice


PierreW

unread,
Apr 3, 2012, 2:52:18 PM4/3/12
to rubyonra...@googlegroups.com
Hi Scott,

Thanks. In case it helps others, it is explained here:


Setting the transaction isolation level to Read Committed did not work for me, but creating an index on UpdateLockForFoo did the trick.

Thanks!

Frederick Cheung

unread,
Apr 3, 2012, 5:40:29 PM4/3/12
to Ruby on Rails: Talk


On Apr 3, 7:52 pm, PierreW <wamre...@googlemail.com> wrote:
> Hi Scott,
>
> Thanks. In case it helps others, it is explained here:
>
> http://stackoverflow.com/questions/6690458/innodb-select-for-update-s...
>
> Setting the transaction isolation level to Read Committed did not work
> for me, but creating an index on UpdateLockForFoo did the trick.
>
InnoDB's row-level locks are actually index-range locks: if there is no
index on the column you are using to select the records to lock,
this will result in locking the whole table.

Fred

Robert Walker

unread,
Apr 3, 2012, 7:37:55 PM4/3/12
to rubyonra...@googlegroups.com
wam r. wrote in post #1054714:

> Hi guys,
>
> I must be missing something obvious with pessimistic locking. Here is
> what I do (Rails 2.3, mySQL):

So is there a reasonable use case for pessimistic locking on a web
application? That seems insane to me.

--
Posted via http://www.ruby-forum.com/.

PierreW

unread,
Apr 3, 2012, 7:47:44 PM4/3/12
to Ruby on Rails: Talk
I don't know if using pessimistic locking is the best way to do it,
but here is why I used this:

- every X hours a daemon runs and updates records
- thing is, this daemon "action" can last Y, with Y > X

So there is a risk that two instances of the daemon try to update the
same record. Each daemon therefore needs to acquire the lock before it
can do anything on a given record.

I guess an alternative could be to try and make sure that a daemon only
starts if the previous one has finished, but this was not an option in
my case.

Adding an index was actually fairly easy.

Robert Walker

unread,
Apr 4, 2012, 9:47:26 AM4/4/12
to rubyonra...@googlegroups.com
wam r. wrote in post #1054916:

> I don't know if using pessimistic locking is the best way to do it,
> but here is why I used this:
>
> - every X hours a daemon runs and updates records
> - thing is, this daemon "action" can last Y, with Y > X
>
> So there is a risk that two instances of the daemon try to update the
> same record. Each daemon therefore needs to acquire the lock before it
> can do anything on a given record.
>
> I guess an alternative could be to try and make sure that a daemon only
> starts if the previous one has finished, but this was not an option in
> my case.
>
> Adding an index was actually fairly easy.

Thanks for the clarification. I hope I didn't sound condescending in my
previous reply. I was asking because I was interested to know of a case
where pessimistic locking might be useful in a web environment.

This does make some sense in that case. Have you attempted to measure
the performance effect on user-driven queries while the daemon is
performing this pessimistic-locking batch update? It would be
interesting to know whether the overhead of acquiring the locks is
significant compared to using optimistic locking and handling the likely
few unresolvable conflicts that might arise.

I'd imagine that most optimistic-locking issues could be resolved by
catching the optimistic failure, merging the changes and rewriting. Of
course that leaves the possibility of a conflict on changes to the same
field, where another strategy might be needed: something like "last
write wins," or "user changes override daemon changes" (or vice versa).
In a worst-case scenario, unresolvable conflicts might just have to be
recorded and skipped over by the daemon process.
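The catch-and-merge idea above can be sketched in plain Ruby. This is a hypothetical miniature of what ActiveRecord's optimistic locking does with a lock_version column and ActiveRecord::StaleObjectError; the Record struct and save helper here are invented for illustration:

```ruby
# Each row carries a version; a write only succeeds if the version the
# writer loaded is still current, mirroring ActiveRecord's lock_version.
Record = Struct.new(:value, :lock_version)

class StaleObjectError < StandardError; end

def save(record, loaded_version, new_value)
  raise StaleObjectError if record.lock_version != loaded_version
  record.value = new_value
  record.lock_version += 1
end

row = Record.new("original", 0)
v = row.lock_version            # two writers both load version 0
save(row, v, "first write")     # succeeds, version becomes 1

begin
  save(row, v, "second write")  # version is now 1, not 0, so this fails
rescue StaleObjectError
  # merge the changes and retry, or apply a "last write wins" policy
end

puts row.value # first write: the conflicting write was rejected
```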

This might sound "bad" on the surface, but when compared with
conflicting changes between two actual users it's likely not as big a
deal as it seems. In the end it might actually turn out to be safer than
introducing the possibility of deadlocking the database due to a
pessimistic locking scheme.

Of course this all depends on the specific nature of the app (i.e.
whether user changes are fairly isolated, or multiple users often
manipulate the same data).

In the end, though, it all comes down to metrics. It's far too easy to
spend too much time optimizing only to find out later that the time
spent gained you almost nothing.

Jeffrey L. Taylor

unread,
Apr 4, 2012, 12:24:27 PM4/4/12
to Ruby on Rails: Talk
This is similar to an application I am working on. In my case, all
updates to the database, with one exception, can be done atomically. For
example, use a SQL UPDATE to increment a field rather than reading with
pure Rails/ActiveRecord, incrementing the model instance, and writing it
back. The one exception is adding new instances of a model with several
associations.
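The read-modify-write race Jeffrey is avoiding can be illustrated in plain Ruby, with a Mutex standing in for the database's statement-level atomicity (an illustration, not code from his app). Loading a value, incrementing it in Ruby, and writing it back gives another process a window to clobber the update; a single `UPDATE ... SET counter = counter + 1` has no such window:

```ruby
# A Mutex plays the role of the database: everything inside synchronize
# happens atomically, like one SQL UPDATE ... SET counter = counter + 1.
counter = 0
db_lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    100.times do
      # The racy version would be: n = counter; (window here); counter = n + 1
      db_lock.synchronize { counter += 1 }
    end
  end
end
threads.each(&:join)

puts counter # 1000: no updates lost
```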

My solution is that all problematic actions are done by ONE Resque worker
that runs forever. The cron jobs are Ruby (not Rails) programs that
enqueue a task for the Resque worker. This has worked very well; I was
pleasantly surprised how easily it went together. The one con is that
Resque workers are not good about reporting exceptions and other
problems, and in the development environment they reload properly after
most, but not all, changes. So in the development environment, if there
are problems or I am making big and deep changes, I will stop the Resque
worker and run the problematic code with script/runner or whatever the
Rails 3 equivalent is.

HTH,
Jeffrey


Frederick Cheung

unread,
Apr 6, 2012, 6:18:37 AM4/6/12
to Ruby on Rails: Talk


On Apr 4, 12:47 am, PierreW <wamre...@googlemail.com> wrote:
> I don't know if using pessimistic locking is the best way to do it,
> but here is why I used this:
>
> - every X hours a daemon runs and updates records
> - thing is, this daemon "action" can last Y, with Y > X
>
> So there is a risk that two instances of the daemon try to update the
> same record. Each daemon therefore needs to acquire the lock before it
> can do anything on a given record.
>
> I guess an alternative could be to try and make sure that a daemon only
> starts if the previous one has finished, but this was not an option in
> my case.
>

You might want to look at separating the concept of the SQL lock
(which does have costs, can land you in deadlocks, etc.) from the
application-level lock.

You could have a locked column (better, a locked_since column) that
indicates that the row is locked. Use optimistic or pessimistic
locking to update the locked_since column, and make your update code
ignore records with a non-null locked_since.
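Fred's locked_since scheme can be sketched in plain Ruby. This is a hypothetical stand-in: the in-memory RECORDS hash and the STALE_AFTER policy are invented for illustration, and in the real app the claim check would be one atomic statement, e.g. `UPDATE ... SET locked_since = NOW() WHERE id = ? AND locked_since IS NULL`:

```ruby
# Hypothetical in-memory stand-in for the table, so the claim logic can
# run outside Rails. A nil locked_since means "free to claim".
RECORDS = {
  1 => { :locked_since => nil },
  2 => { :locked_since => Time.now - 7200 }, # claimed two hours ago
}

STALE_AFTER = 3600 # seconds; assume locks older than this were abandoned

# Try to claim a record for this daemon; true if we now own it.
def claim(id, now = Time.now)
  row = RECORDS[id]
  return false unless row
  ls = row[:locked_since]
  if ls.nil? || (now - ls) > STALE_AFTER
    row[:locked_since] = now # in SQL: one UPDATE guarded by a WHERE clause
    true
  else
    false
  end
end

def release(id)
  RECORDS[id][:locked_since] = nil
end

puts claim(1) # true: record 1 was unlocked
puts claim(1) # false: we just claimed it ourselves
puts claim(2) # true: the old claim is stale (older than STALE_AFTER)
```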


Fred

jmcguckin

unread,
Nov 21, 2012, 2:24:05 PM11/21/12
to rubyonra...@googlegroups.com
What happens when a 2nd process tries to write to a record/table that is locked? Does it stall until the lock is released or
does it throw an exception?

Frederick Cheung

unread,
Nov 21, 2012, 5:29:51 PM11/21/12
to rubyonra...@googlegroups.com


On Wednesday, November 21, 2012 7:24:05 PM UTC, jmcguckin wrote:
What happens when a 2nd process tries to write to a record/table that is locked? Does it stall until the lock is released or
does it throw an exception?


It will block until the lock is released. If the lock is never released, then depending on your database settings it will give up at some point and throw an exception.
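Fred's answer can be demonstrated by analogy in plain Ruby (a sketch, not database code): a Mutex stands in for the row lock, and Timeout plays the role of MySQL's innodb_lock_wait_timeout setting, after which a blocked transaction gives up with a "Lock wait timeout exceeded" error:

```ruby
require 'timeout'

lock = Mutex.new
holder = Thread.new { lock.synchronize { sleep } } # first "transaction" holds the lock forever
sleep 0.05 until lock.locked?                      # wait until it actually owns the lock

# A second "transaction" blocks on the same lock and gives up after a
# timeout, like MySQL once innodb_lock_wait_timeout expires.
result =
  begin
    Timeout.timeout(0.2) { lock.synchronize { :acquired } }
  rescue Timeout::Error
    :timed_out
  end

holder.kill
puts result # timed_out
```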

Fred