> My
> understanding was that :lock => true only locks a specific record, but
> it seems it is not the case and it locks the entire table.
That depends on what kind of locking the underlying database offers, which with MySQL depends on the storage engine you're using: InnoDB supports row-level locks, while MyISAM can only lock whole tables.
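
As a minimal sketch of what :lock => true does under the hood (Account,
account_id, and the balance math are made up), it appends FOR UPDATE to
the generated SELECT, and the scope of the resulting lock is up to the
engine:

  Account.transaction do
    # SELECT * FROM accounts WHERE id = ... LIMIT 1 FOR UPDATE
    # InnoDB locks just the matching row; MyISAM has no row locks,
    # so the lock effectively covers the whole table.
    account = Account.find(account_id, :lock => true)
    account.balance -= 100
    account.save!   # the lock is held until the transaction commits
  end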
--
Scott Ribe
scott...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
So is there a reasonable use case for pessimistic locking on a web
application? That seems insane to me.
Thanks for the clarification. I hope I didn't sound condescending in my
previous reply. I was asking because I was interested to know of a case
where pessimistic locking might be useful in a web environment.
This does make some sense in that case. Have you attempted to calculate
the performance effect on user driven queries while the daemon is
performing this pessimistic locking batch update? It would be
interesting to know whether the overhead of acquiring the locks would be
significant compared to using optimistic locking and handling the likely
few unresolvable conflicts that might arise.
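
One crude way to get a first-order number on that before committing
either way (Record, the batch size, and touch as the stand-in write are
all made up) would be:

  require 'benchmark'

  Benchmark.bm(12) do |x|
    x.report("FOR UPDATE") do
      Record.transaction { Record.limit(1000).lock(true).each(&:touch) }
    end
    x.report("no lock") do
      Record.transaction { Record.limit(1000).each(&:touch) }
    end
  end

Of course the interesting cost only shows up under concurrent user
traffic, so numbers from an idle box will understate it.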
I'd imagine that most optimistic locking issues could be resolved by
catching the optimistic failure, merging the changes and rewriting. Of
course that leaves the possibility of a conflict on changes to the same
field, where another strategy might be needed: something like "last
write wins," or "user changes override daemon changes" (or vice versa).
In a worst case scenario unresolvable conflicts might just have to be
recorded and skipped over by the daemon process.
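
A rough sketch of that catch-and-retry idea, assuming the model has a
lock_version column so ActiveRecord raises StaleObjectError on a
conflicting save (Item and ConflictLog are hypothetical, and this
variant simply reapplies the daemon's changes to a fresh copy, i.e.
"daemon wins"; per-field merging would go inside the rescue block):

  def apply_daemon_changes(item_id, changes)
    attempts = 0
    begin
      item = Item.find(item_id)         # fresh copy, fresh lock_version
      item.update_attributes!(changes)  # raises StaleObjectError on conflict
    rescue ActiveRecord::StaleObjectError
      attempts += 1
      retry if attempts < 3
      # unresolvable: record it and let the daemon skip over it, as above
      ConflictLog.create!(:item_id => item_id, :changes => changes.inspect)
    end
  end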
This might sound "bad" on the surface, but when compared with
conflicting changes between two actual users it's likely not as big a
deal as it seems. In the end it might actually turn out to be safer than
introducing the possibility of deadlocking the database due to a
pessimistic locking scheme.
Of course this all depends on the specific nature of the app (i.e.
whether user changes are fairly isolated, or multiple users often
manipulate the same data).
In the end though it all comes down to metrics. It's far too easy to
spend too much time optimizing only to find out later that time spent
gained you almost nothing.
My solution is that all problematic actions are done with ONE Resque worker
that runs forever. The cron jobs are Ruby (not Rails) programs that enqueue a
task to the Resque worker. This has worked very well. I was pleasantly
surprised how easily it went together. The one con is that Resque workers are
not good about reporting exceptions and other problems, and in the
development environment they reload properly after most, but not all,
changes. So in development, if there are problems or I am making big and
deep changes, I will stop the Resque worker and run the problematic code
with script/runner (or its Rails 3 equivalent, rails runner).
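
The shape of it is roughly this (BatchUpdate, the queue name, and the
perform body are made up; the point is that a single worker on a single
queue serializes everything):

  require 'resque'

  class BatchUpdate
    @queue = :serialized_jobs   # exactly one worker drains this queue

    def self.perform(record_id)
      # ... the update that would otherwise need pessimistic locking ...
    end
  end

  # enqueued from the plain-Ruby cron script:
  Resque.enqueue(BatchUpdate, some_record_id)

The lone worker gets started with QUEUE=serialized_jobs rake resque:work,
so jobs run strictly one at a time and never contend for locks.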
HTH,
Jeffrey
What happens when a second process tries to write to a record/table that is locked? Does it stall until the lock is released, or does it throw an exception?