One pattern that I saw just last week for a similar problem, transcribed from Oracle to pseudocode:
try {
    update
} catch (OptimisticLockException e) {
    lock row
    refresh
    update
}
This is for situations where OptimisticLockException is the rare case: the normal case runs without row-locking overhead, while the exceptional, race-condition case is still handled properly.
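A minimal, self-contained sketch of that fallback, using an in-memory row with a version counter (the Row class, the ReentrantLock, and the method names are illustrative stand-ins; a real implementation would go through the ORM and SELECT ... FOR UPDATE):

```java
import java.util.concurrent.locks.ReentrantLock;

class OptimisticFallback {
    static class OptimisticLockException extends Exception {}

    static class Row {
        int version;
        int value;
        final ReentrantLock lock = new ReentrantLock();  // stands in for a DB row lock
    }

    // Optimistic update: succeeds only if nobody bumped the version in between.
    static void update(Row row, int expectedVersion, int newValue)
            throws OptimisticLockException {
        if (row.version != expectedVersion) throw new OptimisticLockException();
        row.value = newValue;
        row.version++;
    }

    static void updateWithFallback(Row row, int snapshotVersion, int newValue) {
        try {
            update(row, snapshotVersion, newValue);       // normal, lock-free path
        } catch (OptimisticLockException e) {
            row.lock.lock();                              // lock row
            try {
                int freshVersion = row.version;           // refresh
                try {
                    update(row, freshVersion, newValue);  // retry under the lock
                } catch (OptimisticLockException impossible) {
                    throw new IllegalStateException(impossible);
                }
            } finally {
                row.lock.unlock();
            }
        }
    }

    public static void main(String[] args) {
        Row row = new Row();
        int snapshot = row.version;  // our "transaction" reads version 0
        row.version++;               // a concurrent writer bumps the version
        updateWithFallback(row, snapshot, 42);
        System.out.println(row.value + " v" + row.version);  // prints "42 v2"
    }
}
```

The point of the structure is that the lock is only ever taken on the path that has already proven contention exists.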
(You can still have a timeout waiting for the lock. For example, somebody could have done a manual SELECT ... FOR UPDATE on that row and forgotten to COMMIT or ROLLBACK after that. That's the kind of situation where you give up on the task, log the error and move on.)
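The give-up path can be sketched with a bounded lock wait (again an in-memory ReentrantLock standing in for something like SELECT ... FOR UPDATE WAIT n; the class and method names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class BoundedRowLock {
    // Try to take the row lock within the timeout; on timeout, log and give up
    // on the task instead of blocking forever behind a forgotten COMMIT.
    static boolean refreshAndUpdate(ReentrantLock rowLock, long timeoutMillis)
            throws InterruptedException {
        if (!rowLock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS)) {
            System.err.println("row lock timed out; logging the error, moving on");
            return false;
        }
        try {
            // refresh + update would happen here, under the lock
            return true;
        } finally {
            rowLock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        ReentrantLock lock = new ReentrantLock();
        System.out.println(refreshAndUpdate(lock, 100));  // free lock: true

        lock.lock();  // simulate the forgotten COMMIT holding the row
        Thread blocked = new Thread(() -> {
            try {
                System.out.println(refreshAndUpdate(lock, 100));  // times out: false
            } catch (InterruptedException ignored) {}
        });
        blocked.start();
        blocked.join();
        lock.unlock();
    }
}
```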
This was about Quartz jobs, which are ALWAYS background jobs where you don't have user interaction.
Which means that you don't need to consider user think time, and the penalty for aborting an attempted transaction is higher; both considerations tip the scales in favor of locking.
> To be picky, if the contention is between long running transactions
> then locks are potentially not going to be a great solution either.
Well, I avoid long-running transactions at (almost) any cost, they come with too many problems:
- They lock out any parallel activity in interactive processing.
- In batch processing, accumulating updates for the commit means:
  - any failure will lose a lot of successful work
  - collecting to-do info for the commit causes memory pressure in the DB
> In that case Josh might be better ensuring the jobs were
> externally managed to run in a serial fashion or that the
> jobs worked on orthogonal data sets.
Yes.
> I personally would not be prescribing locks without knowing
> more about the actual problem.
Well, I personally would not prescribe that you simply have to live with optimistic lock exceptions...