Using Ebean.refresh to prevent OptimisticLockException


Josh Kamau

Apr 22, 2012, 8:05:14 AM
to eb...@googlegroups.com
Hi ,

I keep getting OptimisticLockException because I have schedulers running in separate threads and updating the data.

If I always call Ebean.refresh on an entity before doing an update on that entity, can I prevent this error?

Regards.
Josh.

edge

Apr 22, 2012, 8:49:10 AM
to eb...@googlegroups.com
yes - if you do the refresh and update in a transaction 

Josh Kamau

Apr 22, 2012, 12:19:19 PM
to eb...@googlegroups.com
Thanks.

Josh.

Rob Bygrave

Apr 22, 2012, 5:17:56 PM
to eb...@googlegroups.com
>> If i call Ebean.refresh on an entity always before doing an update on that entity, can i prevent this error?

Strictly speaking the answer is no. There is always a 'time window' where you are effectively in a race condition unless you actually take a lock on the DB row(s) (pessimistic locking).

I'd suggest you google search for "lost update" (which is what optimistic concurrency checking is preventing).
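What optimistic concurrency checking does to prevent a lost update can be sketched in plain Java. This is an illustrative in-memory stand-in for the version-column check an ORM like Ebean performs (the class and method names are hypothetical, not Ebean's API):

```java
// A minimal in-memory row with a version column, mimicking the
// "UPDATE ... WHERE id = ? AND version = ?" check an ORM performs.
class VersionedRow {
    private int value;
    private int version;

    synchronized int readVersion() { return version; }
    synchronized int readValue()   { return value; }

    // Succeeds only if the caller's version still matches; otherwise the
    // row changed underneath us and the update is rejected instead of
    // silently overwriting (this rejection is the OptimisticLockException).
    synchronized boolean update(int expectedVersion, int newValue) {
        if (version != expectedVersion) {
            return false;
        }
        value = newValue;
        version++;
        return true;
    }
}

public class LostUpdateDemo {
    public static void main(String[] args) {
        VersionedRow row = new VersionedRow();

        // Two "transactions" read the same version...
        int v1 = row.readVersion();
        int v2 = row.readVersion();

        // ...the first update wins...
        System.out.println(row.update(v1, 10)); // true
        // ...and the second is rejected rather than losing the first update.
        System.out.println(row.update(v2, 20)); // false
        System.out.println(row.readValue());    // 10
    }
}
```

Without the version check, the second writer would silently overwrite the first: that is the "lost update".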



Cheers, Rob.

Josh Kamau

Apr 22, 2012, 5:22:20 PM
to eb...@googlegroups.com
Hi Rob;

I understand the concept of "Lost Update" and the need to avoid overriding changes made elsewhere. Now I have an application that has several Quartz jobs that update each other's data. I keep getting OptimisticLockException in a very random manner. So I was looking for a way of solving the problem for all cases once and for all.

Thanks.
Josh.

Rob Bygrave

Apr 22, 2012, 5:46:53 PM
to eb...@googlegroups.com
>> getting OptimisticLockException in a very random manner.

I'd suggest it is not random. If multiple jobs can run concurrently and are updating the same DB rows then there is always a chance of an OptimisticLockException (unless you use actual DB locks or use an external mechanism to ensure the jobs are not concurrent for that query/update period).

Using 'refresh()' or fetching a fresher version of an entity reduces the time window where you will get OptimisticLockException but does not eliminate it.

Josh Kamau

Apr 22, 2012, 5:48:15 PM
to eb...@googlegroups.com
ok. Thanks Rob

edge

Apr 22, 2012, 5:49:58 PM
to eb...@googlegroups.com
ok - strictly speaking there is a very small window where this can still occur, because in the time between the refresh and the update another thread can update the data.
So I think the problem is more to do with your design and why you want to overwrite past updates regardless.
If you channel all updates of the specific entity through a single synchronized method, that should work.
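The "single synchronized method" idea can be sketched in plain Java (a hypothetical AccountUpdater standing in for whatever entity the jobs touch). Note this only serializes updates within one JVM; it does not help once the application runs in multiple JVMs:

```java
import java.util.ArrayList;
import java.util.List;

// All mutations of the entity's state go through one synchronized method,
// so the read-modify-write is atomic within this JVM and concurrent jobs
// can no longer race each other on it.
public class AccountUpdater {
    private long balance; // stands in for the entity's state

    // Only one thread at a time can run the read-modify-write.
    public synchronized void applyDelta(long delta) {
        balance += delta;
    }

    public synchronized long balance() { return balance; }

    public static void main(String[] args) throws InterruptedException {
        AccountUpdater updater = new AccountUpdater();
        List<Thread> threads = new ArrayList<>();
        // Ten concurrent "jobs" each apply 1000 updates.
        for (int i = 0; i < 10; i++) {
            Thread t = new Thread(() -> {
                for (int j = 0; j < 1000; j++) updater.applyDelta(1);
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();
        System.out.println(updater.balance()); // 10000 - no updates lost
    }
}
```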

Josh Kamau

Apr 22, 2012, 5:51:21 PM
to eb...@googlegroups.com
Thanks Edge for the 'synchronized' hint. I had forgotten there is such a thing in Java.

Durchholz, Joachim

Apr 23, 2012, 5:58:48 AM
to eb...@googlegroups.com
Just to complete the picture: if you ever need to scale the application to run in multiple JVMs, you'll really need to hold (at least) a row lock between fetch and update.
The other source of contention could be third-party software: outsourced development, plug-ins that want to talk to your tables, manual SQL.

One pattern that I saw just last week for a similar problem, transcribed from Oracle to pseudocode:

try {
    update
} catch (OptimisticLockException e) {
    lock row
    refresh
    update
}

This is for situations where OptimisticLockException is the rare case, because it makes the normal case run without row locking overhead and still makes sure that the exceptional, race-condition case is properly handled.
(You can still have a timeout waiting for the lock. For example, somebody could have done a manual SELECT ... FOR UPDATE on that row and forgotten to COMMIT or ROLLBACK after that. That's the kind of situation where you give up on the task, log the error and move on.)
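The pattern above can be fleshed out in plain Java. This is a simulation, not the real thing: an in-memory store stands in for the database, a ReentrantLock for the DB row lock, and a hypothetical StaleRowException for OptimisticLockException (in a real application, "lock row" would be a SELECT ... FOR UPDATE and "refresh" a reload from the database):

```java
import java.util.concurrent.locks.ReentrantLock;

class StaleRowException extends RuntimeException {}

class Row {
    int value;
    int version;
}

public class RetryUnderLock {
    private final ReentrantLock rowLock = new ReentrantLock(); // stands in for the DB row lock
    private final Row stored = new Row();                      // stands in for the DB row

    // Version-checked write: fails if the caller's copy is stale.
    private void optimisticUpdate(Row copy, int newValue) {
        synchronized (stored) {
            if (copy.version != stored.version) throw new StaleRowException();
            stored.value = newValue;
            stored.version++;
        }
    }

    // Reload the caller's copy from the "database".
    public void refresh(Row copy) {
        synchronized (stored) {
            copy.value = stored.value;
            copy.version = stored.version;
        }
    }

    // The pattern: try the cheap optimistic update first; only on conflict
    // take the lock, refresh, and retry. The common, uncontended case pays
    // no locking overhead.
    public void update(Row copy, int newValue) {
        try {
            optimisticUpdate(copy, newValue);
        } catch (StaleRowException e) {
            rowLock.lock();
            try {
                refresh(copy);
                optimisticUpdate(copy, newValue);
            } finally {
                rowLock.unlock();
            }
        }
    }
}
```

Usage: if another transaction bumped the row's version after you read it, `update` silently falls into the catch branch, refreshes, and retries under the lock instead of failing.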

Josh Kamau

Apr 23, 2012, 6:13:32 AM
to eb...@googlegroups.com
Thanks Joachim for the useful information.

Josh

Rob Bygrave

Apr 23, 2012, 7:23:24 AM
to eb...@googlegroups.com
>> you'll really need to hold (at least) a row lock between fetch and update.

Well I will never agree to that. It does depend on the type and duration of transactions, but in most short-lived user-interface transactions optimistic locking is going to be VASTLY more scalable than holding even row-level locks on the DB during user think time.

For longer-running transactions, batch job processing etc., it is a different story and you need to think more about the interaction with other concurrent short- and long-running transactions.


Cheers, Rob.

Durchholz, Joachim

Apr 23, 2012, 9:01:16 AM
to eb...@googlegroups.com
>> you'll really need to hold (at least) a row lock between fetch and update.
> Well I will never agree to that. It does depend on the type and duration of
> transactions but in most short lived user interface transactions Optimistic
> locking is going to be VASTLY more scalable than holding even row level
> locks on the DB during user think time.

This was about Quartz jobs, which are ALWAYS background jobs where you don't have user interaction.
Which means that you don't need to consider user think time, and the penalty for aborting an attempted transaction is higher; both considerations weigh the scales in favor of locking.

Rob Bygrave

Apr 23, 2012, 5:49:37 PM
to eb...@googlegroups.com
>> This was about Quartz jobs

That is fair enough. I read your comment differently (in the more general sense) and hence thought it was bad advice.

To be picky, if the contention is between long-running transactions then locks are potentially not going to be a great solution either. In that case Josh might be better off ensuring the jobs are externally managed to run in a serial fashion, or that the jobs work on orthogonal data sets. I personally would not be prescribing locks without knowing more about the actual problem.

Josh Kamau

Apr 24, 2012, 8:32:28 AM
to eb...@googlegroups.com
In my situation, I was opening a transaction, reading an object, and closing the transaction.

Then a few minutes later I want to save the updated object. Apparently the object I have been holding is already out of date.

That's what kept causing the issue.

Josh.


Durchholz, Joachim

Apr 24, 2012, 8:24:11 AM
to eb...@googlegroups.com
> To be picky, if the contention is between long running transactions
> then locks are potentially not going to be a great solution either.

Well, I avoid long-running transactions at (almost) any cost, they come with too many problems:
- They lock out any parallel activity in interactive processing.
- In batch processing, accumulating updates for the commit means
  - any failure will lose a lot of successful work
  - collecting to-do info for the commit causes memory pressure in the DB

> In that case Josh might be better ensuring the jobs were
> externally managed to run in a serial fashion or that the
> jobs worked on orthogonal data sets.

Yes.

> I personally would not be prescribing locks without knowing
> more about the actual problem.

Well, I personally would not prescribe that you simply have to live with optimistic lock exceptions...

Rob Bygrave

Apr 25, 2012, 3:53:39 AM
to eb...@googlegroups.com
>> Well, I avoid long-running transactions at (almost) any cost

Fair enough I guess. Personally I have written quite a few systems with a significant amount of batch processing (accounting systems with bank reconciliation, journal posting etc, Telco CDR processing applications etc) but yes there is frequently no simple solution. You need to think about the conflicts and design the solution accordingly.

Oracle Concurrent Manager is pretty core to how Oracle Financials works and I have seen a lot of batch processing in the Telco industry. I'd argue there are many cases where the data goes through some 'state transition' workflow and is efficiently and easily processed via batch processing.

Let's just agree to disagree.


>> .. not prescribe that you simply have to live with optimistic lock exceptions

Agreed - but I never said that. I said that I would not prescribe pessimistic locking without knowing a lot more.

Rob Bygrave

Apr 25, 2012, 3:59:02 AM
to eb...@googlegroups.com
For someone else to help you they need to understand the nature of the conflicting transactions (specifically if one or both are quartz/batch jobs).

The transaction log would likely show you what the conflicting transaction is.

刘松

Apr 25, 2012, 10:38:50 AM
to eb...@googlegroups.com
The Actor Model is another toolkit for avoiding locks & conflicts, but it depends on the context.
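The actor idea in its most minimal Java form: funnel all mutations of one piece of state through a single-threaded executor (the "mailbox"), so updates can never race and no locks are needed for that state. A sketch with a hypothetical CounterActor:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal actor-style sketch: the state is only ever touched by the
// mailbox's single worker thread, so concurrent callers cannot conflict.
public class CounterActor {
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();
    private long count; // only accessed from the mailbox thread

    // Any thread may "send a message"; the mailbox processes them one at a time.
    public void increment() {
        mailbox.execute(() -> count++);
    }

    // Drain the mailbox and read the final state.
    public long shutdownAndGet() throws InterruptedException {
        mailbox.shutdown();
        mailbox.awaitTermination(5, TimeUnit.SECONDS);
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        for (int i = 0; i < 1000; i++) actor.increment();
        System.out.println(actor.shutdownAndGet()); // 1000
    }
}
```

The trade-off versus locking is that callers never block; contention becomes queueing in the mailbox instead.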
