How to prevent OptimisticLockingException when executing service tasks concurrently?


tosc...@googlemail.com

May 9, 2014, 11:02:49 AM5/9/14
to camunda-...@googlegroups.com
Hi all,

how can two service tasks of a single process be executed concurrently?

I have a parallel gateway which splits to two parallel service tasks (which are later joined again).

I've set the asynchronous attribute "async" to "true" on both service tasks, so that the continuation will happen asynchronously. I've also set the "exclusive" attribute to "false" on both service tasks.
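
For reference, this configuration looks roughly like the following in the BPMN XML (namespace declarations omitted; the task ids, names, and delegate classes are invented for illustration):

```xml
<parallelGateway id="fork" />
<serviceTask id="serviceA" name="Service A"
    camunda:async="true" camunda:exclusive="false"
    camunda:class="org.example.DelegateA" />
<serviceTask id="serviceB" name="Service B"
    camunda:async="true" camunda:exclusive="false"
    camunda:class="org.example.DelegateB" />
<parallelGateway id="join" />
```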

In this case, both service tasks are now started concurrently and the first to finish completes correctly. However, the second service task to finish will always fail with an OptimisticLockingException.

SEVERE: Error while closing command context
org.camunda.bpm.engine.OptimisticLockingException: ExecutionEntity[d1332687-d4e0-11e3-a07b-58946bf7bb18] was updated by another transaction concurrently
at org.camunda.bpm.engine.impl.db.DbSqlSession.flushUpdates(DbSqlSession.java:700)
at org.camunda.bpm.engine.impl.db.DbSqlSession.flush(DbSqlSession.java:496)
at org.camunda.bpm.engine.impl.interceptor.CommandContext.flushSessions(CommandContext.java:214)
at org.camunda.bpm.engine.impl.interceptor.CommandContext.close(CommandContext.java:157)
at org.camunda.bpm.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:49)
at org.camunda.bpm.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:32)
at org.camunda.bpm.engine.impl.jobexecutor.ExecuteJobsRunnable.executeJob(ExecuteJobsRunnable.java:79)
at org.camunda.bpm.engine.impl.jobexecutor.ExecuteJobsRunnable.run(ExecuteJobsRunnable.java:67)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

http://docs.camunda.org/latest/guides/user-guide/#process-engine-the-job-executor-exclusive-jobs says
"[Exclusive execution] can be turned off if you are an expert and know what you are doing (and have understood this section)"

But how do I prevent the OptimisticLockingException if I know that the two service tasks are unrelated, e.g. they don't change any process variables? Or: what is the point of the "exclusive" attribute? What is its use case?

A workaround would be to model a "Send Task" followed by a "Receive Task", so that the "Send Task" makes an asynchronous call to a service which then reports its completion to the "Receive Task" using a previously defined correlation key.
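
Sketched in BPMN XML, that workaround would look roughly like this (namespaces omitted; all ids, the message name, and the delegate class are hypothetical):

```xml
<sendTask id="callService" name="Call service"
    camunda:class="org.example.AsyncServiceCaller" />
<sequenceFlow id="flow1" sourceRef="callService" targetRef="waitForResult" />
<receiveTask id="waitForResult" name="Wait for result"
    messageRef="serviceDone" />

<message id="serviceDone" name="ServiceDoneMessage" />
```

The external service would then complete the receive task via message correlation, using the previously agreed correlation key.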

Regards
Tobias

Daniel Meyer

May 10, 2014, 2:51:45 AM5/10/14
to camunda-...@googlegroups.com, tosc...@googlemail.com
Hi Tobias,

We implement synchronization using optimistic locking. So there is no way around this except retrying the failed request.
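
The retry itself can be sketched in plain Java; the exception type and the unit of work below are made-up stand-ins (the real engine throws OptimisticLockingException and the job executor performs the retries for you), so this only illustrates the pattern:

```java
import java.util.function.Supplier;

// Illustration only: RetrySketch and ConcurrentUpdateException are invented
// names standing in for a command and the engine's OptimisticLockingException.
public class RetrySketch {

    static class ConcurrentUpdateException extends RuntimeException {}

    /** Runs the action; on a concurrency conflict, retries up to maxRetries more times. */
    static <T> T withRetries(int maxRetries, Supplier<T> action) {
        ConcurrentUpdateException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.get();
            } catch (ConcurrentUpdateException e) {
                last = e; // another transaction won the race; try again
            }
        }
        throw last; // retries exhausted
    }
}
```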

I think that your workaround would not prevent the optimistic locking exception:
if you have two receive tasks on parallel branches, and both are triggered by concurrent transactions, then you end up with the same behavior. 
In addition, you will now have the added complexity of doing error handling outside of the process engine, which may mean that you have to use transactional messaging for fault tolerance and manage things like dead-letter queues etc.

Why are you setting exclusive=false? Are your service invocations long running and you want to do them truly concurrently?

Daniel

tosc...@googlemail.com

May 12, 2014, 3:27:09 AM5/12/14
to camunda-...@googlegroups.com, tosc...@googlemail.com
Hi Daniel,

thanks for your reply.

> We implement synchronization using optimistic locking. So there is no way around this except retrying the failed request.

What's the point of the "exclusive" attribute then? The result will always be that the parallel paths only complete successfully after being executed one after the other.

> I think that your workaround would not prevent the optimistic locking exception:
> if you have two receive tasks on parallel branches, and both are triggered by concurrent transactions, then you end up with the same behavior. 

True if the triggers occur at the same time, but not if there is enough time between the two, e.g. one service task takes 20 seconds and the other 30 seconds.

> In addition you will now have the added complexity of having to do error handling outside of the process engine which may mean that you have to use transactional messaging for fault tolerance and manage things like dead-letter queues etc. 

ok

> Why are you setting exclusive=false? Are your service invocations long running and you want to do them truly concurrently?

It's a theoretical question currently: let's assume "truly concurrently". There must be a use case for the attribute "exclusive", or isn't there?

Regards
Tobias

webcyberrob

May 12, 2014, 4:25:15 AM5/12/14
to camunda-...@googlegroups.com, tosc...@googlemail.com
Hi Tobias,

One approach I've tried is as follows:

You can't avoid the optimistic locking exception, as that's the nature of the engine. So when I have two parallel paths, I add sacrificial no-operation (NoOp) tasks before the join and make sure they are asynchronous with respect to the preceding task. When the optimistic locking exception is thrown, a NoOp task may be rerun instead of a 'real' task. Since the NoOp tasks do nothing, rerunning them has no consequence...
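
A sketch of this pattern in BPMN XML (namespaces omitted; ids and classes invented): the real work stays non-exclusive, and each branch ends in an asynchronous, side-effect-free task just before the join:

```xml
<serviceTask id="realWorkA" camunda:async="true" camunda:exclusive="false"
    camunda:class="org.example.RealWorkA" />
<!-- sacrificial NoOp: when the join throws an OptimisticLockingException,
     only this side-effect-free job is retried, not the real work -->
<serviceTask id="noOpA" camunda:async="true"
    camunda:expression="${true}" />
<sequenceFlow id="toJoinA" sourceRef="noOpA" targetRef="join" />
```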

regards

Rob

Bernd Rücker (camunda)

May 12, 2014, 10:27:15 AM5/12/14
to camunda-...@googlegroups.com, tosc...@googlemail.com

Hi Rob.

 

I like this pragmatic approach :-) Only downside is that you have to add a task into the BPMN which has no business meaning…

 

Cheers

Bernd

--
You received this message because you are subscribed to the Google Groups "camunda BPM users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to camunda-bpm-us...@googlegroups.com.
To post to this group, send email to camunda-...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/camunda-bpm-users/f3b0262e-92e9-42cd-9fb7-fdc9ebf3f0a6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

webcyberrob

May 12, 2014, 3:53:24 PM5/12/14
to camunda-...@googlegroups.com, tosc...@googlemail.com
Hi Bernd,

I agree, and I've espoused a principle that non-business tasks (e.g. implementation details) should not be visible in the BPMN model. This would be the exception. I have two compensating options: some modelling tools allow me to create a view such that these tasks are not visible to the business; the other option is to sell it to the business that NoOps are special...

regards

Rob

tosc...@googlemail.com

May 13, 2014, 5:04:33 AM5/13/14
to camunda-...@googlegroups.com, tosc...@googlemail.com
Hi Rob,

thanks for your response. I've tried it and can confirm that it works.

The NoOp task in the model is not nice, but a possible solution.

@Camunda: IMHO, this workaround using NoOp should be documented in the paragraph "It can be turned off if you are an expert and know what you are doing" of the section http://docs.camunda.org/latest/guides/user-guide/#process-engine-the-job-executor-exclusive-jobs

Regards
Tobias

Daniel Meyer

May 14, 2014, 7:17:51 AM5/14/14
to camunda-...@googlegroups.com, tosc...@googlemail.com
Hi Tobias,

"IMHO, this workaround using NoOp should be documented in the paragraph "It can be turned off if you are an expert and know what you are doing" of the section http://docs.camunda.org/latest/guides/user-guide/#process-engine-the-job-executor-exclusive-jobs"

Thanks for this feedback! Another alternative would be to support an asynchronous continuation on the parallel gateway. This way you would not need the NoOp tasks; instead, the retries could be performed in the gateway itself.

Cheers,
Daniel

galen...@gmail.com

Jul 30, 2014, 7:43:34 PM7/30/14
to camunda-...@googlegroups.com, tosc...@googlemail.com
Hi Daniel,

Thanks for CAM-2217. With the ability to now put the async continuation on the parallel gateway, I have a couple of questions:

1) We are only really talking about the joining gateway, right?

2) Since different parallel branches might take different amounts of time to complete, is the preferred approach to set retry strategy on the gateway to be:
a) a lot of retries?
b) long delays between retries?
c) combination of both?

Thanks,
Galen

Daniel Meyer

Jul 31, 2014, 12:27:15 PM7/31/14
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Hi Galen,

> 1)  We are only really talking about the joining gateway, right?

Yes, technically you could use it on both the forking and the joining parallel gateway, but the joining gateway is where it makes most sense. In addition, we also added the possibility to put asynchronous continuations after tasks:

http://docs.camunda.org/latest/guides/user-guide/#process-engine-transactions-in-processes-configuring-asynchronous-continuations

That may also be worth checking out.

> Since different parallel branches might take different amounts of time to complete, is the preferred approach to set retry strategy on the gateway to be:
>   a)  a lot of retries?
>   b)  long delays between retries?
>   c)  combination of both?

I am not sure yet, but I think it may depend on the number of executions joining (the number of incoming sequence flows of the gateway) and the probability of threads "arriving" at the gateway concurrently. Depending on that, you could configure more or fewer retries. On top of that, I would do some experiments.
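
If the retry behavior of the join job needs tuning, one option (assuming a sufficiently recent engine version) is Camunda's failedJobRetryTimeCycle extension element on the joining gateway; the value R5/PT10S below (five retries, ten seconds apart) is only an example:

```xml
<parallelGateway id="join" camunda:asyncBefore="true">
  <extensionElements>
    <camunda:failedJobRetryTimeCycle>R5/PT10S</camunda:failedJobRetryTimeCycle>
  </extensionElements>
</parallelGateway>
```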

Cheers,
Daniel



galen...@gmail.com

Jan 12, 2015, 6:24:17 PM1/12/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com

Hi,

I've now migrated to 7.2.0 and am trying the parallel (concurrent) activities use case again.
No matter what combinations of "async before" and "async after" I put on the gateways and tasks, it seems I get a database deadlock exception (I'm using MySQL).

After playing around with this for a while, it mostly seems to be a problem with a serviceTask calling a JavaDelegate.
When trying to reproduce this with script tasks, or with serviceTasks simply executing an expression, I was unable to.

In my process, I have a parallel gateway that calls three serviceTasks (JavaDelegates that each simply sleep for one second via Thread.sleep).
The serviceTasks are NOT exclusive, as I want them to run concurrently.

Below, is the stack trace I get.
I see similar posts about MSSQL, and perhaps Oracle getting deadlocks like this in the past, but I'm not sure if this is the same situation here.
Anyway, my understanding was that with CAM-2217, it would be possible to run three concurrent serviceTasks inside a parallel gateway, but so far I'm struggling to accomplish this,
unless it's a script task or a non-JavaDelegate serviceTask.

++++++++++++


Jan 12, 2015 3:02:16 PM org.camunda.bpm.engine.impl.interceptor.CommandContext close
SEVERE: Error while closing command context
org.apache.ibatis.exceptions.PersistenceException:
### Error updating database. Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
### The error may involve org.camunda.bpm.engine.impl.persistence.entity.ExecutionEntity.updateExecution-Inline
### The error occurred while setting parameters
### SQL: update ACT_RU_EXECUTION set REV_ = ?, PROC_DEF_ID_ = ?, ACT_ID_ = ?, ACT_INST_ID_ = ?, IS_ACTIVE_ = ?, IS_CONCURRENT_ = ?, IS_SCOPE_ = ?, IS_EVENT_SCOPE_ = ?, PARENT_ID_ = ?, SUPER_EXEC_ = ?, SUSPENSION_STATE_ = ?, CACHED_ENT_STATE_ = ? where ID_ = ? and REV_ = ?
### Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:26)
at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:154)
at org.camunda.bpm.engine.impl.db.sql.DbSqlSession.executeUpdate(DbSqlSession.java:231)
at org.camunda.bpm.engine.impl.db.sql.DbSqlSession.updateEntity(DbSqlSession.java:211)
at org.camunda.bpm.engine.impl.db.AbstractPersistenceSession.executeDbOperation(AbstractPersistenceSession.java:46)
at org.camunda.bpm.engine.impl.db.entitymanager.DbEntityManager.flush(DbEntityManager.java:265)
at org.camunda.bpm.engine.impl.interceptor.CommandContext.flushSessions(CommandContext.java:258)
at org.camunda.bpm.engine.impl.interceptor.CommandContext.close(CommandContext.java:187)
at org.camunda.bpm.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:106)
at org.camunda.bpm.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:32)
at org.camunda.bpm.engine.impl.jobexecutor.ExecuteJobsRunnable.executeJob(ExecuteJobsRunnable.java:79)
at org.camunda.bpm.engine.impl.jobexecutor.ExecuteJobsRunnable.run(ExecuteJobsRunnable.java:67)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1066)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4190)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4122)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2818)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2157)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1379)
at org.apache.ibatis.executor.statement.PreparedStatementHandler.update(PreparedStatementHandler.java:44)
at org.apache.ibatis.executor.statement.RoutingStatementHandler.update(RoutingStatementHandler.java:69)
at org.apache.ibatis.executor.SimpleExecutor.doUpdate(SimpleExecutor.java:48)
at org.apache.ibatis.executor.BaseExecutor.update(BaseExecutor.java:105)
at org.apache.ibatis.executor.CachingExecutor.update(CachingExecutor.java:71)
at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:152)
... 13 more

Thanks,
Galen

thorben....@camunda.com

Jan 13, 2015, 3:06:31 AM1/13/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Hi Galen,

Are you able to reproduce this in a unit test? If yes, could you please share it?

Thanks,
Thorben

galen...@gmail.com

Jan 13, 2015, 7:02:34 PM1/13/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Hi,

I think the problem only comes into play when execution.setVariable(...) is called in a JavaDelegate that is run concurrently with other JavaDelegates that also set variables on the same process instance. When I don't have any setVariable calls, I don't see the deadlocks. When I introduce the setVariable(..) the deadlocks come back.

+++++++++++++++++++++++++++++++

I was trying to create a basic unit test for this using H2, to see if it would fail with that, but I'm currently having issues (see https://groups.google.com/forum/#!topic/camunda-bpm-users/7fB4rxWEpVY).

In the meantime, this is a picture of the process:

https://raw.githubusercontent.com/druid77/camunda_unittests/master/src/test/resources/locking.png

Here's the BPMN:

https://github.com/druid77/camunda_unittests/blob/master/src/test/resources/locking.bpmn

And here's the JavaDelegate it's executing:

https://github.com/druid77/camunda_unittests/blob/master/src/test/java/org/camunda/bpm/unittest/SimpleJavaDelegate.java


Thanks,
Galen

galen...@gmail.com

Jan 14, 2015, 9:39:05 AM1/14/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
I updated my unit tests at:

https://github.com/druid77/camunda_unittests

If you run this test:

mvn -Dtest=SimpleTestCase#locking test

It will run the parallel test case with the H2 database. With the H2 database, I don't see any deadlock exceptions, but I do see OptimisticLockingExceptions on the asyncAfter of the serviceTasks. This happens presumably because the tasks are all trying to update variables for the same process instance at the same time, and this exception happens before it even reaches the closing parallel gateway.

++++++++++++++++++++++

If I take this same unit test and change the H2 database config in camunda.cfg.xml to point to my MySQL database, for example:

<property name="jdbcUrl" value="jdbc:mysql://localhost:3306/mydatabase" />
<property name="jdbcDriver" value="com.mysql.jdbc.Driver" />

Then this reproducibly causes the deadlock exception:

**** Running locking test...
PROCESS STARTED
Thread[pool-1-thread-1,5,main] -- SimpleJavaDelegate is running
Thread[pool-1-thread-3,5,main] -- SimpleJavaDelegate is running
Thread[pool-1-thread-2,5,main] -- SimpleJavaDelegate is running
Jan 14, 2015 6:35:11 AM org.camunda.bpm.engine.impl.interceptor.CommandContext close
P.S. I'm using Server version: 5.6.14 MySQL Community Server (GPL)

Hope this unit test helps.
Thanks,
Galen

thorben....@camunda.com

Jan 14, 2015, 12:09:34 PM1/14/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Hi Galen,

Thanks for providing the test case with which I am able to reproduce the problem (on MySQL 5.1). I ran the statement 'SHOW ENGINE INNODB STATUS' to get more details on the deadlock and got the following:

------------------------
LATEST DETECTED DEADLOCK
------------------------
150114 16:58:36
*** (1) TRANSACTION:
TRANSACTION 0 158643, ACTIVE 0 sec, OS thread id 9108 starting index read
mysql tables in use 1, locked 1
LOCK WAIT 8 lock struct(s), heap size 1216, 4 row lock(s), undo log entries 4
MySQL thread id 66, query id 4726 localhost 127.0.0.1 root Updating
update ACT_RU_EXECUTION set
      REV_ = 3,
      PROC_DEF_ID_ = 'locking:1:1203',
      ACT_ID_ = 'ParallelGateway_1',
      ACT_INST_ID_ = '1212',
      IS_ACTIVE_ = 0,
      IS_CONCURRENT_ = 0,
      IS_SCOPE_ = 1,
      IS_EVENT_SCOPE_ = 0,
      PARENT_ID_ = null,
      SUPER_EXEC_ = null,
      SUSPENSION_STATE_ = 1,
      CACHED_ENT_STATE_ = 16
    where ID_ = '1212'
      and REV_ = 2
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 451 n bits 184 index `PRIMARY` of table `process-engine`.`act_ru_execution` trx id 0 158643 lock_mode X locks rec but not gap waiting
Record lock, heap no 110 PHYSICAL RECORD: n_fields 19; compact format; info bits 0

*** (2) TRANSACTION:
TRANSACTION 0 158644, ACTIVE 0 sec, OS thread id 8144 starting index read, thread declared inside InnoDB 500
mysql tables in use 1, locked 1
8 lock struct(s), heap size 1216, 4 row lock(s), undo log entries 4
MySQL thread id 65, query id 4727 localhost 127.0.0.1 root Updating
update ACT_RU_EXECUTION set
      REV_ = 3,
      PROC_DEF_ID_ = 'locking:1:1203',
      ACT_ID_ = 'ParallelGateway_1',
      ACT_INST_ID_ = '1212',
      IS_ACTIVE_ = 0,
      IS_CONCURRENT_ = 0,
      IS_SCOPE_ = 1,
      IS_EVENT_SCOPE_ = 0,
      PARENT_ID_ = null,
      SUPER_EXEC_ = null,
      SUSPENSION_STATE_ = 1,
      CACHED_ENT_STATE_ = 16
    where ID_ = '1212'
      and REV_ = 2
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 451 n bits 176 index `PRIMARY` of table `process-engine`.`act_ru_execution` trx id 0 158644 lock mode S locks rec but not gap
Record lock, heap no 110 PHYSICAL RECORD: n_fields 19; compact format; info bits 0

So the problem seems to be the primary key index of the ACT_RU_EXECUTION table, which is partially locked when the process instance execution or some child execution is updated.

I'm not sure if there is anything we can do about this problem. As the retry mechanism works for this case, I do not think this is too big of a problem. Anyway, I have created issue [1].

As a side note: the reason why the process instance execution is updated after executing the delegate is the field CACHED_ENT_STATE_, which now indicates that the process instance has a variable. If you instantiate the process with a variable already set, like

    ProcessInstance processInstance = runtimeService().startProcessInstanceByKey("locking", Collections.singletonMap("var", null));

you should not see the deadlock exception in this specific case, as the process instance execution is then not updated after setting the variable in the delegate.
Of course that is not at all a general solution to the problem ;)

Cheers,
Thorben

[1] https://app.camunda.com/jira/browse/CAM-3318

galen...@gmail.com

Jan 14, 2015, 1:16:09 PM1/14/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Hi Thorben,

Thanks for looking into that,

I think my biggest concern now is that if the parallel tasks set variables (or do something else that might cause an "optimistic" DB conflict), then there will always be an OptimisticLockingException at the next async commit point BEFORE the joining parallel gateway. This will happen regardless of how long each task takes (see my updated unit tests where I introduce random waits). So basically CAM-2217 does no good in this situation in terms of avoiding re-running the tasks, because the OptimisticLockingException will always occur before the actual gateway (not on the gateway), and therefore what gets retried are the parallel tasks themselves. I can't even introduce NoOp tasks in this case to avoid running the tasks again, unless I'm missing something...

I think I'm making sense here :)

Thanks,
Galen

Daniel Meyer

Jan 15, 2015, 3:37:10 AM1/15/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Nice catch guys!

So there seem to be two issues resulting from the update of the cachedEntityState property:
  1. Deadlock on the primary key index (currently only reproduced on MySQL)
  2. Optimistic locking exception due to the update itself

Daniel

thorben....@camunda.com

Jan 15, 2015, 3:37:13 AM1/15/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Hi Galen,

The reason for the OptimisticLockingException is again the CACHED_ENT_STATE_ field, which indicates whether an execution has any variables (among other relations). Since the process instance has no variables before the tasks are executed but does afterwards, the CACHED_ENT_STATE_ is updated and an OptimisticLockingException is thrown. If you work around this by initializing the process instance with any variable, the tasks should not be re-run, and the OptimisticLockingExceptions should only occur at the joining gateway. Please correct me if you observe different behavior.

I'll discuss with the team later on what we can do about this and let you know.

Cheers,
Thorben

galen...@gmail.com

Jan 15, 2015, 12:13:59 PM1/15/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Hi,

I haven't tried the workaround of initializing the process with variables ahead of time, but in my processes, I don't always know ahead of time what variables will be set. It's true that in most cases I could figure this out, but it would be a painful process to identify them, and it might end up being hundreds of variables..

Thanks,
Galen

galen...@gmail.com

Jan 15, 2015, 12:24:36 PM1/15/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
I'm sorry, I re-read your post, and you said "any variable", not "all variables". So yes, this would be an easy workaround! I tried this approach out, and it seems to work.

Since this approach works, could Camunda by default always set at least one variable in the process? On the other hand, this might not always be necessary/wanted though...

Thanks,
Galen

thorben....@camunda.com

Jan 16, 2015, 2:56:42 AM1/16/15
to camunda-...@googlegroups.com, tosc...@googlemail.com, galen...@gmail.com
Hi Galen,

I agree that always initializing a process with a variable should not be the default behavior, since it's hard to explain to people who are not experiencing the issue we are discussing here. I rather regard this as a workaround.

One idea we have to fix this: it is not actually necessary to raise an OptimisticLockingException in this case, since the conflicting transactions perform exactly the same update and could be trivially "merged" (by dropping the later update). However, we are not sure how complex it is to implement a check that distinguishes between cases in which optimistic locking exceptions are required and cases in which merging is safe.

Cheers,
Thorben