We ran into a problem implementing TransactionScope in our
application, which we use to minimize the risk of inconsistency
across the different applications that communicate via WCF.
The system we are developing consists of the following
components and execution steps:
1. In one application (Application A), the user enters purchase
requests.
2. When a request is accepted, Application A calls a workflow
engine (Application B) via WCF, asking it to create a workflow
instance.
3. Application B creates the instance and replies with success.
4. After receiving the successful response, Application A saves all
of the request's data.
5. Later, when Application B processes the instance it created, it
calls Application A back via WCF.
6. Application A performs the corresponding inserts (4 inserts).
7. The transaction commits and is then closed.
The deadlock occurs in step 6, when two or more requests are
processed in parallel (one of the requests completes successfully).
The TransactionScope is opened in BeginRequest and completed in
EndRequest, where the changes are committed.
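For context, the per-request pattern described above usually looks something like the following Global.asax sketch (the names and details here are assumptions, not the poster's actual code). Note that the parameterless TransactionScope constructor runs the transaction under the Serializable isolation level, which is relevant to the deadlock behavior discussed in this thread:

```csharp
using System;
using System.Transactions;
using System.Web;

// Sketch (assumed names) of the per-request TransactionScope pattern:
// open the scope in BeginRequest, complete and dispose it in EndRequest.
public class Global : HttpApplication
{
    private const string ScopeKey = "RequestTransactionScope";

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Caution: the parameterless constructor uses
        // IsolationLevel.Serializable, the System.Transactions default.
        HttpContext.Current.Items[ScopeKey] = new TransactionScope();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var scope = HttpContext.Current.Items[ScopeKey] as TransactionScope;
        if (scope != null)
        {
            scope.Complete();  // vote to commit
            scope.Dispose();   // the actual commit happens on Dispose
        }
    }
}
```

With this shape, every insert issued during the request (including the four inserts of step 6) runs inside one ambient transaction that only commits at EndRequest, so locks are held for the whole request lifetime.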
The tests we ran were:
In the Profiler we analyzed the processes involved in the deadlock;
it always occurs on the first insert that step 6 has to perform.
We built a small application containing the inserts that generate
the deadlock, wrapped in a TransactionScope, and ran two instances
of it simultaneously.
We ran the same inserts inside a transaction, this time from
Management Studio.
We could not reproduce the problem in any of these tests.
Finally, we managed to run step 6 in isolation (closing all
previous connections) from an external application that stands in
for Application B, and verified that the error still occurs, so
none of the operations prior to step 6 are responsible for the
deadlock.
Any lifelines, parachutes, or other help are welcome.
Regards!
--
Unsubscribe: altnet-argenti...@googlegroups.com
But the choice of Serializable as the default isolation level is much worse. In SQL Server, SERIALIZABLE transactions are rarely useful and extremely deadlock-prone. Put another way, when the default READ COMMITTED isolation level does not provide the right isolation semantics, SERIALIZABLE is rarely any better and often introduces severe blocking and deadlocking problems. And since TransactionScope is the recommended way to manage transactions in .NET, its default constructor is setting up SQL Server applications to be deadlock-prone. In fact, I was prompted to write this post after working with some customers who were getting deadlocks in their application, and who had no idea that they were running transactions under the SERIALIZABLE isolation level.
So please, copy this C# code:
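The code itself was not included in this excerpt; what follows is a sketch of the kind of helper such a post recommends (the class and method names are assumptions): a factory to use in place of `new TransactionScope()` so the transaction runs under READ COMMITTED instead of the Serializable default.

```csharp
using System.Transactions;

public static class TransactionUtils
{
    // Drop-in replacement for "new TransactionScope()" that avoids the
    // Serializable default by explicitly requesting ReadCommitted.
    public static TransactionScope CreateTransactionScope()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.MaximumTimeout
        };
        return new TransactionScope(TransactionScopeOption.Required, options);
    }
}
```

Usage is the same as a plain scope: `using (var scope = TransactionUtils.CreateTransactionScope()) { /* inserts */ scope.Complete(); }`.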
Some of the overloaded constructors of TransactionScope accept a structure of type TransactionOptions to specify an isolation level, in addition to a timeout value. By default, the transaction executes with isolation level set to Serializable. Selecting an isolation level other than Serializable is commonly used for read-intensive systems. This requires a solid understanding of transaction processing theory and the semantics of the transaction itself, the concurrency issues involved, and the consequences for system consistency.
In addition, not all resource managers support all levels of isolation, and they may elect to take part in the transaction at a higher level than the one configured.
Every isolation level besides Serializable is susceptible to inconsistency resulting from other transactions accessing the same information. The difference between the different isolation levels is in the way read and write locks are used. A lock can be held only while the transaction accesses the data in the resource manager, or it can be held until the transaction is committed or aborted. The former is better for throughput, the latter for consistency. The two kinds of locks and the two kinds of operations (read/write) give four basic isolation levels. See IsolationLevel for more information.
When using nested TransactionScope objects, all nested scopes must be configured to use exactly the same isolation level if they want to join the ambient transaction. If a nested TransactionScope object tries to join the ambient transaction yet it specifies a different isolation level, an ArgumentException is thrown.
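That ArgumentException is easy to demonstrate; this minimal console sketch nests a Serializable scope inside a ReadCommitted one:

```csharp
using System;
using System.Transactions;

class NestedScopeDemo
{
    static void Main()
    {
        var outerOptions = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted
        };
        using (var outer = new TransactionScope(
            TransactionScopeOption.Required, outerOptions))
        {
            try
            {
                // Tries to join the ambient (ReadCommitted) transaction
                // with a different isolation level: throws ArgumentException.
                var innerOptions = new TransactionOptions
                {
                    IsolationLevel = IsolationLevel.Serializable
                };
                using (var inner = new TransactionScope(
                    TransactionScopeOption.Required, innerOptions))
                {
                    inner.Complete();
                }
            }
            catch (ArgumentException)
            {
                Console.WriteLine("isolation level mismatch");
            }
            outer.Complete();
        }
    }
}
```

Using `TransactionScopeOption.RequiresNew` on the inner scope avoids the exception, because the inner scope then starts its own transaction instead of joining the ambient one.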
At the Serializable level an application must be prepared to receive a "Cannot serialize access" error and to undo and retry the transaction. Similar extra coding is needed in other database management systems to manage deadlocks. http://en.wikipedia.org/wiki/Software_transactional_memory
In computer science, software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. It is an alternative to lock-based synchronization. A transaction in this context is a piece of code that executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions.