
Shared locks and deadlock during


Eugene Korolkov

Oct 12, 2004, 10:54:52 AM
Hi gurus,

We are experiencing a deadlock situation on a datarows-locked table, with a 'shared row' lock involved,
in an sp that does ONLY deletes and inserts on that table
(without any select from that table anywhere in those insert/delete stmts).

Could somebody explain where the shared lock comes from? Parallel processing is off:
'max online engines' = 2, but 'max parallel degree' = 1
and 'max scan parallel degree' = 1; 'read committed with lock' = 0, isolation level = 1.
I did not see it running in the debugger either, and switching to APL avoids the problem.

Is this part of the datarows implementation, like some internal latch?
I could not find any explanation of this in the docs.
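(For reference, the option settings mentioned above can be verified with sp_configure; these are standard ASE configuration parameter names, and it is the run value that matters:)

sp_configure "max online engines"
go
sp_configure "max parallel degree"
go
sp_configure "max scan parallel degree"
go
sp_configure "read committed with lock"
go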

Regards,
Eugene

01:00000:00046:2004/10/11 17:48:29.09 server  Deadlock Id 25 detected                                                               
Deadlock Id 25: detected. 1 deadlock chain(s) involved.                                                                             
                                                                                                                                    
Deadlock Id 25: Process (Familyid 0 21) (suid 1) was executing a INSERT command in the procedure 'build_pos_sum_type_common'.       
SQL Text: exec dbo.test_rll_deadlock_fxrates_euro "I00006024",20                                                                    
                                                                                                                                    
Deadlock Id 25: Process (Familyid 0 46) (suid 1) was executing a DELETE command in the procedure 'build_pos_sum_type_common'.       
SQL Text: exec dbo.test_rll_deadlock_fxrates_euro "I00006024",100                                                                   
                                                                                                                                    
Deadlock Id 25: Process (Familyid 0, Spid 46) was waiting for a 'shared row' lock on row 1 page 178945 of the 'pos_sum_by_account_va
r' table in database 7 but process (Familyid 0, Spid 21) already held a 'exclusive row' lock on it.                                 
Deadlock Id 25: Process (Familyid 0, Spid 21) was waiting for a 'shared row' lock on row 0 page 178945 of the 'pos_sum_by_account_va
r' table in database 7 but process (Familyid 0, Spid 46) already held a 'exclusive row' lock on it.                                 
                                                                                                                                    
Deadlock Id 25: Process (Familyid 0, 21) was chosen as the victim. End of deadlock information.

Jeff Tallman

Oct 13, 2004, 8:44:17 AM

If isolation level 3 is on, the next key locks (shared locks) invoked by
range locks and infinity locks can cause a lot of problems...but without
seeing the complete list of locks held/requested (and possibly the proc
text) it would be difficult to isolate exactly.

Eugene Korolkov

Oct 14, 2004, 12:04:03 PM
to Jeff Tallman
Jeff,

1. Everything is under isolation level 1 (the default).

2. I am reposting below an e-mail (for which I got no answer) with the text of a sample sp (very simple),
which you can run simultaneously from different sessions. It is different from the real one,
and different from the originally posted source of the question, but it illustrates the problem.

It looks like the duplicate-key situation (for RLL, but we saw it even for DPL)
is now considered by Sybase to be normal behavior (for the history see
http://info.sybase.com/resolution/detail.stm?id_number=10908379
and pages 32-33, Chapter 2 of the P&T guide; BTW what does "insert queries" mean there?
insert with select? As you can see there is no select at all in that sp; also see the exception noted there).
But taking a shared lock before an exclusive one (moreover with 'read committed with lock' = 0, the default)
looks of course very strange, is in contradiction with the whole of transaction theory, and as a result
produces frequent deadlocks.

As an implication, after switching to the RLL scheme (with the purpose of reducing APL deadlocks !!!)
our biggest client is experiencing 2 big problems:
1. The duplicate-key situation (error 2601)
2. Additional frequent deadlocks (where instead of the "normal" exclusive and update
   locks, a shared lock is present)

Switching back to APL avoids both of the above problems.
Sybase TS is not very supportive; they keep saying that this is normal
behavior for the RLL scheme and suggested using isolation level 3 (!),
which reduced 1. but not 2., plus error 12301 sometimes.
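(For reference, the switch back to APL is a plain alter table; note that changing between locking schemes rebuilds the table, so it needs free space and a quiet period. Table name taken from the deadlock output above:)

alter table pos_sum_by_account_var lock allpages
go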

Regards,
Eugene
 

Subject: RLL does not work (ASE bug ?)
Date: 31 Aug 2004 12:26:55 -0700
From: Eugene Korolkov
Newsgroups: sybase.public.ase.general




Hi gurus, 

We are having a strange situation where datarow-level locking does not work correctly
on DOL tables, and it looks like an ASE bug. Maybe it is already fixed and related to
CRs like CR 255331, bug 169782? What EBF?

I found case 10908379, but I am not sure it is the same. In any case, I absolutely
cannot accept as "expected behavior" that a simple delete/insert of the same row,
protected by a transaction and run simultaneously from different sessions,
produces a duplicate key error or, moreover, a deadlock (on a shared lock ???).

The sp build_fxrates_euro below (a simplified version of the real sp, which just deletes and re-inserts 1 and the same row),
running from 2 DBArtisan sessions simultaneously, produced duplicate-key errors on ASE 12.5.0.1
(Adaptive Server Enterprise/12.5.0.1/EBF 10423 ESD#2/P/Sun_svr4/OS 5.8/rel12501/1792/32-bit/FBO/Tue Jul 30 07:58:16 2002)
and on ASE 11.9.2 (Adaptive Server Enterprise/11.9.2.3/1140/P/SWR 9010 ROLLUP/Sun_svr4/OS 5.5.1/FBO/Sun May  7 09:21:26 2000),
and in addition produced deadlocks with shared locks (?) like the one below:

After switching the table "fxrates_euro" to the APL scheme it shows normal behavior;
set transaction isolation level 3 also helps. Switching to DPL, or 'read committed with lock' = 1, or
a non-clustered index does not help.

The loop (while (1=1)) was added only to simulate
a multi-user environment quickly.

Running the same sp build_fxrates_euro (without the loop) under the sp debugger from two sessions
simultaneously shows that the 2nd session is blocked correctly
(with begin/commit tran) until the 1st commits, but then the delete in the 2nd session appears
to skip the actual delete for some reason and tries to insert a duplicate key (with a deadlock sometimes)!
At the same time, the sp itself does not create duplicate keys, only when executed in parallel
on the DOL fxrates_euro table.

Regards, 
Eugene 

  01:00000:00048:2004/08/31 12:57:58.76 server  Deadlock Id 15 detected                                                               
  Deadlock Id 15: detected. 1 deadlock chain(s) involved.                                                                             
                                                                                                                                      
  Deadlock Id 15: Process (Familyid 0 14) (suid 1) was executing a INSERT command in the procedure 'test_rll_loop'.                   
  SQL Text: exec test_rll_loop                                                                                                        
                                                                                                                                      
  Deadlock Id 15: Process (Familyid 0 48) (suid 1) was executing a DELETE command in the procedure 'test_rll_loop'.                   
  SQL Text: exec test_rll_loop                                                                                                        
                                                                                                                                      
  Deadlock Id 15: Process (Familyid 0, Spid 48) was waiting for a 'update row' lock on row 1 page 28208 of the 'fxrates_euro' table in
   database 7 but process (Familyid 0, Spid 14) already held a 'exclusive row' lock on it.                                            
  Deadlock Id 15: Process (Familyid 0, Spid 14) was waiting for a 'shared row' lock on row 0 page 28208 of the 'fxrates_euro' table in
   database 7 but process (Familyid 0, Spid 48) already held a 'exclusive row' lock on it.                                            
                                                                                                                                      
  Deadlock Id 15: Process (Familyid 0, 48) was chosen as the victim. End of deadlock information.


IF OBJECT_ID('dbo.fxrates_euro') IS NOT NULL 
BEGIN 
    DROP TABLE dbo.fxrates_euro 
    IF OBJECT_ID('dbo.fxrates_euro') IS NOT NULL 
        PRINT '<<< FAILED DROPPING TABLE dbo.fxrates_euro >>>' 
    ELSE 
        PRINT '<<< DROPPED TABLE dbo.fxrates_euro >>>' 
END 
go 
CREATE TABLE dbo.fxrates_euro 
( 
    ccy_code         char(3)  NOT NULL, 
    mult_div_ind     char(1)  NOT NULL, 
    fx_rate          float    NOT NULL, 
    market           char(2)  NOT NULL, 
    last_update_time char(12) NULL 
) 
LOCK DATAROWS 
go 
IF OBJECT_ID('dbo.fxrates_euro') IS NOT NULL 
    PRINT '<<< CREATED TABLE dbo.fxrates_euro >>>' 
ELSE 
    PRINT '<<< FAILED CREATING TABLE dbo.fxrates_euro >>>' 
go 
CREATE UNIQUE CLUSTERED INDEX index1 
    ON dbo.fxrates_euro(ccy_code) 
  WITH IGNORE_DUP_KEY 
go 
IF EXISTS (SELECT * FROM sysindexes WHERE id=OBJECT_ID('dbo.fxrates_euro') AND name='index1') 
    PRINT '<<< CREATED INDEX dbo.fxrates_euro.index1 >>>' 
ELSE 
    PRINT '<<< FAILED CREATING INDEX dbo.fxrates_euro.index1 >>>' 
go 
IF OBJECT_ID('dbo.test_rll_loop') IS NOT NULL 
BEGIN 
    DROP PROCEDURE dbo.test_rll_loop 
    IF OBJECT_ID('dbo.test_rll_loop') IS NOT NULL 
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.test_rll_loop >>>' 
    ELSE 
        PRINT '<<< DROPPED PROCEDURE dbo.test_rll_loop >>>' 
END 
go 
create proc test_rll_loop /*(@wt char(8))*/ as
while (1=1)
  begin
    begin transaction
    -- waitfor time @wt --"11:41:00"            /* Uncommented out 08.18.04 - EK */
    delete from fxrates_euro
    /***********************************************/
    /* fake_ccy                                    */
    /***********************************************/
    insert into fxrates_euro
    values (" ", "M", 0, "31", "0")
    commit transaction                          /* Uncommented out 08.18.04 - EK */
  end

go 
IF OBJECT_ID('dbo.test_rll_loop') IS NOT NULL 
    PRINT '<<< CREATED PROCEDURE dbo.test_rll_loop >>>' 
ELSE 
    PRINT '<<< FAILED CREATING PROCEDURE dbo.test_rll_loop >>>' 
go 
--EXEC sp_procxmode 'dbo.test_rll_loop','unchained' 
--go 
  


Jeff Tallman

Oct 17, 2004, 10:28:34 PM
Also replied via email, but posted here for sake of others....

Okay - a bit of history. In ASE 11.9.2, Sybase introduced DOL (RLL &
DPL) locking. Along with that implementation, Sybase introduced a
'feature' called 'uncommitted insert bypass/update return' at isolation
levels 1 & 2. This feature can be disabled via trace flag 693 or you
can see when it happens by enabling trace 694. A good example of what
happens can be seen if you do the following (two isql sessions required)

*** ISQL session #1 ****
begin tran
insert into <rll_table_name> values (...)
go

*** ISQL session #2 ****
select * from <rll_table_name>
go

*** ISQL session #1 ****
rollback tran
go

Then try it with an APL table. You will see that with DOL locking, the
select * in session #2 is not blocked (nor are the pending inserts read;
this is not a dirty read, it is a deliberate skipping of the row).
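A concrete version of the above, with a throwaway table (rll_demo is just an illustrative name):

*** ISQL session #1 ****
create table rll_demo (k int not null, v varchar(20) null, primary key (k))
lock datarows
go
insert into rll_demo values (1, "committed row")
go
begin tran
insert into rll_demo values (2, "uncommitted row")
go

*** ISQL session #2 ****
select * from rll_demo
go
-- at isolation level 1 on a datarows-locked table, this returns only row 1;
-- the pending insert of row 2 is skipped rather than blocked on

*** ISQL session #1 ****
rollback tran
go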

Now then, why does it apply in your case?

Because of the repeating sequence:

begin tran
delete row
insert row
commit tran

Since you are repeatedly attempting the *SAME* rows, you will be
attempting to insert the same values in multiple transactions. Because
parallel DSI's are implemented in the fashion they are, here is what
happens (and you can do this in isql sessions if you want)

**** session #1 ****
begin tran -- tran begins
delete row -- row is deleted/exclusive lock held on row
insert row -- row is inserted/exclusive delete lock is replaced by
exclusive insert lock

**** session #2 ****
begin tran -- tran begins
delete row -- returns 0 rows affected, as the uncommitted insert above is
skipped
insert row -- throws a duplicate key error, as the uncommitted insert above
has the same index values; the index uses a latch, hence the values are visible


Now, then, as mentioned earlier, isolation level 3 disables this feature
- however, it inflicts range/infinity locks on DOL tables, which can
increase contention considerably. You may want to alter the proc and
add the following line near the top:

set transaction isolation level 3
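Applied to the test_rll_loop proc posted earlier in the thread, that change would look roughly like this (a sketch only; remember that level 3 brings the range/infinity lock contention mentioned above):

create proc test_rll_loop as
set transaction isolation level 3  -- disables the uncommitted-insert skip for this proc
while (1=1)
  begin
    begin transaction
    delete from fxrates_euro
    insert into fxrates_euro
    values (" ", "M", 0, "31", "0")
    commit transaction
  end
go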

Also, much of your email below was confusing, as many of the sentences
left out information that would have made helping you easier. For
example, you make a reference to "see page32-33 Chapter 2, P&T, BTW what does
it mean 'insert queries'?", and I cannot find a page 32-33 that makes
sense in the ASE P&T manuals; pages 32-33 of the RS P&T whitepaper (v1.9
from CodeXchange) discuss repagent tuning configuration
parameters. I am completely unsure of what you were intending to
say/ask with "but taking shared lock before exclusive (moreover with the
read committed with lock = 0, default) looks of course very strange and
in contradiction with the whole transaction theory and as result
produced frequent deadlocks." I believe you were referring to the
sp_configure "read committed with lock" parameter, but I am not sure
about the latter part of the question/statement, particularly whether the
"frequent deadlocks" are part of the problem, unless you are saying
that isolation level 3 causes frequent deadlocks as compared to
isolation level 1. Were you suggesting that you altered sp_configure
to set the whole server to isolation level 3???? If so, then
definitely that would be a problem....

My quick comments for fixes:

#1) Try adding "set transaction isolation level 3" to the proc below.

#2) If the proc is not the issue and the deletes/inserts are sent by the
application directly, try trace flag 693 - my docs say this is a boot-time
flag, so you will need to boot with -T693.

If you (anyone in this case - not just Eugene) have read this far, do
you think this should be a session level setting (as in 'set
uncommitted_insert_bypass {on|off}') as well as sp_configure????

> Deadlock Id 15: Process (Familyid 0, Spid 14) was waiting for a 'shared row' lock on row 0 page 28208 of the 'fxrates_euro' table in
> database 7 but process (Familyid 0, Spid 48) already held a 'exclusive row' lock on it.

Michael Peppler

Oct 18, 2004, 1:59:18 AM
On Sun, 17 Oct 2004 19:28:34 -0700, Jeff Tallman wrote:

>
>
> Also, much of your below email was confusing as many of the sentences
> left out information that would have made helping you easier. For
> example, you make a ref to "see page32-33 Chapter 2, P&T,BTW what does
> it mean "insert queries" ?, and I can not find a page 32-33 that makes
> sense in the ASE P&T manuals; and pg 32-33 of RS P&T whitepaper

I think Eugene is talking about the ASE P&T guide - in particular the
"locking" guide, chapter 2.


Thanks for posting this to the group, BTW - it will probably help others
(me?!) at some point in the future!



> If you (anyone in this case - not just Eugene) have read this far, do
> you think this should be a session level setting (as in 'set
> uncommitted_insert_bypass {on|off}') as well as sp_configure????

In general I think that it is useful to be able to enable/disable this sort
of behavior on a per-session basis, so I'd be in favor of a "set" option.

Michael
--
Michael Peppler Data Migrations, Inc.
mpep...@peppler.org http://www.peppler.org/
Sybase T-SQL/OpenClient/OpenServer/C/Perl developer available for short or
long term contract positions - http://www.peppler.org/resume.html

Frank_Hamersley

Oct 18, 2004, 2:24:32 AM
"Jeff Tallman" wrote

>
> #2) If the proc is not the issue and the deletes/inserts are sent by the
> application directly, try trace 693 - my docs say this is a boottime -
> so you will need to boot with -T693.
>
> If you (anyone in this case - not just Eugene) have read this far, do
> you think this should be a session level setting (as in 'set
> uncommitted_insert_bypass {on|off}') as well as sp_configure????

Whilst this would appear to be a relatively uncommon problem, I agree with
Michael Peppler that a session-level set option would be the best
arrangement.

In a case where a developer needs to use this option, it would then be
feasible to switch it on just for the identified problem statements whilst leaving
the remainder of the application code unaffected by any change to a system-wide
setting.

Cheers, Frank.


Eugene Korolkov

Oct 18, 2004, 5:48:34 PM
to Jeff Tallman
Jeff,

Some thoughts about your explanation:

'uncommitted insert bypass/update return'
Could you please point me to some Sybase docs with that
phrase? A search could not find it at all.

 

parallel DSI's are implemented in the fashion they are


I do not quite understand this. I am not talking about RS at all.

 

delete row -- returns 0 rows affected as the uncommited insert above is
  skipped


I think that is not correct, for at least 3 reasons:
1. The delete (in session #2) should be blocked by the delete (in session #1),
    because it is not committed yet in session #1 and the isolation level = 1.

2. Even if for some reason (?) 1. did not happen, then the delete (in session #2)
   should be blocked by the insert (in session #1), as this is documented as an exception
   (again see pages 32-33, Chapter 2 "Locking overview",
    "Skipping uncommitted inserts during deletes, updates and inserts",
    ASE P&T Guide, PDF format,
    and/or http://info.sybase.com/resolution/detail.stm?id_number=10908379):

   "... The only exception to this rule is if the transaction doing the uncommitted
    insert was overwriting an uncommitted delete of the same row done
    earlier by the same transaction.
    In this case updates, and deletes will block on the uncommitted inserted row..."
    (The first 2 lines relate to session #1, the 3rd one to session #2.)
    I think from the context it presumes that "the same row" means a row
    with the same values.
    Every insert in that sp follows a delete stmt for the same row.

    For some inconceivable reason neither 1. nor 2. happened,
    and we are facing the situation described on p. 33:
    "an insert query, upon encountering an uncommitted inserted row with the same
     key value, will raise a duplicate key error if the index is unique."

3. Running those 2 sessions in the debugger
    (DBArtisan, which uses Sybase's debugger API)
    definitely shows behavior like 1., but then suddenly (after the commit in session #2)
    a duplicate-key error is produced.

The doc itself is not clear:
1. It says the datarows scheme, but we saw this for datapages too
    (though never for allpages!)
2. "The same row" (it looks like this means a row with the same values, not one in the same place)
3. Some confusing terminology like "insert query". What does it mean:
    1. any insert stmt, or 2. an insert stmt with a select stmt?

set transaction isolation level 3


It definitely helps, but I think it should be considered a workaround
for an ASE bug, because if we do not have any real select stmts written by a developer
(as opposed to some internal Sybase reads), we should not have to bother with things like isolation level 3 at all, and we should not see any "shared" locks/deadlocks either; or at least it should be reflected in the docs.

In contradiction with the usual Sybase recommendation,
our experiments showed that the combination of the datarows scheme and isolation level 3
on average looks OK (at least in our environment, for some set of sps):
no duplicate-key errors and almost no deadlocks.
But sometimes the lock time itself was far too long, and moreover we would have to add
the setting to many sps.

So we have asked the client to go back to APL (at least for now), which produces
neither the duplicate keys nor the deadlocks that show shared locks where only insert/delete
stmts happened under isolation level 1.

BTW, that client is bigger than CSFB, for which all that non-blocking behavior
was implemented, as http://info.sybase.com/resolution/detail.stm?id_number=10908379
explains.

 

"frequent deadlocks" is part of the problem unless you are referring to
  that isolation level 3 causes frequent deadlocks as compared to
  isolation level 1.


I was talking about the additional deadlocks that occur when the server, for some reason, under the RLL scheme first acquires a shared lock at isolation level 1 even though there are NO select stmts at all
in the whole transaction.

I found an old e-mail with almost the same problem (a shared lock before an exclusive one), only for a simple update (again without any select stmt).
(The paradox of the situation is that I was the only person who responded to that poster, and I said that I could not see that behavior in our environment.) See below.

Subject: Concurrent identical update statements cause deadlocks on datarow locking table
Date: Fri, 5 Oct 2001 09:15:24 -0400
From: mathias....@exmentis.se
Newsgroups: sybase.public.ase.performance+tuning



Unfortunately, I do not know how it was resolved.

 

If you (anyone in this case - not just Eugene) have read this far, do
  you think this should be a session level setting (as in 'set
  uncommitted_insert_bypass {on|off}') as well as sp_configure??


I think so, but better still, apparently, would be to implement MVC (ASAP :-))

Regards,
Eugene

Jeff Tallman

Oct 21, 2004, 11:30:17 AM
Doc para's:

(ASE 12.5.1 P&T Locking - Chap 2, Lock types and duration...)

Skipping uncommitted inserts during selects
Select queries on data-only-locked tables do not block on uncommitted
inserts when the following conditions are true:
- The table uses datarows locking, and
- The isolation level is 1 or 2.
Under these conditions scans will skip such a row.

The only exception to this rule is if the transaction doing the
uncommitted insert was overwriting an uncommitted delete of the same row
done earlier by the same transaction. In this case, scans will block on
the uncommitted inserted row.

And later in the discussion on pseudo column level locking (same chap)

During concurrent transactions that involve select queries and update
commands, pseudo column-level locking can allow some queries to return
values from locked rows, and can allow other queries to avoid blocking
on locked rows that do not qualify. Pseudo column-level locking can
reduce blocking:
- When the select query does not reference columns on which there is an
uncommitted update.
- When the where clause of a select query references one or more columns
affected by an uncommitted update, but the row does not qualify due to
conditions in other clauses.
- When neither the old nor the new value of the updated column qualifies, and
an index containing the updated column is being used.

....so that's the doc part....

Okay, it is true that the deletes block - even with the blocking
optimizations enabled (the uncommitted insert bypass feature). I tested it
with the following logic:

Session #1 (in tempdb)
create table rll_test (
rowid int not null,
descr varchar(255) null,
primary key (rowid)
)
lock datarows
go
insert into rll_test values(1,"this is row 1")
insert into rll_test values(2,"this is row 2")
insert into rll_test values(3,"this is row 3")
insert into rll_test values(4,"this is row 4")
insert into rll_test values(5,"this is row 5")
insert into rll_test values(6,"this is row 6")
go
begin tran
delete rll_test where rowid=5
insert into rll_test values (5,"this is my new row 5")
go

session #2 (in tempdb)
begin tran
delete rll_test where rowid=5
go

at that point it blocks and the 694 trace shows:
ACC94 :16: no opt

whereas with a skipped insert/select scenario, it shows:
ACC94: 2: skip unc ins

So - I took a closer look at your example and noticed the following:
you are doing a delete of all rows (no where clause)... so I altered my
test likewise and still got blocking - this time on an address lock
(expected): "ACC94: 3: addr/no upd". I also noticed you have
ignore_dup_key on for the index - again, a quick test shows that
ignore_dup_key kicks in even when an uncommitted insert with the same
values is pending... All my testing was on 12.5.2 - I note that you have
12.5.0.1 and 11.9.2 - have you tried your test on a 12.5.2 server? Do
you see the deadlocks without the debugger? (Your post doesn't say
so.) If I run your exact test, all I get is endless "duplicate key
ignored" warnings - no deadlocks, etc. Also, I note that in the
original deadlock, different pages are referenced - which suggests that your
test doesn't really mimic the problem - do you have the original source
code of the real proc???

IRT ignore_dup_key - it is a worthless crutch that saves you 1 SELECT
statement and inflicts horrible P&T problems on the server due to CLRs
added to the transaction log - don't use it in a production application
where performance matters.

IRT MVC, that would make matters worse - in a banking application, you
would have to add a lot of logic to triggers, etc., just to keep me from
overdrawing my account if my wife and I made simultaneous withdrawals.
Personally, I think there are a lot of other features that could be added
to ASE much sooner than MVC that would benefit others a lot more.

> for the ASE bug, cause if we do not have any *real select stmts* written

> exclusive), only just for a simple update (again without any select
> stmt).

fluxe...@gmail.com

Feb 27, 2014, 7:25:05 AM
It's almost 10 years later, but with ASE 15.7 SP102 we ran into the same problem as posted by Eugene. As this thread is not conclusive, I've slightly changed the above example to get a simple reproducible scenario.

We use two tables: rll_source, which will be the source table with 10 rows,
and rll_test, which is the table that is deleted and repopulated in one transaction:

create table rll_test (
rowid int not null,
descr varchar(255) null,
primary key (rowid)
)
lock datarows
go

create table rll_source (
rowid int not null,
descr varchar(255) null,
primary key (rowid)
)
lock datarows
go

-- Populate the source table with 10 rows.
insert into rll_source
select number, 'this is row ' + convert(varchar(4), number)
from master..spt_values where type = 'P' and number between 1 and 10
order by number
go

Then save a sql script with a loop that deletes the entire rll_test table and repopulates it from rll_source in one transaction, like this:

set flushmessage on
go
while 1=1
begin
begin transaction

delete rll_test
if @@error <> 0 break

insert rll_test
select * from rll_source
if @@error <> 0 break

commit
end
go

Run the script with isql in 2 parallel sessions.
Both sessions are able to perform several thousand loops (about 7000 and 500 in our run), but then one of the sessions breaks on a duplicate key row error:

...
(10 rows affected)
(10 rows affected)
(10 rows affected)
(0 rows affected)
Msg 2601, Level 14, State 6:
Server 'FLUXERT', Line 8:
Attempt to insert duplicate key row in object 'rll_test' with unique index 'rll_test_7040025082'
Command has been aborted.

With this example it seems very weird that this could happen:
the isolation level in the example is read committed,
and no dbcc traceflags are set on the server.
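(Following Jeff Tallman's earlier suggestion in this thread, one thing to try with this repro is setting isolation level 3 at the top of the script, which disables the uncommitted-insert bypass at the cost of range/infinity locks - a sketch, not a verified fix on 15.7:)

set flushmessage on
set transaction isolation level 3
go
while 1=1
begin
begin transaction

delete rll_test
if @@error <> 0 break

insert rll_test
select * from rll_source
if @@error <> 0 break

commit
end
go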