
begin transaction


s...@nospamxyz.com
Feb 7, 2009, 5:27:28 AM
I have seen lots of code snippets out there for WAITFOR (RECEIVE ...),
always beginning with BEGIN TRANSACTION and ending with
COMMIT TRANSACTION, like snippet #1 below.

Can anyone explain the importance of having WAITFOR (RECEIVE ...)
inside a transaction? Why shouldn't we just consume the message and do
proper error handling instead?

I believe /*snippet 2*/ (please see below) would be a better approach to
avoid poison messages.

/* snippet 1 */
BEGIN TRANSACTION;

WAITFOR (RECEIVE TOP (1)
            @dh = conversation_handle,
            @mt = message_type_name
         FROM [ProcessingQueue]), TIMEOUT 1000;

WHILE (@dh IS NOT NULL)
BEGIN
    ....
END
...
COMMIT TRANSACTION


/* snippet 2 */
-- TRY & CATCH

WAITFOR (RECEIVE TOP (1)
            @dh = conversation_handle,
            @mt = message_type_name
         FROM [ProcessingQueue]), TIMEOUT 1000;

WHILE (@dh IS NOT NULL)
BEGIN
    BEGIN TRY
        /* do other stuff */
        BEGIN TRANSACTION
        ...
        ....
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        -- rollback if XACT_STATE() = -1
        -- log message in custom error table
    END CATCH
END
...
-- commented-out COMMIT TRANSACTION
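For concreteness, the CATCH block hinted at in snippet 2 could be sketched like this; the error-log table and its columns are hypothetical names, not part of the original post:

```sql
BEGIN CATCH
    -- A doomed transaction (XACT_STATE() = -1) can only be rolled back
    IF XACT_STATE() = -1
        ROLLBACK TRANSACTION;

    -- Log the failure (dbo.ErrorLog is a hypothetical table)
    INSERT INTO dbo.ErrorLog (ConversationHandle, MessageType, ErrorMessage, LoggedAt)
    VALUES (@dh, @mt, ERROR_MESSAGE(), SYSDATETIME());

    -- Optionally end the conversation with an error so the same
    -- message is not retried forever (one way to handle a poison message)
    END CONVERSATION @dh WITH ERROR = 1
        DESCRIPTION = 'Processing failed';
END CATCH
```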

Dan Guzman
Feb 7, 2009, 11:35:51 AM
> Can anyone explain the importance of having WAITFOR (RECEIVE ...)
> inside a transaction? Why shouldn't we just consume the message and do
> proper error handling instead?

The importance of the explicit transaction is that a RECEIVE otherwise runs
in its own auto-commit transaction and removes the message from the queue
immediately. If an error occurs during message processing (e.g. a server
crash), the message is lost and cannot be reprocessed. You might also be
able to gracefully handle some types of errors, or at least log them, and a
TRY/CATCH can facilitate this.

That said, there is nothing to prevent you from omitting the explicit
transaction and processing at your own risk. This might be appropriate if
the risk of losing messages is no big deal to you.
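The transactional receive pattern described above can be sketched as follows; the queue and variable names follow snippet 1, and the processing body is a placeholder:

```sql
DECLARE @dh uniqueidentifier, @mt sysname;

BEGIN TRY
    BEGIN TRANSACTION;

    -- The received message is locked, not deleted, until COMMIT
    WAITFOR (RECEIVE TOP (1)
                @dh = conversation_handle,
                @mt = message_type_name
             FROM [ProcessingQueue]), TIMEOUT 1000;

    IF @dh IS NOT NULL
    BEGIN
        -- ... process the message here ...
        PRINT 'processing ' + CAST(@dh AS varchar(36));
    END

    COMMIT TRANSACTION;  -- the message is removed from the queue only now
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;  -- the message returns to the queue for retry
    -- log ERROR_MESSAGE() here
END CATCH
```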

--
Hope this helps.

Dan Guzman
SQL Server MVP
http://weblogs.sqlteam.com/dang/


s...@nospamxyz.com
Feb 7, 2009, 1:44:02 PM
Dan, thanks for your insight! Highly appreciated. We certainly don't want to
lose messages.

I have a long-running stored proc which writes to a log table. If I do a
BEGIN TRANSACTION, then the log table is locked and other processes trying
to query the table can hit locking issues.

Please suggest alternatives/advise how to avoid locking and still have a
long-running transaction.

Thanks


BEGIN TRY
    BEGIN TRANSACTION

    -- RECEIVE the next batch id for processing from the queue

    -- Write to processing log table
    -- <long-running stored proc>

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    .....
END CATCH


Dan Guzman
Feb 7, 2009, 2:52:27 PM
> I have a long-running stored proc which writes to a log table. If I do a
> BEGIN TRANSACTION, then the log table is locked and other processes
> trying to query the table can hit locking issues.

It is unlikely that the entire table is locked. The blocking is more likely
because queries are trying to access uncommitted data. This problem can be
addressed in a number of ways:

1) Use the READPAST hint in the log table queries so that locked rows are
skipped.

2) Construct the log table SELECT queries in such a way that they use
indexes that don't touch newly inserted data.

3) ALTER the database to turn on READ_COMMITTED_SNAPSHOT.
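Options 1 and 3 could look roughly like this; the table, column, and database names are placeholders, not from the original thread:

```sql
-- Option 1: skip rows still locked by the long-running transaction
SELECT LogId, LogMessage
FROM dbo.ProcessingLog WITH (READPAST)
WHERE LoggedAt >= DATEADD(HOUR, -1, GETDATE());

-- Option 3: readers see the last committed row version instead of blocking
-- (the ALTER requires no other active connections to the database)
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
```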


s...@nospamxyz.com
Feb 8, 2009, 7:34:03 AM
This helps a lot. Thanks, Dan!

