
Insert Trigger failure (original record not saved)


Yousry

Apr 20, 2003, 2:00:40 PM
I have an INSERT trigger on a table; the trigger saves
some values from the inserted record into another table.
If the trigger fails for any reason, I do not want to
lose the original record.
I tried SET XACT_ABORT OFF and this did not work; the
original record was still not saved.

Any suggestions ASAP.

Thanks


Erland Sommarskog

Apr 20, 2003, 7:00:04 PM

It is not at all clear what you are doing. Could you please post some
code, an INSERT statement, and what you expect to happen with it.

But I can point out some general things about triggers. You say
"inserted record". Beware that a trigger in SQL Server fires once
per *statement*, so you can have more than one row in the virtual
table "inserted".

When you are in a trigger, and an error occurs, the entire batch
is aborted, and the transaction rolled back, so nothing you inserted
is there any more. If you want to keep the data, even if the trigger
fails, you would have to play some severe tricks. It would help if
you told us about your actual business problem.
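A minimal repro of the behaviour Erland describes might look like this (the table and trigger names here are made up for illustration, not from the thread):

```sql
-- Assumed demo objects, purely illustrative.
CREATE TABLE SourceTbl (Id int PRIMARY KEY)
GO
CREATE TRIGGER tr_Fail ON SourceTbl
FOR INSERT
AS
BEGIN
    -- Simulate a failure inside the trigger: roll back and raise.
    ROLLBACK TRANSACTION
    RAISERROR('trigger failure', 16, 1)
END
GO
INSERT INTO SourceTbl (Id) VALUES (1)
-- The INSERT is undone along with the trigger's work:
-- SELECT COUNT(*) FROM SourceTbl now returns 0.
GO
```

The statement and its trigger succeed or fail as one unit, which is exactly why the original record disappears when the trigger errors out.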


--
Erland Sommarskog, SQL Server MVP, som...@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp

Yousry

Apr 20, 2003, 8:22:03 PM
Thanks a lot for the feedback.

Actually my INSERT trigger is very simple, and the
application inserts one record at a time.

However, I just want to make sure that, no matter what
happens in the trigger, the original data can never be
lost.
I did some tests to make the trigger fail and observed that
the original data gets lost (the transaction is rolled back),
and I was hoping to be able to stop this behaviour.

The reason I want to make sure the data can never be lost
is that the database is used for a mission critical
application that is currently in use.

Here is an example of my INSERT Trigger:

CREATE TRIGGER tr_MapData
ON CrCPer
FOR INSERT
AS
BEGIN

INSERT INTO [TBSales_Data](
[ReferenceNum],
[CAP],
[EForm])
SELECT [REFERENCE_NUM],
[CAP],
[EForm]
FROM INSERTED

END

Thanks a lot for the help.

Yousry


Andrew John

Apr 20, 2003, 11:38:06 PM
Yousry,

Your trigger already handles multi-row inserts, and it is so simple
that the only way I can think of for it to fail is a constraint
violation on TBSales_Data - a duplicate primary key or a missing
foreign key perhaps. (Or running out of disk space!)

On current SQL versions there is no way to avoid the rollback, and your
client-side code needs to handle this.

In the trigger, all you can do is try to ensure that the error condition
does not occur, such as checking for a duplicate PK, or a missing FK,
prior to the insert. You will find IF EXISTS / IF NOT EXISTS useful in
this context.
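The defensive check AJ describes could be sketched like this, reusing the trigger posted earlier in the thread (that ReferenceNum is the key of TBSales_Data is an assumption for illustration):

```sql
CREATE TRIGGER tr_MapData
ON CrCPer
FOR INSERT
AS
BEGIN
    -- Copy only rows whose key is not already in TBSales_Data,
    -- so a duplicate-key error cannot fire and roll back the
    -- original INSERT on CrCPer.
    INSERT INTO [TBSales_Data] ([ReferenceNum], [CAP], [EForm])
    SELECT i.[REFERENCE_NUM], i.[CAP], i.[EForm]
    FROM INSERTED i
    WHERE NOT EXISTS (SELECT 1
                      FROM [TBSales_Data] t
                      WHERE t.[ReferenceNum] = i.[REFERENCE_NUM])
END
```

This avoids one known failure mode rather than all of them; errors such as running out of disk space would still abort the batch and roll back the insert.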

Regards
AJ

"Yousry" <ysaa...@hotmail.com> wrote in message news:01be01c3079c$087f5100$a401...@phx.gbl...

Erland Sommarskog

Apr 21, 2003, 5:33:33 AM
Yousry (ysaa...@hotmail.com) writes:
> Actually my INSERT Trigger is so simple, and I the
> application inserts one record at a time.
>
> However I just want to make sure that no matter what
> happens in the trigger that the original data can never be
> lost.
> I did some test to make the Trigger Fail and observed that
> the original data gets lost (transaction rolled back)
> and I was hoping to be able to stop this behaviour.
>
> The reason I want to make sure the data can never be lost
> is that the database is used for a mission critical
> application that is currently in use.

If I understand this correctly, you have a mission-critical application,
and now you want to tap data from a table, but without any risk of
disturbing the normal flow.

In such a case a trigger is probably not the way to go, because it is
part of trigger fundamentals that a trigger is a single unit together
with the statement it is attached to. If the trigger fails, then
the statement has failed, and there is no way around that, nor should
there be.

You should probably look into replication. In that case, the committed
data is read from the transaction log, and at some later point in time
it reaches its destination.
