Bulk update in 5.0.2 loses changes done by trigger


Paulius Pazera

Apr 18, 2025, 3:24:14 AM
to firebird-support
Hello,

we are trying to migrate from Firebird v2.5.9 to v5.0.2 (Classic, Windows and Linux) and have hit a roadblock -- bulk update in Firebird 5 works differently than in 2.5: it loses changes made by a trigger. Individual updates work as expected, the same way as in Firebird 2.5. Attached is a simplified test case that demonstrates the problem. It looks as if a snapshot of both records is taken when the bulk update starts, and the update of the second record overwrites the new amount with the old value from the snapshot; however, setting 'ReadConsistency = 0' in firebird.conf did not help. Please advise

Thanks,
Paulius (GLDS)




testBulkUpdate.sql
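
(The attachment is not reproduced here. As a stand-in, the following is a minimal sketch of the kind of script described, reconstructed from the code quoted later in this thread; treat it as an approximation, not the exact attachment contents:)

create table tbl (
  Id integer,
  Amt double precision,
  Status integer);

set term ^ ;
create or alter trigger trg_tbl_bef_chg for tbl active before insert or update position 0
as
begin
  -- when a row's Status changes, bump the Amt of the *next* row
  if (new.Status <> old.Status) then begin
    update tbl set Amt = Amt + 2 where Id = new.Id + 1;
  end
end
^
set term ; ^

insert into tbl (Id, Amt, Status) values (1, 10.00, 1);
insert into tbl (Id, Amt, Status) values (2, 20.00, 1);
commit;

update tbl set Status = 2;      -- bulk update
select * from tbl order by Id;  -- v2.5: row 2 has Amt = 22; v5.0.2: Amt is back to 20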

Dimitry Sibiryakov

Apr 18, 2025, 3:44:20 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 18.04.2025 9:24:
> It looks as if a snapshot of both records is taken when the bulk update
> starts, and the update of the second record overwrites the new amount with
> the old value from the snapshot; however, setting 'ReadConsistency = 0' in
> firebird.conf did not help. Please advise

Yes, DML stability is an intended change in behavior, described in the Release Notes (v3, IIRC). This change fixed the infinite loop with INSERT ... SELECT. ReadConsistency affects a different part of the engine, which is why it has no effect here.

--
WBR, SD.

Paulius Pazera

Apr 22, 2025, 7:51:25 AM
to firebird-support
I expect a bulk update to work identically to the same records being updated individually. The current behavior looks like a serious consistency issue to me. Another issue I see is broken backwards compatibility with Firebird v2.5. Would you please rework the fix for the 'infinite loop with insert select' to regain bulk update consistency and retain backwards compatibility? Or make the ReadConsistency option work for bulk updates as well? Or implement a new option for bulk update consistency and backwards compatibility? We are preparing to migrate to v5 and this is a serious roadblock for us. Thanks!

Paulius (GLDS)

Dimitry Sibiryakov

Apr 22, 2025, 7:57:33 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 22.04.2025 13:51:
> I expect a bulk update to work identically to the same records being
> updated individually. The current behavior looks like a serious consistency
> issue to me.

On the contrary, it gained consistency: the whole DML query sees the table as it was at the beginning of its run.

> Another issue I see is broken backwards compatibility with Firebird v2.5.
> Would you please rework the fix for the 'infinite loop with insert select'
> to regain bulk update consistency and retain backwards compatibility?

No. You should rework the design of your database.

--
WBR, SD.

Paulius Pazera

Apr 23, 2025, 10:48:29 AM
to firebird-support
obviously we understand 'consistency' differently. To me, if I select two records individually, I expect the same data back as if I were doing a bulk select. The same with update: I expect individual updates to work exactly the same as a bulk update. Would you elaborate on what 'consistency' was gained with core-92 or #417? What was the problem with bulk updates? (the select-insert problem looks like user error to me, see below)

changes made within a transaction were always visible to that transaction, in all transaction isolation levels. What kind of new functionality is this, where a transaction does not see changes it made, where changes made in the same transaction are silently overwritten by old values? Does any other leading RDBMS have this? Should this be a new transaction isolation level for those who can't add a 'where' clause to an 'insert-select' statement telling it which records they want to insert? Users used to have full control -- to insert only the old records (they could limit by primary key), or 10 sets of old records, or to insert as many as possible in one hour, or even to cause an infinite loop if they wished and did not limit the insert, etc. In my eyes even the insert-select fix is not an improvement; it's a limitation for a single use case, which unfortunately affected updates as well. Can this be undone, at least for updates?

going back to the original test case, here is the same test for MS SQL Server, and it works the way Firebird 2.5 used to: it sees and does not lose the change just made, the way we expect it to work, the way it worked in Firebird for 30+ years since InterBase times:

create table tbl (
  Id integer,
  Amt double precision,
  Status integer);
go

create or alter trigger trg_tbl_bef_chg on tbl for update
as
begin
  DECLARE @oldStatus integer;
  DECLARE @newStatus integer;
  DECLARE @newId integer;

  -- note: MS SQL triggers fire once per statement; with a multi-row update,
  -- these assignments pick the values from an arbitrary row of inserted/deleted
  SELECT @oldStatus = status FROM deleted;
  SELECT @newStatus = status, @newId = Id FROM inserted;

  if (@newStatus <> @oldStatus) begin
    update tbl set Amt = Amt + 2 where Id = @newId + 1;
  end;
end;
go

insert into tbl (id, Amt, Status) values (1, 10.00, 1);
insert into tbl (id, Amt, Status) values (2, 20.00, 1);
go

select * from tbl order by id;
go

update tbl set Status = 2;
go

select * from tbl order by id;
go

--MS SQL Server 2022 Express: (amount change of second record is there as expected)
--Id          Amt                      Status
------------- ------------------------ -----------
--          1                     10.0           2
--          2                     22.0           2


Thanks for reading and trying to understand our concerns,

Paulius (GLDS)

Dimitry Sibiryakov

Apr 23, 2025, 11:19:12 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 23.04.2025 16:48:
> obviously we understand 'consistency' differently. To me, if I select two
> records individually, I expect the same data back as if I were doing a bulk
> select. The same with update: I expect individual updates to work exactly
> the same as a bulk update. Would you elaborate on what 'consistency' was
> gained with core-92 or #417?

You are mixing up transaction consistency and cursor consistency. "If I select two records individually, I expect the same data back as if I were doing a bulk select" is transaction consistency.

> What was the problem with bulk updates? (the select-insert problem looks like user error to me, see below)

No, it isn't a user error; it is a perfectly valid statement that produced an unpredictable (and sometimes unwanted) outcome. Exactly that "unpredictable" part was the consistency problem: the same statement, run on the same data under different circumstances, produced a different outcome. That's a bug.

> changes made within a transaction were always visible to that transaction,
> in all transaction isolation levels.

They still are. The change was made at the statement level.

> Does any other leading RDBMS have this?

Yes. Read, for example, the Oracle documentation about cursor consistency.

> going back to the original test case, here is the same test for MS SQL
> Server, and it works the way Firebird 2.5 used to: it sees and does not
> lose the change just made, the way we expect it to work, the way it worked
> in Firebird for 30+ years since InterBase times

MS SQL has no row-level triggers. Feel the difference.

--
WBR, SD.

Paulius Pazera

Apr 24, 2025, 7:51:36 AM
to firebird-support
I reworked the trigger into an 'after' trigger:

set term ^ ;
create or alter trigger trg_tbl_aft_chg for tbl active after insert or update position 5
as
begin
  if (new.Status <> old.Status) then begin
    update tbl set Amt = Amt + 2 where Id = new.id + 1;
  end
end
^
set term ; ^

now it should be executed after the bulk update, and the individual updates done in the trigger should no longer be overwritten with old values by the bulk update, right?

but why are the results still incorrect and unexpected?

update tbl set Status = 2;
select * from tbl order by id;

          ID                     AMT       STATUS
============ ======================= ============
           1       10.00000000000000            2
           2       20.00000000000000            2

I also tested PostgreSQL, and it also works as expected, like Firebird 2.5:

create table tbl (
  Id integer,
  Amt double precision,
  Status integer);

create or replace function func ()
RETURNS TRIGGER
language plpgsql
as $$
begin
  update tbl set Amt = Amt + 2 where Id = New.Id + 1;
  RETURN NEW;
end;
$$;

create trigger trg_tbl_aft_upd after update on tbl for each row
WHEN (OLD.Status IS DISTINCT FROM NEW.Status)
execute function func();


insert into tbl (id, Amt, Status) values (1, 10.00, 1);
insert into tbl (id, Amt, Status) values (2, 20.00, 1);
select * from tbl order by id;

update tbl set Status = 2;
select * from tbl order by id;

-- id | amt | status
------+-----+--------
--  1 |  10 |      2
--  2 |  22 |      2


so the behavior of Firebird v5 is unique, and thus I still question whether it is correct behavior


Paulius (GLDS)

Dimitry Sibiryakov

Apr 24, 2025, 7:53:29 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 24.04.2025 13:51:
> now it should be executed after the bulk update, and the individual updates
> done in the trigger should no longer be overwritten with old values by the
> bulk update, right?

Wrong. Firebird has only "ON EACH ROW"-type triggers; it has no "STATEMENT"-type triggers.

--
WBR, SD.

Paulius Pazera

Apr 24, 2025, 8:03:11 AM
to firebird-support
why is an update done in an after-update trigger lost when doing a bulk update?

Dimitry Sibiryakov

Apr 24, 2025, 8:16:41 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 24.04.2025 14:03:
> why is an update done in an after-update trigger lost when doing a bulk update?

Because UPDATE is SELECT + UPDATE CURRENT.

Sequence of operation is:

1. UPDATE updates record 1.
2. The AFTER UPDATE trigger fires and updates record 2.
3. UPDATE updates record 2.

At point 3 the data written by the trigger is overwritten by the main statement, which uses values from the stable SELECT result set.

You must update the records individually to get the result you expect.
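
In terms of the test case, that means separate top-level statements, e.g. (a sketch against the tbl table from the original example):

update tbl set Status = 2 where Id = 1;  -- trigger bumps Amt of row 2 to 22
update tbl set Status = 2 where Id = 2;  -- this statement now sees Amt = 22 and keeps it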

And generally speaking, interdependency between different records of the same table (other than FK references) is bad database design. There is probably a normal form that it violates.

--
WBR, SD.

Tomasz Tyrakowski

Apr 24, 2025, 8:27:13 AM
to firebird...@googlegroups.com
On 24.04.2025 at 14:03, Paulius Pazera wrote:
> why is an update done in an after-update trigger lost when doing a bulk update?
In fact it's not always the case. If you make an after update trigger that updates one of the _previous_ rows (i.e. one of those that the update has already processed), the changes made by the trigger are preserved. If the trigger updates a row ahead of the row currently being processed, the update overwrites the changes made by the trigger.

I'm not a FB developer, just a fellow user, so to wrap your head around it, think of it this way: the update takes a snapshot of the current state of the rows it intends to modify. Suppose the column to update contains 1 and 2 in the subsequent rows (there are just 2 rows) and your update is supposed to increase each by 1. It takes the first row, calculates 1+1 and writes 2 back to the row. Even if it fires a trigger (for the first row) which changes 2 to 3 in the next row, the update still sees the "frozen" values, so for the next row it calculates 2+1=3 and writes 3 (not 4) back to the row. I don't know how close this description is to what actually goes on under the hood, but I think it helps to visualize the processing of rows by the update statement.

Now, if the trigger updated the _previous_ row instead of the next one, the update would change 1 to 2 in the first row (the trigger fires but has no effect - there's no previous row), then it changes 2 to 3 in the second row, the trigger fires again, this time changing 2 to 3 in the first row (update doesn't go back to ensure 2 stays there ;) ).

regards
Tomasz

Dimitry Sibiryakov

Apr 24, 2025, 8:33:36 AM
to firebird...@googlegroups.com
Tomasz Tyrakowski wrote 24.04.2025 14:27:
> then it changes 2 to 3 in the second row, the trigger fires again, this time
> changing 2 to 3 in the first row (update doesn't go back to ensure 2 stays
> there ;) ).

But record versioning can make sure that the UPDATE in the trigger sees the value 1 in the first row.
If I'm not mistaken, the SQL standard says something like "the result of an operation must not depend on the order in which records are handled".

--
WBR, SD.

Tomasz Tyrakowski

Apr 24, 2025, 8:42:40 AM
to firebird...@googlegroups.com
Well, with after update triggers I think it kinda does. Try this simple example:

create table T(
  ID integer primary key,
  VAL integer
);

insert into T values (1, 1);
insert into T values (2, 1);
insert into T values (3, 1);

set term ^;
create trigger T_AU for T after update
as
begin
  update T set VAL = VAL + 1
    where ID = new.ID - 1;
end
^
set term ;^


update T set VAL=VAL+1;

select * from T;

Result:

1 4
2 3
3 2

and not

1 2
2 2
3 2

(Firebird 5.0.2, Win64).

regards
Tomasz

Dimitry Sibiryakov

Apr 24, 2025, 9:02:11 AM
to firebird...@googlegroups.com
Tomasz Tyrakowski wrote 24.04.2025 14:42:
> Well, with after update triggers I think it kinda does.

Indeed:

SQL> update T set VAL=VAL+1 order by id;
SQL> select * from t;

ID VAL
============ ============
1 4
2 3
3 2
SQL> rollback;
SQL> update T set VAL=VAL+1 order by id desc;
SQL> select * from t;

ID VAL
============ ============
1 2
2 2
3 2

Either I'm wrong about the standard, or this is a reason for another ticket in the tracker.

--
WBR, SD.

Paulius Pazera

Apr 24, 2025, 10:32:23 AM
to firebird-support
thanks both, that's what I suspected; I just needed confirmation. Because bulk update order is not guaranteed, updates done in a trigger may be lost randomly (some may be applied). Which is even worse, because it may work during our testing and then start failing randomly for clients in production when a different order is chosen by Firebird. Inconsistency is evil...

Mark Rotteveel

Apr 24, 2025, 10:58:36 AM
to firebird...@googlegroups.com
On 24/04/2025 16:32, Paulius Pazera wrote:
> thanks both, that's what I suspected; I just needed confirmation. Because
> bulk update order is not guaranteed, updates done in a trigger may be lost
> randomly (some may be applied). Which is even worse, because it may work
> during our testing and then start failing randomly for clients in
> production when a different order is chosen by Firebird. Inconsistency is
> evil...

You could add an ORDER BY to your update so that it processes the rows in descending order, but every time someone sees that, they'll wonder why.
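
For the test case from this thread, where the trigger updates the row *ahead* of the one being processed (Id + 1), that would be something like the sketch below; note that whether ascending or descending is the right direction depends entirely on which rows the trigger touches:

update tbl set Status = 2 order by Id desc;  -- rows the trigger rewrites are already behind the cursor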

Mark
--
Mark Rotteveel

Gerdus van Zyl

Apr 24, 2025, 11:41:57 AM
to firebird-support
I was intrigued enough to try to make the original work, and it looks like you can do it if you use a Global Temporary Table. Please note that the update ordering is still important, so use

update tbl set Status = 2 WHERE Status = 1 ORDER BY Id asc;

instead of

update tbl set Status = 2 WHERE Status = 1 ORDER BY Id desc;


e.g.:
CREATE GLOBAL TEMPORARY TABLE statuschanged (
  Id integer
);
commit;

set term ^ ;
create or alter trigger trg_tbl_bef_chg for tbl active before insert or update position 0
as
  DECLARE IPrev INT;
begin
  if (new.Status <> old.Status) then begin
    INSERT INTO statuschanged (Id) VALUES (new.id + 1);
  end

  select Id FROM statuschanged WHERE Id = new.id INTO :IPrev;

  if (IPrev is not null) then begin
    new.Amt = new.Amt + 2;
  end
end
^
set term ; ^
commit;
testBulkUpdate.sql

Tomasz Tyrakowski

Apr 24, 2025, 2:00:19 PM
to firebird...@googlegroups.com
On 24.04.2025 at 16:58, 'Mark Rotteveel' via firebird-support wrote:
> You could add an ORDER BY to your update so that it processes the rows in
> descending order, but every time someone sees that, they'll wonder why.
It won't help in general. Everything depends on how the trigger chooses rows for modification (and that may depend on many factors), and on whether the rows the trigger updates have already been processed by the main update statement or not (it's even possible that some are already behind the update cursor and some are still ahead, so the changes the trigger makes will be partially applied and partially lost).

In fact I'd never given it much thought; I just assumed that updating, from within an "on update trigger", the same columns the main update might be changing is not good practice (it may lead to UB), and avoided such scenarios.

Now that I think of it, it looks like a very hard problem to solve in general - a totally stable update cursor might require the triggers fired by the update to be treated as executing in separate transactions, to make sure the changes they make don't mess with the main update (unless record versioning can be used as a means of solving this; but nobody said the on-update trigger only updates - it might as well delete and/or insert).

One thing that's definitely worth considering is putting a notice in the documentation.

regards
Tomasz

Dimitry Sibiryakov

Apr 24, 2025, 3:04:11 PM
to firebird...@googlegroups.com
Tomasz Tyrakowski wrote 24.04.2025 20:00:
> Now that I think of it, it looks like a very hard problem to solve in general

That's why Oracle, without much thinking, throws the famous "table mutating" error in such cases.

--
WBR, SD.

Tomasz Tyrakowski

Apr 24, 2025, 5:25:33 PM
to firebird...@googlegroups.com
Which might be an indicator that it's too hard a nut to crack ;)
They're not alone, though. AFAIR mutating a container in C++ invalidates all its active iterators, so it looks like a common pattern.

While we're at it, what's the interpretation of triggers executed as a result of an update in terms of SQL statements? Are the trigger executions treated as internal parts of the update statement that fired them, or are they separate statements? What was the intention / design decision in Firebird 3.0+?
If the former, it looks like the pre-3.0 behavior was more in line with this interpretation - all changes made by a statement (and a trigger execution is a part of the statement that fired it) should be preserved.
If the latter, I think triggers should be treated as separate transactions (possibly causing deadlocks if they interfere with the pending update); otherwise you'd have multiple _concurrent_ statements (not necessarily parallel, but concurrent is nightmarish enough) running in a _single_ transaction, which is something I have absolutely no idea how to handle in general (i.e. regardless of what the statements actually do). I've never heard of an RDBMS that allows multiple concurrent statements within a single transaction (or maybe I have and just haven't noticed ;) ).
But that's a bit above my league, so I'd rather leave it as is at this point; I don't want to waste bandwidth on my speculations.
Thanks for an interesting discussion, though. Definitely an eye-opening one for me.

regards
Tomasz

Paulius Pazera

Apr 25, 2025, 5:36:42 AM
to firebird-support
thanks all for trying to help us

unfortunately the real-life code is much more sophisticated than the simplified example I provided just to show the problem, and there is no static 'order by' that would make our bulk updates work as expected in Firebird v5

the only workaround I see we could do on our end is to rework the bulk updates into for-select-update-single loops, but that would be enormous work. This is a ~30-year-old project still being actively developed, with 600+ tables, 1400+ triggers, 3600+ stored procedures, with most of the business logic in there, with a lot of 'update' statements there and also in the app server and the clients. And it has been working fine on Firebird v2.5 for many years. Now, in order to upgrade to Firebird v5, we need to review all those 'update' statements, analyze which are bulk, and rework all bulk updates, because we don't know which ones hit this new behavior. Due to the randomness we can't even trust our tests to identify the problematic bulk updates. In that light the error returned by Oracle would at least help to identify the places that need rework, but I believe Firebird can do better than that

then a couple of rhetorical questions come up -- why should we rework our code, eliminating bulk updates and reworking them into individual updates, just to work around the introduced inconsistency? Isn't that an indication that maybe bulk updates were accidentally broken? It looks that way, at least from our perspective. IMHO, when the outer bulk update changes only the 'status' field, it should not reset other fields, including 'amt', to their prior old values. I hope you agree with this statement

this is a serious roadblock for us and we need your help resolving it. So here come several non-rhetorical questions, trying to understand what was done and why, and to figure out how this could be resolved. What was wrong with bulk updates in v2.5 that needed such a behavior change? If nothing was wrong, and this behavior change is just a side effect of the infinite insert-select fix, then maybe the bulk update behavior change could simply be undone? Otherwise, if there were bulk update issues that needed this fix, then maybe the fix could be reviewed and improved to cover those scenarios without breaking ours? And if the fix cannot be improved so as not to break our scenario, then maybe a new firebird.conf option could be implemented to let users restore the legacy behavior? (similar to ReadConsistency, which we also needed to make read committed become read committed again)

thanks again for understanding and help,

Paulius (GLDS)

Dimitry Sibiryakov

Apr 25, 2025, 5:44:14 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 25.04.2025 11:36:
> Now, in order to upgrade to Firebird v5, we need to review all those
> 'update' statements, analyze which are bulk, and rework all bulk updates,
> because we don't know which ones hit this new behavior.

No. You only need to rework the triggers that update the same table, making them update different field(s) or table(s) than the main update does. That is much simpler.

--
WBR, SD.

Paulius Pazera

Apr 25, 2025, 5:47:52 AM
to firebird-support
usually there are no direct updates in triggers

do you agree with this statement: "when the outer bulk update changes only the 'status' field, it should not reset other fields, including 'amt', to their prior old values"?

Dimitry Sibiryakov

Apr 25, 2025, 6:03:26 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 25.04.2025 11:47:
> do you agree with this statement: "when the outer bulk update changes only
> the 'status' field, it should not reset other fields, including 'amt', to
> their prior old values"?

Generally I agree that an UPDATE must not touch fields that are not explicitly mentioned.
But it is not my opinion that matters, but the Firebird developers'.

--
WBR, SD.

Paulius Pazera

Apr 25, 2025, 6:05:11 AM
to firebird-support
fair enough

What was wrong with bulk updates in v2.5 that needed such a behavior change?

Dimitry Sibiryakov

Apr 25, 2025, 7:12:05 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 25.04.2025 12:05:
> What was wrong with bulk updates in v2.5 that needed such a behavior change?

I repeat: the inconsistent behavior depended on execution order.

If you need a practical solution for your problem, here it is:

1. Split the problematic table into two, extracting the stored aggregates maintained by triggers (see the sketch after this list).
2. For backward compatibility with the application, create a view in place of the original table that joins the two new tables and allows updates only to the primary one.
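
A minimal sketch of what that split could look like for the tbl example (names are illustrative; note that making a view over a join updatable requires triggers on the view itself):

create table tbl_main (
  Id integer not null primary key,
  Status integer);

create table tbl_amt (
  Id integer not null primary key references tbl_main (Id),
  Amt double precision);

create view tbl_v (Id, Amt, Status) as
  select m.Id, a.Amt, m.Status
  from tbl_main m join tbl_amt a on a.Id = m.Id;

set term ^ ;
create trigger trg_main_au for tbl_main active after update position 0
as
begin
  -- the trigger now writes to a different table, so the stable cursor of a
  -- bulk update on tbl_main no longer overwrites the trigger's changes
  if (new.Status <> old.Status) then
    update tbl_amt set Amt = Amt + 2 where Id = new.Id + 1;
end
^
set term ; ^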

You can whine as much as you wish, but the new Firebird behavior is a given and won't be changed just at your will.

--
WBR, SD.

Paulius Pazera

Apr 25, 2025, 7:18:34 AM
to firebird-support
now that we agree that an UPDATE must not touch fields that are not explicitly mentioned, we can see that Firebird has an issue that should be resolved. So would you share an example of what exactly was wrong with bulk updates in v2.5 that needed such a behavior change? Thanks!

Dimitry Sibiryakov

Apr 25, 2025, 7:24:35 AM
to firebird...@googlegroups.com
Paulius Pazera wrote 25.04.2025 13:18:
> now that we agree that an UPDATE must not touch fields that are not
> explicitly mentioned, we can see that Firebird has an issue that should be
> resolved.

If you think that this is a Firebird issue, put it into the Firebird issue tracker. Then just wait until there is someone able to solve it (or find someone to do the job for you).

--
WBR, SD.

Paulius Pazera

Apr 25, 2025, 7:29:23 AM
to firebird-support
yes, I will do so; I just wanted to gather additional info and understand the problem better first, so that I can describe the issue correctly in the tracker

Paulius Pazera

Apr 25, 2025, 8:52:44 AM
to firebird-support
added #8538. Thanks again guys!

Tomasz Tyrakowski

Apr 25, 2025, 11:18:52 AM
to firebird...@googlegroups.com
On 25.04.2025 at 13:18, Paulius Pazera wrote:
> So would you share an example of what exactly was wrong with bulk
> updates in v2.5 that needed such a behavior change? Thanks!
I can give you one example of what definitely was wrong previously (I'm not in a position to discuss whether the fix was perfect or not - it fixed this issue, but possibly it can be improved):

create table T ( F integer );

set term ^;
create trigger T_BU for T before update
as
begin
  insert into T values (NULL);
end
^
set term ;^

insert into T values (NULL); -- start with just 1 row

update T set F = 1; -- be ready to kill the server process ;)

The last update statement will run forever and keep filling T until you run out of disk space. In some circumstances you can even crash the server OS this way, e.g. when your database file is located in the Linux root filesystem. The cause of this behavior is the update being able to see the rows inserted by the trigger (every time it updates a row, a new row is appended, and the update keeps going forever).

regards
Tomasz

Pavel Cisar

Apr 25, 2025, 1:13:17 PM
to firebird...@googlegroups.com
Hi Paulius,

On 25.04.2025 at 11:47, Paulius Pazera wrote:
> usually there are no direct updates in triggers
>
> do you agree with this statement: "when the outer bulk update changes only
> the 'status' field, it should not reset other fields, including 'amt', to
> their prior old values"?

The outer bulk update does NOT reset 'amt' to its prior value; it simply does not see the changes made by the trigger in other rows.

Cursor stability over a single statement is a basic requirement of the SQL standard, which Firebird violated from the beginning (because it predates the SQL standard). This violation allowed the behavior (loophole) that you have exploited, pitfalls like "insert into A ... select * from A", and of course anomalies in simple selects in READ COMMITTED transactions.

BTW, the inconsistent-read problem that plagued Firebird (and was a source of frequent user complaints) was well known, and users were always warned not to exploit it (for example by updating different rows in the same table from triggers), as some day the Firebird developers would fix it. And it was eventually fixed in v4.

If you look at the documentation from v4, section "Solution for not
consistent read problem", you can read:

The obvious solution to not consistent read problem is to make
read-committed transaction to use stable database snapshot while
statement is executed. Each new top-level statement create own database
snapshot to see data committed recently. With snapshots based on commit
order it is very cheap operation. Let name this snapshot as
statement-level snapshot further. Nested statements (triggers, nested
stored procedures and functions, dynamic statements, etc) uses same
statement-level snapshot created by top-level statement.

So, the outer (bulk) update creates its "snapshot" and works over the records that are visible to it, creating record versions that do not belong to this snapshot (they are as-yet-uncommitted changes!). The update trigger, however, changes some row(s) in the same table as well. These updates from the trigger use the same snapshot as the outer bulk update to identify the records to work with, but, like the outer update, they create versions that belong to a different snapshot. Hence the outer update can't see the changes created by the trigger in the same table.

The update operation does not touch the non-updated columns, but it operates on a record buffer (version) that is stable over the execution of a SINGLE statement, as it comes from the snapshot created for the bulk update. It fetches the record from the stable snapshot, updates the changed values and writes it back. If the written record has a newer version created by the trigger, that version is overwritten and its update is lost.

You may think this could be solved by fetching the record from a later snapshot (created and shared by the trigger and the bulk update) in the same transaction, but I don't think that is a solution, because the trigger CAN update (although your example didn't) the column used in the outer update's WHERE filter, which would cause weird behavior (the outer update might skip the record), and we'd be back at square one with inconsistent, non-repeatable results.

Other servers that do not have row-level triggers but table-level ones (SQL Server) are not affected by this, as the trigger is executed once, after the table is updated, to process the updated rows.

Because your algorithm leverages this non-SQL-standard behavior, you can't expect the engine to open the loophole again for you. Hence you need to adjust your recalculation algorithm to the behavior specified by the SQL standard. You have several options:

1. As suggested earlier, you can split the table in two, with the columns updated by the outer bulk update in one and the columns updated by the trigger in the other, and create a view to retain the selects that operate on the previously united table.

2. Replace your bulk update & trigger with a procedure that uses a FOR SELECT to fetch the PKs (or RDB$DB_KEYs, for faster processing) and executes individual updates for the rows as necessary.

Personally, I'd choose option 2, with dbkeys.
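
For illustration, a sketch of what option 2 could look like for the tbl example (untested; RDB$DB_KEY is an 8-byte binary value that is only stable within the transaction):

set term ^ ;
execute block as
  declare dbk char(8) character set octets;
begin
  for select rdb$db_key from tbl where Status = 1 into :dbk do
    -- one UPDATE per row, mirroring the individual updates that were
    -- reported earlier in the thread to work as expected
    update tbl set Status = 2 where rdb$db_key = :dbk;
end
^
set term ; ^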

best regards
Pavel Cisar
IBPhoenix