Hi Paulius,
On 25. 04. 25 at 11:47, Paulius Pazera wrote:
> usually there are no direct updates in triggers
>
> do you agree with this statement "*when outer bulk update changes only
> 'status' field, it should not reset other fields including 'amt' to
> prior old values*"?
The outer bulk update does NOT reset 'amt' to prior values; it
simply does not see the changes made by the trigger in other rows.
Cursor stability over a single statement is a basic requirement of
the SQL standard, which Firebird violated from the beginning (because
it predates the standard). This violation allowed the behavior (a
loophole) that you have exploited, pitfalls like "insert into A ...
select * from A", and of course anomalies in simple selects in READ
COMMITTED transactions.
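For illustration, here is a minimal sketch of that pitfall (the table
name and data are made up):

  create table A (id integer);
  insert into A values (1);

  -- Without cursor stability over the statement, the SELECT below
  -- may see the rows this very statement is inserting, so instead
  -- of copying the table once, the insert can keep feeding on its
  -- own output:
  insert into A select * from A;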
BTW, the inconsistent read problem that plagued Firebird (and was a
source of frequent user complaints) was well known, and users were
always warned not to exploit it (for example by updating different
rows in the same table from triggers), because some day the Firebird
developers would fix it. And it was eventually fixed in v4.
If you look at the documentation for v4, section "Solution for not
consistent read problem", you can read:

  The obvious solution to the not consistent read problem is to make
  a read-committed transaction use a stable database snapshot while a
  statement is executed. Each new top-level statement creates its own
  database snapshot to see recently committed data. With snapshots
  based on commit order this is a very cheap operation. Let us call
  this snapshot the statement-level snapshot. Nested statements
  (triggers, nested stored procedures and functions, dynamic
  statements, etc.) use the same statement-level snapshot created by
  the top-level statement.
So, the outer (bulk) update creates its "snapshot" and works over the
records that are visible to it, creating record versions that do not
belong to this snapshot (they are as-yet-uncommitted changes!). The
update trigger, however, changes some row(s) in the same table as
well. These updates from the trigger use the same snapshot as the
outer bulk update to identify the records to work with, but, like the
outer update, they create versions that belong to a different
snapshot. Hence the outer update can't see the changes created by the
trigger in the same table.
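To make this concrete, here is a hypothetical reconstruction of such
a scenario (table, column and trigger names are my assumptions, not
your actual schema):

  create table orders (
    id     integer primary key,
    status varchar(10),
    amt    numeric(10,2)
  );

  set term ^;
  create trigger orders_au for orders
    active after update position 0
  as
  begin
    -- fire only on a real status change, so this trigger's own
    -- update of 'amt' below does not recurse
    if (new.status is distinct from old.status) then
      update orders set amt = amt + 1 where id <> new.id;
  end^
  set term ;^

  -- the outer bulk update; since v4 its statement-level snapshot
  -- does not see the 'amt' changes the trigger makes to other rows
  update orders set status = 'done';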
The update operation does not touch the non-updated columns, but it
operates on a record buffer (version) that is stable over the
execution of a SINGLE statement, as it comes from the snapshot
created for the bulk update. It fetches the record from the stable
snapshot, updates the changed values and writes it back. If the
record meanwhile got a newer version created by the trigger, that
version is overwritten and the trigger's update is lost.
You may think this could be solved by fetching the record from a
later snapshot (created and shared by the trigger and the bulk
update) within the same transaction, but I don't think that is a
solution, because the trigger CAN update (although your example
didn't) the column used in the outer update's WHERE filter, which
would cause weird behavior (the outer update may skip the record) and
we'd be back at square one with inconsistent, non-repeatable results.
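In terms of the hypothetical schema above, that problematic variation
would look like this:

  -- trigger body that also touches the filtered column (ignoring
  -- the trigger recursion this would additionally cause):
  update orders set status = 'blocked' where id <> new.id;

  -- outer update filtering on that same column; reading from a
  -- shared, advancing snapshot it could skip rows that were
  -- re-labelled in mid-statement:
  update orders set status = 'done' where status = 'pending';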
Other servers that do not have record-level triggers but table-level
ones (e.g. SQL Server) are not affected by this, as the trigger is
executed once, after the table is updated, to process the updated
rows.
Because your algorithm leverages the non-SQL-standard behavior, you
can't expect the engine to open this loophole again for you. Hence
you need to adjust your recalculation algorithm in accordance with
the behavior specified by the SQL standard. You have several options:
1. As suggested earlier, you can split the table in two, with the
columns updated by the outer bulk update in one and the columns
updated by the trigger in the other, and create a view so that
existing selects over the previously united table keep working (see
the first sketch after this list).
2. Replace your bulk update & trigger with a procedure that uses FOR
SELECT to fetch the PKs (or RDB$DB_KEYs, for faster processing) and
executes individual updates for the rows as necessary (see the second
sketch below).
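A minimal sketch of option 1, reusing the assumed names from above
(in practice the view would take over the name of the original
table):

  create table orders_main (
    id     integer primary key,
    status varchar(10)
  );
  create table orders_calc (
    id  integer primary key references orders_main (id),
    amt numeric(10,2)
  );

  -- the bulk update targets orders_main while the trigger writes to
  -- orders_calc, so they no longer collide in one table; the view
  -- keeps the old SELECTs working
  create view orders_v (id, status, amt) as
    select m.id, m.status, c.amt
      from orders_main m
      join orders_calc c on c.id = m.id;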
Personally, I'd choose option 2 with db-keys.
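Here is a sketch of option 2 with db-keys (again, the names and the
recalculation logic are placeholders; it assumes the trigger has been
dropped):

  set term ^;
  create procedure recalc_orders
  as
    declare variable dbkey char(8) character set octets;
  begin
    for select rdb$db_key from orders
        where status = 'pending'
        into :dbkey
    do
    begin
      update orders set status = 'done'
        where rdb$db_key = :dbkey;
      -- do the dependent recalculation here with explicit,
      -- individual updates instead of a trigger side effect
    end
  end^
  set term ;^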
best regards
Pavel Cisar
IBPhoenix