
Before update trigger and computed column


Volker Barth

Nov 15, 2007, 5:26:22 PM
Hi all,

I'm using SA 10.0.1.3551.

I have a table with a computed column based on a user-defined function that
is applied to another column, say col_c = fn(col_b).
This has worked fine in the past.

Now I have to forbid changes to the base column under certain circumstances.
I try to do this in a before update trigger that sets the new value back to
the old value.
But when the new value is reset, the computed column is not recalculated.
Instead, it is still calculated based on the new (and now rejected) value.

So it seems that the calculation of a computed column takes place *before*
the actions of a before trigger. Is that by design?
And if so, how can I overcome this restriction?
A rollback is not applicable, as I do not want to reject all changes to the
row but only changes to particular columns.


TIA

Volker


A quick sample:

create table T_Test
(
pk int default autoincrement primary key,
col2 int not null,
col3 int not null compute (col2 * 2),
col4 int null
);


insert T_Test (col2, col4) values (2, 22);
insert T_Test (col2, col4) values (12, null);

select * from T_Test;
-> displays
1 2 4 22
2 12 24 null

create trigger TUB_Test before update on T_Test
referencing old as T_O new as T_N
for each row
begin
  -- reject changes to col2 if col4 has a particular value
  if T_O.col2 <> T_N.col2 and T_N.col4 = 22 then
    message 'col2 before: (old/new): ' || T_O.col2 || '/' || T_N.col2;
    message 'col3 before: (old/new): ' || T_O.col3 || '/' || T_N.col3;
    set T_N.col2 = T_O.col2;
    message 'col2 after: (old/new): ' || T_O.col2 || '/' || T_N.col2;
    message 'col3 after: (old/new): ' || T_O.col3 || '/' || T_N.col3;
  end if;
end;


update T_Test
set col2 = 3, col4 = 23 where pk = 2;
-> no particular action
select * from T_Test;
-> displays
1 2 4 22
2 3 6 23

update T_Test
set col2 = 23 where pk = 1;
-> particular action
-> messages:
col2 before: (old/new): 2/23
col3 before: (old/new): 46/46
col2 after: (old/new): 2/2
col3 after: (old/new): 46/46
select * from T_Test;
-> displays
1 2 46 22 <- col 3 is not (2 * col2) anymore
2 3 6 23


Volker Barth

Nov 16, 2007, 7:10:40 AM
Well, I have found a solution:

I have to set the computed column in the trigger.
Guess I didn't think of this at first, as I thought computed columns must not
be set manually. But the docs only state that "they should not" :)

So in the sample below, I added
> set T_N.col3 = T_N.col2 * 2;
after
> set T_N.col2 = T_O.col2;

Still, I feel that this is somewhat error-prone (I have to retype the
expression used in the computed column definition) and not the preferred
way to go.
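
For clarity, the complete revised trigger then looks about like this (just a
sketch that combines the trigger from my first post with the added SET; the
MESSAGE statements are omitted):

drop trigger TUB_Test;

create trigger TUB_Test before update on T_Test
referencing old as T_O new as T_N
for each row
begin
  -- reject changes to col2 if col4 has a particular value
  if T_O.col2 <> T_N.col2 and T_N.col4 = 22 then
    set T_N.col2 = T_O.col2;
    -- re-apply the COMPUTE expression by hand, since col3 is not
    -- recalculated after the before trigger has run
    set T_N.col3 = T_N.col2 * 2;
  end if;
end;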

So my question remains whether this thesis is correct and by design:
"The recalculation of a computed column takes place only before a before
trigger is fired."

Can someone from iAnywhere clarify that, please?

Thanks
Volker


"Volker Barth" <No_VBarth@Spam_GLOBAL-FINANZ.de> wrote in
news:473cc78e@forums-1-dub...

Glenn Paulley

Nov 16, 2007, 5:02:59 PM
This issue is very interesting, and we've been discussing it this
afternoon amongst my Engineering team. At present, for an INSERT
statement, the engine performs the following tasks:

1. Evaluate all columns specified in the INSERT statement
2. Check that the row attributes match the table schema (nullability)
3. Evaluate computed columns
4. Fire row-level before triggers
5. Check for NULL in NOT NULL columns
6. Check constraints and articles
7. Insert the row
8. Fire row-level after triggers
9. Fire set-level after triggers

Step (2) is required to ensure at all times that the "row image" seen by
a trigger matches the schema of the database. For example, if a column
is declared NOT NULL, no trigger can "see" that column with a NULL
value (if, for example, that is what the application inserted). That is
particularly true for COMPUTEd columns, because the evaluation of the
COMPUTE expression may implicitly take advantage of schema information
(such as nullability) that cannot be violated.

In a nutshell, and despite our warnings to the contrary, your only
reasonable solution to the problem is to explicitly SET the computed
column directly within the BEFORE INSERT trigger, since the computed
column will not be subsequently re-evaluated regardless of the changes
made by any BEFORE trigger.

For a later release, we are considering three potential options:

a) do nothing, and leave the semantics as they are.
b) remove COMPUTEd columns from the schema for all BEFORE TRIGGERS, and
move the evaluation of any COMPUTEd column to immediately before
inserting the row into the database. This will eliminate semantic
problems and inconsistencies caused by the actions of BEFORE triggers,
with the tradeoff of potentially breaking existing applications.
c) retain the ability for BEFORE triggers to "see" computed columns as
part of the schema of the row, but re-evaluate all COMPUTEd columns
prior to each BEFORE trigger, with one additional evaluation prior to
inserting the row (i.e., for N triggers, evaluate the COMPUTE expression
N+1 times). There are some potential optimizations that can be applied,
but the problem is very difficult since the COMPUTEd column definition
can be

COMPUTE ( FOO() )

and there is no way to easily determine if the user-defined function
FOO() has any direct or indirect attribute references that are affected
by a trigger. This approach has obvious performance penalties, along
with yet another potential semantic issue: it may be undesirable to
re-evaluate a COMPUTE expression multiple times if that function has
side-effects (such as to save/compute the next surrogate key value for a
table).

I hope this helps to explain the server's behaviour. I would be
interested, Volker, in hearing your reaction to the alternatives listed
above.

Glenn

--
Glenn Paulley
Director, Engineering (Query Processing)
iAnywhere Solutions Engineering

EBF's and Patches: http://downloads.sybase.com
choose SQL Anywhere Studio >> change 'time frame' to all

To Submit Bug Reports: http://case-express.sybase.com

SQL Anywhere Studio Supported Platforms and Support Status
http://my.sybase.com/detail?id=1002288

Whitepapers, TechDocs, and bug fixes are all available through the iAnywhere
Developer Community at www.ianywhere.com/developer

Volker Barth

Nov 19, 2007, 8:23:21 AM
Glenn,

thanks for your detailed explanation!

My first impression was: Whoa, there seem to be many more issues to handle
than I would have thought of - lucky me ;-)

I'm going to give a more detailed response in the next few days, as I still
have to give more thought to your suggestions.

For the moment, I'm quite content that the workaround I have implemented is
"officially" OK.
And basically I think that dealing with a bundle of computed columns,
defaults and before/after triggers is something that needs a lot of care and
attention, so IMHO it's a situation where the need for a workaround is
clearly acceptable.

I have two more questions on this topic:

1. In which step are defaults handled? I suspect that will be in step 2,
too, but before computed columns (as defaults should be independent of
other columns).
Or could there be any interdependencies between defaults and computed
columns?

2. When is the decision made whether an after update trigger has to fire?
In my particular case, I have an after update of <col_list> trigger on the
same table. That trigger only fires when the underlying data is changed (I
had a NG discussion on that with Nick Elson a few weeks ago), and I have
verified this behaviour.
But now it seems that this trigger still fires when the before trigger has
reset all corresponding columns to their former values, in which case I
would have expected the trigger not to fire. So is this decision "whether
to fire the after update trigger" made before step 4?


Best regards

Volker


"Glenn Paulley" <pau...@ianywhere.com> wrote in
news:473e1393$1@forums-1-dub...

Richard Biffl

Nov 20, 2007, 9:02:09 AM
Options (b) and (c) both look okay to me. Perhaps a middle ground between
them would let the computed column be read-only in BEFORE triggers (and
perhaps elsewhere - its writability makes it kind of quirky). Option (a)
seems defective, because the behavior can produce unexpected results.

Richard


"Glenn Paulley" <pau...@ianywhere.com> wrote in message
news:473e1393$1@forums-1-dub...

Glenn Paulley

Nov 20, 2007, 7:20:47 PM
Volker Barth wrote:
> Glenn,
>
> thanks for your detailed explanation!
>

You are welcome.

> My first impression was: Whoa, there seem to be many more issues to handle
> than I would have thought of - lucky me ;-)
>
> I'm going to give a more detailed response in the next few days, as I still
> have to give more thought to your suggestions.
>
> For the moment, I'm quite content that the workaround I have implemented is
> "officially" OK.
> And basically I think that dealing with a bundle of computed columns,
> defaults and before/after triggers is something that needs a lot of care and
> attention, so IMHO it's a situation where the need for a workaround is
> clearly acceptable.
>
> I have two more questions on this topic:
>
> 1. In which step are defaults handled? I suspect that will be in step 2,
> too, but before computed columns (as defaults should be independent of
> other columns).
> Or could there be any interdependencies between defaults and computed
> columns?

Defaults apply (mostly) to the case of INSERTs since the row already
exists for UPDATE and DELETE operations. The exceptions are expressions
like CURRENT TIMESTAMP. These are evaluated first, before any computed
columns are evaluated.
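
As a hypothetical illustration (the table and column names below are made up,
not taken from this thread), the ordering matters as soon as a COMPUTE
expression refers to a column whose value comes from such a special default:

create table T_Log
(
  pk int default autoincrement primary key,
  -- special default, evaluated before any computed columns
  modified_at timestamp not null default current timestamp,
  -- computed from the defaulted column
  modified_day date not null compute( date( modified_at ) )
);

insert T_Log ( pk ) values ( 1 );
-- modified_at receives CURRENT TIMESTAMP first,
-- then modified_day is computed from it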

>
> 2. When is the decision made whether an after update trigger has to fire?
> In my particular case, I have an after update of <col_list> trigger on the
> same table. That trigger only fires when the underlying data is changed (I
> had a NG discussion on that with Nick Elson a few weeks ago), and I have
> verified this behaviour.
> But now it seems that this trigger still fires when the before trigger has
> reset all corresponding columns to their former values, in which case I
> would have expected the trigger not to fire. So is this decision "whether
> to fire the after update trigger" made before step 4?
>

This is complex to explain, and the server's behaviour has changed in a
recent EBF of 10.0.1 (build 3470).

Prior to build 3470, AFTER triggers would fire if *any* BEFORE triggers
fired; and BEFORE triggers always fire on an update, even if the update
is a no-op (this is documented in the help):

CREATE TRIGGER statement

[snip]

BEFORE UPDATE triggers fire any time an UPDATE occurs on a row, whether
or not the new value differs from the old value. That is, if a
column-list is specified for a BEFORE UPDATE trigger, the trigger fires
if any of the columns in column-list appear in the SET clause of the
UPDATE statement.

With builds 3470 and up, the server's behaviour changes depending on
whether or not the trigger is a column-specific trigger. If the
trigger is global (any column of the table), the AFTER trigger will fire
even if the column's value is unchanged.

If the trigger is column-specific, the trigger will fire only if that
column's value is different - that change could stem from the UPDATE
statement itself, or from a BEFORE trigger executed previously. In the
latter case, if the UPDATE operation is a no-op, i.e. UPDATE T SET T.x =
T.x, and the BEFORE trigger makes an additional modification, then the
column-specific AFTER trigger will fire.
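
A small sketch of that latter case (the table and trigger names below are
illustrative only, not from Volker's example):

create table T ( pk int primary key, x int not null );
insert T values ( 1, 10 );

-- a before trigger that modifies x even when the UPDATE itself is a no-op
create trigger TB_T before update on T
referencing old as T_O new as T_N
for each row
begin
  set T_N.x = T_N.x + 1;
end;

-- a column-specific after trigger: with builds 3470 and up, it fires
-- only if the value of x has actually changed
create trigger TA_T after update of x on T
referencing old as T_O new as T_N
for each row
begin
  message 'x changed (old/new): ' || T_O.x || '/' || T_N.x;
end;

update T set x = x where pk = 1;
-- the statement itself is a no-op, but the before trigger changes x,
-- so the column-specific after trigger fires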

Glenn

Volker Barth

Nov 21, 2007, 4:32:37 AM
"Glenn Paulley" <pau...@ianywhere.com> wrote in
news:474379df$1@forums-1-dub...

> With builds 3470 and up, the server's behaviour changes depending on
> whether or not the trigger is a column-specific trigger. If the
> trigger is global (any column of the table), the AFTER trigger will fire
> even if the column's value is unchanged.
>
> If the trigger is column-specific, the trigger will fire only if that
> column's value is different - that change could stem from the UPDATE
> statement itself, or from a BEFORE trigger executed previously. In the
> latter case, if the UPDATE operation is a no-op, i.e. UPDATE T SET T.x =
> T.x, and the BEFORE trigger makes an additional modification, then the
> column-specific AFTER trigger will fire.
>

So the opposite case is true, too (an UPDATE operation that is a "non-no-op"
which is reverted to a no-op in a BEFORE trigger)?

Say,
UPDATE T SET T.x = T.x + 1
will fire an AFTER UPDATE OF x trigger
even if a BEFORE trigger sets T.x back to its previous value, i.e.
new.x = old.x?

That is the behaviour I'm noticing now.
BTW, all tested on SA 10.0.1.3547.
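
For reference, a sketch of the test I used, building on the T_Test sample and
the before update trigger from my first post (the after trigger name is made
up):

create trigger TUA_Test after update of col2 on T_Test
referencing old as T_O new as T_N
for each row
begin
  message 'after trigger on col2 fired (old/new): ' || T_O.col2 || '/' || T_N.col2;
end;

update T_Test set col2 = 23 where pk = 1;
-- the before trigger resets col2 to its old value,
-- yet the after update of col2 trigger still fires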


FWIW: A historical note:

The change in EBF 3470 is interesting enough: In previous versions (starting
with SA 5.5), I had tested with AFTER UPDATE OF col-list triggers. I wanted
to log changed rows in some kind of archive table and therefore only wanted
to log actual changes.
However, I found out that it didn't work as expected (even then, AFTER
triggers were said to fire only if data had changed): They always seemed to
fire, whether data was changed or not. So I implemented my own comparison
logic to filter out no-ops.
Following your words above, I think this was due to the fact that these
tables always had BEFORE triggers, too, and that will have caused their
AFTER triggers to fire.

Lately I checked that behaviour again, found that AFTER triggers now work
as expected, and could get rid of my own comparison logic.
So I'm glad I tried with a new enough EBF :)

Volker


Volker Barth

Jan 11, 2008, 12:16:49 PM
Glenn,

well, I should have known that it would take me more than a
few days until I got time to respond to your suggestions ;-)

As to the three suggestions, I still think the current
implementation (i.e., option a)) is all right, as long as
it is somehow documented as a feature.

IMHO, option b) is problematic from a user's point of view:
It makes me think of a row "still under construction -
handle with care". A computed column being completely
inaccessible in a before trigger seems even more
unexpected than the current behaviour. And it would lead
to a further difference between before and after triggers.
(There are already a lot, following Breck's book ;-).)

To find a good solution, I'm tempted to use the notion of a
C++ object here:

Within a constructor, an object is not complete, and
therefore particular rules must be followed. For example,
you cannot use dynamic binding there.
Additionally, you can establish any kind of invariant. An
invariant should hold after the object is fully
constructed, and before and after any public function is
called. However, within private functions, it is the
implementor's responsibility whether the invariants hold
or not.

In other words: The implementor of a class must act with
particular care. In contrast, the users of the class (and
the implementors of any derived classes) should be able to
rely on a valid object state.


With that notion in mind, a COMPUTEd column should be
treated as an invariant.
Each "user" of such a row should be able to rely on
the fact that a) the computed column exists and b) its
value is valid.
(That is already the case in the current implementation.)

And as a trigger programmer, I would think of myself as some
kind of "derived class implementor": Writing triggers
resembles writing hooks - and they seem to be a kind of
"overridable function".
In other words, as a trigger programmer, I think I should
take more care than an ordinary SQL user, but I should need
much, much less knowledge than the class's implementor -
and the latter seems to be YOUR part...

Therefore, IMHO option c) would be best from a
"philosophical" point of view: If it had already been
implemented, my original question would not have been
asked...


I cannot, however, evaluate the general effect of the
possible performance penalties you refer to:
In my case, only one before trigger is defined per table,
and that would mean only one more COMPUTE re-evaluation than
now. Since with option a), I have to enforce the
re-evaluation in the trigger's code, too (at least under
certain conditions), option c) should not lead to a
noticeable performance decrease here.

Sorry for any "too philosophical" statements - I found it
quite hard to express my thoughts.


Best regards

Volker
