I've got a trigger program that writes the item number of changed items
out to another file. Now, I need to use those item numbers to build an
update file to send to my customers.
Question 1: How do I deal with the fact that the trigger program is
still active and could be adding records to the item number file as I
read it with SQL? If I wanted to use native methods, I could just delete
each record after I read it. But after using an SQL select to
build my update file, I wouldn't want to delete all records in the file,
because the trigger program might have added some new ones, right?
Question 2: Actually, what I'd like to do is, at the start of my process,
freeze my view of the input files so that I don't see any changes made
during my process. In particular, I could then delete all records in the
item number file, and it would only affect the records included in the
frozen view, not any that have been added since. Is there some way to do this
other than a CPYF to a temporary location?
Related question: Anybody have recommendations for books, web sites,
etc. that would help me gain a better understanding of commitment control
and transaction processing on the 400? I think ;-) I have a basic
understanding of commit and rollback. But I know there is a lot more to
it, and I'd like to get a handle on the other capabilities.
TIA,
Charles Wilt
Question 1: Why use SQL when it brings you a problem you know how to solve
using native methods?
Question 2: Why use an intermediate file for the item numbers? Couldn't
the trigger program just write records to or update records in the update
file? That would only leave the problem of locking the update file for
clearing after it has been sent to the customers.
As for the understanding of commitment control: I too have a basic
understanding of it, but I wasn't aware of there being a lot more to it :-)
Joep Beckeringh
"Charles Wilt" <charle...@worldnet.no.spam.att.net> wrote in message
news:MPG.1426bb99...@netnews.worldnet.att.net...
Then as you iterate through the cursor you do a DELETE ... WHERE CURRENT
OF x and an INSERT INTO anotherfile.
Then at the end of the cursor you do a commit.
I think that way you don't delete anything that you haven't written.
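Roughly, in embedded SQL it would look something like this (the file,
field, and cursor names here are just placeholders):

    EXEC SQL DECLARE x CURSOR FOR
             SELECT itemno FROM itemchg FOR UPDATE;
    EXEC SQL OPEN x;
    -- loop: FETCH until SQLCODE says no more rows
    EXEC SQL FETCH x INTO :itemno;
    EXEC SQL INSERT INTO anotherfile (itemno) VALUES(:itemno);
    EXEC SQL DELETE FROM itemchg WHERE CURRENT OF x;
    -- end loop
    EXEC SQL CLOSE x;
    EXEC SQL COMMIT;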
My SQL reference is at work. (Paul Conte's DB2/400 book.)
For transaction control, have you looked at the Backup and Recovery manual,
SC41-5304? Part 6 is all about journaling and commitment control. Not
exactly light reading...
Sam
"Charles Wilt" <charle...@worldnet.no.spam.att.net> wrote in message
news:MPG.1426bb99...@netnews.worldnet.att.net...
>
Because I already know how to do this with RPG, I want to understand how
to do it with SQL. Besides, the rest of the project worked in SQL quite
nicely; this part was the only catch. I've been trying to use SQL as
much as possible when dealing with "set at a time" processing.
>
> Question 2: Why use an intermediate file for the item numbers? Couldn't
> the trigger program just write records to or update records in the update
> file? That would only leave the problem of locking the update file for
> clearing after it has been sent to the customers.
Actually, the process that builds the intermediate file is left over from
some older stuff we don't do anymore. I thought about changing it, but I
decided there was no real need to, plus I wondered what effect it would
have on performance. Can a trigger program try to do too much?
>
> As for the understanding of commitment control: I too have a basic
> understanding of it, but I wasn't aware of there being a lot more to it :-)
Well, the way I (don't quite) understand it, there are a lot of options
on how the files you are working with under commitment control appear to
other jobs. I'm hoping for a better understanding of what is going on,
beyond just "changes are either committed to the DB or rolled back."
>
> Joep Beckeringh
>
>
Thanks,
Charles
Thanks for the reply. It may be my answer. However, I was hoping for a
method that would allow me to stay away from processing each record.
Perhaps that is not possible.
Thanks again,
Charles
In article <uI9cV1EHAHA.319@cpmsnbbsa09>, no...@none.com says...
Question 1:
I do not understand your trigger. If you insert or update a record which fires a
trigger, then you only get control back in your program when the trigger HAS
finished! So it could never be that your trigger is still active.
Or have I misunderstood something?
Question 2:
You need a primary key to do this. Simply use a SELECT * on your table.
Write every row to your other file and delete the current record.
That way you will never get tangled up with newly inserted data.
Cheers,
Frank.
The trigger is fired by programs outside the process that we are talking
about. So it would be very easy for it to be fired while I'm processing
the file it updates.
> Question 2:
> You need a primary key to do this. Simply use a SELECT * on your table.
> Write every row to your other file and delete the current record.
> That way you will never get tangled up with newly inserted data.
Are you talking about using a cursor here or what? I don't understand.
>
> Cheers,
> Frank.
>
>
Thanks,
Charles
You open a new cursor within your RPG code with, for example:
SELECT * FROM AAAAAAAA.BBBBBBB
Then you get a result set back. This result set is constant.
This means that your pgm does not see newly inserted records (trigger).
Copy every row of data to your extra table and issue a delete
command for every row:
DELETE FROM AAAAAAAA.BBBBBBB WHERE <primary key> = xxxx
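In embedded SQL the whole loop could look like this (EXTRATBL and the
key field ITEMKEY are only examples):

    EXEC SQL DECLARE c1 CURSOR FOR
             SELECT * FROM AAAAAAAA.BBBBBBB;
    EXEC SQL OPEN c1;
    -- for every row fetched from c1:
    EXEC SQL INSERT INTO AAAAAAAA.EXTRATBL (ITEMKEY)
             VALUES(:itemkey);
    EXEC SQL DELETE FROM AAAAAAAA.BBBBBBB
             WHERE ITEMKEY = :itemkey;
    EXEC SQL CLOSE c1;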
Cheers,
Frank.
Someone else has already suggested using a cursor with the FOR UPDATE
clause. A cursor may be my only solution, but what I'm really looking
for is some way to avoid having to work with each record. Maybe it is
not possible.
Charles
In article <39bf2f5f...@news.btx.dtag.de>,
fhaupt...@storchenmuehle.de says...
Repeatable Read (RR)
Level RR ensures:
- Any row read during a unit of work is not changed by other
activation groups that use different commitment definitions until the
unit of work is complete. (4)
- Any row changed (or a row that is currently locked with an UPDATE
row lock) by another activation group using a different commitment
definition cannot be read until it is committed.
In addition to any exclusive locks, an activation group running at
level RR acquires at least share locks on all the rows it reads.
Furthermore, the locking is performed so that the activation group is
completely isolated from the effects of concurrent activation groups that
use different commitment definitions.
DB2 for AS/400 supports repeatable-read through COMMIT(*RR).
Repeatable-read isolation level is supported by exclusively locking the
tables containing any rows that are read or updated. In the ANS and ISO
standards, Repeatable Read is called Serializable.
Now the way I read this, if I start commitment control with *RR and do a
select * from myfile, I should be able to do another select * from
myfile and get the exact same rows even if additional rows have been
added. Thus, it seems to me that I should be able to issue a
select * from myfile, followed by a delete from myfile, and have only
the rows I read in the first place actually be deleted. Any additional
rows added would not be seen by my delete statement.
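In other words, with the program running under COMMIT(*RR), something
like this (myfile is just a placeholder):

    SELECT * FROM myfile   -- read the "frozen" set and build the update file
    DELETE FROM myfile     -- should this delete only the rows read above?
    COMMIT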
Anybody agree or disagree with me on this? Even better, anybody actually
used this?
Thanks,
Charles
In article <MPG.1426bb99...@netnews.worldnet.att.net>,
charle...@worldnet.no.spam.att.net says...
... "Repeatable-read isolation level is supported by exclusively locking the
tables containing any rows that are read or updated." ...
in other words, other applications can't access the table(s) in question
until the transaction "commits"?
For a multi-user application, there should be a more elegant solution. But I
think this is one of the things I have to check for myself, too.
Michael Justin
My AS/400 & Java newbie page: http://www.ginko.de/user/michael.justin/as400/
Thanks for pointing that out. I didn't catch it when I read through it
the first dozen times.
Charles
In article <39c1...@news.ginko.net>, michael...@postkasten.de
says...
Imagine what it would do if the transaction contained a small,
intensively-updated, highly-shared file.
I can see a definite place for this locking level but I can see it wreaking
havoc on a busy system running a highly integrated application where some
files are extremely "hot". The phrase "handle with care" comes to mind.
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Richard Jackson
Mailto:richard...@richardjackson.net
http:\\www.richardjacksonltd.com
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"Charles Wilt" <charle...@worldnet.no.spam.att.net> wrote in message
news:MPG.142ae574...@netnews.worldnet.att.net...
> I've got a trigger program that writes the item number of changed items
> out to another file. Now, I need to use those item numbers to build an
> update file to send to my customers.
It would be easier to implement a 'last changed' field in the item table,
which contains the DB2 timestamp of the... well, the last change (set, for
example, by the update trigger).
Then, you could create the update file using this field and only export
records which have been changed since the last update file generation. To
keep track of the last updated field, you could use a small table which
contains one record for every update, including the last timestamp which has
been processed.
Then you can do a "select item_id from item where last_change >
:last_change", where :last_change contains the highest value of the field
"last_change" from the last update.
just my 0.1 euro :*)
So my 'last changed' field would need to be 'last time a PC Catalog field
changed'.
I'm not sure that would be the best solution.
Charles
In article <39c1...@news.ginko.net>, michael...@postkasten.de
says...