The file does not arrive at consistent times and can arrive one or more
times a day.
Looking for ideas to kick off the processing at our end.
Discarded the idea of a trigger as impractical because it would fire for
every record. It seems we're stuck with a file watcher: a never-ending
program that wakes up periodically to see if data has arrived. The
developer would like something more elegant. Any thoughts appreciated.
Sam
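The never-ending "file watcher" described above can be sketched generically. This is a Python stand-in, not IBM i CL; the function and parameter names are illustrative only, and on IBM i the equivalent would be a submitted NEP using DLYJOB between checks.

```python
# Minimal sketch of a polling file watcher: wake periodically, check
# whether the expected file has arrived, process it, and go back to sleep.
import os
import time

def watch(path, process, poll_seconds=300, max_polls=None):
    """Poll for `path`; when it appears, process it and remove it.

    `max_polls` bounds the loop for testing; a real watcher runs forever.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        if os.path.exists(path):
            process(path)
            os.remove(path)  # so the next arrival is detected as new
        time.sleep(poll_seconds)
        polls += 1
```

The drawback the thread is wrestling with is visible here: the poll interval is a guess, trading latency against wasted wake-ups.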
Check out the FTP exit points and see if you can find something there
that'll do you some good.
http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm?info/rzaiq/rzaiqftpscon.htm
--
82. I will not shoot at any of my enemies if they are standing in front of
the crucial support beam to a heavy, dangerous, unbalanced structure.
--Peter Anspach's list of things to do as an Evil Overlord
I was going to offer the following also, but it seems a 'manual
reset' of the monitored event is required; there may be a way to effect
the 'Reset' from the server, but I did not see anything obvious. I
believe the FTP exit program(s) are probably the best approach.
Management Central provides for file monitors, with a triggered action
as a response. Using that as built-in polling, versus your own coded
NEP, might be an option. I just tested it with a file MYLIB/TARGET:
when "Modified", with a polling interval of "5 minutes", it notifies with
the triggered command action SNDMSG MSG('MYLIB/TARGET WAS MODIFIED')
TOUSR(MYUSR). AFAIK the monitors for Text [& maybe Size] are supported
only for stream files [*STMF], not database *FILE.
The function requires that the file already exist; since the trigger
idea was excluded not for that reason but because of the per-row firing,
presumably that is a given, i.e. the FTP PUT is into an existing file.
The triggered action could be a CALL to a program that adds a message
to a message queue or data queue, but then the NEP/server program to
monitor the queue is still required, so what would be the point? So
logically the triggered action could be the command string that was
intended to be sent by the proposed QUOTE RCMD, possibly minus the
SBMJOB CMD() wrapping. Whether to skip submitting the job depends on
whether the job doing the triggering [job QZRCSRVS] has a desirable
work-management setup for the work to be done.
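The trigger-to-queue pattern Chuck describes can be sketched with a Python queue as a rough stand-in for an IBM i data queue (*DTAQ); the names here are illustrative, not real IBM i APIs. The point is that the never-ending job blocks on the queue rather than polling the file itself.

```python
# Rough analog of the pattern: the monitor's triggered action enqueues an
# entry, and a single never-ending server job blocks on the queue, waking
# only when there is actually work to do.
import queue

dtaq = queue.Queue()  # stand-in for an IBM i data queue (*DTAQ)

def triggered_action(filename):
    """What the triggered command's CALLed program would do: enqueue."""
    dtaq.put(filename)

def serve_one(handler, timeout=1.0):
    """One iteration of the never-ending server job: block, then dispatch."""
    name = dtaq.get(timeout=timeout)  # blocks until an entry arrives
    handler(name)
```

As the post notes, though, if the triggered action can already run the intended command directly, the extra queue-and-server layer buys nothing.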
Regards, Chuck
--
All comments provided "as is" with no warranties of any kind
whatsoever and may not represent positions, strategies, nor views of my
employer
Saml wrote:
> Another company is FTPing a file to us using a VPN. After the file arrives
> we'd like them to kick off a processing program on our box, using the QUOTE
> command to submit a job. The developer has provided them the exact syntax
> to use, but they are adamant that all they will do is deliver the file, and
> only the file. (I don't understand the contractual issues involved, just
> the problem...)
>
> File does not arrive at consistent times and can arrive 1 or more times a
> day.
>
> Looking for ideas to kick off the processing at our end.
>
> Discarded the idea of a trigger as impractical because it will fire for
> every record. Seems like we're stuck with a file watcher, never-ending
> program that wakes up periodically to see if data has arrived. Developer
Check out my FTPTOOL application at www.bvstools.com/ftptool.html. This
is one of the processes it can accomplish (along with FTP security).
The only issue you may have (no matter what you use) is that exit points
fire when the file transfer starts, not when it ends. So you may need a
simple DLYJOB command.
With FTPTOOL you set up "triggers" to fire off of any FTP command in any
directory (they can all be separate). All file information is available
as parameters as well, so you can work with the file name, path, etc.
Brad
www.bvstools.com
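One generic way around the fixed DLYJOB guess (this is a common technique, not a documented FTPTOOL feature; the function below is a hypothetical sketch) is to wait until the file's size stops changing, which suggests the PUT has finished writing.

```python
# Hedged sketch: instead of a fixed delay, poll the file's size and treat
# two consecutive identical readings as "the transfer has finished".
import os
import time

def wait_until_stable(path, interval=1.0, max_checks=60):
    """Return True once two consecutive size checks match, else False."""
    last = -1
    for _ in range(max_checks):
        size = os.path.getsize(path)
        if size == last:
            return True
        last = size
        time.sleep(interval)
    return False
```

It is still a heuristic: a stalled transfer looks the same as a finished one, which is why the lock-based approach discussed later in the thread is more robust.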
Another variation of this would be to have a 'put' of the original
file, and subsequently an 'append' to a secondary file, with the name
of the file that was put.
You would then have a never-ending program reading the secondary file,
with an override specifying an EOF delay (OVRDBF EOFDLY). This program
would awaken when a record is appended, submit the job to process the
file, and remove the record. The secondary file could be cleared once a
night.
The advantage is that you don't start processing the arriving file until
the sender appends, i.e. has released, the first file.
Regards
Niels Steen Madsen
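The control-file idea above can be sketched as follows. This is a Python illustration using a plain text file in place of the database member; on IBM i the reader would be a never-ending program using OVRDBF with EOFDLY, and the function name here is invented for the sketch.

```python
# Illustrative sketch of the secondary "control file" scheme: each line of
# the control file names a data file that the sender has released and that
# is therefore ready to process.
def drain_control_file(control_path, process):
    """Process every queued filename, then clear the control file."""
    with open(control_path, "r+") as ctl:
        for line in ctl:
            name = line.strip()
            if name:
                process(name)
        ctl.seek(0)
        ctl.truncate()  # remove the records we have handled
```

The key property, as the post says, is that the data file is only named in the control file after the sender has finished putting it.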
Regards, Chuck
CRPence wrote:
> Past requests to resolve the /same/ requirement have often been
> handled satisfactorily by the sender doing two PUT requests; rather than
> PUT and then QUOTE RCMD. The actual file is PUT, then an additional PUT
> of one row of data into a specific file that has an insert trigger. The
> named as triggered file serves only the one purpose, to submit the job
> against the pre-defined file to which the first PUT was directed. More
> sophisticated cases have the sender put specific formatted text which
> includes the library.file.member that was just PUT. Passed necessary
> information, the submitted job can delete the inserted/triggered row;
> with REUSEDLT(*YES), the file should not grow excessively. Often the
> requirements come from the receiving side, because they have disabled
> the sender from issuing QUOTE RCMD. <<SNIP>>
A couple have suggested a record in a second file after transmission is
successful, and this would be an excellent solution. However, as I said in
my original post, "they are adamant that all they will do is deliver the
file, and only the file". So apparently plugging a record into a second
file is either beyond their ability, or against their standards, or
prohibited by contractual agreement, or maybe there is some legal angle,
or something.
Sam
"nsm" <n...@privat.dk> wrote in message
news:1190545250....@k79g2000hse.googlegroups.com...
With that restriction, what exactly are all of the knowns/givens
involved with the receipt of the file(s)? Because...
Knowing that a specific user name accessing via FTP will only ever
periodically PUT a specific file.member, allows using the FTP login exit
point to identify the start of such a well-known transaction. At that
point, if the prior receipt-processing has not completed, the login can
be denied and the condition notified. If the last transaction has
completed, the exit program can clear the file.mbr [or more likely that
is indication that the prior receipt has already completed], then
allocate the file.mbr, and finally submit the asynchronous
receipt-processing to await the termination of the FTP PUT [for which
receipt of the FTP QUIT subcommand will have the lock implicitly
dropped], at which time the received file.mbr.data can be processed.
Regards, Chuck
Saml wrote:
> Thanks for the suggestions so far.
>
> A couple have suggested a record in a second file after transmission is
> successful, and this would be an excellent solution. However, as I said in
> my original post "they are adamant that all they will do is deliver the
> file, and only the file". So apparently plugging a record into a second
> file is either beyond their ability, or against their standards, or
> prohibited by contractual agreement, or maybe there is some legal angle,
> or something.
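Chuck's lock-based technique can be illustrated with POSIX advisory locks standing in for ALCOBJ/DLCOBJ (a loose analog only; the function names are invented for the sketch). The exit program takes an exclusive lock for the duration of the FTP session, the background processor blocks on its own lock request, and the lock's release at session end is the signal that the received data is ready.

```python
# Loose analog of the ALCOBJ approach: the session holds an exclusive
# advisory lock on the target file; the processor's blocking lock request
# plays the role of ALCOBJ ((thelib/file *FILE *EXCL mbr)) WAIT(value).
import fcntl

def session_acquires(path):
    """Exit-program side: take the exclusive lock for the FTP session."""
    fh = open(path, "a")
    fcntl.flock(fh, fcntl.LOCK_EX)
    return fh  # closing fh (session end) releases the lock

def processor_waits(path, process):
    """Background-job side: block until the lock is free, then process."""
    with open(path, "a") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # blocks while the session holds it
        process(path)
        fcntl.flock(fh, fcntl.LOCK_UN)
```

Unlike a size-stability heuristic, this waits on an explicit event: the sender's session ending drops the lock, so there is no guessed delay.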
That could not be implemented exactly, in something generic like
FTPTOOL, due to the cases where the target of the PUT would not be known
or perhaps not exist already. The tool documentation could allude to
doing those allocate requests, to enable delaying action until after the
FTP session ends; maybe even as part of the tool, the /same/ idea could
be implemented on a data area or a row for the LOGged activity. Or if
you wanted to implement a more generic maintenance/defaults, you could
add a /notify object/ for the end of the FTP session using the same
technique, or courtesy of that feature [NFYOBJ()] in commitment control.
Regards, Chuck
Bradley V. Stone wrote:
> Check out my FTPTOOL application at www.bvstools.com/ftptool.html. This
> is one of the processes it can accomplish (along with FTP security).
CRPence wrote:
> The end of the FTP session can be inferred, for processing a PUT of
> any one file.mbr, given there is a known existing file.mbr that is the
> target of the PUT. In the exit program, an ALCOBJ of the file.mbr can
> be performed, so the asynchronous processor of that file.mbr can be
> activated immediately. With this method, instead of the background job
> specifying some guess for DLYJOB DLY(value), that job can instead wait
> on its own request to allocate that file.mbr with the request:
> ALCOBJ ((thelib/file *FILE *EXCL mbr)) WAIT(value).
>
> That could not be implemented exactly, in something generic like
> FTPTOOL, due to the cases where the target of the PUT would not be known
> or perhaps not exist already. The tool documentation could allude to
> doing those allocate requests, to enable delaying action until after the
> FTP session ends; maybe even as part of the tool, the /same/ idea could
> be implemented on a data area or a row for the LOGged activity. Or if
> you wanted to implement a more generic maintenance/defaults, you could
> add a /notify object/ for the end of the FTP session using the same
> technique, or courtesy of that feature [NFYOBJ()] in commitment control.
I'm passing on all the responses to the developer, and I'll see what he says
next week.
Sam
"CRPence" <crp...@vnet.ibm.com> wrote in message
news:46f6cfbd$1@kcnews01...
Also, a lot of the time these files do not exist before putting (i.e. they
are unique file names being uploaded, like EDI files, statements, etc.).
Because of the triggering built into FTPTOOL, you can do pretty much
anything, and know the path and filename of the file that is being sent
through FTP, and each FTP command itself can have a different trigger.