I am currently using embedded SQL in my RPGLE program.
We are calling the RPG program via a web service.
After a couple of runs there are open files, and a few other files
created that seem to be access paths or some kind of optimisation
paths?
Is there any way of stopping these files from being created in QTEMP?
They seem to keep data paths open.
The files being created are:
*query000n
These seem to keep any files used in prior queries open.
Ta muchly,
Assuming that is the case, then to respond to a question with another
question... Why does it matter that the files are created in QTEMP or
remain open? Or, for what reason is it desired that they be closed?
Regards, Chuck
--
All comments provided "as is" with no warranties of any kind
whatsoever and may not represent positions, strategies, nor views of my
employer
bak...@optusnet.com.au wrote:
> <<SNIP>>
Hi chuck,
The reason I would like them closed is that there are multiple web
services logging on via the same user profile from the same app server
(IP) through the QZRCSRVS server; the jobs being called are in some
cases RPGLE and SQLRPGLE.
There can be many calls via these web services, and the RPG programs
open many files and change the library list depending on their
function.
I am worried about file contention; it seems the QTEMP/*query000n
objects also keep the ODPs for the files they have used, and these
files may be used by other called programs that update/delete/add
records.
I know it is good for performance, but I am still worried; I will take
the performance hit.
All other RPGLE programs clean up after they have run and don't leave
ODPs open. This is important, as some processes at night require
exclusive access to the files that are now left with open ODPs from
the QTEMP/*query000n objects.
I hope this makes sense...
Ian.
Also... review the "Close SQL cursor" CLOSQLCSR() and activation
group settings for the program.
Regards, Chuck
TadPole wrote:
>
> <<SNIP>>
Hi Chuck,
Thanks for the general comments. This is what I did, and it seems to
have worked:
1. Change the activation group on the main program that is being
called to *NEW, i.e.
H dftactgrp(*no) actgrp(*new)
2. In the SQL change the SET command to:
C/Exec SQL
C+ Set Option closqlcsr=*endactgrp, commit=*none
C/End-Exec
This means that the program, when invoked, will run in a new
activation group; when the activation group ends, the closqlcsr
setting cleans up.
This seems to work, but there is one last file left open:
QSYS2/Qasqresl
It is a logical file and seems to be associated with DB2 in some form,
but looking at the file, nothing jumps out.
It would be better to provide the ability to ask the service either to
close down the activation group [RCLACTGRP named] or to remove the
lock [another job does ALCOBJ CONFLICT(*RQSRLS); implicitly by a
system request that conflicts, or explicitly by user request].
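As a sketch of the lock-removal option (library and file names below
are made up), another job can ask that the pseudo-closed ODPs over a
file be hard-closed by issuing a conflicting allocation with
CONFLICT(*RQSRLS), e.g. through QCMDEXC from RPGLE:

     D Cmd             S            100A
     D CmdLen          S             15P 5 inz(100)
      * CONFLICT(*RQSRLS) asks any job holding a pseudo-closed ODP
      * over MYLIB/MYFILE to hard-close it; QCMDEXC signals an
      * escape message if the lock is not obtained in time
     C                   eval      Cmd = 'ALCOBJ '
     C                               + 'OBJ((MYLIB/MYFILE *FILE *EXCL)) '
     C                               + 'CONFLICT(*RQSRLS)'
     C                   call      'QCMDEXC'
     C                   parm                    Cmd
     C                   parm                    CmdLen
      * release the allocation once obtained
     C                   eval      Cmd = 'DLCOBJ '
     C                               + 'OBJ((MYLIB/MYFILE *FILE *EXCL))'
     C                   call      'QCMDEXC'
     C                   parm                    Cmd
     C                   parm                    CmdLen

The same two commands could just as well go in the CL of the nightly
process, right before it needs the exclusive access.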
QASQRESL is the /routine resolution/ file which is opened and tracked
in static storage for the job; i.e. once open, it remains open for the
performance benefit, for future function and procedure name resolution
activity for SQL processing. Since it is a system file, there should be
no concern for it being open -- for read only, no commit.
Regards, Chuck
TadPole wrote:
> On Dec 13, 2:25 pm, CRPence <crpe...@vnet.ibm.com> wrote:
>> No time to comment now... except: for the functions that require
>> exclusive use, if the i5/OS needs it for something, that probably
>> causes the close to occur automatically, with a wait time according
>> to the DFTWAIT() of the job. For a user-requested exclusive, AFaIK
>> both the SQL LOCK EXCLUSIVE and ALCOBJ CONFLICT(*RQSRLS) will
>> signal, the same as those OS requests, that the pseudo-closed ODP
>> be closed. So if there is only speculation of some interference,
>> versus any actual experience of a conflict, it is probably worth
>> investigating actuals.
>>
>> Also... review the "Close SQL cursor" CLOSQLCSR() and activation
>> group settings for the program.
>>
>> TadPole wrote:
>>> <<SNIP>>
Thanks Chuck,
I suppose all of the above really stems from a lack of knowledge about
how programs will function when called via JT400.
They create a QZRCSRVS job, and I suppose the unknown (for me) is: if
there are multiple requests at the same time to that job, from the
same app server with the same logon ID, will they run concurrently or
in a FIFO manner? If they run in a FIFO manner, I am not overly
concerned.
The library list changes seem to be left at the job level, and are not
associated with the activation group?
If I could understand how that works, and believe me I have been
trawling the net for answers, I could write the code to suit.
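For what it is worth, since the library list is a job attribute rather
than an activation-group attribute, one defensive sketch (library
names here are made up) is to set the user portion of the library list
explicitly at program entry, so each call starts from a known state
regardless of what a prior call in the same QZRCSRVS job left behind:

     D Cmd             S            100A
     D CmdLen          S             15P 5 inz(100)
      * replace the user portion of the job's library list so this
      * call does not inherit a prior call's CHGLIBL
     C                   eval      Cmd = 'CHGLIBL '
     C                               + 'LIBL(MYAPPLIB MYDATALIB QTEMP QGPL)'
     C                   call      'QCMDEXC'
     C                   parm                    Cmd
     C                   parm                    CmdLen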
Thanks Ian.
Thanks for the reply, most helpful, just a couple of questions
> Each request will be allocated to a separate (concurrent) QZRCSRVS
> job,
Does this have a new job number? I have been running my web services
in a test harness, and I run about 20 requests at the same time, but
they all seem to be sent to the same QZRCSRVS job. Is my AS/400 too
fast, or do I need to write an automated test harness to hammer these
processes so they create multiple QZRCSRVS jobs?
How does a timeout occur in a QZRCSRVS job? Do I need to configure
something?
Thanks, Ian.
> How does a timeout occur in a QZRCSRVS job? Do I need to configure
> something?
>
Sorry, I still had my mind on JDBC connections, rather than program
calls.
As far as I know, a program call itself will not time out, but the
underlying program can time out if the QZRCSRVS job encounters an
object lock and exceeds the allowed LCKW time.
Also, if your AS400 Java object is set to use sockets (can be checked
via the isMustUseSockets method), then socket timeouts can occur
(depending on what SocketProperties have been set).
Thanks for the detailed reply, I am starting to get a clearer picture.
> Are you really running 20 threads generating simultaneous requests, or
> are 20 requests getting generated in very quick succession?
The requests are from the same webservice, but multiple people are
invoking the webservice, so my assumption here would be that there
would be 20 requests generated in very quick succession.
(I am using the Web Services wizard from within WDSCi to create the
web services and Java code; I have some, but very limited, Java
experience.)
> How fast is your AS/400? (How many CPW?)
An 810 - how do I find the CPW? - but it is a commercial-sized box.
> Also, are you sure you are using the same QZRCSRVS job? (Are you
> determining this from the AS/400 job logs, or from some Java logging?)
I review the job(s) via Ops Navigator; you can subset this to the
active QZRCSRVS jobs, i.e.
Work Management > Active Jobs
Right click > Customize this view > Include
Subsystem = QUSERWRK
User = QUSER
Name = QZRCSRVS
As far as I can determine we have a very fast iSeries; in most cases
it uses the same QZRCSRVS job, but when I load the tasks and run them
concurrently, another QZRCSRVS job is created with the same user ID
from the same server.
The QZRCSRVS job(s) only finalise when the app server (application)
that requested the initial job has been shut down; the QZRCSRVS job
then stops.
I can also stop the job on the iSeries side via Ops Navigator.
Ian.
Thanks all for the help in this post; we are now integrating the web
services with a workflow process, and all is well.
I will keep this post open as a diary for other hurdles which may
occur in this process.
Again, thanks for your help on this one..
Ian.
Hi, I thought I would keep a log of what we are doing:
We tested the web services with the workflow application that needed
to call them, and all was OK.
We are now fine-tuning the iSeries code (SQL) to allow for better
performance.
The tool we are using is JMeter; we are writing tests and running
them. This is a good tool and seems to perform a multitude of tests.
We ran the tool with 20 connections set and 20 iterations of each of
the tests; it created 20 concurrent connections and performed the
tests. After reviewing the log details I am tuning the RPG/SQL and
creating new indexes to allow the programs to run quicker.
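For the record, the indexes themselves are plain SQL; the library,
table and column names below are made up, and on V5R4 the index
advisor in iSeries Navigator (or a database monitor) can suggest which
columns are worth covering:

CREATE INDEX MYLIB/ORDHDR_IX1
    ON MYLIB/ORDHDR (CUSTNO, ORDDATE)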
The AS/400 seems to be dealing with the load.
All seems to be working well; this seems like a good way to keep the
legacy system useful while giving access to other, newer applications.
Ian.
If your "AS/400" is running V5R4, then check out all the valuable
information that the latest OS release gives ;-)
It may save you a lot of time :-)
> The as/400 seems to be dealing with the load.
Help it with the needed indexes (like every DB) and it will work even
better for you...
> All seems to be working well, this seem like a good way to keep the
> legacy system still useful and giving access to other newer
> applications.
Have you read the "Modernizing IBM eServer iSeries Application Data
Access - A Roadmap Cornerstone" redbook? ;-)
> Ian.
Regards
Stefano P.
--
"No undertaking, however small,
can have beginning and end without these three things:
that is, without knowledge, without ability, and without loving will"
[Anonymous Florentine, 14th century]
(remove the "pinzillacchere" from the email address ;-)