Hi all,

We have an AS/400 9406-S10, with about 12 users on the machine. We are
running an accounting package written in COBOL, and a few of the users
run queries on the database every so often.

Now, is anyone familiar with this machine? Whenever a query or any
report is running, everyone starts complaining about how dog slow the
computer is. Is there anything we can do besides getting another
machine? Should this machine be big enough for an office our size?
We've had it for less than 3 years, and the company that sold it to us
said it was plenty big.

Are there any tuning things I can do or look for, files to clear, logs
to delete, etc.?
Thanks,
Julien
The "S" in S10 means "server model". These models are optimized for
server/batch work and have limited capacity for interactive work. For
the gory details, see this Support Line technical document (sorry - very
long URL, so you may have to copy/paste sections...):
Almost certainly you want to reduce the amount of work done
interactively - i.e., via "green screen" (aka 5250 or 5250 emulation).
For example, run the queries and reports in batch, or, if your users
have PCs, submit queries via Client Access, the DB2 Connect client, etc.
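For example, a batch submission from CL might look like this (the
library, query, and job names are made-up examples):

```
/* Submit a saved query to a job queue instead of running it  */
/* in the 5250 session; the session stays responsive and the  */
/* server model's batch capacity does the work.               */
SBMJOB CMD(RUNQRY QRY(QRYLIB/MYQRY)) JOB(BATCHQRY) +
       JOBQ(QGPL/QBATCH)
```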
--
Karl Hanson
--
Glen Ford
info@nospam can-da.com (remove nospam and replace info with consult)
Can Da Software, Mississauga, ON Canada
Project Management, Systems Analysis & Design, LANSA/AS400/Windows
Development
read our models and articles at www.can-da.com
"julien mills" <jfm...@ix.netcom.com> wrote in message
news:39EF3625...@ix.netcom.com...
> <original post snipped>
If you have a server model then that will happen. Pretty much any time
there's more than one user actually using the system interactively the
performance will go downhill. (The actual algorithm used is
incomprehensible, but basically some code in the system watches the
interactive usage and throttles things when it gets above a certain
point.) Running the queries (and similar long-running operations) in
batch will generally prevent this problem.
This is too difficult to answer as asked.
What do you consider a "query"? Are these predefined within the
package, or do you let the users write their own?
Query and SQL are dangerous precisely because they make it so easy to
make an expensive mistake.
Regards,
Paul
--------------------
julien mills wrote in message <39EF3625...@ix.netcom.com>...
<original post snipped>
The contents of this message express only the sender's opinion.
This message does not necessarily reflect the policy or views of
my employer, Merck & Co., Inc. All responsibility for the statements
made in this Usenet posting resides solely and completely with the
sender.
--
Systems and application support IBM AS/400
apea...@home.com
203 348 9937
ip address 24.228.25.35
Instant message AvromAvrom
Internet connection 100 mhz
thank you, Avrom
I guess the consensus is that we need to run our queries in batch.
Maybe our machine is a little on the light side, but if we're careful
we'll be OK.
I'd say QPFRADJ should not be set to '2', because our experience showed
that:
1) the system needs extra resources just to run the adjustment, and
2) the auto-adjuster sets a low value for a memory pool when no job is
using it. When you then submit a job to that subsystem (say QBATCH),
the job may not go active because there isn't enough memory to activate
it - the job is not yet active, so the system thinks the pool doesn't
need any more memory because there is no job there.
I don't know whether the auto-adjuster has become smarter in newer
versions; we hit these problems on V3R2.
We set QPFRADJ to '0', adjust the memory pools manually, and set the
paging option to fixed in WRKSYSSTS.
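For what it's worth, the manual setup is roughly this (the pool size
and activity level below are examples only - take your real numbers
from WRKSYSSTS):

```
/* Turn the automatic performance adjuster off               */
CHGSYSVAL SYSVAL(QPFRADJ) VALUE('0')

/* Then size the shared pools by hand; SIZE is in KB         */
CHGSHRPOOL POOL(*INTERACT) SIZE(32000) ACTLVL(15)
```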
julien mills <jfm...@ix.netcom.com> wrote in message
news:39F44897...@ix.netcom.com...
All blanket statements are false ;-)
Well, that "query is real inefficient" is IMO both 'blanket' and
'false'. This should be obvious in that both use the same engine... and
the cost of processing outside that engine is typically insignificant.
In fact, the query feature is simply scoped more narrowly <i.e. not
generally enabled for programming, beyond OPNQRYF and its API> to
report-only tasks...
Regards, Chuck
All comments provided "as is" with no warranties of any kind whatsoever.
> All blanket statements are false ;-)
In the words of a certain pointy-eared gentleman: "I am lying." (OK, so
those who know what I'm talking about know it wasn't him - naah. And
those who don't -- go back to bed, it's after your bedtime :> What can
I say, I'm getting grumpy in my old age! Besides, it's after MY
bedtime.)
>
> Well, that "query is real inefficient" is IMO both 'blanket' and 'false'.
I don't believe it is either (blanket or false). Sorry. My experience and
knowledge of how it works says inefficient.
> This should be obvious in that both use the same engine... and the
> cost of processing outside that engine is typically insignificant --
> and in fact, the query feature is much more specific <ie. not
> generally enabled for programming; beyond OPNQRYF and its API> to a
> task of report-only...
Note: I said inefficient... not useless or inappropriate. Any tool
which needs to create a massive workfile containing every possible
combination of records is BOUND to be inefficient. That's why SQL is
better... you can write the SQL in multiple steps, so the result is a
much smaller read base and therefore much faster reads. (Yes, you can
also write inefficient SQL that is just as bad as Query.)
Also, Query is not optimized, while SQL is optimized automatically.
(And I don't believe they use the same engine any longer... I may be
wrong about that, but one of the IBMers will have to confirm or correct
it.) Also please note that Query can be used for non-report purposes.
Having said all that, I use OPNQRYF a lot, as I do SQL. I also use good
ol' fashioned Query. I've even written systems in DFU.
For a user, Query is usually sufficient. HOWEVER, power users (such as
my friend the BA and his maxed-out files) can kill the system. Any user
capable of writing queries that large shouldn't be using Query... at
least not for inquiries of that size. A programming language (yes, SQL
is one) is needed; the trade-offs just can't be made otherwise.
(Actually, in this case he created 5 files and then merged them, so
yes, it is possible... but he had to be told to break them up.)
Besides... the question was "why does my system die when my users run
queries?" Running Query interactively was one answer, which I didn't
repeat (until now :>), and overlarge queries was my contribution.
>
> Regards, Chuck
> All comments provided "as is" with no warranties of any kind whatsoever.
Your choices come down to:
- get a hardware upgrade (big $$$)
- tune your machine and programs (low $$$, lots of time, maybe a big
improvement, maybe not)
- purge and archive your data
Most shops don't purge and archive because the users won't let them.
They end up carrying 20 years of shipment history to support the VP of
Sales' quarterly report. If 18 of those 20 years were archived to
another library (getting them out of the production library), the
production system would run considerably faster. The batch window would
drop, as would backup times. Remember - disk space is cheap; a
processor upgrade to process all that data can be terribly expensive.
What most people don't realize is that it's easy to build that archived
data back into the few places where you really need it (the quarterly
report). Find the logical file that feeds the program and make up a new
one that looks at both the production and the archive files through a
single record format. For example, in the DDS:
R RECFMT1                   PFILE(LIVELIB/SHIPHIST ARCLIB/SHIPHIST)
Most people don't think this works because RPG won't compile over it.
The trick is to not compile the program - just put the new LF in the
library list, or use an OVRDBF, and turn off level checking on the new
LF. The RPG program (or even Query) works great.
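The override might look roughly like this (SHIPBOTH and QTRRPT are
made-up names for the combined LF and the report program):

```
/* Point the program's open of SHIPHIST at the combined LF,  */
/* and turn off level checking since we never recompiled.    */
OVRDBF FILE(SHIPHIST) TOFILE(MYLIB/SHIPBOTH) LVLCHK(*NO)
CALL PGM(LIVELIB/QTRRPT)   /* the quarterly report program   */
DLTOVR FILE(SHIPHIST)
```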
So... now that you know how to do it, you may need some purging and
archiving tools - check out ARCTOOLS/400 (http://www.arctools.com) for more
information - purging and archiving without programming and without locking
files. Very powerful.
Good luck.
--
+++++++++++++++++++++
Improve Performance _and_ Get Access to Your Historical Data
Yes, you CAN have it both ways.
Purge and Archive without Programming with ARCTOOLS(tm)
http://www.arctools.com
mailto:in...@arctools.com
+++++++++++++++++++++
"Avrom Pearson" <apea...@home.com> wrote in message
news:39F1CE1C...@home.com...
> Run queries in batch, no exceptions. What is the percentage of disk
> capacity in use?
You do realize that Chuck Pence IS one of the IBMers, in fact one of the
database developers, don't you?
--
Dave Shaw
Spartan International, Inc.
Spartanburg, SC
To subscribe to the MAPICS-L mailing list send email to
MAPICS...@midrange.com or go to www.midrange.com and follow the
instructions.
"Glen Ford" <info@-nospam-can-da.com> wrote in message
news:uMsK5.7441$b8.2...@quark.idirect.com...
Those buggers who are mucking up your system with queries can easily
be frustrated by setting the system value QTSEPOOL to *BASE. This
means that once they have used more than their time slice (2000 ms of
CPU for a typical interactive job) they are moved to the base pool
until their manners improve ;-))
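The change itself is one command (you'll need authority to change
system values, of course):

```
/* When an interactive job hits time-slice end, move it to   */
/* the base pool instead of letting it keep fighting the     */
/* other 5250 users for interactive memory.                  */
CHGSYSVAL SYSVAL(QTSEPOOL) VALUE(*BASE)
```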
John Muir
On Thu, 19 Oct 2000 13:57:57 -0400, julien mills
<jfm...@ix.netcom.com> wrote:
><original post snipped>
The optimizer, beyond specific programming features of SQL, is the same
for all 'query' interfaces to the database on OS/400. You can see this
by issuing STRDBG UPDPROD(*YES) prior to OPNQRYF, RUNQRY, or an SQL
SELECT and then reviewing the joblogs. Query/400 and OPNQRYF go through
the same optimization phase. OPNQRYF has more parameters and thus
allows more user-'forced' implementation choices, whereas Query/400
infers them from the specified reporting requests. Most significant is
the OPTIMIZE parameter on OPNQRYF, which for Query/400 is determined by
the OUTPUT parameter of RUNQRY: *ALLIO for *OUTFILE and *PRINT output,
but *FIRSTIO for *DISPLAY output.
The only time Query/400 does operate 'inefficiently', as I infer you
are noting, is for RUNQRY's OUTFORM(*SUMMARY), where the equivalent SQL
GROUP BY is not used instead <if the report break is indeed identical
to a grouping; it need not be>, so many records of data might be
processed for the final results - hence, potentially, a 'massive
workfile'. This was optimized in newer releases to drop unused fields
from the summary processing request. Again, there are messages in debug
mode, except they come from Query/400 rather than the query <optimizer>
engine. But in general Query/400 works very similarly to, and as
efficiently as, any non-grouping OPNQRYF or SQL SELECT.
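To see those optimizer messages yourself, something like this works
from a command line (the query name is just an example):

```
/* Debug mode makes the optimizer log its access-plan        */
/* decisions (index chosen, table scan, temp file built...)  */
/* to the joblog as informational messages.                  */
STRDBG UPDPROD(*YES)
RUNQRY QRY(QRYLIB/MYQRY)
DSPJOBLOG            /* review the optimizer messages */
ENDDBG
```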
We have had experiences like yours with that type of machine (an S30
here). In our case it helped to do the following:
1) Minimize page faults:
Set the performance adjuster system value (QPFRADJ) to '3' for some
time while a normal workload is on the system, to learn the necessary
pool sizes for normal day processing. After that, set QPFRADJ back to
'0'. Later you can manually resize pools if there is a paging problem;
also check and re-tune the max activity level for each pool. Have a
look at the Work Management guide (SC41-5306-xx).
2) Minimize interactive workload:
- If possible, use clients only for 5250 access, i.e. CA/400.
- Run Query and SQL only in batch.
- Change commands so that long-running processing can't be started
interactively. (We copied the commands RUNQRY, CRTCLPGM, CRTCBLPGM,
CRTRPGPGM, SAVOBJ, CLRLIB, DLTLIB, STRQMQRY, RSTOBJ and RSTLIB into a
new library, changed them there, and changed QSYSLIBL to put that
library in front of QSYS.) Note: if you save your system with option
21, 22 or 23 from the SAVE menu, the SAVLIB command must still be able
to start interactively.
3) Keep disk capacity in use under the threshold guidelines.
4) If possible, limit user capabilities so they can't change their own
job priority (set the LMTCPB parameter to *YES on CHGUSRPRF).
5) Check your application settings: are all jobs of type batch and
writer running with a priority of 50 or above (50-99)? If not, check
why, and change it.
I think there are many more things you can do, but perhaps this will
help you.
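The command trick in point 2 can be sketched like this (RSTRLIB is a
made-up library name; you'd repeat it for each command):

```
/* Duplicate RUNQRY into a library that sits ahead of QSYS   */
/* in QSYSLIBL, then allow that copy only in batch contexts. */
CRTDUPOBJ OBJ(RUNQRY) FROMLIB(QSYS) OBJTYPE(*CMD) TOLIB(RSTRLIB)
CHGCMD CMD(RSTRLIB/RUNQRY) ALLOW(*BATCH *BPGM *EXEC)
```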
henry
No, I didn't. And it appears we have a basic failure to communicate.
Ah well, I'll have to think out my response this time. (Why do I get
the feeling I stepped on his toes? :>)
--
Glen Ford
"Dave Shaw" <ds...@spartan.com> wrote in message
news:3oWL5.9796$Pw6.6...@newsread1.prod.itd.earthlink.net...