Itschak
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to list...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
It's been a few months since we did an informal comparison, but on a
z9-BC without zAAP the CICS COBOL code was "noticeably" faster than the
equivalent CICS Java code.
-jc-
But I'm sticking with my original answer for most serious
applications: "the better programmer wins". In complex systems, the
better programmer/design is a more significant factor in performance
than the language choice. Largely, this favors COBOL, since most
Java programmers lack the skill to write high-performance code. Java
is a much nicer language than traditional COBOL (IMO), supporting
modern OO software engineering, etc. and *can* be used to create fast,
maintainable, and reusable code. There are lots of examples, but
these are still a *tiny* percentage of all Java apps.
Kirk Wolf
Dovetailed Technologies
http://dovetail.com
PS> Stand back, this is one of "those" threads :-)
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-...@bama.ua.edu] On Behalf Of Chase, John
Sent: Thursday, July 15, 2010 2:12 PM
To: IBM-...@bama.ua.edu
Subject: Re: Cobol vs Java - who is faster?
> -----Original Message-----
> From: IBM Mainframe Discussion List On Behalf Of Itschak Mugzach
>
> As expected, I believe. But was the second run faster once the JVM
> was loaded? What is the ratio?
Best I can recall, we compared the timings after the JVM was built
(first invocation of the Java txn took about 5 seconds; pretty much
sub-second after that). The COBOL was just enough faster to be
noticeable at the terminal. The txn retrieved and displayed one record
from a VSAM KSDS. Best recollection from TMON data is that the COBOL
ran about 20% - 30% faster than the Java.
-jc-
OK, how about this:
If you are deciding whether to use COBOL or Java based on asking this
question, use COBOL :-)
>If you are deciding whether to use COBOL or Java based on asking this
>question, use COBOL :-)
However, that should not be the sole criterion. In fact, it is useful
almost only as "finding evidence to support the choice I want".
No; and since it was a (very) informal comparison, mainly just to "get a
feel" for running Java in CICS, we didn't really look for any. And at
the time we didn't have any Java programmers who knew anything about
"the mainframe" (let alone CICS). A couple of our COBOL wizards
basically just copied an example or two from an IBM Redbook, and I set
up the JVM profiles from the same Redbook.
-jc-
Do they? What about using a zAAP? It will not speed up or consume less
CPU than a CP; the only advantage over a CP is that it is not counted in
CPU usage reports (YET).
ITschak
I'm really not trying to pick on you, but surely you must realize the
folly of such a question.
Now we find out that what you *really* want to know is:
"How does the performance of Java code generated by EGL compare to
COBOL code generated by EGL?"
This is a very different question. Which do you think will give you a
better answer? -
a) Create a set of representative benchmarks in EGL and compare the
results; generating Java vs. COBOL
b) Ask on IBM-Main: "Cobol vs Java - who is faster?"
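For what it's worth, option (a) can be sketched in a few lines of plain Java. Everything here is a hypothetical stand-in (the class name, workload, and iteration counts are invented for illustration), not the EGL-generated code; it only shows the warm-up-then-measure shape such a comparison needs:

```java
// Minimal timing-harness sketch for option (a). The workload below is a
// hypothetical stand-in, NOT EGL output; a real comparison would run the
// EGL-generated Java and COBOL against the same VSAM data.
public class BenchSketch {

    // Accumulator so the JIT cannot optimize the workload away.
    static long sink;

    // Hypothetical workload: format one "record", loosely mimicking a
    // retrieve-and-display transaction.
    static String workload(int key) {
        return String.format("REC%08d:%s", key, Integer.toHexString(key));
    }

    // Time 'iterations' runs of the workload, returning elapsed nanoseconds.
    static long timeNanos(int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += workload(i).length();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Warm-up pass first: as noted earlier in the thread, the first
        // Java txn pays a large one-time JVM/JIT cost, so measure only
        // after warm-up.
        timeNanos(50_000);
        long warm = timeNanos(200_000);
        System.out.println("warm ns/iteration: " + (warm / 200_000));
    }
}
```

Measuring only after a warm-up pass matches the observation above that the first Java invocation took about 5 seconds and later ones were sub-second.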
Kirk Wolf
Dovetailed Technologies
http://dovetail.com
Representative benchmark is an oxymoron.
Just like country music. (8-{]}
-
I'm a SuperHero with neither powers, nor motivation!
Kimota!
I'd lay my money on COBOL in general. But, then again, it depends on
what the program is doing and how good the programmer is.
Now, if a zAAP is involved, I might go with Java. It depends on the
model of the machine. A zAAP always runs full speed. So it would likely
be faster than COBOL on a kneecapped machine.
--
John McKown
Maranatha! <><
Um, I believe that all the specialty engines always run at full speed,
whereas a CP may not (if it's sub-capacity). So, yes, it may speed up
the Java code.
--
Bob Woodside
Woodsway Consulting, Inc.
http://www.woodsway.com
>On Thu, 2010-07-15 at 21:22 +0300, Itschak Mugzach wrote:
>> I wonder if this was tested ever: same business logic in batch or CICS. No
>> zAAP installed. Who is faster? And in case of zAAP?
>>
>> Itschak
>>
>
>I'd lay my money on COBOL in general. But, then again, it depends on
>what the program is doing and how good the programmer is.
If they haven't revised the EGL COBOL code generation from the garbage
that CSP generated, then a good Java code generator might well produce
better code. With CSP, you compiled the COBOL NOOPTIMIZE if you wanted
the compile to complete in a decent amount of time, and the generated
COBOL was so bad that optimizing it probably was impossible.
Clark Morris
zNorman
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-...@bama.ua.edu] On Behalf
Of Robert Woodside
Sent: Thursday, July 15, 2010 6:19 PM
To: IBM-...@bama.ua.edu
That's a very narrow viewpoint.
>They exist to defer General Purpose Engine upgrades which WILL increase software licensing charges.
Tuning exists for the same reason.
And, specialty engines do improve performance, regardless of your purported purpose.
-
I'm a SuperHero with neither powers, nor motivation!
Kimota!
----------------------------------------------------------------------
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-...@bama.ua.edu] On Behalf
Of Ted MacNEIL
Sent: Thursday, July 15, 2010 7:54 PM
To: IBM-...@bama.ua.edu
Subject: Re: Cobol vs Java - who is faster?
There is an additional advantage of zAAP/zIIP over CP for those who
have "throttled" machines. CPs are throttled, zAAPs/zIIPs are not.
Of course this must not be used as a test environment to answer
questions about performance.
--
Peter Hunkeler
Credit Suisse
But, the net is 'faster'.
>The real reason (aside from marketing and legal ones) is as I said: to avoid increases in software license charges.
If that were true, then the whole concept is a joke!
Why does IBM only allow a small percentage over there?
As zPrime proved, almost all of the eligible work should be able to run over there.
While IBM wants to help you save money, they don't want you to save too much.
-
I'm a SuperHero with neither powers, nor motivation!
Kimota!
----------------------------------------------------------------------
It is and has been from the very invention.
Of what use is a concept of having to deal with three dispatcher
queues, when MVS has proven over the decades that one is enough?
Add to that the complexity of allowing one queue to "overflow"
to another under certain circumstances.
And then came the next joke: zAAP on zIIP...
--
Peter Hunkeler
Credit Suisse
----------------------------------------------------------------------
It, as always, depends (on the vendor).
>Every time we increase MSU/MIPS we get nailed from third party vendors.
We found a couple of them quite reasonable, and some not so.
We replaced two of the vendors who were not so.
One product was a work-alike with costs of 20%, better support, and no training required.
The second required training, but the vendor did it for 'free', and they also converted our production processes (batch & manual through ISPF).
And, the costs again went down by 75-80%.
We also managed to convince a few to convert to usage-based and the SCRT.
It depends on how aggressive you are with the vendor.
We also dropped a few, without compromising the business, productivity, or development.
All of this was done at my recommendation, if I do say so myself, and, as I said, it depends.
You can't just roll over when an ISV, or IBM for that matter, sends you a new billing rate just because you upgraded.
-
I'm a SuperHero with neither powers, nor motivation!
Kimota!
----------------------------------------------------------------------
Nobody said it was.
But, it is one of the larger costs, after personnel, in IT.
Don't let this discussion mask the fact that IT, as well as any other department in a business, has a responsibility to reduce excess costs, especially in this (still) uncertain economy.
So, reducing software (as well as any other) cost is a viable, and responsible, goal.
If specialty engines do it, fine.
But, with the 'artificial' restriction of only allowing 20% of eligible work to run on them, this is not enough.
All it does is prove that IBM is willing to help you reduce costs, but not by too much.
Other measures have to be taken, and not just with IBM; some ISVs (especially the one everybody loves to hate) either don't care or don't want to help you stay competitive.
If negotiation doesn't work, removal (or replacement) may.
Itschak
I wrote:
>First of all, general purpose engine upgrades don't increase software
>licensing charges unless you're talking about software licenses tied to
>machine capacity (i.e. full capacity licensing).
No need to remind me. :-)
Just to correct your assertion about IBM OTC, IBM's System z "one-time
charge" software *does* take into account usage metrics (and, for that
matter, software for other machines does, too). That's called Sub-Capacity
IPLA (International Program License Agreement) software licensing, and IBM
started offering it years ago. Here's some more information:
http://www.ibm.com/systems/z/resources/swprice/reference/exhibits/ipla.html
With that bit of information about how to get additional software pricing
efficiency out of the way, yes, software licensing is typically the largest
part of the IT budget after payroll. (Look to your left, look to your
right, look in the mirror... :-)) That's a good thing, and maybe if
software were bigger than the IT payroll budget it would be even better.
But that's also like saying the biggest part of an airline's operations
budget is fuel. Well, sure, that's not surprising. And yes, maximizing the
efficiency of that fuel is also a good thing. But stop loading fuel onto
the airplanes?
Great stuff, software. Good software solves business problems. Want to stop
solving business problems and/or increase costs elsewhere? Cut off the fuel
supply: software.
Let's not get software pricing efficiency confused with business problem
solving/business optimization. I'm a "bigger picture" person on this, and
hopefully you've got a few in your organizations.
- - - - -
Timothy Sipples
Resident Enterprise Architect
STG Value Creation & Complex Deals Team
IBM Growth Markets (Based in Singapore)
E-Mail: timothy...@us.ibm.com
....But you would need to license something else, such as WebSphere
Application Server, and with more capacity, some of which could be zAAP.
How that comparison washes out is highly situational and varies, but isn't
choice nice?
IBM offers two options for running EGL on z/OS because both are excellent,
and in different situations one or the other (or a combination of both) may
be a better choice in your particular circumstances. Sit down with somebody
familiar with the latest versions, your requirements, run at least some
basic tests if you can, and develop two or three reasonably comprehensive
business cases to compare those options. Some business cases favor the
COBOL-based run-time, and some business cases favor the Java-based
run-time. Choice is good!
By the way, you can also run EGL on z/VSE: IBM offers a COBOL-based EGL
run-time there, too.
Whoa, stop there. Did you ask your friend what drugs he's taking? :-)
Once your friend sobers up, you might want to point him here:
http://www.ibm.com/systems/z/solutions/editions/ws/index.html
That's the official IBM Web site for the "System z Solution Edition for
WebSphere." In that kit he'll find z/OS, DB2 for z/OS, and WebSphere
Application Server for z/OS (and some nice tools). That kit is designed for
one purpose only: to run Java with DB2 on the mainframe.(*)
....OK, I am officially declaring that, from now on, I am no longer
responding to rumors, no matter how crazy, except if I have time to spare.
(I rarely do.) Responding to rumors only encourages them, and if you
believe everything you read in a forum then you should know that if you do
not send me $10,000 immediately via Paypal your nose will turn into a
pumpkin and remain that way for the rest of your life. You have been
warned.
(*) To be pedantic, there's no *requirement* that your Java application(s)
access DB2. But most do.
Barry, CREATE authority to a group will allow a user to create a dataset
with an HLQ matching the group name even when the user is permitted less
than ALTER access to the group's dataset profiles. CONNECT and JOIN
authority will do the same since they include CREATE authority. OPERATIONS
authority users have implicit CREATE authority in all groups. To prevent an
OPERATIONS user from creating a group dataset, it is necessary to connect
the OPERATIONS user to the group with USE authority in addition to
permitting the user less than ALTER access to the dataset profile.
Therefore, connecting T99CTM to MPRO02 was required.
Jorge, I'm pleased to hear you got this sorted out. Do be aware that if you
have other OPERATIONS users, they too will be able to create and delete this
dataset. To restrict OPERATIONS users, I usually create a group with a name
something like NO#OPER, connect all the OPERATIONS users to it, and permit
this group access of less than ALTER to resources I want them kept out of,
especially catalogs, APF libraries, and DASDVOL profiles. If there are many
OPERATIONS users, connecting them all to MPRO02 with USE authority might be
a bit cumbersome; perhaps connecting T99CTM alone is sufficient for your
purposes.
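For illustration, Bob's NO#OPER technique might be set up with commands along these lines (a hedged sketch: the user IDs OPER1 and OPER2 are examples, and the exact syntax should be verified against your RACF command reference):

```
ADDGROUP NO#OPER
CONNECT  (OPER1 OPER2) GROUP(NO#OPER) AUTHORITY(USE)
PERMIT   'MPRO02.**' ID(NO#OPER) ACCESS(NONE)
SETROPTS GENERIC(DATASET) REFRESH
```

The USE authority on the CONNECT is what overrides the implicit CREATE that OPERATIONS would otherwise confer in the group.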
Regards, Bob
Robert S. Hansel
Lead RACF Specialist
617-969-8211
www.linkedin.com/in/roberthansel
RSH Consulting, Inc.
www.rshconsulting.com
---------------------------------------------------------------------
2010 RACF Training
> Intro & Basic Admin - Boston - OCT 5-7
> Audit for Results - Boston - OCT 26-28
Visit our website for registration & details
---------------------------------------------------------------------
-----Original Message-----
Date: Fri, 16 Jul 2010 10:24:34 -0700
From: "Schwarz, Barry A" <barry.a...@BOEING.COM>
Subject: Re: Access to RACF entries dataset. Operation attribute
There does not appear to be any reason to connect T99CTM to MPRO02.
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-...@bama.ua.edu] On Behalf
Of Jorge Garcia
Sent: Friday, July 16, 2010 2:54 AM
To: IBM-...@bama.ua.edu
Subject: Re: Access to RACF entries dataset. Operation attribute
We've solved the problem. The main points below:
- Define a separate user catalog with an alias for the special dataset
(CAT.USUARIO.MIGHP).
- Define the user catalog profile with access NONE for the OPERATIONS user
(T99CTM).
- Define a group (MPRO02) with the same HLQ as the dataset.
- Define a dataset profile for the special dataset
(MPRO02.AT00.P02.TCPRBD02.VSAM.NOVALE), with T99CTM access NONE.
- Connect user T99CTM to group MPRO02.
- Now, T99CTM couldn't delete or define the dataset, even though it still
has OPERATIONS.
Walt gave us the solution.
<<So, if the HLQ for the data set is a group name, and the
user doing the definition of the data set has CREATE in the group, he can
create your data set. Your user with OPERATIONS has, by default, CREATE in
all groups, and thus can define new data sets for any group where you have
not explicitly connected him with less than CREATE authority. So, you might
need to CONNECT him to the group with USE authority or lower.>>
The key is connecting the USERID of an OPERATIONS user to the group
matching the dataset HLQ with USE authority ...
It was difficult.
Thanks a lot!!