I just returned from an IT financial conference where I contrasted the
costs of running the 45 servers on Intel versus the z900. I used very
conservative costs for the Intel machines ($2K per server), switches
($10K), and firewalls ($10K), all with no support (thus $0). On the
Intel side I had Linux for $0, and on the zSeries I bought SuSE Linux,
Novell e-Maintenance, and IBM 24/7 support. The middleware software was
from IBM, and it is licensed per processor. This is true of most
distributed products, including Oracle. Using Oracle here would have driven
the numbers sky high, for it is $40K per processor: on the IFL that is
$40K, while on Intel it would be $200K, and that assumes one-engine Intel
machines. So I used the DB2 solution for the comparison. In the end the z
solution was about $240K and the Intel solution was $840K.
As an aside, remember I kept the Intel side of the costs as low as
possible, while on the zSeries side I bought Linux with full 24/7 support.
My gut says the Intel number is closer to $1M+ once one factors in
support, faster connections for the switches and firewalls, and support
and upgrades for their software. The beauty of z/VM is getting all the
V-LANs, V-routers, and V-firewalls you want for nothing, with all that
"V-cabling" running at memory speeds, plus HiperSockets for LPAR
connections.
It is my conclusion that there are a number of reasons why one does not
hear many stories about this. One is that those who do it well do not want
to reveal the competitive advantage they have. Another is that the company
is ashamed to admit it gets benefit out of the mainframe when there is
such a bias against the mainframe. I know of other places that admit the
facts, but IT management wants no part of it; it is not what the trade
press and their background say is so. Then, in most places Windows and
Linux would be handled by the distributed or network side of IT, not the
mainframers; so why give up turf? Besides, more and more servers to manage
increases the size of management and their paychecks. Lastly, why would
those who run Windows machines (MCSE) and Cisco hardware (Cisco-
certified) turn things over to mainframe systems types to replace them?
They will fight to the death to hang onto all their turf.
Another argument is that z/VM is so tough. Back in the late 1980s I was
forced over into the VM side of IBM systems (the Dark Side, for an MVS
bigot) and mastered the work in much less than a year, whereas MVS takes
years to be able to do most of it. IBM has a free four-day z/VM and SuSE
Linux school to which I sent my z/OS bigots, and they came back able to
install, implement, and get things running. Bringing up a z/VM system only
to run zLinux is far easier than supporting many, many VM users on CMS,
etc. There are enough zLinux cookbooks to get things up and running quite
quickly.
I am not sure what the future will be, but with an upgrade to a z9 BC my
one IFL goes from 238 MIPS to 480 MIPS and my software charges stay
exactly the same as they are today. A very interesting situation. The
strategy is to run what makes sense over on the zSeries, and there is no
way I would want to take over 400+ Windows servers. Once I get a processor
license for a piece of software, I can bring up many instances virtually
with no additional charges. Oh yes, the z900 IFL is not even breathing
hard yet.
Jim Marshall
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to list...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
> but IT management wants no part of it; this is not what the trade press
> and their background says is so. Then in most places, Windows and Linux
> would be done by the Distributed or Network side of IT and not the
> mainframers; so why give up turf. Besides more and more servers to manage
> increases the size of management and their paychecks. Lastly why would
> those who have Windows machines (MSCE) and Cisco hardware (CISCO
> certified) turn things over to mainframe systems type to replace them.
> They will fight to the death to hang onto all their turf.
Good post. Pretty much human nature. What is it, 21 days for a human to
form a 'habit'? Did you figure personnel costs too, on a cost-per-seat basis?
I assume you also factored in the staffing cost differences, which can be
huge.
Unfortunately, those in power usually "manage by perception, or magazine".
There was an early case where SIAC replaced many Sun servers with Linux
on zSeries, but it got barely a raised eyebrow.
http://www-03.ibm.com/servers/eserver/zseries/os/linux/zseries_stock.html
>I have seen some requests lately for a positive zLinux experience. I am
>running zLinux under z/VM today with 2 production applications, 45 Virtual
>Servers in three LPARs providing Production, User Acceptance Test,
>Development, and SYSPROG TEST. Each application is isolated within a
> Much snippage ....
Hi,
I happen to review my policies each year and I am interested
(although I do not have 45 servers eligible for zLinux, only 25 or so).
Do you have 1 or 2 IFLs? (We usually use the dev or user acceptance test
machines as backup for DR of our production machine.)
Did you include a second IFL in case the first one fails?
Also, you mention comparing 45 servers to a VM machine.
Did you compare against VMware with VMotion (so impressive) software,
or simply old-fashioned one server, one application?
You did not mention the price of these two IFLs and their maintenance.
And last: why do you use several firewalls?
(I run a two-site GDPS for MVS, and the distributed systems are clustered
with load balancing and DR plans built in; I have only 2 firewalls, with
dual homing indeed.)
Thanks
Bruno
Bruno(dot)sugliani(at)groupemornay(dot)asso(dot)fr
phil
Any thoughts would be appreciated
Thank you
Brad Taylor
From my course "Developing Applications for z/OS UNIX":
c89 -o doneit -e // -W a,list app2a3.s > app2a3.lst
Come see the whole course, three thrill-packed fun-filled
days of Assembler, COBOL, PL/I, C, make, c89, cob2, pli,
and more!
Details:
http://www.trainersfriend.com/UNIX_and_Web_courses/u520descr.htm
Kind regards,
-Steve Comstock
This little shell exec does it for me:
#!/bin/sh
export _C89_SUSRLIB='TSH009.SOURCE.MACLIB'
c89 -e // "$@"
Put it in a file called "hlasm" or "asm" or ... which is executable and
on the PATH. Then invoke it similar to:
hlasm myprog.s -o myprog >myprog.lst 2>&1
Note that the suffix must be ".s"
Also, as you have noticed, change the _C89_SUSRLIB to include your macro
libraries.
--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology
This message (including any attachments) contains confidential
information intended for a specific individual and purpose, and its
content is protected by law. If you are not the intended recipient, you
should delete this message and are hereby notified that any disclosure,
copying, or distribution of this transmission, or taking any action
based on it, is strictly prohibited.
>The IFL is just like any other IBM CPU which has backup processors already
>designed into the CEC. So just like you don't usually buy a spare CPU for
>z/OS you wouldn't buy a spare IFL
>
>phil
Nope.
I run z/OS on 2 CPUs at 2 sites doing data sharing, and I use dynamic VIPA
between the two z990s, balancing loads and using either site as a takeover
site (even if I need CBU).
These 2 computer rooms are there because you could lose one (and it has
happened), or you can lose part of the network, or your front-end
firewall, etc.
If I were running my WAS on zLinux, I would be obliged to do the same.
For our distributed world we run the same way as for z/OS and keep
running on the remaining site.
We use clustering all the time; Lotus Notes, for example, is balanced
on 2 (Windows) servers at 2 sites, and you could kill one at any time
(accident, maintenance, whatever); it keeps running on the other side.
With VMotion we can move a live Linux partition from one place to another
without the user even knowing it.
Of course the SANs are replicated at both sites, like the PPRC for z/OS.
The WAS instances are balanced between both sites, the firewalls, etc.
Mixing light and busy LPARs across 2 sites gives you security, even if it
is not perfect.
Bruno(dot)sugliani(at)groupemornay(dot)asso(dot)fr
>Another argument is z/VM is so tough. Back in the late 1980s I was
>forced over into VM (Dark Side for an MVS Bigot) of IBM systems and
>mastered the work much, much less than a year where MVS takes years
>to be able to do most all of it.
Well, I'm a TSO bigot from way back, but picking up VM is no big deal.
Bringing it up is a piece of cake.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT
ISO position; see <http://patriot.net/~shmuel/resume/brief.html>
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)
I have notes in my SYS1.PARMLIB that refer to a recommendation that Greg
Dyck made in 1998: he said that the SMF address space could wind up
allocating the user catalog forever.
Is this still the case?
--
David Andrews
A. Duda and Sons, Inc.
david....@duda.com
Thundering silence was the response.
To me these are "system" datasets - they are my responsibility. As such
they stay in the (shared) MCAT as SYS1.&SYSNAME..MAN?
Can't conceive of why I would want different.
Merely my practise of course, not (necessarily) the distillation of
current wisdom.
Shane ...
Shane,
> To me these are "system" datasets - they are my responsibility. As such
> they stay in the (shared) MCAT as SYS1.&SYSNAME..MAN?
> Can't conceive of why I would want different.
Can you now use system symbols to catalog VSAM datasets?
Uhh, it's new to me.
Walter Marguccio
z/OS Systems Programmer
Munich - Germany
> Can you now use systems symbols to catalog VSAM datasets ?
> Uhh, it's new to me.
Well ....
actually in this site, yes, you can. An in-house utility that invokes
IDCAMS.
I was being liberal with the language - the reference was to usage of
the catalog entry, rather than actually creating it.
Shane ...
I moved them to a usercat years ago when I did a full-system replacement
and I had two masters to deal with. While SMF may never let go of that
catalog, it hasn't caused an issue for me. (yet!)
--
David Andrews
A. Duda and Sons, Inc.
david....@duda.com
>On Thu, 2006-10-19 at 12:09 -0400, David Andrews wrote:
>> SMF recording datasets (MANx) needn't be in the master catalog... but
>> what is the current recommendation?
>
>Thundering silence was the response.
>To me these are "system" datasets - they are my responsibility. As such
>they stay in the (shared) MCAT as SYS1.&SYSNAME..MAN?
>Can't conceive of why I would want different.
>
>Merely my practise of course, not (necessarily) the distillation of
>current wisdom.
>
In addition... their use is really single system in scope, so there
is no real benefit to putting them in a usercat... even if you don't
share a master catalog.
Even if you still create new master catalogs when you upgrade the
OS (which I don't know why anyone would do these days) you can still
IPL with SMF not active and REPRO the MAN data sets from the old
mastercat to the new one.
Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - GITO
mailto:mark....@zurichna.com
z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
> I keep notes on what I update. And all that I have updated are a few
> files in /etc such as /etc/rc (to start some daemons at IPL time),
> /etc/resolv.conf, and some others. I generally copy from the old /etc
> and /var to the new one via "pax" like:
>
> pax -rw -k -pe -v /etc/* /newsys/etc
> pax -rw -k -pe -v /var/* /newsys/var
John, does this actually work?
I was just cleaning up a test system (that had several hundred files
added in /etc - gotta keep them comms guys on a shorter leash), and
decided to try this.
I would always end up with superfluous /etc and /service directories
under my newsys (equivalent) mountpoint. Yes, I had the leading slash -
the doco mentions "cd" to the source directory then
"pax {options} * /newsys ..."
This works.
Eventually I had to abandon the command(s) John suggested, and knocked up
a script to do the merge. Anybody have any ideas?
Shane ...
(oh, and recursive "diff" seems to work fine)
Sorry, no. The actual commands are:
pax -rw -k -pe -v /etc/* /newsys
pax -rw -k -pe -v /var/* /newsys
The original would have ended up with /newsys/etc/etc and
/newsys/var/var. I apologize for leading you down the wrong path. I did
just test the above commands on my z/OS 1.6 system in a UNIX shell. They
did have one unfortunate effect that I had not noticed. Duplicate files
were not replaced, but symlinks WERE replaced. I was lucky that this did
not cause me any problems.
> I was just cleaning up a test system (that had several hundred files
> added in /etc - gotta keep them comms guys on a shorter leash), and
> decided to try this.
> I would always end up with superfluous /etc and /service directories
> under my newsys (equivalent) mountpoint. Yes, I had the
> leading slash -
> the doco mentions "cd" to the source directory then
> "pax {options} * /newsys ..."
> This works.
> Eventually had to abandon the command(s) John suggested, and
> knock up a
> script to do the merge. Anybody have any ideas ???.
>
> Shane ...
> (oh, and recursive "diff" seems to work fine)
>
Hum, I wonder what problem I had with it. Too long ago and my mind has
left me for sunnier climes lately.
--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology
> Sorry, no. The actual commands are:
>
> pax -rw -k -pe -v /etc/* /newsys
> pax -rw -k -pe -v /var/* /newsys
>
> The original would have ended up with /newsys/etc/etc and
> /newsys/var/var. I apologize for leading you down the wrong path.
Cheers - so what I saw was what you expected, but not what I expected. I
had a requirement to copy the serverpac /etc to a new HFS and merge a
current /etc over the top with noreplace.
Expecting to get a merged /newsys, I wound up with /newsys/etc
and /newsys/service.
As it had to be a deployable batch procedure, I gave up and went with a
script that changes to each of the source directories in turn.
Thanks ... Shane
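For what it's worth, the per-directory merge Shane ended up scripting can
be sketched in portable shell. The function and path names here are
hypothetical (on z/OS UNIX, pax -rw -k -pe remains the natural tool); this
just makes the semantics explicit: cd into the source so every path is
relative, which avoids the nested /newsys/etc/etc result, and never
replace a file already present at the destination (pax's -k behaviour).

```shell
#!/bin/sh
# merge_keep_existing SRC DEST -- copy the tree under SRC into DEST,
# keeping any file that already exists at the destination. DEST must
# be an absolute path because we cd into SRC first.
merge_keep_existing() {
    src=$1
    dest=$2
    ( cd "$src" || exit 1
      # Walk relative paths so they land directly under DEST.
      find . -type f | while IFS= read -r f; do
          mkdir -p "$dest/$(dirname "$f")"
          # Skip files already present (the pax -k "keep" behaviour).
          [ -e "$dest/$f" ] || cp -p "$f" "$dest/$f"
      done )
}
```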