
Q: Configure TCP/IP Services for a node not yet booted?


Ken.Fa...@gmail.com

Jul 11, 2008, 3:10:05 PM
I'm in the middle of a migration of our VMS cluster from
Multinet 4.4A to TCP/IP Services 5.4ECO7 under VMS 7.3-2
on GS1280's. We have 3 production nodes and one
non-production node in the cluster.

1) I've created a second system disk (backup/image from
the cluster system disk);
2) I've booted the non-prod node from the new system disk;
3) I've installed TCP/IP Services on the new system disk
and done TCPIP$CONFIG for the non-prod node;
4) Rebooted the non-prod node with Multinet disabled and
TCPIP$STARTUP enabled.

So far so good, the non-prod node is running TCP/IP Services
with no problems.

QUESTION: Is there a way to configure the interfaces for
the production nodes on the new system disk
*before* each of the production nodes boots
          from that disk?

I've been told I can, but from a somewhat unreliable
source. :-( It seems to me that while I can mount the
new system disk on the other cluster nodes without
booting them from it, trying to run TCPIP$CONFIG from
that disk probably wouldn't work because it would
look for TCPIP$CONFIGURATION.DAT, etc., in "SYS$SYSTEM"
which would not exist on the old system disk.

Is there a way to specify a node to configure other
than the node that's running TCPIP$CONFIG?

Thanks, Ken

JF Mezei

Jul 11, 2008, 4:02:49 PM
Ken.Fa...@gmail.com wrote:

> Is there a way to specify a node to configure other
> than the node that's running TCPIP$CONFIG?

Look at:

DIR SYS$SYSTEM:TCPIP*.DAT*
This gives you the core config files.

And then look at:

SHOW LOG TCPIP* on a running system
You'll find logicals pointing to all of those.

TCPIP> is used to populate those files.

This is just a utility. It can be used when TCPIP is down except for a
few commands (like SHOW DEVICE).


Some of the config files can be copied from another node. The
TCPIP$SERVICE.DAT file, for instance, is node-agnostic; it just defines
services. It is in TCPIP$CONFIGURATION.DAT that a service is attached
to a node, along with its default enabled-or-disabled state.

(SET CONF ENABLE_SERVICE )

Generally, SET CONF xxxx commands go into the configuration .DAT file;
SET CONFIGURATION INTERFACE ends up there as well.

I think there is one command which needs a /PERM to make it go to
the configuration file, but Hoff's comments about me have disrupted the
flow of blood to that part of the brain .....


Make sure you back up the files on the running system before trying to
muck with files on another disk, just in case the TCPIP utility disregards
logicals and goes for the files in SYS$SYSTEM.

VAXman-

Jul 11, 2008, 5:30:06 PM

After they are copied to appropriate roots, I would think that logicals
could be defined to point to the appropriate files and then use...

$ TCPIP SET CONFIGURATION INTERFACE {...}

Hell, the TCPIP$CONFIG might even function with logicals appropriately
defined. I haven't looked since I have no need to configure any boxes
in this fashion but I may take a looksee when I have some free time.
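
Something along these lines, perhaps (untested; the logical name and
qualifiers are from memory, "newdisk" is whatever the copied system disk
is mounted as, and the interface name, address and mask are placeholders):

$ DEFINE TCPIP$CONFIGURATION newdisk:[VMS$COMMON.SYSEXE]TCPIP$CONFIGURATION.DAT
$ TCPIP SET CONFIGURATION INTERFACE WE0 /HOST=192.0.2.11 -
        /NETWORK_MASK=255.255.255.0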

--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)COM

"Well my son, life is like a beanstalk, isn't it?"

Copyright 2008 Brian Schenkenberger. Any publication of _this_ usenet article
outside of usenet _must_ include its contents _in_its_entirety_ including this
copyright notice, disclaimer and quotations.

The citizens of our state must be free, within reason, to speak out on matters
of public concern. So long as they state the facts implicated fairly and
express their opinions, even in the most colorful and hyperbolic terms, their
speech should be protected by us. -- NJ Superior Court Appellate Div. (NJSC)

... pejorative statements of opinion are entitled to constitutional protection
no matter how extreme, vituperous, or vigorously expressed they may be. (NJSC)

"Coding is _not_ a crime!" Support the EFF: http://www.eff.org

Ken.Fa...@gmail.com

Jul 11, 2008, 7:43:31 PM
On Jul 11, 1:02 pm, JF Mezei <jfmezei.spam...@vaxination.ca> wrote:

> Ken.Fairfi...@gmail.com wrote:
> > Is there a way to specify a node to configure other
> > than the node that's running TCPIP$CONFIG?
>
> Look at:
>
> DIR SYS$SYSTEM:TCPIP*.DAT*
> This gives you the core config files.
>
> And then look at:
>
> SHOW LOG TCPIP* on a running system
> You'll find logicals pointing to all of those.
>
> TCPIP> is used to populate those files.
[...]

See, that won't work, because:
=======================================================
Syskhf> tcpip
%DCL-W-IVVERB, unrecognized command verb - check validity and spelling
\TCPIP\
Syskhf>
========================================================

Given cluster nodes A,B,C & D, A,B & C have booted
DSA0: and do *not* have TCP/IP Services installed.
Node D has booted off DSA10: and *does* have
TCP/IP Services installed, and configured for node D.

I'd be very worried to run, e.g., TCPIP SET CONF INTERF
on node D *for* A's interfaces, since some of the information
winds up in, or uses, entries in the DSA10: SYS$COMMON
TCPIP databases.

On the other hand, I don't know, but I could possibly
try, on node A, doing a SET COMMAND against
node D's DCLTABLES, defining the logicals, and
trying to invoke the TCPIP command (there'd be more to it:
I'd need to define logicals for the TCPIP images,
'cause they won't be found in node A's SYS$SYSTEM
or SYS$LIBRARY, etc.).

I was actually hoping there was a way to do the SET CONF
type command with a node qualifier, but I haven't found
anything like that yet. :-(

Other ideas?

Thanks, Ken

JF Mezei

Jul 11, 2008, 9:11:21 PM
Ken.Fa...@gmail.com wrote:


> Given cluster nodes A,B,C & D, A,B & C have booted
> DSA0: and do *not* have TCP/IP Services installed.
> Node D has booted off DSA10: and *does* have
> TCP/IP Services installed, and configured for node D.

Does this mean that you intend to progressively reboot A B C from DSA10: ?

AKA: you want to pre-populate their root on DSA10: with all the TCPIP
stuff ?

OK, then it can get interesting. And I erred in a previous description.

TCPIP$SERVICE.DAT is shared in VMS$COMMON.SYSEXE.

From some 2006 work I did
http://groups.google.com/group/comp.os.vms/msg/774df4e15e5a07b3?hl=xx-elmer&dmode=source

TCPIP$SERVICE.DAT in SYS$COMMON:[SYSEXE] has, as primary key, the first
28 bytes of the record: This 28 byte key must be unique in the indexed file.

16 bytes -> Service name
4 bytes -> unknown (seems to always be 0000100)
2 bytes -> port number
2 bytes -> protocol (006 = TCP, 011 = UDP)
4 bytes -> address to listen on.

If the address is set to 00000000 ( 0.0.0.0 ) then it will listen on all
interfaces for that node; this makes the record node-neutral and thus
usable on any node in the cluster.

So for instance, you'll have 2 "BIND" records, one for TCP and one for
UDP service, both having 0 as IP to listen to.

Both these records would be usable for any node in the cluster since
they are not tied to any specific interface.
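
If you want to eyeball those fields yourself, a record dump of the file
on the running node shows them (output sent to a listing file here just
to make it easier to read):

$ DUMP/RECORDS/OUTPUT=TCPIP_SERVICE.LIS SYS$COMMON:[SYSEXE]TCPIP$SERVICE.DAT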

> I'd be very worried to run, e.g., TCPIP SET CONF INTERF
> on node D *for* A's interfaces, since some of the information
> winds up in, or uses, entries in the DSA10: SYS$COMMON
> TCPIP databases.

Digging a bit deeper, you are correct. The node name is part of the
record in TCPIP$CONFIGURATION.DAT.

So if you are on node D, chances are that the TCPIP utility would use
node "D" as part of the record.

So the best bet would be to make the TCPIP utility work on nodes A, B and C
and point the logicals to D's SYS$COMMON so that each node can then
contribute its own node-specific configuration data.

One test you can make:

From node A, SHOW SERVICE should display the same defined services
as on node D, but they should all be disabled.

Again, backing up all of the TCPIP$*.DAT in SYS$COMMON:[SYSEXE] before
you make modifications ensures that if you mess up, you can come back.


To get the TCPIP utility to run on nodes A, B and C, you can use VERB to
extract the CLD from node D and then apply it to your process on node A,
and then you'll need to define logicals for

TCPIP$IPC_SHR.EXE;1 TCPIP$ACCESS_SHR.EXE;1 (you may need some more; not
100% sure those are the only two TCPIP-specific images that are needed.)


BTW:

$MC TCPIP$UCP is the same as $ TCPIP

So you might be able to skip the .CLD portion and just invoke it with MC
TCPIP$UCP (or MC dsa10:[vms$common.sysexe]tcpip$ucp ).
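
Or define a foreign-command symbol so you get the familiar verb without
touching DCLTABLES (image location assumed; adjust to wherever
TCPIP$UCP.EXE actually lives on the new disk):

$ TCPIP :== $DSA10:[VMS$COMMON.SYSEXE]TCPIP$UCP.EXE
$ TCPIP
TCPIP>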

You can start by running the utility with no logicals defined and it
will probably create the files in node A's SYS$SYSTEM. Play with the
definitions until you have the syntax right. Then define the logicals to
point to node D's system disk, and apply those commands.

Remember that there is no need to define the services on node A; once
you set the logical TCPIP$SERVICE to point to node D's disk, these
services will be visible. You can then do the TCPIP> SET CONF
ENABLE_SERVICE for individual services you want each node to have
enabled (aka: automatically started).

So basically, define logicals to point to the common future disk, and
from each node, run the commands necessary to define the interfaces and
whatever else you need defined.

Beware however that many services create directories in the node-specific root.

Once A has rebooted into DSA10:, you can then use @TCPIP$CONFIG to
enable individual services, and this will ensure the right node-specific
directories are created and populated with template files.

You could also do that manually if you want. Make 100% sure that, no
matter what you do, all nodes share a common SYSUAF. Services are
assigned a UIC in what is essentially a random process, so if
TCPIP$CONFIG doesn't see that there is already a TCPIP$SMTP, it will create
a new one with a UIC different from the TCPIP$SMTP that already
exists on node D. When you merge everything, that can cause problems.


Also, remember that SYSLOG and RSH both use port 514 (one UDP, the
other TCP).
You need to manually add records in TCPIP$SERVICE.DAT for this to work,
and only one of them can be SET CONF ENABLE_SERVICE'd. You will need to
TCPIP ENABLE SERVICE the second one manually after the first one has been
started automatically when TCPIP starts up.
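
In other words, with whatever names you gave the two entries (the command
forms here are from memory):

TCPIP> SET CONFIGURATION ENABLE SERVICE SYSLOG
TCPIP> ENABLE SERVICE RSH

The first goes into the permanent startup list; the second has to be
issued by hand (or from a command procedure) after the stack is up.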

DUMP/RECORD is your friend here. Once you have TCPIP> running on node
A, add an interface, and from node D, do a DUMP/RECORD of
TCPIP$CONFIGURATION.DAT and you should see that a NEW record has been added
(as opposed to an existing one having been modified).

R.A.Omond

Jul 12, 2008, 3:09:52 AM
Ken.Fa...@gmail.com wrote:
> [...snip...]

>
> QUESTION: Is there a way to configure the interfaces for
> the production nodes on the new system disk
> *before* each of the production nodes boots
> from that disk.
>
> I've been told I can, but from a somewhat unreliable
> source. :-( It seems to me that while I can mount the
> new system disk on the other cluster nodes without
> booting them from it, trying to run TCPIP$CONFIG from
> that disk probably wouldn't work because it would
> look for TCPIP$CONFIGURATION.DAT, etc., in "SYS$SYSTEM"
> which would not exist on the old system disk.
>
> Is there a way to specify a node to configure other
> than the node that's running TCPIP$CONFIG?

I would strongly recommend NOT attempting to do this.
IIRC some of the configuration files use the hostname
as a key into indexed files. One of my customers got
burned by trying to do something similar (they changed
the IP address of an interface, changed the system name,
rebooted and wondered why it didn't come up with the
new IP address).

Bart...@gmail.com

Jul 12, 2008, 4:52:28 PM
On Jul 12, 9:09 am, "R.A.Omond" <Roy.Om...@BlueBubble.UK.Com> wrote:

Confirmed. I have worked with a fairly complex TCP/IP configuration.
To keep things manageable, I put most of the configuration in DCL
files. However, a new member added to the cluster required a manual
configuration using TCPIP$CONFIG first.

HTH,

Bart Zorn

Ken.Fa...@gmail.com

Jul 14, 2008, 5:59:29 PM

Roy and Bart,

In my case, the cluster membership (and names) is staying
the same. As I said in the original post, my task is to migrate
from Multinet to TCP/IP Services, and I'm trying to do that with
the least downtime to production.

To that end, I "cloned" the system disk The original sysdisk
is DSA0:, new sysdisk is DSA10:, production nodes A, B
and C still booted from DSA0:, dev node D booted from
DSA10:. (All nodes are still in the same cluster, just booted
from different system disks.) I've installed TCP/IP Services on
DSA10:, configured node D in TCP/IP Services and rebooted
to prove the configuration.

For the production nodes, I will retain the configuration
currently active under Multinet, e.g., all of nodes A, B and C's
interfaces will keep their IP addresses, etc. What I need to
do is configure those nodes' interfaces in the TCP/IP Services
configuration. If I can do that directly by pointing
TCPIP$CONFIGURATION at the files on DSA10: from the
production nodes, and configure their respective interfaces,
I'm way ahead.

Am I correct to assume that when Roy (and JF) said that
entries in TCPIP$CONFIGURATION are "indexed by node",
that "node" in this case is, e.g., SCSNODE or SCSSYSTEMID?
Then it would seem I could proceed with the "pre-configuration"
step.

But the suggestion to put the configuration commands in a
DCL procedure (which was also suggested off-list) to be executed
on first reboot also saves time over going through the prompts
in TCPIP$CONFIG.COM.

Thanks, Ken

JF Mezei

Jul 14, 2008, 6:38:39 PM
Ken.Fa...@gmail.com wrote:
> Am I correct to assume that when Roy (and JF) said that
> entries in TCPIP$CONFIGURATION are "indexed by node",
> that "node" in this case is, e.g., SCSNODE or SCSSYSTEMID?

I believe it is SCSNODE

DUMP/RECORD is your friend there. Get the SCSSYSTEMID value and look for
it in hex in the dump just to make sure. But the nodename is definitely
there.
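
Both values are one-liners to pull on each node for comparison
(SCSSYSTEMID comes back as a decimal longword, so convert it to hex
before searching the dump):

$ WRITE SYS$OUTPUT F$GETSYI("SCSNODE")
$ WRITE SYS$OUTPUT F$FAO("!XL", F$GETSYI("SCSSYSTEMID"))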

> Then it would seem I could proceed with the "pre-configuration"
> step.

Could you shut down D, separate it from the cluster, and then boot it
minimal as node A? You could then use TCPIP$CONFIG at your leisure, shut
it down, boot it as node B, run TCPIP$CONFIG at your leisure and do the
same for node C.

Then you can reconnect the node to the LAN and reboot it as D. And when
you reboot the other nodes into the new system disk, they'll have their
node-specific configs.

The other option, as I originally suggested, is to use the TCPIP utility
on each of the nodes to affect the files on DSA10: by redefining the
logicals to point to the DSA10: files.

You *MIGHT* (with a lot of checking) be able to use TCPIP$CONFIG on Node
A (with proper logicals defined, and SET COMMAND for TCPIP), and then you
would need to copy the node-specific directories that are created when
you configure services over to DSA10:, into the appropriate roots for each
node. (This can be cloned for all 3 nodes since those directories are
populated with files from TCPIP$TEMPLATES.TLB.)


> But the suggestion to put the configuration commands in a
> DCL procedure (which was also suggested off-list) to be executed
> on first reboot also saves time over going through the prompts
> in TCPIP$CONFIG.COM.

How much downtime is allowed? You could boot without starting your
apps (halfway between minimum and full boot) and then manually use
TCPIP$CONFIG, and then reboot fully. This might be the safest bet.

What we don't know is how your 3 nodes share the loads/applications and
how long you can have one node shut down at any point in time.

Ken.Fa...@gmail.com

Jul 14, 2008, 6:56:55 PM
On Jul 14, 3:38 pm, JF Mezei <jfmezei.spam...@vaxination.ca> wrote:
> Ken.Fa...@gmail.com wrote:
>
> > Am I correct to assume that when Roy (and JF) said that
> > entries in TCPIP$CONFIGURATION are "indexed by node",
> > that "node" in this case is, e.g., SCSNODE or SCSSYSTEMID?
>
> I believe it is SCSNODE
>
> DUMP/RECORD is your friend there. Get the SCSSYSTEMID value and look for
> it in hex in the dump just to make sure. But the nodename is definitely
> there.

Will do, later in the week when I have more time. Thanks for the
various hints, JF. :-)

> > Then it would seem I could proceed with the "pre-configuration"
> > step.
>
> Could you shut down D, separate it from the cluster, and then boot it
> minimal as node A? You could then use TCPIP$CONFIG at your leisure, shut
> it down, boot it as node B, run TCPIP$CONFIG at your leisure and do the
> same for node C.

That's probably more scary (risky?) to me. I know it should be
OK in principle, but fat-fingering can lead to some issues I'd
rather not deal with (yes, the new node won't be allowed to
join the cluster, but I saw something close to a cluster hang
when I changed SHADOW_SYS_UNIT for node D in the *wrong*
ALPHAVMSSYS.PAR and it tried to mount DSA10 as DSA0
first time around(!)...).

[...]


> The other option, as I originally suggested, is to use the TCPIP utility
> on each of the nodes to affect the files on DSA10: by redefining the
> logicals to point to the DSA10: files.

This is what I'm leaning toward doing.

> You *MIGHT* (with a lot of checking) be able to use TCPIP$CONFIG on Node
> A (with proper logicals defined, and SET COMMAND for TCPIP), and then you
> would need to copy the node-specific directories that are created when
> you configure services over to DSA10:, into the appropriate roots for each
> node. (This can be cloned for all 3 nodes since those directories are
> populated with files from TCPIP$TEMPLATES.TLB.)

I agree that it seems like this ought to work, but doing the individual
configuration commands inside the TCPIP utility seems somewhat
more controlled (at least to the extent that I'm confident what's
going on).

[...]


> How much downtime is allowed? You could boot without starting your
> apps (halfway between minimum and full boot) and then manually use
> TCPIP$CONFIG, and then reboot fully. This might be the safest bet.

Well, agreed about suppressing all the disk mounts and application
startups that aren't needed. But running TCPIP$CONFIG.COM is not
particularly quick and leads to typos under time pressure. If I have
to wait to boot the new system disk before configuring, then I'll do
that configuration via a DCL command procedure.

> What we don't know is how your 3 nodes share the loads/applications and
> how long you can have one node shutdown at any point in time.

In this case, I want all three (production) nodes using the same IP
stack at any time the application is up (Cerner Millennium), so we'll
make it a full-application, full-cluster downtime.

Thanks for all the ideas, Ken

etms...@yahoo.co.uk

Jul 15, 2008, 4:16:01 AM

If it's full application, full cluster downtime, the safest way is
going to be to run TCPIP$CONFIG when the nodes come back up. There is
likely to be stuff going on behind the command procedure that you
wouldn't necessarily be able to see from the effects (a bit like if
you don't let net$configure start the network then DECnet Plus won't
work properly!)

Steve

Bart...@gmail.com

Jul 15, 2008, 5:06:08 AM
On Jul 15, 12:56 am, Ken.Fairfi...@gmail.com wrote:
> On Jul 14, 3:38 pm, JF Mezei <jfmezei.spam...@vaxination.ca> wrote:

[ S n i p . . . ]

> > You *MIGHT* (with a lot of checking) be able to use TCPIP$CONFIG on Node
> > A (with proper logicals defined, and SET COMMAND for TCPIP), and then you
> > would need to copy the node-specific directories that are created when
> > you configure services over to DSA10:, into the appropriate roots for each
> > node. (This can be cloned for all 3 nodes since those directories are
> > populated with files from TCPIP$TEMPLATES.TLB.)
>
> I agree that it seems like this ought to work, but doing the individual
> configuration commands inside the TCPIP utility seems somewhat
> more controlled (at least to the extent that I'm confident what's
> going on).

Be VERY careful here. The entire TCP/IP Services package is VERY
intolerant of any disk other than the system disk. SYS$SPECIFIC and
SYS$SYSDEVICE are hardcoded all over the place. I once
tried to persuade TCP/IP Services to run from a non-system disk but I
gave up.

At DECUS in San Diego, in 1999, I had a lengthy discussion with
several members of the TCP/IP engineering team. One item was exactly this
aspect. They agreed that it should be possible to configure TCP/IP on
a disk other than the system disk. Unfortunately, it is still on the
list of possible future enhancements (I hope). They even had a
dedicated QA engineer at that time!

Bart

JF Mezei

Jul 15, 2008, 6:01:56 AM
Bart...@gmail.com wrote:

> Be VERY careful here. The entire TCP/IP Services package is VERY
> intolerant of any disk other than the system disk. SYS$SPECIFIC and
> SYS$SYSDEVICE are hardcoded all over the place. I once
> tried to persuade TCP/IP Services to run from a non-system disk but I
> gave up.


The setup of the core stuff requires only the use of one utility, the
TCPIP utility (TCPIP$UCP.EXE). It can be easier to control.

Worst-case scenario: copy the TCPIP$*.DAT files from the "D" system disk
over to the A-B-C system disk. Those files will contain the D setup.
Then use the TCPIP utility on each of the A-B-C nodes to add each node's
specific stuff. This would leave fully populated TCPIP*.DAT files
residing on DSA0:, and you can move them back over to the DSA10: system disk.
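
i.e. something as blunt as this (directory assumed to be the usual
VMS$COMMON.SYSEXE on both disks; double-check what is already there
before overwriting anything):

$ COPY/LOG DSA10:[VMS$COMMON.SYSEXE]TCPIP$*.DAT DSA0:[VMS$COMMON.SYSEXE]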


Where system disk dependencies really exist is in TCPIP$CONFIG, because
it creates directories specific to each service, and those are created
either in the specific root or in VMS$COMMON depending on the service. Some
are created at the top of the system disk as well.


Note to the OP: Make sure you scan through TCPIP$CONFIG.COM to look at
the code that determines, based on the name of the ethernet device, the name
of the interfaces you will have on that system.

If the ethernet device names on all 4 systems are the same device names,
then you can assume the same interface names. But if the ethernet
devices are different, then you'll have different interface names.

here is the relevant definition:
TCPIP$EDEV = "0 XE:DE XQ:QE ES:SE ET:NE EX:XE EF:FE EZ:ZE EC:CE" + -
"ER:RE EW:WE EB:BE EI:IE LL:LE EG:GE VL:VE"

My ethernet device is an EWA0, which means my internet interface is WE0
(from the EW:WE item)
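
If you just want to sanity-check one device name against that table
without wading through the whole procedure, here is a rough DCL sketch of
the idea (the table is re-flowed here, and the controller-letter-to-digit
mapping, A -> 0, B -> 1, is my guess from EWA0 becoming WE0; the real
logic lives in TCPIP$CONFIG.COM):

$ dev = "EWA0"                 ! device to check (example)
$ tab = "XE:DE XQ:QE ES:SE ET:NE EX:XE EF:FE EZ:ZE EC:CE " + -
        "ER:RE EW:WE EB:BE EI:IE LL:LE EG:GE VL:VE"
$ pfx = F$EXTRACT(0, 2, dev)   ! e.g. "EW"
$ i = 0
$ loop:
$   pair = F$ELEMENT(i, " ", tab)
$   IF pair .EQS. " " THEN GOTO done   ! prefix not found in the table
$   IF F$ELEMENT(0, ":", pair) .EQS. pfx
$   THEN
$       unit = F$LOCATE(F$EXTRACT(2, 1, dev), "ABCDEFGHIJKLMNOPQRSTUVWXYZ")
$       WRITE SYS$OUTPUT "Interface: ", F$ELEMENT(1, ":", pair), F$STRING(unit)
$       GOTO done
$   ENDIF
$   i = i + 1
$   GOTO loop
$ done: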

R.A.Omond

Jul 15, 2008, 6:29:47 AM
JF Mezei wrote:
> Bart...@gmail.com wrote:
>
>> Be VERY careful here. The entire TCP/IP Services package is VERY
>> intolerant of any disk other than the system disk. SYS$SPECIFIC and
>> SYS$SYSDEVICE are hardcoded all over the place. I once
>> tried to persuade TCP/IP Services to run from a non-system disk but I
>> gave up.
>
>
> [...snip...]

> Note to the OP: Make sure you scan through TCPIP$CONFIG.COM to look at
> the code that determines, based on the name of the ethernet device, the name
> of the interfaces you will have on that system.
>
> If the ethernet device names on all 4 systems are the same device names,
> then you can assume the same interface names. But if the ethernet
> devices are different, then you'll have different interface names.
>
> here is the relevant definition:
> TCPIP$EDEV = "0 XE:DE XQ:QE ES:SE ET:NE EX:XE EF:FE EZ:ZE EC:CE" + -
> "ER:RE EW:WE EB:BE EI:IE LL:LE EG:GE VL:VE"
>
> My ethernet device is an EWA0, which means my internet interface is WE0
> (from the EW:WE item)

To Ken (Fairfield), I was just about to add a similar reply to JF's.

Unless the interfaces are all the same on all 4 systems, you're going
to get into a *huge* mess, even if you manage to get round the
indexed-by-node-name issue.

I think you'd be well advised to not go ahead with your original plan.

Just my 2c worth.

JF Mezei

Jul 15, 2008, 12:14:14 PM
R.A.Omond wrote:

> Unless the interfaces are all the same on all 4 systems, you're going
> to get into a *huge* mess, even if you manage to get round the
> indexed-by-node-name issue.

Not necessarily. The logic to determine the interface name is pretty
straightforward in tcpip$config.com.

Once you have Node D all configured and running, it makes for a good
template to see how to do the other nodes manually.

Ken.Fa...@gmail.com

Jul 15, 2008, 7:30:46 PM
On Jul 15, 3:01 am, JF Mezei <jfmezei.spam...@vaxination.ca> wrote:
[...]

> Note to the OP: Make sure you scan through TCPIP$CONFIG.COM to look at
> the code that determines, based on the name of the ethernet device, the name
> of the interfaces you will have on that system.
>
> If the ethernet device names on all 4 systems are the same device names,
> then you can assume the same interface names. But if the ethernet
> devices are different, then you'll have different interface names.
>
> here is the relevant definition:
> TCPIP$EDEV = "0 XE:DE XQ:QE ES:SE ET:NE EX:XE EF:FE EZ:ZE EC:CE" + -
> "ER:RE EW:WE EB:BE EI:IE LL:LE EG:GE VL:VE"
>
> My ethernet device is an EWA0, which means my internet interface is WE0
> (from the EW:WE item)

These systems have either 4 or 5 devices each, but all are EW's.
So node D has EWA0, EWB0, EWC0 and EWD0, but node C
has in addition EWF0. So I have WE0-WE3 on two nodes and
WE0-WE4 on the other two nodes.

That does mean that any interface configuration must be done
from the node that "owns" the interfaces.

Still, my configuration is simple enough: all services that can
be enabled cluster-wide are; the SYS$SPECIFIC directories are
only TCPIP$NTP, TCPIP$BIND, TCPIP$SMTP, and TCPIP$ETC,
the latter of which has only template files (i.e., .DAT files with
comments).

So it looks like there are the 7 logicals pointing to "database"
files in SYS$COMMON:[SYSEXE], and if I define those
appropriately, e.g., on node C with DSA10: mounted, I'd
think I could run TCPIP$UCP.EXE from DSA10: and "see"
all the configurations...and add those for node C.
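
Roughly this, in other words, from node C (the list of logicals is my
recollection of the usual set, so verify against SHOW LOGICAL TCPIP$*
on node D first; the interface name, address and mask are made up):

$ ! DSA10: already mounted on node C
$ DEFINE TCPIP$CONFIGURATION DSA10:[VMS$COMMON.SYSEXE]TCPIP$CONFIGURATION.DAT
$ DEFINE TCPIP$HOST          DSA10:[VMS$COMMON.SYSEXE]TCPIP$HOST.DAT
$ DEFINE TCPIP$NETWORK       DSA10:[VMS$COMMON.SYSEXE]TCPIP$NETWORK.DAT
$ DEFINE TCPIP$ROUTE         DSA10:[VMS$COMMON.SYSEXE]TCPIP$ROUTE.DAT
$ DEFINE TCPIP$SERVICE       DSA10:[VMS$COMMON.SYSEXE]TCPIP$SERVICE.DAT
$ DEFINE TCPIP$PROXY         DSA10:[VMS$COMMON.SYSEXE]TCPIP$PROXY.DAT
$ DEFINE TCPIP$EXPORT        DSA10:[VMS$COMMON.SYSEXE]TCPIP$EXPORT.DAT
$ TCPIP :== $DSA10:[VMS$COMMON.SYSEXE]TCPIP$UCP.EXE
$ TCPIP SET CONFIGURATION INTERFACE WE0 /HOST=192.0.2.13 -
        /NETWORK_MASK=255.255.255.0

A DUMP/RECORD of TCPIP$CONFIGURATION.DAT on node D afterwards, as JF
suggested, should show the new record for node C.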

I would also expect the directories for BIND and NTP,
etc., to get created on DSA0: in C's SYS$SPECIFIC,
but those can be copied to DSA10 after the fact.

So what is the gotcha here? (I intend to avoid
TCPIP$CONFIG.COM for a variety of reasons.)

Thanks, Ken

Ken.Fa...@gmail.com

Jul 15, 2008, 7:33:44 PM
On Jul 15, 3:29 am, "R.A.Omond" <Roy.Om...@BlueBubble.UK.Com> wrote:
[...]

> To Ken (Fairfield), I was just about to add a similar reply to JF's.
>
> Unless the interfaces are all the same on all 4 systems, you're going
> to get into a *huge* mess, even if you manage to get round the
> indexed-by-node-name issue.

I'm not sure which one of JF's creative solutions you
are thinking of, but if I configure a given node's interfaces
while logged into that node, how do I get into a mess
with indexed-by-node-name issues???

> I think you'd be well advised to not go ahead with your original plan.

Thanks for your concern, Ken

Ken.Fa...@gmail.com

Jul 15, 2008, 7:37:15 PM

Why do I need to know anything to worry about that logic,
or indeed, why do I need to think of node D's configuration
as a template, if I'm doing node C's configuration from node C?

As I've said more than once in this thread, I intend to avoid
TCPIP$CONFIG.COM, which likely *does* make assumptions
about the product being installed on the system disk that
is the node's boot disk.

-Ken

JF Mezei

Jul 15, 2008, 9:28:22 PM
Ken.Fa...@gmail.com wrote:

> Why do I need to know anything to worry about that logic,
> or indeed, why do I need to think of node D's configuration
> as a template, if I'm doing node C's configuration from node C?

Setting up the interface is fairly straightforward since you can compare
the permanent settings on both nodes.

But for the name, route and hosts databases, you need to be careful because
there are dynamically created entries and statically created ones (SHOW
CONF instead of SHOW). Again, DUMP/RECORD gives you a good idea of the
permanent entries you need.

Also remember that you need a TCPIP$NETWORK file even if it is empty.

And remember to create an entry in the hosts database (SET NAME ) for
each node because this is where it picks up its name if I remember
correctly.
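
If memory serves, the verb for the hosts database is SET HOST rather
than SET NAME; the node names and addresses below are just placeholders:

TCPIP> SET HOST "nodea" /ADDRESS=192.0.2.11
TCPIP> SET HOST "nodeb" /ADDRESS=192.0.2.12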

Remember to make backups of your TCPIP*.dat files before starting.
