CVS Update: (clem) rocks/src/roll/service-pack/src/usersguide

c...@rocks-127.sdsc.edu

Feb 4, 2013, 8:27:06 PM
to rocks-clusters-de...@googlegroups.com, anoop.r...@gmail.com, greg....@gmail.com, mason...@gmail.com, philip.pa...@gmail.com, luca.c...@gmail.com
clem 13/02/04 17:27:06

Modified: src/roll/service-pack/src/usersguide fixes.sgml
installing.sgml
Log:
first cleanup of the service pack user guide

Revision Changes Path
1.13 +1 -0 rocks/src/roll/service-pack/src/usersguide/fixes.sgml

Index: fixes.sgml
===================================================================
RCS file: /home/cvs/CVSROOT/rocks/src/roll/service-pack/src/usersguide/fixes.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -b -t -w -r1.12 -r1.13
--- fixes.sgml 5 Jan 2011 19:00:53 -0000 1.12
+++ fixes.sgml 5 Feb 2013 01:27:05 -0000 1.13
@@ -10,6 +10,7 @@

<listitem>
<para>
+ TODO update this
In specific configurations, when you run "rocks sync users", you
could see the error message:
</para>



1.20 +2 -42 rocks/src/roll/service-pack/src/usersguide/installing.sgml

Index: installing.sgml
===================================================================
RCS file: /home/cvs/CVSROOT/rocks/src/roll/service-pack/src/usersguide/installing.sgml,v
retrieving revision 1.19
retrieving revision 1.20
diff -u -b -t -w -r1.19 -r1.20
--- installing.sgml 14 Feb 2011 22:14:17 -0000 1.19
+++ installing.sgml 5 Feb 2013 01:27:05 -0000 1.20
@@ -70,24 +70,8 @@

<para>
<screen>
-error - membership "Login" already exists
-{appliance} [graph=string] [membership=string] [node=string] [os=string] [public=bool]
-error - firewall rule already exists
-{appliance} [action=string] [chain=string] [network=string] [output-network=string] [protocol=string] [service=string]
-error - firewall rule already exists
-{appliance} [action=string] [chain=string] [network=string] [output-network=string] [protocol=string] [service=string]
-error - firewall rule already exists
-{appliance} [action=string] [chain=string] [network=string] [output-network=string] [protocol=string] [service=string]
-error - firewall rule already exists
-{appliance} [action=string] [chain=string] [network=string] [output-network=string] [protocol=string] [service=string]
-error - firewall rule already exists
-{appliance} [action=string] [chain=string] [network=string] [output-network=string] [protocol=string] [service=string]
-error - firewall rule already exists
-{appliance} [action=string] [chain=string] [network=string] [output-network=string] [protocol=string] [service=string]
-error - firewall rule already exists
-{appliance} [action=string] [chain=string] [network=string] [output-network=string] [protocol=string] [service=string]
-error - firewall rule already exists
-{appliance} [action=string] [chain=string] [network=string] [output-network=string] [protocol=string] [service=string]
+
+TODO check if we have some warnings
</screen>
</para>

@@ -95,16 +79,6 @@
The above error messages can be safely ignored.
</para>

-<para>
-The configuration graph inside the SGE roll contains the links that properly
-configures the frontend to install "login appliances".
-To say it a different way, if you didn't install the SGE roll, then you
-wouldn't be able to install login appliances (and you should be able to
-install login appliances without the SGE roll).
-This is one of the bugs that service pack roll fixes -- if the frontend already
-has the ability to install login appliances, then when the service pack is
-applied, you will see the above error messages.
-</para>
</warning>

</section>
@@ -128,20 +102,6 @@
</screen>
</para>

-<warning>
-<para>
-It is critical that you run cluster-kickstart-pxe as it will force the
-compute nodes to PXE boot.
-It is important that you PXE boot the nodes for the first install,
-because with a PXE boot based install, the nodes with get their initrd from
-the frontend and inside the initrd is a new tracker-client that is compatible
-with the new tracker-server.
-After the first install, a new initrd will be on
-the hard disk of the installed nodes and then it is safe to run
-/boot/kickstart/cluster-kickstart.
-</para>
-</warning>
-
</section>
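
The warning removed by the second hunk described forcing compute nodes to PXE boot so they pick up the new tracker-client from the frontend's initrd. As a sketch only: `/boot/kickstart/cluster-kickstart-pxe` and `/boot/kickstart/cluster-kickstart` are the paths named in the removed text, while the `rocks run host` invocation used to reach the compute nodes is an assumption and not part of this commit:

```
# First install after the update: force a PXE boot so nodes fetch the
# frontend's initrd (which carries the compatible tracker-client).
# NOTE: 'rocks run host compute ...' is an assumed invocation, not from this diff.
rocks run host compute command="/boot/kickstart/cluster-kickstart-pxe"

# Subsequent reinstalls: the new initrd is already on local disk,
# so the plain kickstart script is safe.
rocks run host compute command="/boot/kickstart/cluster-kickstart"
```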





