Enter passphrase for key '/root/.ssh/id_rsa':
I should not have to do this each time I log in to a compute node. Can
anyone tell me how to resolve this issue?
Chris Penn...
--
" A Mathematician is a machine for turning coffee into theorems." -Erdös,
Paul
Nov. 1992
# ssh-agent $SHELL
# ssh-add
will allow passwordless access to your nodes. You can also remove the
passphrase on root's ssh key with
# ssh-keygen -p
This is standard ssh stuff; nothing Rocks-specific, other than that root's
public key is copied to each node's /root/.ssh/authorized_keys file at
installation time.
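For illustration, here is what removing a passphrase with ssh-keygen -p looks like end-to-end. The sketch uses a throwaway key in a temporary directory and a stand-in passphrase "secret"; on a Rocks frontend the real key is /root/.ssh/id_rsa.

```shell
# Demo on a throwaway key; on a Rocks frontend the real key is /root/.ssh/id_rsa.
tmpdir=$(mktemp -d)

# 1. Generate a key protected by the passphrase "secret" (stand-in value).
ssh-keygen -q -t rsa -b 2048 -N 'secret' -f "$tmpdir/id_rsa"

# 2. Remove the passphrase: -P supplies the old one, -N '' sets an empty one.
ssh-keygen -q -p -P 'secret' -N '' -f "$tmpdir/id_rsa"

# 3. The private key now loads without any prompt.
pubkey=$(ssh-keygen -y -f "$tmpdir/id_rsa")
echo "$pubkey"

rm -rf "$tmpdir"
```

After this, ssh from the frontend to any compute node no longer prompts, since the node already trusts root's public key.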
-P
--
Philip Papadopoulos, PhD
University of California, San Diego
858-822-3628
~]# ssh-agent $SHELL
~]# ssh-add
Enter passphrase for /root/.ssh/id_rsa:
~]# ssh compute-0-0
Last login: Wed Oct 31 15:12:35 2007 from oe.local
Rocks Compute Node
Rocks 4.3 (Mars Hill)
Profile built 13:52 30-Oct-2007
Kickstarted 14:00 30-Oct-2007
compute-0-0~]#
However, when I log out of the frontend, I need to repeat this process
each time. I would like to only need to do this once. How do I configure
things so that I can ssh to any of the nodes as root (or as another
user, assuming the user exists on that node) from the master node,
permanently?
Chris Penn
> Doing as you suggested works temporarily:
> as root on the frontend
> ~]# ssh-agent $SHELL
> ~]# ssh-add
> Enter passphrase for /root/.ssh/id_rsa:
> ~]# ssh compute-0-0
> Last login: Wed Oct 31 15:12:35 2007 from oe.local
> Rocks Compute Node Rocks 4.3 (Mars Hill)
> Profile built 13:52 30-Oct-2007
> Kickstarted 14:00 30-Oct-2007
> compute-0-0~]#
> However, when I log out of the frontend, I need to repeat this
> process each time. I would like to only need to do this once. How do
> I configure things so that I can ssh to any of the nodes as root (or
> as another user, assuming the user exists on that node) from the
> master node, permanently?
>
Here is one way to do it (playing with ssh keys).
1. copy the root's cluster ssh keys onto your workstation's .ssh
directory (do not erase your existing keys)
MacMini:~ bourdin$ cd
MacMini:~ bourdin$ scp ro...@schur2.math.lsu.edu:.ssh/id_\* .
id_rsa                        100%  951   0.9KB/s   00:00
id_rsa.pub                    100%  234   0.2KB/s   00:00
MacMini:~ bourdin$ mv id_rsa .ssh/id_rsa_schur2
MacMini:~ bourdin$ mv id_rsa.pub .ssh/id_rsa_schur2.pub
2. add the cluster's key to your workstation's ssh agent and ssh without a password
MacMini:~ bourdin$ ssh-add .ssh/id_rsa_schur2
Enter passphrase for .ssh/id_rsa_schur2:
Identity added: .ssh/id_rsa_schur2 (.ssh/id_rsa_schur2)
MacMini:~ bourdin$ ssh ro...@schur2.math.lsu.edu
Last login: Wed Oct 31 22:06:03 2007 from ip72-207-248-75.br.br.cox.net
Rocks Frontend Node - schur2 Cluster
Rocks 4.3 (Mars Hill)
Profile built 16:59 08-Oct-2007
Kickstarted 12:34 08-Oct-2007
[root@schur2 ~]#
3. At this point, you should be able to ssh to your frontend, but
ssh'ing to your compute nodes still doesn't work. You need to force
your workstation to forward its agent. Edit your workstation's
.ssh/config and add the lines
Host schur2.math.lsu.edu
ForwardAgent yes
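Putting steps 2 and 3 together, the workstation-side ~/.ssh/config entry would look something like this (hostname and key name taken from the example above; the IdentityFile line is an optional extra that tells ssh which key to offer, though the agent must still hold the key to avoid the passphrase prompt):

```
Host schur2.math.lsu.edu
    ForwardAgent yes
    IdentityFile ~/.ssh/id_rsa_schur2
```

With ForwardAgent set, the agent on your workstation answers key challenges from the compute nodes, so no key material needs to live on the frontend session itself.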
For the users, I use a different trick. You could probably adapt it to
the root account. It is easier for users' accounts since their home
directories are mounted on the compute nodes. These are a few lines from
my user-add script (of course you will have to change the cluster
hostname and some environment variable names):
echo "Creating passwordless key for the clusters"
if test -e ${HOMEDIR}/.ssh/id_rsa_cluster ; then
    echo " $HOMEDIR/.ssh/id_rsa_cluster already exists."
else
    ssh-keygen -t rsa -P "" -f $HOMEDIR/.ssh/id_rsa_cluster -C "$ID for schur cluster"
fi
echo "Setting passwordless ssh between frontend and compute nodes"
echo from=\"schur*\" `cat $HOMEDIR/.ssh/id_rsa_cluster.pub` >> $HOMEDIR/.ssh/authorized_keys
chmod 644 $HOMEDIR/.ssh/authorized_keys
echo Host compute-\* >> $HOMEDIR/.ssh/config
echo identityfile ~/.ssh/id_rsa_cluster >> $HOMEDIR/.ssh/config
In short: this creates a passwordless ssh key which is trusted only
when the connection is initiated from the frontend (the 'from=' option
in authorized_keys) and is used when connecting to the compute nodes
(the Host line in the config file).
Have a look at http://www.ibm.com/developerworks/library/l-keyc.html
for more on ssh key management.
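To make the effect concrete, the lines the script appends would look roughly like the following (key material abbreviated, username "jdoe" hypothetical). In ~/.ssh/authorized_keys:

```
from="schur*" ssh-rsa AAAAB3...<key material>... jdoe for schur cluster
```

and in ~/.ssh/config:

```
Host compute-*
    identityfile ~/.ssh/id_rsa_cluster
```

The from="schur*" option makes sshd reject this key for any connection not originating from a host matching that pattern, which is what confines the passwordless key to the cluster.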
HTH
Blaise
--
Department of Mathematics, Louisiana State University, Baton Rouge, LA
70803, USA
http://www.math.lsu.edu/~bourdin
I. If you clear the passphrase on root's ssh key on the frontend, this will:
A. Give you passwordless access from the frontend to all nodes in
your cluster
B. Not give you passwordless access from nodes in your cluster to
the frontend (or other nodes).
II. If users clear the passphrase on their ssh key, this will:
A. Give passwordless access from the frontend to nodes as that user
B. Give passwordless access from nodes to the frontend as that user
Why the difference? If you have root on the frontend, you own the cluster.
It isn't really a security risk if the ssh keypair being used is local to
the cluster only. Having root "hacked" on a compute node does not imply
you own the entire cluster, only that node.
For users, their access is symmetric. This makes batch jobs and other
goodies actually practical.
The above is simple and balanced in terms of convenience and security. It
is critical that the passwordless keys are only used within the cluster.
Some cluster admins change the permissions of ~user/.ssh so that only an
admin can put an authorized key into .ssh/authorized_keys (that helps
somewhat). Some also further restrict ssh settings about which external
machines can be used to remotely access the cluster. No security system
is perfect, and one must strike a personal balance of ease-of-use versus
the chance of an account being compromised through password or key
hijacks. We have systems on which a commercial-grade one-time password
system is the only way to remotely access the system. Even that has
vulnerabilities.
-P
--
Philip Papadopoulos, PhD
University of California, San Diego
858-822-3628