[Rocks-Discuss] Copying all files to all compute nodes


Amit U Sinha

Jan 12, 2010, 6:38:27 PM
to Discussion of Rocks Clusters
There are some large binary files on the frontend which need to be
copied to all compute nodes when they are built. We tried putting them
in /share/apps and then adding the following line to extend-compute:

cp /share/apps/large_files /state/partition1/

However, this doesn't work, perhaps because /share/apps is not
mounted while extend-compute is processed. Is there an easy way to have
some files copied to the compute nodes each time a node is rebuilt?

Thanks,
-- Amit



jean-francois prieur

Jan 12, 2010, 9:51:47 PM
to Discussion of Rocks Clusters
You could try using the 411 service: any files listed in /var/411/Files.mk
are automatically sent to the compute nodes when they are built.

Edit the file /var/411/Files.mk and add the path of the file you want
replicated (e.g. /etc/rc.d/rc.local) to the FILES_NOCOMMENT section.

Then run make clean and make; when you do rocks sync users, this will
replicate the files specified in Files.mk to the nodes. The same happens
on node reinstallation.

I don't know how well it scales to large files, but it may be worth a try.
The documentation here is out of date but will give you an idea of how the
system works:

http://www.rocksclusters.org/rocks-documentation/4.2/service-411.html
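The steps above can be sketched as the following command sequence. This is an
untested sketch based only on the description in this thread; the exact
Files.mk layout and variable names may differ between Rocks releases, so
check your installed /var/411/Files.mk before editing.

```
# On the frontend, register the file with the 411 service.
# In /var/411/Files.mk, add the file's path to the FILES_NOCOMMENT
# section, e.g. /etc/rc.d/rc.local, then:

cd /var/411
make clean && make        # rebuild the set of 411-managed files
rocks sync users          # push the registered files to the compute nodes
```

The files are also re-sent automatically when a node reinstalls, which is
what makes this approach fit the "copy on every rebuild" requirement.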

Regards,
JF Prieur

2010/1/12 Amit U Sinha <amit_...@dfci.harvard.edu>


Scott L. Hamilton

Jan 13, 2010, 10:27:39 AM
to Discussion of Rocks Clusters
Amit,

If you want to do this you can possibly use scp, as scp should be
available on the nodes and does not require authentication:

scp headnodename:/share/apps/large_files /state/partition1/

Or create a post-install script that runs on first boot as a service,
where the last line of the script removes the script itself.

Another option is to add a symlink in the /var/www/html folder pointing
to /share/apps/large_files and use wget to copy the files.
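The run-once script pattern described above might look like this minimal
sketch. The copy command is a stand-in (on a real node it would be the scp
shown above), and the install path of the script is hypothetical; the point
is the last line, which deletes the script so it only ever runs once.

```shell
#!/bin/sh
# Hypothetical first-boot script, e.g. installed as an init service
# on the compute node by extend-compute.

# One-time work goes here (stand-in for the real scp from the frontend):
echo "copy step would run here"

# Last line removes the script itself so it never runs again:
rm -f -- "$0"
```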

Scott

Empain Alain

Jan 13, 2010, 12:47:45 PM
to Discussion of Rocks Clusters
Hello,

I have a frontend and a test compute node (not yet a NAS).

From the frontend,

* I can execute anything on the compute node with
rocks iterate host ...

* the addition of a user succeeded, and I can see it on the compute node,
with its .ssh files.

But both ssh root@compute-0-0 and ssh user@compute-0-0 fail:

I receive the password prompt and type the same password as on the
frontend, but I get:
'Permission denied, please try again'.

Moreover, rocks-console does not work :
'vncviewer: unable to open display "" '

Perhaps I missed something obvious.

Thanks for any hint,

Alain

--


Dr Alain EMPAIN, Bioinformatics, Bryology
National Botanic Garden of Belgium alain....@br.fgov.be
University of Liège, GIGA +1, Alma-in-silico alain....@ulg.ac.be
Rue des Martyrs, 11 B-4550 Nandrin
Mobile: +32 497 701764 HOME:+32 85 512341 ULG: +32 4 3664157


Greg Bruno

Jan 13, 2010, 2:16:58 PM
to Discussion of Rocks Clusters
On Wed, Jan 13, 2010 at 9:47 AM, Empain Alain <Alain....@ulg.ac.be> wrote:
> Hello,
>
> I have a frontend and a test compute node (not yet a NAS).
>
> From the frontend,
>
> * I can execute anything on the compute node with
> rocks iterate host ...
>
> * the addition of a user succeeded, and I can see it on the compute node,
> with its .ssh files.
>
> But either ssh root@compute-0-0 or user@compute-0-0 fail :

after you added the user, did you run:

# rocks sync users

- gb

Empain Alain

Jan 13, 2010, 4:05:46 PM
to Discussion of Rocks Clusters

Hello Bruno,

yes, I did, and I used 'rocks iterate' to verify the presence of the
user in the compute-0-0 passwd and shadow files, and in its ~/ files,
including the .ssh files.

Thanks,

Alain


>
> - gb
