file { "/home/directory1":ensure => directory,owner => "user",group => "group",mode => "755",}mount { "/home/directory1":device => "our-thumper.domain.com:/export/directory1",atboot => yes,fstype => "nfs",options => "tcp,hard,intr,rw,bg",name => "/home/directory1",ensure => mounted,remounts => true,pass => "0",require => File["/home/directory1"],
}
device => "our-thumper.domain.com:/export/$name",
atboot => yes,
fstype => "nfs",
options => "tcp,hard,intr,rw,bg",
name => "/home/$name",
ensure => mounted,
remounts => true,
pass => "0",
require => File["/home/$name"],
}
I have several NFS mounts to manage, on many systems. On each system, I must ensure that the root directory and the mount-point path exist and have the correct permissions beforehand, and then have Puppet ensure they are mounted.
Therefore, unless you do something to ensure your FS is unmounted before the File is applied, the File will sometimes manage the local directory, but other times manage the remote one. That may be tolerable, [...]
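One possible way to guarantee that (a sketch of my own, not something anyone in this thread has proposed): prepare the local mount-point directory with an exec that is guarded by the util-linux "mountpoint" utility, so the ownership/mode is only ever applied while nothing is mounted there. The path, owner, and group below are placeholders taken from the earlier example.

  exec { "prepare /home/directory1":
    # Create the local mount point with the desired ownership and mode...
    command => "install -d -o user -g group -m 0755 /home/directory1",
    path    => ["/bin", "/usr/bin", "/sbin", "/usr/sbin"],
    # ...but only while it is NOT already an active mount point, so we never
    # chown/chmod the remote filesystem by accident.
    unless  => "mountpoint -q /home/directory1",
    # Run before Puppet attempts the corresponding mount.
    before  => Mount["/home/directory1"],
  }

The trade-off is that the exec reports a change on every agent run until the mount is actually active.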
This is something I've been concerned about, and I'm not sure how to properly approach it. For example, we can use Puppet to ensure that the directories (mount points) exist and that the entries are present in /etc/fstab -- but I grow very concerned about automating the NFS-mount part of this. I don't think we'd want to use autofs, as the namespace isn't visible unless you "cd" directly into it. We nixed this idea with /home, for example.
What would be the safest and most sensible way to approach this?
If Puppet were to manage /home/something, an NFS mount, and ensure it's mounted... would it automatically check whether both /home and / were also mounted?
In most cases, on our older systems, /home is actually just on / -- a full partition that sits on a RAID5 layer. So, at best, Puppet would just get a standard error that / and /home are already present and mounted.

What I'm concerned about is:
- Ensuring the directories are present, with correct permissions and ownership
- Ensuring that the NFS mount is active and available (possibly sending out an error via syslog if not)
- NOT causing some bizarre cascade of mount issues by Puppet repeatedly attempting to fix something it cannot, in the case of an error that requires manual intervention
Our environment is growing substantially, to the point where manually editing fstab is becoming a real PITA and also leaves room for inconsistencies (and minor typos). So I really need Puppet to manage those mounts.
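If the immediate goal is consistent fstab entries without Puppet ever forcing a mount on a box that needs manual attention, one option (a sketch reusing the resource names from the example at the top of the thread) is to start with ensure => present, which manages only the fstab entry, and switch to ensure => mounted once you trust it:

  mount { "/home/directory1":
    ensure  => present,   # keep the fstab entry consistent, never force a mount
    device  => "our-thumper.domain.com:/export/directory1",
    fstype  => "nfs",
    options => "tcp,hard,intr,rw,bg",
    atboot  => yes,
    pass    => "0",
    require => File["/home/directory1"],
  }

That keeps /etc/fstab identical (and typo-free) across hosts while leaving the actual mounting to boot time or a manual "mount -a", so a dead export cannot trigger repeated mount attempts on every agent run.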
I'm not sure I would need automounter for these.
I've been playing around with this code and have encountered several errors. As noted below, there is going to be an issue with /home; however, I thought I could get around that by declaring that /first/, which won't work -- as it complains about duplicate declarations of /home.
class nfs_mounts_prod {

  define nfs_mounts {
    $server  = "ourserver.com"
    $options = "tcp,rw,hard,intr,vers=3,tcp,rsize=32768,wsize=32768,bg"
    # These needed to be defined here; it would not work outside of the class definition
    $prod_mounts = ['201301', '201301pod',]

    file { "/home":
      ensure => directory,
      owner  => "root",
      group  => "root",
      mode   => "0755",
    }

    file { "/home/${name}":
      ensure  => directory,
      owner   => "16326",
      group   => "90",
      mode    => "0755",
      require => File["/home"],
    } # file

    mount { "/home/${name}":
      device   => "${server}:/export/prod/${name}",
      atboot   => yes,
      fstype   => nfs,
      options  => "${options}",
      name     => "/home/${name}",
      ensure   => mounted,
      remounts => true,
      pass     => "0",
      require  => File["/home/${name}"],
    } # mount
  } # nfs_mounts

  nfs_mounts { $prod_mounts: }
} # class nfs_mounts_prod

Can you tell me what's wrong -- or if this is even going to work :-)
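For what it's worth, if the duplicate-declaration error is coming from file { "/home" } being declared once per instance of the define, one way around it (a sketch under that assumption; I've moved the define to top level, renamed it nfs_mount, passed $server/$options in as parameters, and dropped the duplicated "tcp" from the options string) would be:

  class nfs_mounts_prod {

    $prod_mounts = ['201301', '201301pod']

    # Declared exactly once, at class scope, so every nfs_mount instance can
    # require it without causing a duplicate declaration.
    file { "/home":
      ensure => directory,
      owner  => "root",
      group  => "root",
      mode   => "0755",
    }

    nfs_mount { $prod_mounts:
      server  => "ourserver.com",
      options => "tcp,rw,hard,intr,vers=3,rsize=32768,wsize=32768,bg",
    }
  }

  define nfs_mount ($server, $options) {

    file { "/home/${name}":
      ensure  => directory,
      owner   => "16326",
      group   => "90",
      mode    => "0755",
      require => File["/home"],
    }

    mount { "/home/${name}":
      ensure   => mounted,
      device   => "${server}:/export/prod/${name}",
      fstype   => "nfs",
      options  => $options,
      atboot   => yes,
      remounts => true,
      pass     => "0",
      require  => File["/home/${name}"],
    }
  }

Whether the per-mount ownership/mode should really be managed while the NFS share is mounted is the same question raised earlier in the thread, so the File/Mount ordering here is only one possible arrangement.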
Thanks for the reference, John.
We need to ensure that these remote mounts are owned/grouped by a specific UID/GID -- which is why I had ownership involved there. We could do this via UID/GID only (not name) if that works better? I don't understand how applying that ownership to /home/201301 would affect / or /home.
Then, Puppet would need to check that it's present, has the correct permissions and ownership, and ensure it's mounted -- or, in the case of aged data, that it's not mounted and not present.
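For the "aged data" case, the mount type can express "not mounted and not present" directly, and the ownership concern stays on the per-mount File resource, which only manages the named path (it is not recursive unless told to be). A minimal sketch; the UID/GID and the active path come from the thread, while the retired path "/home/201212" is made up for illustration:

  # Active dataset: numeric UID/GID on the mount point only; / and /home are
  # untouched, because a file resource manages just the named path.
  file { "/home/201301":
    ensure => directory,
    owner  => "16326",
    group  => "90",
    mode   => "0755",
  }

  # Retired dataset: remove the fstab entry (Puppet unmounts it if needed)...
  mount { "/home/201212":
    ensure => absent,
  }

  # ...and then remove the now-empty local mount-point directory.
  file { "/home/201212":
    ensure  => absent,
    require => Mount["/home/201212"],
  }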