I need a virtual exported resource, or something similar


Stephan

Jan 9, 2014, 2:49:35 PM
to puppet...@googlegroups.com
Hi All,

So here's my use case:

I've got an application with multiple environments, say live, qa and dev, and each environment has multiple servers. The application requires an NFS share mounted on each of these servers. Each environment has its own NFS drive.
I also have a management server which needs to mount all these NFS drives of every environment.

I use a mount resource on each environment server to mount its NFS drive, with the help of an $environment variable that points it at the right share on the NFS server. That is all working fine.

Now I want to puppetize the mounts of all NFS shares on the management server as well, so I thought of using something like this in the actual environment server manifest:

@@mount { "mgmtnfs-${environment}":
    name   => "/${path}-${environment}",
    fstype => "nfs",
}

and I wanted to collect that in the management server manifest with

Mount <<||>>

The problem is that each exported resource must be globally unique across all nodes, not just within each environment. That means that if two servers export this resource for the same NFS mount I'll get a duplicate resource error. I don't want an individual NFS mount on the mgmt server per node, but per environment, so I can't use $host instead of $environment.

If I used local resources in the mgmt server manifest I would have to set up 10 mount resources individually, since that's how many environments I have. Actually 30, since every environment has not one but three separate NFS mounts. Since that would be a manual step for every new environment, and a duplication of code, I consider it bad practice.

In my head the most elegant solution would be a resource which is both virtual and exported, so that it can be requested ("realized") by every environment server but is collected only once. I don't think that is currently possible (or is it?). My questions are: would it be worth a feature request? And are there other ways to get this done in a tidy manner?

Thanks
Stephan

Christopher Wood

Jan 9, 2014, 2:59:15 PM
to puppet...@googlegroups.com
(inline)

On Thu, Jan 09, 2014 at 06:49:35AM -0800, Stephan wrote:
> Hi All,
>
> So here's my use case:
>
> I've got an application with multiple environments, say live, qa and dev,
> and each environment has multiple servers. The actual application requires
> an NFS mount mounted on each of these servers. Each environment has it's
> own NFS drive.
> I also have a management server which needs to mount all these NFS drives
> of every environment.
>
> I use a mount resource included on each environment server to mount each
> NFS drive, with the help of an $environment variable, which points it to
> the right share on the NFS server, which is all working fine.
>
> Now I want to puppetize the mounts of all NFS shares on the management
> server as well, so I thought of using something like this in the actual
> environment server manifest:
>
> @@mount { mgmtnfs-$environment:
>     name => "/$path-$environment"
>     fstype => "nfs"
> }

Could you maybe use "mgmtnfs-${environment}-${fqdn}" (or add more unique-ish suffix strings) in the resource title? If I recall correctly giving each resource a unique title will ensure that each server+environment's mount is a uniquely named resource.

(I might just be rhubarbing on, I haven't used exported resources.)

> and I wanted to collect that in the management server manifest with
>
> Mount <<||>>
>
> Problem is that each exported resource must be globally unique across
> every single node, not for every environment. That means that if two
> servers export this resource to the same nfs mount I'll get an error. I
> don't want an individual nfs mount on the mgmt server per node, but per
> environment. So I can't use $host instead of $environment
>
> If I would use local resources in the mgmt server manifest I would have to
> set up 10 mount resources individually, since that's how many environments
> I have. Actually 30, since every environment has not 1 but 3 separate NFS
> mounts. Since that would be a manual step for every new environment, and
> duplication of code, I consider it bad practice.
>
> In my head the most elegant solution to this would be to have a resource
> which is both virtual and exported, so that it can be requested to be
> "realized" by every environment server, but is collected only once. I
> don't think that is currently possible (or is it?). My questions are:
> Would it be worth a feature request? And are there other ways to get this
> done in a tidy manner?
>
> Thanks
> Stephan
>

Stephan

Jan 9, 2014, 3:12:31 PM
to puppet...@googlegroups.com, christop...@pobox.com


On Thursday, January 9, 2014 2:59:15 PM UTC, Christopher Wood wrote:

Could you maybe use "mgmtnfs-${environment}-${fqdn}" (or add more unique-ish suffix strings) in the resource title? If I recall correctly giving each resource a unique title will ensure that each server+environment's mount is a uniquely named resource.

No, unfortunately that's not possible, because that would give me NFS mounts per node, not per environment. I only need one mount per environment.

Christopher Wood

Jan 9, 2014, 3:30:19 PM
to puppet...@googlegroups.com
Re-reading the original post with this in mind, I think I understand this a bit more. I personally wouldn't use exported resources for this. Instead, something along these lines:

- hiera hash with each environment and its associated mount point
- hiera lookup on a non-management node grabs its environment
- environment is used to determine which mount point via hiera_hash
- management node uses create_resources and hiera_hash to make its mount points
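
Something along these lines, completely untested (the hiera key 'myapp_nfs_mounts', the paths and the server names are all invented just to illustrate the shape):

# hiera: common.yaml (hypothetical key and values)
myapp_nfs_mounts:
  'appnfs-live':
    name:   '/srv/application-live'
    device: 'nfsserver:/application_live'
    fstype: 'nfs'
  'appnfs-qa':
    name:   '/srv/application-qa'
    device: 'nfsserver:/application_qa'
    fstype: 'nfs'

# on an application server: pick out only its own environment's entry
$all_mounts = hiera('myapp_nfs_mounts')
$key        = "appnfs-${environment}"
$my_mount   = $all_mounts[$key]
$my_path    = $my_mount['name']
mount { $my_path:
  ensure => 'mounted',
  device => $my_mount['device'],
  fstype => $my_mount['fstype'],
}

# on the management server: create every mount from the hash in one go
create_resources('mount', hiera('myapp_nfs_mounts'))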

Possibly I'm getting closer?


Stephan

Jan 9, 2014, 4:07:17 PM
to puppet...@googlegroups.com, christop...@pobox.com

Possibly I'm getting closer?

That might be an idea ... looking into Hiera has been on my task list for a while. This might be the solution for this and several other issues.

Thanks a lot for the hint!

jcbollinger

Jan 9, 2014, 4:48:10 PM
to puppet...@googlegroups.com
Hi All,

So here's my use case:

I've got an application with multiple environments, say live, qa and dev, and each environment has multiple servers. The actual application requires an NFS mount mounted on each of these servers. Each environment has it's own NFS drive.
I also have a management server which needs to mount all these NFS drives of every environment.

I use a mount resource included on each environment server to mount each NFS drive, with the help of an $environment variable, which points it to the right share on the NFS server, which is all working fine.

Now I want to puppetize the mounts of all NFS shares on the management server as well, so I thought of using something like this in the actual environment server manifest:

@@mount { mgmtnfs-$environment:
    name => "/$path-$environment"
    fstype => "nfs"
}

and I wanted to collect that in the management server manifest with

Mount <<||>>

Problem is that each exported resource must be globally unique across every single node, not for every environment. That means that if two servers export this resource to the same nfs mount I'll get an error. I don't want an individual nfs mount on the mgmt server per node, but per environment. So I can't use $host instead of $environment



I'm not sure I follow how many distinct shares are exported, by which NFS servers, or by which machines any of those are mounted.  Nevertheless, the key is probably for the Mount resources to be exported by the nodes serving the shares (which does not itself cause them to have the Mounts in their own catalogs).  That will allow you to ensure that each distinct share is exported exactly once.  You could and probably should then have every node that wants any of those mounts collect the appropriate exported ones instead of declaring them independently.  More generally, a node should export only resources that are in some way specific to itself.

Also, I would recommend that you declare appropriate tags on your Mount resources.  That could facilitate distinguishing Mounts exported for the present purpose from unrelated Mounts that might in the future be exported for some entirely different purpose.

For example, the NFS servers might declare this:

@@mount { "myapp-nfs-$environment":
    name => "${myapp::path}-$environment",
    fstype => "nfs",
    device => "${::fqdn}:/path/to/share",
    options => 'defaults',
    tag => 'myapp-nfs'
}

Your per-environment servers could then do this:

Mount <<| title == "myapp-nfs-$environment" |>>
file { '/local/alias/for/mount/point':
  ensure => 'link',
  target => "${myapp::path}-$environment"
}

And your management server would do this:

Mount <<| tag == 'myapp-nfs' |>>


John

Stephan Eckweiler

Jan 9, 2014, 5:28:55 PM
to puppet...@googlegroups.com

 machines any of those are mounted.  Nevertheless, the key is probably for the Mount resources to be exported by the nodes serving the shares (which does not itself cause them to have the Mounts in their own catalogs).  

That's in principle a great idea and would solve my problem; the only problem is that my NAS heads aren't running Puppet. They are some useless proprietary EMC boxes. But thanks for the solution, I'll keep it in mind since we might replace them at some point with Linux-based NAS heads.

Garrett Honeycutt

Jan 10, 2014, 12:19:38 AM
to puppet...@googlegroups.com

Hi,

I handle NFS mounts by declaring them as a hash in Hiera. Through the power of Hiera, you can specify mounts at any level of the hierarchy, including per host and/or per environment. The mount itself is done with the types[1] module and the NFS client portion is handled by the nfs[2] module.

Example Hiera entry using the YAML backend

  types::mounts:
    /srv/nfs/home:
      device: nfsserver:/export/home
      fstype: nfs
      options: rw,rsize=8192,wsize=8192

[1] - https://github.com/ghoneycutt/puppet-module-types
[2] - https://github.com/ghoneycutt/puppet-module-nfs

BR,
-g

Andrey Kozichev

Jan 13, 2014, 5:49:36 PM
to puppet...@googlegroups.com

Hello guys,

I am working on similar task.

I am trying to find a way to define all NFS shares somewhere at a high level of the hierarchy and then just add/remove them at the node level by name.

So I am using a class which accepts mountpoint names; for each name I want to do a Hiera lookup to expand all the options of the mountpoint and create resources based on that.
This scenario works well if I supply a single mountpoint to the class -> then I do hiera("mountpointname") and create_resources().
But if I want to have multiple mountpoints defined per host, I need to supply an array to the class and then iterate over it and fetch details for each mountpoint. I could probably do this using the new 3.2 syntax with "each", but I would like to avoid that.

Do you have any better way to implement this?

My goal is to define mountpoints in a single place in Hiera and then use them for different hosts.

Andrey


Ian Mortimer

Jan 14, 2014, 2:38:33 AM
to puppet...@googlegroups.com
On 14/01/14 03:49, Andrey Kozichev wrote:

> This scenario works well if on the class input I just supply single
> mountpoint -> then I do hiera("mountpointname") and create_resources()
> But if I want to have multiple Mountpoints defined per host I need to
> supply an Array to the class and then iterate it and fetch details for
> each mountpoint. I can probably do this by using new 3.2 syntax with
> "each", but I would like to avoid this.
>
> Do you have any better way to implement this?

If in hiera you define a hash of hashes (instead of an array of hashes)
you can pass that to create_resources which will create a resource for
each hash.
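
For example (the key and mount names here are made up; the shape is what matters). Since mount's namevar is the mount point, the hash keys can double as the resource titles:

# Hiera: a hash of hashes, keyed by resource title
nfs_mounts:
  '/srv/data':
    ensure: 'mounted'
    device: 'nfsserver:/export/data'
    fstype: 'nfs'
  '/srv/home':
    ensure: 'mounted'
    device: 'nfsserver:/export/home'
    fstype: 'nfs'

# Puppet: one call creates a mount resource per hash key
$mounts = hiera('nfs_mounts')
create_resources('mount', $mounts)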


--
Ian
i.mor...@uq.edu.au Ian Mortimer
Tel: +61 7 3346 8528 Science IT
University of Queensland

Andrey Kozichev

Jan 14, 2014, 12:37:56 PM
to puppet...@googlegroups.com
I can't seem to figure this out.
Here is my data:

common.yaml

nfsshares:
  "nfsshare-public":
    name: /var/public
    device: "hostname1:/vol/public"
    remounts: true
    options: 'rw,bg,hard'
  "nfsshare-private":
    name: /var/private
    ensure: mounted
    device: 'hostname2:/var/private'
    remounts: true
    options: 'rw,bg,hard'


now on the node level:

my-test-server.yaml

nfs::client::nfs_resource_name: [ 'nfsshare-public', 'nfsshare-private' ]

  

I'm struggling to make the class "nfs::client" create the resources 'nfsshare-public' and 'nfsshare-private' based on the above data in Hiera, without nesting "create_resources" inside "create_resources" or looping over nfs_resource_name and calling create_resources for each item.

There must be a simpler way.


Any hints appreciated.


Andrey



jcbollinger

Jan 14, 2014, 11:16:21 PM
to puppet...@googlegroups.com


On Tuesday, January 14, 2014 6:37:56 AM UTC-6, Andrew wrote:
can't seem figure this out.
Here is my data:

common.yaml

nfsshares:
  "nfsshare-public":
    name: /var/public
    device: "hostname1:/vol/public"
    remounts: true
    options: 'rw,bg,hard'
  "nfsshare-private":
    name: /var/private
    ensure: mounted
    device: 'hostname2:/var/private'
    remounts: true
    options: 'rw,bg,hard'


now on the node level:

my-test-server.yaml

nfs::client::nfs_resource_name: [ 'nfsshare-public',  "nfsshare-private" ]

  

struggling to make class "nfs::client"

to create resources  'nfsshare-public',  "nfsshare-private" based on the above data in hiera without making "create_resources" inside of "create_resources" or run loop  on nfs_resource_name and then create_resource for each item.

There must be simpler way.


Any hints appreciated.



I'm not altogether clear on what you do or don't want to do, but your data appear to be pointing in this direction:

class nfs::client (
    $nfs_resource_name
    ) {
  nfs::share { $nfs_resource_name:
    sharedata => hiera('nfsshares')
  }
}

define nfs::share (
    $sharedata
    ) {
  $my_data = { $title => $sharedata[$title] }
  create_resources('mount', $my_data)

  # if you don't want to use create_resources()
  # then you can put a regular mount declaration
  # there.  It only needs to handle one resource.
}

I'm assuming there that you must accommodate cases where the 'nfsshares' data contains more shares than you want to declare for the given node, else create_resources() could more directly be applied to the problem.

Alternatively, you could write a custom function that creates a hash containing just the wanted elements by selecting elements from the overall hash based on array of keys.  You could use create_resources() directly on that without an intervening defined type. Such a function would be sufficiently general to be reusable, and it could be expressed very compactly in Ruby.
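
A minimal sketch of such a function (the name hash_slice is made up; it would live in a module under lib/puppet/parser/functions):

# lib/puppet/parser/functions/hash_slice.rb
module Puppet::Parser::Functions
  newfunction(:hash_slice, :type => :rvalue, :doc =>
    "Return the subset of a hash whose keys appear in the given array.") do |args|
    hash, keys = args
    result = {}
    # keep only the entries whose key is in the wanted list
    keys.each { |k| result[k] = hash[k] if hash.key?(k) }
    result
  end
end

You could then call it as create_resources('mount', hash_slice(hiera('nfsshares'), $nfs_resource_name)).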


John


Garrett Honeycutt

Jan 15, 2014, 4:17:32 AM
to puppet...@googlegroups.com

Hi,


I have this implemented[1] such that you define a hash of your mounts somewhere in Hiera. If you want a merge lookup against Hiera, so that specifying the hash at multiple levels (i.e. fqdn, profile and environment levels) picks up everything that matches, you can do this by setting nfs::hiera_hash: true

[1] - https://github.com/ghoneycutt/puppet-module-nfs 
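
For example, assuming the module's mounts live under a key that is looked up with hiera_hash() (nfs::mounts is only a placeholder here; check the module's README for the real parameter name):

# common.yaml
nfs::hiera_hash: true
nfs::mounts:
  '/srv/shared':
    device: 'nfsserver:/export/shared'
    fstype: 'nfs'

# nodes/mgmt01.yaml (placeholder hostname)
nfs::mounts:
  '/srv/qa':
    device: 'nfsserver:/export/qa'
    fstype: 'nfs'

# with a merge (hiera_hash) lookup, mgmt01 gets both '/srv/shared' and '/srv/qa';
# a plain hiera() lookup would only return the most specific level's hash.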

BR,
-g

Andrey Kozichev

Jan 15, 2014, 7:27:08 AM
to puppet...@googlegroups.com

Thank you John, it looks like what I need.
I was doing something similar but was getting an error on create_resources; I think I see now what was wrong. I will try this.

Andrey


Andrey Kozichev

Jan 15, 2014, 7:29:05 AM
to puppet...@googlegroups.com

Thanks for the link. I will have a look.

On 15 Jan 2014 04:16, "Garrett Honeycutt" <g...@garretthoneycutt.com> wrote:


I have this implemented[1] such that you define a hash of your mounts somewhere in Hiera. If you want to do a merge lookup against Hiera so that if you specify the hash at multiple levels it gets all that it matches (ie: fqdn, profile and environment levels), you can do this by setting nfs::hiera_hash: true

[1] - https://github.com/ghoneycutt/puppet-module-nfs


Stephan

Jan 16, 2014, 4:08:47 PM
to puppet...@googlegroups.com, christop...@pobox.com
Hi Christopher,

I've spent some time getting my head around Hiera now, and would appreciate some help with how to implement your suggestion:


- hiera hash with each environment and its associated mount point
- hiera lookup on a non-management node grabs its environment
- environment is used to determine which mount point via hiera_hash
- management node uses create_resources and hiera_hash to make its mount points

I've now got this hierarchy:

:hierarchy:
  - "nodes/%{::hostname}"
  - "application_env/%{::application_env}"
  - common

So first I'm assigning a node via hiera to an environment:

hiera/nodes/box1.yaml:
---
application::env: "live"


Then I'm setting an external fact named application_env, which I pick up in Hiera later on (not sure this construct is good practice; suggestions welcome).

Then I'm configuring unrelated environment-specific settings:

hiera/application_env/live.yaml:
---
application::setting1: true
application::setting2: false
application::setting3:
   - 'foo'
   - 'bar'

So the thing is that as soon as I know the environment name, I know everything I need to create an NFS mount resource inside the Puppet module:

mount { appnfs:
   device => "${mountip}:/application_${env}",
   fstype => "nfs",
   name   => "$mountdir",
}

The same would be true on my Management server, with the difference that the name would be name => "${mountdir}_${env}".

So what I don't understand is the hiera hash per environment bit. I guess I could create a single hash with all the environments instead of the one yaml file per environment above, and then put a hash in each of those values with the unrelated application settings. But wouldn't I need, for create_resources, a second unrelated hash of all environments with the settings for the mount resource, like device, fstype, name and so on? At that point I would again have duplicate configuration: two separate hashes, one for everything else and one for the mount resources. Also, I actually have 3 NFS mounts per environment; I just mentioned one for simplification. Would that be a 3rd and 4th hash with mostly duplicate data? What if I need something else on the management server per environment in the future ... a 5th hash for that?

My hypothetical virtual exported resource somehow sounds like a more intuitive approach: an exported resource which is only virtual, and can therefore be realized more than once, namely by every application server which exports it. This would also help in case I have to configure an exception, like needing this NFS mount on all environments except the one called data_migration. A hash of all environments used with create_resources wouldn't pick that exception up, right? But maybe I'm not fully understanding Hiera's possibilities here?

Thanks
Stephan

Andrey Kozichev

Jan 16, 2014, 5:17:03 PM
to puppet...@googlegroups.com
Thank you guys, I have figured it out.

I am defining all shares in one file and introducing one extra argument which says whether the share is enabled or disabled (all disabled by default).

In my defined resource I am doing one more Hiera lookup for this argument, hiera("nfs_mount::${name}"), and if it is true I use create_resources.

This way I have one file in Hiera with all definitions, plus nfs_mount::${name}: true defined on a per-host basis for the servers where I need the share mounted.


The thing is, that was my initial plan, but I kept getting "resource already defined" when multiple shares existed with the same mountpoint name.

It looks like that was happening due to the incorrect layout I had created in Hiera (I had assigned the "name" of the resource to be the same as the mount point). Now I have changed this and all works as expected.

it used to be:
nfsshares:
  share1:
    name: /mnt
  share2:
    name: /mnt

Changed to:

nfsshares:
  share1:
    dir: /mnt
  share2:
    dir: /mnt
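
A simplified sketch of what the define does (not my exact code, which goes through create_resources, but the idea is the same; the define name nfs_mount is the one implied by the lookup key above, and the parameters follow the Hiera layout):

define nfs_mount {
  # per-host switch: the share is only mounted where nfs_mount::<name>: true is set in Hiera
  $enabled = hiera("nfs_mount::${name}", false)
  if $enabled {
    $all_shares = hiera('nfsshares')
    $share      = $all_shares[$name]
    $dir        = $share['dir']
    mount { $dir:
      ensure   => 'mounted',
      fstype   => 'nfs',
      device   => $share['device'],
      options  => $share['options'],
      remounts => $share['remounts'],
    }
  }
}

# per host it is then just:
#   nfs_mount { ['nfsshare-public', 'nfsshare-private']: }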

Regards,
Andrey


  







