Actually, I find that per-API modules are very *in*convenient, for a few reasons. First, they require the author of the module to re-solve the problem of how to talk to the REST interface every single time a new module is written. Second, if nobody has authored a module already, I can either give up or engineer my own - which leaves me with the standard Puppet resources, or writing my own Ruby type. Making that idempotent complicates it even further. So here is what it looks like using the existing Puppet resources for Foreman:
    # The first backtick subshell POSTs the new profile's name and scrapes its id
    # out of the JSON response with sed/awk; the second scrapes the matching
    # compute resource's id the same way.
    exec { "/bin/curl -s -k --user '${user}:${password}' --request POST -H 'Content-Type: application/json' --data '{\"compute_attribute\":{
      \"vm_attrs\":{
        \"cpus\":\"2\",
        \"corespersocket\":\"1\",
        \"memory_mb\":\"1024\",
        \"cluster\":\"Dev-Test\",
        \"resource_pool\":\"DEV-TEST\",
        \"path\":\"/Datacenters/Lake Forest DEV/vm/Katello Deployments\",
        \"guest_id\":\"rhel7_64Guest\",
        \"scsi_controller_type\":\"ParaVirtualSCSIController\",
        \"hardware_version\":\"vmx-11\",
        \"memoryHotAddEnabled\":\"1\",
        \"cpuHotAddEnabled\":\"1\",
        \"interfaces_attributes\":{
          \"0\":{
            \"type\":\"VirtualVmxnet3\",
            \"network\":\"Development\"
          },
          \"1\":{
            \"type\":\"VirtualVmxnet3\",
            \"network\":\"Machine NFS\"
          }
        },
        \"volumes_attributes\":{
          \"0\":{
            \"datastore\":\"LF_NOC_DEV_0104_08\",
            \"name\":\"Hard disk\",
            \"size_gb\":\"20\",
            \"thin\":\"true\",
            \"eager_zero\":\"false\"
          }
        }
      }
    }}' https://${::fqdn}/api/compute_profiles/`/bin/curl -s -k --user '${user}:${password}' --request POST -H 'Content-Type: application/json' --data '{\"name\":\"1 - Micro - CPU-2, RAM-1GB\"}' https://${::fqdn}/api/compute_profiles | /usr/bin/sed 's/\"id\":/\\n/g' | /usr/bin/tail -n 1 | /usr/bin/awk -F',' '{print \$1}' | /usr/bin/tr -d \"\\n\"`/compute_resources/`/usr/bin/curl -k -s --user '${user}:${password}' https://${::fqdn}/api/compute_resources | /usr/bin/grep 'results' | /usr/bin/sed 's/}/\\n/g' | /usr/bin/grep '${city} ${env} VMWare' | /usr/bin/sed 's/\"id\":/\\n/g' | /usr/bin/awk -F',' '{print \$1}' | /usr/bin/egrep '[0-9].*' | /usr/bin/awk -F',' '{print \$1}' | /usr/bin/tail -n 1 | /usr/bin/tr -d \"\\n\"`/compute_attributes":
      logoutput => true,
      unless    => "/bin/curl -s -k --user '${user}:${password}' https://${::fqdn}/api/compute_profiles | grep \"1 - Micro - CPU-2, RAM-1GB\"",
      path      => ['/bin', '/sbin', '/usr/bin', '/usr/sbin'],
    }
    # Faux idempotency: delete any profile whose name is not in the known list.
    # (Alternation like 'a|b' needs egrep, not plain grep.)
    exec { "/usr/bin/curl -k -s --user '${service_account}:${service_account_pwd}' --request DELETE -H 'Content-Type: application/json' https://${::fqdn}/api/compute_profiles/`/usr/bin/curl -k -s --user '${service_account}:${service_account_pwd}' https://${::fqdn}/api/compute_profiles | /usr/bin/grep 'results' | /usr/bin/sed 's/}/\\n/g' | /usr/bin/egrep -v '1 - Micro - CPU-2, RAM-1GB|2 - Small - CPU-2, RAM-2GB|3 - Medium - CPU-2, RAM-4GB|4 - Large - CPU-2, RAM-8GB|5 - Extra Large - CPU-4, RAM-16GB|6 - Double Extra Large - CPU-8, RAM-32GB|7 - Quadruple Extra Large - CPU-16, RAM-64GB|8 - Admin Small - CPU-2, RAM-2GB' | /usr/bin/sed 's/\"id\":/\\n/g' | /usr/bin/awk -F',' '{print \$1}' | /usr/bin/egrep '^[0-9].*' | /usr/bin/tail -n 1`":
      logoutput => true,
      onlyif    => "/usr/bin/curl -k -s --user '${service_account}:${service_account_pwd}' https://${::fqdn}/api/compute_profiles | /usr/bin/grep 'results' | /usr/bin/sed 's/}/\\n/g' | /usr/bin/egrep -v '1 - Micro - CPU-2, RAM-1GB|Example2' | /usr/bin/sed 's/\"id\":/\\n/g' | /usr/bin/awk -F',' '{print \$1}' | /usr/bin/egrep '^[0-9].*'",
      path      => ['/bin', '/sbin', '/usr/bin', '/usr/sbin'],
    }
And yes, I could probably use classes here: pass a list to the first exec to turn it into something like a function call and get some iteration/looping (I have nine copies of that first exec), then pass the names of the compute resources in a similar list to a second one - but that is just really ugly. How would I even concatenate the variables into a search string?
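As a rough sketch of that iteration - everything here is hypothetical: the `foreman::compute_profile` name, the `$profiles` hash, and the use of `to_json()` from puppetlabs-stdlib are my inventions - a defined type plus `each` would collapse the nine execs, though it still only checks for the profile name, not its attributes:

    # Hypothetical wrapper; the type name and parameters are illustrative only.
    define foreman::compute_profile (
      String $user,
      String $password,
      Hash   $vm_attrs,
    ) {
      # Build the JSON body instead of hand-escaping it
      # (to_json() is provided by puppetlabs-stdlib).
      $payload = to_json({ 'compute_attribute' => { 'vm_attrs' => $vm_attrs } })

      exec { "create-compute-profile-${title}":
        command   => "/bin/curl -s -k --user '${user}:${password}' --request POST -H 'Content-Type: application/json' --data '${payload}' https://${::fqdn}/api/compute_profiles",
        unless    => "/bin/curl -s -k --user '${user}:${password}' https://${::fqdn}/api/compute_profiles | grep '${title}'",
        logoutput => true,
        path      => ['/bin', '/usr/bin'],
      }
    }

    # Nine near-identical exec blocks collapse into one hash and one loop:
    $profiles = {
      '1 - Micro - CPU-2, RAM-1GB' => { 'cpus' => '2', 'memory_mb' => '1024' },
      '2 - Small - CPU-2, RAM-2GB' => { 'cpus' => '2', 'memory_mb' => '2048' },
    }

    $profiles.each |String $name, Hash $attrs| {
      foreman::compute_profile { $name:
        user     => $user,
        password => $password,
        vm_attrs => $attrs,
      }
    }

Even so, this only tidies the repetition; it does nothing about the scraped-id backticks or the name-only existence check.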
The second exec is intended to achieve some level of idempotency, but it is faux idempotency: what if someone changes the RAM on the object but not the name? So you are effectively forced into Ruby, which is much less approachable for those without a Ruby/Rails background - which likely describes most infrastructure developers and those coming from ops rather than dev. But what if we could add a resource type to Puppet and have something like:
    curl { "https://${::fqdn}/api/compute_profiles/$['1 - Micro - CPU-2, RAM-1GB']/compute_attributes":
      swagger           => "the_foreman.yaml",
      interval          => 60, # seconds; the spacing between URL calls
      compute_attribute => {
        "vm_attrs" => {
          "cpus"                  => "2",
          "corespersocket"        => "1",
          "memory_mb"             => "1024",
          "cluster"               => "Dev-Test",
          "resource_pool"         => "DEV-TEST",
          "path"                  => "/Datacenters/Lake Forest DEV/vm/Katello Deployments",
          "guest_id"              => "rhel7_64Guest",
          "scsi_controller_type"  => "ParaVirtualSCSIController",
          "hardware_version"      => "vmx-11",
          "memoryHotAddEnabled"   => "1",
          "cpuHotAddEnabled"      => "1",
          "interfaces_attributes" => {
            "0" => {
              "type"    => "VirtualVmxnet3",
              "network" => "Development"
            },
            "1" => {
              "type"    => "VirtualVmxnet3",
              "network" => "Machine NFS"
            }
          },
          "volumes_attributes" => {
            "0" => {
              "datastore"  => "LF_NOC_DEV_0104_08",
              "name"       => "Hard disk",
              "size_gb"    => "20",
              "thin"       => "true",
              "eager_zero" => "false"
            }
          }
        }
      },
    }
Puppet would then validate each individual parameter to make sure it has not been changed. This is much more "puppet-y" and elegant, where the former is ugly and tedious. Instead of depending on someone to write a module, I would just need a swagger file and could manage the configuration directly, with no module at all. And if you did want a module to make the manifest less "ugly", that is still an option - with a new resource type like this, writing one becomes a trivial task anyone can do instead of requiring a Ruby expert.
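For instance, sticking with the hypothetical curl type above (and an invented foreman::profile name), the entire module layer could shrink to a thin wrapper:

    # Everything here builds on the proposed curl type; foreman::profile
    # is a made-up name for illustration.
    define foreman::profile (Hash $vm_attrs) {
      curl { "https://${::fqdn}/api/compute_profiles/$['${title}']/compute_attributes":
        swagger           => "the_foreman.yaml",
        compute_attribute => { "vm_attrs" => $vm_attrs },
      }
    }

No Ruby, no scraping - just a hash of attributes per profile.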
garethr-kubernetes sees the same problem, which is presumably why that module was written, but it 1) lacks idempotency and 2) has the potential to be hard on my server by firing a huge batch of API calls every 90 minutes - every Puppet run could render my server unusable. The Puppet service could instead walk the configuration one URL at a time, in a gentler manner.
Ultimately, this is asking Puppet to support a new platform for configuration management alongside Windows and Linux: REST. Not all devices can run an agent, and not all devices run Linux. A storage SAN, networking equipment, load balancers - all may be incapable of hosting a Windows or Linux agent, but over a REST API Puppet could manage these configurations too, leading to a much better SDN-type environment, agentlessly.
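Purely as an illustration - assuming the device publishes a swagger/OpenAPI description, and with every name below invented - declaring a load balancer pool would look no different from the Foreman example:

    # Hypothetical: an agentless device managed through the proposed curl type.
    # Host, endpoint, and parameter names are all made up for this sketch.
    curl { "https://${lb_host}/api/v1/pools/web-pool":
      swagger => "loadbalancer_api.yaml",
      members => ["10.0.0.11:8080", "10.0.0.12:8080"],
      method  => "round_robin",
    }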