Extracting gzipped sql archive, importing and finally deleting

Richie

Jun 6, 2014, 10:30:56 AM
to puppet...@googlegroups.com

I'm a little stuck with a Puppet manifest due to the declarative nature of things, and it's taking some time for the concept to fully sink in, so could someone guide me in the right direction here (if this is the correct route, of course).

At the moment the setup uses Vagrant, Puppet, and modules from Puppet Forge. The site manifest declares your typical LAMP stack, which in its most basic form works fine using the Puppet modules and configuring accordingly.

Currently I have a 'files/mysql/backup.sql.gz' structure inside the Vagrantfile root directory, and unfortunately the archive won't extract to /tmp/ using the /vagrant/files/mysql/backup.sql.gz path, as it's not recognised even though SSHing in reveals it. At a guess, shares aren't active while provisioning?

Following the successful extraction of the SQL backup I'd like to import it and then remove all traces of it. So I guess the big question here is: is a routine task like this within scope for Puppet, and if not, what approach should one take, given that you can't declare the same resource twice (e.g. file { '/tmp/backup.sql': ... }), one to ensure it exists and the other to ensure it's deleted?

Thanks, any help appreciated.

jcbollinger

Jun 9, 2014, 10:04:11 AM
to puppet...@googlegroups.com


On Friday, June 6, 2014 9:30:56 AM UTC-5, Richie wrote:

I'm a little stuck with a Puppet manifest due to the declarative nature of things, and it's taking some time for the concept to fully sink in, so could someone guide me in the right direction here (if this is the correct route, of course).


I like to describe Puppet DSL's declarative nature by focusing on how you use it, rather than on how you can classify it.  Specifically, writing Puppet manifests is an exercise in modeling the desired target state of your machines, as opposed to some other CM systems' focus on how to modify the target system.  The Puppet approach has great advantages, especially for managing heterogeneous infrastructure, but you do have to make a mental adjustment to make the best use of it.

 

At the moment the setup uses Vagrant, Puppet, and modules from Puppet Forge. The site manifest declares your typical LAMP stack, which in its most basic form works fine using the Puppet modules and configuring accordingly.

Currently I have a 'files/mysql/backup.sql.gz' structure inside the Vagrantfile root directory, and unfortunately the archive won't extract to /tmp/ using the /vagrant/files/mysql/backup.sql.gz path, as it's not recognised even though SSHing in reveals it. At a guess, shares aren't active while provisioning?


I'm not a big Vagrant guy, so I don't think I fully understand the question.  I would expect, however, that which filesystems are mounted during Vagrant-directed provisioning would be under Vagrant's control.

 

Following the successful extraction of the SQL backup I'd like to import it and then remove all traces of it. So I guess the big question here is: is a routine task like this within scope for Puppet, and if not, what approach should one take, given that you can't declare the same resource twice (e.g. file { '/tmp/backup.sql': ... }), one to ensure it exists and the other to ensure it's deleted?


Remember that with Puppet you are modeling the target state.  If /tmp/backup.sql is not part of the desired final state, then it should not appear (as a resource in its own right) in the model you construct.  That doesn't mean Puppet cannot handle this situation -- there are a variety of approaches to problems like this.  Which solution would be best depends on your specific circumstances, however.  A rather important consideration there is whether the target machines will continue to be managed by Puppet after provisioning, as opposed to Puppet being used exclusively as part of the one-time provisioning process for each machine.

Either way, though, I have a hard time believing that vagrant cannot be persuaded to drop the uncompressed file into the location of your choice on the target machine during provisioning.  If you can make that happen then taking it the rest of the way via Puppet will be easier than making Puppet do the whole thing.
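As an illustration of that division of labour: if the compressed dump is reachable from the guest, a single guarded exec can stream it straight into MySQL, so /tmp/backup.sql never needs to exist as a resource at all. This is a minimal sketch, not code from this thread; the database name, the guard table, and the service name are all placeholder assumptions:

```puppet
# Hypothetical one-shot import: decompress and load in one command.
# The 'unless' guard keeps it idempotent by checking for a table the
# dump is known to create ('some_table' is a placeholder).
exec { 'import-mysql-backup':
  path    => ['/bin', '/usr/bin'],
  command => 'gunzip -c /vagrant/files/mysql/backup.sql.gz | mysql mydb',
  unless  => 'mysql mydb -e "DESCRIBE some_table" > /dev/null 2>&1',
  require => Service['mysqld'],
}
```

Because the import streams directly from the synced folder, no uncompressed copy ever lands on the guest, so there is nothing to delete afterwards.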


John

Nikola Petrov

Jun 11, 2014, 12:55:52 PM
to puppet...@googlegroups.com
Here is something that works for us:


define omysql::do($db, $source=undef, $content=undef) {
  include omysql
  include mysql::params

  $script = "${omysql::sql_snippets}/${name}.sql"

  file { $script:
    mode    => '0600',
    source  => $source,
    content => $content,
  }

  exec { "mysql-import-${name}":
    path    => ['/bin', '/sbin', '/usr/bin'],
    command => "mysql --defaults-file=/root/.my.cnf -A ${db} < ${script} && touch ${script}.semaphore",
    creates => "${script}.semaphore",
    require => [File[$script], Package[$mysql::params::server_package_name]],
    timeout => '0',
  }
}

Basically this creates a semaphore file to indicate whether the file was
already imported. We used it on a simple project for migrations and
initial data imports. Note that it requires the puppetlabs mysql module and
adds our sane defaults in omysql. Basic usage would be:

omysql::do { 'my_cool_migration':
  source => 'puppet:///modules/myproject/data.sql',
}

or

omysql::do { 'my_cool_migration':
  content => 'alter table ...',
}

This won't delete the data SQL file, nor the 'alter table' content. Not sure how
you would do that...

Note that for more complex requirements I would write a provider. Tell
us if you do, I would be glad to use it (the define is just a little
hacky for this).
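One way to cover the deletion that the define above leaves open is to copy, import, and remove the script within the same guarded command, so the SQL never persists between runs. This is an untested variant, not the original code, and it is simplified to take a local file path rather than a puppet:/// URL:

```puppet
# Hypothetical ephemeral variant: the script exists only for the
# duration of the command; the semaphore alone records that the
# import already happened, so re-runs are skipped via 'creates'.
define omysql::do_ephemeral($db, $source) {
  $script = "/tmp/${name}.sql"
  $flag   = "/tmp/${name}.imported"

  exec { "mysql-import-${name}":
    path    => ['/bin', '/sbin', '/usr/bin'],
    command => "cp ${source} ${script} && mysql --defaults-file=/root/.my.cnf -A ${db} < ${script} && touch ${flag} && rm -f ${script}",
    creates => $flag,
    timeout => '0',
  }
}
```

The trade-off is that Puppet no longer manages the script's content as a file resource, so changes to the source won't trigger a re-import unless the semaphore is removed.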

