I'm also a big fan of composing tool chains from piece parts - scripts
linked by pipes, mashups of functions accessed via RESTful APIs, etc.
Rundeck looks pretty interesting - given that it can manage shell
scripts and their execution via ssh. I'm sort of wondering what other
kinds of tools people are using for adding a job and configuration
control overlay to shell scripts. And what about orchestration tools
that take a "mashup maker" approach? Pointers, thoughts, ....
Thanks,
Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra
Check out https://github.com/dtolabs/rerun to help organize your shell scripts. Rundeck definitely supports the sysadmin who prefers (or is forced) to use shell scripts without having to rewrite anything.
-Noah
Noah Campbell
415-513-3545
noahca...@gmail.com
Now THAT's the kind of thing I'm looking for. Thanks, Noah! (Note: I'm
definitely in the "prefers to" rather than "forced to" category - a big
fan of the unix approach of re-using and composing small tools, rather
than biting off more monolithic approaches.)
Miles
Alex and I used to quip that bash was a mature platform for writing system automation and it's a shame no-one ever wrote a framework to support just this. Alex made that a reality with rerun.
-Noah
Noah Campbell
415-513-3545
noahca...@gmail.com
Noah Campbell wrote:
> Alex Honor, who wrote Rerun, is also the mastermind behind RunDeck,
> FWIW.
>
> Alex and I used to quip that bash was a mature platform for writing
> system automation and it's a shame no-one ever wrote a framework to
> support just this. Alex made that a reality with rerun.
>
> -Noah
Noah (or Alex)
I'm curious now. How many customers use RunDeck without some kind of CM
system? Most of the RunDeck users I've spoken to were either Puppet or
Chef users too, or were planning to integrate one or the other or a
similar CM system.
Is there a pool of non-CM using RunDeck users out there?
Regards
James Turnbull
--
Author of:
* Pro Puppet (http://tinyurl.com/ppuppet)
* Pro Linux System Administration (http://tinyurl.com/linuxadmin)
* Pro Nagios 2.0 (http://tinyurl.com/pronagios)
* Hardening Linux (http://tinyurl.com/hardeninglinux)
Ahh... that makes sense, since both pieces of code have "dto solutions"
stamped all over them. :-)
> Alex and I used to quip that bash was a mature platform for writing system automation and it's a shame no-one ever wrote a framework to support just this. Alex made that a reality with rerun.
Glad I'm not the only one who thinks this. I might say it's just that
I'm old and set in my ways, except the machines & VMs I'm about to
provision are for Erlang and CouchDB development/deployment - building
on a high-availability configuration (drbd, pacemaker, etc.) that took a
while to get right and is documented in scripts and various config
files. All I want to do is capture and replay the manual provisioning
& configuration steps, not rewrite everything in Chef, or Puppet, or
whatever. :-)
I am curious, though - did you guys find anything that came close, or
that you used for inspiration (other than, maybe, Tcl/Tk)?
Cheers,
CMDBs have a notorious problem of getting out of sync with reality. Traditionally, the value of a CMDB decayed exponentially. Tools like Puppet, Chef, and ControlTier flip this around and provide models that drive the configuration parameters to their "scripts."
Rundeck took a different approach that complements the above quite nicely. We explicitly excluded the model and instead rely on integrations to provide that information. Sometimes DNS provides all the details you need to know about an environment. Other times (more often than not) they exist in a spreadsheet somewhere. At times they're in Puppet or Chef or Tivoli or vSphere. Sometimes they're in Rackspace, RightScale, or AWS databases.
Rundeck will source this information and then use it to generate a list of nodes it needs to execute against. I've written (Miles you'll appreciate this) bash scripts that consume vendor APIs and then transform the output (via xmlstarlet) into JSON. Twenty lines of bash and thousands of nodes become immediately available to run commands against.
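The shape of it is something like this - a rough sketch only, with the endpoint URL and XML attribute names invented for illustration:

    #!/usr/bin/env bash
    # Sketch: pull a vendor inventory feed (XML) and emit one JSON entry
    # per host. The URL and element/attribute names are placeholders.
    set -euo pipefail

    curl -s "https://inventory.example.com/api/hosts.xml" |
      xmlstarlet sel -t -m '//host' -v '@name' -o ' ' -v '@ip' -n |
      while read -r name ip; do
        printf '{"nodename":"%s","hostname":"%s"}\n' "$name" "$ip"
      done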
Sometimes one data source enriches another, and this is where command chaining comes into play. Without getting fancy, I wrote a simple CGI service that exposes this to Rundeck (https://github.com/dtolabs/taps). Others have written EC2 integration (https://github.com/dtolabs/java-ec2-nodes), Chef (https://github.com/opscode/chef-rundeck), Puppet (https://github.com/jamtur01/puppet-rundeck), and Jenkins (https://github.com/vbehar/jenkins-rundeck-plugin).
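In its simplest form that kind of CGI glue can be a few lines of shell. This is just a toy sketch (the file paths and their contents are invented); taps is the fleshed-out version:

    #!/usr/bin/env bash
    # Toy CGI endpoint: merge two node lists (say, one pulled from DNS and
    # one exported from a spreadsheet) and serve the union as plain text.
    echo "Content-Type: text/plain"
    echo ""
    sort -u /var/lib/nodes/dns.list /var/lib/nodes/spreadsheet.list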
Find what's working in your environment and integrate that. Don't underestimate a Google Docs spreadsheet in terms of maintainability, cost, and conceptual understanding. You can always pave that cow path later.
-Noah
Noah Campbell
415-513-3545
noahca...@gmail.com
>
> Rundeck will source this information and then use it to generate a list of nodes it needs to execute against. I've written (Miles you'll appreciate this) bash scripts that consume vendor APIs and then transform the output (via xmlstarlet) into JSON. Twenty lines of bash and thousands of nodes become immediately available to run commands against.
I'm not sure "appreciate" is the right word. Commiserate, maybe?
> Find what's working in your environment and integrate that. Don't underestimate a Google Docs spreadsheet in terms of maintainability, cost, and conceptual understanding. You can always pave that cow path later.
>
Absolutely. Also a big whiteboard :-)
Cheers,
jtimberman wrote:
> However, I think that in the modern age of configuration management
> systems, using legacy shell script code is an anti-pattern. Plus, at
> least with Chef you can just drop your shell scripts in as a resource :).
And ditto with Puppet.
Regards
James Turnbull
--
Author of:
* Pro Puppet (http://tinyurl.com/ppuppet)
* Pro Linux System Administration (http://tinyurl.com/linuxadmin)
* Pro Nagios 2.0 (http://tinyurl.com/pronagios)
* Hardening Linux (http://tinyurl.com/hardeninglinux)
That's very interesting. Thanks!
>
> However, I think that in the modern age of configuration management
> systems, using legacy shell script code is an anti-pattern. Plus, at
> least with Chef you can just drop your shell scripts in as a resource :).
Well.... modern configuration management systems are pretty new. I
expect a LOT more systems are managed with scripts than with CFEngine,
Bcfg2, Puppet, Chef, etc. combined. I kind of like taking incremental
steps, and I expect Chef or Puppet will be on the path eventually. Just
not the next step.
-s
Seriously... this set of arguments is not new.
In the network management arena (where I come from), OpenView, or
Tivoli, or (name your network management system) was supposed to solve
all problems. Sure, Nagios seems to be on top right now, but partially
because it integrates nicely with lots of other tools. And then there's
Zabbix, and ZenOSS, and a dozen more. And not a one of them solves all
of one's operational needs - you always need to filter/monitor log files,
and run trend analysis tools, etc., etc.
If you're developing software, you need a VCS - but it can be CVS, Git,
Bazaar, Subversion, etc., etc. One almost always customizes
configuration and processes. And then there are choices regarding
language, run-time environment, build tools, etc., etc.
Ultimately, it always comes down to a choice between:
- picking a framework and integrating into it, or,
- starting with smaller "building blocks" and creating assemblages of
those over time
Both approaches have their strengths and weaknesses, but neither is a
panacea.
>
> It's great that you carefully documented your deployment steps. What
> happens when one of your nodes reacts in an unexpected way? How do
> you test that the scripts have executed correctly across all systems
> and that their end state is valid? What kind of unit and integration
> tests do you execute to ensure that your shell scripts don't blow your
> systems - or each other - up after minor changes?
>
> The reason IT drifts toward a framework is that it forces you to think
> about these problems, rather than re-inventing everything in an ad-hoc
> fashion.
That's assuming that a framework handles a reasonable range of cases.
And it's still the case that most problems tend to be a level of detail
below what can be handled automatically. In my experience, people tend
to abandon frameworks because they're too rigid to handle the full range
of cases. Your mileage may vary.
At least in my experience, a lot of problems have more to do with
finding dependencies and incompatibilities the first time you install a
piece of code, and with things that break when upgrading. After that,
it's a matter of repeatability.
Avoiding software that doesn't include a 'make test' regression suite,
and using a good package manager to catch dependencies (at the risk of
starting another war, apt beats everything else, hands down), solves
LOTS of problems. Then it's a matter of making sure to have a good
checklist when you want to build the next machine - be it a manual
checklist or an automated one.
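In other words, the checklist entry for a source install boils down to a few repeatable lines - the package and tarball names below are just placeholders:

    # Dependencies come from the package manager; the build refuses to
    # proceed past a failing regression suite.
    apt-get install -y build-essential
    tar xzf sometool-1.2.tar.gz
    cd sometool-1.2
    ./configure && make && make test && make install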
>
> If a vendor or developer tossed you an app with a half-assed
> implementation of encryption, you'd point out the myriad libraries and
> tools available for robust and secure encryption that they could have
> used. Why do we refuse to accept this kind of re-implementation in
> commercial software but continue to inflict it upon ourselves as
> sysadmins?
>
Umm... bash, apt, make, debconf, pre-seeding files - why reinvent stuff
that works?
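For instance, replaying an interactive package configuration is one pre-seed away. The postfix question names below are the commonly documented ones - check them against your release:

    # Answer postfix's install-time questions ahead of time, then install
    # with no prompts at all.
    echo "postfix postfix/main_mailer_type select Internet Site" | debconf-set-selections
    echo "postfix postfix/mailname string mail.example.com" | debconf-set-selections
    DEBIAN_FRONTEND=noninteractive apt-get install -y postfix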
>
> Eric Shamow
> Professional Services
> http://puppetlabs.com/
> (c)631.871.6441
>
Well... I understand that you have to take that position, and that
various frameworks are STARTING to mature, but I'm not quite ready to
take that kind of leap. I'm not quite ready to pick a framework that
constrains lots of choices at other points in the toolchain.
> Actually modern config management tools are 20 years old. The rapid
> adoption is new ;)
I believe that IBM largely solved this problem decades ago for their mainframes, just like they largely solved it for multiple different OSes running in Virtual Memory environments.
Everyone else since then has been trying to re-discover and re-invent many of the same wheels, many times over.
--
Brad Knowles <br...@shub-internet.org>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>
This is the crux of it.
Bash, apt, make, debconf, and pre-seeding do NOT work.
Modern config management represents a completely different technique
for infrastructure management. The tools merely enable the technique.
It's up to you to create a model.
Web frameworks and the network tools you mentioned impose a model
(Rails, Django, Tivoli, OpenView, etc.). If you need to step outside
the model, you need to invent techniques to hack around it.
-s
All I want to do is capture and replay the manual provisioning & configuration steps, not rewrite everything in Chef, or Puppet, or whatever. :-)
Thanks for the suggestion, and that certainly is useful for some aspects
of provisioning.
But... I was under the impression that all checkinstall does is wrap
package management glue around building a package from a source tarball.
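That is, it just swaps the 'make install' step for a tracked package - the usual flow, with a placeholder name and version:

    # Build as usual, then let checkinstall turn "make install" into a
    # .deb the package manager can later query or cleanly remove.
    ./configure && make
    sudo checkinstall --pkgname=mytool --pkgversion=1.0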
I'm thinking about a much broader range of activities:
- installing and configuring hypervisors and Dom0s
- configuring network interfaces and firewall rules
- building and configuring file systems and logical volumes
- configuring disk mirroring and failover rules
- building VMs
- wiring up families of things (e.g., connecting postfix, amavisd,
spamassassin, and a list manager)
- ...
> We are trying to keep Rundeck as a coordination hub that is "powered
> by" underlying frameworks.
Out of more than idle curiosity, have you seen anybody couple Rundeck
with Erlang/OTP build->hot-deploy->runtime infrastructure?
My immediate questions have been motivated by cleaning up a small
in-house cluster. Down the road we're doing some development based on
Erlang and a couple of noSQL databases that run in the Erlang
environment - for those, we'll be deploying really stripped down nodes
(o/s + erlang run-time), and deploying all our application code into the
Erlang environment. That's the environment we hope we'll have to scale,
and our intent is to leverage the Erlang/OTP infrastructure.
Thanks,
- provide for editing/managing/versioning scripts (script = anything
that can be invoked at the command line)
- a library of control functions for use within scripts
- invoking scripts, combinations of scripts, pipelines of scripts (in
the Unix sense of pipes) - locally, remotely, across multiple machines
- remote script execution via ssh, rather than some kind of agent
- providing a simple database for keeping track of variables used by
scripts (e.g., IP addresses, DNS records for use by a provisioning
script) - that can be accessed from scripts (see the sketch after this list)
- accessing the above via cli, RESTful API, GUI
- cross-platform
- (nice-to-have) minimal environmental requirements (i.e., a step above
the unix shell, say the gnu buildtool suite rather than the JVM + a mass
of libraries, or the ruby ecosystem -- a self-configuring basic
environment would be nice, like perl+cpan)
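For the "simple database" item, even a flat key=value file plus a helper function would be a start (the file name and format here are invented):

    # lookup: fetch one value from a shared key=value file, so every
    # provisioning script reads the same source of truth.
    lookup() {
      awk -F= -v k="$1" '$1 == k { print $2; exit }' /etc/sitevars.conf
    }

    mail_ip=$(lookup "mailhost.ip")   # e.g. the file contains: mailhost.ip=192.0.2.10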
A bunch of people pointed at Rundeck and rerun, both of which look incredibly helpful, and a
couple of folks on the devops list pointed me at this: https://bdsm.beginrescueend.com/modules/shell
Essentially it's a management framework for running shell scripts, written by Wayne Seguin at Engine Yard,
also the author of RVM (Ruby Version Manager). Information on the web site is a bit disorganized,
but there's a pretty good manual in PDF, an introductory slideshow, and a comprehensive git repo.
Wayne just spent the morning walking me through it - an incredibly powerful tool, which I'm now going to
go off and use as I rebuild a couple of servers.
I expect that a combination of these tools, along with git and a simple database for tracking things like
IP numbers and DNS records, is going to do the job nicely.
Miles Fidelman