tools for managing and composing shell scripts

Miles Fidelman

Mar 18, 2012, 10:03:25 AM
to devops-t...@googlegroups.com
Over the years, I've built up a lot of shell scripts for provisioning
and configuration - which makes converting to something like chef or
puppet a somewhat daunting task (or even something like FAI).

I'm also a big fan of composing tool chains from piece parts - scripts
linked by pipes, mashups of functions accessed via RESTful APIs, etc.

Rundeck looks pretty interesting - given that it can manage shell
scripts and their execution via ssh. I'm sort of wondering what other
kinds of tools people are using for adding a job and configuration
control overlay to shell scripts. And what about orchestration tools
that take a "mashup maker" approach? Pointers, thoughts, ....

Thanks,

Miles Fidelman


--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra


Noah Campbell

Mar 18, 2012, 4:10:41 PM
to devops-t...@googlegroups.com
Hi Miles,

Check out https://github.com/dtolabs/rerun to help organize your shell scripts. Rundeck definitely supports the sys admin that prefers/is forced to use shell scripts without having to rewrite anything.

-Noah

Noah Campbell
415-513-3545
noahca...@gmail.com

Miles Fidelman

Mar 18, 2012, 4:34:14 PM
to devops-t...@googlegroups.com
Noah Campbell wrote:
> Hi Miles,
>
> Check out https://github.com/dtolabs/rerun to help organize your shell scripts. Rundeck definitely supports the sys admin that prefers/is forced to use shell scripts without having to rewrite anything.
>

Now THAT's the kind of thing I'm looking for. Thanks, Noah! (Note: I'm
definitely in the "prefers to" rather than "forced to" category - a big
fan of the unix approach of re-using and composing small tools, rather
than biting off more monolithic approaches.)

Miles

Noah Campbell

Mar 18, 2012, 4:42:20 PM
to devops-t...@googlegroups.com
Alex Honor, who wrote Rerun, is also the mastermind behind RunDeck, FWIW.

Alex and I used to quip that bash was a mature platform for writing system automation, and it's a shame no one ever wrote a framework to support just this. Alex made that a reality with rerun.

-Noah

Noah Campbell
415-513-3545
noahca...@gmail.com

James Turnbull

Mar 18, 2012, 4:48:52 PM
to devops-t...@googlegroups.com
Noah Campbell wrote:
> Alex Honor, who wrote Rerun, is also the master mind behind RunDeck,
> FWIW.
>
> Alex and I used to quip that bash was a mature platform for writing
> system automation and it's a shame no-one ever wrote a framework to
> support just this. Alex made that a reality with rerun.
>
> -Noah

Noah (or Alex)

I'm curious now. How many customers use RunDeck without some kind of CM
system? Most of the RunDeck users I've spoken to were either Puppet or
Chef users too, or were planning to integrate one or the other or a
similar CM system.

Is there a pool of non-CM using RunDeck users out there?

Regards

James Turnbull

- --
Author of:
* Pro Puppet (http://tinyurl.com/ppuppet)
* Pro Linux System Administration (http://tinyurl.com/linuxadmin)
* Pro Nagios 2.0 (http://tinyurl.com/pronagios)
* Hardening Linux (http://tinyurl.com/hardeninglinux)

Miles Fidelman

Mar 18, 2012, 4:55:37 PM
to devops-t...@googlegroups.com
Noah Campbell wrote:
> Alex Honor, who wrote Rerun, is also the master mind behind RunDeck, FWIW.

Ahh... that makes sense, since both pieces of code have "dto solutions"
stamped all over them. :-)

> Alex and I used to quip that bash was a mature platform for writing system automation and it's a shame no-one ever wrote a framework to support just this. Alex made that a reality with rerun.

Glad I'm not the only one who thinks this. I might say it's just that
I'm old and set in my ways, except the machines & VMs I'm about to
provision are for Erlang and CouchDB development/deployment - building
on a high-availability configuration (drbd, pacemaker, etc.) that took a
while to get right and documented in scripts, config. files, and various
scripts. All I want to do is capture and replay the manual provisioning
& configuration steps, not rewrite everything in chef, or puppet, or
whatever. :-)

I am curious, though - did you guys find anything that came close, or
that you used for inspiration (other than, maybe, tcl/tk)?

Cheers,

Miles Fidelman

Mar 18, 2012, 5:07:30 PM
to devops-t...@googlegroups.com
James Turnbull wrote:
> Noah Campbell wrote:
>> Alex Honor, who wrote Rerun, is also the master mind behind RunDeck,
>> FWIW.
>>
>> Alex and I used to quip that bash was a mature platform for writing
>> system automation and it's a shame no-one ever wrote a framework to
>> support just this. Alex made that a reality with rerun.
>>
>> -Noah
> Noah (or Alex)
>
> I'm curious now. How many customers use RunDeck without some kind of CM
> system? Most of the RunDeck users I've spoken to were either Puppet or
> Chef users too or were planning to integrate one or the other or a like
> CM system.
>
> Is there a pool of non-CM using RunDeck users out there?
>
>
Perhaps a more general question: what CM systems are people using with
Rundeck - it seems to support several. I'd be particularly interested
in tying Rundeck to a configuration database, a la those in network
management systems - I find the administrivia of managing IP addresses,
DNS records, and user accounts/privileges to be the most time-consuming
and annoying part of provisioning. (All the chef and puppet examples I
see seem to involve using something else for initial PXE booting, and
for plugging configuration information into recipes manually. Somehow,
my idea of a configuration management system includes keeping track of
all the detailed, site-specific configuration in a database.)

Noah Campbell

Mar 18, 2012, 5:39:59 PM
to devops-t...@googlegroups.com
If Alex is lurking, I'm sure he'll provide his views on this. In the meantime, I'll give you mine ;^)

CMDBs have a notorious problem of getting out of sync with reality. Traditionally, the value of a CMDB decayed exponentially. Tools like Puppet, Chef, and ControlTier flip this around and provide models that drive the configuration parameters to their "scripts."

Rundeck took a different approach that complements the above quite nicely. We explicitly excluded the model and instead rely on integrations to provide that information. Sometimes, DNS provides all the details you need to know about an environment. Other times (more often than not) they exist in a spreadsheet somewhere. At times, they're in puppet or chef or tivoli or vSphere. Sometimes they're in Rackspace, Rightscale or AWS databases.

Rundeck will source this information and then use it to generate a list of nodes it needs to execute against. I've written (Miles, you'll appreciate this) bash scripts that consume vendor APIs and then transform the output (via xmlstarlet) into JSON. Twenty lines of bash script, and thousands of nodes became immediately available to run commands against.
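
For anyone who hasn't seen the pattern, a rough sketch of what such a script can look like (the endpoint, XML layout, and JSON field names here are made up for illustration - this isn't Rundeck's exact resource format):

    #!/usr/bin/env bash
    # Rough sketch: pull an XML host inventory from a (hypothetical) vendor
    # API and print a JSON list of nodes for Rundeck to consume.
    set -euo pipefail

    API_URL="https://inventory.example.com/api/hosts"   # hypothetical

    echo '['
    curl -s "$API_URL" |
      xmlstarlet sel -t -m '//host' -v 'name' -o ' ' -v 'ip' -n |
      while read -r name ip; do
        printf '  {"nodename":"%s","hostname":"%s"},\n' "$name" "$ip"
      done |
      sed '$s/,$//'          # drop the trailing comma on the last entry
    echo ']'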

Sometimes one data source enriches another and this is where command chaining comes into play. Without getting fancy, I wrote a simple CGI service that exposes this to Rundeck (https://github.com/dtolabs/taps). Others have written EC2 integration (https://github.com/dtolabs/java-ec2-nodes), Chef (https://github.com/opscode/chef-rundeck), puppet (https://github.com/jamtur01/puppet-rundeck) and Jenkins (https://github.com/vbehar/jenkins-rundeck-plugin).

Find what's working in your environment and integrate that. Don't underestimate a Google Docs spreadsheet in terms of maintainability, cost, and conceptual understanding. You can always pave that cow path later.

-Noah

Noah Campbell
415-513-3545
noahca...@gmail.com

Eric Shamow

Mar 18, 2012, 5:42:21 PM
to devops-t...@googlegroups.com
On Sunday, March 18, 2012 at 4:55 PM, Miles Fidelman wrote:
Glad I'm not the only one who thinks this. I might say it's just that
I'm old and set in my ways, except the machines & VMs I'm about to
provision are for Erlang and CouchDB development/deployment - building
on a high-availability configuration (drbd, pacemaker, etc.) that took a
while to get right and documented in scripts, config. files, and various
scripts. All I want to do is capture and replay the manual provisioning
& configuration steps, not rewrite everything in chief, or puppet, or
whatever. :-)

Not to turn this into a full-on toolset debate, but…

It's great that you carefully documented your deployment steps.  What happens when one of your nodes reacts in an unexpected way?  How do you test that the scripts have executed correctly across all systems and that their end state is valid?  What kind of unit and integration tests do you execute to ensure that your shell scripts don't blow your systems - or each other - up after minor changes?
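
To be concrete about what I mean by checking end state - a rough sketch of the sort of verification that's needed, with the service, mount, and path names below as placeholders:

    #!/usr/bin/env bash
    # Minimal post-provisioning smoke test: verify end state, report every
    # check, exit non-zero if any failed. Names below are placeholders.
    set -u

    fail=0
    check() {
      if "$@" >/dev/null 2>&1; then
        echo "ok:   $*"
      else
        echo "FAIL: $*"
        fail=1
      fi
    }

    check mountpoint -q /srv/data                    # filesystem mounted?
    check pgrep -x master                            # postfix master running?
    check grep -q '^nameserver ' /etc/resolv.conf    # resolver configured?

    exit "$fail"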

The reason IT drifts toward a framework is that it forces you to think about these problems, rather than re-inventing everything in an ad-hoc fashion.

If a vendor or developer tossed you an app with a half-assed implementation of encryption, you'd point out the myriad libraries and tools available for robust and secure encryption that they could have used.  Why do we refuse to accept this kind of re-implementation in commercial software but continue to inflict it upon ourselves as sysadmins?


-- 

Eric Shamow
Professional Services

jtimberman

Mar 18, 2012, 6:15:13 PM
to devops-t...@googlegroups.com
On Sunday, March 18, 2012 2:55:37 PM UTC-6, Miles Fidelman wrote:
Noah Campbell wrote:

> Alex and I used to quip that bash was a mature platform for writing system automation and it's a shame no-one ever wrote a framework to support just this.  Alex made that a reality with rerun.

Glad I'm not the only one who thinks this. I might say it's just that
I'm old and set in my ways, except the machines & VMs  I'm about to
provision are for  Erlang and CouchDB development/deployment - building
on a high-availability configuration (drbd, pacemaker, etc.) that took a
while to get right and documented in scripts, config. files, and various
scripts.  All I want to do is capture and replay the manual provisioning
& configuration steps, not rewrite everything in chief, or puppet, or
whatever. :-)

I am curious, though - did you guys find anything that came close, or
used for inspiration (other than, maybe, tcl/tk)?

If you really want to stick with shell scripts, you might also be interested in the BDSM Project by the same guy who wrote RVM.

* https://bdsm.beginrescueend.com/

However, I think that in the modern age of configuration management systems, using legacy shell script code is an anti-pattern. Plus, at least with Chef you can just drop your shell scripts in as a resource :).

Miles Fidelman

Mar 18, 2012, 6:23:35 PM
to devops-t...@googlegroups.com
Noah Campbell wrote:
> Rundeck took a different approach that complements the above quite nicely. We explicitly excluded the model and instead rely on integrations to provide that information. Sometimes, DNS provides all the details you need to know about an environment. Other times (more often then not) they exist in a spreadsheet somewhere. At times, they're in puppet or chef or tivoli or vSphere. Sometimes they're in Rackspace, Rightscale or AWS databases.
... which can sometimes only be accessed via a web screen. Boy do I
hate DNS registrars in this regard.

>
> Rundeck will source this information and then use it to generate a list of nodes it needs to execute against. I've written (Miles you'll appreciate this) bash scripts that consume vendor apis and then transform it (via xmlstarlet) into json. Twenty lines of bash script and 1000s of nodes because immediately available to run commands against.

I'm not sure "appreciate" is the right word. Commiserate, maybe?

> Find what's working in your environment and integrate that. Don't underestimate a Google Docs spreadsheet in terms of maintainability, cost, and conceptual understanding. You can always pave that cow path later.
>

Absolutely. Also a big whiteboard :-)

Cheers,

James Turnbull

Mar 18, 2012, 6:36:35 PM
to devops-t...@googlegroups.com
jtimberman wrote:

> However, I think that in the modern age of configuration management
> systems, using legacy shell script code is an anti-pattern. Plus, at
> least with Chef you can just drop your shell scripts in as a resource :).

And ditto with Puppet.

Regards

James Turnbull

- --
Author of:
* Pro Puppet (http://tinyurl.com/ppuppet)
* Pro Linux System Administration (http://tinyurl.com/linuxadmin)
* Pro Nagios 2.0 (http://tinyurl.com/pronagios)
* Hardening Linux (http://tinyurl.com/hardeninglinux)

Miles Fidelman

Mar 18, 2012, 6:39:01 PM
to devops-t...@googlegroups.com
jtimberman wrote:
>
> If you really want to stick with Shell scripts, you might also be
> interested in the BDSM Project by the same guy that wrote RVM.
>
> * https://bdsm.beginrescueend.com/

That's very interesting. Thanks!

>
> However, I think that in the modern age of configuration management
> systems, using legacy shell script code is an anti-pattern. Plus, at
> least with Chef you can just drop your shell scripts in as a resource :).

Well.... modern configuration management systems are pretty new. I
expect a LOT more systems are managed with scripts than with cfengine,
bcfg2, puppet, chef, etc. combined. I kind of like taking incremental
steps, and I expect chef or puppet will be on the path eventually. Just
not the next step.

Sean OMeara

Mar 18, 2012, 6:49:09 PM
to devops-t...@googlegroups.com
Actually modern config management tools are 20 years old. The rapid
adoption is new ;)

-s

jtimberman

Mar 18, 2012, 6:52:32 PM
to devops-t...@googlegroups.com
On Sunday, March 18, 2012 4:39:01 PM UTC-6, Miles Fidelman wrote:

Well.... modern configuration management systems are pretty new.  I
expect a LOT more systems are managed with scripts than with cfengine,
bfg2, puppet, chef, etc. combined.  I kind of like taking incremental
steps, and I expect chef or puppet will be on the path eventually.  Just
not the next step.


It is 2012. Let's have a short history lesson.

* Cfengine: released in 1993.
* Puppet: released in 2004.
* Chef: released in 2009.

The reason people are promoting these tools and the idea of a framework is the same reason people are building modern web applications in frameworks, instead of just writing CGI scripts.

Miles Fidelman

Mar 18, 2012, 6:56:54 PM
to devops-t...@googlegroups.com
Eric Shamow wrote:
> On Sunday, March 18, 2012 at 4:55 PM, Miles Fidelman wrote:
>> Glad I'm not the only one who thinks this. I might say it's just that
>> I'm old and set in my ways, except the machines & VMs I'm about to
>> provision are for Erlang and CouchDB development/deployment - building
>> on a high-availability configuration (drbd, pacemaker, etc.) that took a
>> while to get right and documented in scripts, config. files, and various
>> scripts. All I want to do is capture and replay the manual provisioning
>> & configuration steps, not rewrite everything in chief, or puppet, or
>> whatever. :-)
>
> Not to turn this into a full-on toolset debate, but…
Why not? :-)

Seriously... this set of arguments is not new.

In the network management arena (where I come from), OpenView, or
Tivoli, or (name your network management system) was supposed to solve
all problems. Sure, Nagios seems to be on top right now, but partially
because it integrates nicely with lots of other tools. And then there's
Zabbix, and ZenOSS, and a dozen more. And not one of them solves all
of one's operational needs - you always need to filter/monitor log files,
and run trend analysis tools, etc., etc.

If you're developing software, you need a VCS - but it can be CVS, Git,
Bazaar, Subversion, etc., etc. And one almost always customizes
configuration and processes. And then there are choices regarding
language, run-time environment, build tools, etc., etc.

Ultimately, it always comes down to a choice between:
- picking a framework and integrating into it, or,
- starting with smaller "building blocks" and creating assemblages of
those over time

Both approaches have their strengths and weaknesses, but neither is a
panacea.

>
> It's great that you carefully documented your deployment steps. What
> happens when one of your nodes reacts in an unexpected way? How do
> you test that the scripts have executed correctly across all systems
> and that their end state is valid? What kind of unit and integration
> tests do you execute to ensure that your shell scripts don't blow your
> systems - or each other - up after minor changes?
>
> The reason IT drifts toward a framework is that it forces you to think
> about these problems, rather than re-inventing everything in an ad-hoc
> fashion.

That's assuming that a framework handles a reasonable range of cases.
And it's still the case that most problems tend to be a level of detail
below what can be handled automatically. In my experience, people tend
to abandon frameworks because they're too rigid to handle the full range
of cases. Your mileage may vary.

At least in my experience, a lot of problems have a lot more to do with
finding dependencies and incompatibilities, the first time you install a
piece of code, and things that break when upgrading. Then it's a matter
of repeatability.

Avoiding software that doesn't include a 'make test' regression test,
and using a good package manager to catch dependencies (at the risk of
starting another war, apt beats everything else, hands down), solves
LOTS of problems. Then it's a matter of making sure to have a good
checklist when you want to build the next machine - be it a manual
checklist or an automated one.
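
To be concrete, by an "automated checklist" I mean nothing fancier than an idempotent shell script - a rough sketch, with the package, file, and service names below as placeholders:

    #!/usr/bin/env bash
    # Rough sketch of an "automated checklist": every step checks before it
    # acts, so the script can be re-run safely. Names are placeholders.
    set -euo pipefail

    step() { echo "==> $*"; }

    step "base packages (apt pulls in the dependencies)"
    apt-get install -y drbd8-utils pacemaker postfix

    step "mail configuration"
    if ! cmp -s files/main.cf /etc/postfix/main.cf; then
      install -m 0644 files/main.cf /etc/postfix/main.cf
      service postfix reload
    fi

    step "sanity check"
    postfix check          # non-zero exit if the install is broken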

>
> If a vendor or developer tossed you an app with a half-assed
> implementation of encryption, you'd point out the myriad libraries and
> tools available for robust and secure encryption that they could have
> used. Why do we refuse to accept this kind of re-implementation in
> commercial software but continue to inflict it upon ourselves as
> sysadmins?
>

Umm... bash, apt, make, debconf, pre-seeding files - why reinvent stuff
that works?
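
As one concrete example of pre-seeding: debconf answers captured once can be replayed non-interactively - a rough sketch (the postfix selections are the standard ones; the mailname value is a placeholder):

    #!/usr/bin/env bash
    # Sketch: answer debconf questions up front, then install without
    # prompts. The mailname value is a placeholder.
    set -e
    export DEBIAN_FRONTEND=noninteractive

    {
      echo "postfix postfix/main_mailer_type select Internet Site"
      echo "postfix postfix/mailname string mail.example.com"
    } | debconf-set-selections

    apt-get install -y postfix

    # To capture the answers from a box that was configured by hand:
    #   debconf-get-selections | grep '^postfix' > postfix.seed
    # (debconf-get-selections is in the debconf-utils package)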


>
> Eric Shamow
> Professional Services
> http://puppetlabs.com/
> (c)631.871.6441
>

Well... I understand that you have to take that position, and that
various frameworks are STARTING to mature, but I'm not quite ready to
take that kind of leap. I'm not quite ready to pick a framework that
constrains lots of choices at other points in the toolchain.

Brad Knowles

Mar 18, 2012, 7:00:42 PM
to devops-t...@googlegroups.com, Brad Knowles
On Mar 18, 2012, at 10:49 PM, Sean OMeara wrote:

> Actually modern config management tools are 20 years old. The rapid
> adoption is new ;)

I believe that IBM largely solved this problem decades ago for their mainframes, just as they largely solved it with regard to multiple different OSes running in virtual machine environments.

Everyone else since then has been trying to re-discover and re-invent many of the same wheels, many times over.

--
Brad Knowles <br...@shub-internet.org>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>

Sean OMeara

Mar 18, 2012, 7:25:43 PM
to devops-t...@googlegroups.com
> Umm... bash, apt, make, debconf, pre-seeding files - why reinvent stuff
> that works?
>
> Well... I understand that you have to take that position, and that various
> frameworks are STARTING to mature, but I'm not quite ready to take that kind
> of leap.  I'm not quite ready to pick a framework that constrains lots of
> choices at other points in the toolchain.

This is the crux of it.

Bash, apt, make, debconf, and pre-seeding do NOT work.

Modern config management represents a completely different technique
for infrastructure management. The tools merely enable the technique.
It's up to you to create a model.

Web frameworks and the network tools you mentioned impose a model
(rails, django, tivoli, openview, etc.). If you need to step outside
the model, you need to invent techniques to hack around it.

-s

botchagalupe

Mar 18, 2012, 9:43:38 PM
to devops-t...@googlegroups.com
I think of cfengine, puppet and chef as a third generation of configuration management.  Most of the prior tools were primarily focused on packages as the end state, not on the server or service.  With the convergence of cloud, virtualization and massive web scale, it is IMHO impossible to live without them...

Schlomo Schapiro

Mar 19, 2012, 2:28:26 AM
to devops-t...@googlegroups.com
Hi,

On 18 March 2012 21:55, Miles Fidelman <mfid...@meetinghouse.net> wrote:
All I want to do is capture and replay the manual provisioning & configuration steps, not rewrite everything in chief, or puppet, or whatever. :-)

http://asic-linux.com.mx/~izto/checkinstall/ will do exactly that and turn arbitrary installation procedures into a package. I use this from time to time and it works quite well. On most distros it is already in the repositories.
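
For anyone who hasn't used it, checkinstall is run in place of "make install" - a rough sketch, with the package name and version as placeholders:

    # Build as usual, then let checkinstall run the install step and wrap
    # the result in a .deb (or .rpm/.tgz) that the package manager tracks.
    ./configure && make
    sudo checkinstall --default --pkgname=mytool --pkgversion=1.0 make install

    # The package it produces can be copied to other machines, installed
    # with dpkg -i, and removed cleanly later with apt-get remove.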

Regards,
Schlomo

Miles Fidelman

Mar 19, 2012, 3:05:45 AM
to devops-t...@googlegroups.com
Schlomo Schapiro wrote:
> Hi,
>
> On 18 March 2012 21:55, Miles Fidelman <mfid...@meetinghouse.net> wrote:
>
> All I want to do is capture and replay the manual provisioning &
> configuration steps, not rewrite everything in chief, or puppet,
> or whatever. :-)
>
>
> http://asic-linux.com.mx/~izto/checkinstall/ will do exactly that
> and turn arbitrary installation procedures into a package. I use this
> from time to time and it works quite well. On most distros it is
> already in the repositories.
>

Thanks for the suggestion, and that certainly is useful for some aspects
of provisioning.

But.. I was under the impression that all checkinstall does is wrap
package management glue around building a package from a source tarball.

I'm thinking about a much broader range of activities (a rough sketch of one such step follows the list):
- installing and configuring hypervisors and Dom0s
- configuring network interfaces and firewall rules
- building and configuring file systems and logical volumes
- configuring disk mirroring and failover rules
- building VMs
- wiring up families of things (e.g., wiring together postfix, amavisd,
spamassassin, and a list manager)
- ...
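
For example, the logical-volume step, captured as a replayable script (volume group, names, and sizes below are placeholders):

    #!/usr/bin/env bash
    # A manual step captured as a replayable script: create a logical
    # volume and filesystem only if missing. Names/sizes are placeholders.
    set -euo pipefail

    VG=vg0 LV=couchdb SIZE=20G MNT=/var/lib/couchdb

    if ! lvdisplay "/dev/$VG/$LV" >/dev/null 2>&1; then
      lvcreate -L "$SIZE" -n "$LV" "$VG"
    fi
    if ! blkid "/dev/$VG/$LV" >/dev/null 2>&1; then
      mkfs.ext4 "/dev/$VG/$LV"
    fi
    if ! grep -q "^/dev/$VG/$LV " /etc/fstab; then
      echo "/dev/$VG/$LV $MNT ext4 defaults 0 2" >> /etc/fstab
    fi
    mkdir -p "$MNT"
    mountpoint -q "$MNT" || mount "$MNT"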

Alex Honor

Mar 18, 2012, 7:34:33 PM
to devops-toolchain
Many of our customers use or want to use a configuration management
tool like Puppet or Chef. The line between configuration management
use cases and orchestration use cases makes tool selection a bit more
customer specific.

Many like the DSL layer provided by tools like Puppet and Chef since
they provide abstraction over the resources they need to manage and
establish a conceptual management model. A management model helps
people adopt a common problem solving approach. This is a crucial
capability for an organization to be successful.

For me the question comes down to having "modular automation": a
feature that allows reusable code to be organized into a framework. A
framework can also help abstract process, but just as importantly it
abstracts how the code is invoked by establishing a common calling
interface. Tools like Fabric, Salt, and Juju offer modular automation,
and I often see these in orchestration scenarios.
There is no one-size-fits-all solution, and therefore I focus first
on loosely coupled management architectures that support tool swapping
for scale and evolution.

I think the Rundeck community uses the tool in several typical ways:

* Job console: Save routine procedures into "jobs", simple workflow
structures (classic RBA).
* Ad hoc commands: Run any command or script across a set of
metadata-tagged hosts (a rough sketch of the idea follows this list).
* Orchestration of CM tool actions to manage complex provisioning
scenarios.
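
Conceptually, the ad hoc case is ssh-in-a-loop with the node list coming from the resource model - a rough sketch of the idea (not Rundeck's actual CLI; the node file format is assumed):

    #!/usr/bin/env bash
    # Illustration of "run a command across metadata-tagged hosts" - not
    # Rundeck's actual CLI, just the underlying idea.
    # Assumed nodes.txt format:  hostname tag1,tag2,...
    set -euo pipefail

    tag="$1"; shift                       # e.g. ./adhoc.sh web -- uptime
    if [ "${1:-}" = "--" ]; then shift; fi

    awk -v t="$tag" '$2 ~ "(^|,)" t "(,|$)" { print $1 }' nodes.txt |
      while read -r host; do
        echo "== $host =="
        ssh -n -o BatchMode=yes "$host" "$@" || echo "   (failed on $host)"
      done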

We are trying to keep Rundeck as a coordination hub that is "powered
by" underlying frameworks.

Miles Fidelman

Mar 20, 2012, 10:53:23 AM
to devops-t...@googlegroups.com
Alex Honor wrote:
> Many of our customers use or want to use a configuration management
> tool like Puppet or Chef. The line between configuration management
> use cases and orchestration use cases makes tool selection a bit more
> customer specific.
>
> Many like the DSL layer provided by tools like Puppet and Chef since
> they provide abstraction over the resources they need to manage and
> establish a conceptual management model. A management model helps
> people adopt a common problem solving approach. This is a crucial
> capability for an organization to be successful.
<snip>

> We are trying to keep Rundeck as a coordination hub that is "powered
> by" underlying frameworks.

Out of more than idle curiosity, have you seen anybody couple rundeck
with Erlang/OTP build->hot-deploy->runtime infrastructure?

My immediate questions have been motivated by cleaning up a small
in-house cluster. Down the road we're doing some development based on
Erlang and a couple of noSQL databases that run in the Erlang
environment - for those, we'll be deploying really stripped down nodes
(o/s + erlang run-time), and deploying all our application code into the
Erlang environment. That's the environment we hope we'll have to scale,
and our intent is to leverage the Erlang/OTP infrastructure.

Thanks,

Sean OMeara

Mar 20, 2012, 11:05:48 AM
to devops-t...@googlegroups.com
I like to manage erlang deployment with the Chef erl_call and SCM
resources. No idea how you'd do it with rundeck.
-s

Miles Fidelman

Mar 23, 2012, 1:27:04 PM
to devops-t...@googlegroups.com
Thanks to all for the input. I was looking for a tool that would provide
for something like this:

- provide for editing/managing/versioning scripts (script = anything
that can be invoked at the command line)
- a library of control functions for use within scripts
- invoking scripts, combinations of scripts, pipelines of scripts (in
the Unix sense of pipes) - locally, remotely, across multiple machines
- remote script execution via ssh, rather than some kind of agent
- providing a simple database for keeping track of variables used by
scripts (e.g., IP addresses, DNS records for use by a provisioning
script) - that can be accessed from scripts (see the sketch after this list)
- accessing the above via cli, RESTful API, GUI
- cross-platform
- (nice-to-have) minimal environmental requirements (i.e., a step above
the unix shell, say the gnu buildtool suite rather than the JVM + a mass
of libraries, or the ruby ecosystem -- a self-configuring basic
environment would be nice, like perl+cpan)
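
For the "simple database" item, I have in mind not much more than a flat file plus a lookup function that any script can call - a rough sketch (the file path, format, and key names are illustrative):

    #!/usr/bin/env bash
    # Sketch of a "configuration database" that scripts can query: one flat
    # file (kept under version control) plus one lookup function.
    # Assumed file format, one "key value" pair per line:
    #   node1.example.com.ip        10.0.0.11
    #   node1.example.com.dns_zone  example.com
    set -euo pipefail

    CONFDB="${CONFDB:-/srv/config/confdb.txt}"     # path is a placeholder

    conf_get() {    # usage: conf_get node1.example.com.ip
      awk -v k="$1" '$1==k {print $2; found=1} END {exit !found}' "$CONFDB"
    }

    # Example use from a provisioning script (fails loudly if key missing):
    ip="$(conf_get node1.example.com.ip)"
    echo "configuring primary interface with address $ip"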


A bunch of people pointed at rundeck and rerun, both of which look incredibly helpful, and a
couple of folks on the devops list pointed me at this: https://bdsm.beginrescueend.com/modules/shell

Essentially it's a management framework for running shell scripts, written by Wayne Seguin at Engine Yard,
also the author of RVM (Ruby Version Manager). Information on the web site is just a bit disorganized,
but there's a pretty good manual in pdf, an introductory slideshow, and a comprehensive git repo.

Wayne just spent the morning walking me through it - an incredibly powerful tool, which I'm now going to
go off and use as I rebuild a couple of servers.

I expect that a combination of these tools, along with git and a simple database for tracking things like
IP numbers and DNS records is going to do the job nicely.

Miles Fidelman
