I find myself generalizing common approaches like so:
* Build-driven configuration: Build process generates packaged
configuration artifacts that can be released and installed without
customization. Note there could be multiple permutations of this. See
http://www.build-doctor.com/2010/03/26/supporting-multiple-environments-part-2/
* Deploy-driven: At deployment time, configuration is customized by
the deployment automation
* System configuration-driven: Rule- and role-based policy that runs
continuously, enforcing compliance with the configuration
specification
One might imagine a particular tool and/or framework underlying those
approaches.
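To make the first of those concrete, here is a minimal sketch of a
build-driven step in Python (the template, file names, and
environment values are invented for illustration, not taken from any
particular tool):

    # build_configs.py - hypothetical build step: render one packaged
    # config artifact per environment so a release installs without
    # customization.
    from string import Template

    TEMPLATE = Template("db_host=$db_host\nlog_level=$log_level\n")

    ENVIRONMENTS = {  # assumed per-environment values
        "qa":   {"db_host": "qa-db01",   "log_level": "DEBUG"},
        "prod": {"db_host": "prod-db01", "log_level": "WARN"},
    }

    for env, values in ENVIRONMENTS.items():
        with open("app.conf.%s" % env, "w") as f:
            f.write(TEMPLATE.substitute(values))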
All of the above begs a final question: How are you communicating
configuration changes between development and operations teams (and
back again)?
* Release artifact is the communication. Developers drive the
configuration changes through to operations, delivering working
configurations as part of the app release
* Operations plays "catch up". Operations analyzes the file deltas in
the developer release and updates their own methods to replicate
similar features using their own toolset
* Hit and miss. There is no consistent standard, so changes fall
through the cracks
> All of the above begs a final question: How are you communicating
> configuration changes between development and operations teams (and
> back again)?
Whatever the kind of configuration change, all of them are in version
control, so there's an audit trail of what was done, when, and by
whom, hopefully with a comment or two about it.
Depending on the nature of the change:
- If there's a CM ticket, it is there for all to see, and
notifications usually go out via email well ahead of time, when the
change starts, and when the change is finished.
- If there's no ticket, notifications go out via email, IRC, or IM.
We had an IM bot at Flickr which would parrot messages sent to it to
a list of people. Those messages were also injected into IRC, where
they were fed into a search engine for easy finding later.
Ex: "4:07:24 PM FlickrIMBot: OpsJoe says: pushed config change to re-
point to "dbthing[1-2]" - renamed servers to dbthing1,2, and the old
ones to dbthingold1,2, which aren't doing anything, now."
If the change is large enough, they're also brought up as a reminder
in both the dev and ops weekly staff meetings. "FYI, this is happening
on Wednesday...."
Again, though, I think that the actual type of change is important.
Many application-specific changes (e.g. what database to talk to for
what data) were kept within the application, so a change made there
(feature flags, dark launches, etc.) was really no different from a
normal code deploy.
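(For anyone unfamiliar with that pattern, a minimal Python sketch;
the flag name here is invented, not an actual Flickr flag:)

    # flags.py - hypothetical application-level feature flag: flipping
    # it ships through a normal code deploy, like any other change.
    FEATURE_FLAGS = {
        "new_search_backend": False,  # dark launch: code deployed, feature off
    }

    def enabled(flag):
        # Unknown flags default to off, so a missing entry fails safe.
        return FEATURE_FLAGS.get(flag, False)

    backend = "v2" if enabled("new_search_backend") else "v1"
    print("using search backend %s" % backend)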
-j
On Mar 26, 2:04 pm, Alex-SF <aho...@users.sourceforge.net> wrote:
> How do you release configuration?
> * Lump it inside the deployable code artifact
> * Break it out as a separate artifact that goes out alongside the
> code artifact?
> * Generate a working configuration based on templates or aggregated
> fragments
> * Leave it to an external system configuration management layer.
> * Manual customization at deployment time (yikes)
>
> I find myself generalizing common approaches like so:
> * Build-driven configuration: Build process generates packaged
> configuration artifacts that can be released and installed without
> customization. Note there could be multiple permutations of this. See
> http://www.build-doctor.com/2010/03/26/supporting-multiple-environments-part-2/
I've never really broken it down into generalised categories before -
thanks Alex!
"All of the above begs a final question: How are you communicating
configuration changes between development and operations teams (and
back again)? "
Given my point above, I find this particularly tricky. Putting
configuration into source control sounds great, but does that work with tools
like ControlTier (e.g. changing JAVA_OPTS)? (Bit of a CT newbie, so if
anyone can enlighten me please do!)
Cheers,
Adrian Howchin
Not just configuration, but software and hardware deployment in
general, for example replacing a filer or upgrading Apache. I have
also found it difficult to find an organization that will even buy
into the concept, much less a QA team that can handle it.
Scott M
On Mar 29, 2010 8:08 PM, "Noah Campbell" <noahca...@gmail.com> wrote:
I think treating configuration as code is a very important concept.
Treating it otherwise leads towards dysfunction. The tough part to
tackle is that configuration, by its very nature, requires
cross-departmental coordination, whereas code is typically isolated
to engineering (QA is required to coordinate the change as well, but
in my experience it's difficult to find a strong QA department that
will scrutinize configuration).
-Noah
On Mon, Mar 29, 2010 at 4:27 PM, ahowchin <samp...@gmail.com> wrote:
> I find that "configuration...
We spend a lot of time and effort communicating changes to
configurations. Most of it is manual in some way. First, Dev
communicates changes in the daily builds to CM (usually by email);
then, prior to a staging/production release, CM communicates to Ops,
aggregating all the changes of a sprint into one big change. We try
to capture all changes in a file, but we usually resort to revision
diffs of the configuration files in Subversion to capture everything.
We tend to spend days prior to a release first aggregating all the
configuration changes, then communicating them to Ops; Ops then makes
the changes, and we review them prior to the deployment. It's labor
intensive, but effective, hence the desire to find a tool to help
automate this process.
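(As a hedged illustration of that aggregation step, with the
repository path and revision numbers invented, a small script can
pull the per-release config delta straight out of Subversion:)

    # release_config_diff.py - hypothetical helper: collect all config
    # changes between two release revisions into one reviewable diff.
    import subprocess

    PREV_RELEASE = "1400"  # assumed revision of the previous release
    THIS_RELEASE = "1523"  # assumed revision being staged

    diff = subprocess.check_output(
        ["svn", "diff", "-r", "%s:%s" % (PREV_RELEASE, THIS_RELEASE),
         "conf/"])
    with open("config-changes-for-ops.diff", "wb") as f:
        f.write(diff)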
Back communication (Ops to Dev) is not much of an issue, as Ops
tends to change only values, not keys, and our model allows Ops to
manage their own values.
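(That values-not-keys split can even be checked mechanically; a
minimal sketch with invented keys, and the two layers inlined as
dicts for brevity:)

    # merge_config.py - hypothetical merge: Ops may override values but
    # never introduce keys that Dev didn't define.
    dev = {"db.pool.size": "10", "cache.ttl": "300"}  # keys + defaults from Dev
    ops = {"db.pool.size": "50"}                      # value overrides from Ops

    unknown = set(ops) - set(dev)
    if unknown:
        raise ValueError("Ops layer adds unknown keys: %s" % sorted(unknown))

    effective = dict(dev)
    effective.update(ops)  # Ops values win where keys overlap
    print(effective)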
Dan
On Mar 26, 5:04 pm, Alex-SF <aho...@users.sourceforge.net> wrote:
> How do you release configuration? [...]
A few thoughts of my own:
What would it encompass (what is its scope and purpose)? --> To
manage configuration changes in line with code changes, ensuring that
both sets move into an environment consistently (and preferably in an
automated/semi-automated manner, able to be triggered).
How would it push changes through - automated or on operator input (or
both)? Both
What would trigger a change in this config management system?
Schedules, either internal or external - e.g. through ControlTier.
What would the interface be - GUI, pure xml files, command-line
only? A mixture of all 3, but a strong xml/command-line interface
would be necessary to ensure interoperability with other tools (i.e.
the tool's API).
These are just my initial thoughts and imaginings - feel free to add/
destroy/suggest your own.
Cheers,
Adrian
I have to quote James White on Infrastructure here:
== Rules ==
On Infrastructure
-----------------
There is one system, not a collection of systems.
The desired state of the system should be a known quantity.
The “known quantity” must be machine parseable.
The actual state of the system must self-correct to the desired state.
The only authoritative source for the actual state of the system is the system.
The entire system must be deployable using source media and text files.
I would dare to add to that:
The source media and text files must be versioned.
I don't feel it is about treating configuration as code; it is about
treating everything (configs, code, even firmware) as components in a
single system. Processes and tools have to be able to deal with the
entire stack, from switch and router firmware, to high-end SAN and
NAS configuration, to serried ranks of servers. By the same token,
any module of that system must be machine parseable, must be
deployable from source media, and must be versioned.
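To illustrate the "self-correct to the desired state" rule, here is a
toy converge loop in Python (the resource model and file contents are
invented; real CM tools do this with far richer resource types):

    # converge.py - toy convergence: read machine-parseable desired
    # state, inspect actual state, correct any drift.
    import os

    # Desired state, which would live in versioned text files.
    DESIRED = {"/tmp/motd": "welcome\n"}

    def converge():
        for path, wanted in DESIRED.items():
            actual = open(path).read() if os.path.exists(path) else None
            if actual != wanted:                # drift detected
                with open(path, "w") as f:
                    f.write(wanted)             # self-correct to desired

    converge()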
QA is not really about testing[0]; it is really about Quality
Assurance. They are the gatekeepers of our reputations; as devs and
sysops, we write the unit tests and the BDT scripts that define the
functional testing of our systems.
[0] I seem to have defined and written most of the infrastructure
tests at my previous position before handing them over to the QA
team.
Jim :)
On Apr 1, 1:19 am, James Bailey <paradoxbo...@googlemail.com> wrote:
> I find myself generalizing common approaches like so: [...]
I really like EJ Ciramella's build-doctor write-up on the different
approaches and how well each one "scales". I'd add a note that the
well-defined configuration-defaults concept, which I first read about
in the Postfix project and which some folks call "convention over
configuration", helps scalability. It is incredibly useful: when you
set up a new experimental environment, it should for the most part
work and boot up without tremendous config work. So, when you choose
a template tool, make sure it has support for defaults and overloads.
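(A hedged sketch of what "defaults and overloads" buys you, with
invented parameter names; the point is that a new environment only
states what differs:)

    # defaults_then_overrides.py - toy "convention over configuration":
    # everything has a sane default; an environment overrides sparsely.
    from collections import ChainMap

    DEFAULTS = {"listen_port": 8080, "workers": 4, "debug": False}

    # An experimental environment boots with almost no config work:
    experimental = ChainMap({"debug": True}, DEFAULTS)
    print(experimental["listen_port"], experimental["debug"])  # 8080 True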
Another thing that adds to the "scalability" of config is
incident-driven config overloading by the admin/operator. Normal
runtime would probably not use this capability, but when you have to
move fast during an incident that doesn't have a defined procedure,
you don't want to go back and generate builds and push artifacts
around. This capability also helps developers and performance
optimizers experiment with configs prior to committing them to source
control.
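A minimal sketch of that escape hatch (the override file location and
setting name are assumptions): the application consults an
operator-writable override layer at read time, so an incident tweak
needs no rebuild and no artifact push:

    # runtime_overrides.py - hypothetical incident overload: an
    # operator-edited file wins over the built-in value at read time.
    import json
    import os

    OVERRIDE_FILE = "/tmp/app-overrides.json"  # assumed location

    def get_setting(key, built_in):
        if os.path.exists(OVERRIDE_FILE):
            with open(OVERRIDE_FILE) as f:
                overrides = json.load(f)
            if key in overrides:
                return overrides[key]  # operator wins during an incident
        return built_in                # normal runtime: built-in value

    print(get_setting("backend_timeout_secs", 5))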