observation re. build systems


Miles Fidelman

Dec 31, 2015, 1:47:52 PM
to dev...@googlegroups.com
Folks,

I've recently found myself looking for a new distro for our production 
servers, and thinking a lot about build process automation. And, I've 
noticed something that seems more than a little disturbing.

<background>
One of my motivations has been the impact that widespread adoption of
systemd is going to have on our next upgrade - basically, all of
our code works just fine on classic environment (i.e., sysvinit) - but
it sure looks like I'll either have to manually tweak all of our
init code to run w/ systemd; or migrate to a new platform.  I've 
also been looking at alternate platforms -  particularly Solaris derivatives.

I'll note that most of our application code is built from source - basic "./config;
make; make install" - easy enough to automate on any environment that 
supports classic sysvinit. 
</background>

In looking at various distros, documentation of how to install
and manage unpackaged software seems to have almost disappeared - i.e.,
an awful lot of distros seem to assume that EVERYTHING is packaged.

At least in my experience, the reverse is more common:

- developers tend to distribute source, built in their language-specific
development environment, "packaged" for cross-platform building (e.g., a
.tar file created using gnu autotools), or a .jar file, or what have you

- it's pretty rare for developers to package for more than a few,
particularly popular distros (if they package at all).

- when building production servers, it's a lot more reliable to
"./config; make; make install" than to rely on packages for anything other
than utilities and platform stuff

- an awful lot of stuff uses its own dependency resolution mechanisms
and repositories (e.g., perl w/ cpan)
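The "./config; make; make install" flow above scripts easily enough - a minimal sketch, where the function name and example paths are mine and purely illustrative:

```shell
#!/bin/sh
# Minimal sketch of automating the classic source-build flow.
# build_from_source and the example paths are hypothetical, not from
# any particular tool.
set -e

build_from_source() {
    src="$1"      # unpacked autotools-style source tree
    prefix="$2"   # install prefix, e.g. /opt/myapp
    (
        cd "$src"
        ./configure --prefix="$prefix"
        make
        make install
    )
}

# usage, assuming a source tree unpacked at /usr/local/src/myapp-1.0:
# build_from_source /usr/local/src/myapp-1.0 /opt/myapp
```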

The discrepancy between what developers produce, and an ecosystem that
supports easy cross-platform deployment, seems to be getting worse (the one
possible exception being the sort-of new GUIXSD environment, which takes a 
devops approach to scripting the build of a linux system - but it's all 
alpha code, and it seems to still have the notion that application code
has to be packaged - in its own format).

Not sure what can be done, or by whom, but seems like an issue worth raising.

Comments, thoughts?

Miles Fidelman

-- 
In theory, there is no difference between theory and practice.
In practice, there is.  .... Yogi Berra





Ranjib Dey

Dec 31, 2015, 2:20:43 PM
to dev...@googlegroups.com
Comments inline,

On Thu, Dec 31, 2015 at 10:47 AM, Miles Fidelman <mfid...@meetinghouse.net> wrote:
Folks,

I've recently found myself looking for a new distro for our production 
servers, and thinking a lot about build process automation. And, I've 
noticed something that seems more than a little disturbing.

<background>
One of my motivations has been the impact that widespread adoption of
systemd is going to have on our next upgrade - basically, all of
our code works just fine on classic environment (i.e., sysvinit) - but
systemd is capable of running sysv-style scripts as well, so migration does not need to be a big-bang execution - it can be incremental. CentOS/RHEL 6->7 users already went through this process, and Ubuntu will make the same journey (16.04). I am migrating a lot of things to systemd; I agree it is very different from its older counterparts, and a lot of things can be changed to take advantage of it, but this can be done incrementally. You can keep using your sysv scripts and config, swap the init daemon, and then slowly convert the sysv scripts, cron jobs, and env-var settings to systemd style (unit files etc.)
it sure looks like I'll either have to manually tweak all of our
init code to run w/ systemd; or migrate to a new platform.  I've 
also been looking at alternate platforms -  particularly Solaris derivatives.

I'll note that most of our application code is built from source - basic "./config;
make; make install" - easy enough to automate on any environment that 
supports classic sysvinit. 
</background>

In looking at various distros, it seems like documentation of how to install 
and manage unpackaged software seems to have almost disappeared - i.e., 
Yes - because it's hard for distro maintainers to keep software-specific packaging details current, they do it for only a few core things. But the trend is reversing: not only is packaging style changing (e.g., app containers), it is also largely being offloaded to individual software maintainers (projects now provide their own rpm/deb specs, Dockerfiles, etc.).
it seems like an awful lot of distros seem to assume that EVERYTHING is packaged.
Right, and I think it's a good idea. Building from source is not convenient (time, build dependencies, CPU/memory footprint), it's harder to automate, and it requires operators to understand that particular piece of software.
At least in my experience, the reverse is more common:

- developers tend to distribute source, built in their language-specific
development environment, "packaged" for cross-platform building (e.g., a
.tar file created using gnu autotools), or a .jar file, or what have you
I don't think so - it varies greatly from project to project. Projects in Go, Java, and C now provide binaries straight up as part of their releases. A bulk of other projects have established tooling patterns that create the packages (fpm for debs, Dockerfiles, etc.)
- it's pretty rare for developers to package for more than a few,
particularly popular distros (if they package at all).
Tools like fpm can produce multiple formats (deb, rpm). Some prefer to distribute language-specific packages (CPAN modules, Ruby gems, Python eggs, etc.). Container-based images (ACI, Docker) will be more distro-neutral.
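To make the fpm point concrete - a hypothetical invocation, where the package name, version, and paths are invented and fpm itself is assumed to be installed (e.g. via `gem install fpm`):

```shell
# Build the same directory tree as both a .deb and an .rpm with fpm.
# Every name here is illustrative.
fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp ./build
fpm -s dir -t rpm -n myapp -v 1.0.0 --prefix /opt/myapp ./build
```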
- when building production servers, it's a lot more reliable to
"./config; make; make install" than to rely on packages for anything other
than utilities and platform stuff
Disagree :-)
- an awful lot of stuff uses its own dependency resolution mechanisms
and repositories (e.g., perl w/ cpan)

The discrepancy between what developers produce, and an ecosystem that
supports easy cross-platform deployment, seems to be getting worse (the one
possible exception being the sort-of new GUIXSD environment, which takes a 
devops approach to scripting the build of a linux system - but it's all 
alpha code, and it seems to still have the notion that application code
has to be packaged - in its own format).
Right, this is a very hard problem. Two popular ways to address it currently are containers (LXC, rkt, Docker, etc.), which encapsulate their dependencies without disturbing the host OS, or something like the Omnibus builder, which creates a fat build (everything above glibc is bundled in) - Chef, Sensu, etc. use this.
Not sure what can be done, or by whom, but seems like an issue worth raising.

Comments, thoughts?

Miles Fidelman

All my comments are pretty arbitrary, as I don't have data/stats to back them (like GitHub project counts) and I'm biased by the things I have experienced. What I have learned is that what you can do depends greatly on what constraints you have and what type of risk you are willing to take. Examples of constraints: people and their skill sets, time availability, how fluid your infrastructure is (all physical servers, all cloud, a mixed setup). Examples of risk: tech debt (the gap between the current state of tooling and where your existing tools are), etc.
If you can share more context (how many servers you have, whether they are physical or cloud-based, the state of your automation, how you roll out changes, your current distro, etc.), that would help.
hope this helps
ranjib
--

---
You received this message because you are subscribed to the Google Groups "devops" group.
To unsubscribe from this group and stop receiving emails from it, send an email to devops+un...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Matt Joyce

Dec 31, 2015, 2:31:26 PM
to dev...@googlegroups.com
I am not a fan of systemd either.  It solves a few small esoteric problems at the cost of overwhelming complexity, and opportunity to build where you shouldn't.  It's all in all been a huge mistake from an automation standpoint.  At least that's my opinion.

As far as packaging developers code... I've had to tangle with ruby and python pretty extensively, and they both are basically impossible to package.  Most folks now seem to either be deploying into virtual environments or jails / containers / chroots.

There is inherent risk there.  But frankly at least it takes those problems off my plate.

=/

Keep it simple stupid, as true today as it ever was.

-Matt


Zach Hanna

Dec 31, 2015, 2:32:48 PM
to Devops
Definitely interested in the outcome of this thread, and how it pertains both to on-site virtualized infrastructure and to AWS infra controlled with Puppet/Jenkins and other tools. I too have seen massively increased adoption of either:
-Independent repositories (CPAN, and half a dozen CPAN clones for other languages)
-curl | bash

Ranjib Dey

Dec 31, 2015, 2:47:34 PM
to dev...@googlegroups.com
On Thu, Dec 31, 2015 at 11:31 AM, Matt Joyce <mdj...@gmail.com> wrote:
I am not a fan of systemd either.  It solves a few small esoteric problems at the cost of overwhelming complexity, and opportunity to build where you shouldn't.  It's all in all been a huge mistake from an automation standpoint.  At least that's my opinion.

I don't like systemd either (it feels like a monolith), but I disagree that it makes automation harder. In fact, it makes things a lot more streamlined: it provides the common functionality as a directive-based system, which greatly simplifies automation. Every migration I have done to systemd has been very rewarding - I can give concrete data on this. One can get rid of hundreds of lines of bash spread across env files, init scripts, and supervision DSLs (monit, supervisord, etc.). Moreover, systemd brings D-Bus APIs, which means you can start/stop/list services without writing any file on the system; until now something like this was virtually impossible and inconsistent across distros. I would like to see counterexamples - code samples would be awesome.
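For illustration, here is a sketch of the kind of unit file whose directives replace that scattered bash - every name in it is made up:

```ini
# Hypothetical /etc/systemd/system/myapp.service - all names are illustrative.

[Unit]
Description=myapp (illustrative example)
After=network.target

[Service]
# replaces sourcing an env file in the init script
EnvironmentFile=/etc/myapp/env
# replaces the init script's start/stop/pidfile handling
ExecStart=/opt/myapp/bin/myapp
# replaces monit/supervisord restart rules
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

With that in place, `systemctl start myapp` and `systemctl status myapp` work with no further scripting.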
As far as packaging developers code... I've had to tangle with ruby and python pretty extensively, and they both are basically impossible to package.  Most folks now seem to either be deploying into virtual environments or jails / containers / chroots.

This is an example of how easy it is to package Ruby or Python code nowadays: https://github.com/pagerduty/nut - one of several such tools. In this case I'm using an ephemeral container to build a Ruby project (example Dockerfile: https://github.com/PagerDuty/Nut/blob/master/examples/Dockerfile); the resulting artifact is a Debian package that does not need an external Ruby or containers to run. The build process also removes the need to install development tools/libraries in the host OS, offloading that to individual developers/project maintainers.
The use of containers is new because they are available in mainline Linux now, and they greatly simplify things. But the underlying trick - assembling Ruby/Python plus the application code in a build root and then archiving it - is pretty old, and much the same as how distributions themselves are made. I think this will now become more widespread and cross-pollinated (embedded Linux development is similar).
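The ephemeral-container build pattern looks roughly like this - a hypothetical sketch, not the actual Nut example; the base image, package names, and version are invented:

```dockerfile
# Hypothetical throwaway build container: it carries the build toolchain,
# emits a .deb, and is then discarded.
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y ruby ruby-dev build-essential && \
    gem install fpm bundler
COPY . /src
WORKDIR /src
# vendor the gems next to the app (bundling the interpreter itself, which
# makes the artifact fully self-contained, is omitted here for brevity)
RUN bundle install --deployment --path vendor/bundle
# wrap app + vendored gems into a .deb
RUN fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp /src
```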

There is inherent risk there.  But frankly at least it takes those problems off my plate.

=/

Keep it simple stupid, as true today as it ever was.

Simplicity in sophisticated systems is an illusion created by clever abstraction. Like the gears in a car: their implementation is very complex, while their operation (automatic/manual drive) is rather simple. To put it in more industry-compatible words, UX and architecture are orthogonal - stupid == better UX, stupid != simple architecture. systemd unit files are pretty stupid in that way :-)

Devdas Bhagat

Dec 31, 2015, 11:11:32 PM
to dev...@googlegroups.com
On Thu, Dec 31, 2015 at 7:47 PM, Miles Fidelman
<mfid...@meetinghouse.net> wrote:
<snip>
>
> In looking at various distros, it seems like documentation of how to install
> and manage unpackaged software seems to have almost disappeared - i.e.,
> it seems like an awful lot of distros seem to assume that EVERYTHING is packaged.
>

Packaging for the distribution(s) you are running is not very
difficult. The only thing I would suggest not packaging is in-house
code which changes very rapidly.

> At least in my experience, the reverse is more common:
>
> - developers tend to distribute source, built in their language-specific
> development environment, "packaged" for cross-platform building (e.g., a
> .tar file created using gnu autotools), or a .jar file, or what have you
>
It is not very hard to write a spec file (or Debian equivalent).

> - it's pretty rare for developers to package for more than a few,
> particularly popular distros (if they package at all).
>
Why depend on developers?

Just run your own repository.

Devdas Bhagat

Matt Joyce

Dec 31, 2015, 11:22:11 PM
to dev...@googlegroups.com, Devdas Bhagat
Fuck no.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Devdas Bhagat

Dec 31, 2015, 11:30:13 PM
to dev...@googlegroups.com
On Fri, Jan 1, 2016 at 5:21 AM, Matt Joyce <mdj...@gmail.com> wrote:
> Fuck no.
>

What are you objecting to? Packaging? Running your own repo? Both?

Devdas Bhagat

Mark Goldfinch

Jan 1, 2016, 12:20:35 AM
to dev...@googlegroups.com


On 1 Jan 2016 5:11 p.m., "Devdas Bhagat" <devd...@gmail.com> wrote:
> The only thing I would suggest not packaging is in-house
> code which changes very rapidly.

I'd disagree with this statement - with fpm, using OS-style packaging becomes a lot more accessible. Being able to selectively leverage OS-distributed dependencies where it makes sense is powerful.

Whether you're distributing installable artifacts to customers supporting multiple platforms or solely deploying to your own infrastructure is going to present quite different needs.

> Just run your own repository.

I agree with this - I do wish for better repository management tooling though.

Thanks,
Mark.

Ranjib Dey

Jan 1, 2016, 12:33:41 AM
to dev...@googlegroups.com
There are certain disadvantages to packaging some apps in the old-fashioned Debian/RPM way:
- You have to ship the whole thing every time (code, assets, everything); this is a lot slower than svn- or git-based deployments (like what cap does), and a whole different pain when you are deploying many times a day.
- Package-based deployments require the deploying user to have admin privileges; with capistrano-style deployments this is not an absolute requirement.
- Package-based deployments also involve versioning each release; with automated deployments this gets complicated, and automated version bumps have a lot of subtle pitfalls (Google's/Rob Pike's work is epic reading on this topic).
- OS packages allow running admin commands (creating users, removing directories, etc.), but configuration management systems are much better at that sort of thing; when you use both, you have to decide up front what goes in the config management system and what goes in the pre-/post-install hooks of the OS packages.

I personally like package-based deployments for all slow-moving projects. They greatly simplify things from an automation/process standpoint, but they're not perfect. Above all, they require some understanding of packaging: fpm can ease creating the package, but you still have to come up with the pre-/post-install hooks, etc.
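On the hooks point, fpm takes maintainer scripts as flags - a hypothetical invocation, with all names invented and fpm assumed installed:

```shell
# Attach pre/post-install scripts to a generated .deb with fpm.
# Package name, version, and script paths are illustrative.
fpm -s dir -t deb -n myapp -v 1.2.3 \
    --before-install scripts/preinst.sh \
    --after-install scripts/postinst.sh \
    --prefix /opt/myapp ./build
```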


hth
ranjib


Mark Goldfinch

Jan 1, 2016, 12:46:51 AM
to dev...@googlegroups.com


On 1 Jan 2016 6:33 p.m., "Ranjib Dey" <dey.r...@gmail.com> wrote:
> - One has to ship the whole thing every time (code / asset everything) this is lot slower than doing something like svn or git based deployments (like what cap does). This is a whole different pain when you are deploying many times a day.

It should be noted that going this way means you're allowing your apps to read from your code repo. Depending upon how strictly you want to control access to your code repo, this may or may not be a problem for you.

If you have many machines to deploy to, retrieving a set of artifacts over http/s may scale better than performing VCS update operations from one repo to many hosts.

The other points highlighted are valid concerns; do remember that you're already trusting your devs with your app, though.

Thanks,
Mark.

James Holmes

Jan 1, 2016, 1:01:41 AM
to dev...@googlegroups.com

Docker.
