Folks,

I've recently found myself looking for a new distro for our production servers, and thinking a lot about build process automation. And I've noticed something that seems more than a little disturbing.

<background>
One of my motivations has been the impact that widespread adoption of systemd is going to have on our next upgrade. Basically, all of our code works just fine in a classic environment (i.e., sysvinit), but it sure looks like I'll either have to manually tweak all of our init code to run w/ systemd, or migrate to a new platform. I've also been looking at alternate platforms - particularly Solaris derivatives. I'll note that most of our application code is built from source - a basic "./configure; make; make install" - easy enough to automate on any environment that supports classic sysvinit.
</background>

In looking at various distros, documentation of how to install and manage unpackaged software seems to have almost disappeared - i.e., an awful lot of distros seem to assume that EVERYTHING is packaged. At least in my experience, the reverse is more common:

- developers tend to distribute source, built in their language-specific development environment, "packaged" for cross-platform building (e.g., a .tar file created using GNU autotools), or a .jar file, or what have you
- it's pretty rare for developers to package for more than a few particularly popular distros (if they package at all)
- when building production servers, it's a lot more reliable to "./configure; make; make install" than to rely on packages for anything other than utilities and platform stuff
- an awful lot of stuff uses its own dependency resolution mechanisms and repositories (e.g., Perl w/ CPAN)

The discrepancy between what developers produce and an ecosystem that supports easy cross-platform deployment seems to be getting worse. The one possible exception is the sort-of-new GuixSD environment, which takes a devops approach to scripting the build of a Linux system - but it's all alpha code, and it still seems to have the notion that application code has to be packaged, in its own format.

Not sure what can be done, or by whom, but it seems like an issue worth raising. Comments, thoughts?

Miles Fidelman
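FWIW, the "./configure; make; make install" dance is easy enough to wrap in a small script. Here's a minimal sketch - the tarball name, prefix, and the print-only behavior are mine, purely illustrative (drop the echo's to make it actually run the steps):

```shell
#!/bin/sh
# Minimal sketch of automating a classic autotools build.
# It prints the steps rather than running them, so it's safe to try;
# remove the 'echo's to make it real. All names are placeholders.
set -eu

build_from_source() {
    tarball=$1                                 # e.g. app-1.0.tar.gz
    prefix=$2                                  # e.g. /opt/app
    srcdir=$(basename "$tarball" .tar.gz)      # unpacked source directory
    echo "tar xzf $tarball"
    echo "cd $srcdir && ./configure --prefix=$prefix"
    echo "make && make install"
}

build_from_source app-1.0.tar.gz /opt/app
```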
-- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra
You received this message because you are subscribed to the Google Groups "devops" group.
I am not a fan of systemd either. It solves a few small, esoteric problems at the cost of overwhelming complexity, and creates opportunities to build things where you shouldn't. All in all it's been a huge mistake from an automation standpoint. At least that's my opinion.
As far as packaging developers' code... I've had to tangle with Ruby and Python pretty extensively, and both are basically impossible to package. Most folks now seem to be deploying into virtual environments or jails / containers / chroots.
There is inherent risk there, but frankly it at least takes those problems off my plate. Keep it simple, stupid - as true today as it ever was.
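For what it's worth, the virtualenv route for Python looks something like this. The install root is a placeholder I picked, and the pip step is commented out since it depends on the app's requirements (`--without-pip` just keeps the sketch fast; omit it in real use):

```shell
#!/bin/sh
# Sketch: deploy a Python app into its own virtualenv instead of
# trying to package it. APP_DIR is a placeholder.
set -eu

APP_DIR=${APP_DIR:-$(mktemp -d)}              # stand-in for e.g. /opt/myapp
python3 -m venv --without-pip "$APP_DIR/venv" # isolated interpreter + site-packages
# then, typically:
#   "$APP_DIR/venv/bin/pip" install -r requirements.txt
echo "venv created at $APP_DIR/venv"
```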
=/
On 1 Jan 2016 5:11 p.m., "Devdas Bhagat" <devd...@gmail.com> wrote:
> The only thing I would suggest not packaging is in-house
> code which changes very rapidly.
I'd disagree with this statement - with fpm, using OS-style packaging becomes a lot more accessible. Being able to selectively leverage OS-distributed dependencies where it makes sense is powerful.
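To make that concrete, a bare-bones fpm invocation looks roughly like the below. The package name, version, and dependency are invented, and I've wrapped the command in a function that just prints it, so the example runs even where fpm isn't installed:

```shell
#!/bin/sh
# Hypothetical fpm invocation: turn an installed tree under ./build
# into a .deb. Name, version, and dependency are made up.
set -eu

fpm_cmd() {
    printf '%s ' fpm -s dir -t deb \
        -n myapp -v 1.2.3 \
        --prefix /opt/myapp \
        --depends libc6 \
        -C ./build .
    printf '\n'
}

fpm_cmd    # prints the command; run it for real once fpm is installed
```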
Whether you're distributing installable artifacts to customers supporting multiple platforms or solely deploying to your own infrastructure is going to present quite different needs.
> Just run your own repository.
I agree with this - I do wish for better repository management tooling though.
Thanks,
Mark.
--
On 1 Jan 2016 6:33 p.m., "Ranjib Dey" <dey.r...@gmail.com> wrote:
> - One has to ship the whole thing every time (code / assets, everything). This is a lot slower than doing something like svn- or git-based deployments (like what cap does). This is a whole different pain when you are deploying many times a day.
It should be noted that going this way means you're allowing your apps to read from your code repo. Depending upon how strictly you want to control access to your code repo, this may or may not be a problem for you.
If you have many machines to deploy to, retrieving a set of artifacts over http/s may scale better than performing VCS update operations from one repo to many hosts.
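As a sketch of that approach - fetch a built artifact over HTTP(S) and verify a checksum before using it. The URL scheme, function name, and checksum handling are my own placeholders:

```shell
#!/bin/sh
# Sketch: pull a deployable artifact and verify its integrity, instead
# of running a VCS update on every host. Names are placeholders.
set -eu

fetch_artifact() {
    url=$1; sha=$2; dest=$3
    curl -fsSL -o "$dest" "$url"
    # refuse the artifact if the checksum doesn't match
    echo "$sha  $dest" | sha256sum -c - >/dev/null
}
```

In real use you'd serve these from an internal artifact store and record the expected checksum at build time.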
The other points highlighted are valid concerns; do remember that you're already trusting your devs with your app, though.
Thanks,
Mark.
Docker.