general concepts around /etc/environment and the consistency of the CoreOS environment

Darren Shepherd

Jul 9, 2014, 2:06:35 PM7/9/14
to coreo...@googlegroups.com
I'm kicking this thread off because I've been struggling with /etc/environment lately (not just that it went MIA, it's apparently coming back).  I want to give a bigger picture of what I'm doing so that maybe we can see if there's a different approach or something.  And ideally it would be nice to have my concerns addressed before stable is declared (probably not likely).  I know this is long, but please read.

I've been working on a really fun side project for a while that has allowed me to do some fun experiments with running both KVM and Docker on CoreOS.  Basically you can easily turn a CoreOS cluster into a platform for running both VMs and Containers.  So a lot like OpenStack, but just really, really easy to run and Containers are first-class citizens.  I've been trying to package it up and release it out to the community so other people can play with it, but that's where I've been having problems.  It's hard to ensure that it will work across EC2, GCE, Vagrant, local install, iPXE, etc.

This is my approach.  I wanted to start with the assumption that you have a fleet cluster running.  You then download my unit file and do "fleetctl start fancy.unit".  And voilà!  You have your own personal EC2.  In order for this to work, I have to build upon what is available by default.  So docker already gives me a consistent view of the world and all my stuff runs in containers.  So that part is good.  The big hangup is about the IP of the server.  I need a consistent and reliable way to get the IP of the server.  Since my installation method is from fleet, I can't rely on some special setup from cloud-config, and I don't want whatever cloud-config the user has screwing up things and making my stuff not work.

So in order to get the IP information about the server I've been using /etc/environment and COREOS_PUBLIC_IPV4, COREOS_PRIVATE_IPV4.  I've felt like I've been using some fragile back door by using this file.  Here are my issues:

1) It doesn't seem to be documented anywhere.  Maybe it was never intended to be used, since 367 deleted /etc/environment.
2) It doesn't seem to exist or be populated in all environments.  Bare metal and iPXE don't seem to have the file?
3) /etc/environment is a general-purpose file.  I don't know what may read it, but that file exists on other distros like Ubuntu, etc.  So people for whatever reason may want to put more crap in that file, or overwrite it, in the end screwing with the consistency of the environment.
4) PUBLIC_IPV4 and PRIVATE_IPV4 do not seem well defined and are sometimes empty

It seems like knowing the primary IP and the possible externally NAT'd IP is a basic need for any service discovery.  I want a very well known and consistent way to determine this information.  Here's what I propose; feel free to disagree, I don't assume to know everything.

First, making the definitions clear.  I would like PRIVATE_IPV4 to be defined as the IP that is bound to a local interface on the server.  This IP should be used for private cluster communication, but there is no actual guarantee that this IP traffic is in fact private.  In EC2, for example, this would be your VPC IP (172.16/12) and would essentially be private.  On bare metal, this will just be the IP of the local server, which, depending on the network environment, may or may not be on a private VLAN.  If you are running a service that only needs to be accessible from within the cluster, this is the IP you should use/publish.  Unless the configuration is overwritten (I'll get to configuration later), the IP should be assumed to be the 'src' IP of the default route.  For example, if you run "ip route get 8.8.8.8", the value of src should be used.  This means if you have a fancy setup with multiple bonds and NICs and 15 IPs bound per bond, we can still consistently determine the IP because it's based on the routing rules.
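As a rough sketch of that default rule (not an existing CoreOS script, just an illustration of the logic I'm describing):

    # Take the source address the kernel would use for the default route;
    # this stays consistent even with multiple bonds/NICs and many bound IPs.
    PRIVATE_IPV4=$(ip route get 8.8.8.8 | sed -n 's/.* src \([0-9.]*\).*/\1/p')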

PUBLIC_IPV4 should be defined as the IP that can be published so that external clients can access a service provided by the cluster.  There is no guarantee that this IP is locally bound to an interface, and it is safer to assume it is not.  PUBLIC_IPV4 should always be set.  In EC2 this would be the public IP/Elastic IP that gets NAT'd to your instance.  On bare metal, this by default (I'll talk about configuration later) will be the same as PRIVATE_IPV4.  If your service should be consumed by clients outside the cluster, you should publish this IP as your IP.  But, again, you can't assume it's locally bound.
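Again only as an illustrative sketch (the EC2 metadata path is a real endpoint; the fallback is just the bare-metal default I'm proposing):

    # On EC2 the NAT'd address is available from the instance metadata
    # service; it is not bound to any local interface.
    PUBLIC_IPV4=$(curl -fs http://169.254.169.254/latest/meta-data/public-ipv4)
    # Bare-metal default: fall back to the private address.
    PUBLIC_IPV4=${PUBLIC_IPV4:-$PRIVATE_IPV4}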

Second, configuration.  I don't think these values should be stored in /etc/environment, but instead in a different file like /etc/coreos-server-environment (I don't care about the actual name).  For backwards compatibility the data can be in both, but a different file should be the preferred approach.  The reason for this is consistency.  I don't want people screwing with this file.  As I mentioned before, people may use /etc/environment for their own units' needs and may overwrite the contents of the file from their cloud-config.  They won't have the ability to properly respect all the rules I just defined above, and thus the values of COREOS_*_IPV4 may be bastardized, especially if they use the same cloud-config across multiple environments.  While it's useful to have the $*_ipv4 variables in cloudinit, it's not sufficient that they only exist there (as was made clear after 367 was released).  For my use case, I need to reference this information long after cloudinit has run.

The PUBLIC_IPV4 and PRIVATE_IPV4 should be first-class concepts in CoreOS and as such should have first-class ways of managing them.  If you don't like the default rules, you should be able to provide an OEM script to override them.  For example, /usr/share/oem/bin/coreos-{public,private}-ipv4.  I don't care where it is or what it is called.  The point being that if you have a more complex setup you can redefine them, but they will still be stored and made accessible in a consistent way.  Building more upon this concept, the default logic for defining public-ipv4 and private-ipv4 should be packaged in /usr and should ideally be a shell script.  The reason is that regardless of what the documentation says, people can easily just look at the shell scripts to see what's going on.  The behaviour of getting the PUBLIC_IPV4 IP, which is cloud specific, would then just be packaged in the default OEM partition for that cloud, similar to how /usr/share/oem/bin/coreos-setup-environment works today.
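To make the override idea concrete, a sketch of how the generic script in /usr could defer to an OEM hook (the paths are only the example names from above, not anything that exists today):

    # Prefer the cloud/OEM-specific override when present, otherwise use
    # the default routing-based logic.
    if [ -x /usr/share/oem/bin/coreos-private-ipv4 ]; then
      PRIVATE_IPV4=$(/usr/share/oem/bin/coreos-private-ipv4)
    else
      PRIVATE_IPV4=$(ip route get 8.8.8.8 | sed -n 's/.* src \([0-9.]*\).*/\1/p')
    fi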

What do you think?  My main objective here is that I want these values to be well defined, managed, and first class entities in a CoreOS environment that users can rely on.

Darren

Seán C. McCord

Jul 9, 2014, 3:15:21 PM7/9/14
to coreo...@googlegroups.com
I completely agree with your assessments.  I don't know if your proposed solutions are perfect, but I don't really have anything better to offer at this point.  I would also suggest that whatever facility is chosen to canonize PUBLIC_IPV4 and PRIVATE_IPV4 also be extensible to PUBLIC_IPV6 and PRIVATE_IPV6 (yes, even though there is no NAT in IPv6, there is still a use for having public and private networks).
--
Seán C. McCord
ule...@gmail.com
CyCore Systems

Michael Marineau

Jul 9, 2014, 4:14:54 PM7/9/14
to coreos-dev
On Wed, Jul 9, 2014 at 11:06 AM, Darren Shepherd
<darren.s...@gmail.com> wrote:
> I'm kicking this thread off because I've been struggling with
> /etc/environment lately (not just that it went MIA, it's apparently coming
> back). I want to give a bigger picture to what I'm doing so that maybe we
> can see if there's a different approach or something. And ideally it would
> be nice to have my concerns addressed before stable is declared (probably
> not likely). I know this is long, but please read.

To clarify, it is coming back on EC2/OpenStack. The reason writing
IP addresses there stopped in 367.0.0 was that we moved support for
detecting them from a stand-alone script into cloudinit in order to
gracefully support reading from either an EC2-compatible metadata
service or an OpenStack config drive, which provides the same data.
The previous stand-alone script only worked with the metadata service
and caused booting OpenStack images on systems with only a config
drive to hang forever waiting on the non-existent metadata service.

>
> I've been working on a really fun side project for awhile that has allowed
> me to do some fun experiments with running both KVM and Docker on CoreOS.
> Basically you can easily turn a CoreOS cluster into a platform for running
> both VMs and Containers. So a lot like OpenStack, but just really, really
> easy to run and Containers are first class citizens. I've been trying to
> package it up and release it out to the community so other people can play
> with it, but that's where I've been having problems. It's hard to ensure
> that it will work across EC2, GCE, Vagrant, Local install, iPXE, etc.

Indeed it is hard; fixing that is the eventual goal, but it is a work
in progress.

>
> This is my approach. I wanted to start with the assumption that you have a
> fleet cluster running. You then download my unit file and do "fleetctl
> start fancy.unit". And voilà! You have your own personal EC2. In order
> for this to work, I have to build upon what is available by default. So
> docker already gives me a consistent view of the world and all my stuff runs
> in containers. So that part is good. The big hangup is about the IP of the
> server. I need a consistent and reliable way to get the IP of server.
> Since my installation method is from fleet, I can't rely on some special
> setup from cloud-config, also I don't want whatever cloud-config the user
> has screwing up things and making my stuff not work.
>
> So in order to get the IP information about the server I've been using
> /etc/environment and COREOS_PUBLIC_IPV4, COREOS_PRIVATE_IPV4. I've felt
> like I've been using some fragile back door by using this file. Here's my
> issues

Well, your intuition was right: it is brittle. Setting those
values never worked properly except for EC2, OpenStack, GCE, and
Vagrant. We did document the feature in cloudinit that relies on those
($public_ipv4 and $private_ipv4), but I purposefully didn't document
the /etc/environment half of the implementation because I was doubtful
that it was a workable long-term solution. The primary reason for
adding this in the first place is that the current etcd implementation
is tied closely to a one-node-one-IP setup, so to get etcd working at
all we needed some way to accommodate that limitation. Eventually etcd
will be fixed to lift that limitation, so my hope is that the need for
this will be reduced.

>
> 1) It doesn't seem to be documented anywhere. Maybe it was never intended
> to be used, since 367 deleted /etc/environment.
> 2) It doesn't seem to exist or be populated in all environments. Bare metal
> and iPXE don't seem to have the file?
> 3) /etc/environment is a general purpose file. I don't know what may read
> it, but that files exists on other distros like Ubuntu, etc. So people for
> whatever reason may want to put more crap in that file, or overwrite it, in
> the end screwing with the consistency of the environment.

The scripts that write to this file try to update it rather than
overwrite it for this reason. Granted, they are all quick-n-dirty bash
scripts that are not immune to race conditions, but it was sufficient
for an initial proof-of-concept implementation.

> 4) PUBLIC_IPV4 and PRIVATE_IPV4 do not seem well defined, are sometime empty
>
> It seems like knowing what the primary IP and the possible externally NAT'd
> IP is a basic need for any service discovery. I want a very well known and
> consistent way to determine this information. Here's what I propose, feel
> free to disagree, I don't assume to know everything.

This is the basic summary of the trouble here: it is simply impossible
to robustly guess at what the "primary IP" of a host is, and even
harder to guess at what an external NAT address might be. It should be
possible to at least guess well enough for many environments, but that
must be implemented carefully to avoid hanging for an excessive amount
of time during boot. There will always be plenty of environments where
the guesswork fails, and we need to be well behaved and fast
everywhere.

If a generic implementation for detecting these values returns to
CoreOS I have a couple of requirements to avoid this being a
persistent thorn in our side:
- It must be integrated with coreos-cloudinit. We need to support
configuring networking statically in addition to DHCP, as well as
networks that don't have a route to the Internet. The primary way we
have for users to provide configuration of any kind is via a
cloud-config, the primary consumer of the PUBLIC/PRIVATE_IPV4 values.
This is why I never "fixed" the old generic coreos-setup-environment
script to simply wait until it found a route to the public Internet:
since that implementation required coreos-setup-environment to finish
before coreos-cloudinit could run, waiting on networking could
deadlock boot. So cloudinit needs some fairly complex logic in order
to be able to behave sanely.
- It must have a pretty clear scope of what we can and cannot support.
Networking is complicated and we need to be able to clearly document
when and where it is possible to rely on these values and when it
isn't. Ideally the line between the two doesn't hinge on some
arbitrary timeout during boot; inevitably there will be environments
where networking usually gets configured before the timeout but is
sometimes a little slower, leaving the values undefined and then
breaking services that depend on them in unexpected ways.

Right now I don't know if it is possible to meet those requirements.

The better solution is for services not to assume that networking is
simple and not to depend on being configured in this way. For example,
what I would like to see in etcd is something like this (disclaimer:
this is just my own pondering, not a fully fleshed-out design; etcd's
future may look quite different):
- Support an arbitrary number of addresses per node, advertising all
possible addresses a node knows about to its peers.
- Peers would connect via the first working address, prioritizing
better-looking routes to try first.
- By default nodes would dynamically watch local addresses, updating
the list they advertise as needed.
- In addition to local addresses, it should be possible to give a node
any number of other addresses to advertise in order to support NAT or
complex firewall setups.

Most services don't need to be quite that smart since most services
aren't organized into a many-to-many cluster. A simple service like an
HTTP server should be configured not to care about the local address
at all, or even about what its absolute URL may be, when possible. In
the few cases where an absolute URL must be used (such as in a
redirect from HTTP to HTTPS), the Host: header in the request should
be used.

The bit that lies in between those two in complexity is service
discovery, where services self-publish themselves into etcd or
similar. It is handy when it is possible to do so via a simple curl
command when the service starts and stops, and providing these
variables would facilitate that, but a more intelligent tool would be
better. I think it would be worthwhile to include a tool in CoreOS for
this purpose. It would include the logic you described above for
determining a reasonable default IP to advertise and optionally
include all other addresses as secondary choices. It would also be
able to robustly update and clean up state in etcd, and when used as a
long-running process it could use a TTL and periodic updates to ensure
an unclean shutdown doesn't leave stale data behind. Best of all,
dealing with this sort of logic at this level avoids the complexity of
making sure it works everywhere and doesn't risk deadlocking boot and
cloudinit.
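(For illustration, the curl flavor of that self-publish pattern
against the etcd v2 keys API; the key name, service port, and local
etcd address are made up for the example, and a real tool would be
much more robust than this:)

    # Advertise this host's endpoint with a TTL and refresh it periodically,
    # so an unclean shutdown simply lets the key expire.
    IP="${COREOS_PRIVATE_IPV4:?need an address to advertise}"
    while true; do
      curl -fs -X PUT "http://127.0.0.1:4001/v2/keys/services/myapp/${HOSTNAME}" \
           -d value="${IP}:8080" -d ttl=60 > /dev/null
      sleep 45
    done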

Darren Shepherd

Jul 9, 2014, 5:21:51 PM7/9/14
to coreo...@googlegroups.com
So reading somewhat between the lines it seems like the *IPV4 variables were a quick hack to get etcd working, but they unfortunately got a little wider use than expected.  It sounds like I should just stop using the variables and cook up some other approach that works for me.

The use case I'm personally looking at is where services self-publish themselves into etcd.  This is supposed to be a selling point of etcd in that you can do service discovery, but I'm finding it difficult at the moment to easily do that in CoreOS.  I think CoreOS should definitely do something here to make service discovery much simpler.  But that is complicated and I don't have a great answer.  A lot of people gravitate towards DNS-based solutions (skydns, consul).  I honestly don't think that is all that great because of caching and TTL issues, and it puts your service discovery tool in the critical path.

Darren

Michael Marineau

Jul 9, 2014, 5:29:43 PM7/9/14
to coreos-dev
On Wed, Jul 9, 2014 at 2:21 PM, Darren Shepherd
<darren.s...@gmail.com> wrote:
> So reading somewhat between the lines it seems like the *IPV4 variables were
> a quick hack to get etcd working, but they unfortunately got a little wider
> use than expected. It sounds like I should just stop using the variables
> and cook up some other approach that works for me.

Yeah, the initial thought was that they would be generally useful, but
it wasn't until later that reality fully set in about what a big can
of worms this is.

>
> The use case I'm personally looking at is where services self-publish
> themselves into ETCD. This is supposed to be a selling point of etcd in
> that you can do service discovery, but I'm finding it difficult at the
> moment to easily do that in CoreOS. I think CoreOS should definitly do
> something here to make service discovery much simpler. But that is
> complicated and I don't have a great answer. A lot of people gravitate
> towards DNS based solutions (skydns, consul). I honestly don't think that
> is all that great because of caching and TTL issues and it puts your service
> discovery tool in the critical path.

The current gap here remains because it's taken some time to shake out
which patterns make the most sense with etcd. Now that there's been
enough time for ideas to bake a bit, we need to move some things like
this from proof-of-concept curl commands to usable, production-ready
tools. :)

xie...@gmail.com

Apr 20, 2015, 7:03:13 AM4/20/15
to coreo...@googlegroups.com
Dear all,

I wonder, after all this time, what is the current thinking/design on this issue?

Best regards,

Dong

Alex Crawford

Apr 20, 2015, 1:46:49 PM4/20/15
to coreo...@googlegroups.com
On 04/20, xie...@gmail.com wrote:
> I wonder after this long time, what is the current thinking/design on this
> issue?
We are moving to a much more open and structured model. The basics can
be found in the Ignition docs [1] (Ignition will come to replace
cloudinit). The idea is to drop the variable substitution entirely and
just let systemd handle that for us. The service file will source
whichever files it needs (which may be provided by us or the user) to
define variables. This new model will allow the user/OEM to write
their own metadata service and have it work in exactly the same way as
our metadata service.

[1] https://github.com/coreos/ignition/blob/master/doc/requirements.md#oem-metadata
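As a rough illustration of that model (the file path and variable name
here are made up for the example, not a published interface), a unit
would simply do something like:

    [Service]
    # File written by whichever metadata service is in use (ours or the
    # user's/OEM's); systemd expands the variable when the service starts.
    EnvironmentFile=/run/metadata/node
    ExecStart=/usr/bin/my-service --advertise-ip=${NODE_PRIVATE_IPV4}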

-Alex

Dong Xie

Apr 20, 2015, 1:51:20 PM4/20/15
to coreo...@googlegroups.com
Great! Time wise?

Alex Crawford

Apr 20, 2015, 2:13:02 PM4/20/15
to coreo...@googlegroups.com
On 04/20, Dong Xie wrote:
> Great! Time wise?

Tough to say. It depends on how busy I am. :)
I'd guess that we'll have something by the beginning of July.

-Alex

xie...@gmail.com

Apr 20, 2015, 2:37:52 PM4/20/15
to coreo...@googlegroups.com
Hi, Alex,

So I posted my other question in this thread:
https://groups.google.com/forum/#!topic/coreos-dev/YbjPYCldhDo

Let's assume you will have plenty of time to release this into (stable?) by July. :)  Then what should be the interim solution?  Is it worth building an OEM, or should I just use a simpler solution?  (One I can think of: define a unit that can somehow run after the 20-cloudinit file is written for etcd and before etcd is started, with a bash script to do the string substitution. Is this possible?)

Thanks a lot!

Best,

Dong

Alex Crawford

Apr 20, 2015, 2:44:20 PM4/20/15
to coreo...@googlegroups.com
On 04/20, xie...@gmail.com wrote:
> So I posted my other question in this thread:
> https://groups.google.com/forum/#!topic/coreos-dev/YbjPYCldhDo

I'll leave a full response over there. (I have it queued up in my
mailbox)

-Alex