Amazon EC2 auto-scaling and minion keys


Oleg Anashkin

May 23, 2013, 5:39:16 PM
to salt-...@googlegroups.com
Hello,

How can I reconcile EC2 auto-scaling managed by Amazon with having the spawned minions automatically accepted by the master?

A solution with preseeded keys will not work because minions are added and removed on demand by Amazon, so their IDs are unknown in advance. Since passing the same key to every instance in user_data is not recommended, I can't see how to dynamically seed a new key on the master every time Amazon decides to spawn a new minion and then pass that new key dynamically in user_data.

I have tried to find an existing solution to this in the docs and on GitHub, but couldn't find anything. What I think I need is a cloud-init script, used by every instance in a scaling group, which makes the dynamically spawned minion auto-connect to the master and configure the instance, without giving up security.
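For illustration, the kind of cloud-init user_data I have in mind would look something like this (the master hostname is just a placeholder):

```yaml
#cloud-config
# Rough sketch of what I want per instance. cloud-init's salt_minion
# module installs the minion and writes its config -- but I still don't
# see how to get the new key trusted on the master securely.
salt_minion:
  conf:
    master: salt-master.internal.example.com
```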

Any ideas?

Thanks,
Oleg

Joseph Hall

May 23, 2013, 6:04:14 PM
to salt-...@googlegroups.com
Hi Oleg,

Have you looked at Salt Cloud?

http://salt-cloud.readthedocs.org/en/latest/

It doesn't address Amazon's own auto-scaling feature at this point,
but it might get you closer to your goal.

Do you know if Amazon sends any kind of trigger anywhere when it
automatically spins up an instance? If we can keep an eye on the
triggers, perhaps we could cook something up to properly seed them.



--
"In order to create, you have to have the willingness, the desire to
be challenged, to be learning." -- Ferran Adria (speaking at Harvard,
2011)

Oleg Anashkin

May 23, 2013, 6:16:58 PM
to salt-...@googlegroups.com
Yes, I did look at salt-cloud, and it doesn't help with my scenario. Applied to EC2, it is essentially just a wrapper on top of boto that lets me spawn instances; I can do the same manually in the AWS console.

Amazon itself doesn't send any trigger, but since cloud-init can execute any custom script, you can program whatever trigger you want there. I guess it's possible for this cloud-init script to call a script on the master which seeds a new key, purges old keys, and returns a unique preseeded key to each minion. This approach should work, but it seems like a complicated workaround, and I think it should be implemented as core Salt functionality.
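Roughly, the master-side helper I'm imagining would be something like this (just a sketch; how the generated key pair gets back to the minion securely is exactly the part I'm hand-waving):

```shell
#!/bin/sh
# Sketch of the master-side seeding helper described above.
# $1 is the new minion id chosen for the freshly spawned instance.
MINION_ID="$1"

# Generate a fresh key pair for this minion id.
salt-key --gen-keys="${MINION_ID}" --gen-keys-dir=/tmp/new-keys

# Preseed the public key so the master already trusts this minion
# when it first connects.
cp "/tmp/new-keys/${MINION_ID}.pub" \
   "/etc/salt/pki/master/minions/${MINION_ID}"
```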

I'm actually surprised that this fairly common use case isn't already supported by Salt.

Joseph Hall

May 23, 2013, 6:30:07 PM
to salt-...@googlegroups.com
Well, not boto specifically, but still just an API wrapper.

It isn't exactly a common use case, at least among our users that have
said anything, but it has been brought up once or twice in passing.
But as you know, it's a difficult thing to attack, especially since
Amazon doesn't seem to have any sort of trigger.

Cloud Init is one solution, hackish though it may or may not be.
Another solution is to take the control from Amazon and exercise it
yourself. Salt now has the ability to both monitor events on a system,
and react to said events. I don't know of anyone currently using this
to do their own autoscaling, but it certainly is possible, and puts
you in control instead of leaving it up to somebody else. Plus, it
would allow you to take advantage of other public clouds, and run a
hybrid cloud with potentially better resilience against regional
downtime.

If I were to tackle this, the first thing I would look at might be
using cloud-init to fire some sort of trigger for you, and use the
reactor system in Salt to deal with it. Because the instance that is
being auto-spun up would have a set of SSH keys that are known to be
good, you could use the saltify driver in Salt Cloud to log in and
spin up Salt for you, and auto-sign the keys.
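As a very rough sketch of the reactor side (the event tag, file paths, and profile name here are all made up, and the exact reactor syntax depends on your Salt version):

```yaml
# /etc/salt/master -- map a hypothetical launch event to a reactor file
reactor:
  - 'ec2/autoscale/launched':
    - /srv/reactor/new_instance.sls
```

```yaml
# /srv/reactor/new_instance.sls -- react by running the saltify profile
# against the new instance, so Salt Cloud logs in over SSH, installs the
# minion, and signs its key.
bootstrap_new_instance:
  runner.cloud.profile:
    - prof: saltify-ec2
    - instances:
      - {{ data['id'] }}
```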

Are we getting closer to a reasonable solution for you?

Oleg Anashkin

May 23, 2013, 7:28:43 PM
to salt-...@googlegroups.com
Thank you for the helpful response, Joseph; I think we are getting closer now.

What I really need is a cost-effective system which can work on auto-pilot and add/remove instances depending on the current system load. This "load" includes CPU levels for frontend servers, I/O levels for databases, and backlog queue length (an application-defined counter) for backend workers. So three different thresholds should control the number of active instances of the corresponding type.

Amazon's auto-scaling features already support this scenario, but I don't mind giving control to Salt if it can do the same. If it is possible, could you please point me to documentation or code samples that do something similar? The reason I wanted to use Amazon auto-scaling is that it's the only way to use spot instances, which cost two to three times less than regular ones. Salt simply doesn't support spot instances; besides, they go up and down very often, and Salt apparently isn't designed for that use case and wants its inventory to be static.

By the way, how does Salt handle the death of the master instance? Can it automatically reconnect all minions to a backup master?

Markus Kirchberg

May 27, 2013, 9:43:07 PM
to salt-...@googlegroups.com
Hi Oleg,

I'm currently using Salt Stack with AWS auto-scaling. Initially I ran into the same issue as you, and I've adopted the following approach:

1) Since preseeding keys or storing keys in user_data was not an option for me either, I decided to isolate my master/minion setup using the AWS Virtual Private Cloud (VPC) feature. The master accepts all minion requests that come from within the controlled VPC environment (you can use AWS security groups to enforce this). I also run a syndic daemon on the master; that way I can manage all minions from outside the VPC. Only the syndic connection to the master-of-masters needs to be manually approved (or established via a preseeded key); as there is only one such master, this was easy to do.
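Concretely, the relevant master settings look something like this (a sketch; the hostname is a placeholder, and the network locking itself is done on the AWS side via security groups):

```yaml
# /etc/salt/master (inside the VPC) -- sketch
# Accept every minion key that arrives; an AWS security group restricts
# ports 4505/4506 to instances inside the VPC, so only our own
# auto-scaled minions can reach the master at all.
auto_accept: True

# The local master also runs salt-syndic, reporting to the
# master-of-masters outside the VPC.
syndic_master: master-of-masters.example.com
```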

2) I ran into an issue (with Salt 0.13) where I couldn't actually remove a minion's key once the minion went offline. This became a problem because I needed to re-use hostnames/IPs. However, this is fixed now: with Salt 0.15 a minion can send a request to have its key removed from the master before it shuts down.
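Concretely, that removal request can be wired into a shutdown hook along these lines (a sketch; I believe the module function is saltutil.revoke_auth, but verify against your version):

```shell
#!/bin/sh
# Shutdown hook sketch: before the instance terminates, ask the master
# to delete this minion's key so the id/IP can be reused safely.
# I believe the function is saltutil.revoke_auth in 0.15+.
salt-call saltutil.revoke_auth
```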


Good luck and let me know if you need more details. 

      Markus

David Ward

Aug 25, 2013, 9:03:04 PM
to salt-...@googlegroups.com
Oleg,

You must have missed this thread from late 2012 in your searching.


I'd like a good solution to this too. Right now, I was using an autosign.conf file, but that sometimes didn't work, so now I am using auto_accept.
I am working in an AWS VPC.

auto_accept seems to be a little broken too (not used much, I suspect).
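For reference, the two settings I mean are master config options along these lines (sketch):

```yaml
# /etc/salt/master -- the two options discussed above (sketch)
# autosign_file points at a file of minion-id globs to auto-accept:
autosign_file: /etc/salt/autosign.conf
# auto_accept accepts every incoming key unconditionally (less safe):
auto_accept: True
```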