[Multi-Provider] Hybrid Environment - How to resolve dynamically generated addresses for instances on EC2, Azure, etc. from local VMs?


Joe Reid

Nov 4, 2015, 2:04:52 PM
to Vagrant
We've hit the memory limit on our local machines and I am looking into a sustainable way to run a multi-provider development environment in Vagrant where we have 1-3 VMs locally and one node on EC2 (per developer).  

The problem is not communicating workstation => EC2, but between the local VMs and EC2.  Obviously the EC2 instance will not be on the local network.  I'm trying to figure out how services and tests running on the other local VMs can learn about, and route to, this node.


I'm curious how best to accomplish this.  Parameters:
  • Our Vagrantfile is checked into source control and shared.
  • Each user has their own AWS IAM account with permissions to a shared AWS account.
  • We are okay with an "online only" development environment.
  • Hybrid environment (local VMs and EC2).  We would prefer everything to run locally.  But that ship has sailed.
Some options we considered:
  • Elastic IPs + DNS - Not a great use case for Elastic IPs.
  • Dynamic IPs + VPN + assign DNS as part of the Vagrantfile - Doesn't seem to be supported by vagrant-aws.
  • Run DNS locally that all boxes can resolve against; add a DNS entry with the dynamically assigned IP on `vagrant up` - Doesn't seem to be supported by vagrant-dns.
  • Service discovery running locally - How would the agent on the EC2 box dial back to each developer's local environment to register?
  • Service discovery running on AWS - If we could have the node register with Consul on each `vagrant up` as #{user}.remote_box.domain_name, it would be easy enough to configure the VMs to use that Consul cluster as a resolver and use the individualized FQDNs to route to each developer's box.
Centralized service discovery may be the route we go.  But I'm curious how we would add the register/deregister step to `vagrant up` for this node, and whether there is an easier way that I haven't thought of?
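For what it's worth, here is a rough sketch of what those register/deregister hooks might look like, using the third-party vagrant-triggers plugin and Consul's HTTP catalog API. This is untested; the Consul endpoint `consul.domain_name` and the node name are illustrative:

```ruby
# Sketch only: requires the vagrant-triggers plugin
# (`vagrant plugin install vagrant-triggers`).
require 'etc'
user = Etc.getlogin

Vagrant.configure("2") do |config|
  config.vm.define "hdp" do |hdp|
    hdp.trigger.after :up do
      # EC2 assigns the public address dynamically; ask Vagrant for it.
      ip = `vagrant ssh-config hdp | awk '/HostName/ {print $2}'`.strip
      # Register the node in the Consul catalog under a per-user name.
      run "curl -s -X PUT -d '{\"Node\": \"#{user}.remote_box\", \"Address\": \"#{ip}\"}' " \
          "http://consul.domain_name:8500/v1/catalog/register"
    end
    hdp.trigger.after :destroy do
      # Clean up the catalog entry when the instance goes away.
      run "curl -s -X PUT -d '{\"Node\": \"#{user}.remote_box\"}' " \
          "http://consul.domain_name:8500/v1/catalog/deregister"
    end
  end
end
```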

Sample Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.require_version ">= 1.5.0"
require 'etc'
user = Etc.getlogin
# A combo of the analytics box, the database server, and the cleanroom matching server
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "matcher" do |matcher|
    matcher.vm.hostname = "match.local"
    matcher.vm.box = "centos-6.5"
    matcher.vm.network "private_network", ip: "33.33.33.12"
    matcher.vm.network "forwarded_port", guest: 27017, host: 2700, auto_correct: true  #Mongo
    matcher.vm.network "forwarded_port", id: "ssh", guest: 22, host: 2202 # override SSH port
    matcher.ssh.insert_key = false
    # NFS mounts cannot be owned by a user (here adsummos) that doesn't exist on the base image.
    matcher.vm.synced_folder "../../../O2O", "/home/adsummos/analytics", type: "nfs", create: true
    matcher.vm.synced_folder "../../../chef-repo", "/var/chef-repo", type: "nfs", create: true
    matcher.vm.provider :virtualbox do |virtualbox|
      virtualbox.customize ["modifyvm", :id, "--memory", 1280]
      virtualbox.customize ["modifyvm", :id, "--name", "CleanroomMatchServer"]  # Sets VM name in VirtualBox.
    end
  end
  config.vm.define "hdp" do |hdp|
    hdp.vm.box = "hdp_2.3_box"
    hdp.vm.provider :aws do |aws, override|
      aws.ami = "ami-57cd8732"            # Stock CentOS 6 with HVM
      aws.instance_type = "t2.large"
      aws.subnet_id = "subnet-aebab1da"   # Public subnet
      aws.security_groups = "sg-b69836d0" # SG name = "development_HDP_VMs"
      aws.keypair_name = "korrelate2012"  # Assign the default keypair for SSH.
      # Configure user AWS keys in .profile to be exported as ENV variables.
      aws.access_key_id = ENV['AWS_ACCESS_KEY']
      aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
      # Tag each instance with developer name.
      aws.tags = {
        'Name' => "dev_HDP_#{user}"
      }
      # Must have private component korrelate2012 key pair in .ssh dir.
      override.ssh.username = "ec2-user"
      override.ssh.private_key_path = "~/.ssh/korrelate2012.pem"
    end
  end
end

Regards,
Joe Reid 

Torben Knerr

Nov 4, 2015, 5:27:10 PM
to vagra...@googlegroups.com
Hi Joe,

the vagrant-triggers plugin might be useful in that context too (e.g. look up the EC2 instance's dynamic IP via `vagrant ssh-config <vm>` and write it to a shared state file where the other VMs can read it).

HTH, 
Torben
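A minimal sketch of that lookup, assuming the Vagrant CLI is on the PATH (the state-file name is illustrative):

```ruby
# Extract the HostName (the dynamically assigned EC2 address) from the
# output of `vagrant ssh-config <vm>`.
def ec2_address(ssh_config_output)
  line = ssh_config_output.lines.find { |l| l.strip.start_with?("HostName") }
  line && line.split.last
end

# Usage from a trigger hook (state-file name illustrative):
#   address = ec2_address(`vagrant ssh-config hdp`)
#   File.write(".ec2_address", address) if address
```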

--
This mailing list is governed under the HashiCorp Community Guidelines - https://www.hashicorp.com/community-guidelines.html. Behavior in violation of those guidelines may result in your removal from this mailing list.
 
GitHub Issues: https://github.com/mitchellh/vagrant/issues
IRC: #vagrant on Freenode
---
You received this message because you are subscribed to the Google Groups "Vagrant" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vagrant-up+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/vagrant-up/4cbd5a41-9627-48b9-b291-8226bfa1a4a6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Alvaro Miranda Aguilera

Nov 5, 2015, 6:06:52 AM
to vagra...@googlegroups.com
Hello Joe.

If you are using only one region in AWS, then you can check out Consul: one
DC will be your office, the other DC will be AWS, and you join them over the
WAN.

The office side will require each box to have an IP on the LAN, so everyone
will be able to ping :)

On every Vagrant box you can install the Consul agent and dnsmasq and make
the VMs use Consul as DNS.

The resolv conf:

$resolv = <<-EOF
cat > /etc/resolv.conf <<EOF2
search consul companydomain
nameserver 127.0.0.1
nameserver 192.168.10.11
nameserver 192.168.10.12
EOF2
EOF

Then you can run something like this:

config.vm.provision "shell", inline: $resolv
config.vm.provision "shell", inline: <<-SHELL
  grep consul /etc/dnsmasq.conf ||
    (echo 'server=/consul/127.0.0.1#8600' | tee -a /etc/dnsmasq.conf &&
     service dnsmasq force-reload)
SHELL

For the Consul part, I assume you know how to do it; otherwise, ask :)
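In case it helps, a rough sketch of the WAN join between the two datacenters (a non-authoritative outline; the bind addresses are illustrative and flags should be checked against your Consul version):

```shell
# On an office Consul server (datacenter "office"):
consul agent -server -bootstrap-expect 1 -dc office \
  -data-dir /var/consul -bind 192.168.10.11 &

# On an AWS Consul server (datacenter "aws"):
consul agent -server -bootstrap-expect 1 -dc aws \
  -data-dir /var/consul -bind 10.0.0.11 &

# From either side, join the two datacenters over the WAN (Serf WAN port 8302
# must be open between them):
consul join -wan 192.168.10.11 10.0.0.11

# Services in the other DC then resolve through Consul DNS as e.g.:
#   <service>.service.aws.consul
```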

Alvaro.