Clarification on use of VPC Router role and VPCs in multiple regions


Ben West

Oct 7, 2015, 6:56:30 PM
to scalr-discuss
I am trying out the open source edition of Scalr v5.8.29 on AWS, specifically testing the ability to deploy farms in private subnets in different VPCs.

I found a couple of posts on this group suggesting the use of VPC peering between the VPC where the Scalr server resides and any VPCs containing private subnets where I want to deploy farms.  I have been able to successfully deploy farms this way, in particular with these settings in the scalr-server.rb of my Scalr server:

routing[:endpoint_host] = <Scalr server internal IP address>
app[:instances_connection_policy] = 'local'

In addition, the instances in these private subnets successfully get access to the outside via a NAT deployed by the VPC Router role in a dedicated "VPC Router Farm" that I created beforehand, per these instructions:
https://scalr-wiki.atlassian.net/wiki/display/docs/Using+VPC+-+External+Scalr+Deployment

Since AWS VPC peering does not span regions, I am curious about the ability (or lack thereof) to deploy farms in VPCs in different regions.  I see that changing the scalr-server.rb settings to this:

routing[:endpoint_host] = <Scalr server external IP address>
#app[:instances_connection_policy] = 'local'

... causes Scalr to require that I specify the VPC router on the Network tab of any roles launched in a private VPC subnet, and this is expected.  However, the launched instances in those roles get stuck in their Initialization state, since they never receive the HostInitResponse message back from the Scalr server.  I can directly ping/SSH between the Scalr server and the launched instances in the private subnets (i.e. confirming security groups and peering are fine), and indeed running "telnet <remote instance internal IP> 8013" from the Scalr server connects fine.
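For anyone wanting to repeat that telnet check across several instances, it can be scripted; here is a minimal sketch (the host/IP values are placeholders — 8013 is the agent messaging port mentioned above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the Scalr messaging port on a launched instance
# (substitute the instance's internal IP for the placeholder)
# port_open("10.0.1.23", 8013)
```

This only tells you the TCP path is open, of course; it says nothing about whether the HostInitResponse is actually being routed back.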

This wiki page appears to indicate that the Scalr server expects two-way communication to all instances it launches, but what isn't clear to me is whether the VPC Router launched by Scalr is supposed to proxy such communication in both directions.
https://scalr-wiki.atlassian.net/wiki/display/docs/Required+Network+Configuration+for+Scalr

I can see that the VPC Router successfully deals with the Cloud Instance -> Scalr Server direction, but I'm not sure if the VPC Router is supposed to be actively proxying the reverse direction, too.  The recommendation to use VPC peering seems to suggest that it is not, and this would impede deployment to private VPC subnets in other regions without setting up a VPN solution.

Also, I'm using the router-ubuntu1204-hvm role in region us-east-1, retrieved from Scalr.net a few days ago.

Ben West

Oct 9, 2015, 12:52:34 PM
to scalr-discuss
For follow-up: I simply wiped the Scalr server component I was testing, along with all VPCs and security groups created for Scalr in AWS, and started again from scratch.

I now have two VPC Router instances, in two different VPC Router Farms, deployed to public VPC subnets in two AWS regions.  I can confirm these routers are successfully proxying communication in both directions between the Scalr server and farms launched inside the private VPC subnets in those regions.  No VPC peering needed.

Note that this arrangement does require that the Scalr server (or its frontend proxy) have a publicly accessible FQDN/IP specified for routing[:endpoint_host] in scalr-server.rb, and it can't use app[:instances_connection_policy] = 'local'.  I use the ec2-*.amazonaws.com hostname of the instance where the Scalr server resides.

A possible detail missed in my original post: changes made to the routing[:endpoint_host] or app[:ip_ranges] params in scalr-server.rb may not get propagated to the AWS security groups that the Scalr server had created beforehand.  That is, you may have to manually apply such changes to the AWS security groups Scalr created for your farms.
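One way to spot that drift before touching the console is to diff the CIDRs you now expect against what a given security group actually allows.  A hypothetical helper (the function name and data shapes are mine, not Scalr's; fetching the group's live ingress rules from AWS is left out):

```python
def ingress_drift(desired_cidrs, current_cidrs):
    """Compare the CIDRs scalr-server.rb now expects against a security
    group's actual ingress CIDRs; return (to_add, to_remove)."""
    desired, current = set(desired_cidrs), set(current_cidrs)
    return sorted(desired - current), sorted(current - desired)

# e.g. after switching endpoint_host to the server's public IP, the old
# internal address may still be the only one the group allows:
to_add, to_remove = ingress_drift(["203.0.113.10/32"], ["10.0.0.5/32"])
```

The actual rule changes would then be applied by hand (or via the AWS CLI/API) to each group Scalr created.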