500 Internal Server error with AWS in ap-southeast-2


Josh Girvin

Aug 3, 2014, 7:24:39 AM
to terrafo...@googlegroups.com
Hi all, well done on Terraform! 

I've set up a small Terraform file to create and provision a t2.micro instance on AWS. Being a t2.micro, it needs to be in a VPC, which is no dramas; I've hard-coded it for the moment. But unfortunately, running `terraform apply` fails when creating the instance with a 500 Internal Server error. Am I doing something wrong?


variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "key_path" {}
variable "key_name" {}
variable "aws_region" {
    default = "ap-southeast-2"
}

variable "aws_amis" {
    default = {
        "ap-southeast-2": "ami-ff3751c5"
    }
}


provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "${var.aws_region}"
}

# Our default security group to access
# the instances over SSH and HTTP
resource "aws_security_group" "default" {
    name = "terraform_http_and_ssh"
    description = "Terraform HTTP plus SSH"
    vpc_id = "vpc-6d01e208"

    # SSH access from anywhere
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    # HTTP access from anywhere
    ingress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

# ELB for it 
resource "aws_elb" "phabricator" {
    name = "terraform-basic-elb"

    # The same availability zone as our instance
    availability_zones = ["${aws_instance.phabricator.availability_zone}"]

    listener {
        instance_port = 80
        instance_protocol = "http"
        lb_port = 80
        lb_protocol = "http"
    }

    # The instance is registered automatically
    instances = ["${aws_instance.phabricator.id}"]
}

# AWS instances
resource "aws_instance" "phabricator" {
    connection {
        # The default username for our AMI
        user = "ubuntu"

        # The path to your keyfile
        key_file = "${var.key_path}"
    }
    
    instance_type = "t2.micro"
    ami = "${lookup(var.aws_amis, var.aws_region)}"
    subnet_id = "subnet-8d6b9de8"
    security_groups = ["${aws_security_group.default.id}"]

    provisioner "remote-exec" {
        inline = [
            "sudo apt-get -y update",
            "sudo apt-get -y install nginx",
            "sudo service nginx start",
        ]
    }
}

output "address" {
    value = "${aws_elb.phabricator.dns_name}"
}

That's the terraform config I'm using, and the error is the 500 Internal Server error mentioned above.

Thought I'd ask in the group. I've tried hard-coding the security group and VPC, which gave me the same error, so it probably isn't the same problem this guy was having with Packer; but the fact that he's also using ap-southeast-2 and got the same error makes me wonder: https://github.com/mitchellh/packer/issues/1253

Anyway, thought I'd ask, cheers, and well done!

Josh Girvin

Aug 3, 2014, 9:19:32 PM
to terrafo...@googlegroups.com
Okay, so that issue appears to have resolved itself. I will point out that a few days ago, AWS was reporting (through OpsWorks) that it had run out of capacity in ap-southeast-2, so perhaps that's a failure mode Terraform needs to handle differently?

Now, I have another issue, heh. Using the following config file: http://paste.stacka.to/xejafomoha.tf 

I pass in my key file path and name, and when I do it manually through AWS it works perfectly; however, when using `terraform apply -var key=val` it chokes and dies with this error:


* ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain


Any ideas there? Have I screwed up how it's supposed to work? Do I need to re-order how things are created?
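
For reference, the actual invocation looks roughly like this (the key name and path here are placeholders for my real values):

terraform apply \
    -var "key_name=my-keypair" \
    -var "key_path=/home/josh/.ssh/my-keypair.pem"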

Jack Pearkes

Aug 4, 2014, 6:55:09 PM
to Josh Girvin, terrafo...@googlegroups.com
Josh,

It could be a number of things.

Ensure that the AMI you're launching does in fact have the same username as the one you're providing (looks like "ubuntu"). 

To debug, you can inspect the parameters being sent to AWS with `TF_LOG=1 terraform apply ...`. Additionally, double-check in the console that the key pair is correctly matched and that you're using the right one.
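
One more thing worth checking: the config you posted earlier never sets key_name on the aws_instance, so EC2 may be launching the instance without any key pair attached. Roughly like this (an untested sketch; only the key-related lines are new):

resource "aws_instance" "phabricator" {
    # ... existing arguments (ami, instance_type, subnet_id, ...) ...

    # Must name a key pair already registered in EC2;
    # var.key_path must point at the matching private key
    key_name = "${var.key_name}"

    connection {
        user = "ubuntu"
        key_file = "${var.key_path}"
    }
}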

If you post the generated log, be sure to redact any keys/secrets from it.

Hope those steps help a bit, feel free to post that log if necessary.

Best,

Jack




Sathiya Shunmugasundaram

Feb 22, 2015, 4:10:09 PM
to terrafo...@googlegroups.com
Josh,

Any luck on the "handshake error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"?

The template worked for me when I launched a marketplace RHEL AMI into my personal AWS account. Now I am using a company-created RHEL AMI as the base and running over VPN (no SSH from the public internet). It hits this error, and I'm not sure if it has anything to do with private subnets or some SSH settings baked into the base AMI.

Regardless, I could log in from my SSH terminal using the same key pair.

Thanks
Sathiya

Sathiya Shunmugasundaram

Feb 23, 2015, 1:57:33 PM
to terrafo...@googlegroups.com
I was finally able to get around the issue.

I moved the connection block inside the provisioner and had to pass the host IP.

  provisioner "remote-exec" {
      inline = [
        "echo start remote-exec",
        "sudo chmod -R 750 /opt/springxd/",
        "echo done",
        "ps -ef | grep xd"
      ]
      connection {
        # The default username for our AMI
        type = "ssh"
        host = "${aws_instance.xd-admin-1.private_ip}"
        user = "ec2-user"

        # The path to your keyfile
        key_file = "${var.key_path}"
      }
  }


I am curious about a longer-term concern: when I create multiple instances in one shot, how can I do this? Maybe user_data, but I'm not sure how to pass user_data from a file. The examples I saw with $file give a syntax error.

Thanks
Sathiya

Bruce Wang

Feb 23, 2015, 7:04:35 PM
to terrafo...@googlegroups.com
For cases with more than one instance, it's better to go with a configuration management tool like Salt, Chef, Ansible, Puppet, etc.
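
That said, for the user_data question, the file() interpolation function should work, along these lines (an untested sketch; the count, AMI lookup, and script path are placeholders):

resource "aws_instance" "xd-admin" {
    # Launch several identical instances in one apply
    count = 3

    instance_type = "t2.micro"
    ami = "${lookup(var.aws_amis, var.aws_region)}"

    # Read the bootstrap script from a local file
    user_data = "${file("bootstrap.sh")}"
}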
