Nomad Job examples in HCL and JSON

BC

Mar 25, 2016, 7:16:29 PM3/25/16
to Nomad
For some background, I am a Linux system admin who is new to the Docker container world and not well-versed in building and using Docker images. Various dev teams in my company are interested in DevOps and microservices architecture, and we are in the planning stage to determine what tools we want to adopt and what our ecosystem should look like. After using Vagrant for a few years, I have been impressed by and drawn to the HashiCorp tools, and I am currently learning and setting up a Nomad/Consul demo environment in the rackspace.com cloud to showcase some of the features and functionality.

After getting past a few hurdles with how Nomad and Consul integrate, I have a working 5-node cluster with 3 Nomad/Consul servers (installed side-by-side) and 2 Nomad Docker clients/Consul agents. I am running iptables (config below) on all the nodes, have opened the dynamic ports (20000-60000) on my Docker client, and can see Docker dynamically updating iptables when containers are deployed. The Nomad (init) redis example job schedules properly across the Nomad Docker clients using dynamic ports, and the services get registered in Consul as expected. I am not familiar with redis, so accessing and demoing that service externally isn't very useful for me. I set up an example apache job (below), trying a few different apache containers from Docker Hub, but I have not been able to reach the apache service via a URL.

1) Can anyone share a simple and generic apache or tomcat Nomad job in HCL (and perhaps JSON), with instructions on how the service would be reached from a browser? I see the Docker container running on the dynamic port 46665 and tried to reach it via http://<nomad_client_ip>:46665 without any luck.

2) Is there a simple way to convert HCL into a working JSON config to be used via the API?

My example job, most of which is pieced together from the sparse examples I have been able to find:
job "app1" {
    # Run this job in the dfw region
    region = "dfw"

    # Schedule onto the rackspace-DFW datacenter
    datacenters = ["rackspace-DFW"]

    # run this job globally
    #type = "system"

    # Rolling updates should be sequential
    update {
        stagger = "30s"
        max_parallel = 1
    }

    group "web" {
        # We want 5 web servers
        count = 5

        # Create a web front end using a docker image
        task "apache" {
            driver = "docker"
            config {
                image = "eboraas/apache"
            }
            service {
                port = "http"
                check {
                    type = "http"
                    path = "/var/www/html"
                    interval = "10s"
                    timeout = "2s"
                }
            }
            resources {
                cpu = 128
                memory = 128
                network {
                    mbits = 100
                    # Request for a dynamic port
                    port "http" {
                    }
                }
            }
        }
    }
}


Here is my iptables config on my Nomad client:
#  iptables -L -v -n
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
42250  186M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    5   465 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0
 1934  116K ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:22
    0     0 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:4646 state NEW,ESTABLISHED
    0     0 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:4647 state NEW,ESTABLISHED
  140  8400 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8301 state NEW,ESTABLISHED
    0     0 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8302 state NEW,ESTABLISHED
    0     0 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8400 state NEW,ESTABLISHED
    0     0 ACCEPT     udp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            udp dpt:8400 state NEW,ESTABLISHED
    0     0 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8500 state NEW,ESTABLISHED
    0     0 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8600 state NEW,ESTABLISHED
    0     0 ACCEPT     tcp  --  eth1   *       0.0.0.0/0            0.0.0.0/0            multiport dports 20000:60000
   44  2723 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  153  8940 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
  139  9981 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 166 packets, 22562 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination
  135  8028 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.4           tcp dpt:6379
    0     0 ACCEPT     udp  --  !docker0 docker0  0.0.0.0/0            172.17.0.4           udp dpt:6379
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.5           tcp dpt:46665
    0     0 ACCEPT     udp  --  !docker0 docker0  0.0.0.0/0            172.17.0.5           udp dpt:46665
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.6           tcp dpt:43677
    0     0 ACCEPT     udp  --  !docker0 docker0  0.0.0.0/0            172.17.0.6           udp dpt:43677
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           tcp dpt:46628
    0     0 ACCEPT     udp  --  !docker0 docker0  0.0.0.0/0            172.17.0.7           udp dpt:46628


Alex Dadgar

Mar 26, 2016, 12:31:29 AM3/26/16
to BC, Nomad
Hey Brett,

Glad you are considering Nomad. HCL-to-JSON conversion will be coming in Nomad 0.3.2. As for how to access your application, it is a little more involved: if the IP Nomad is placing tasks on is in your private subnet (not publicly accessible), you will need something to NAT traffic to it, such as a load balancer that sits on both your private and public subnets and can look up the service's location using Consul.
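
For example, once the tasks are registered in Consul, a load balancer (or anything else on the private network) could discover the address and dynamic port through Consul's catalog API or DNS interface. A rough sketch, assuming default Consul ports and a service name along the lines of app1-web-apache (the exact name depends on how the service gets registered):

#  curl http://localhost:8500/v1/catalog/service/app1-web-apache
#  dig @127.0.0.1 -p 8600 app1-web-apache.service.consul SRV

The SRV record includes the dynamically assigned port, which is what a load balancer would forward traffic to.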


BC

Mar 28, 2016, 6:01:27 PM3/28/16
to Nomad
Alex,

It's great to hear HCL->JSON is coming in the next release. Seems like that could be very useful. Do you know when v0.3.2 might be released?

As for my networking hurdle, which is probably more related to Docker itself and iptables than to Nomad: my Nomad cluster is running on a private subnet, and for the sake of demoing/testing I was hoping to just access the application via the private address over our site-to-site VPN tunnel. So I don't think any NAT or a load balancer should be needed.

I deployed a vanilla tomcat Docker container on my local workstation and could reach it okay at the private IP over port 8888->8080. Using the same public tomcat container image, I am unable to reach the tomcat service on my Nomad Docker client, which has me baffled at the moment. I can see the container running on the ports below and can see the service listening on port 25988 via netstat, but I am unable to access the following URL: http://10.190.212.7:25988. I know Consul would provide DNS for this service, but I am just bypassing DNS for now and using the Nomad Docker client's IP address. Is there anything obvious that I am missing?

#  docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                                        NAMES
0509cb2671b6        tomcat:8.0          "catalina.sh run"   About an hour ago   Up About an hour    10.190.212.7:25988->8080/tcp, 10.190.212.7:25988->8080/udp   tomcat-332a5c51-5f2a-7206-f24d-7197d6320309


#  netstat -tanp |grep docker
tcp        0      0 10.190.212.7:42432      0.0.0.0:*               LISTEN      25930/docker-proxy
tcp        0      0 10.190.212.7:25988      0.0.0.0:*               LISTEN      28700/docker-proxy


Diptanu Choudhury

Mar 28, 2016, 6:27:26 PM3/28/16
to BC, Nomad
Hi Brett,

I don't see why you wouldn't be able to reach a process running in Docker if you can reach the same app when it's running outside Docker.

Can you check the following so that we can rule out any setup, infrastructure, or firewall issues:

* Jump on the host where the Docker container is running via the Nomad client and try this: curl http://10.190.212.7:25988 (or curl http://localhost:25988 if traffic from localhost is forwarded to the same IP).

* Run the container yourself without Nomad, with port forwarding via the docker CLI, and see if you can reach the container on that server from your desktop via your VPN (see the sketch below). That would help us rule out any Nomad issues.

Also, please make sure the firewall on the server isn't blocking access to the higher ports.
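
For that second check, a minimal sketch, assuming the tomcat:8.0 image and an arbitrary host port such as 8888 (adjust the image, port, and host IP to match your setup):

#  docker run -d -p 8888:8080 tomcat:8.0
#  curl -I http://10.190.212.7:8888

If that responds but the Nomad-scheduled container does not, the problem is likely on the Nomad side; if neither responds, it points at the network or firewall.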

--
Thanks,
Diptanu Choudhury

BC

Mar 28, 2016, 8:04:05 PM3/28/16
to Nomad
Alex/Diptanu,

I figured out my issue, which shows how green I am with containers. I wasn't allocating enough RAM (128 MB) to the tomcat container, which made it initially look like it was running, but after scouring logs I started seeing OOM errors, and Nomad was actually restarting the container after the Linux kernel killed the process. This was the reason the service wasn't available...doh! After allocating 256 MB to the container in my HCL config, I was able to reach the service via the URL as expected.
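
For anyone hitting something similar, a quick way to confirm this kind of OOM kill on the client host is something like the following (the container ID is a placeholder from the docker ps -a output):

#  dmesg | grep -i oom
#  docker ps -a
#  docker inspect --format '{{.State.OOMKilled}}' <container_id>

The kernel log shows the OOM killer activity, and Docker records whether a given container was killed for exceeding its memory limit.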

I really appreciate the help and responsiveness, which gives me a lot more confidence in HashiCorp!

Cheers!


BC

Mar 28, 2016, 10:13:37 PM3/28/16
to Nomad
In case it might help others, here is a simple example tomcat job in HCL:

job "tomcat" {
    region = "dfw"
    datacenters = ["dc1]

    update {
        stagger = "30s"
        max_parallel = 1
    }

    group "dev" {
        count = 5
        task "tomcat" {
            driver = "docker"
            config {
                image = "tomcat:8.0"
                port_map = {
                  http = 8080
                }
            }
            service {
                port = "http"
                check {
                    type = "tcp"

                    interval = "10s"
                    timeout = "2s"
                }
            }
            resources {
                cpu = 500
                memory = 256
                network {
                    mbits = 100
                    port "http" {
                    }
                }
            }
        }
    }
}
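
To run this and see which dynamic port Nomad assigned, something like the following should work, assuming the job above is saved as tomcat.nomad (the allocation ID is a placeholder taken from the status output):

#  nomad run tomcat.nomad
#  nomad status tomcat
#  nomad alloc-status <alloc_id>

The alloc-status output lists the task's resources, including the dynamic "http" port, which is also the port registered in Consul.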


BC

Mar 28, 2016, 10:15:10 PM3/28/16
to Nomad
Here is a corresponding example JSON job for tomcat:

{
  "Job": {
    "ID": "tomcat",
    "Name": "tomcat",
    "Region": "dfw",
    "Type": "service",
    "Priority": 1,
    "Datacenters": [
      "dc1"
    ],
    "TaskGroups": [
      {
        "Name": "cache",
        "Count": 5,
        "Tasks": [
          {
            "Name": "tomcat",
            "Driver": "docker",
            "Config": {
              "image": "tomcat:8.0",
              "port_map": [
                  {
                    "http": 8080
                  }
               ]
            },
            "LogConfig": {
              "MaxFileSizeMB": 10,
              "MaxFiles": 10
            },
            "Services": [
                {
                  "Checks": [
                    {
                      "Timeout": 2000000000,
                      "Interval": 10000000000,
                      "Type": "tcp",
                      "Name": "alive"
                    }
                  ],
                  "PortLabel": "http",
                  "Name": "Tomcat"
                }
              ],
            "Resources": {
              "Networks": [
               {
                 "Mbits": 100,
                 "DynamicPorts": [
                    {
                      "Label": "http"
                    }
                 ]
               }
              ],
              "CPU": 500,
              "MemoryMB": 256,
              "DiskMB": 500
            },
            "Meta": {
                  "foo": "bar"
            }
          }
        ]
      }
    ],
    "Update": {
      "Stagger": 30000000,
      "MaxParallel": 1
    }
  }
}
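
To submit this through the HTTP API, something along these lines should work, assuming the JSON above is saved as tomcat.json and the Nomad API is reachable on the default port 4646 (replace <nomad_server_ip> with a server or client address):

#  curl -X PUT -d @tomcat.json http://<nomad_server_ip>:4646/v1/job/tomcat

A successful registration should return the evaluation ID created for the job.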

