Hello,
We're trying out rolling updates and noticed that new allocations are not reporting their health status as expected.
$ nomad status vh
...
Latest Deployment
ID          = faa23fa7
Status      = running
Description = Deployment is running

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
vh          4        2       0        0
Allocations
ID        Node ID   Task Group  Version  Desired  Status    Created At
20e7ab79  4d9c2742  vh          34       run      running   08/17/17 16:06:32 EDT
40a6a05f  b7132742  vh          34       run      running   08/17/17 16:06:32 EDT
f293f968  17af2742  vh          33       run      running   08/17/17 16:04:21 EDT
741ecd6d  f6922742  vh          33       run      running   08/17/17 16:04:21 EDT
46b28ac0  4d9c2742  vh          33       stop     complete  08/17/17 16:03:45 EDT
d7ba846e  b7132742  vh          33       stop     complete  08/17/17 16:03:45 EDT
$ nomad alloc-status 20e7ab79
...
ID                 = 20e7ab79
Node ID            = 4d9c2742
Job ID             = vh
Job Version        = 34
Client Status      = running
Client Description = <none>
Desired Status     = run
Created At         = 08/17/17 16:06:32 EDT
Deployment ID      = faa23fa7
Deployment Health  = unset
The new allocations eventually become 'unhealthy' once they exceed the healthy_deadline.
Our allocation tasks are all running and the service checks are healthy (we verified in Consul that they're passing!), but alloc-status still shows "Deployment Health = unset".
update {
  max_parallel     = 2
  min_healthy_time = "10s"
  healthy_deadline = "5m"
  health_check     = "checks"
}
service {
  name = "my service name"
  port = "http"
  tags = [
    "${NOMAD_META_env}",
    "version-${NOMAD_META_version}",
    "git-commit-${NOMAD_META_git_commit_id}"
  ]

  check {
    name     = "health endpoint"
    type     = "http"
    port     = "management"
    path     = "/health"
    interval = "10s"
    timeout  = "30s"
  }

  check {
    name     = "app landing page"
    type     = "http"
    port     = "http"
    path     = "/"
    interval = "10s"
    timeout  = "30s"
  }
}
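For context, both checks reference port labels that are declared as dynamic ports in the task's network stanza, roughly like this (a simplified sketch, not the exact config):

resources {
  network {
    # Dynamic port labels referenced by the service ("http")
    # and by the checks ("management" and "http")
    port "http" {}
    port "management" {}
  }
}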
Using health_check = "task_states" does work as expected!
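That is, with the same update stanza and only health_check swapped, the deployment is reported healthy:

update {
  max_parallel     = 2
  min_healthy_time = "10s"
  healthy_deadline = "5m"
  health_check     = "task_states"
}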
Is there something we're missing or failing to set for our service checks?
Thanks,
VH