Unable to access Tectonic console on vSphere install


Matt Dainty

Mar 2, 2018, 11:30:47 AM
to CoreOS User
I'm trying to install Tectonic on vSphere and I'm getting most of the way except I can't access the console.


DNS is set up so that mycluster-k8s.example.com round-robins across the master nodes, and mycluster.example.com round-robins across the workers. This matches the documentation.

I'm behind an HTTP proxy, so I've set `tectonic_http_proxy_address` and `tectonic_https_proxy_address` as necessary, and also set `tectonic_no_proxy` to `["127.0.0.1", "localhost", ".example.com"]`. The two variables whose documentation I didn't find 100% clear were `tectonic_vmware_controller_domain` and `tectonic_vmware_ingress_domain`, so I set those to `mycluster-k8s.example.com` and `mycluster.example.com` respectively. Is that correct?

Anyway, I ran Terraform successfully and let the VMs boot up. Eventually, after reboots and updates, `kubectl` is usable; however, I can't access the Tectonic console. `kubectl get po --all-namespaces` returns the following:

NAMESPACE         NAME                                                   READY     STATUS             RESTARTS   AGE
kube-system       heapster-57d97cf947-kjxx9                              2/2       Running            0          9m
kube-system       kube-apiserver-9gfbt                                   1/1       Running            0          22m
kube-system       kube-apiserver-lxgm7                                   1/1       Running            0          22m
kube-system       kube-controller-manager-56fb8bdff4-r2v4x               1/1       Running            0          22m
kube-system       kube-controller-manager-56fb8bdff4-tkqkc               1/1       Running            0          22m
kube-system       kube-dns-64cd9cc494-77bvw                              3/3       Running            0          22m
kube-system       kube-flannel-75qj9                                     2/2       Running            0          22m
kube-system       kube-flannel-c7mzk                                     2/2       Running            1          22m
kube-system       kube-flannel-jm74w                                     2/2       Running            0          22m
kube-system       kube-flannel-xwlhf                                     2/2       Running            1          22m
kube-system       kube-proxy-gw6m6                                       1/1       Running            0          22m
kube-system       kube-proxy-nsq9v                                       1/1       Running            0          22m
kube-system       kube-proxy-r9bx9                                       1/1       Running            0          22m
kube-system       kube-proxy-xhvm8                                       1/1       Running            0          22m
kube-system       kube-scheduler-7cd7946554-2s5dh                        1/1       Running            0          22m
kube-system       kube-scheduler-7cd7946554-8tgc2                        1/1       Running            0          22m
kube-system       pod-checkpointer-kjrfr                                 1/1       Running            0          22m
kube-system       pod-checkpointer-kjrfr-mycluster-master-1              1/1       Running            0          15m
kube-system       pod-checkpointer-zl6hz                                 1/1       Running            0          22m
kube-system       pod-checkpointer-zl6hz-mycluster-master-0              1/1       Running            0          20m
tectonic-system   alm-operator-6648b95c9c-fkwpn                          1/1       Running            0          9m
tectonic-system   catalog-operator-674d48cd8b-w8djf                      1/1       Running            0          9m
tectonic-system   container-linux-update-agent-ds-bsgnb                  1/1       Running            0          8m
tectonic-system   container-linux-update-agent-ds-vnfrn                  1/1       Running            0          8m
tectonic-system   container-linux-update-agent-ds-xzrcn                  1/1       Running            0          8m
tectonic-system   container-linux-update-agent-ds-z5t48                  1/1       Running            0          8m
tectonic-system   container-linux-update-operator-5cbf89f785-lh4gv       1/1       Running            0          11m
tectonic-system   default-http-backend-8578559d78-gc2bj                  1/1       Running            0          14m
tectonic-system   etcd-operator-57584f6dfb-s6hmp                         1/1       Running            0          14m
tectonic-system   kube-version-operator-694d77cdf7-k6bg7                 1/1       Running            0          14m
tectonic-system   node-agent-8pgbk                                       1/1       Running            0          14m
tectonic-system   node-agent-jfdlk                                       1/1       Running            0          14m
tectonic-system   node-agent-mqqx5                                       1/1       Running            0          14m
tectonic-system   node-agent-p25hp                                       1/1       Running            0          14m
tectonic-system   prometheus-operator-8696544d6b-t9np6                   1/1       Running            0          12m
tectonic-system   tectonic-alm-operator-fdf48d9f6-2lzqj                  1/1       Running            0          14m
tectonic-system   tectonic-channel-operator-64c6d6c5c-lj4sv              1/1       Running            0          14m
tectonic-system   tectonic-cluo-operator-cd7896cfb-7rns5                 1/1       Running            0          14m
tectonic-system   tectonic-console-569c4c9cdd-pgxmj                      0/1       CrashLoopBackOff   6          14m
tectonic-system   tectonic-console-569c4c9cdd-tknnx                      0/1       CrashLoopBackOff   7          14m
tectonic-system   tectonic-identity-7484d469d7-8qhfq                     1/1       Running            1          14m
tectonic-system   tectonic-identity-7484d469d7-98xv9                     1/1       Running            0          14m
tectonic-system   tectonic-ingress-controller-c44f7cb74-ds49l            1/1       Running            0          14m
tectonic-system   tectonic-monitoring-auth-prometheus-6bb985b767-5wg4f   0/1       CrashLoopBackOff   6          9m
tectonic-system   tectonic-prometheus-operator-cfc6455fc-7qwp4           1/1       Running            0          14m
tectonic-system   tectonic-stats-emitter-7dd77c4b48-dc9vw                2/2       Running            0          14m

i.e. both console pods and a Prometheus auth pod are crashing. The logs from all three show the same error message:

2018/03/2 16:12:42 http: Provider config sync still failing, retrying in 16s: Get https://mycluster.example.com/identity/.well-known/openid-configuration: dial tcp 172.16.142.69:443: getsockopt: connection refused
2018/03/2 16:12:58 http: Provider config sync still failing, retrying in 32s: Get https://mycluster.example.com/identity/.well-known/openid-configuration: dial tcp 172.16.142.70:443: getsockopt: connection refused

It looks like they're trying to access the worker IP addresses.
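For anyone hitting the same thing, a quick way to check where the identity service is actually exposed is something like this (the `tectonic-identity` service name and namespace are assumed from a stock install, so adjust if yours differ):

```shell
# Assumed names from a stock Tectonic install; adjust if yours differ.
kubectl -n tectonic-system get svc tectonic-identity -o wide
kubectl -n tectonic-system get endpoints tectonic-identity
# Then hit the discovery URL directly against one of the worker IPs:
curl -vk https://172.16.142.69/identity/.well-known/openid-configuration
```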

It feels like I'm almost there but I've done something wrong. Any ideas?

Thanks

Matt

Here's the contents of my `terraform.tfvars` with some of the vSphere-specific boilerplate removed:

tectonic_base_domain = "example.com"
tectonic_cluster_name = "mycluster"
tectonic_container_linux_version = "latest"
tectonic_etcd_count = "3"
tectonic_http_proxy_address = "http://user:pass...@proxy.example.com:8080/"
tectonic_https_proxy_address = "http://user:pass...@proxy.example.com:8080/"
tectonic_license_path = "/home/me/license"
tectonic_master_count = "2"
tectonic_no_proxy = ["127.0.0.1", "localhost", ".example.com"]
tectonic_ntp_servers = ["ntp.example.com"]
tectonic_proxy_exclusive_units = false
tectonic_pull_secret_path = "/home/me/secret.json"
tectonic_tls_validity_period = "26280"
tectonic_vanilla_k8s = false
tectonic_vmware_controller_domain = "mycluster-k8s.example.com"
tectonic_vmware_etcd_clusters = {
  "0" = "..."
}
tectonic_vmware_etcd_datacenters = {
  "0" = "..."
}
tectonic_vmware_etcd_datastores = {
  "0" = "..."
}
tectonic_vmware_etcd_gateways = {
  "0" = "172.16.142.254"
  "1" = "172.16.142.254"
  "2" = "172.16.142.254"
}
tectonic_vmware_etcd_hostnames = {
  "0" = "mycluster-etcd-0"
  "1" = "mycluster-etcd-1"
  "2" = "mycluster-etcd-2"
}
tectonic_vmware_etcd_ip = {
  "0" = "172.16.142.64/24"
  "1" = "172.16.142.65/24"
  "2" = "172.16.142.66/24"
}
tectonic_vmware_etcd_memory = "4096"
tectonic_vmware_etcd_networks = {
  "0" = "..."
}
tectonic_vmware_etcd_resource_pool = {
  "0" = "..."
}
tectonic_vmware_etcd_vcpu = "1"
tectonic_vmware_folder = "Tectonic"
tectonic_vmware_ingress_domain = "mycluster.example.com"
tectonic_vmware_master_clusters = {
  "0" = "..."
}
tectonic_vmware_master_datacenters = {
  "0" = "..."
}
tectonic_vmware_master_datastores = {
  "0" = "..."
}
tectonic_vmware_master_gateways = {
  "0" = "172.16.142.254"
  "1" = "172.16.142.254"
}
tectonic_vmware_master_hostnames = {
  "0" = "mycluster-master-0"
  "1" = "mycluster-master-1"
}
tectonic_vmware_master_ip = {
  "0" = "172.16.142.67/24"
  "1" = "172.16.142.68/24"
}
tectonic_vmware_master_memory = "4096"
tectonic_vmware_master_networks = {
  "0" = "..."
}
tectonic_vmware_master_resource_pool = {
  "0" = "..."
}
tectonic_vmware_master_vcpu = "1"
tectonic_vmware_node_dns = "172.16.132.11 172.16.132.12"
tectonic_vmware_server = "vsphere.example.com"
tectonic_vmware_ssh_authorized_key = "ssh-rsa ... me@host"
tectonic_vmware_ssh_private_key_path = ""
tectonic_vmware_sslselfsigned = "true"
tectonic_vmware_type = "vm"
tectonic_vmware_vm_template = "Container Linux 1632.2.1"
tectonic_vmware_vm_template_folder = "Templates"
tectonic_vmware_worker_clusters = {
  "0" = "..."
}
tectonic_vmware_worker_datacenters = {
  "0" = "..."
}
tectonic_vmware_worker_datastores = {
  "0" = "..."
}
tectonic_vmware_worker_gateways = {
  "0" = "172.16.142.254"
  "1" = "172.16.142.254"
}
tectonic_vmware_worker_hostnames = {
  "0" = "mycluster-worker-0"
  "1" = "mycluster-worker-1"
}
tectonic_vmware_worker_ip = {
  "0" = "172.16.142.69/24"
  "1" = "172.16.142.70/24"
}
tectonic_vmware_worker_memory = "4096"
tectonic_vmware_worker_networks = {
  "0" = "..."
}
tectonic_vmware_worker_resource_pool = {
  "0" = "..."
}
tectonic_vmware_worker_vcpu = "1"
tectonic_worker_count = "2"

Matt Dainty

unread,
Mar 2, 2018, 12:10:37 PM3/2/18
to CoreOS User
With a bit of prodding, I've found that if I force port 32000 instead of 443, the failing identity URL responds like so:

{
  "authorization_endpoint": "https://mycluster.example.com/identity/auth",
  "response_types_supported": [
    "code"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ],
  "scopes_supported": [
    "openid",
    "email",
    "groups",
    "profile",
    "offline_access"
  ],
  "token_endpoint_auth_methods_supported": [
    "client_secret_basic"
  ],
  "claims_supported": [
    "aud",
    "email",
    "email_verified",
    "exp",
    "iat",
    "iss",
    "locale",
    "name",
    "sub"
  ]
}

So is the problem some missing 443 -> 32000 iptables-fu?
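If it is, an untested sketch of the sort of rule I mean would be a REDIRECT on each worker (the 32000 NodePort is just what I found above, and I have no idea whether this is how Tectonic intends ingress traffic to flow):

```shell
# Untested sketch: send inbound 443 on a worker to the 32000 NodePort.
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 32000
```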

Matt

dgar...@gmail.com

Mar 6, 2018, 12:31:26 PM
to CoreOS User
I've been running into the same issue and even deployed a 2-node cluster to rule out any confusion. It seems like an iptables issue, but I don't know how to troubleshoot it, and nobody seems able to help on this group or on GitHub. Please let me know if you find something.

In the meantime we've been playing around with OpenShift, but I want to get this going too.

thanks, 

Matt Dainty

Mar 7, 2018, 8:57:22 AM
to CoreOS User
For the archives and posterity: the problem (for me at least) has been traced to the merge of https://github.com/coreos/tectonic-installer/pull/2911, which broke the console.