Hi,
I've been working on setting up an NGINX ingress controller on GKE and am having trouble creating an Ingress resource that carries the annotation kubernetes.io/ingress.class: "nginx".
I'll provide code and examples below, but the short version is that the ingress is created successfully when there is no annotation, but adding the lines below to my ingress YAML results in the ingress never being created.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: nginx-ingress
  annotations:
    # This tells GKE to use only the Nginx Ingress Controller
    # and avoids the creation of a Global LoadBalancer on GKE.
    kubernetes.io/ingress.class: "nginx"
...
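For context, this is roughly how I apply each definition and watch for it to come up (the file name is just a placeholder for the manifests shown below):

# Apply the ingress definition (with or without the annotation)
kubectl apply -f test-ingress.yaml

# Watch the ingress status and address column
kubectl get ing -n nginx-ingress -w

# Inspect events recorded against the ingress
kubectl describe ing -n nginx-ingress test-ingress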
I have confirmed that removing the annotation results in the ingress being successfully created.
Here is the successful ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
and here is the failing definition; apart from the name, the only difference is the kubernetes.io/ingress.class annotation:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  namespace: nginx-ingress
  annotations:
    # This tells GKE to use only the Nginx Ingress Controller
    # and avoids the creation of a Global LoadBalancer on GKE.
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
When reproducing this, I have used both the Kubernetes ingress controller from the official repository:
https://github.com/kubernetes/ingress/tree/master/controllers/nginx
and the controller from the NGINX repository:
https://github.com/nginxinc/kubernetes-ingress/
Here is the Deployment YAML for the Kubernetes version:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: nginx-ingress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
      annotations:
    spec:
      # hostNetwork makes it possible to use IPv6 and to preserve the source IP correctly,
      # regardless of the Docker configuration. However, it is not a hard dependency of the
      # nginx-ingress-controller itself, and it may cause issues if port 10254 is already
      # taken on the host, for example with kubeadm.
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/coffee-svc
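For reference, this is roughly how I confirm the controller pod itself is running and healthy (the label comes from the deployment above):

# Confirm the controller pod is running and ready
kubectl get pods -n nginx-ingress -l k8s-app=nginx-ingress-controller

# Check the pod's events, e.g. failed probes or image pull errors
kubectl describe pods -n nginx-ingress -l k8s-app=nginx-ingress-controller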
With the controller running, kubectl describe for the ingress shows:
21:12:30:complete-example$ kubectl describe ing -n nginx-ingress test-ingress
Name:       test-ingress
Namespace:  nginx-ingress
Address:    <IP REMOVED>
Rules:
  Host  Path     Backends
  ----  ----     --------
        /tea     tea-svc:80 (<none>)
        /coffee  coffee-svc:80 (<none>)
Annotations:
Events: <none>
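Since the describe output records no events, the next place I look is the controller's own log, roughly like this (the pod name placeholder is whatever the previous get pods command returns):

# Find the controller pod and tail its log, looking for the ingress being picked up or rejected
kubectl get pods -n nginx-ingress -l k8s-app=nginx-ingress-controller
kubectl logs -n nginx-ingress <controller-pod-name> --tail=100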
Here is the YAML for the ReplicationController from the NGINX repository:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-plus-ingress-rc
  labels:
    app: nginx-plus-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-plus-ingress
  template:
    metadata:
      labels:
        app: nginx-plus-ingress
    spec:
      containers:
      - imagePullPolicy: Always
        name: nginx-plus-ingress
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        - containerPort: 8080
          hostPort: 8080
        readinessProbe:
          httpGet:
            scheme: HTTPS
            path: /heartbeat
            port: 443
            httpHeaders:
            - name: Host
          periodSeconds: 20
          timeoutSeconds: 20
          successThreshold: 1
          failureThreshold: 10
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # Uncomment the lines below to enable extensive logging and/or customization of
        # NGINX configuration with configmaps
        args:
        - -nginx-plus
        - -v=2
        #- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
After deploying the nginx-plus-ingress:1.0.0-beta0 image, the output of kubectl describe ing -n nginx-ingress test-ingress is:
22:47:16:complete-example$ kubectl describe ing -n nginx-ingress test-ingress
Name:       test-ingress
Namespace:  nginx-ingress
Address:
Rules:
  Host  Path     Backends
  ----  ----     --------
        /tea     tea-svc:80 (<none>)
        /coffee  coffee-svc:80 (<none>)
Annotations:
Events:
  FirstSeen  LastSeen  Count  From                      SubObjectPath  Type    Reason          Message
  ---------  --------  -----  ----                      -------------  ------  ------          -------
  6m         6m        1      nginx-ingress-controller                 Normal  AddedOrUpdated  Configuration for nginx-ingress/cafe-ingress was added or updated
The event is only 6 minutes old in this example, but I've left it for 20 or 30 minutes with no further update. One difference from the Kubernetes controller is that no address is assigned in this case.
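The AddedOrUpdated event suggests this controller at least saw the ingress, so I have also been looking at its log, roughly like this (the label comes from the ReplicationController above, and the pod name placeholder is whatever get pods returns):

# Find the NGINX Plus controller pod and check its log for errors around the ingress
kubectl get pods -n nginx-ingress -l app=nginx-plus-ingress
kubectl logs -n nginx-ingress <nginx-plus-pod-name> --tail=100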
Regardless of which ingress controller image I use, the ingress never finishes being created. The entry in the Discovery & Load Balancing section of the GKE console looks like this: (screenshot omitted)
Can anyone tell me how to debug the process that creates the ingress? I can't find any useful logs using kubectl or in GKE/GCE.
It seems like there's a simple setting that I'm missing in order to make this work, and I've looked through the docs to no avail.
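For reference, these are roughly the commands I have been using while looking for logs; if there is a better place to look, on the controller side or on the GCE side, please point me at it. The resource names are the ones from the manifests above.

# Events in the ingress namespace, newest last
kubectl get events -n nginx-ingress --sort-by=.metadata.creationTimestamp

# Full state of the ingress object, including status and annotations
kubectl get ing -n nginx-ingress test-ingress -o yaml

# On the GCE side, check whether any load-balancer objects were created for it
gcloud compute forwarding-rules list
gcloud compute target-http-proxies list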