@wsong: Reiterating the mentions to trigger a notification:
@kubernetes/sig-api-machinery-bugs
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig auth
Your curl example is not using bearer token authentication, only x509 credentials. How are you starting the apiserver in that test?
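For contrast, here is a minimal sketch of a bearer-token request, the kind of credential the question is about; the host, file paths, and OIDC_TOKEN variable are placeholders rather than values from this issue:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Trust the cluster CA, but present no client certificate: with x509
	// auth the credential lives in the TLS handshake (curl --cert/--key),
	// while with OIDC the credential is an Authorization header.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	roots := x509.NewCertPool()
	roots.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: roots},
	}}

	// Equivalent to: curl --cacert ca.pem -H "Authorization: Bearer $OIDC_TOKEN" ...
	req, err := http.NewRequest("GET", "https://127.0.0.1:6443/api/v1/namespaces/default/pods", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OIDC_TOKEN"))

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```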
Here's our API server args:
"--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota,PodNodeSelector",
"--advertise-address=<IP address>",
"--allow-privileged=true",
"--anonymous-auth=false",
"--authorization-mode=Webhook",
"--authorization-webhook-cache-authorized-ttl=0s",
"--authorization-webhook-cache-unauthorized-ttl=0s",
"--authorization-webhook-config-file=/etc/authorization_config.cfg",
"--client-ca-file=<path to authca.pem>",
"--etcd-servers=https://<etcd IP address>",
"--etcd-cafile=<path to ca.pem>",
"--etcd-certfile=<path to cert.pem>",
"--etcd-keyfile=<path to key.pem>",
"--experimental-encryption-provider-config=<path to encryption.cfg>",
"--insecure-port=0",
"--kubelet-certificate-authority=<path to ca.pem>",
"--kubelet-client-certificate=<path to cert.pem>",
"--kubelet-client-key=<path to key.pem>",
"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
"--oidc-client-id=e07d81d0-1eca-4d7c-aefe-25ab7425cdad",
"--oidc-ca-file=<path to ca.pem>",
"--oidc-issuer-url=https://example.com",
"--runtime-config=admissionregistration.k8s.io/v1alpha1,rbac.authorization.k8s.io/v1=false,rbac.authorization.k8s.io/v1beta1=false",
"--secure-port=6443",
"--service-account-key-file=<path to service account key.pem>",
"--service-cluster-ip-range=10.96.0.0/16",
"--service-node-port-range=32768-35535",
"--tls-ca-file=ca.pem",
"--tls-cert-file=cert.pem",
"--tls-private-key-file=key.pem",
"--tls-sni-cert-key=cert.pem,key.pem:localhost,proxy.local",
"--tls-sni-cert-key=cert.pem,key.pem",
"--bind-address=0.0.0.0"
Thanks. A couple more questions:
Here's the curl output for a successful request:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 34.212.253.212...
* TCP_NODELAY set
* Connected to 34.212.253.212 (34.212.253.212) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: ca.pem
CApath: /usr/local/etc/openssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS handshake, CERT verify (15):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: OU=ucp; CN=system:fz1zwhq9g26nkeyt91rd8hl6b
* start date: Jul 3 22:29:00 2018 GMT
* expire date: Oct 1 22:29:00 2018 GMT
* subjectAltName: host "34.212.253.212" matched cert's IP address!
* issuer: CN=UCP Client Root CA
* SSL certificate verify ok.
> GET /api/v1/namespaces/default/pods HTTP/1.1
> Host: 34.212.253.212
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Cache-Control: no-cache, no-store, must-revalidate
< Content-Length: 161
< Content-Type: application/json
< Date: Tue, 03 Jul 2018 22:35:42 GMT
< X-Content-Type-Options: nosniff
< X-Frame-Options: sameorigin
< X-Server-Ip: 172.31.38.154
< X-Xss-Protection: 1; mode=block
<
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/default/pods",
"resourceVersion": "268"
},
"items": []
* Connection #0 to host 34.212.253.212 left intact
}
And here it is for an unsuccessful one:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 34.212.253.212...
* TCP_NODELAY set
* Connected to 34.212.253.212 (34.212.253.212) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: ca.pem
CApath: /usr/local/etc/openssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS handshake, CERT verify (15):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: OU=ucp; CN=system:fz1zwhq9g26nkeyt91rd8hl6b
* start date: Jul 3 22:29:00 2018 GMT
* expire date: Oct 1 22:29:00 2018 GMT
* subjectAltName: host "34.212.253.212" matched cert's IP address!
* issuer: CN=UCP Client Root CA
* SSL certificate verify ok.
> GET /api/v1/namespaces/default/pods HTTP/1.1
> Host: 34.212.253.212
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache, no-store, must-revalidate
< Content-Length: 165
< Content-Type: application/json
< Date: Tue, 03 Jul 2018 22:37:01 GMT
< X-Content-Type-Options: nosniff
< X-Frame-Options: sameorigin
< X-Server-Ip: 172.31.38.154
< X-Xss-Protection: 1; mode=block
<
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
* Connection #0 to host 34.212.253.212 left intact
}
We do not regenerate certs.
I'm unable to reproduce the issue using curl in a tight loop with client certificate auth as shown, even if the apiserver is set up to additionally use OIDC authn; the very first API request that connects succeeds. Are you seeing this for all clients, or only those that attempt to use OIDC bearer tokens?
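(A rough Go equivalent of that tight loop, for anyone who wants to retry the repro; the host and certificate paths are placeholders, and this is an approximation, not the exact test that was run:)

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	roots := x509.NewCertPool()
	roots.AppendCertsFromPEM(caPEM)
	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		RootCAs:      roots,
		Certificates: []tls.Certificate{cert},
	}}}

	// Hammer the API immediately after starting the apiserver; with
	// client-certificate auth every request that connects should be 200,
	// even while the OIDC verifier is still initializing.
	for i := 0; i < 100; i++ {
		resp, err := client.Get("https://127.0.0.1:6443/api/v1/namespaces/default/pods")
		if err != nil {
			fmt.Println("connect error:", err)
		} else {
			fmt.Println(resp.Status)
			resp.Body.Close()
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```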
It happens with all clients, as far as I can tell.
Closed #65785.
I actually haven't seen this on the release version of 1.11.0; I only saw this on 1.11.0-beta.0. Not sure what the root cause was, but perhaps this was fixed in a subsequent commit? I'll go ahead and close this for now.
@wsong I believe the issue is due to how the OIDC authenticator within the apiserver is initialized; it seems to wait 10 seconds before initializing its internal verifier. Reference: plugin/pkg/authenticator/token/oidc/oidc.go
I imagine the fix would be to call wait.PollImmediateUntil rather than wait.PollUntil. I'd be happy to submit a PR.
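For concreteness, a simplified sketch of the difference (not the verbatim upstream code; the attempt closure stands in for the verifier initialization):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stop := make(chan struct{})
	defer close(stop)

	run := func(name string, poll func(time.Duration, wait.ConditionFunc, <-chan struct{}) error) {
		start := time.Now()
		_ = poll(10*time.Second, func() (bool, error) {
			fmt.Printf("%s: first init attempt after %v\n", name, time.Since(start).Round(time.Second))
			return true, nil // pretend the verifier initializes on the first try
		}, stop)
	}

	// Current behavior: PollUntil sleeps a full interval before the first
	// attempt, so for the first 10 seconds there is no verifier and every
	// bearer-token request is rejected with 401 Unauthorized.
	run("PollUntil", wait.PollUntil) // prints "after 10s"

	// Proposed fix: run the condition immediately, then keep the interval.
	run("PollImmediateUntil", wait.PollImmediateUntil) // prints "after 0s"
}
```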
I faced the same issue when testing out https://github.com/jetstack/kube-oidc-proxy, and I believe that @EronWright's suggestion would fix the slow start.
This does not seem to be fixed, at least not for v1.17.9
@feikesteenbergen: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
/reopen
Could you give us repro instructions?
Reopened #65785.
@wsong: This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Currently trying to reliably reproduce the issue.
We're also investigating whether increasing --livez-grace-period would work around it.
(We'd rather have no response for, say, 20 seconds than a 401 Unauthorized response; many tools will retry the former but not the latter.)
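To illustrate that retry asymmetry (fetchWithRetry is a hypothetical helper, not any particular tool's logic):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry retries transport-level failures such as connection
// refused or timeout, which is what clients see while the apiserver is
// held back, but treats any HTTP response, including 401 Unauthorized,
// as final. That is why a delayed answer beats an early 401 for us.
func fetchWithRetry(url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err != nil {
			lastErr = err
			time.Sleep(2 * time.Second) // no response yet: worth retrying
			continue
		}
		return resp, nil // 200 and 401 alike are returned without retry
	}
	return nil, fmt.Errorf("gave up after %d attempts: %w", attempts, lastErr)
}

func main() {
	resp, err := fetchWithRetry("https://127.0.0.1:6443/healthz", 10)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(resp.Status)
}
```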
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Closed #65785.
@enj: Closing this issue.
In response to this:
This should be fixed in v1.21 via #97693
/close