How to deploy a cluster-wide webhook authorizer?


Filip Filmar

Sep 12, 2017, 11:41:00 AM
to kuberne...@googlegroups.com
Hello k8s dev. 

tl;dr: newbie questions.

(First off, this is my first email to k8s dev.  Sorry if the questions are misdirected, any help with question triage is most welcome and I'll try to heed the advice from there in the future.)

What is a (the?) good way to deploy a cluster-wide webhook authorizer (i.e. if I have multiple clusters, each gets its own authorizer)?

The intention is that a webhook authorizer becomes part of a custom k8s deployment.  This suggests to me that the authorizer program should run alongside the apiserver.  Assuming that's correct, how does one effectively make this configuration work?

It would seem to me that the authorizer cannot run as a pod, or at least cannot use service discovery via DNS, as there would be bootstrapping issues. Is this understanding correct?  If not, what can one do to bootstrap?

Alternatively, how would one run a webhook alongside any other programs that run on the k8s master node? (related: how does a k8s master start up, i.e. where is the list of programs to run on master specified?)

And maybe most relevantly, could you point me at working webhook examples?  I've read the webhook documentation and could find examples of authn servers (thanks to the kind folks on slack.k8s.io) and of admission controllers, but not of webhook authz.

Thanks for your time and help,
F

Eric Chiang

Sep 12, 2017, 12:36:11 PM
to Filip Filmar, kuberne...@googlegroups.com
Filip,

The authz webhook is part of the kube-apiserver binary, which has a flag to supply a config file for where to send the HTTP request. The flag, config format, and expected payload are documented in the webhook authorization docs[1].
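For reference, the config file Eric mentions uses the kubeconfig format: the `clusters` entry describes the remote webhook service, and the `users` entry describes the credentials the kube-apiserver presents as a client. A minimal sketch, where all paths, names, and addresses are hypothetical:

```yaml
apiVersion: v1
kind: Config
# The remote authz service the apiserver will query.
clusters:
- name: authz-webhook
  cluster:
    certificate-authority: /etc/kubernetes/webhook-ca.pem
    server: https://127.0.0.1:8443/authorize
# The credentials the apiserver uses when calling the webhook.
users:
- name: kube-apiserver
  user:
    client-certificate: /etc/kubernetes/apiserver-webhook-client.pem
    client-key: /etc/kubernetes/apiserver-webhook-client-key.pem
contexts:
- name: webhook
  context:
    cluster: authz-webhook
    user: kube-apiserver
current-context: webhook
```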

Where you run your webhook service is totally up to you. You can host it externally to the cluster, on the same node(s) as the kube-apiserver binary, or even in the cluster as a regular kubernetes service.

For bootstrapping it into the cluster as a pod, you'd definitely want to turn on both the webhook and the RBAC authorizer[2] by using the kube-apiserver flag `--authorization-mode=RBAC,Webhook`. This lets you supply RBAC rules for the other components in the system (controller-manager, schedulers, nodes, etc.) and then bootstrap your own authorizer for user authz.
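Put together, the relevant kube-apiserver flags might look like this (the config-file path is hypothetical; the two cache-TTL flags are optional but worth knowing about, since the apiserver caches webhook decisions):

```shell
kube-apiserver \
  --authorization-mode=RBAC,Webhook \
  --authorization-webhook-config-file=/etc/kubernetes/authz-webhook.kubeconfig \
  --authorization-webhook-cache-authorized-ttl=5m \
  --authorization-webhook-cache-unauthorized-ttl=30s
```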

I think the OpenStack Keystone webhook server has an authorizer built in[3].


Eric


Jeremiah Wuenschel

Sep 13, 2017, 2:02:16 PM
to Kubernetes developer/contributor discussion
Just to add to this, we have been running the authn and authz webhooks internally for almost a year with great success. Here are some interesting points about our setup:

- As Eric mentioned, we found it crucial to run RBAC in addition to the webhook. Our rule of thumb is that any control plane component of Kubernetes should be authorized via RBAC and authenticated via TLS or Service Account token. This is to avoid any possibility of control plane failure when our remote auth system becomes unavailable. We include DNS, kube-proxy, and a couple of cluster-critical custom controllers in that list of RBAC-authorized services.
- To minimize latency and maximize reliability, we decided to run one instance of our auth webhooks on every master. In fact, the webhooks are listening on 127.0.0.1 on each master. This also happens to make it very easy to discover.
- We don't actually run docker or the kubelet on the masters right now. This may change in the future, but we elected to run our webhooks as systemd processes for the time being. This does make it pretty easy to install, as we just treat it like other master components.
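For that systemd route, a unit file along these lines could work; the binary name, flags, and paths are all hypothetical, and the `Before=` ordering just makes sure the webhook is listening before the apiserver starts asking it questions:

```ini
# /etc/systemd/system/authz-webhook.service (hypothetical)
[Unit]
Description=Kubernetes authorization webhook
Before=kube-apiserver.service

[Service]
ExecStart=/usr/local/bin/authz-webhook \
  --listen=127.0.0.1:8443 \
  --tls-cert=/etc/kubernetes/webhook.pem \
  --tls-key=/etc/kubernetes/webhook-key.pem
Restart=always

[Install]
WantedBy=multi-user.target
```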

The best approach to setting up the webhooks will depend a lot on how you set your cluster up. I'm happy to go into more details if you want to reach out to me directly.

-jer

Filip Filmar

Sep 13, 2017, 3:52:11 PM
to Jeremiah Wuenschel, Kubernetes developer/contributor discussion, eric....@coreos.com
On Wed, Sep 13, 2017 at 11:02 AM Jeremiah Wuenschel <jeremiah....@gmail.com> wrote:
Just to add to this, we have been running the authn and authz webhooks internally for almost a year with great success. Here are some interesting points about our setup:

Thanks Eric and Jeremiah for the advice.  I've been frantically reading up on the details in parallel, hence my silence on your responses. Sorry about that.

While you've clarified quite a bit for me, some questions remain open (and there are some new ones as well, but... progress!)
  1. Eric seemed to suggest that it is possible to run an authorizer as a pod.  It's not obvious to me how that would be possible, given that the URL for the webhook must be known at apiserver startup time, at which point no pods are running and the webhook's IP address is unknown.  Furthermore, the master processes don't have access to kube-dns, so they couldn't even resolve the service name once it exists. What am I missing?
  2. Jeremiah's advice seems to indicate that it is a good reliability policy to run the webhook authorizer binary alongside the master.  This gives you a predictable localhost address to talk to for the webhook.  However, there's still port contention that needs to be figured out.  But I guess that can be part of cluster configuration.  Am I missing something here, and there is a more automated way?
  3. The question of a development environment where I could try these things out quickly.  I wanted to use minikube because, well, it's there. But I've found that it's not easy to customize minikube if you don't want to run pods or any other "conventional" system components.  Do you have any advice as to: (a) whether minikube is a good fit for such a dev environment; and (b) if not, what a better setup would be?
The best approach to setting up the webhooks will depend a lot on how you set your cluster up. I'm happy to go into more details if you want to reach out to me directly.

As a bit of background, we're trying to add some particular enterprise controls to k8s deployments that work across clusters.  As part of that work, we need to extend the authz mechanism, and webhook was a way to do this without modifying core k8s.  So, I think it makes sense to have a local authz process, similar to what you suggested.

That also makes deployments where the webhook authorizer sits on a machine external to the cluster unacceptable for us.  That is, each cluster has its zone-level setup that will always work locally, whereas we will need to figure out cross-cluster communication as a next step.

Are there any other specifics about the deployment that you need, and that would affect your advice?

Thanks,
Filip 

Jeremiah Wuenschel

Sep 13, 2017, 4:02:27 PM
to Filip Filmar, Kubernetes developer/contributor discussion, eric....@coreos.com
1. Whether or not your masters have access to kube-dns is a matter of how you have things set up. There is no reason that they would be unable to use it; however, I haven't set things up that way, so I can't provide a whole lot of help there. I think the main point is just that if you do set it up that way, kube-dns can use RBAC to do its thing.

2. To be clear, we mostly just wanted to set things up this way to rule out any master->node communication problems. In practice, intra-cluster communication has been absolutely rock solid for a full year. Not quite sure what the port contention issue is that you bring up, but we just picked a specific port and that is the port for our webhooks. It's one that is not used for anything else internally, and that wouldn't even be allowed host->host given our network acls. It's easy enough to find a port to use.

3. Minikube should be a fine place to develop, especially for fast iteration. I use a small cluster deployed with kubeadm into OpenStack.

Filip Filmar

Sep 13, 2017, 4:18:02 PM
to Jeremiah Wuenschel, Kubernetes developer/contributor discussion, eric....@coreos.com
On Wed, Sep 13, 2017 at 1:02 PM Jeremiah Wuenschel <jeremiah....@gmail.com> wrote:
1. Whether or not your masters have access to kube-dns is a matter of how you have things set up. There is no reason that they would be unable to use it, however I haven't set things up that way so I can't provide a whole lot of help there. I think the main point is just that if you do set it up that way, kube-dns can use RBAC to do its thing.

I asked this mostly in the context of minikube, as that's the smallest example I'm looking at right now.  I suppose that one has full freedom to configure the cluster in whatever way they see fit.  At the moment I'm searching for a flexible setup that meshes well with the rest of k8s, but I'm not certain about the correct way forward. To be continued, I guess.

2. To be clear, we mostly just wanted to set things up this way to rule out any master->node communication problems. In practice, intra-cluster communication has been absolutely rock solid for a full year. Not quite sure what the port contention issue is that you bring up, but we just picked a specific port and that is the port for our webhooks. It's one that is not used for anything else internally, and that wouldn't even

My intention is for the webhook solution to be as pluggable as the rest of the k8s components.  In a deployment where the webhook server is a stand-alone binary running on the master, you'd need to make sure that the port you choose for it is not already spoken for by another component.  As long as you have full authority to assign ports on the machines you'd be fine I guess.
 
3. Minikube should be a fine place to develop, especially for fast iteration. I use a small cluster deployed with kubeadm into OpenStack.

Noted, thanks.

F
