High Availability configuration for dex


Brian Candler

May 19, 2020, 3:39:46 AM5/19/20
to dex-dev
Hello,

I tried searching the docs, but couldn't find any information about the recommended way to deploy Dex in HA or with a standby.

Is it OK to run multiple instances of Dex with a load-balancer in front, or round-robin DNS?

Presumably this requires the instances to have a shared or synchronized database, such as etcd. However, I just wanted to check that there's no other local state which would require the user to hit the same instance each time.
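
For concreteness, here's roughly what I have in mind for the storage section of the Dex config (just a sketch: the endpoints are placeholders, and I'm going from the etcd example in Documentation/storage.md):

    storage:
      type: etcd
      config:
        # each Dex instance points at the same etcd cluster, so auth
        # requests and sessions are visible no matter which frontend
        # the load-balancer or DNS round-robin picks
        endpoints:
          - http://etcd-1:2379
          - http://etcd-2:2379
        namespace: dex/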

Thanks,

Brian.

Tom Downes

May 19, 2020, 10:17:55 AM5/19/20
to Brian Candler, dex-dev
Brian:

I haven't done this myself, but yes, there are shared storage options: Documentation/storage.md in the Dex repository covers etcd, Kubernetes CRDs, and SQL backends.

After configuring that, I'd add a health check that either restarts the pod/instance/whatever within N seconds of a failure, or kills it and fails over to a standby as you suggest (especially if you're using full VMs).
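
For example (just a sketch, assuming Kubernetes and that Dex's /healthz endpoint is exposed on its web listener; the port here matches the default in examples/config-dev.yaml):

    livenessProbe:
      httpGet:
        path: /healthz       # Dex's built-in health endpoint
        port: 5556
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3    # i.e. restart after roughly 30s of failures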

If you actually have N>1 active instances of Dex, I bet (but don't know) that you have to start paying attention to sticky sessions. That might have per-client downtime properties similar to the health-check solution, because any given client will be temporarily stuck to a failed instance. You know your needs better than I do, but I'd start with the simpler approach first.

Tom


Brian Candler

May 19, 2020, 11:28:25 AM5/19/20
to dex-dev
On Tuesday, 19 May 2020 15:17:55 UTC+1, Tom Downes wrote:
If you actually have N>1 active instances of Dex, I bet (but don't know) that you have to start paying attention to sticky sessions.

That's essentially the question I was asking.  Is all state stored in the backend, so that it doesn't matter which front-end the client hits each time?  Or is stickiness required?

FWIW, the likely deployment is in standalone VMs rather than k8s.

Tom Downes

May 19, 2020, 11:35:27 AM5/19/20
to Brian Candler, dex-dev
I would be very surprised if the initial auth flow did not require stickiness. I bet subsequent traffic is more forgiving once you have a storage provider configured.

Unfortunately, it's not spelled out in the documentation, and I've found this list to be quiet when asking questions about OIDC/Google in particular.

Tom



Lincoln Stoll

May 19, 2020, 11:39:57 AM5/19/20
to Tom Downes, Brian Candler, dex-dev
Auth request info is stored in the datastore, so as long as the datastore is shared, no stickiness is needed on the frontends. When we ran Dex, we ran it fine with stateless load-balanced frontends and a shared Postgres database.
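
For reference, the storage section looked something like this (a sketch from memory; host and credentials are placeholders, and the field names follow the Postgres example in Documentation/storage.md):

    storage:
      type: postgres
      config:
        host: db.example.com   # the single shared database behind all frontends
        port: 5432
        database: dex
        user: dex
        password: change-me
        ssl:
          mode: require        # use verify-ca/verify-full with a real CA in production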

Brian Candler

May 19, 2020, 12:34:29 PM5/19/20
to dex-dev
On Tuesday, 19 May 2020 16:39:57 UTC+1, Lincoln Stoll wrote:
Auth request info is stored in the datastore, so as long as the datastore is shared, no stickiness is needed on the frontends. When we ran Dex, we ran it fine with stateless load-balanced frontends and a shared Postgres database.


That's very helpful - thank you!