anyone see a problem with auto-scaling kiali?


John Mazzitelli

Jan 4, 2021, 3:58:31 PM
to kiali-dev
Someone asked for the operator to be able to create a HorizontalPodAutoscaler (HPA) for Kiali. The HPA will scale Kiali up and down based on whatever spec the user provides.

See: https://github.com/kiali/kiali/issues/3533#issuecomment-749808327
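
To illustrate, here is a sketch of the kind of HPA the operator could create on the user's behalf. The bounds and CPU threshold below are just hypothetical examples (using the autoscaling/v2beta2 API and assuming Kiali is deployed in istio-system):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: kiali
  namespace: istio-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kiali
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80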

This enhancement is done and ready to be merged - I just have to press the "Big Green Buttons".

But, before I do this, I want to ask if anyone has any reservations about it. The only question I have is with the UI - what happens if new Kiali pods are created (or current pods are destroyed) as part of the auto-scaling... will the UI session somehow get broken if UI requests are routed to a new Kiali pod (or to a different pod because the current one was destroyed)?

I know we don't have server-side storage, so this probably isn't a problem. But I just want to make sure, so I'm asking everyone: can you see a problem with having multiple Kiali pods running (and having the pods scale up and down)?

Joel Takvorian

Jan 5, 2021, 3:02:49 AM
to John Mazzitelli, kiali-dev
I don't see anything blocking. We can easily run some tests with the Kiali ReplicaSet manually upscaled, just to make sure. Perhaps a tiny drawback would be sub-optimal use of the cache as long as there are no user-based sticky sessions, but I'd say that is negligible.
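
For example (assuming Kiali lives in the istio-system namespace), the test can be as simple as scaling the deployment by hand and then clicking around the UI:

kubectl scale deployment kiali -n istio-system --replicas=3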



Alissa Bonas

Jan 5, 2021, 3:34:11 AM
to Joel Takvorian, John Mazzitelli, kiali-dev
Thinking out loud... Any issue with users updating/creating the same Istio config YAML while working against different Kiali pods (when autoscaled) at the same time? Race conditions? A user updating a YAML from pod A while a user on pod B sees a different picture? Two users creating an Istio config with the same name via different Kiali pods?
Is that any different from two users working against the same Kiali pod? (Could be a non-issue, but worth thinking about.)

Jay Shaughnessy

Jan 5, 2021, 12:50:16 PM
to kial...@googlegroups.com

I agree about the caching as it's in-memory and therefore pod-specific. But as Joel says, the cache is just a perf optimization and should not be an issue. As for simultaneous updates, I'll also say it's not an issue, or no more of an issue than two users using a single pod. The only thing about scaling up Kiali pods is that it could put more pressure on the resources supporting Kiali, namely the Prometheus and k8s APIs.
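
One more data point on the simultaneous-update question: the Kubernetes API server already guards against stale writes with optimistic concurrency. Every object carries a metadata.resourceVersion, and an update submitted with an out-of-date value is rejected with a 409 Conflict regardless of which Kiali pod relays it; likewise, creating two configs with the same name fails with AlreadyExists. A hypothetical VirtualService fragment showing the field:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
  # if this value is stale at update time, the API server returns 409 Conflict
  resourceVersion: "123456"
spec:
  hosts:
  - reviews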

Edgar Hernández

Jan 5, 2021, 2:25:28 PM
to kial...@googlegroups.com
About the session being destroyed, I don't think it's a problem. It shouldn't be destroyed, because all session data is in the user's browser (not in the Kiali pod).
Indeed, I think we all have killed the Kiali pod while already logged in and, surely, the session persisted.
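
A side note on why this works across replicas: the session cookie is validated with the server-side signing key, and since every replica is stamped out of the same Deployment they all share that key, so any pod can validate a cookie minted by another. Sketching it against the Kiali CR (I believe the field is spec.login_token.signing_key, but treat the exact name as an assumption):

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  login_token:
    # assumed field name; the key is shared by all replicas, so any pod
    # can validate a session cookie created by another pod
    signing_key: "change-me-please"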

Lucas Ponce

Jan 7, 2021, 3:56:17 AM
to John Mazzitelli, kiali-dev
On Mon, Jan 4, 2021 at 9:58 PM John Mazzitelli <ma...@redhat.com> wrote:
It should be mostly transparent.

The state that is held in the Kiali pod is managed by the kubernetes client, so it should be transparent.

There could be a rare edge case where, at some point, a pod is in the middle of a cache sync, but this is expected.

So I don't see a problem.