I have a service that points to 2 independent pods running separate scrape jobs against the same targets, so their data will be similar but not identical. They run independently for redundancy reasons. Right now a service points to both and exposes the stats. The only problem is that people will see different results depending on which pod they land on. In a traditional datacenter setup, I'd give one pod a weight of 100 on the load balancer so it gets all of the traffic, and the second server would only get traffic if the primary became unavailable. That would be ideal for my situation, so is there a way to implement it with k8s? The front-end system can only point to one target, which is why I'd like the service (or whatever construct does what I described) to handle this. I guess I could add a routing layer with servers to do this, but I was hoping for a native option if such a thing exists.
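Roughly, what I have now looks like this (names, labels and ports below are placeholders, not the real ones):

```yaml
# One Service whose selector matches both scraper pods, so requests are
# spread across them (names/labels/ports here are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: scraper-stats
spec:
  selector:
    app: scraper        # both redundant scraper pods carry this label
  ports:
    - port: 80
      targetPort: 9090  # assumed stats port exposed by each scraper pod
```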
I **guess** there is not: https://github.com/kubernetes/kubernetes/issues/25485. Maybe with some other ingress backend (I doubt it). I also don't think this can be achieved with a Service either :-(
So I don't see any other way than Istio, the issue I posted, or deploying an nginx with that config. But maybe I'm missing something.
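If you do end up with Istio, the weighted part of what you described maps onto a VirtualService. A minimal sketch, assuming each scraper pod sits behind its own Service named scraper-a and scraper-b (those names are invented here), would be roughly:

```yaml
# Hypothetical sketch only: the Service names (scraper-a, scraper-b) and the
# host (scraper-stats) are assumptions, not from the original post.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: scraper-stats
spec:
  hosts:
    - scraper-stats          # the single target the front end points at
  http:
    - route:
        - destination:
            host: scraper-a  # primary scraper: gets all traffic
          weight: 100
        - destination:
            host: scraper-b  # standby scraper: gets none while primary is up
          weight: 0
```

Note that weights alone only give you the 100/0 split; failing over to the standby when the primary dies would still mean flipping the weights (or layering health checking on top), so it's not a complete drop-in for the datacenter LB behavior you described.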