Hi Peter
Using remote+http with a cluster-internal service (representing the clustered deployment) will not result in load-balanced requests to the cluster. The remote+http protocol establishes a persistent connection; the client will reuse that same connection, so you will see all requests going to the same backend server, which is what is happening.
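For illustration, a minimal client configuration of this kind might look as follows (the Service DNS name wildfly-svc.demo.svc.cluster.local is a placeholder, not from the thread):

```properties
# jndi.properties for the WildFly naming/EJB client
java.naming.factory.initial=org.wildfly.naming.client.WildFlyInitialContextFactory
# remote+http opens one persistent connection to whichever pod the
# k8s Service happens to route the initial connect to; every subsequent
# invocation is multiplexed over that same connection, so the Service
# never gets another chance to balance the traffic.
java.naming.provider.url=remote+http://wildfly-svc.demo.svc.cluster.local:8080
```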
Richard
--
You received this message because you are subscribed to the Google Groups "WildFly" group.
To unsubscribe from this group and stop receiving emails from it, send an email to wildfly+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/wildfly/4cf720e9-bdc4-4d9f-91bd-5dc3284b9c38n%40googlegroups.com.
Hi Peter
Comments inline...
Hi Richard. Can you clarify the last section of your message? You say that I should target the internal IP of one of the pods where a WildFly server is running; the WildFly server will then return the cluster topology with IPs, and the client (the WildFly naming client?) will choose one of these IPs and create a connection to it?
Shouldn't it work exactly like this with a Kubernetes Service? I contact the Service, the Service redirects me to one pod, the pod should return the cluster topology, and the client should then connect directly to a (possibly different) pod. Is the problem that the Service sits in the path of the client -> pod communication?
Yes, it should. In order to use the EJB client with a load balancer (e.g. a k8s Service), you need to use the http protocol, which under the covers delegates load balancing and fail-over to the load balancer and does not make use of persistent connections, as I mentioned previously. The problem is that there is an issue with the EJB client http protocol working with K8s Services, and it is being addressed at the moment. Once it is fixed, the behaviour you expect will be available. But for now, the recommendation is to use the remote or remote+http protocol.
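For comparison, switching to the plain http protocol is a one-line change in the same client configuration; a sketch, again with a placeholder Service name (note that EJB over HTTP is served under the /wildfly-services context on the server):

```properties
# jndi.properties for the WildFly naming/EJB client
java.naming.factory.initial=org.wildfly.naming.client.WildFlyInitialContextFactory
# With the http protocol each invocation is an independent HTTP request,
# so an intermediary load balancer (such as a k8s Service) can spread
# calls across the backend pods instead of pinning them to one connection.
java.naming.provider.url=http://wildfly-svc.demo.svc.cluster.local:8080/wildfly-services
```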
Richard