Kubernetes JMX support


Vikram Patil

Oct 27, 2017, 10:17:58 PM
to Kubernetes developer/contributor discussion
Hi Guys,

We have a Java application which registers certain MBeans to maintain/look up runtime metrics. On regular on-premises deployments, as well as via Docker, all we needed was to set the following Java properties:

-Dcom.sun.management.jmxremote=true 
-Dcom.sun.management.jmxremote.port=6666 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Djava.rmi.server.hostname=<public hostname/ip>
-Dcom.sun.management.jmxremote.rmi.port=6666
-Dcom.sun.management.jmxremote.local.only=false
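For illustration (class and attribute names here are hypothetical, not from the original application), an MBean of this kind is registered with the platform MBean server, which is what those flags then expose over RMI:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxDemo {
    // Standard MBean convention: the interface name is the
    // implementation class name plus the "MBean" suffix.
    public interface CounterMBean {
        int getCount();
        void increment();
    }

    public static class Counter implements CounterMBean {
        private int count;
        public int getCount() { return count; }
        public void increment() { count++; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=Counter");
        Counter counter = new Counter();
        server.registerMBean(counter, name);
        counter.increment();
        // Read the attribute back through the MBean server,
        // just as jconsole would over the remote connector.
        System.out.println(server.getAttribute(name, "Count"));
    }
}
```

Run with the flags above, this attribute becomes visible in jconsole under `com.example:type=Counter`.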

And we could remotely connect from any local box within the network to this running application via jconsole, using the given hostname/IP and port. But we haven't been very successful via Kubernetes. Currently we are trying this out with Minikube.

Our application also has an HTTP port, and in our service.yml file we expose both TCP ports, HTTP as well as JMX. We can access the HTTP port and the application works fine, but we can't actually access our JMX port.

Any pointers on how this has to be done, or anything additional that needs to be configured?

Thanks,
Vikram

Jay Vyas

Oct 28, 2017, 9:23:17 AM
to Vikram Patil, Kubernetes developer/contributor discussion
At a baseline, make sure you have port 6666 exposed in your container. By default that will make port 6666 exposed at the pod level.

Then, you have two options.

1) You can create a Service which specifically labels this pod and no other pod, so you can access its port 6666 via hostname.

2) Alternatively, Prometheus has a way to embed hostnames into all metrics. So if you use a JMX Prometheus exporter, you can simply scrape JMX data using Prometheus; that will allow you to scrape from a single endpoint for all instances of your app without one Service per pod.
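For option 1, a per-pod Service might look something like this (names and the pod-selecting label are hypothetical; a plain Deployment's pods don't carry a unique label unless you add one, whereas a StatefulSet adds `statefulset.kubernetes.io/pod-name` automatically):

```yaml
# Sketch of a Service whose selector matches exactly one pod.
kind: Service
apiVersion: v1
metadata:
  name: hello-app-0-jmx
spec:
  selector:
    app: hello-app
    pod-instance: hello-app-0   # label carried by exactly one pod
  ports:
    - name: jmx
      protocol: TCP
      port: 6666
      targetPort: 6666
```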

Jay Vyas

Oct 28, 2017, 9:32:39 AM
to Vikram Patil, Kubernetes developer/contributor discussion
I think there was a typo / missing info in the question below - do you mean to say you have exposed 6666 and it's not working? If so, a few more questions might help figure out what's wrong.

Note I have a vested interest in your answers, as we run a lot of Java apps internally that need this sort of glue as well :).

1) Is the connection refused? Or is it an internal Java error after the connection to 6666 is made?

2) Are you exposing a NodePort, or are you going through an LB? Typically egress/ingress at the LB might block some ports by default while allowing standard HTTP/HTTPS ports.

3) Have you tried accessing 6666 inside the cluster from a container? Starting inside the namespace is typically an easy way to figure out where the chain is breaking down (container? pod? service binding? node? cloud LB? or firewall?).
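For point 3, a quick way to see where the chain breaks is a plain TCP probe run from a container inside the cluster - a generic sketch, not tied to any particular tool:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean portOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // refused, timed out, or unreachable
        }
    }

    public static void main(String[] args) {
        System.out.println(portOpen(args[0], Integer.parseInt(args[1]), 2000));
    }
}
```

Probing the pod IP directly, then the Service name, then the external IP narrows down which hop drops the connection.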

On Oct 27, 2017, at 10:17 PM, Vikram Patil <vick...@gmail.com> wrote:


Vikram Patil

Oct 28, 2017, 10:07:22 PM
to Kubernetes developer/contributor discussion

Providing more details and answers to your questions.

Yes, I've already exposed the port (TCP 6666), exactly the same way I exposed the HTTP port my application uses.

#1 - No internal Java error. It's connection refused.
#2 - Tried both, actually: NodePort with Minikube and LoadBalancer with Azure.
#3 - I have yet to try an internal connection. There is no proxy and/or firewall noise. The Service seems straightforward, just like exposing any HTTP-based port. I can get this working on a local Docker container. There are multiple threads online which primarily talk about making sure the JMX and RMI ports are the same.

Here's the service.yaml file I was trying with Azure. I created a static IP in Azure so that I can set it within "java.rmi.server.hostname" upfront.

kind: Service
apiVersion: v1
metadata:
  name: helloservice
spec:
  loadBalancerIP: <static external IP>
  selector:
    app: hello-app
  ports:
    - name: port1
      protocol: TCP
      port: 8109
    - name: port2
      protocol: TCP
      port: 6666
  type: LoadBalancer

With this, I can remotely access my app's HTTP port using <static external IP>:8109, but the same doesn't work for JMX, i.e. <static external IP>:6666.

Additionally, I checked Azure's inbound and outbound network rules, and this port is allowed there.
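One generic sanity check worth running here is to confirm the Service actually has endpoints for both ports before blaming the cloud LB:

```shell
# If 6666 is missing from the endpoints list, the Service-to-pod
# wiring is broken before Azure's LB is even involved.
# (Service name taken from the YAML above.)
kubectl get endpoints helloservice
kubectl describe service helloservice
```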

NitB

Aug 30, 2018, 12:32:30 PM
to Kubernetes developer/contributor discussion
Did you get this working? I am facing the same issue and am interested to know how you solved it.

Thanks,
NitB

Neet

Sep 3, 2018, 6:37:07 AM
to Kubernetes developer/contributor discussion
The following got it working for me.

1. Tomcat (the JVM) should be started with the following arguments:
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.rmi.port=1099
-Djava.rmi.server.hostname=<LoadbalancerIP>
-Dcom.sun.management.jmxremote.local.only=false
2. The LoadBalancer Service should expose the port used (e.g. 1099 in the above example).

In my case, my corporate firewall blocked JMX access, so check whether that is the case for you.
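Putting the two steps together, a container spec fragment might look like this (the image name, the IP, and the use of `CATALINA_OPTS` are illustrative; how the flags actually reach the JVM depends on your image's entrypoint):

```yaml
# Sketch only: 203.0.113.10 stands in for the Service's LoadBalancer IP,
# and CATALINA_OPTS assumes a Tomcat image that picks it up at startup.
containers:
  - name: hello-app
    image: example/hello-app:latest
    ports:
      - containerPort: 1099
    env:
      - name: CATALINA_OPTS
        value: >-
          -Dcom.sun.management.jmxremote.port=1099
          -Dcom.sun.management.jmxremote.rmi.port=1099
          -Dcom.sun.management.jmxremote.authenticate=false
          -Dcom.sun.management.jmxremote.ssl=false
          -Dcom.sun.management.jmxremote.local.only=false
          -Djava.rmi.server.hostname=203.0.113.10
```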

rodolphe.bert...@gmail.com

Sep 18, 2018, 10:16:24 AM
to Kubernetes developer/contributor discussion
What would happen if many pods expose the same RMI configuration on the same host? Can the load balancer service (from Google or Amazon) balance RMI traffic between pods? I believe the Service has an internal port-translation mapping, so pods can expose the same RMI configuration to the Service without an "already in use" port exception.