Hi Gaurav,
The way jmx_exporter works is that it opens an HTTP port, and the Prometheus server connects to that port to retrieve metrics from the exporter. The port is configurable via the command line.
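For reference, the port is part of the javaagent argument itself (the jar filename and config path below are placeholders for whatever your setup uses):

```shell
# <port>:<config> after the '=' — here the exporter listens on 9404
java -javaagent:./jmx_prometheus_javaagent.jar=9404:config.yaml -jar yourapp.jar
```

Prometheus then scrapes http://<host>:9404/metrics.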
I'm not an expert in Amazon's Map Reduce, but in the tutorial they write that the CloudFormation template does the following:
> Configure HDFS Name Node, HDFS Data Node, YARN Resource Manager, and YARN Node Manager processes on the cluster to launch with jmx_exporter as a Java agent.
So if all of these things run on different nodes, or if the jmx_exporters are configured to open different ports, this should be fine.
Maybe you are trying to enable jmx_exporter for map reduce steps in addition to the above? That would explain why running multiple steps in parallel results in an "address already in use" error: each step's JVM tries to bind the same exporter port on the same host. To me it looks like the intention is to attach jmx_exporter only to the components listed above, and not to individual steps.
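If you do want the agent on each step, the only way I can see this working is to give every submission its own port. A sketch of what I mean (paths, config file, and port numbers are made up, not from the blog post):

```shell
# Each spark-submit gets a distinct exporter port, so concurrent
# steps on the same node don't collide on the bind.
spark-submit \
  --conf "spark.driver.extraJavaOptions=-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent.jar=9404:/opt/jmx_exporter/config.yaml" \
  job1.py &

spark-submit \
  --conf "spark.driver.extraJavaOptions=-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent.jar=9405:/opt/jmx_exporter/config.yaml" \
  job2.py &
```

You would then need Prometheus to discover or be configured with all of those ports, which is extra work compared to only instrumenting the long-lived daemons.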
Fabian
On Mon, Mar 01, 2021 at 12:43:34PM -0800, Gaurav Sharma wrote:
>
> Hello Team,
>
> we are trying to export JMX metrics for Spark and using the following AWS
> Blog post, we have implemented a solution :
>
>
>
https://aws.amazon.com/blogs/big-data/monitor-and-optimize-analytic-workloads-on-amazon-emr-with-prometheus-and-grafana/.
>
> We use the jmx exporter as javaagent for spark submits, and while things
> work fine when we submit each step individually, they fail when multiple
> steps are submitted simultaneously. The error message is "Address already
> in use", which is a very common bind issue at the OS level when someone tries
> to use a port which is already in use by another process.
>
> Are there any good ways to have the exporter run multiple JVMs in parallel
> / immediately one after the other?
>