Hi,
I'm using a custom script to discover nodes; it generates a targets file that Prometheus picks up via file_sd to configure the Node Exporter job.
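For context, the job is wired up with a standard file_sd_configs block, roughly like this (the path and job name here are placeholders, not my exact config):

```yaml
scrape_configs:
  - job_name: node
    file_sd_configs:
      # Prometheus watches this file and picks up changes automatically
      - files:
          - /etc/prometheus/targets/node.yml
```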
On a given day, let's say the file looked like this (the IP addresses are only for illustration):

- targets:
  - instance1.app1.prod:9100 (10.1.1.50)
  - instance2.app1.prod:9100 (10.1.1.51)
After some time, instance2 was destroyed, but its record remained in the targets file. A new app was then created, and the IP that instance2.app1 had was assigned to an instance of that other app:

- targets:
  - instance1.app1.prod:9100 (10.1.1.50)
  - instance2.app1.prod:9100 (10.1.1.51)
  - instance1.app2.prod:9100 (10.1.1.51)
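Since the stale record is what sets this up, here is a minimal sketch of how a discovery script could prune entries whose hostname no longer resolves. The function and names are hypothetical, and the resolver check is injectable so the example is self-contained; a real run would default to an actual DNS lookup:

```python
import socket

def prune_stale_targets(targets, resolves=None):
    """Keep only "host:port" targets whose hostname still resolves.

    resolves: optional callable(host) -> bool so the DNS check can be
    stubbed out; defaults to a real lookup via socket.getaddrinfo.
    """
    if resolves is None:
        def resolves(host):
            try:
                socket.getaddrinfo(host, None)
                return True
            except socket.gaierror:
                return False
    return [t for t in targets if resolves(t.rsplit(":", 1)[0])]

# Example with a stubbed resolver: instance2.app1.prod was destroyed.
alive = {"instance1.app1.prod", "instance1.app2.prod"}
targets = [
    "instance1.app1.prod:9100",
    "instance2.app1.prod:9100",  # stale record
    "instance1.app2.prod:9100",
]
print(prune_stale_targets(targets, resolves=lambda h: h in alive))
# → ['instance1.app1.prod:9100', 'instance1.app2.prod:9100']
```

This only illustrates the pruning idea; whether a destroyed instance's name actually stops resolving depends on how DNS records are cleaned up in the environment.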
This means that the query node_uname_info{job="node", nodename="instance1.app2.prod"} returns two series (unnecessary labels omitted):

node_uname_info{instance="instance2.app1.prod:9100", job="node", nodename="instance1.app2.prod"}
node_uname_info{instance="instance1.app2.prod:9100", job="node", nodename="instance1.app2.prod"}
And that is OK; it is expected, since both targets point at the same host.
The problem started when I recreated instance2.app1.prod with a different IP (let's say 10.1.1.71). The metrics in Prometheus were not updated with the new IP of the hostname instance2.app1.prod.
It is important to note that on the VM where Prometheus is running, if I ping instance2.app1.prod I get the correct IP, 10.1.1.71, but in Prometheus every metric with instance="instance2.app1.prod:9100" still returns the values of the node that now holds its old IP.
I've already restarted Prometheus, but the problem persists.
What do I need to do to make Prometheus scrape the correct IP for this node?
Thanks!