> And I don't want to use a network service, because my scrape must be handled by a blackbox exporter on a specific node.
So how are you going to identify which particular node or pod you want to talk to? If it's always the same physical node, then just scrape that node's IP address and bind the exporter to a fixed port on that node. Otherwise, how are you going to identify the pod - by name? By a specific label?
But really, this is exactly what a "service" is for in Kubernetes: to provide a stable access point to a pod (or set of pods). It allows k8s to move that pod to a different node while keeping it reachable on the same IP/port. Your blackbox_exporter pod definitely provides a "service" in this sense, in my opinion. A k8s Service identifies the pod the same way you'd do it manually anyway (e.g. by selecting a pod with a particular label); all it does then is forward traffic to that pod.
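For illustration, a minimal Service for the exporter might look like the sketch below. The name, namespace, and the `app: blackbox-exporter` selector label are assumptions - substitute whatever labels your pod actually carries:

```yaml
# Hypothetical Service for a blackbox_exporter pod.
# Name, namespace, and selector label are assumptions -
# match them to your actual deployment.
apiVersion: v1
kind: Service
metadata:
  name: blackbox-exporter
  namespace: monitoring
spec:
  selector:
    app: blackbox-exporter   # selects the pod by label
  ports:
    - name: http
      port: 9115             # blackbox_exporter's default port
      targetPort: 9115
```

Prometheus could then reach the exporter at blackbox-exporter.monitoring.svc:9115, regardless of which node the pod lands on.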
> Is there a way to mix the 2 discovery services (kubernetes and static)?
Not really. In label rewriting rules, a single __address__ cannot be rewritten to multiple __address__es - it's a 1:1 mapping (or 1:0 if you drop the target). Therefore, if you use kubernetes SD to select one specific pod, then that scrape job can only scrape a single blackbox target.
What I think you'd have to do is:
1. Write a separate program to determine the IP/port of the blackbox exporter. Say it finds 1.2.3.4:9115.
2. From this program, write out a YAML file which merges this with a list of targets you want to probe:
   - targets: ['1.2.3.4:9115']
     labels:
       __param_target: 10.129.100.213
   - targets: ['1.2.3.4:9115']
     labels:
       __param_target: 10.129.100.214
   - targets: ['1.2.3.4:9115']
     labels:
       __param_target: 10.129.100.215
(note: you must also use rewriting rules to copy __param_target to some other persistent label, e.g. "instance" or "target", to ensure each timeseries has a unique set of labels)
3. Use file_sd_configs to read this file
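Putting steps 2 and 3 together, the scrape job might look something like this sketch (the job name, file path, and http_2xx module are assumptions - adjust them to your setup; the relabeling copies __param_target into the instance label as described in the note above):

```yaml
scrape_configs:
  - job_name: 'blackbox-probes'      # job name is an assumption
    metrics_path: /probe             # blackbox_exporter's probe endpoint
    params:
      module: [http_2xx]             # whichever blackbox module you use
    file_sd_configs:
      - files:
          # the file generated in step 2 (path is a placeholder)
          - /etc/prometheus/blackbox-targets.yml
    relabel_configs:
      # copy __param_target to a persistent label so each
      # timeseries has a unique label set
      - source_labels: [__param_target]
        target_label: instance
```

Here __address__ (the blackbox exporter's IP/port) comes from each entry's targets list, and __param_target (the host to probe) comes from its labels.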
But why go to all that trouble, when you could create a single Service resource definition for your blackbox_exporter pod and be done with it?