Many thanks for the quick response!
The presentation is very useful, thanks for it!
Just to verify a few things:
> These (or rather __meta_kubernetes_service_annotation_*) are meta labels introduced by the Prometheus service discovery. They expose the annotations map from the service/pod as meta labels, so annotations like prometheus.io/scrape become __meta_kubernetes_service_annotation_prometheus_io_scrape. You should set these annotations in your service/pod yaml.
If by default I want to pull all metrics, there is no reason to use __meta_kubernetes_service_annotation_prometheus_io_scrape = "true", is that correct?
Only if I want to exclude some metrics would I set it to "false", and I can define a "drop" rule in the job, right?
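As a concrete sketch (assuming the common prometheus.io/scrape annotation convention; the `go_.*` pattern is just an example), the two pieces would look like this in the scrape config:

```yaml
scrape_configs:
  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only targets whose service is annotated prometheus.io/scrape: "true";
      # omit this rule entirely if you want to scrape everything by default.
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
    metric_relabel_configs:
      # Drop individual unwanted metrics by name (example pattern).
      - source_labels: [__name__]
        action: drop
        regex: 'go_.*'
```

Note the distinction: `relabel_configs` filters whole targets before scraping, while `metric_relabel_configs` filters individual metrics after scraping.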
> More or less, yes. We only use the jobs for the endpoints, so we can use the service name as the job name (as opposed to using the pod name).
You are right; I was thinking about how to add a "service" label to metrics coming from the pod, when instead I should use the "endpoints" role :-)
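With the endpoints role, the owning service's name is already exposed as a meta label, so attaching it is a one-rule relabeling (a sketch):

```yaml
relabel_configs:
  # Copy the discovered service name onto every scraped series as "service"
  - source_labels: [__meta_kubernetes_service_name]
    target_label: service
```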
> We run prometheus as a pod in kubernetes, but it has its drawbacks. Kubernetes isn't so good at managing stateful services (ignoring pet sets in 1.3), so you need to be careful not to blow away all your history (by doing a rolling upgrade, for instance). Use of volumes and config maps can help somewhat here, but you need to be careful.
I am thinking about a solution like this:
the Prometheus pod runs on a dedicated node and keeps its data on the local disk (outside the pod)
the YAML files are kept on persistent storage that all nodes mount
we will run 2 Prometheus instances (active-active) for HA (?)
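The setup above could be sketched roughly like this (the node label, paths, and config map name are all assumptions, not a tested manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  nodeSelector:
    role: prometheus            # assumed label on the dedicated node
  containers:
    - name: prometheus
      image: prom/prometheus
      volumeMounts:
        - name: data
          mountPath: /prometheus        # TSDB data on node-local disk
        - name: config
          mountPath: /etc/prometheus    # scrape config from a config map
  volumes:
    - name: data
      hostPath:
        path: /var/lib/prometheus       # survives pod restarts on that node
    - name: config
      configMap:
        name: prometheus-config
```

One caveat with active-active HA: the two instances scrape independently, so their data will not be identical, and you need something in front of them (or the clients) to pick which one to query.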
Are you using SSDs?
How many metrics do you have?
Thanks again for your help!