> My concern is that since this is still for node_exporter port 9100 for both jobs, I'll get duplicate scrapes ingested.
You will, but it's not really a problem, because they will have different labels, {job="se-linux-node-exporter"} and {job="se-linux-XYZ-service"}, and will therefore be stored in different timeseries.
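For example, the same scrape of host1 would land in two separate series (the job names are taken from your question; the exact instance value depends on how your targets are written):

up{job="se-linux-node-exporter", instance="host1:9100"} 1
up{job="se-linux-XYZ-service", instance="host1:9100"} 1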
It's a waste of resources, but otherwise isn't an issue. You do have to be careful with aggregation queries, though: if you don't filter on the correct job label, you may count the same host twice.
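For instance, a bare count(up) counts a duplicated host once per job, whereas filtering on the job label counts it once:

count(up)                                  # duplicated hosts counted twice
count(up{job="se-linux-node-exporter"})    # each host counted once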
It would be more efficient if you could categorise your servers by adding labels, rather than duplicating the scrapes. For example, in your se-linux-node-exporter.json you could have
- labels:
    category: foo
  targets:
    - host1
    - host2
    - host3
- labels:
    category: bar
  targets:
    - host4
    - host5
(I'm showing YAML rather than JSON as it's easier to type :-) Note that file_sd chooses the parser by file extension, so you'd either give the file a .yml/.yaml name or write the equivalent JSON structure in your existing .json file. Then all hosts will be scraped exactly once, but some will have {category="foo"} and some {category="bar"} as extra labels. This is a pretty simple approach, and you can add as few or as many labels to each group of hosts as you like.
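For reference, a minimal sketch of the scrape config that would consume such a file (the job name and file path are assumptions based on your question):

scrape_configs:
  - job_name: se-linux-node-exporter
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/se-linux-node-exporter.yml  # adjust to your actual path

file_sd re-reads the file whenever it changes, so you can add hosts or adjust labels without restarting Prometheus, and once the labels are attached you can aggregate on them directly, e.g. count by (category) (up{job="se-linux-node-exporter"}).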
The other way you can handle it is as described here:
In this approach, you create a completely separate static timeseries for each host, giving whatever metadata you like:
meta{instance="host1",category="foo",rack="R1",owner="alice"} 1
meta{instance="host2",category="bar",rack="R1",owner="bob"} 1
...
However, using that metadata requires more complex PromQL queries, as you have to join between your main metric and your metadata metric.
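For example (assuming the instance labels on meta exactly match those on your node_exporter metrics, and using node_exporter's node_load1 as the main metric), an average load per category would look something like:

avg by (category) (
    node_load1 * on(instance) group_left(category) meta
)

Here group_left copies the category label across from the meta series onto the result, which can then be aggregated by it.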