Can you give some more specific examples? What metric are you joining with - perhaps node_uname_info?
Note that the "up" metric will still exist (with a value of 0) when a scrape fails - this means:
(a) you can join on it, and
(b) you can alert on this condition, i.e. the scrape failed / node_exporter is down. Note that this is a different condition from "blackbox_exporter says the host/service is down, but node_exporter is still being scraped". Hence the alerting rule for (up == 0) can be written without the join. There is actually a benefit here: you'll get just one alert when the host goes down, instead of lots.
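To make that concrete, here is a sketch of both patterns. The metric names are standard node_exporter ones, but the job name and the choice of joined label are assumptions about your setup:

```promql
# Join a metric with node_uname_info to attach e.g. the kernel release.
# group_left() copies the "release" label from the info metric onto the result.
node_filesystem_avail_bytes
  * on (instance) group_left (release)
  node_uname_info

# Alert on a failed scrape directly; no join needed:
up{job="node"} == 0
```

Because node_uname_info always has the value 1, the multiplication leaves the left-hand values unchanged and only carries over the extra label.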
Other solutions you can consider:
1. Add labels to your targets at scrape time, either by attaching static labels to the targets (e.g. in a file_sd_config) or by using relabelling.
2. Generate an entirely separate metadata timeseries, which is not scraped from the node itself.
This can be done by:
(a) a static recording rule, as you suggested; see
There it's being used for alert thresholds, but you can just as well do this for metadata as per
(b) a static web page that you scrape, containing all the metadata for all the targets; for an example see
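For option 1, a minimal file_sd sketch. The file path, target addresses, and label names are placeholders for whatever fits your environment:

```yaml
# Targets file referenced from a file_sd_configs block in prometheus.yml,
# e.g. /etc/prometheus/targets/nodes.yml. The labels here are attached to
# every series scraped from these targets.
- targets:
    - "host-a:9100"
    - "host-b:9100"
  labels:
    datacenter: eu-west
    role: webserver
```

Since the labels are applied at scrape time, they are present on the `up` series too, so alerts on `up == 0` get the metadata for free, with no join at all.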
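For option 2(a), a recording rule can emit a constant series that exists only to carry labels. A sketch, where the metric name, instance value, and `hardware` label are invented for illustration:

```yaml
groups:
  - name: metadata
    rules:
      # vector(1) produces a constant sample with no labels; the "labels"
      # field then stamps the metadata onto it.
      - record: node_meta
        expr: vector(1)
        labels:
          instance: "host-a:9100"
          hardware: "dell-r640"
```

You would then join on it the same way as with node_uname_info, e.g. `up * on (instance) group_left (hardware) node_meta`.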
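For option 2(b), the scraped page is just the Prometheus text exposition format, served by any web server and scraped as its own job. The metric name and labels below are invented for illustration; each series carries the metadata as labels and the value is always 1:

```
machine_meta{instance="host-a:9100", datacenter="eu-west", rack="r12"} 1
machine_meta{instance="host-b:9100", datacenter="eu-west", rack="r07"} 1
```

You'll likely want `honor_labels: true` on that scrape job, so the `instance` labels from the page are kept rather than being overwritten with the address of the web server serving it.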