If you need to "do work" to collect metrics, you do that work whenever you receive a scrape. That's what a custom collector is for: it's code that runs at scrape time and contains whatever special logic you need to generate metrics on demand.
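For illustration, here is a minimal sketch of a custom collector with the Python client. The metric name and the `get_queue_depth` helper are hypothetical stand-ins for whatever expensive work you need to do:

```python
from prometheus_client.core import GaugeMetricFamily, REGISTRY
from prometheus_client.registry import Collector

class QueueDepthCollector(Collector):
    """collect() is called on every scrape, so the value is computed on demand."""
    def collect(self):
        depth = get_queue_depth()  # hypothetical helper that does the expensive work
        yield GaugeMetricFamily('myapp_queue_depth', 'Jobs currently queued', value=depth)

REGISTRY.register(QueueDepthCollector())
```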
However, in the normal case you are simply setting gauges or incrementing counters in response to external events, as part of the application's normal flow. You do those gauge or counter updates whenever the events occur; the metrics are then maintained continuously, and the scrape only needs to return their current values. In that case, no custom collector is required.
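For example, with the Python client (the metric names, `handle_request`, and `job_queue` are hypothetical):

```python
from prometheus_client import Counter, Gauge

REQUESTS = Counter('myapp_requests_total', 'Requests handled')
QUEUE_DEPTH = Gauge('myapp_queue_depth', 'Jobs currently queued')

def handle_request(request):            # hypothetical application code
    REQUESTS.inc()                      # update happens as part of normal flow
    QUEUE_DEPTH.set(len(job_queue))     # hypothetical queue
    ...                                 # do the actual work
```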
The example code you posted has a dummy polling loop to update metrics intermittently. That's not what you'd do in real life. Rather, you'd replace `run_metrics_loop` with `run_my_application`, and insert code to update metrics within the application itself, at strategic points where it does work (say, when it processes a REST API request, or whatever else it does that needs measuring).
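That is, roughly (the function names `accept_request` and `process` are hypothetical):

```python
from prometheus_client import start_http_server, Histogram

REQUEST_TIME = Histogram('myapp_request_seconds', 'Time spent handling requests')

def run_my_application():
    start_http_server(8000)           # serves /metrics from a background thread
    while True:
        request = accept_request()    # hypothetical: the app's real work loop
        with REQUEST_TIME.time():     # instrument at a strategic point
            process(request)          # hypothetical
```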
Often you're integrating the exporter into an existing application, in which case you simply insert the metric updates into the structure of that application.
You said you're instrumenting a "linux app": is that Linux app also written in Python? If so, add your exporter to the application. If not, how are you collecting data from it?
If the app is not written in Python, and you can modify its source code but don't want to integrate an exporter in whatever language the app is written in, then here are some other approaches that don't involve writing a separate exporter:
- the app can send update messages to statsd_exporter, which maintains the gauge and counter state for you (see the first sketch after this list)
- the app can write metrics to text files, to be picked up by node_exporter's textfile collector (see the second sketch after this list)
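For the statsd_exporter route, the app only needs to emit UDP datagrams in the statsd line protocol. Here is a sketch in Python purely for illustration (your app would do the equivalent in its own language; the metric names are hypothetical, and 9125 is statsd_exporter's default UDP port):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# statsd line protocol: <name>:<value>|<type>
sock.sendto(b'myapp.requests:1|c', ('localhost', 9125))      # increment a counter
sock.sendto(b'myapp.queue_depth:42|g', ('localhost', 9125))  # set a gauge
```

statsd_exporter accumulates that state and exposes it on its own /metrics endpoint for Prometheus to scrape.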
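For the textfile collector route, the app (or a cron job) writes metrics in the Prometheus text exposition format to a `.prom` file in the directory named by node_exporter's `--collector.textfile.directory` flag. A sketch, with a hypothetical directory and metric names; writing to a temp file and renaming keeps a scrape from ever seeing a half-written file:

```python
import os
import tempfile

TEXTFILE_DIR = '/var/lib/node_exporter/textfile'  # hypothetical; must match the flag

body = (
    '# TYPE myapp_jobs_processed_total counter\n'
    'myapp_jobs_processed_total 1027\n'
    '# TYPE myapp_queue_depth gauge\n'
    'myapp_queue_depth 42\n'
)
fd, tmp = tempfile.mkstemp(dir=TEXTFILE_DIR)
with os.fdopen(fd, 'w') as f:
    f.write(body)
os.rename(tmp, os.path.join(TEXTFILE_DIR, 'myapp.prom'))  # atomic on the same filesystem
```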
If you can't modify the app at all, then at the very worst it probably still emits log messages. In that case, you can use mtail or grok_exporter to parse the log files and generate metrics. But that's a last resort.