Well, it would be a few lines of Groovy scripting to calculate this. But as mentioned above, if you have a lot of historical builds, the performance will be abysmal (especially on HDDs): an unbounded number of small XML files would need to be loaded from disk and deserialized into memory, only to be discarded shortly afterward once the system runs low on memory. That is true of freestyle builds, and it would be worse still for Pipeline builds due to their more complex metadata. A supportable solution would need something like a database that is updated whenever a build uses a node. That is not a difficult programming project in itself, but it is a logistical problem: Jenkins currently lacks any kind of standardized database, so you would need to either initiate that effort or do a one-off hack, such as a text file stored in the node's configuration directory.
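For concreteness, a rough sketch of what those "few lines of Groovy" might look like in the script console is below. The node name is a made-up placeholder, and note that running this is precisely the performance trap described above: iterating `job.builds` forces every historical build record to be loaded and deserialized. It also only covers freestyle-style builds, since only `AbstractBuild` directly records the node it ran on; Pipeline builds would require digging through their flow graph instead.

```groovy
// Hypothetical node name; substitute one of your own agents.
def nodeName = 'linux-agent-1'
def count = 0

Jenkins.instance.allItems(hudson.model.Job).each { job ->
  // WARNING: this loads every build record from disk, which is
  // exactly the unbounded-XML-deserialization cost discussed above.
  job.builds.each { build ->
    // Only AbstractBuild (freestyle, matrix, etc.) stores the node name;
    // WorkflowRun (Pipeline) does not, so those builds are skipped here.
    if (build instanceof hudson.model.AbstractBuild
        && build.builtOnStr == nodeName) {
      count++
    }
  }
}
println "${count} recorded builds ran on ${nodeName}"
```

Fine as a one-off diagnostic on a small instance, but not something you would want running periodically against years of build history.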