This bug is a bit of a pain for setups where the master needs an environment variable defined for the scripts that run on the master, but at least one of the nodes also needs that same variable modified for local requirements (a scenario that easily occurs if you have Linux and Windows nodes and both of them access network drives). The only workaround I found was to define the variable for the master in /etc/environment, leave it out of the global Jenkins settings, and then define it again on every node.
Can we look at this issue again? It's been open for quite a while. I encountered this today and was really confused: I followed the docs, which say that an environment variable set on the node should override the global one, but that's not what happens. Maybe we should do what Jesse Glick mentioned and revert the fix for that other issue, as this one seems more important than that issue was in the first place. Thanks.
If you want to control environment variables used during builds, use the withEnv step or an equivalent, or simply set variables inside shell scripts. Steer clear of node properties and Jenkins global configuration.
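For reference, a minimal sketch of the approach described above, using the `withEnv` step in a Scripted Pipeline. The variable name `SHARED_DRIVE`, the node labels, and the paths are all made up for illustration:

```groovy
// Control the environment per build with withEnv, instead of relying on
// node properties or global Jenkins configuration.
node('linux') {
    // Hypothetical variable and value, just for illustration.
    withEnv(['SHARED_DRIVE=/mnt/netdrive']) {
        sh 'echo "Using $SHARED_DRIVE"'
    }
}

node('windows') {
    // The same variable can get a node-appropriate value at the call site.
    withEnv(['SHARED_DRIVE=N:\\share']) {
        bat 'echo Using %SHARED_DRIVE%'
    }
}
```

Setting the variable inside the shell step itself (e.g. `sh 'SHARED_DRIVE=/mnt/netdrive ./build.sh'`) works equally well, and keeps the value visible right next to the script that uses it.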
I tend to disagree with Jesse Glick. We are currently migrating some concepts from traditional builds to pipelines, and due to different environments we have some nodes whose local variables override the global ones.
It's a pain in the <peep> now sometimes to figure out why migrated builds don't work anymore, because the node's behavior changes (while we initially suspected our own scripts).
We encountered the same issue, and it started after Pipeline: Job plugin version 2.25. I tend to agree with Pascal van Kempen. If we are creating a new pipeline job, we can follow the method Jesse Glick suggests, but if we already have a lot of jobs in Jenkins, we would have to ask all users to check and modify their jobs, which would be a huge disaster for users.