There's an additional (and apparently SLIPPERY) problem with checking the status of the submitted job, which I believe stems from acquiring the remote context. We've got multiple dynamic instances sitting behind a load balancer (HAProxy). For the sake of this discussion, let's use the following configuration: haproxy.mydomain.com with listeners on ports 9001 -> myjenkins1.mycompany.com:8443, 9002 -> myjenkins2.mycompany.com:8443, 9003 -> myjenkins3.mycompany.com:8443. The SSL certificate installed on the three back-end servers is the same cert used for the proxy (haproxy), as the back-end servers are rebuilt on demand whenever there's a revision to the LTS release or a significant plugin update.

The instance on myjenkins1 attempts to trigger a job on myjenkins2 (i.e. https://haproxy.mydomain.com:*9002*/remote_job_to_be_triggered) via the Parameterized Remote Trigger plugin in a pipeline job. The job triggers SUCCESSFULLY; however, when the pipeline then checks the status of the successfully triggered job, instead of querying https://haproxy.mydomain.com:*9002*/job/remote_job_to_be_triggered/{build#}/api/json/?seed={seed#}, it appears to pull the port number from the remote context when constructing the query URL (evidenced by the presence of "GOT CONTEXT for Build and Deploy" in the logs on myjenkins1) and instead fails querying https://haproxy.mydomain.com:*8443*/job/remote_job_to_be_triggered/{build#}/api/json/?seed={seed#}. The error is either an HTTP 404 (Not Found), or, if there does happen to be a listener on port 8443 on haproxy for a DIFFERENT Jenkins instance to which the requesting user has no login access, an HTTP 401 (Unauthorized).

This may be an odd configuration, but I have seen other users complaining of similar problems (e.g. using a view-based URL, a web-server front end, a proxy, etc.). The heart of the problem is the use of an inconsistent base URL between the triggering request and the polling for job status.
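To make the mismatch concrete, here is a minimal sketch of the failure mode. The hostnames and ports come from the setup above; the build number (42) and seed (123) are illustrative stand-ins for the real {build#}/{seed#} values, and the string construction is my guess at the plugin's behavior, not its actual code:

```python
from urllib.parse import urlsplit

# What the pipeline actually called to trigger the remote job (via haproxy):
trigger_url = "https://haproxy.mydomain.com:9002/job/remote_job_to_be_triggered"

# Root URL taken from the remote context -- carries the back-end port (8443):
remote_root = "https://haproxy.mydomain.com:8443/"

# Basing the status poll on the remote context instead of the trigger URL
# produces a URL on the wrong port:
poll = remote_root.rstrip("/") + "/job/remote_job_to_be_triggered/42/api/json?seed=123"
print(poll)  # port 8443 through haproxy -> HTTP 404, or 401 on another instance
```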
While having the remote context reported back consistently under "odd" configurations may be complex, a simple solution would be to pull the base URL (protocol/host/port) from the trigger request instead of from the remote context. I don't have the bandwidth at the moment to do a deep dive into the code, but I can't imagine this would be a difficult fix. In the meantime we have resorted to performing remote triggers via Groovy scripting over the SSH listener and polling for job status inline (a silly workaround for what should be a quick fix). Hope this provides a little better insight into the root cause.
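The proposed fix above can be sketched as follows. This is a hypothetical helper, not the plugin's API; it simply reuses the scheme/host/port of the trigger request when building the poll URL, so whatever front end (haproxy, view URL, etc.) handled the trigger also handles the status check:

```python
from urllib.parse import urlsplit

def poll_url(trigger_url: str, build: int, seed: int) -> str:
    """Build the status-poll URL from the same base (protocol/host/port)
    that the trigger request used, rather than from the remote context."""
    return f"{trigger_url.rstrip('/')}/{build}/api/json?seed={seed}"

url = poll_url("https://haproxy.mydomain.com:9002/job/remote_job_to_be_triggered", 42, 123)
print(url)
print(urlsplit(url).port)  # stays on 9002, the port haproxy listened on for the trigger
```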