There are architectural issues with the way the agent and server negotiate which environment to use:
* Newly provisioned agent runs will fail if pluginsync occurs in production, but catalog compilation occurs in a different environment, and the manifest references a fact that doesn't exist in production
* Each agent run results in two node requests, and corresponding classifier requests. Facts are not sent with the first node request, so the classifier terminus retrieves last-known facts from puppetdb. Finally, the first node request returns all of the last-known facts back to the agent (since facts are merged into node parameters).
* If the first node request fails or times out due to server load, then the agent will switch back to "production", deleting all its plugins. This leads to a positive feedback loop as agents then download all plugins again, one file at a time.
* If the agent is configured to use an environment in puppet.conf, and the environment is deleted on the server, then the agent will never successfully run again until the setting is removed. [https://tickets.puppetlabs.com/browse/PUP-10539]

User story: When I add new nodes to my PE infrastructure, I want the first apply to use facts to apply the relevant environment, so that new nodes are seamlessly managed from day one.
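The last failure mode comes from an agent-side environment pin. As a minimal sketch (the environment name `legacy_env` is hypothetical), the relevant puppet.conf setting looks like:

```ini
# /etc/puppetlabs/puppet/puppet.conf (agent side)
[agent]
# Pins this agent to a specific environment. If that environment is
# later deleted on the server, every subsequent run fails until this
# setting is removed or changed.
environment = legacy_env
```

Because the setting lives in the agent's local config, the server has no way to redirect such an agent once the pinned environment is gone.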