Increase the scale of nodes in Puppet storage to match large enterprise customers' needs.
Good = 75k nodes
Better = 150k nodes
Best = 250k nodes
HOW?
*STORE LESS STUFF*
# Remove Unchanged Resources - *TSHIRT SIZE (S)*
_Per Rob, open source already works this way; in PE, the default setting collects both changed and unchanged resources (redundant data), and we can change that config setting to collect only changed resources. We will also need to write and adjust tests, and make changes in PuppetDB and its extensions (see the sketch after this list). (about 1-2 sprints)_
# Only generate and store log/events on failed runs (Agent work?)
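_Illustrative only: a rough Python sketch of what trimming a report before storage could look like, covering both items above. The report shape (resource_statuses, logs, events, status) and the function name are assumptions for the sketch, not PuppetDB's actual report format or terminus code._
{code:python}
# Sketch: trim a report payload before storing it. The report shape here
# is a hypothetical stand-in, not the real PuppetDB wire format.

def trim_report(report: dict) -> dict:
    trimmed = dict(report)

    # 1. Drop unchanged resources: keep only statuses that changed or failed.
    trimmed["resource_statuses"] = {
        title: status
        for title, status in report.get("resource_statuses", {}).items()
        if status.get("changed") or status.get("failed")
    }

    # 2. Keep logs/events only when the run itself failed.
    if report.get("status") != "failed":
        trimmed["logs"] = []
        trimmed["events"] = []

    return trimmed
{code}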
*IMPROVE FACT PATHS - TSHIRT SIZE (M)*
# Do it less often
# Improve query performance
_Proposed solution: make fact path garbage collection configurable independently of everything else, and change the default from once every hour to once every 24 hours. Requires untangling/refactoring and writing/updating tests. (2-3 sprints)_
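_Illustrative only: a rough Python sketch of running fact path GC on its own interval, decoupled from the hourly GC of everything else. The interval values follow the proposal above; the task bodies are placeholders._
{code:python}
# Sketch: schedule fact path GC independently of the other GC tasks.
import threading

FACT_PATH_GC_INTERVAL_S = 24 * 60 * 60  # proposed default: once every 24h
OTHER_GC_INTERVAL_S = 60 * 60           # everything else stays hourly

def schedule(interval_s, task):
    """Run task every interval_s seconds on its own timer."""
    def run():
        task()
        threading.Timer(interval_s, run).start()
    threading.Timer(interval_s, run).start()

schedule(FACT_PATH_GC_INTERVAL_S, lambda: print("gc: fact paths"))
schedule(OTHER_GC_INTERVAL_S, lambda: print("gc: reports, nodes, ..."))
{code}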
Phase 2:
*MODIFY PUPPETDB TO INCLUDE AN IN\-MEMORY QUERY CACHE*
# Transparent by default but can be modified/disabled if/when necessary.
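_Illustrative only: a rough Python sketch of a transparent, TTL-based in-memory query cache with an off switch. The cache sits in front of a placeholder execute function; it is not PuppetDB's actual query layer._
{code:python}
# Sketch: a TTL-based in-memory query cache, on by default, disableable.
import time

class QueryCache:
    def __init__(self, ttl_s=30.0, enabled=True):
        self.ttl_s = ttl_s
        self.enabled = enabled      # flip to False to disable caching
        self._entries = {}          # query -> (expires_at, result)

    def get(self, query, execute):
        if not self.enabled:
            return execute(query)   # disabled: pass straight through
        hit = self._entries.get(query)
        if hit and hit[0] > time.monotonic():
            return hit[1]           # fresh cached result
        result = execute(query)
        self._entries[query] = (time.monotonic() + self.ttl_s, result)
        return result
{code}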