I think there is one, yes. IIRC, profiling is currently done by making the call inside a lambda, which is then invoked either with or without measuring time. That means an overhead of at least one extra call, plus creation of the profiling id/string, plus invocation of the lambda, plus extra garbage for the GC, none of which really shows up in short benchmarks. Since hiera backend functions that return an entire hash (like the json and yaml ones) are called only once per data file (or possibly more if the cache needs to be evicted), the number of profiled calls will be much lower than if we measured every parameter lookup. The "by key" backend functions, on the other hand, get many calls, and those are typically also the advanced ones (eyaml, or remote lookup backends).
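To make the overhead concrete, here is a minimal sketch of the block-based profiling pattern described above. This is an illustrative stand-in, not Puppet's actual profiler API; the names `profile`, `$profiling_enabled`, `$timings`, and `expensive_lookup` are all hypothetical. The point is that even on the "profiling off" path, the caller still pays for the extra method call, the id string, the block object, and the resulting GC garbage.

```ruby
# Hypothetical sketch of block-based profiling (not Puppet's real API).
$profiling_enabled = false
$timings = {}

def profile(id, &block)
  if $profiling_enabled
    # Timed path: measure how long the block takes and record it by id.
    t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = block.call
    $timings[id] = Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
    result
  else
    # Untimed path: the block is still created and called, and the id
    # string was still built by the caller, so overhead remains.
    block.call
  end
end

# Stand-in for a backend lookup; returns a hash like a json/yaml backend.
def expensive_lookup
  { "some_key" => "some_value" }
end

# Usage: the id string is constructed on every call, profiled or not.
value = profile("hiera_lookup some_key") { expensive_lookup }
```

For a whole-hash backend this wrapper runs once per data file, so the cost is negligible; for a by-key backend it runs once per lookup, which is where the per-call overhead adds up.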