Hi,
I'm trying to understand a sudden spike in CPU usage of a fluentd pod running on my k8s master.
Around that time we scaled down all pods and stateful sets in one of our dev namespaces. However, the CPU spike is on a fluentd pod that lives on a master node, which doesn't have any pods from that namespace scheduled on it.
I had a look at the fluentd logs of this pod and I see a change in the stats lines around that time:
2021-01-11 23:32:23 +0000 [info]: #0 [filter_kubernetes_metadata] stats - namespace_cache_size: 1, pod_cache_size: 16, pod_cache_watch_ignored: 2885, namespace_cache_api_updates: 112, pod_cache_api_updates: 112, id_cache_miss: 112, pod_cache_watch_delete_ignored: 860, pod_cache_watch_misses: 6223, pod_cache_watch_updates: 372
2021-01-11 23:32:53 +0000 [info]: #0 [filter_kubernetes_metadata] stats - namespace_cache_size: 1, pod_cache_size: 16, pod_cache_watch_ignored: 4115, namespace_cache_api_updates: 114, pod_cache_api_updates: 114, id_cache_miss: 114, pod_cache_watch_delete_ignored: 860, pod_cache_watch_misses: 6223, pod_cache_watch_updates: 372
2021-01-11 23:33:23 +0000 [info]: #0 [filter_kubernetes_metadata] stats - namespace_cache_size: 1, pod_cache_size: 8, pod_cache_watch_ignored: 5376, namespace_cache_api_updates: 114, pod_cache_api_updates: 114, id_cache_miss: 114, pod_cache_watch_delete_ignored: 860, pod_cache_watch_misses: 6223
2021-01-11 23:33:53 +0000 [info]: #0 [filter_kubernetes_metadata] stats - namespace_cache_size: 1, pod_cache_size: 8, pod_cache_watch_ignored: 6561, namespace_cache_api_updates: 114, pod_cache_api_updates: 114, id_cache_miss: 114, pod_cache_watch_delete_ignored: 860, pod_cache_watch_misses: 6223
Also, even today, after the namespace has been scaled back up, the value of "pod_cache_watch_ignored" is still increasing:
2021-01-12 11:17:35 +0000 [info]: #0 [filter_kubernetes_metadata] stats - namespace_cache_size: 1, pod_cache_size: 9, pod_cache_watch_ignored: 1798797, namespace_cache_api_updates: 198, pod_cache_api_updates: 198, id_cache_miss: 198
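For reference, a minimal sketch of the filter configuration these stats lines would come from, assuming the stock fluent-plugin-kubernetes_metadata_filter defaults (the match pattern and explicit values below are illustrative, not my exact config):

<filter kubernetes.**>
  @type kubernetes_metadata
  # watch defaults to true; the pod_cache_watch_* counters above only move
  # when the plugin is watching the API server for pod/namespace changes
  watch true
  # the stats lines above arrive every 30s, matching the default interval
  stats_interval 30
</filter>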
Thanks