Ideal value for worker_processes on Kubernetes


Kshitij Joshi

May 7, 2021, 10:02:45 AM
To: openresty-en

Based on the documentation here, https://nginx.org/en/docs/ngx_core_module.html#worker_processes

worker_processes is defined as the number of worker processes. From the docs:

> The optimal value depends on many factors including (but not limited to) the number of CPU cores, the number of hard disk drives that store data, and load pattern. When one is in doubt, setting it to the number of available CPU cores would be a good start (the value “auto” will try to autodetect it).


Most of the guides recommend setting this value to the number of cores on the server, or setting it to auto, which itself resolves to the number of cores on the machine.
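
In nginx.conf terms, that's one of these two (just for illustration):

    worker_processes 8;     # explicit count, e.g. matching an 8-core node
    worker_processes auto;  # let nginx detect the core count itself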

I'm running OpenResty on Kubernetes, so when I check the number of CPU cores from inside the OpenResty pod, it returns the number of cores my physical machine (the node) has. However, with CPU requests & limits on k8s, not all 8 cores would actually be available to the pod.

So what should worker_processes be set to for shared CPUs in the case of Kubernetes?


Thanks!


Igor Clark

May 7, 2021, 10:16:30 AM
To: openre...@googlegroups.com
I don’t think this is specific to OpenResty, but perhaps you can get the number of cores the pod has available to it from the k8s API/environment at launch time, pass it as an environment variable to the pod, and use that in a launcher script which passes it to nginx via ‘-g’, à la http://nginx.org/en/docs/switches.html ?
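
Something like this rough sketch, say - the NGINX_WORKER_PROCESSES variable name and the default OpenResty install path are just assumptions for illustration:

    #!/bin/sh
    # Launcher sketch: read the worker count from an env var set on the pod
    # (NGINX_WORKER_PROCESSES is a made-up name), fall back to 1 if unset,
    # and hand it to OpenResty as a global directive via -g.
    WORKERS="${NGINX_WORKER_PROCESSES:-1}"
    exec /usr/local/openresty/bin/openresty -g "worker_processes ${WORKERS};"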


ecc256

May 7, 2021, 10:55:57 AM
To: openresty-en
> but perhaps you can get the number of cores the pod has available to it from the k8s API/environment at launch time, pass it as an environment variable to the pod, and use that in a launcher script which passes it to nginx via ‘-g’, à la http://nginx.org/en/docs/switches.html ?
That should be the same as:
worker_processes auto;
right?

Igor Clark

May 7, 2021, 11:02:39 AM
To: openre...@googlegroups.com
Hiya, well I don't *think* so - as I understand it, "auto" means the process inside the pod will get the number of cores on the physical host, whereas the k8s environment knows how many 'virtual' cores the pod has access to. AFAIK by default that's 1, but it can be set in the cluster config so that processes get restricted to that number of cores (/cycles/however it's done) - but I've noticed they still seem to see the physical core count inside the pod. Even if you look at /proc/cpuinfo, it still tells you the physical count. I could be wrong on that though! :-)
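
If you want to see what the pod was actually given, here's a rough sketch - it assumes cgroup v1 paths (on cgroup v2 the same numbers live in /sys/fs/cgroup/cpu.max) - that reads the CFS quota k8s uses to enforce the CPU limit, instead of /proc/cpuinfo:

    # Effective core count from the CFS quota, rather than nproc/cpuinfo.
    quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
    period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
    if [ "$quota" -gt 0 ]; then
        # Round up, so a 1500m limit gives 2 rather than 1.
        echo $(( (quota + period - 1) / period ))
    else
        # quota is -1 when no limit is set; fall back to the host count.
        nproc
    fi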

ecc256

May 7, 2021, 11:11:09 AM
To: openresty-en
Igor,
Looks like you are absolutely right!
Totally my bad, but at least I've learned something!

Igor Clark

May 7, 2021, 11:39:43 AM
To: openre...@googlegroups.com
Hey, sure thing, it’s quite an obscure detail, and I only happened to notice it when I was trying to use multiple cores inside containers ☺️ In fact I think even trying to do that is a bit unusual - I get the feeling that many, many people who run apps in containers use something single-threaded like node.js, where it doesn’t really matter because the unit of scale is the pod. But Resty is a bit different!


Kshitij Joshi

May 7, 2021, 11:47:04 AM
To: openresty-en
Thanks a lot Igor!
By "number of cores the pod has available to it from the k8s API/environment", do you mean the CPU requests & CPU limit?
If yes, what if that number is a fraction - e.g. 1500m, which is 1500 millicores / 1.5 cores?

Igor Clark

May 7, 2021, 12:34:47 PM
To: openre...@googlegroups.com
The CPU limit, I think - and I think that's implementation-dependent - e.g. GCP vs AWS vs on-prem/hardware will each have their own ways of working that out, so it'll depend on what you're using 👍
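
If you do want to drive it off the limit, one fairly portable option is the Downward API, which can expose the container's own CPU limit as an env var - just a sketch, and as far as I know the value gets rounded up to whole cores with divisor "1", so a 1500m limit would come through as 2:

    # Pod spec excerpt (sketch): expose the container's CPU limit to the
    # launcher script via an env var (names here are assumptions).
    env:
      - name: NGINX_WORKER_PROCESSES
        valueFrom:
          resourceFieldRef:
            containerName: openresty   # assumed container name
            resource: limits.cpu
            divisor: "1"               # whole cores, rounded up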
