About the poolsize setting


David

Jul 28, 2023, 4:25:33 AM7/28/23
to Enterprise Web Developer Community
Hello there.

I implemented a sample with reference to the following document.

Two APIs are implemented: one uses setTimeout to wait 10 seconds before calling finished.

module.exports = async function(args, finished) {
  // hold the worker for 10 seconds to simulate a slow operation
  await new Promise((resolve) => {
    setTimeout(resolve, 10000);
  });
  finished({
    text: 'ok'
  });
};

The other simply returns a response.

module.exports = async function(args, finished) {
  finished({
    text: 'ok'
  });
};

With poolSize set to 1, if a request to the simple API arrives while the 10-second API is still running, "no available workers" is displayed and the request must wait for the first one to complete.

Is the number of APIs that can run simultaneously equal to the poolSize setting?
If we get many requests, do we have to increase the poolSize or wait our turn?

I was hoping that the process would run in parallel even with poolSize=1, like the event loop in Node.js.
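For reference, here is a minimal sketch of the QEWD startup file I am using to set poolSize (the server name, port, and database type are placeholders for my setup):

```javascript
const qewd = require('qewd').master;

const config = {
  serverName: 'QEWD Test Server',  // placeholder
  port: 8080,                      // placeholder
  poolSize: 1,                     // one worker: requests are handled one at a time
  database: {
    type: 'gtm'                    // placeholder database type
  }
};

qewd.start(config);
```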

rtweed

Jul 28, 2023, 4:40:45 AM7/28/23
to Enterprise Web Developer Community
A unique feature of QEWD is that each worker process or thread only ever handles a single request at a time.  This is a deliberate design decision: QEWD was written specifically to allow safe use of synchronous database APIs, such as those provided for YottaDB and IRIS, which in turn makes it possible to build JavaScript abstractions for database access instead of relying on M code for database manipulations.  You may want to read this article for more background: https://medium.com/the-node-js-collection/having-your-node-js-cake-and-eating-it-too-799e90d40186

So, as a result, if you have a poolsize of 1, only one request will be handled by the single worker process at a time, with any others being queued until the process becomes free, as your test has confirmed.

Interestingly, my benchmark tests using "no op" worker handler code show that maximum queue throughput occurs when the number of workers equals (the number of CPU cores - 1), pretty much on any system/architecture/operating system.  Of course, as your test demonstrates, if any request handler involves extensive awaits (eg due to fetching a lot of data from other remote sources over slow connections), then you'll probably have to increase the pool size further to avoid queue build-up in a busy system.  BTW, the core queue/dispatch mechanism itself in QEWD is extremely fast - it's what you do in your handlers that may add overhead.

So QEWD's behaviour is very different from what it sounds like you were hoping for/expecting - you should probably look at the Node.js Cluster module instead if you want multiple requests processed in parallel across workers, but don't expect synchronous database APIs to work in that scenario (eg don't try using the QEWD-jsdb abstraction with Cluster!)

Hope this helps
Rob

David

Jul 30, 2023, 9:53:51 PM7/30/23
to Enterprise Web Developer Community
Thank you for your response.
I understand.
