> if not I will have to redesign my system to use 'subprocess'
Expanding on this, for students on the list... Having many worker host
processes is not necessarily a bad thing. It can be more programmer
work, but it simplifies the parallelism in a way (e.g., "let the Linux
kernel worry about it" :), and it potentially gives you better isolation
and resilience for some kinds of defects (in native code used via FFI,
in Racket code, and even in the suspiciously sturdy Racket VM/backend).
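A minimal sketch of that isolation property, in Python for brevity (Racket's `subprocess` and its ports work analogously); the worker here is a hypothetical stand-in that crashes hard on one input, and the parent simply notices and restarts it:

```python
import subprocess
import sys

# Hypothetical stand-in worker: crashes hard on the input "boom"
# (think: a defect in native code reached via FFI), otherwise echoes
# its input uppercased.
WORKER_SRC = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    if line.strip() == 'boom':\n"
    "        sys.exit(70)\n"
    "    print(line.strip().upper(), flush=True)\n"
)

def start_worker():
    return subprocess.Popen([sys.executable, "-c", WORKER_SRC],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            text=True)

worker = start_worker()
out, _ = worker.communicate("boom\n")    # this request kills the worker...
crashed = worker.returncode != 0         # ...but only the worker dies

if crashed:
    worker = start_worker()              # the parent survives and restarts
    out, _ = worker.communicate("hello\n")

print(out.strip())   # HELLO
```

The defect takes down one OS process; the supervising process never shares its address space, so it keeps running.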
If appropriate for your application, you can also consider a worker
pool with a health metric: reuse workers to avoid process startup
times; retire workers when the metric says so; perhaps bench a worker
for an induced big GC, if that makes sense compared to retiring it and
starting/unpooling a fresh one; and maybe quarantine a worker for
debugging/dumps while keeping the system running. You can also spread
your workers across multiple hosts, not just CPUs/cores.
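A toy sketch of such a pool, in Python for concreteness (the same shape works with Racket's `subprocess` and ports); the worker program, the job-count health metric, and the retirement threshold are all made up for illustration:

```python
import subprocess
import sys

# Hypothetical worker program: one result line per request line.
WORKER_SRC = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line: break\n"
    "    print(line.strip()[::-1], flush=True)\n"
)

MAX_JOBS = 50   # made-up health metric: retire a worker after this many jobs

class PooledWorker:
    def __init__(self):
        self.proc = subprocess.Popen([sys.executable, "-c", WORKER_SRC],
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE, text=True)
        self.jobs_done = 0

    def ask(self, line):
        self.proc.stdin.write(line + "\n")
        self.proc.stdin.flush()
        self.jobs_done += 1
        return self.proc.stdout.readline().strip()

    def retire(self):
        self.proc.stdin.close()
        self.proc.wait()

class Pool:
    def __init__(self, size):
        self.idle = [PooledWorker() for _ in range(size)]

    def ask(self, line):
        w = self.idle.pop(0)
        try:
            return w.ask(line)
        finally:
            if w.jobs_done >= MAX_JOBS:      # "unhealthy": retire and replace
                w.retire()
                self.idle.append(PooledWorker())
            else:
                self.idle.append(w)          # reuse: skip process startup cost

pool = Pool(2)
result = pool.ask("hello")
print(result)   # olleh
for w in pool.idle:
    w.retire()
```

A real pool would track something more meaningful than a job count (RSS, latency, error rate), but the retire/reuse plumbing looks the same.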
You can even use the worker pool to introduce new changes to a running
system (for very rapid iteration, or as an additional mechanism beyond
normal testing for production), to do A/B performance/correctness
comparisons of changes, and to roll changes back.
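One way that A/B idea can look, again sketched in Python (both worker "versions" and the shadow-traffic policy are invented for illustration): route some requests to a candidate worker, compare its answers against the current version, and stop routing to it, i.e. roll back, on any disagreement:

```python
import random
import subprocess
import sys

# A line-oriented worker loop, parameterized by the expression computed.
LOOP = ("import sys\n"
        "while True:\n"
        "    line = sys.stdin.readline()\n"
        "    if not line: break\n"
        "    print({}, flush=True)\n")

SRC_A = LOOP.format("line.strip().upper()")   # current production version
SRC_B = LOOP.format("line.strip().upper()")   # hypothetical candidate change

def make_worker(src):
    return subprocess.Popen([sys.executable, "-c", src],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            text=True)

def ask(w, line):
    w.stdin.write(line + "\n")
    w.stdin.flush()
    return w.stdout.readline().strip()

a, b = make_worker(SRC_A), make_worker(SRC_B)
b_trusted = True
for word in ["alpha", "beta", "gamma"]:
    answer = ask(a, word)                 # A always serves the request
    if b_trusted and random.random() < 0.5:
        if ask(b, word) != answer:        # shadow-compare B's answer
            b_trusted = False             # rollback: stop routing to B
print(b_trusted)
for w in (a, b):
    w.stdin.close()
    w.wait()
```

Timing the two `ask` calls instead of (or as well as) comparing answers gives you the performance half of the A/B.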
If your data to be communicated to/from a worker is relatively small
and won't be a bottleneck, you can simply push it through the stdin and
stdout of each process; otherwise, you can be judicious/clever with the
many other host OS mechanisms available (shared memory, sockets, files,
etc.).
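For the simple stdin/stdout route, the whole round trip is a few lines; sketched in Python here (the worker is a made-up one-liner; in Racket, `subprocess` hands you the equivalent ports):

```python
import subprocess
import sys

# Trivial hypothetical worker: reads one line of work from stdin,
# writes one line of result to stdout.
WORKER = "import sys; print(len(sys.stdin.readline().strip()))"

proc = subprocess.Popen([sys.executable, "-c", WORKER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)
out, _ = proc.communicate("some smallish payload\n")
result = out.strip()
print(result)   # the payload's length, as a decimal string
```

Remember to flush after each request (or use line buffering) when keeping the worker alive across requests, or both sides will deadlock waiting on each other.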
(Students: Being able to get our hands dirty and engineer systems beyond
a framework, when necessary, is one of the reasons we get CS/SE/EE/CE
degrees and broad&deep experience, rather than only collect a binder
full of Certified Currently-Popular JS Framework Technician certs.
Those oppressive student loans, and/or years of self-guided open source
experience, might not be in vain. :)