> We use Aqueduct as our web server. It relies on opening several isolates in a machine with multiple cores.
> How much will this affect us?
Inter-isolate communication will be faster, isolates will be much faster to spawn, the base memory overhead of an additional isolate will drop significantly, hot reload will work safely, and more.
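To make the spawn/communication point concrete, here is a minimal sketch of the standard `dart:isolate` pattern that benefits from these changes. The API is unchanged by isolate groups; only the cost of spawning and message passing goes down. (The worker function and payload here are hypothetical, purely for illustration.)

```dart
import 'dart:isolate';

// Hypothetical worker: receives a SendPort and replies with one message.
void worker(SendPort sendPort) {
  sendPort.send(42);
}

Future<void> main() async {
  final receivePort = ReceivePort();
  // Spawning this isolate is what becomes 10x+ faster with isolate groups.
  await Isolate.spawn(worker, receivePort.sendPort);
  // Message passing between isolates is what becomes ~8x faster.
  final result = await receivePort.first;
  print('worker replied: $result');
}
```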
Whether you will experience any negative impact from GC now being coordinated across isolates depends on the application, especially on how many isolates are running in parallel (i.e. have work to do at the same time), how allocation-heavy they are, and whether the allocated data is short-lived or long-lived. If data survives young-generation collections but dies soon thereafter, you can increase the size of the young generation via --new-gen-semi-max-size=<x MB> (depending on the parallelism you expect); its default is currently tuned for mobile devices.
I encourage you to try it out on the master channel (pass --enable-isolate-groups to the VM).
> What does "many cores" mean? I know there have been a lot of benchmarks done, so is there a more in-depth analysis so Dart users can make an informed choice here?
See the answer above as well as the discussion on the GitHub issue.
We have a large suite of benchmarks, but those measure almost exclusively what the VM team has focused on for a long time: single-isolate peak performance.
As part of this work on lightweight isolates, we have created more benchmarks (see our public repository: dart-lang/sdk/tree/master/benchmarks). Those benchmarks show roughly the numbers reported on the GitHub issue: 10x+ faster spawn latency, 10x+ lower base memory consumption, and 8x faster communication. They also include benchmarks that measure event-loop responsiveness in one isolate while another isolate performs heavy allocation, where the allocated data either dies young (negligible impact) or survives (pause times of 10-20 ms, which is the time it takes to evacuate young-generation objects to old space).
I will see whether I can pull the numbers from our benchmarking system and add a table to the GitHub issue.