> The documentation states that each blocking actor lives in its own thread of execution, is 'not so lightweight', and 'does not scale to large numbers'.
> Can you briefly explain why that is?
The overhead simply comes from having a `std::thread` per actor: stack allocation, system calls, signal management, and so on. This isn’t terribly expensive, but it adds several kB of RAM overhead to an actor that otherwise only needs a couple hundred bytes. Spawning a couple dozen blocking actors (= threads) is not a problem, but you wouldn’t want to start thousands of them (unless you have some monster hardware sitting around with thousands of cores to match).
> Does the above mean that spawning a new blocking actor (for example, a `scoped_actor`) = spawning a new thread?
No. A scoped actor is not really “spawned”. It has no control loop of its own and only exists for ad-hoc communication with other actors. Scoped actors do heap allocations, but they are otherwise inexpensive precisely because they don’t have their own thread of execution.
> The background of this question is that I want to provide a 'regular' class interface (API) for requests to an async event-based actor. What I do now:
> 1. Spawn an event-based actor A that does all the async work and communicates with other such actors (in a group).
> 2. Create a `caf::function_view` instance B (which internally spawns a `scoped_actor`) used to make blocking requests to actor A.
> Steps 1 & 2 are performed in the constructor of _every_ object.
>
> With that in place, if some method of the object's API must return a calculation result, I just make a blocking request `B(A, ...)` inside that method. Client code that uses the objects isn't even aware that there are actors involved; it just calls a conventional class API.
>
> This works fine, but I'm afraid that creating `scoped_actor` instances (contained inside the `function_view`s), one per object (and there can be many objects), is not a good idea? If so, what can you suggest to improve the design?
> Thanks in advance.
Generally I don’t see a problem with this design, as long as the extra memory allocations and construction overhead for the function views don’t impact performance. Depending on your API guarantees (is it safe to call from multiple threads?), you might be able to reduce the number of scoped actors / function views by sharing them between your objects. But I wouldn’t bother before measuring the system. Unless your objects are short-lived, construction overhead probably isn’t an issue.
Hope that helps.
Dominik