> I have some 2-3 years of experience of asio development, and i would be happy if i could help with caf development. If you can provide me some details when i am stuck with caf internals, i'd like to try to contribute on that task.
Ok, let me try to summarize what's going on in the network backend.
The big picture is this:
- the sole purpose of the network backend is to enable brokers
- brokers are a "special kind" of actors living in the IO event loop
- brokers receive:
+ new_connection_msg
+ new_data_msg
+ connection_closed_msg
+ acceptor_closed_msg
- however, the broker itself knows *nothing* about sockets or streams; it operates on accept_handle and connection_handle
- brokers have scribes and doormen
- scribes manage IO streams (underneath: TCP sockets)
- doormen manage acceptors (underneath: TCP sockets bound to a port in listening mode)
That's the actor side of things. The messages are invoked directly from the network backend and the buffers used in new_data_msg are re-used by the backend (unless the broker does something stupid like detaching the message). This ensures low latency and avoids needless copies.
Let's move on to the network side of things. In the network namespace, we have a couple of interfaces:
- manager: manages an IO resource
- acceptor_manager: extends manager and manages an acceptor socket
- stream_manager: extends manager and manages an open TCP socket
- multiplexer: glues everything together
The manager interfaces are rather straightforward. Scribes extend stream_manager and doormen extend acceptor_manager. Note that scribes and doormen are abstract classes and are implemented by anonymous classes in the multiplexer. Here's an example function from the default_multiplexer:
```
accept_handle
default_multiplexer::add_tcp_doorman(broker* self,
                                     default_socket_acceptor&& sock) {
  CAF_LOG_TRACE("sock.fd = " << sock.fd());
  CAF_REQUIRE(sock.fd() != network::invalid_native_socket);
  class impl : public broker::doorman {
  public:
    impl(broker* ptr, default_socket_acceptor&& s)
        : doorman(ptr, network::accept_hdl_from_socket(s)),
          m_acceptor(s.backend()) {
      m_acceptor.init(std::move(s));
    }
    void new_connection() override {
      auto& dm = m_acceptor.backend();
      accept_msg().handle
        = dm.add_tcp_scribe(parent(), std::move(m_acceptor.accepted_socket()));
      parent()->invoke_message(invalid_actor_addr, invalid_message_id,
                               m_accept_msg);
    }
    void stop_reading() override {
      m_acceptor.stop_reading();
      disconnect(false);
    }
    void launch() override {
      m_acceptor.start(this);
    }
  private:
    network::acceptor<default_socket_acceptor> m_acceptor;
  };
  broker::doorman_pointer ptr{new impl{self, std::move(sock)}};
  self->add_doorman(ptr);
  return ptr->hdl();
}
```
This function creates a doorman from a socket. All the low-level socket stuff is hidden from the outside world and only visible in default_multiplexer.cpp. The downside is that this cpp file is rather large. The default multiplexer has an IO event loop based on either epoll or poll. You can ignore all the abstraction for this based on the `event_handler` class as well as the pipes. The latter are used to execute (dispatch) user-defined callbacks in the event loop of our multiplexer. You get this for free from ASIO. In the old `asio_network.hpp`, you'll see that there's a thin extra layer (streams and acceptors) that sets up an ASIO "loop" via repeated calls to async_write and async_accept. You can take this as guidance if you like, but keep in mind that the header is outdated.
The abstract class doorman (as well as scribe) has a message member that is re-used (i.e., filled with new values) whenever `new_connection` is called. The doorman resets the message and invokes it directly. The member function `add_tcp_scribe` does the same for scribes.
Usually, you can safely assume that no other thread is calling your member functions, with a few exceptions. `new_tcp_scribe` is called from `remote_actor`, so it has to be thread-safe. It returns a `connection_handle` that you can later use from a broker.
By the way, the handles are basically uint64_t values. They need to be unique. The default multiplexer simply casts the native socket descriptor to a uint64_t to achieve this. These handles are used by the broker to identify scribes and doormen.
One more note: in ASIO you'll need to create a socket for `new_tcp_scribe`, but you can't return the socket itself. So you need to temporarily store the socket in a (mutex-protected) map. Once the broker calls `assign_tcp_scribe`, you can take the socket out of the map and create a scribe for the broker using it. That's not needed in the default multiplexer since it operates directly on native sockets.
I hope this explains how the pieces fit together.
I have also created a new issue for this yesterday [1], so please feel free to ask more questions over on GitHub. I'm sure josephnoir is also glad to help (though he doesn't have much experience with ASIO).
Dominik
[1]
https://github.com/actor-framework/actor-framework/issues/232