The timeouts only define what counts as an error condition, so you
can use a maximum timeout value large enough that it should never be
exceeded. Timeouts are enforced for initialization, termination, and
service request handling, so all service execution is covered
(considering all the common programming language functionality, the
Erlang usage of cloudi_service_handle_info/3 in the cloudi_service
behaviour is the only thing I can think of without a timeout, and
that is due to Erlang integration requirements).
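For example, inside a service you can pass an explicitly large
timeout when sending a request. A minimal sketch, where the
"/example/work" destination name is hypothetical, and the
send_async/4 arity and the 4294967295 millisecond 32-bit maximum are
assumptions to check against the cloudi_service documentation for
your CloudI version:

% called from inside a cloudi_service callback, with the Dispatcher
% argument that callback received
send_with_max_timeout(Dispatcher, Request) ->
    % large enough that exceeding it really is an error condition
    TimeoutMax = 4294967295, % milliseconds, roughly 49.7 days
    cloudi_service:send_async(Dispatcher, "/example/work",
                              Request, TimeoutMax).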
If the service request handling timeout cannot have a decent maximum
(it would exceed the 32-bit timeout value, or there is too much
variation, causing fault-tolerance problems), you can choose to not
use the return of the service request for the result and instead use
a new service request to provide the result. When a service request
provides no result, the service request destination is making it a
completely asynchronous service request, and providing no result
means returning an empty binary (<<>> in Erlang syntax) for both
ResponseInfo and Response. That is the approach used by
cloudi_service_queue when it is in 'both' mode, since its service
requests must be able to survive a CloudI node restart.
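As a rough sketch of the destination side (not cloudi_service_queue
itself; the "/example/worker" and "/example/result" names and the
callback arity from a recent cloudi_service behaviour are all
assumptions for illustration):

% inside a cloudi_service callback module
cloudi_service_handle_request(_RequestType, _Name, _Pattern,
                              _RequestInfo, Request, _Timeout, _Priority,
                              _TransId, _Source, State, Dispatcher) ->
    % hand the long-running work to a worker destination; when the
    % worker finishes, it provides the result with a new service
    % request sent to "/example/result" (and returns <<>>/<<>> itself,
    % so no response flows back through this service request)
    _ = cloudi_service:send_async(Dispatcher, "/example/worker", Request),
    % empty ResponseInfo and Response: the sender gets no result here,
    % making this a completely asynchronous service request
    {reply, <<>>, <<>>, State}.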
So, with your example below, you need to either make sure your
send_async timeout is the absolute maximum of any acceptable
latency, or use send_async without recv_async and instead have a
subscription that allows the response to arrive as a separate
service request.
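A minimal sketch of that second approach, on the caller side. The
"/example/work" name and the "result" subscription pattern are
hypothetical (with a service prefix of "/example/" the subscription
would match the "/example/result" destination in the sketch above),
and the callback arities follow a recent cloudi_service behaviour, so
they may differ in older CloudI versions:

-module(example_caller).
-behaviour(cloudi_service).
-export([cloudi_service_init/4,
         cloudi_service_handle_request/11,
         cloudi_service_handle_info/3,
         cloudi_service_terminate/3]).

cloudi_service_init(_Args, _Prefix, _Timeout, Dispatcher) ->
    % the eventual result arrives here as a separate service request
    cloudi_service:subscribe(Dispatcher, "result"),
    % send the work without a recv_async, so the send_async timeout no
    % longer needs to cover the full latency of producing the result
    % (in a real service the send would likely happen later, once the
    % destination is available)
    _ = cloudi_service:send_async(Dispatcher, "/example/work",
                                  <<"work input">>),
    {ok, undefined}.

cloudi_service_handle_request(_RequestType, _Name, _Pattern,
                              _RequestInfo, Result, _Timeout, _Priority,
                              _TransId, _Source, State, _Dispatcher) ->
    % the response arrived as its own service request
    io:format("result: ~p~n", [Result]),
    % no result needs to be returned for this request either
    {reply, <<>>, <<>>, State}.

cloudi_service_handle_info(_Info, State, _Dispatcher) ->
    {noreply, State}.

cloudi_service_terminate(_Reason, _Timeout, _State) ->
    ok.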