Fwiw, here is some very crude pseudo-code that shows how a queue
per thread can distribute the load; like using a colander rather than a
single funnel. It has a single log thread, but it assumes that
logging is not all that "frequent". The load is distributed by the fact
that each worker thread has its own mutex. Note that this does not have
any lifetime management (ref counting, etc.) wrt keeping request nodes
alive in it. Here is the crude pseudo-code:
_________________________
struct per_io_worker
{
    // our local intrusive linked-list node of workers
    per_io_worker_list_node m_node;

    // our local log queue
    log_queue m_logq;

    // our work loop... infinity aside.
    void work()
    {
        for (;;)
        {
            // wait for our io
            raw_request* r = consume_io();

            // queue our log request locally
            m_logq.push(r);

            // can I do something else?
            //[... fill in the blank here ...];
        }
    }
};

// The logger!
struct log_io_worker
{
    // a reference to the list of workers
    per_io_worker_list& m_wlist;

    // our personal log list
    raw_request_list m_list;

    // our work loop... infinity aside.
    void work()
    {
        for (;;)
        {
            // wait/sleep for a signal...

            // gain the requests in read-access mode
            m_wlist.read_lock();
            for each worker in m_wlist
            {
                m_list.push_items(worker.m_logq.dequeue());
            }
            m_wlist.read_unlock();

            // process log requests locally! :^)
            for each raw_request in m_list
            {
                process_log(raw_request);
            }

            // dump it!
            m_list.clear(); // empty it all out
        }
    }
};
_________________________
The pseudo-code above is crude, and does not show how to make it
adaptable wrt using try_lock. It's high level, but it shows how
per-thread queues can be drained by a single log thread. Wrt the
wait/sleep part, well, we can create a semaphore with a timeout, or
build in some fancy condvar logic for this.
Can you grok this setup?
Sorry if I made a typo in the crude pseudo-code.
;^o