
Buffers generated on one CPU and processed on others


JuanP

<jpalotes205@gmail.com>
Aug 1, 2024, 11:16:09 PM
to seastar-dev
Hello, I have a project where I expect to receive several thousand messages per second from the network (it's a telco network), so I have set up a sharded service with one receiver CPU that receives buffers and dispatches them, as they come in, to the other CPUs for processing. My question is about what strategy to use for these buffers once the other CPUs are done processing them and they need to be released/reused.

I know the buffers have to be freed on the receiving/creating CPU, but that would amount to thousands per second (and maybe even more at times), so this already very loaded CPU would be receiving packets from the network and also receiving buffers back from the other CPUs to be freed when they are done with them (using foreign_ptr). Is this optimal? Is there a better strategy? Is this the only way? Any suggestions or better ideas for this design?

Really appreciate any help/direction/hints on this matter. 

Thank you very much in advance!

Botond Dénes

<bdenes@scylladb.com>
Aug 6, 2024, 2:10:04 AM
to JuanP, seastar-dev
If your concern is extra smp::submit_to() traffic generated by ~foreign_ptr(), then there are ways to mitigate this. The simplest one is to keep the buffer alive on the coordinator shard and free it after the request is handled on the worker shard. This way, there is no foreign_ptr involved.
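
For contrast, here is roughly what the foreign_ptr-based version looks like (a sketch only; my_service, process_message and dispatch are made-up names). The extra traffic comes from the destructor once the worker shard drops the pointer:

#include <seastar/core/sharded.hh>            // sharded<>, foreign_ptr<>, make_foreign()
#include <seastar/core/temporary_buffer.hh>
#include <seastar/core/future.hh>
#include <memory>

using namespace seastar;

struct my_service {
    future<> process_message(foreign_ptr<std::unique_ptr<temporary_buffer<char>>> buf) {
        // ... per-message work runs here; the foreign_ptr is dropped when it completes ...
        return make_ready_future<>();
    }
};

future<> dispatch(sharded<my_service>& svc, unsigned shard, temporary_buffer<char> buf) {
    // Ownership of the buffer travels to the worker shard wrapped in a foreign_ptr.
    auto fbuf = make_foreign(std::make_unique<temporary_buffer<char>>(std::move(buf)));
    return svc.invoke_on(shard, [fbuf = std::move(fbuf)] (my_service& s) mutable {
        // When the worker eventually drops the foreign_ptr, ~foreign_ptr() does an
        // smp::submit_to() back to the owning shard to destroy the buffer there;
        // that per-buffer round trip is the traffic in question.
        return s.process_message(std::move(fbuf));
    });
}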
 



Avi Kivity

<avi@scylladb.com>
Aug 6, 2024, 4:32:57 AM
to JuanP, seastar-dev
It's preferable to have multiple connections and a fully symmetric architecture, with each shard having the same responsibilities as every other shard.

If that's not possible, then, as Botond says, you submit_to() only a reference to the data, and the network-processing shard is responsible for freeing it. If processing needs to hold on to the data after it returns, it must make a copy.
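
For the "hold on to the data" case, a rough sketch of the worker-side copy (my_service, process() and _pending are made-up names):

#include <seastar/core/future.hh>
#include <seastar/core/temporary_buffer.hh>
#include <cstddef>
#include <vector>

using namespace seastar;

class my_service {
    std::vector<temporary_buffer<char>> _pending;   // data this shard decided to keep
public:
    // Runs on the worker shard. `data` points into a buffer owned by the network
    // shard and is only valid until the returned future resolves, so anything
    // kept longer is first copied into memory allocated on this shard.
    future<> process(const char* data, std::size_t len) {
        temporary_buffer<char> copy(data, len);     // allocates locally and copies
        _pending.push_back(std::move(copy));
        return make_ready_future<>();
    }
};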

JuanP

<jpalotes205@gmail.com>
Aug 6, 2024, 10:39:48 PM
to seastar-dev
Hi, thank you for your response. Can you please expand on how I can keep the buffer alive on the coordinator shard until the worker shard is done with it, so that the coordinator can then reuse the buffer for more incoming messages? I do not want to allocate for every incoming message, so these buffers live in a memory pool. But with that said, even if I allocate a buffer for every incoming message, aren't we supposed to free memory on the allocating shard, so that it is returned to its owner shard and locality/performance is preserved? Maybe I am missing something, I am kinda new to Seastar. Thank you!!

Avi Kivity

<avi@scylladb.com>
Aug 7, 2024, 4:47:07 AM
to JuanP, seastar-dev
If you have a temporary_buffer it's not easy to reuse as you get a new one from the network APIs each time. Here's how to keep it alive.


future<>
my_connection::read_loop() {
    while (auto buf = co_await _conn.read()) {
        auto shard = determine_shard(buf);
        // Pass only a pointer/length to the worker shard; the buffer itself
        // never leaves this shard.
        co_await _my_service.invoke_on(shard, [data = buf.get(), len = buf.size()] (auto& service) {
            return service.process(data, len);
        });
        // buf is held by the coroutine frame and destroyed here, on the shard
        // that allocated it, so no foreign_ptr or cross-shard free is needed.
    }
}

You may choose to run some processing in parallel, so details may vary.

If your buffer isn't a temporary_buffer, just keep it in a linked list or some other container.
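
For example, a pooled variant could look roughly like this (a sketch only; my_buffer, buffer_pool, determine_shard and my_service::process are made-up names). The receiving shard owns the pool and puts each buffer back once the worker shard has finished with it:

#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
#include <seastar/core/sharded.hh>
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

using namespace seastar;

struct my_buffer {
    std::array<char, 2048> data;   // whatever one pooled message buffer holds
    std::size_t len = 0;
};

struct my_service {
    future<> process(const my_buffer& buf) {
        // ... per-message work on the worker shard ...
        return make_ready_future<>();
    }
};

unsigned determine_shard(const my_buffer& buf) {
    return 1;   // however the target shard is actually chosen
}

// The receiving shard owns the pool; buffers never change owner.
class buffer_pool {
    std::vector<std::unique_ptr<my_buffer>> _free;
public:
    std::unique_ptr<my_buffer> get() {
        if (_free.empty()) {
            return std::make_unique<my_buffer>();
        }
        auto b = std::move(_free.back());
        _free.pop_back();
        return b;
    }
    void put(std::unique_ptr<my_buffer> b) {
        _free.push_back(std::move(b));
    }
};

future<> dispatch_one(sharded<my_service>& svc, buffer_pool& pool, std::unique_ptr<my_buffer> buf) {
    auto shard = determine_shard(*buf);
    co_await svc.invoke_on(shard, [raw = buf.get()] (my_service& s) {
        return s.process(*raw);      // the worker only borrows the buffer
    });
    pool.put(std::move(buf));        // reused on the shard that owns it, no cross-shard free
}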

JuanP

<jpalotes205@gmail.com>
Aug 8, 2024, 9:16:06 PM
to seastar-dev
Thank you very much for the advice!  