On Tue, Jun 13, 2023 at 9:07 PM Joseph Seigh <
jseig...@gmail.com> wrote:
>
> Ok, thanks. I'll examine it in a bit more detail, though at first glance it looks like they're using ipi to speed things up by not having to wait for slower occurring kernel events.
Yes, my understanding is that it always sends IPIs rather than waiting
passively for something. But it knows which CPUs are actually running
threads from the current process at the moment.
> I'm doing some work on a hazard pointer based proxy collector w/ memory barriers that I suggested ages ago. I posted some smrproxy timing comparisons. I was going to rework the atomic reference counted proxy collector in c11/c17 atomics as well but the approximated timings are so bad compared to smrproxy that I think I will pass on that.
Yes, I guess it's expected for an atomic RMW on a shared location.
But it may be interesting to benchmark with 10K threads instead of 10
(10K is real for our server scenarios). I assume any HP approach needs
to iterate over all threads, so 10K may penalize it.
Btw, have you looked at rseq? It's a pretty interesting facility:
https://kib.kiev.ua/kib/rseq.pdf
We use it in tcmalloc for per-CPU caches:
https://google.github.io/tcmalloc/rseq.html
Say, it can let HP use a per-CPU register instead of a per-thread one,
and membarrier supports aborting all concurrent rseq sequences. This
can allow for even more interesting algorithms, e.g. registering a
hazard pointer even w/o a loop in an rseq section (effectively an
atomic memory-to-memory move wrt the current CPU).
--
Dmitry Vyukov
All about lockfree/waitfree algorithms, multicore, scalability,
parallel computing and related topics:
http://www.1024cores.net