
About FlushProcessWriteBuffers() and IPIs..


amin...@gmail.com
Dec 3, 2019, 4:39:45 PM
Hello,


About FlushProcessWriteBuffers() and IPIs..

It seems that the original implementation of the sys_membarrier() system call in Linux 4.3 is too slow. Starting with kernel 4.14, there is a new flag, MEMBARRIER_CMD_PRIVATE_EXPEDITED, that enables a much faster implementation of the syscall using IPIs.

See https://lttng.org/blog/2018/01/15/membarrier-system-call-performance-and-userspace-rcu/ for some details.

That article, "membarrier system call performance and the future of Userspace RCU on Linux", also explains how Userspace RCU uses IPIs through this syscall.
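
As an illustration, here is a minimal C sketch of how the expedited command can be used on Linux 4.14 or later; this is just my own example of the approach (using the raw syscall, since older C libraries have no wrapper), not code taken from the article:

/* Minimal sketch: register for and then issue a private expedited
 * membarrier, which sends IPIs only to the CPUs currently running
 * threads of this process. Assumes Linux >= 4.14. */
#define _GNU_SOURCE
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int membarrier(int cmd, unsigned int flags)
{
    return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
    /* Registration is required once before the expedited command can be used. */
    if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0) != 0) {
        perror("membarrier register");
        return 1;
    }

    /* Acts as a memory barrier on every core running a thread of this
     * process; much cheaper than the original shared command. */
    if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) != 0) {
        perror("membarrier expedited");
        return 1;
    }

    printf("expedited membarrier issued\n");
    return 0;
}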


Cache-coherency protocols do not use IPIs, and as a user-space developer you normally do not deal with IPIs at all; what you mostly care about is the cost of cache coherency itself. However, the Win32 API provides a function, FlushProcessWriteBuffers(), that issues IPIs to all processors in the affinity mask of the current process, so you can use it to investigate the cost of IPIs.
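
Here is a minimal sketch of how such a measurement can be done with RDTSC on Windows; it is only an example of the approach, not the exact benchmark that produced the numbers below:

/* Sketch: time FlushProcessWriteBuffers() on the issuing core and
 * report the minimum and mean cost in cycles. */
#include <windows.h>
#include <intrin.h>
#include <stdio.h>

int main(void)
{
    const int iterations = 100000;
    unsigned long long min = ~0ULL, total = 0;

    for (int i = 0; i < iterations; i++) {
        unsigned long long start = __rdtsc();
        FlushProcessWriteBuffers();   /* sends IPIs to the other cores */
        unsigned long long cycles = __rdtsc() - start;
        total += cycles;
        if (cycles < min)
            min = cycles;
    }

    printf("min:  %llu cycles\n", min);
    printf("mean: %llu cycles\n", total / iterations);
    return 0;
}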

When I ran a simple synthetic test on a dual-core machine, I obtained the following numbers:

- 420 cycles: minimum cost of FlushProcessWriteBuffers() on the issuing core.

- 1600 cycles: mean cost of FlushProcessWriteBuffers() on the issuing core.

- 1300 cycles: mean cost of FlushProcessWriteBuffers() on the remote core.

Note that, as far as I understand, the function sends an IPI to the remote core, the remote core acknowledges it with another IPI, and the issuing core waits for that acknowledgment and then returns.

And the IPIs also have the indirect cost of flushing the processor pipeline on the interrupted core.

You can download my scalable asymmetric RWLocks, which use IPIs and are essentially costless on the reader side, from here:

https://sites.google.com/site/scalable68/scalable-rwlock
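
To give an idea of the general technique, here is a simplified sketch of an asymmetric read/write lock built around FlushProcessWriteBuffers(); it is only an illustration of the idea, not the implementation available at the link above, and the names reader_flags, writer_active and MAX_THREADS are mine:

/* Sketch of an asymmetric read/write lock: readers use only plain
 * stores and loads, and the writer pays for the synchronization by
 * sending IPIs with FlushProcessWriteBuffers(). */
#include <windows.h>
#include <intrin.h>

#define MAX_THREADS 64
#define PAD 16                            /* spread flags to avoid false sharing */

static volatile LONG reader_flags[MAX_THREADS * PAD];
static volatile LONG writer_active = 0;
static SRWLOCK writer_mutex = SRWLOCK_INIT;

void reader_lock(int tid)
{
    for (;;) {
        reader_flags[tid * PAD] = 1;      /* plain store: announce the reader */
        _ReadWriteBarrier();              /* compiler-only barrier, no fence */
        if (!writer_active)
            return;                       /* fast path: no writer is active */
        reader_flags[tid * PAD] = 0;      /* a writer is active: back off */
        while (writer_active)
            YieldProcessor();
    }
}

void reader_unlock(int tid)
{
    _ReadWriteBarrier();
    reader_flags[tid * PAD] = 0;          /* plain store: leave the section */
}

void writer_lock(void)
{
    AcquireSRWLockExclusive(&writer_mutex);  /* one writer at a time */
    writer_active = 1;
    MemoryBarrier();                      /* full fence on the writer side only */

    /* The IPIs act as the memory barrier the readers never execute:
     * after this returns, every reader has either published its flag
     * or will observe writer_active on its next check. */
    FlushProcessWriteBuffers();

    /* Wait until no reader is inside the critical section. */
    for (int i = 0; i < MAX_THREADS; i++)
        while (reader_flags[i * PAD] != 0)
            YieldProcessor();
}

void writer_unlock(void)
{
    writer_active = 0;
    ReleaseSRWLockExclusive(&writer_mutex);
}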



Thank you,
Amine Moulay Ramdane.