
About the scalable LRU algorithm..


amin...@gmail.com

Oct 16, 2019, 1:34:58 PM
Hello,


About the scalable LRU algorithm..

On 10/16/2019 7:48 AM, Bonita Montero on comp.programming.threads wrote:
> Amine, a quest for you:
> Database servers and operating-system kernels mostly use LRU as
> the scheme to evict old buffers from their cache. One issue with
> LRU is that an LRU structure can't be updated by multiple threads
> simultaneously, so you have to have a global lock.
> I managed to write an LRU caching class that can update the links
> in the LRU list to put the most recently fetched block at the head
> of the list without any lock on almost any access. Only when
> flushing an entry or inserting a new one do I have to lock the
> structure completely; but in contrast to cache hits this usually
> happens with a magnitude lower frequency because of the slowness
> of disk/SSD access, so it doesn't really hurt.
> The algorithm is partially locked, partially lock-free. Even
> putting cache hits at the head has to be processed in locked mode
> in very rare cases. And as I said, inserting and flushing are
> conventional locked accesses.
> So the quest is for you: can you guess what I did?
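Bonita does not reveal her actual trick, but one common way to keep the hit path of an LRU cache nearly contention-free is to avoid moving list links on a hit at all: each entry carries an atomic "last used" tick that a hit bumps under a shared (reader) lock, while only insert and eviction take the exclusive lock. This is a hedged sketch of that general idea, not a reconstruction of her class; the names (`ApproxLruCache`, `Entry`) are mine:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>

// Approximate-LRU cache (illustrative, not Bonita's algorithm):
// a cache hit only stores a fresh tick into the entry's atomic
// counter under a shared lock, so hits run concurrently; insert
// and eviction take the exclusive lock and scan for the oldest
// tick (acceptable when evictions are rare, as the post argues).
class ApproxLruCache {
    struct Entry {
        std::string value;
        std::atomic<std::uint64_t> tick;
        Entry(std::string v, std::uint64_t t) : value(std::move(v)), tick(t) {}
    };
public:
    explicit ApproxLruCache(std::size_t capacity) : capacity_(capacity) {}

    bool get(int key, std::string& out) {
        std::shared_lock<std::shared_mutex> g(mu_);      // concurrent with other hits
        auto it = map_.find(key);
        if (it == map_.end()) return false;
        it->second->tick.store(clock_.fetch_add(1) + 1,  // mark as most recently used
                               std::memory_order_relaxed);
        out = it->second->value;
        return true;
    }

    void put(int key, std::string value) {
        std::unique_lock<std::shared_mutex> g(mu_);      // exclusive: structure changes
        auto it = map_.find(key);
        if (it != map_.end()) { it->second->value = std::move(value); return; }
        if (map_.size() >= capacity_) {                  // evict the oldest tick
            auto victim = map_.begin();
            for (auto i = map_.begin(); i != map_.end(); ++i)
                if (i->second->tick.load(std::memory_order_relaxed) <
                    victim->second->tick.load(std::memory_order_relaxed))
                    victim = i;
            map_.erase(victim);
        }
        map_.emplace(key, std::make_unique<Entry>(std::move(value),
                                                  clock_.fetch_add(1) + 1));
    }

private:
    std::size_t capacity_;
    std::atomic<std::uint64_t> clock_{0};
    std::shared_mutex mu_;
    std::unordered_map<int, std::unique_ptr<Entry>> map_;
};
```

The trade-off is that eviction becomes a linear scan instead of popping a list tail, which fits the post's premise that flushes and inserts are a magnitude rarer than hits.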



I think I am also smart, so I quickly found a solution that is scalable and that is not your solution. It needs my hashtable that scales very well, and it needs my fully scalable FIFO queue that I have invented. But I will not patent it, because I will let you patent yours, since I have seen you talking about it on the C++ newsgroup.


But my solution is not lock-free: it uses locks, and it is still scalable.
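Amine does not describe his design either, but "uses locks and is still scalable" is exactly what lock striping (sharding) gives you: split the key space across independent shards, each a small classic LRU (list + hashtable) behind its own mutex, so threads touching different shards never contend. This is a minimal sketch of that standard technique under my own naming (`ShardedLruCache`), not Amine's hashtable/FIFO construction:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <list>
#include <mutex>
#include <string>
#include <unordered_map>
#include <utility>

// Sharded (lock-striped) LRU: every operation takes a lock, but only
// the lock of the shard that owns the key, so throughput scales with
// the shard count under concurrent access to different keys.
class ShardedLruCache {
    struct Shard {
        std::mutex mu;
        std::list<std::pair<int, std::string>> order;  // front = most recent
        std::unordered_map<int,
            std::list<std::pair<int, std::string>>::iterator> pos;
    };
public:
    explicit ShardedLruCache(std::size_t per_shard_capacity)
        : per_shard_capacity_(per_shard_capacity) {}

    bool get(int key, std::string& out) {
        Shard& s = shard(key);
        std::lock_guard<std::mutex> g(s.mu);
        auto it = s.pos.find(key);
        if (it == s.pos.end()) return false;
        s.order.splice(s.order.begin(), s.order, it->second);  // move to front
        out = it->second->second;
        return true;
    }

    void put(int key, std::string value) {
        Shard& s = shard(key);
        std::lock_guard<std::mutex> g(s.mu);
        auto it = s.pos.find(key);
        if (it != s.pos.end()) {                       // refresh existing entry
            it->second->second = std::move(value);
            s.order.splice(s.order.begin(), s.order, it->second);
            return;
        }
        if (s.order.size() >= per_shard_capacity_) {   // evict shard-local LRU tail
            s.pos.erase(s.order.back().first);
            s.order.pop_back();
        }
        s.order.emplace_front(key, std::move(value));
        s.pos[key] = s.order.begin();
    }

private:
    static constexpr std::size_t kShards = 16;
    Shard& shard(int key) {
        return shards_[static_cast<std::size_t>(key) % kShards];
    }
    std::size_t per_shard_capacity_;
    std::array<Shard, kShards> shards_;
};
```

The cost of striping is that recency is only tracked per shard, so eviction is LRU within a shard rather than globally; that is usually an acceptable approximation in exchange for the scalability.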


Thank you,
Amine Moulay Ramdane.