
Read again my "final" corrected post about efficiency and performance.


amin...@gmail.com

Oct 14, 2019, 5:18:10 PM

Hello,


Read again my "final" corrected post about efficiency and performance.

I was just reading the following webpage on ResearchGate about:

How can I define efficiency and what are the different types of efficiency?

Here it is:

https://www.researchgate.net/post/How_can_I_define_efficiency_and_what_are_the_different_types_of_efficiency


And if we look at the dictionary, it says:

"Efficiency: it is the ability to do things well, successfully, and without waste."

But now you may feel that doing things well and successfully sounds
like performance. In computer science we can say that a data structure
is space-efficient, time-efficient, or both, but we can also see a
space-efficient data structure as being performant in the "criterion"
of space. And the dictionary defines "performant" as:

"(of technology, etc.) working in an effective way"


And the dictionary defines "effective" as:

"Successful in producing a desired or intended result."


But here again, with the space-efficiency of data structures for
example, you can measure the "success" by comparing relatively to
other data structures and say that this data structure is more
performant in the criterion of space than the others. So in computer
science, being a performant data structure is measured relative to the
performance of other data structures, and I think that performance can
be used as a "generalization" that replaces efficiency, because we can
for example give scores to the different criteria and see in which
criterion we are performant.
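
For example, here is a rough sketch of scoring one criterion (just an
illustration, not a real measurement, and std::vector<int> and
std::list<int> are only picked as two familiar examples): it estimates
the memory the two containers would need for the same one million
integers, so we can say which one is more performant in the criterion
of space:

#include <cstddef>
#include <cstdio>

int main()
{
    const std::size_t n = 1000000;

    // std::vector<int> stores its elements contiguously:
    // roughly sizeof(int) bytes per element (ignoring growth slack).
    const std::size_t vectorBytes = n * sizeof(int);

    // std::list<int> allocates one node per element: the int plus two
    // pointers (prev/next); real allocator overhead makes each node
    // even bigger, so this is only a lower bound.
    const std::size_t listBytes = n * (sizeof(int) + 2 * sizeof(void*));

    std::printf("vector: ~%zu bytes, list: >= %zu bytes\n",
                vectorBytes, listBytes);
    std::printf("the vector is more space-efficient here, by a factor "
                "of about %.1f\n",
                (double)listBytes / (double)vectorBytes);
    return 0;
}

The same kind of score could be computed for other criteria (time,
energy, and so on), and that is the sense in which performance can act
as a generalization of efficiency.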



Thank you,
Amine Moulay Ramdane.

Bonita Montero

Oct 16, 2019, 7:48:42 AM

Amine, a quest for you:
Database servers and operating-system kernels mostly use LRU as the
scheme to evict old buffers from their cache. One issue with LRU is
that an LRU structure can't be updated by multiple threads
simultaneously, so you have to have a global lock.
I managed to write an LRU caching class that can update the links in
the LRU list to put the most recently fetched block at the head of the
list without any lock in almost every access. Only when flushing an
entry or inserting a new one do I have to lock the structure
completely; but in contrast to cache hits this happens at a frequency
roughly an order of magnitude lower because of the slowness of
disk/SSD access, so it doesn't really hurt.
The algorithm is partially locked, partially lock-free. Even the case
of putting cache hits at the head has to be processed in locked mode
in very rare cases. And as I said, inserting and flushing is
conventional locked access.
So the quest for you is: can you guess what I did?
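
Just to show the kind of thing I mean, and explicitly NOT what I did:
one well-known way to keep cache hits away from the lock is to defer
the promotions, i.e. a hit only records its key in a small buffer
through an atomic counter, and the LRU links are rewritten later, when
insert or evict holds the exclusive lock anyway. A rough sketch of
that idea (it uses a read-write lock rather than being fully
lock-free, so hits never take the exclusive lock and never touch the
list; the class and its name are invented just for the sketch):

#include <array>
#include <atomic>
#include <cstddef>
#include <list>
#include <mutex>
#include <shared_mutex>
#include <unordered_map>
#include <utility>

template<class Key, class Value>
class DeferredLruCache   // hypothetical name, only for this sketch
{
    using List   = std::list<std::pair<Key, Value>>;
    using ListIt = typename List::iterator;

    std::shared_mutex mtx_;      // shared for hits, exclusive for insert/evict
    List list_;                  // front = most recently used
    std::unordered_map<Key, ListIt> map_;
    std::size_t capacity_;

    static constexpr std::size_t BUF = 1024;
    std::array<Key, BUF> pending_;            // keys awaiting promotion
                                              // (Key must be default-constructible here)
    std::atomic<std::size_t> pendingCount_{0};

    // Apply the buffered promotions; caller must hold the exclusive lock.
    void drain()
    {
        std::size_t n = pendingCount_.exchange(0, std::memory_order_acquire);
        if (n > BUF) n = BUF;
        for (std::size_t i = 0; i < n; ++i) {
            auto it = map_.find(pending_[i]);
            if (it != map_.end())
                list_.splice(list_.begin(), list_, it->second); // move to head
        }
    }

public:
    explicit DeferredLruCache(std::size_t capacity) : capacity_(capacity) {}

    // Hit path: only a shared lock, the list links are never touched here.
    bool get(const Key& k, Value& out)
    {
        std::shared_lock<std::shared_mutex> g(mtx_);
        auto it = map_.find(k);
        if (it == map_.end())
            return false;
        out = it->second->second;
        // fetch_add hands every concurrent reader its own slot, so two
        // readers never write the same cell of the buffer.
        std::size_t slot = pendingCount_.fetch_add(1, std::memory_order_relaxed);
        if (slot < BUF)
            pending_[slot] = k;   // remember the hit for a later drain()
        // else: buffer full (rare), the promotion is simply skipped
        return true;
    }

    // Miss path: conventional exclusive lock.
    void put(const Key& k, const Value& v)
    {
        std::unique_lock<std::shared_mutex> g(mtx_);
        drain();                                  // honour the recorded hits first
        auto it = map_.find(k);
        if (it != map_.end()) {
            it->second->second = v;
            list_.splice(list_.begin(), list_, it->second);
            return;
        }
        list_.emplace_front(k, v);
        map_[k] = list_.begin();
        if (map_.size() > capacity_) {            // evict the least recently used
            map_.erase(list_.back().first);
            list_.pop_back();
        }
    }
};

With such a scheme the order is only approximately LRU between two
drains, since a hit is only honoured at the next insert or evict.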

Wisdom90

Oct 16, 2019, 1:29:38 PM

I think I am also smart, so I have just quickly found a solution that
is scalable and that is not your solution. It needs my hashtable that
scales very well and my fully scalable FIFO queue that I have
invented. But I will not patent it, because I will let you patent
yours, since I have seen you speaking about it on the C++ newsgroup.


But my solution is not lock-free; it uses locks and it is scalable.

Bonita Montero

Oct 16, 2019, 2:14:22 PM

> I think I am also smart, so I have just quickly found a solution that
> is scalable and that is not your solution. It needs my hashtable that
> scales very ...

A scalable hashtable is easy.

> ... scales very well and my fully scalable FIFO queue that I have
> invented.

That's also easy.

> But I will not patent it, ...

There's prior art for both.
As someone else told you: your ideas aren't new.

> because I will let you patent yours, since I have seen you speaking
> about it on the C++ newsgroup.

I'd have no problem if you show my idea as prior art.
So go on.

> But my solution is not lock-free; it uses locks and it is scalable.

A locking LRU-list is never scalable.

Wisdom90

Oct 16, 2019, 2:19:36 PM

> A locking LRU-list is never scalable.

I said it uses locks and it is scalable, because locks are distributed
like when you does lock striping. So there is no logical contradiction
in my writing.

Wisdom90

Oct 16, 2019, 2:20:18 PM

On 10/16/2019 2:14 PM, Bonita Montero wrote:
I correct:

I said it uses locks and it is scalable, because locks are distributed
like when you do lock striping. So there is no logical contradiction in
my writing.
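
For those who don't know the term, lock striping means splitting the
table into a fixed number of independent stripes, each with its own
lock, so threads that hash to different stripes never contend. Here is
a minimal sketch of the idea (just an illustration, not my actual
code, and the class name is invented for the sketch):

#include <cstddef>
#include <functional>
#include <mutex>
#include <optional>
#include <unordered_map>

template<class Key, class Value, std::size_t StripeCount = 64>
class StripedHashMap     // hypothetical name, only for this sketch
{
    struct Stripe {
        std::mutex mtx;
        std::unordered_map<Key, Value> map;
    };
    Stripe stripes_[StripeCount];

    Stripe& stripeFor(const Key& k)
    {
        // The key's hash selects which stripe (and therefore which lock)
        // this operation will use.
        return stripes_[std::hash<Key>{}(k) % StripeCount];
    }

public:
    void put(const Key& k, const Value& v)
    {
        Stripe& s = stripeFor(k);
        std::lock_guard<std::mutex> g(s.mtx);   // locks only one stripe
        s.map[k] = v;
    }

    std::optional<Value> get(const Key& k)
    {
        Stripe& s = stripeFor(k);
        std::lock_guard<std::mutex> g(s.mtx);
        auto it = s.map.find(k);
        if (it == s.map.end())
            return std::nullopt;
        return it->second;
    }
};

Striping distributes the contention across the table; the point of
disagreement is whether the single recency order of an LRU list can be
distributed in the same way.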

Bonita Montero

Oct 16, 2019, 11:07:53 PM

> I said it uses locks and it is scalable, because locks are distributed
> like when you does lock striping. So there is no logical contradiction
> in my writing.

Striped locks are nothing new with hashtables.
Java has ConcurrentHashMap<K, V>.