
Lock-Free Collectors, without atomic_ptr...


Joe Seigh

Aug 4, 2003, 7:11:34 AM

SenderX wrote:
>
> It is possible to create dynamic lock-free lists, that allow for concurrent
> lock-free pushes, pops, and iterations. You can do all of this without
> atomic_ptr, believe it or not...

...
> What do you think of it?

The K42 project, http://citeseer.nj.nec.com/54799.html , uses a single reference count for
proxy GC. It's a distributed reference count, sum of local processor reference counts,
and they have to use a separate thread to check for count == 0. For their system, they
claim there's no problem with the count not going to zero frequently enough. But they
have control over the application. In general, you might have a problem with this
scheme, count not going to zero due to constant overlapping of read access, so you'd
have to qualify the conditions required for it to work.
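A minimal sketch of that idea (my own illustration with made-up names and
modern C++ atomics, not K42's actual code): each processor increments and
decrements only its own padded slot, and a separate sweeper thread sums the
slots to decide whether the proxy can be collected. The zero sum is only
meaningful once the proxy has been unlinked, so no new references can appear
while the sweeper is summing.

#include <atomic>
#include <cstddef>

// Hypothetical distributed (per-slot) reference count.  Readers touch
// only their own cache-line-padded slot; a sweeper thread sums all the
// slots and reports when the total has reached zero.
struct alignas(64) Slot
{
    std::atomic<long> count{0};
};

class DistributedRefCount
{
public:
    static const std::size_t kSlots = 64;   // e.g. one slot per processor

    void acquire( std::size_t slot ) { slots_[slot].count.fetch_add(1); }
    void release( std::size_t slot ) { slots_[slot].count.fetch_sub(1); }

    // Called periodically by the dedicated sweeper thread.  A slot may
    // go negative if a reference is released on a different processor
    // than it was acquired on; only the sum is meaningful.
    bool total_is_zero() const
    {
        long sum = 0;
        for ( std::size_t i = 0; i < kSlots; ++i )
            sum += slots_[i].count.load();
        return sum == 0;
    }

private:
    Slot slots_[kSlots];
};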

Joe Seigh

SenderX

Aug 4, 2003, 2:43:21 PM
> It's a distributed reference count, sum of local processor reference counts,
> and they have to use a separate thread to check for count == 0. For their
> system, they claim there's no problem with the count not going to zero
> frequently enough.

When I was first implementing this, I too was having problems with nodes not
being deleted fast enough to keep up with massive testing loads.


> In general, you might have a problem with this scheme, count not going to
> zero due to constant overlapping of read access, so you'd have to qualify
> the conditions required for it to work.

I totally understand your point.

In order for that to happen, there would have to be multiple threads,
"concurrently" iterating, ALL the time...

In order to take pressure off the list, I could make the list object's
storage more fine-grained...

My object is ac::stack, so I could code this:

template< typename T >
class lists
{

...

private:

  // Get the bucket a pointer hashes to
  ac::stack< T >& hash( void* ptr )
  {
    ...
  }

private:

  ac::stack< T > m_buckets[1024];

};

Now massive read access would have to go through 1024 "separate" buckets.
Each bucket has its own collector...
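Purely for illustration (the real hash is elided above), the bucket
selection could be as simple as mixing the pointer bits and masking down
to the power-of-two bucket count:

#include <cstddef>
#include <cstdint>

// Illustrative bucket selection for a power-of-two bucket count.  The
// shift discards the low bits that are identical for every aligned
// allocation, so nodes spread across the buckets instead of clumping.
inline std::size_t bucket_index( void* ptr, std::size_t nbuckets )
{
    std::uintptr_t p = reinterpret_cast< std::uintptr_t >( ptr );
    p ^= p >> 16;                          // fold high bits into low bits
    return ( p >> 4 ) & ( nbuckets - 1 );  // nbuckets must be a power of two
}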

This should be sufficient for massive iteration, without garbage starvation.

;)


Any thoughts?

--
The designer of the experimental, SMP and HyperThread friendly, AppCore
library.

http://AppCore.home.comcast.net


Joe Seigh

Aug 4, 2003, 3:38:22 PM

SenderX wrote:
>
> > In general, you might have a problem with this scheme, count not going to
> > zero due to constant overlapping of read access, so you'd have to qualify
> > the conditions required for it to work.
>
> I totally understand your point.
>
> In order for that to happen, there would have to be multiple threads,
> "concurrently" iterating, ALL the time...
>
> In order to take pressure off the list, I could make the list object's
> storage more fine-grained...
>

A better way to qualify this would be to say it should be limited
to the same environments that a reader/writer lock subject to possible
writer starvation would be limited to. There are valid kinds of
environments for that kind of r/w lock and they would be valid for
this kind of proxy GC also.

Joe Seigh

SenderX

Aug 4, 2003, 3:45:10 PM
This code:

<original>

stack::pop(...)
...
if ( ! Local.node )
{
__asm{ nop };
//__asm{ pause };

continue;
}
</original>


HAS to be this:

if ( ! Local.node )
{
__asm{ nop };
//__asm{ pause };


// Need to re-read the global!!!!!
Local.value = m_front.value;

continue;
}


It's fixed on the site now.


DOH!

Damn cut and paste!
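
For context, the reason the re-read matters: in a CAS-retry pop loop the
local snapshot of the shared front is what the CAS compares against, so
spinning on a stale snapshot never observes a later push. A generic sketch
of the pattern, with placeholder names and modern C++ atomics (not
AppCore's actual code, and ignoring ABA, which the collector is there to
deal with):

#include <atomic>

// Generic CAS-retry pop sketch; placeholder types and names.
// Dereferencing local->next assumes nodes are kept alive by a proxy
// collector like the ones discussed in this thread.
struct Node { Node* next; /* payload ... */ };

std::atomic<Node*> g_front{ nullptr };

Node* pop()
{
    Node* local = g_front.load();
    for (;;)
    {
        if ( !local )
        {
            // Re-read the shared front before retrying, exactly as in
            // the fix above; a bare 'continue' would spin forever on a
            // stale null snapshot.
            local = g_front.load();
            if ( !local )
                return nullptr;     // empty (the original spins instead)
            continue;
        }
        // Try to swing the front from our snapshot to the next node.
        // On failure compare_exchange_weak refreshes 'local' for us,
        // which is the same re-read the fix above performs by hand.
        if ( g_front.compare_exchange_weak( local, local->next ) )
            return local;
    }
}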

SenderX

Aug 4, 2003, 4:17:13 PM
> There are valid kinds of
> environments for that kind of r/w lock and they would be valid for
> this kind of proxy GC also.

Yup.

I could try something else to stave off this problem: ( using the
hash-bucket design )

What if access to a bucket's garbage collector was guarded with a special
kind of lock?

This lock would:

Allow multiple threads lock-free access to the list when the garbage node
count is below a certain level...

When a bucket's garbage count reaches its high-watermark, the lock blocks
new accessors while the threads already inside leave and the garbage is
dumped. Accessing threads will have to leave their bucket if they want to
iterate the whole list. The act of leaving a bucket will decrement its
garbage collector's count. When all is finished, it wakes all waiting threads.

The granularity of the list storage would also help out a great deal.

Hmm, there have to be ways around this without having the user conform to
rules...
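
A hedged sketch of what such a gate might look like, with names of my own
invention and modern C++ primitives (not AppCore code): readers pass
straight through while the garbage count is below the watermark; once it
crosses, new readers block, the collector waits for the in-flight readers
to drain, dumps the garbage, and wakes everybody back up.

#include <atomic>
#include <condition_variable>
#include <mutex>

// Illustrative per-bucket "collector gate".  enter()/leave() bracket
// read access to a bucket.  retired_one() is called, outside any
// enter()/leave() pair, each time a node is deferred for deletion;
// free_garbage is the caller's reclamation hook.
class CollectorGate
{
public:
    explicit CollectorGate( long watermark ) : watermark_( watermark ) { }

    void enter()
    {
        for (;;)
        {
            // seq_cst increment then load: either we see the gate
            // closing, or the collector's drain check sees our count.
            readers_.fetch_add( 1 );
            if ( !collecting_.load() )
                return;                     // common case: no lock touched
            // A dump is in progress: back out, wait for it, then retry.
            leave();
            std::unique_lock<std::mutex> lk( gate_ );
            cv_.wait( lk, [this]{ return !collecting_.load(); } );
        }
    }

    void leave()
    {
        if ( readers_.fetch_sub( 1 ) == 1 && collecting_.load() )
        {
            std::lock_guard<std::mutex> lk( gate_ );    // slow path only
            cv_.notify_all();               // last reader out wakes the dump
        }
    }

    template< class FreeGarbage >
    void retired_one( FreeGarbage free_garbage )
    {
        if ( garbage_.fetch_add( 1 ) + 1 < watermark_ )
            return;                         // still below the high-watermark
        std::unique_lock<std::mutex> lk( gate_ );
        if ( collecting_.load() )
            return;                         // someone else is already dumping
        collecting_.store( true );          // close the gate to new readers
        cv_.wait( lk, [this]{ return readers_.load() == 0; } );
        free_garbage();                     // the bucket has no readers now
        garbage_.store( 0 );
        collecting_.store( false );         // reopen the gate
        cv_.notify_all();                   // let blocked readers back in
    }

private:
    std::mutex gate_;
    std::condition_variable cv_;
    std::atomic<long> readers_{ 0 };
    std::atomic<long> garbage_{ 0 };
    std::atomic<bool> collecting_{ false };
    long watermark_;
};

Whether this really beats letting readers help dump the garbage themselves
is an open question; it is only meant to make the high-watermark idea
concrete.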
