
SGCL - Garbage Collector for C++

Sebastian Nibisz
Feb 28, 2008, 4:41:58 PM
SGCL is a precise, parallel garbage collection library for C++ (at this time
for Windows 32/64 only). SGCL is free software published under the University
of Illinois/NCSA Open Source License.

Get it at:
http://sourceforge.net/projects/sgcl/

Regards
Sebastian Nibisz


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Larry Evans
Feb 29, 2008, 6:00:50 AM
On 02/28/08 15:41, Sebastian Nibisz wrote:
> SGCL is precise, parallel garbage collection library for C++ (at this time

How can it be precise without a compiler that identifies, for each class
template or struct, the pointers within that template or struct? Also, how
are pointers on the stack determined precisely without help from the
compiler?

> for Windows 32/64 only). SGCL is free software published under
> University of
> Illinois/NCSA Open Source License.
>
> Get it at:
> http://sourceforge.net/projects/sgcl/
>

--

Antoon
Feb 29, 2008, 6:42:18 AM

"Sebastian Nibisz" <EB...@poczta.onet.pl> schreef in bericht
news:fq6fct$rio$1...@inews.gazeta.pl...

> SGCL is precise, parallel garbage collection library for C++ (at this time
> for Windows 32/64 only). SGCL is free software published under University of
> Illinois/NCSA Open Source License.
>
> Get it at:
> http://sourceforge.net/projects/sgcl/
>

It would be nice to be able to read some documentation without having to
download the code (and potentially have to extract it from the source code
yourself) just to see if it is applicable.

Antoon

{ Edits: quoted signature and clc++m banner removed. Please don't quote
extraneous material. -mod }

Sebastian Nibisz
Feb 29, 2008, 1:42:31 PM
>> SGCL is precise, parallel garbage collection library for C++ (at this
>> time
>
> How can it be precise without a compiler which constructs for each class
> template or struct, the pointers within that template or struct? Also,
> how are pointers on stack determined precisely without help from compiler?

SGCL defines smart pointer classes gc and const_gc, which behave mostly like
normal plain C/C++ pointers except that you don't have to explicitly free
them.

gc<int> val = gcnew int;
gc<int> arr = gcnew int[100];

Smart pointers update data in the GC system. Collection is executed in a
separate thread. Application threads must be created using the Sgcl::Thread
class.
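
A rough usage sketch (illustrative only: the node type below is made up and
the SGCL headers are assumed to be included; only the gc<T>/gcnew syntax is
taken from the examples above):

//----------------------------------------------------------
struct node
{
    gc<node> next;   // traced link, never freed explicitly
    int value;
};

void build_list()
{
    gc<node> head;                 // null gc pointer
    for (int i = 0; i < 3; ++i)
    {
        gc<node> n = gcnew node;   // allocate a collected node
        n->value = i;
        n->next = head;
        head = n;
    }
}   // 'head' goes out of scope; unreachable nodes are reclaimed later
//----------------------------------------------------------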

rgds
Sebastian Nibisz

Larry Evans
Feb 29, 2008, 4:16:34 PM
On 02/29/08 12:42, Sebastian Nibisz wrote:
>>> SGCL is precise, parallel garbage collection library for C++ (at this
>>> time
[snip]

> SGCL defines smart pointer classes gc & const_gc which behave mostly like
> normal plain C/C++ pointers but you don't have to
> explicitly free them.
>
> gc<int> val = gcnew int;
> gc<int> arr = gcnew int[100];
>
> Smart pointers are updating data in the GC system. Collection is
[snip]
Ah! So, IIUC, the smart pointer constructor registers the pointer in
the GC system and the smart pointer destructor deregisters the pointer
in the GC system. So, there's a storage cost (for the registration
data) and a time cost (to avoid duplicate registrations, I suppose).

Could you give some estimate of these costs?

Sebastian Nibisz
Feb 29, 2008, 8:18:24 PM
>> Smart pointers are updating data in the GC system. Collection is
> [snip]
> Ah! So, IIUC, the smart pointer constructor registers the pointer in
> the GC system and the smart pointer destructor deregisters the pointer
> in the GC system. So, there's a storage cost (for the registration
> data) and a time cost (to avoid duplicate registrations, I suppose).
>
> Could you give some estimate of these costs?

It isn't easy to estimate these costs.

I will give the comparison...

Creating and destroying null pointer (billion operations):
{
int* volatile ptr1; // 0s
shared_ptr<int> s_ptr; // 0s
gc<int> gc_ptr; // 9.5s
}

Creating and destroying a pointer (billion operations):
{
int* volatile ptr1 = ptr2; // 1.2s
shared_ptr<int> s_ptr1 = s_ptr2; // 20.1s
gc<int> gc_ptr1 = gc_ptr2; // 9.6s
}

Copying pointer (billion operations):
{
ptr1 = ptr2; // 1.3s
s_ptr1 = s_ptr2; // 23.7s
gc_ptr1 = gc_ptr2; // 2.3s
}
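
To put these numbers in per-operation terms: a billion operations in
9.5-9.6 s is about 9.5 ns per gc pointer construction/destruction and about
2.3 ns per gc pointer copy, compared with roughly 1.2-1.3 ns for a raw
pointer and 20-24 ns for a shared_ptr in the same runs.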

rgds
Sebastian Nibisz

Mathias Gaunard
Mar 1, 2008, 9:17:11 AM
On Mar 1, 02:18, "Sebastian Nibisz" <EB...@poczta.onet.pl> wrote:

> Creating and destroying null pointer (billion operations):
> {
> int* volatile ptr1; // 0s

This doesn't initialize to null; it is uninitialized.

AlfC
Mar 1, 2008, 11:45:18 AM
On Feb 29, 10:42 am, "Sebastian Nibisz" <EB...@poczta.onet.pl> wrote:
> gc<int> val = gcnew int;
> gc<int> arr = gcnew int[100];

How do you achieve this syntax? I can't find it in your source code.
I am asking about the "gcnew int" part.

It is not a function call, it is not an operator or a member function,
and apparently it is not a macro either. What am I missing?

Thanks,
Alfredo

Sebastian Nibisz
Mar 1, 2008, 1:49:55 PM
>> Creating and destroying null pointer (billion operations):
>> {
>> int* volatile ptr1; // 0s
>
> This doesn't initialize to null, this is uninitialized.

Yes, that's true. The description of the test is inaccurate.

rgds
Sebastian Nibisz

Sebastian Nibisz
Mar 1, 2008, 2:12:20 PM
>> gc<int> val = gcnew int;
>> gc<int> arr = gcnew int[100];
>
> How do you achieve this syntax?, I can't find it in your source code.
> I am asking about the "gcnew int" part.
>
> It is not a function call, it is not an operator or a member function
> and apparently is not a macro either. What am I missing?

In file 'GCNew.hpp'

...

template<class T>
inline gc<T> operator %(const GCInitNew&, T* ptr)
{
return GCNew<T>(ptr);
}

...

#define gcnew Sgcl::Detail::GCInitNew() % new(Sgcl::Detail::GC)
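
In other words, the trick is operator precedence: the new-expression binds
tighter than %, so the freshly allocated raw pointer becomes the right-hand
operand of operator %, which wraps it in a gc<T>. Below is a standalone toy
sketch of the same idea (the names and the plain new are illustrative;
SGCL's real macro uses the placement form new(Sgcl::Detail::GC) shown
above):

//----------------------------------------------------------
#include <iostream>

template<class T>
struct gc                        // toy stand-in for SGCL's gc<T>
{
    T* p;
    gc(T* q = 0) : p(q) {}
    T& operator*() const { return *p; }
};

struct GCInitNew {};             // empty tag type, as in the excerpt above

template<class T>
inline gc<T> operator %(const GCInitNew&, T* ptr)
{
    return gc<T>(ptr);           // wrap the freshly allocated object
}

// "gcnew T(args)" expands to "GCInitNew() % new T(args)"; the whole
// new-expression is evaluated first and its result becomes the right-hand
// operand of operator %.
#define gcnew GCInitNew() % new

int main()
{
    gc<int> val = gcnew int(42);
    std::cout << *val << '\n';   // prints 42 (this toy version simply leaks)
}
//----------------------------------------------------------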

rgds
Sebastian Nibisz

Chris Thomasson
Mar 2, 2008, 7:45:53 AM
"Sebastian Nibisz" <EB...@poczta.onet.pl> wrote in message
news:fq6fct$rio$1...@inews.gazeta.pl...

> SGCL is precise, parallel garbage collection library for C++ (at this time
> for Windows 32/64 only). SGCL is free software published under University
> of
> Illinois/NCSA Open Source License.
[...]

I don't have time to read the code now. Do you need to use memory
barriers and atomic operations? BTW, how does this compare to a PDR
algorithm which does not use any expensive operations (e.g., membar or
interlocking)? How are you getting around acquire/release semantics? If
you have not figured out how to do this, then your GC is not that
special. If you have, and it gets around prior art/patents, I would like
you to explain it to me. Have you done a patent search yet?


Thanks.

Jeff Schwab
Mar 2, 2008, 7:44:13 AM
Sebastian Nibisz wrote:
>>> gc<int> val = gcnew int;
>>> gc<int> arr = gcnew int[100];
>>
>> How do you achieve this syntax?, I can't find it in your source code.
>> I am asking about the "gcnew int" part.
>>
>> It is not a function call, it is not an operator or a member function
>> and apparently is not a macro either. What am I missing?
>
> In file 'GCNew.hpp'
>
> ...
>
> template<class T>
> inline gc<T> operator %(const GCInitNew&, T* ptr)
> {
> return GCNew<T>(ptr);
> }
>
> ...
>
> #define gcnew Sgcl::Detail::GCInitNew() % new(Sgcl::Detail::GC)

I don't see at all how that works, but the syntax certainly is nice.

Sebastian Nibisz
Mar 2, 2008, 1:12:27 PM
> Do you need to use memory barriers and atomic operations?

Creating and destroying threads requires a memory barrier. Other operations
don't require memory barriers or atomic operations.

> BTW, how does this compare to a PDR algorithm which does not use any
> expensive operations (e.g., membar or interlocking)?

I don't know the PDR algorithm.

> How are you getting around acquire/release semantics?

All pointers are buffered. Every thread has a buffered area for its stack.
Smart pointers update data in this buffered area. The garbage collector
switches the buffers and analyses the frozen state. Garbage collection
doesn't stop the application's threads.
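
To picture how smart pointers can make the stack precise without compiler
support (the question raised earlier in the thread), here is a tiny
illustrative sketch of the general self-registration idea - hypothetical
names, C++11 thread_local for brevity, and not SGCL's actual buffered
implementation: each pointer object registers its own address in a
per-thread root list when constructed and removes it when destroyed, so the
collector always knows exactly which stack slots hold pointers.

//----------------------------------------------------------
#include <vector>

struct RootRegistry
{
    std::vector<void**> slots;        // addresses of live root pointers
};

thread_local RootRegistry tl_roots;   // one registry per mutator thread

template<class T>
class root_ptr                        // toy gc-style pointer, illustration only
{
    T* p_;
public:
    explicit root_ptr(T* p = 0) : p_(p)
    {
        tl_roots.slots.push_back(reinterpret_cast<void**>(&p_));  // register
    }
    ~root_ptr()
    {
        tl_roots.slots.pop_back();    // relies on LIFO destruction of automatics
    }
    T* get() const { return p_; }
};
//----------------------------------------------------------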

rgds
Sebastian Nibisz

Chris Thomasson
Mar 3, 2008, 4:13:18 PM
"Sebastian Nibisz" <EB...@poczta.onet.pl> wrote in message
news:fqeiud$q3q$1...@inews.gazeta.pl...

>> Do you need to use memory barriers and atomic operations?
>
> Creating and destroying threads requires using memory barrier.
> Different operations don't require using memory barriers and atomic
> operations.
>
>> BTW, how does this compare to a PDR algorithm which does not use any
>> expensive operations (e.g., membar or interlocking)?
>
> I don't know the PDR algorithm.

It stands for "Partial Copy-On-Write Deferred Reclamation":

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/82376b7295e6ce1a

It's a method for managing memory in non-blocking data-structures. This
includes algorithms such as RCU, SMR, Joe Seigh's RCU+SMR, my vZOOM, proxy
collectors, strongly thread-safe reference counting, etc...


>> How are you getting around acquire/release semantics?
>
> All pointers are buffered. Every thread has buffered area for the stack.
> Smart pointers are updating data in buffered area. Garbage Collector is
> switching buffers and he is analysing frozen state. Garbage collection
> doesn't stop threads of application.

That does not explain how you're getting around memory barriers; I need more
details. Are you using an epoch detection mechanism such as quiescent
states? How are you handling the synchronization during a buffer switch?
BTW, I think you should send follow-ups over to comp.programming.threads,
where the implementation details are more on topic.

mark.z...@gmail.com
Mar 3, 2008, 4:13:18 PM
On Mar 1, 2:12 pm, "Sebastian Nibisz" <EB...@poczta.onet.pl> wrote:
> >> gc<int> val = gcnew int;
> >> gc<int> arr = gcnew int[100];
>
> > How do you achieve this syntax?, I can't find it in your source code.
> > I am asking about the "gcnew int" part.
>
> template<class T>
> inline gc<T> operator %(const GCInitNew&, T* ptr)
> {
> return GCNew<T>(ptr);
>
> }
>
> #define gcnew Sgcl::Detail::GCInitNew() % new(Sgcl::Detail::GC)
>

Order of parameter evaluation for operator % is NOT deterministic.
The compiler can reorder it.
As a result, you may call your operator new on an uninitialized GC.

Chris Thomasson
Mar 3, 2008, 4:13:26 PM

"Sebastian Nibisz" <EB...@poczta.onet.pl> wrote in message
news:fqeiud$q3q$1...@inews.gazeta.pl...

>> Do you need to use memory barriers and atomic operations?
>
> Creating and destroying threads requires using memory barrier.
> Different operations don't require using memory barriers and atomic
> operations.
>
>> BTW, how does this compare to a PDR algorithm which does not use any
>> expensive operations (e.g., membar or interlocking)?
>
> I don't know the PDR algorithm.
>
>> How are you getting around acquire/release semantics?
>
> All pointers are buffered. Every thread has buffered area for the stack.
> Smart pointers are updating data in buffered area. Garbage Collector is
> switching buffers and he is analysing frozen state. Garbage collection
> doesn't stop threads of application.

Have you checked this GC implementation out:

http://hnxgc.harnixworld.com

here is some discussion on it:

http://groups.google.com/group/comp.lang.c++/browse_frm/thread/3011f400cfe1822b

The author is using an analog of the Linux kernel rcu_synchronization()
procedure to ensure memory synchronization. Also, have you checked out any
proxy garbage collector algorithms?

http://groups.google.com/group/comp.programming.threads/msg/41f29efe33e7f124

These collectors can use existing allocator implementations (e.g.,
malloc/free - new/delete) and are very low overhead.
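
To give a feel for the shape of a proxy collector from the caller's side,
here is a deliberately simplified, lock-based sketch of the deferral idea
only (hypothetical names, written with C++11 for brevity; real proxy
collectors replace the mutex with atomic/epoch machinery, which is where
the low overhead comes from): readers bracket traversals with
enter()/leave(), and writers retire() nodes instead of deleting them.

_____________________________________________________________
#include <cstddef>
#include <mutex>
#include <vector>

class ProxyCollector {
    std::mutex m_;
    int readers_;                          // readers inside a protected region
    std::vector<void*> retired_;           // nodes waiting to be freed
    std::vector<void (*)(void*)> deleters_;
public:
    ProxyCollector() : readers_(0) {}

    void enter() { std::lock_guard<std::mutex> g(m_); ++readers_; }

    void leave() {
        std::vector<void*> nodes;
        std::vector<void (*)(void*)> dels;
        {
            std::lock_guard<std::mutex> g(m_);
            if (--readers_ == 0) { nodes.swap(retired_); dels.swap(deleters_); }
        }
        for (std::size_t i = 0; i < nodes.size(); ++i)
            dels[i](nodes[i]);             // safe: no readers remain
    }

    template<class T>
    void retire(T* p) {                    // defer the delete instead of doing it now
        std::lock_guard<std::mutex> g(m_);
        retired_.push_back(p);
        deleters_.push_back([](void* q) { delete static_cast<T*>(q); });
    }
};
_____________________________________________________________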

Alf P. Steinbach
Mar 4, 2008, 11:26:41 AM
* mark.z...@gmail.com:

> On Mar 1, 2:12 pm, "Sebastian Nibisz" <EB...@poczta.onet.pl> wrote:
>>>> gc<int> val = gcnew int;
>>>> gc<int> arr = gcnew int[100];
>>> How do you achieve this syntax?, I can't find it in your source code.
>>> I am asking about the "gcnew int" part.
>> template<class T>
>> inline gc<T> operator %(const GCInitNew&, T* ptr)
>> {
>> return GCNew<T>(ptr);
>>
>> }
>>
>> #define gcnew Sgcl::Detail::GCInitNew() % new(Sgcl::Detail::GC)
>>
>
> Order of parameter evaluation for operator % is NOT deterministic.

I think you mean that argument evaluation order in general is
implementation-specific.


> Compiler can reorder it.
> As the result You may call your operator new on unitialyzed GC.

The above is a bit difficult to relate to the example code.

As I understand it, you're assuming that GCInitNew's constructor
allocates storage and places a pointer to that storage in a global named
GC. And if so then yes it's not portable to assume sequential
evaluation of the arguments. However I don't see any definition of GC.

It could be that the only purpose of GC is to select a specific allocation
function that does something, and that GC is a const object or a function or
whatever matches the argument type of the allocation function in question.


Cheers,

- Alf


--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Xu Weijiang
Mar 4, 2008, 4:21:55 PM
mark.z...@gmail.com writes:

> On Mar 1, 2:12 pm, "Sebastian Nibisz" <EB...@poczta.onet.pl> wrote:
>> >> gc<int> val = gcnew int;
>> >> gc<int> arr = gcnew int[100];
>>
>> > How do you achieve this syntax?, I can't find it in your source code.
>> > I am asking about the "gcnew int" part.
>>
>> template<class T>
>> inline gc<T> operator %(const GCInitNew&, T* ptr)
>> {
>> return GCNew<T>(ptr);
>>
>> }
>>
>> #define gcnew Sgcl::Detail::GCInitNew() % new(Sgcl::Detail::GC)
>>
>
> Order of parameter evaluation for operator % is NOT deterministic.

I think user-defined 'operator %' is deterministic in this regard.

Matti Rintala
Mar 4, 2008, 4:22:22 PM
mark.z...@gmail.com wrote:
>> template<class T>
>> inline gc<T> operator %(const GCInitNew&, T* ptr)
>> {
>> return GCNew<T>(ptr);
>>
>> }
>>
>> #define gcnew Sgcl::Detail::GCInitNew() % new(Sgcl::Detail::GC)
>
> Order of parameter evaluation for operator % is NOT deterministic.
> Compiler can reorder it.
> As the result You may call your operator new on unitialyzed GC.

I had a similar problem in a memory tracking library of mine (Tutnew). I
solved the evaluation-order problem by using ?:

#define new tutnew_temp(__LINE__, __FILE__) ? 0 : new

The same could be used here.
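
For anyone unfamiliar with the trick, here is a standalone illustration of
why it works (hypothetical names, not Tutnew's actual code): the first
operand of ?: is fully evaluated, with a sequence point, before either
branch, so the tracking call is guaranteed to run before the new-expression
that follows it.

#include <cstdio>

static bool record_site(int line, const char* file)
{
    std::printf("allocating at %s:%d\n", file, line);
    return false;                     // always take the ':' branch
}

#define tracked_new record_site(__LINE__, __FILE__) ? 0 : new

int main()
{
    int* p = tracked_new int(7);      // record_site(...) runs first, then new int(7)
    std::printf("%d\n", *p);
    delete p;
    return 0;
}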

--
------------- Matti Rintala ------------ bi...@cs.tut.fi ------------
Painting is the art of inclusion. Photography is an art of exclusion.

Francis Glassborow
Mar 4, 2008, 4:25:07 PM
Alf P. Steinbach wrote:
> * mark.z...@gmail.com:
>> On Mar 1, 2:12 pm, "Sebastian Nibisz" <EB...@poczta.onet.pl> wrote:
>>>>> gc<int> val = gcnew int;
>>>>> gc<int> arr = gcnew int[100];
>>>> How do you achieve this syntax?, I can't find it in your source code.
>>>> I am asking about the "gcnew int" part.
>>> template<class T>
>>> inline gc<T> operator %(const GCInitNew&, T* ptr)
>>> {
>>> return GCNew<T>(ptr);
>>>
>>> }
>>>
>>> #define gcnew Sgcl::Detail::GCInitNew() % new(Sgcl::Detail::GC)
>>>
>>
>> Order of parameter evaluation for operator % is NOT deterministic.
>
> I think you mean that argument evaluation order in general is
> implementation-specific.
>

No, it is worse than that, it is unspecified. That means that the
compiler can choose the order arbitrarily.

--

Sebastian Nibisz
Mar 4, 2008, 4:21:39 PM
> Have you checked this GC implementation out:
>
> http://hnxgc.harnixworld.com
>
> here is some discussion on it:
>
> http://groups.google.com/group/comp.lang.c++/browse_frm/thread/3011f400cfe1822b
>
> The author is using an analog of the Linux kernel rcu_synchronization()
> procedure to ensure memory synchronization. Also, have you checked out any
> proxy garbage collector algorithms?
>
> http://groups.google.com/group/comp.programming.threads/msg/41f29efe33e7f124
>
> These collectors can use existing allocator implementations (e.g.,
> malloc/free - new/delete) and are very low overhead.

I didn't know the HnxGC library. From what I read, it implements the
garbage collector in a different way than the SGCL library does.

I classified the library badly in my first message (the project page
describes it correctly) - the SGCL library implements a concurrent, not a
parallel, garbage collector. Only one garbage collector thread exists. It
runs non-stop and is not synchronized with the other threads.

rgds
Sebastian Nibisz

Greg Herlihy
Mar 4, 2008, 11:07:47 PM
On Mar 3, 1:13 pm, mark.zayt...@gmail.com wrote:
> On Mar 1, 2:12 pm, "Sebastian Nibisz" <EB...@poczta.onet.pl> wrote:
>
> > >> gc<int> val = gcnew int;
> > >> gc<int> arr = gcnew int[100];
>
> > > How do you achieve this syntax?, I can't find it in your source code.
> > > I am asking about the "gcnew int" part.
>
> > template<class T>
> > inline gc<T> operator %(const GCInitNew&, T* ptr)
> > {
> > return GCNew<T>(ptr);
> > }
>
> > #define gcnew Sgcl::Detail::GCInitNew() % new(Sgcl::Detail::GC)
>
> Order of parameter evaluation for operator % is NOT deterministic.
> Compiler can reorder it.
> As the result You may call your operator new on unitialyzed GC.

The "Sgcl::Detail::GC" parameter is just a dummy parameter to new() -
it is not the type of object being allocated. The actual type of
object being allocated in a given instance - would be provided by the
program and would come after the gcnew macro; exactly as shown in the
usage examples above.

Furthermore, there is in fact no apparent dependency between the two
sides of the % operation - so their respective order of evaluation
should not matter. Moreover, it is certain that that both sides of %
expression will have been evaluated before the operator%() function
itself executes; so it is certain that the pointer passed to the
operator%() function will have been initialized.

Greg

Chris Thomasson
Mar 5, 2008, 11:38:38 AM
"Sebastian Nibisz" <EB...@poczta.onet.pl> wrote in message
news:fqhu95$d6a$1...@inews.gazeta.pl...

>> Have you checked this GC implementation out:
>>
>> http://hnxgc.harnixworld.com
>>
>> here is some discussion on it:
>>
>> http://groups.google.com/group/comp.lang.c++/browse_frm/thread/3011f400cfe1822b
>>
>> The author is using an analog of the Linux kernel rcu_synchronization()
>> procedure to ensure memory synchronization. Also, have you checked out
>> any proxy garbage collector algorithms?
>>
>> http://groups.google.com/group/comp.programming.threads/msg/41f29efe33e7f124
>>
>> These collectors can use existing allocator implementations (e.g.,
>> malloc/free - new/delete) and are very low overhead.
>
> I didn't know the HnxGC library. It results from what I read that this
> library is implementing garbage collector in a different way to the SGCL
> library.
>
> Badly I classified the library in my first message (correctly is on the
> project side) - SGCL library is implementing concurrent, not parallel
> garbage collector. Only one thread of the garbage collector exists. The
> thread of the garbage collector is executed non-stop and he isn't being
> synchronized with different threads.

How do your collector thread(s) observe their quiescent state(s)?

Sebastian Nibisz
Mar 5, 2008, 5:13:10 PM
> How do your collector thread(s) observe their quiescent state(s)?

All pointers are recorded in special data buffers. While the garbage
collector is working with one buffer, the other threads update pointers in
the second buffer. After a garbage collection cycle finishes, the buffers
are switched. Switching the buffers doesn't require synchronization; it is
thread-safe.
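
Roughly, the idea looks like this (an illustrative sketch only, not SGCL's
real types, written with C++11 atomics for brevity; one such object exists
per mutator thread and record() is called only by the owning thread):

//----------------------------------------------------------
#include <atomic>
#include <vector>

struct ThreadRootBuffers
{
    std::vector<void*> buf[2];   // two buffers of recorded pointer updates
    std::atomic<int> current;    // index of the buffer the mutator writes to

    ThreadRootBuffers() : current(0) {}

    void record(void* p)         // called by the owning mutator thread
    {
        buf[current.load(std::memory_order_relaxed)].push_back(p);
    }

    std::vector<void*>& freeze_and_swap()   // called by the collector thread
    {
        int old = current.load(std::memory_order_relaxed);
        current.store(old ^ 1, std::memory_order_release);  // mutator moves on
        return buf[old];         // the now-frozen buffer to analyse
    }
};
//----------------------------------------------------------

Note that this naive flip still needs a guarantee that a record() call in
flight on the old buffer has completed and become visible before the frozen
buffer is scanned - which is exactly the memory-visibility question raised
further down the thread.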

rgds
Sebastian Nibisz

Chris Thomasson
Mar 5, 2008, 10:52:00 PM

"Sebastian Nibisz" <EB...@poczta.onet.pl> wrote in message
news:fqmlgg$4cn$1...@inews.gazeta.pl...

>> How do your collector thread(s) observe their quiescent state(s)?
>
> All pointers are being enrolled in special buffers of data. When the
> garbage
> collector is working with one buffer of data, different threads are
> updating
> pointers in the second buffer. After finishing garbage collecting, buffers
> of data are being switched. Switching buffers of data doesn't require the
> synchronization, it is threads-safe.

How could I use your smart-pointer interface to do something like this:

_____________________________________________________________
/* assume this is all garbage collected */

/* Hypothetical helpers: gc_malloc allocates collected memory, and the
   CASPTR_RELEASE/CASPTR_ACQUIRE macros are assumed to return nonzero on
   failure, so each loop retries until the compare-and-swap succeeds. */

struct llnode {
    llnode* next;
    void* state;
};


static llnode* volatile g_list = 0;


int list_push(void* const state) {
    llnode* const _this = (llnode*)gc_malloc(sizeof(*_this));
    if (_this) {
        llnode* cmp;
        _this->state = state;
        do {
            cmp = g_list;
            _this->next = cmp;
        } while (CASPTR_RELEASE(&g_list, cmp, _this));
        return 0;
    }
    return ENOMEM;  /* from <errno.h> */
}


void* list_pop() {
    void* state;
    llnode* cmp;
    do {
        if (! (cmp = g_list)) {
            return 0;
        }
    } while (CASPTR_ACQUIRE(&g_list, cmp, cmp->next));
    state = cmp->state;
    cmp = 0;
    return state;
}
_____________________________________________________________

Since I am using GC, there is no ABA problem, and everything gets cleaned
up. What I am most interested in is how this would look with your smart
pointer interface?

Chris Thomasson
Mar 6, 2008, 11:37:32 AM

"Sebastian Nibisz" <EB...@poczta.onet.pl> wrote in message
news:fqmlgg$4cn$1...@inews.gazeta.pl...

>> How do your collector thread(s) observe their quiescent state(s)?
>
> All pointers are being enrolled in special buffers of data. When the
> garbage
> collector is working with one buffer of data, different threads are
> updating
> pointers in the second buffer. After finishing garbage collecting, buffers
> of data are being switched. Switching buffers of data doesn't require the
> synchronization, it is threads-safe.

How do you ensure proper memory visibility wrt an application thread's
operations on the buffer? The collector thread needs some kind of memory
synchronization with its mutators. Also, how is the buffer switching
different from real-time RCU with an N=2 collector setup?

Can I do this:

___________________________________________________
static gc<int> g_val = gcnew int(0);


void thread_writer() {
    gc<int> cmp, xchg = gcnew int;
    do {
        cmp = g_val;
        *xchg = (*cmp) + 1;
    } while (! g_val.cas_release(cmp, xchg));
    printf("inc %d to %d\n", *cmp, *xchg);
}


void thread_reader() {
    gc<int> l_val = g_val;
    if (l_val) {
        printf("read %d\n", *l_val);
    }
}
___________________________________________________

-- or --


___________________________________________________
template<typename T>
struct llnode {
    gc<llnode<T> > next;
    gc<T> state;
};


struct foo {
    typedef gc<llnode<foo> > gcptr;
};


static foo::gcptr g_list;


void llpush(foo::gcptr& n) {
    foo::gcptr cmp;
    do {
        n->next = cmp = g_list;
    } while (! g_list.cas_release(cmp, n));
}


bool llpop(foo::gcptr& ret) {
    foo::gcptr cmp, xchg;
    do {
        if (! (cmp = g_list)) {
            return false;
        }
        xchg = cmp->next;
    } while (! g_list.cas_acquire(cmp, xchg));
    ret = cmp;
    return true;
}
___________________________________________________


I want to be able to do lock/wait-free programming with your smart pointer
interface. With proxy GC, I can use plain raw pointers for everything. How
is your scheme better than a proxy gc wrt creating non-blocking algorithms?

Sebastian Nibisz
Mar 6, 2008, 11:42:20 AM

//----------------------------------------------------------
struct node
{
    node(const gc<node>& n, void* s) : next(n), state(s)
    {
    }

    gc<node> next;
    void* state;
};

static gc<node> g_list;
static Sgcl::Mutex g_lmutex;

void list_push(void* state) throw(std::bad_alloc)
{
    Sgcl::AutoLock l(g_lmutex);
    g_list = gcnew node(g_list, state);
}

void* list_pop()
{
    Sgcl::AutoLock l(g_lmutex);
    if (g_list)
    {
        void* state = g_list->state;
        g_list = g_list->next;
        return state;
    }
    else return 0;
}
//----------------------------------------------------------

rgds
Sebastian Nibisz

Sebastian Nibisz
Mar 6, 2008, 3:57:21 PM
> How you ensure proper memory visibility wrt an application thread's
> operations on the buffer? The collector threads needs some kind of memory
> synchronization with its mutators. Also, how is the buffer switching
> different from real-time RCU with N=2 collector setup?

All pointers are recorded in special data buffers. While the garbage
collector is working with one buffer, the other threads update pointers in
the second buffer. After a garbage collection cycle finishes, the buffers
are switched. Switching the buffers doesn't require synchronization; it is
thread-safe.

> I want to be able to do lock/wait-free programming with your smart pointer
> interface. With proxy GC, I can use plain raw pointers for everything. How
> is your scheme better than a proxy gc wrt creating non-blocking
> algorithms?

Lock-free code in the way you presented isn't possible: the assignment
operation isn't atomic. All the remaining operations, including the updates
made in the garbage collector's system, are atomic.

The assignment operator in the gc class:

template<class U>
gc& operator =(const gc<U>& gc)
{
    _Pointer = gc._Pointer;
    _MemHeader = gc._MemHeader;
    _ObjectHeader->Set(_MemHeader);

    return *this;
}

rgds
Sebastian Nibisz

Chris Thomasson
Mar 6, 2008, 6:42:25 PM
"Sebastian Nibisz" <EB...@poczta.onet.pl> wrote in message
news:fqon50$aec$1...@inews.gazeta.pl...

>> How could I use your smart-pointer interface to do something like this:
>>
>> _____________________________________________________________
[...]>> _____________________________________________________________

>>
>> Since I am using GC, there is no ABA problem, and everything gets cleaned
>> up. What I am most interested in is how this would look with your smart
>> pointer interface?
>
> //----------------------------------------------------------
[...]
> //----------------------------------------------------------

Okay. AFAICT, I cannot create non-blocking algorithms with your
smart-pointer interface. Am I correct?

Sebastian Nibisz
Mar 25, 2008, 7:52:28 PM
Hi.

I added the possibility of locking resources at the level of individual
pointers. The locking mechanism uses the "lock cmpxchg" instruction. This
solution isn't more efficient than the system's locking mechanisms; all
atomic operations are very time-consuming.
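
For illustration, the kind of primitive this maps to (a sketch only,
written with C++11 atomics; this is not SGCL's actual lock()
implementation): a minimal spin lock whose acquire loop compiles to a lock
cmpxchg on x86.

//----------------------------------------------------------
#include <atomic>

class SpinLock
{
    std::atomic<int> state_;              // 0 = unlocked, 1 = locked
public:
    SpinLock() : state_(0) {}

    void lock()
    {
        int expected = 0;
        // retry until we atomically change 0 -> 1 (lock cmpxchg on x86)
        while (!state_.compare_exchange_weak(expected, 1,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed))
        {
            expected = 0;                 // compare_exchange wrote back the old value
        }
    }

    void unlock()
    {
        state_.store(0, std::memory_order_release);
    }
};
//----------------------------------------------------------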

rgds
Sebastian Nibisz

> Can I do this:
> ...

//-------------------------------------------------------------------------------
void thread_writer()
{
    gc<int> val = gcnew int;

    lock(g_val)
    {
        *val = *g_val + 1;
        g_val = val;
    }
}

void thread_reader()
{
    gc<int> l_val;
    lock(g_val) // copy operation isn't atomic
    {
        l_val = g_val;
    }

    if (l_val)
    {
        printf("read %d\n", *l_val);
    }
}

//-------------------------------------------------------------------------------
//-------------------------------------------------------------------------------
struct node
{
    node(void* s) : state(s) {}

    gc<node> next;
    void* state;
};

static gc<node> g_list;

void list_push(void* state) throw(std::bad_alloc)
{
    gc<node> n = gcnew node(state);
    lock(g_list)
    {
        n->next = g_list;
        g_list = n;
    }
}

void* list_pop()
{
    void* state = 0;
    lock(g_list)
    {
        if (g_list)
        {
            state = g_list->state;
            g_list = g_list->next;
        }
    }
    return state;
}

//-------------------------------------------------------------------------------
