    void
    knote_release(struct filter *filt, struct knote *kn)
    {
        int ref;

        ref = atomic_dec(&kn->kn_ref);
        if (ref == 0) {
            dbg_printf("freeing knote at %p, rc=%d", kn, ref);
            pthread_rwlock_wrlock(&filt->kf_knote_mtx);
            knote_free(filt, kn);
            pthread_rwlock_unlock(&filt->kf_knote_mtx);
        } else if (ref < 0) {
            dbg_printf("WARN: knote %p rc=%d would be freed twice, check ref counting",
                kn, ref);
            abort();
        } else {
            dbg_printf("NOT freeing knote %p rc=%d", kn, ref);
        }
    }
Reference counting for knotes is not fully implemented, so the
knote_release() function is basically doing the same thing as calling
knote_free(). The missing functionality would be to call knote_retain() in
conjunction with knote_lookup() so that acquiring a reference to a knote
object would actually increment the reference counter.
You are also correct that knote_new() should increment the reference
count and kevent() should decrease the reference count once it no longer
needs to access the knote object.
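The retain/release pairing described above could be sketched as follows. This is an illustrative sketch using C11 atomics rather than libkqueue's own atomic_dec() macro, the struct is a stand-in for the real 'struct knote', and knote_retain() is the not-yet-existing function named above:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Simplified stand-in for 'struct knote'; the real structure
 * carries far more state than just the reference count. */
struct knote {
    atomic_int kn_ref;
};

/* knote_new() hands the caller the first reference. */
struct knote *knote_new(void) {
    struct knote *kn = calloc(1, sizeof(*kn));
    if (kn != NULL)
        atomic_store(&kn->kn_ref, 1);
    return kn;
}

/* knote_lookup() would call this so that acquiring a reference
 * to a knote actually increments the counter. */
void knote_retain(struct knote *kn) {
    atomic_fetch_add(&kn->kn_ref, 1);
}

/* Returns 1 if this call dropped the last reference and freed kn. */
int knote_release(struct knote *kn) {
    int ref = atomic_fetch_sub(&kn->kn_ref, 1) - 1;
    assert(ref >= 0);           /* ref < 0 indicates a double-free */
    if (ref == 0) {
        free(kn);
        return 1;
    }
    return 0;
}
```

kevent() would then bracket each use of the object with a retain (via the lookup) and a release once it no longer touches the knote.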
One of my goals for libkqueue v2.0 has been to use fine-grained locking at
the 'struct knote' level, instead of a coarse-grained lock in 'struct
kqueue'. Given the large amount of other changes related to Windows and
Linux, I'm not sure this is the right time to introduce a more complex
locking scheme. Therefore, as of r484, I have restored the behavior from
libkqueue 1.0 where the kevent_copyin() and kevent_copyout() functions
acquire a lock on their 'struct kqueue'. I've also added an assertion to
knote_release() to explicitly check for a double-free error.
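The coarse-grained scheme restored in r484 amounts to serializing copy-in and copy-out on a single per-kqueue lock. A minimal sketch of the pattern (the field names and the counter standing in for real copy-in work are illustrative, not the actual libkqueue symbols):

```c
#include <pthread.h>

/* Simplified stand-in for 'struct kqueue'. */
struct kqueue {
    pthread_mutex_t kq_mtx;
    int             kq_nchanges;   /* stand-in for real kqueue state */
};

/* With one lock per kqueue, copy-in and copy-out never run
 * concurrently against the same knote list, so the individual
 * knotes need no locks of their own. */
int kevent_copyin(struct kqueue *kq) {
    pthread_mutex_lock(&kq->kq_mtx);
    kq->kq_nchanges++;             /* ... real translation work here ... */
    pthread_mutex_unlock(&kq->kq_mtx);
    return 0;
}
```

The tradeoff is exactly the contention concern raised below: every caller of kevent() on the same kqueue serializes on this one mutex.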
I'm hoping that this checkin will also fix the dispatch_priority test
failures that Joakim was reporting, and I'm going to create some
multithreaded unit tests that try to race each other to modify a 'struct
knote' object.
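A race test along those lines might look like the following sketch: several threads hammer a shared atomic counter the way concurrent kevent() callers would hammer a knote's reference count. The names and thread counts here are illustrative, not part of the actual test suite:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define NTHREADS 8
#define NITERS   10000

static atomic_int ref = 1;   /* stand-in for kn->kn_ref */

/* Each thread retains and releases repeatedly; if the counting
 * is correct, the count ends exactly where it began. */
static void *racer(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        atomic_fetch_add(&ref, 1);   /* knote_retain()  */
        atomic_fetch_sub(&ref, 1);   /* knote_release() */
    }
    return NULL;
}

/* Returns the final reference count; 1 means no lost or
 * duplicated updates despite the contention. */
int run_race_test(void) {
    pthread_t tid[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, racer, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return atomic_load(&ref);
}
```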
- Mark
So at the moment, the dispatch_priority failure was not reproducible; I will let you know the results of running with libumem.
I hope we will be able to get fine-grained locking working; a single mutex will definitely become a contention point under heavier workloads...
A multithreaded test would be great; let us know if/when it is committed and I will try to run it on our many-core boxes.
Joakim
P.S. As mentioned, I will largely be away until the end of the month now, so expect latency in any replies.
> --
> You received this message because you are subscribed to the Google Groups "libkqueue" group.
> To post to this group, send email to libk...@googlegroups.com.
> To unsubscribe from this group, send email to libkqueue+...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/libkqueue?hl=en.
>