| Commit-Queue | +1 |
Daniel Cheng: While I would prefer to just allow the proper building blocks and land https://chromium-review.googlesource.com/c/chromium/src/+/7694752 instead, this weak-value map is fine as well, since weak map values are what `std::weak_ptr` is generally used for anyway. Also, b/492374380 is a real issue that should be fixed.
Yes, definitely understood. FWIW, I did chat with the C++ style owners, and the general consensus was that it didn't seem terrible to allow `std::weak_ptr` (and `std::shared_ptr`) for these kinds of use cases, but with the caveat that having both refcounting systems in common use didn't seem ideal.
I do think there's a future where we might end up standardizing on `std::shared_ptr`, but I think that's some distance away. I've added a bunch of comments, added an additional test, and restored a trace event (it had been moved to the `MappedFontFile` dtor); PTAL.
@spvm who volunteered to help review the atomic bits
Some comments on the atomic impl. Let me know if I've misread anything or have mistaken assumptions about how it should work! I'm not 100% sure that I've completely internalized the API yet.
return scoped_refptr<ValueT>(it->second.get());

Why doesn't this path call something like `AddRefChecked()` (roughly `{ if (count_.fetch_add(1) != 0) { return; } }`) before returning it, guaranteeing that the returned value won't hit the CHECK in `AddRef()` and will stay alive for at least a little while?
CHECK(count_.fetch_add(1, std::memory_order_relaxed) != 0);

This CHECK() could fire. In fact, it fires whenever an `AddRef()` occurs just before the `ReleaseAndRemoveIfUnused()` call in `Release()`, no?
CHECK(count_.fetch_add(1, std::memory_order_relaxed) != 0);

Can you talk about why you believe `memory_order_relaxed` makes sense here? I haven't yet found a case where it could cause problems, but I'm not convinced it's necessarily the right choice, given that the compiler could plausibly reorder it relative to the locked operations on `table_` (and acq/rel wouldn't stop the compiler from doing that either, since those orderings don't constrain operations on `table_`).
To tie into my other comment on `FindOrCreate()`: the compiler could plausibly reorder it before other operations in the callable there. While I can't imagine that actually posing a problem, it could maybe pose one if you engineered it to.
My understanding of the spec is that the table lock and the atomic can be mutually reordered at the compiler's and CPU's whims; it feels like you're assuming that they can't.
// supported operation is `FindOrInsert()`, which either:

FindOrCreate?
return scoped_refptr<ValueT>(it->second.get());

Why doesn't this path call something like `AddRefChecked()` (roughly `{ if (count_.fetch_add(1) != 0) { return; } }`) before returning it, guaranteeing that the returned value won't hit the CHECK in `AddRef()` and will stay alive for at least a little while?
Can you clarify how this would hit the `CHECK()`?
If we acquire the lock, no thread is currently in the critical section of `ReleaseAndRemoveIfUnused()`, which should mean there are no entries with a refcount of zero in the table.
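To illustrate the argument above, here is a minimal, hedged sketch of that locking discipline. The names `table_`, `table_lock_`, and `ReleaseAndRemoveIfUnused()` come from the review thread; everything else (the `Entry` type, `Find()` as a stand-in for the lookup half of `FindOrCreate()`, the map key type) is my assumption, not the CL's actual implementation. Because both the lookup and the removal hold the lock, a lookup can never observe an entry whose refcount has already reached zero:

```cpp
#include <atomic>
#include <cassert>
#include <map>
#include <mutex>
#include <string>

// Hypothetical entry type: refcount starts at 1 for the table's creator.
struct Entry {
  std::atomic<int> count{1};
};

std::mutex table_lock_;
std::map<std::string, Entry*> table_;

// Lookup path (stand-in for the find half of FindOrCreate()): holding the
// lock means no remover is mid-critical-section, so every entry still in
// the table has count >= 1 and taking a new ref is safe.
Entry* Find(const std::string& key) {
  std::lock_guard<std::mutex> lock(table_lock_);
  auto it = table_.find(key);
  if (it == table_.end()) return nullptr;
  // Invariant: count >= 1 here, so this CHECK-style assert cannot fire.
  int prev = it->second->count.fetch_add(1, std::memory_order_relaxed);
  assert(prev != 0);
  return it->second;
}

// Removal path: the entry leaves the table under the same lock, so its
// zero refcount is never visible to a concurrent lookup.
void ReleaseAndRemoveIfUnused(const std::string& key, Entry* e) {
  std::lock_guard<std::mutex> lock(table_lock_);
  if (e->count.fetch_sub(1, std::memory_order_acq_rel) == 1) {
    table_.erase(key);
    delete e;
  }
}
```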
CHECK(count_.fetch_add(1, std::memory_order_relaxed) != 0);

This CHECK() could fire. In fact, it fires whenever an `AddRef()` occurs just before the `ReleaseAndRemoveIfUnused()` call in `Release()`, no?
It can't, because a precondition of entering `ReleaseAndRemoveIfUnused()` is that the caller believes the refcount is potentially 1 (but definitely not zero). At that point, only two scenarios are possible:
1. `ReleaseAndRemoveIfUnused()` wins the race and the entry is removed from the table. Nothing else can possibly reference it, because the caller to `ReleaseAndRemoveIfUnused()` had the sole refcount, so the refcount cannot increase from zero.
2. `ReleaseAndRemoveIfUnused()` loses the race and `AddRef()` occurs first because someone else looked up the entry and obtained a ref to it. In this case, the refcount never goes to 0: it was at 1, became 2, and then returns to 1 when `ReleaseAndRemoveIfUnused()`'s `fetch_sub()` executes.
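Scenario 2 can be replayed single-threaded with just the atomic, to show the count goes 1 -> 2 -> 1 and never touches 0. This is a hedged sketch: the `memory_order` choices match the lines quoted in this thread, but the function bodies are my reconstruction, not the CL's code. The CHECK is modeled as an assert:

```cpp
#include <atomic>
#include <cassert>

// Sole-owner state: the entry is in the table with refcount 1.
std::atomic<int> count{1};

// Models the quoted CHECK(count_.fetch_add(1, relaxed) != 0): a lookup
// that wins the race takes the count from 1 to 2, so the assert holds.
void AddRef() {
  int prev = count.fetch_add(1, std::memory_order_relaxed);
  assert(prev != 0);
}

// Returns true if this call dropped the last reference, i.e. the caller
// should remove the entry from the table.
bool ReleaseAndRemoveIfUnused() {
  return count.fetch_sub(1, std::memory_order_acq_rel) == 1;
}
```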
CHECK(count_.fetch_add(1, std::memory_order_relaxed) != 0);

Can you talk about why you believe `memory_order_relaxed` makes sense here? I haven't yet found a case where it could cause problems, but I'm not convinced it's necessarily the right choice, given that the compiler could plausibly reorder it relative to the locked operations on `table_` (and acq/rel wouldn't stop the compiler from doing that either, since those orderings don't constrain operations on `table_`).
To tie into my other comment on `FindOrCreate()`: the compiler could plausibly reorder it before other operations in the callable there. While I can't imagine that actually posing a problem, it could maybe pose one if you engineered it to.
My understanding of the spec is that the table lock and the atomic can be mutually reordered at the compiler's and CPU's whims; it feels like you're assuming that they can't.
To be perfectly honest, I didn't write the original code. But `RefCountedThreadSafe` and Blink's `StringImpl::AddRef` use relaxed semantics for the add and depend on the acquire/release barrier in `Release()` for refcount synchronization, and I can't think of a scenario where this is actually problematic.
Upon acquiring `table_lock_`, the invariant is that every entry in `table_` has a refcount of at least 1 (I've added this to the comments).
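The ordering convention the reply cites is worth making concrete. This is a sketch of the pattern as in `base::RefCountedThreadSafe` and Blink's `StringImpl`, not this CL verbatim; the struct and field names are mine. The add can be relaxed because taking a new ref imposes no ordering of its own; the acq_rel decrement in `Release()` ensures every write made while holding a ref happens-before whatever the last releaser does with the object:

```cpp
#include <atomic>
#include <cassert>

// Hypothetical refcounted object following the RefCountedThreadSafe-style
// memory-ordering convention.
struct RefCounted {
  std::atomic<int> count_{1};
  int payload = 0;  // Stands in for state mutated while a ref is held.

  // Relaxed is enough here: the add only needs to be atomic, and Release()
  // provides the synchronization for the eventual destruction.
  void AddRef() { count_.fetch_add(1, std::memory_order_relaxed); }

  // Returns true when this call dropped the last reference. The acq_rel
  // fetch_sub makes prior writes (e.g. to payload) visible to the caller
  // that observes the 1 -> 0 transition.
  bool Release() {
    return count_.fetch_sub(1, std::memory_order_acq_rel) == 1;
  }
};
```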
it could plausibly reorder it before other operations in the Callable() in there
While this type aims to be robust and correct, I don't think it needs to guard against a callable that is intentionally malicious.
// supported operation is `FindOrInsert()`, which either:

Daniel Cheng: FindOrCreate?
Done