--
You received this message because you are subscribed to the Google Groups "cxx" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cxx+uns...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/cxx/CAKFU2SAJg%2By4FUU-qTWkhq1XAeCWB2GKo%2BjSUbkMFdVK%2Bakxcg%40mail.gmail.com.
IMO, better to have a dangerous tool than no tool.

PK
Would the "no tool" option here just be replacing RW locks with regular mutexes, or would everyone roll their own RW locks?
That seems like it would be adding quite a bit of throw-away code. The absl::Mutex and base::Lock APIs are pretty similar, so it's a fairly straightforward conversion:
To answer the process question: the procedure to allow an Abseil feature is the same as allowing a C++ standard feature, and is outlined here:

Basically, we have a discussion on cxx@ (so this thread), and if the conclusion is positive, it gets added to the list of allowed features.

I would be very surprised if RCU isn't already used in places in the Chromium code base, as it's a widely applicable technique for reducing contention in many concurrent data structures, but I don't know if there's an existing library (or whether one would even be useful). Typically, you would use a cheap atomic to implement the "read" part of the pattern; in this case, I suspect you'd still want a lock for the "copy" and "update" parts, so a better analogy might be double-checked locking.
In this specific case, I was not planning to run this behind an experiment, for a couple of reasons:

1. This code gets run before FieldTrials are available, so it would require custom setup, as well as extra code complexity.
2. I was thinking that, given we can observe instances of the bad behavior through slow reports, we can just monitor the occurrence rate of those for versions past the change. I've had success with something like this for other cases where we only had visibility through UMA (e.g. record corruption), which went away after a specific fix. (Although I'm less familiar with slow reports, and I guess the question is whether we have a good way to query them to match this scenario. I assume we do, but we can also discuss that outside this list, e.g. on the bug.)
On Wed, Oct 13, 2021 at 1:22 PM Alexei Svitkine <asvi...@chromium.org> wrote:
> In this specific case, I was not planning to run this behind an experiment [...]

We can, but that won't tell us whether the R/W lock has an impact on other metrics.

There's precedent, at least on Windows, for using the registry to configure synthetic field trials (read and applied in the next session).

Another option might be to no-op our custom Lock() calls before FeatureList is created, as Chrome has to be single-threaded until that point anyway.

Discussing here because the results of this as an experiment would be useful to vouch for R/W locks on the overall codebase?
On Thu, Oct 14, 2021 at 1:28 PM Gabriel Charette <g...@chromium.org> wrote:
> [...] Discussing here because the results of this as an experiment would be useful to vouch for R/W locks on the overall codebase?

Right, we could do it, but it would be quite a bit of complexity. I agree it would be interesting to validate the overall effect (though I honestly expect not to see any significant movement).

I do think it's worth checking the impact on the slow reports in question to confirm the change actually has the desired effect, before investing more in a complex trial setup for it.

Perhaps we can do this as a first step and then revisit? For example, if it doesn't help, then we can just revert it and avoid the overhead of a complicated field trial setup that can't use the normal flow, given the early-startup case.
Okay, SG. I'm all for the most expedient way to acquire data. At the same time, I prefer not cutting corners when there's an opportunity to gather more data that will influence cross-cutting decisions (so someone doesn't have to do this again in the future).

I understand that in this case it's worthwhile to first figure out whether it helps the priority inversion seen in slow reports.

LGTM for usage of RW locks in base/metrics.
I read back through the thread, but I didn't see if you actually benchmarked e.g. std::shared_mutex or absl::Mutex against base:: primitives to confirm that despite the low-level increase in overhead, the suitability-for-use-case factor at a high level makes this a win. Did you?
If it is, I think that's an argument that we should build a (frowned-on, should be rarely used) r/w capability in the base:: classes, because there's an actual win with it. If you don't know that it is, though, I'm leery. (And how did we land usage of the absl:: types into stable, when they're also banned?)
As a reference, I've pushed back in the past against shared-mode locks, as the use cases were rare and you potentially run the risk of actually slowing down your code (base::Lock is based on SRWLock on Windows).

If we were to allow RW locks in base, it certainly seems like a good candidate for base::subtle.
There is some fascinating work in this bug. I would love to hear a brews and bites talk about the odyssey some time.