--
You received this message because you are subscribed to the Google Groups "platform-architecture-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to platform-architect...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/platform-architecture-dev/CALG6KPPDK7gDaguRaMvRW%2B_NuYPQWL%2Boq-qrC0aO7H3pZqzwuQ%40mail.gmail.com.
Sorry, I only read Dana's email, not your original email. I'd run a Speedometer job against the M1 bots as well, with ~100 repeat count.
I know this is a public forum, so I'm just discussing in terms of percentages. From my understanding, 84% of the bugs were found via ClusterFuzz, weren't they? ClusterFuzz already runs with the DCHECK on, so enabling this wouldn't change that source of bugs, leaving roughly ~11% of real bugs coming from external reports.
Hi all,

I was going to take Kentaro's bet, but then noticed that the two To<.> variants in question have around 300 usages each, and that very many of them are in generated code. So my first try is allowing unsafe use from code generators only. (Patch set 2 of 4249403.) I'll report back when I have results.
In the meantime: would anyone have an explanation for why the performance penalty would be greater on one platform than another? If I take the pinpoint numbers at face value, the checks are 5x more expensive on x64/Win than on M1/Mac (+1.5% vs. +0.3% increase, relative to base performance). I find that rather strange.
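For readers outside Blink, here is a minimal sketch of the family of casting helpers under discussion. This is illustrative only, not Blink's actual implementation (the real code dispatches through per-type traits rather than a virtual tag, and uses SECURITY_DCHECK rather than an unconditional trap):

```cpp
#include <cassert>
#include <cstdlib>

// Illustrative stand-ins for Blink's DOM types. The real IsA<T> machinery
// uses type traits; a virtual predicate keeps this sketch self-contained.
struct Node {
  virtual ~Node() = default;
  virtual bool IsElement() const { return false; }
};
struct Element : Node {
  bool IsElement() const override { return true; }
};

template <typename T>
bool IsA(const Node& n);  // specialized per type

template <>
bool IsA<Element>(const Node& n) { return n.IsElement(); }

// To<T> is the cast whose debug-only check is being discussed: if the check
// is compiled out and the cast is wrong, the result is a type confusion.
// Here the check is unconditional, which is the proposed hardened behavior.
template <typename T>
T& To(Node& n) {
  if (!IsA<T>(n)) std::abort();
  return static_cast<T&>(n);
}

// DynamicTo<T> is the always-safe variant: nullptr instead of undefined
// behavior on a type mismatch.
template <typename T>
T* DynamicTo(Node& n) {
  return IsA<T>(n) ? &static_cast<T&>(n) : nullptr;
}
```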
Thanks for checking performance on these changes upfront.

You can use crossbench to find the hot paths that are affected by e.g. Speedometer2. The tool can spit out a pprof profile, similar to this one (sorry, Google-only), which you can nicely search for bottlenecks. This can give you hints for where to place exceptions for hot paths in the DOM. Just make sure to compile with `symbol_level=2` to have symbols for inlined frames.

A regression for more usage of `IsA<>()` is not surprising to me, as these already show up in the profile today as something we spend significant time in.
On Mon, Feb 20, 2023 at 11:00 AM Michael Lippautz <mlip...@chromium.org> wrote:
[snip]

Thanks, I didn't know about crossbench.

A cursory look at the profile suggests there's a handful of large users. I tried excluding those from the checks, but that hasn't moved the needle. (Still at ~+1% for x64/Win.)

I think I've about exhausted the easy options. The initial VRP analysis suggests to me that solving this would be worth several SWE-months. I'll bring this up with my manager (currently OOO) to see whether my team has an interest in taking this on.
Daniel
This same SECURITY_DCHECK would have caught another security bug; see https://bugs.chromium.org/p/chromium/issues/detail?id=1448032.

Given we've made quite a few improvements in perf through other projects, can we "spend" some of these "perfcoins" and change the occurrence in casting.h to a CHECK? I think this would be a meaningful security improvement. Who needs to approve this sort of change?
Will
With regard to in-the-wild perf costs: doing a Finch experiment (if possible) or binary A/B and reviewing impacts on guardrail metrics would be the gold standard for understanding impact.
Alternately, we could temporarily land the change in canary/dev and use Stack Sampled Metrics in-the-wild profiling to understand the low-level impact and address hot spots. If the low-level impact is marginal, or can be made so, it's not likely to significantly impact the top-level metrics.
I think to get realistic results here we'd have to run a binary experiment (likely on the Dev channel) with a synthetic Finch trial group being reported, because, as others have said, the added check for whether the client is in the control or treatment group would upset the data. Running a binary experiment is possible, but takes more work than just a Finch experiment backed by a Feature flag.
The pinpoint data is in; you can see it on the CL https://chromium-review.googlesource.com/c/chromium/src/+/4563433 if someone can interpret it and decide whether it's good or bad. It seems some results are green and some are red, but I'm not sure which are more important than others...?

Given this is a known source of security bugs, are we able to land something simple now, then recover the perf/size costs in future CLs, e.g. implement the solution that dcheng@ expects? As per our guidance, we have to assume that these vulnerabilities are being actively exploited, so I think we should try to land these additional checks as a matter of urgency.
[-chrome-atls-discuss@google, +jam, sorry for the cross-post]

On Wed, May 24, 2023 at 4:05 PM Will Harris <w...@chromium.org> wrote:
[snip]

Given we have evidence of non-trivial perf regressions, I don't think we can just ram this in and worry about the performance later. Performance and security are both top-line goals for Chromium, and just as we often ask performance teams to delay their work for many months while they work with security teams to evaluate and mitigate security risks, I think it's reasonable to expect security teams to work hard on analyzing and mitigating performance costs before landing security improvements. I think this is the path we've followed with the really big security/performance tradeoffs (e.g. Oilpan, site isolation), so I don't think smaller cases should be any different. Given this is an important polarity to be managed across our product and teams, it's IMHO unproductive to take a perspective that strictly places one side over the other.
Would an approach like this be reasonable?

- Add UnsafeTo<T>.
- Profile jetstream/speedometer/motionmark runs locally (presumably we have some good steps on how to do this?) to find and mitigate hotspots.
- Zero regression is ideal; if we can't achieve that, how do we determine what amount of regression is permissible?
- Possibly: a binary experiment (this should, at least, be relatively easy from a configuration perspective; logistically, I've always had the impression that binary experiments are rather painful and very much a manual process. Has this changed?)
- After we actually release a binary, use stack samples to find any remaining hotspots to focus on?
Daniel
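As a hypothetical sketch of the UnsafeTo<T> escape hatch from the proposal above: To<T> keeps its check in all build modes (the secure default), while UnsafeTo<T> keeps only a debug-build assert, so profiled hot paths can opt out of the release-mode branch. The Base/Derived hierarchy and the IsDerived() dispatch here are simplified stand-ins, not Chromium's real type-trait machinery:

```cpp
#include <cassert>

// Simplified stand-in hierarchy; Blink dispatches via per-type traits.
struct Base {
  virtual ~Base() = default;
  virtual bool IsDerived() const { return false; }
};
struct Derived : Base {
  bool IsDerived() const override { return true; }
};

// Secure default: the check survives into release builds.
template <typename T>
T& To(Base& b) {
  if (!b.IsDerived()) __builtin_trap();  // CHECK stand-in
  return static_cast<T&>(b);
}

// Escape hatch for measured hot paths: assert() compiles out under NDEBUG,
// so release builds pay nothing, like today's SECURITY_DCHECK behavior.
template <typename T>
T& UnsafeTo(Base& b) {
  assert(b.IsDerived());  // DCHECK stand-in
  return static_cast<T&>(b);
}
```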
On Wed, May 24, 2023 at 4:27 PM Rick Byers <rby...@chromium.org> wrote:
[snip]

I took a look at the benchmarks and I see < 1% regression on all 3, if I am reading this correctly. I believe we set the "revert" threshold, where we don't accept followup work to address performance, at 1%. Am I misreading things? It's quite possible.
If we follow up with stack profiling to understand in the wild (after PGO is applied) where the hot paths are and work to eliminate the overhead there, does that address this need? Is there some risk to landing this < 1% change, and following up on the hot paths that makes it too risky of a strategy?
On Wed, May 24, 2023 at 4:57 PM Daniel Cheng <dch...@chromium.org> wrote:
[snip]

This sounds good to me FWIW. If a binary experiment seems like overkill, I'm also supportive of Mike's idea to just land and look for hotspots with SSM. In general I expect micro-perf concerns like this to have a much lower impact on metrics in the wild than on benchmarks.
On Wed, May 24, 2023 at 5:01 PM <dan...@chromium.org> wrote:
[snip]

I defer to Scott on how to reason about the magnitude of the impact for the benchmarks. I don't interpret his e-mail to quite mean "feel free to knowingly regress benchmark performance as much as you like, as long as you keep it to <1% per CL" 😉.
Hopefully our strategy here isn't just to do DCHECK->CHECK conversions in small enough batches that each CL flies under the detection radar?
Speaking of which, would it be reasonable to document that CHECKs are supposed to be side-effect free? IMHO we should make it easy to periodically do benchmark runs with a build that disables all CHECKs just to make sure we're aware of the total costs we're paying in aggregate.
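A minimal illustration of why CHECK conditions must be side-effect free if a build that disables all CHECKs is to be sound. ENABLE_CHECKS and MY_CHECK are stand-in names for this sketch, not real Chromium macros:

```cpp
#include <cassert>

// A measurement build that compiles checks out entirely: the condition
// expression is never evaluated at all.
#define ENABLE_CHECKS 0

#if ENABLE_CHECKS
#define MY_CHECK(cond) assert(cond)
#else
#define MY_CHECK(cond) ((void)0)  // condition not evaluated
#endif

int g_observed = 0;

bool RecordAndVerify(int v) {
  g_observed = v;  // side effect smuggled into a check condition
  return v >= 0;
}

void Frobnicate(int v) {
  // BAD: in the no-CHECK build, RecordAndVerify() is never called, so the
  // program's behavior (not just its speed) differs between builds.
  MY_CHECK(RecordAndVerify(v));
}
```

With ENABLE_CHECKS set to 0, calling Frobnicate(42) leaves g_observed at 0: the side effect silently vanished along with the check, which is exactly the class of bug a CHECK-disabled benchmark bot would have to contend with.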
Hello,

I feel like I'm late to the party! I was pointed to Will's CL without knowing the scope of the problems we are trying to prevent. Is there a summary of the problem we are trying to prevent, and what it would mean if we did not make this change?
As to the performance threshold: it's 1% overall, or 2% to any subtest. Will's pinpoint runs show more than a 2% regression on a couple of subtests. What matters is the PGO bots. It's currently not possible to run a try job that builds a new PGO profile and uses that as part of the try jobs.
After looking at the assembly for a similar change in more detail (specifically Peter's change here), my suspicion is that you will not find a single place that is the culprit, but rather death by a thousand paper cuts: this code is used in many places, and it's the sum total of all of those places getting slightly slower. That Will's patch increases binary size by 85k is a pretty good signal that the patch changes a lot of code.
See inline.

On Wed, May 24, 2023 at 3:32 PM Scott Violet <s...@chromium.org> wrote:
[snip]

I think the original post by Daniel gives an accurate summary: we see an average of ~2 bugs a month found by ClusterFuzz and a few a year reported externally, of which the most recent was CVE-2023-1215, released in this update: [$7000][1417176] High CVE-2023-1215: Type Confusion in CSS. Reported by Anonymous on 2023-02-17. I think, given our knowledge of variants and bug collisions, it is reasonable to assume that there are attackers who are using these type confusions to harm our users.
I'm curious for your view on the proposal from Rick to land something now, and then use stack profiling in the wild to determine the impact and iterate on improvements? Also, I should add that if we see regressions we can always revert before the branch point, which is a few weeks away, as we just branched today.
Hannes Payer | V8 | Google Germany GmbH
I 100% agree that there will be more bugs in the future. But if our fuzzing finds them in practice immediately, and we stop paying bug bounties, then we are ~good. Regardless, we should improve our fuzzing in this area.
A few of my patches [1, 2, 3, 4] show there are roughly 222M blink::To calls now. See https://pprofng.corp.google.com/?id=60cb950c88d7af7fe7c39a281372bbe8&pivot=blink::To$

I wasn't able to get go/crossbench to work on my gLinux machine. Is there a guide I can follow? It seems to be crashing in the renderer startup script. I just used perf record, converted the perf.data to a proto file, and uploaded it to pprofng...
Camillo Bruni | Software Engineer, V8 | Google Germany GmbH
Thoughts are fresh and haven't solidified, but I'm wary of having another CHECK macro without compiler enforcement that one is pure and the other is not. I suspect we'll end up with more CHECKs than CHECKP/CHECKI just because it's what people are used to typing (and short = default one). We also end up with zero coverage that CHECKP and CHECKI can actually be removed, unless we regularly ship builds and run tests without them.

If I could have things just by wishing for them, then C++ would have the attribute [[pure]], it would be widely applied in standard libraries, we'd apply it in Chromium, and we'd have a decent shot at this. Presumably we'd also generate better code. Afaik [[gnu::pure]] isn't widely applied (but I could be wrong), and it certainly isn't widely used within Chromium. Then we'd mark CHECK_WITH_SIDE_EFFECTS() as the non-default one. Maybe we could split existing CHECKs based on whether they are [[pure]], rinse and repeat.

The other option I thought of is that we actually want to be able to tell how much our CHECKs cost, and to look at profiling through that lens. If we could add a pivot on CHECK (that includes the conditional) we could profile better, and then we could look at whether these costly CHECKs are actually holding their weight. I haven't found a way of doing so. I tried putting the conditional inside a lambda so that I could call it from another stack frame (which could be force-inlined and hopefully generate the same code), but that bit me, as clang lock annotations don't understand that calling [&]() { foo_ = bar; }(); means that foo_ is accessed only under the lock where the lambda is called (and that the lambda doesn't leak out otherwise). If there's a way to annotate debug code so that CHECKs can be pivoted on in profiling, I think that would be great.
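A rough sketch of that lambda idea, with the condition evaluated in a separate, deliberately non-inlined frame so a sampling profiler could pivot on "time spent evaluating check conditions". All names are illustrative, and this sidesteps (rather than solves) the lock-annotation problem described above:

```cpp
#include <cassert>
#include <utility>

// Evaluate a check condition in its own stack frame. [[gnu::noinline]]
// keeps the frame visible to sampling profilers; in real code this would
// trade a call per check against profilability.
template <typename F>
[[gnu::noinline]] bool EvalCheckCondition(F&& cond) {
  return std::forward<F>(cond)();
}

// Hypothetical macro: wraps the condition in a lambda so every check's
// conditional runs under the EvalCheckCondition frame, which a profiler
// could then pivot on.
#define PROFILABLE_CHECK(expr) \
  assert(EvalCheckCondition([&] { return static_cast<bool>(expr); }))

int Divide(int a, int b) {
  PROFILABLE_CHECK(b != 0);  // the condition now has its own frame
  return a / b;
}
```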
Great feedback, Peter.

The meta concern is: how do we prevent death by a thousand paper cuts? The suggestion for preventing this is a bot that builds with CHECKs disabled, runs a suite (most likely Speedometer), and compares to the normal release build. This means we're never really shipping a build with checks disabled; it's purely for testing. That simplifies some of your concerns, but we still need to deal with side effects in the CHECKs. I don't have a feel for how prevalent they are. Your suggestion of trying this in a particular part of the code would be enlightening.

Perhaps the more interesting question is what happens if the bot goes red because the difference between CHECKs and no-CHECKs is significant? It most likely isn't from a recent commit, but rather the accumulation of changes that added CHECKs. This would likely require a deeper analysis to understand the hotspots that are calling CHECK and whether they can be changed.
OK, thanks for the explanation.

The other pitfall I will call out here is the idea of telling, in tooling, whether something has side effects. clang::FunctionDecl has isPure() to tell you this information, but it's going to always be false unless manual annotations are applied. There's a great amount of codegen optimization that could be improved if Clang were able to tell whether a function has side effects, yet it is completely unable to. This can be seen in the warnings generated by putting anything other than primitive values into __builtin_assume(). I don't know what that indicates about the difficulty of the problem, but it's a red flag.