PTAL (not expecting to land this before the holidays, just want to send it out!). Not sure if we should go "experimental" or just enable this.
| Code-Review | +1 |
Huzzah!
> <title>Largest Contentful Paint: performance entries are emitted in the expected sequence</title>

Just FYI that Firefox will not pass this, and landing this into the LCP repo will make it automatically part of the interop work... you can untag it, but I don't know the full process.
(It could also start tentative until we merge in spec changes? Though I can prob get that in alongside this...)
Yeah, maybe making it .tentative for now makes sense. I thought there was a predefined list of tests for the interop effort that would flag edits like this, but I'm not fully sure how that works.
> container.appendChild(createDivWithText('text1', 'AB'));

Nit: could you consider adding a third div, also smaller, that comes last (after the largest)?
That way, regardless of arbitrary paint order in any implementation (well, not guaranteed, but higher odds...), you can test that only one (largest) candidate is emitted per paint.
Otherwise you could just be accidentally measuring the largest before the smaller ones...
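To make the suggestion concrete, here's a sketch of the ordering (the stubs below stand in for the test page's DOM so the example is self-contained; in the real test, `container` and `createDivWithText` come from the page under review, and the sizes here are illustrative):

```javascript
// Illustrative stand-ins for the test page's container and helper.
const added = [];
const container = { appendChild: (el) => added.push(el) };
const createDivWithText = (id, text) => ({ id, text });

container.appendChild(createDivWithText('text1', 'AB'));    // smaller
container.appendChild(createDivWithText('text2', 'ABCD'));  // largest
container.appendChild(createDivWithText('text3', 'A'));     // smaller, comes last
// Only text2 should ever be reported as the LCP candidate for this paint,
// no matter what order the implementation happens to paint them in.
```

With the smallest element last, an implementation that painted in document order and reported each candidate as it went would still be caught emitting more than one entry per paint.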
Yeah good idea, I'll update this shortly... And yeah this is tricky since paint order _could_ be anything.
> // a LargestContentfulPaint entry for image1.

Scott Haseley: Is this 100% guaranteed? Is the random appended to ensure no cache?
It should be. The key here is that we don't want the image to load synchronously, otherwise it'll be in this frame. The only time that's _supposed_ to happen is when the image is in the "list of available images" [1] (used for de-duping during loading, and I think for reloads), but that's keyed off of the absolute URL, so this should prevent reuse there.
Chrome loads data URIs synchronously, which IIUC is a (maybe willful) spec violation, but that doesn't apply here.
Preloads and HTTP cache hits are still supposed to have an async hop since the load calls out to Fetch, but I haven't tested this in all browsers (I think I have a test showing this is the case in Chrome) -- but regardless, the random query param should prevent a cached/preloaded resource from being used, so I think this is sound.

[1] https://html.spec.whatwg.org/#list-of-available-images
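For what it's worth, the cache-busting part can be sketched like this (the helper name is hypothetical; the real test can just inline the query param):

```javascript
// Append a random query param so the absolute URL is unique per run: the
// "list of available images" is keyed off the absolute URL, so neither it,
// the HTTP cache, nor a preload can hand back a synchronously-available copy.
function uncachedImageUrl(base) {
  return `${base}?${Math.random()}`;
}

// Two loads of the "same" image get distinct URLs, forcing a fresh async load.
const a = uncachedImageUrl('/images/blue.png');
const b = uncachedImageUrl('/images/blue.png');
```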
---
BUT, I'm finding this is flaky locally for another reason: sometimes the presentation feedback for frame N arrives _after_ frame N+1's, so we give everything the timestamp for N+1 :(. So I'm seeing all of the text nodes get the same timestamp in some cases.
It looks mostly confined to non-threaded compositing (I've seen weirdness with this mode before wrt presentation callbacks), but it timed out at least once with threaded compositing (although I can't get it to fail now...). It looks like it could be related to the jitter we add or something, since these are delayed tasks.
Re: out-of-order presentation time:
- When coarsening, we round up to the nearest multiple of 4ms and use that as the presentation time. We then post a delayed task with delay = origin + presentation_time - now.
- But we sometimes get the same presentation timestamp for consecutive frames. This happens a lot more without threaded compositing, but can also happen with threaded compositing, e.g. with dropped frames (which is what I'm seeing).
- The flakiness comes from computing the delay: the target_time (origin + presentation_time) is the same for both tasks, but there's some margin for error, since we compute now() here and the scheduling layer then adds the delay back onto its own now().
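A sketch of that arithmetic (values in ms; names are illustrative, not the real Chromium ones):

```javascript
// Round a raw presentation time up to the nearest multiple of 4ms.
function coarsen(presentationTimeMs) {
  return Math.ceil(presentationTimeMs / 4) * 4;
}

// delay = origin + presentation_time - now. Two frames that coarsen to the
// same presentation time share one target_time, and because now() is read
// again in the scheduling layer when the delay is added back, a little skew
// is enough for the two delayed tasks to run in either order.
function delayFor(originMs, presentationTimeMs, nowMs) {
  return originMs + coarsen(presentationTimeMs) - nowMs;
}
```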
The new-ish `PostDelayedTaskAt()` should fix this, since that would hand the target_time directly to `SequenceManager` rather than doing more math, and ties would be broken by sequence number (posting order). But it's guarded by a PassKey, so I'll need to get this blessed.
One alternative is to use a Timer and maintain a list of callbacks ordered by target_time, with a single outstanding task (for the earliest presentation time). But that's basically duplicating the `SequenceManager` logic.
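For illustration, that Timer alternative might look roughly like this (names are hypothetical, not Chromium's): sort by target_time and tie-break by sequence number, which is exactly the ordering `PostDelayedTaskAt()` would give for free.

```javascript
// Minimal sketch of the "single outstanding timer" idea: callbacks kept
// sorted by target time, ties broken by insertion (sequence) order.
class PresentationCallbackQueue {
  constructor() {
    this.pending = [];
    this.seq = 0;
  }
  schedule(targetTime, callback) {
    this.pending.push({ targetTime, seq: this.seq++, callback });
    this.pending.sort((a, b) => a.targetTime - b.targetTime || a.seq - b.seq);
    // A real implementation would (re)arm a single Timer for
    // this.pending[0].targetTime here.
  }
  runDue(now) {
    // Fire everything whose target time has been reached, in order.
    while (this.pending.length && this.pending[0].targetTime <= now) {
      this.pending.shift().callback();
    }
  }
}
```

Two entries that coarsen to the same target_time then fire in posting order instead of racing on recomputed delays.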