Re: headscratcher...


Patrick Meenan

Apr 5, 2016, 9:06:09 AM
to Ryan Sleevi, net-dev, Ilya Grigorik
+net-dev

So, it looks like the Windows cert verification for the EV cert failed, but a 3rd negotiation with an embedded stapled response succeeded.  I'll dig in and see if I can track down why it happened, whether it is specific to the WPT configuration/agents, and whether the first 2 certs also included stapled responses.  That could potentially explain the behavior of the child connection as well, where the 4 negotiated sockets are closed before accepting the 5th one to use for the H2 connection.

Separately it looks like we may be burning some bandwidth on negotiating TLS for multiple connections that end up being thrown away for H2.  Not sure we can solve it generally but in this case the connections actually go back to the same IP as the base page where we are already talking H2.  Maybe more interesting would be to see why the connections didn't get coalesced (see if the cert on the base connection included the subdomain).


On Mon, Apr 4, 2016 at 4:01 PM, Ryan Sleevi <sle...@google.com> wrote:
Can we take it to a list, to make sure that this information isn't lost?

Can you clarify what your question is? pitchup.com is serving an EV cert. An EV cert triggers revocation checks. Those revocation checks go through the OS's network stack on non-Linux platforms. If those revocation checks fail, EV status is denied.

The choice of the last TLS connection is based on the socket pool's logic to pick the warmest socket - the most recently active socket.

The choice to do multiple TLS handshakes is the same as our choice to open multiple sockets. In 2014, my EP interns looked at delaying the subsequent connections to see about using TLS resumption handshakes (which can be 1-RTT), but the answer was that for desktop it caused a perf degradation, and for mobile it was only a perf improvement at the extreme edge. I get NBU, but the complexity tradeoff was extremely high and extremely brittle due to our socket pools.

The choice of multiple connections when we believe a server supports H2 is an opportunity for improvement; right now, AIUI, we race connections. I have no thoughts/knowledge on that.
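The "warmest socket" selection Ryan describes can be sketched roughly as follows; this is an illustrative simplification, not Chrome's actual ClientSocketPool code:

```python
# Sketch of "pick the warmest socket": among idle sockets, prefer the one
# that was most recently active. Simplified illustration only; the real
# pool logic considers more than recency.
from dataclasses import dataclass

@dataclass
class IdleSocket:
    id: int
    last_active: float  # seconds, relative to test start

def pick_warmest(idle_sockets):
    """Return the most recently active idle socket, or None if empty."""
    if not idle_sockets:
        return None
    return max(idle_sockets, key=lambda s: s.last_active)

# Using the socket IDs from the netlog below: socket 279, the last one to
# finish negotiating, is the one that gets used.
sockets = [IdleSocket(276, 9.050), IdleSocket(277, 9.049), IdleSocket(279, 9.476)]
print(pick_warmest(sockets).id)  # 279
```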

On Mon, Apr 4, 2016 at 12:53 PM, Ilya Grigorik <igri...@google.com> wrote:
+Ryan.. perhaps you can shed some light on the cert revocation checking mess that's happening here?

Pat, thanks for digging into this. I was hoping I was missing something obvious, but it looks like we potentially have a couple of issues here...

On Mon, Apr 4, 2016 at 8:59 AM, Patrick Meenan <pme...@google.com> wrote:
The netlog looks a lot like the tcpdump (at least for the base page).  As best as I can tell, a handful of connections are opened, 2 of which go through TLS negotiation but fail because the revocation can't be checked.  After those 2 fail, TLS negotiation is started on the 3rd connection and that one appears to include a stapled response so it sails through and gets used.  What I can't tell is why the revocation checks failed (nothing in the netlog) and the overall behavior is bizarre.

One other thing that is clear is that on bandwidth-limited connections and H2, we are burning a BOATLOAD of bytes doing parallel TLS negotiations for connections that will be unused (and doing them in parallel with the connection we care about)

base page - Socket 279 was used

7195 - Start connect job (x2 - 261, 262)
7196 - Start Connect job (x2 - 265, 266)
7195-7546 DNS lookup for www.pitchup.com (resolved to 213.138.116.230)
7546 - Socket 276 (for connect 262)
7547 - Socket 277 (for connect 261)
7696 - * Start connect job (278) - "BACKUP_CONNECT_JOB_CREATED"?
7696 - * Socket 279 (for connect 278)
7934 - Socket 277 connected, TLS negotiation started
7935 - Socket 276 connected, TLS negotiation started
8085 - * Socket 279 connected
8350 - Socket 277 TLS certificate in, verification request 286
8383 - Socket 276 TLS certificate in, verification request 286
9049 - Socket 277 ERR_CERT_UNABLE_TO_CHECK_REVOCATION, socket closed
9050 - Socket 276 ERR_CERT_UNABLE_TO_CHECK_REVOCATION, socket closed
9051 - * Socket 279 starts SSL negotiation
9474 - * Socket 279 cert in (looks like it may contain a stapled response)
9476 - * Start using socket 279

Unfortunately the netlog doesn't have details on why the verification failed.  All I can see is "cert_status = 131104 (REV_CHECKING_ENABLED | UNABLE_TO_CHECK_REVOCATION)"
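For what it's worth, that bitmask decomposes cleanly. A small decoder, assuming the bit positions from Chrome's net/cert/cert_status_flags.h circa 2016 (verify against your revision):

```python
# Decode Chrome's cert_status bitmask. Bit positions taken from
# net/cert/cert_status_flags.h as of roughly this era -- treat the exact
# values as an assumption and check them against your source revision.
CERT_STATUS_FLAGS = {
    "UNABLE_TO_CHECK_REVOCATION": 1 << 5,   # 32
    "IS_EV": 1 << 16,                        # 65536
    "REV_CHECKING_ENABLED": 1 << 17,         # 131072
}

def decode_cert_status(value):
    return sorted(name for name, bit in CERT_STATUS_FLAGS.items() if value & bit)

print(decode_cert_status(131104))
# ['REV_CHECKING_ENABLED', 'UNABLE_TO_CHECK_REVOCATION']
```

Note that IS_EV is absent, consistent with the EV revocation check having failed.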


On Mon, Apr 4, 2016 at 11:27 AM, Patrick Meenan <pme...@google.com> wrote:
Sorry, digging in now (thought it would be involved since anything that makes you headscratch is not going to be a quick answer).

First thing that jumped out at me was that I didn't report the DNS lookup and socket connect times for the connections, so I clearly have a bug to fix around that.  It also looks like the TLS negotiation time wasn't reported in all of the waterfalls. That would account for SOME of the gap.

Also, it looks like tcpdump is being a bit buggy and didn't capture in 2/3 of the tests when I ran it, but I did manage to capture one.

The absolute times in the netlog, trace and waterfall won't line up but relative to the first request they should.  The tcpdump is a mess.  For the static domain, we open and negotiate TLS for a lot of separate connections (5 + several more later) and in a bizarre twist of logic, pick the last of the 5 to complete as the one to keep while closing the other 4.  There is also a bunch of TLS handshake and certificate traffic in flight for all of those connections competing with each other.

The base page also makes no sense whatsoever, and the craziness may not even be represented in the WPT waterfall.  It looks like we opened and negotiated 2 TLS connections, opened a 3rd (because, why not?), looked up the domains for the CRL/OCSP checks (but didn't check anything), and then closed the first 2 connections and went through the TLS negotiation for the 3rd.

I'll go through the netlog to see if I can make any sense of it. I guess it could be WPT doing something to piss off Chrome, but I can't imagine what.

Base Page

7.238 - DNS www.pitchup.com
7.538 - DNS response 213.138.116.230
7.540 - SYN (x2) to 213.138.116.230
7.927 - SYN to 213.138.116.230 (hmm, why are we trying a 3rd connection?)
7.927 - SYN ACK x2
7.928 - TLS Client Hello (x2)
8.077 - SYN ACK (3rd)
8.324 - Server Hello & Certificates (x2)
8.342 - Client key, change cipher spec, handshake finished (x2)
8.432 - DNS ocsp.digicert.com (revocation check started?)
8.732 - DNS response 72.21.91.29 (never see a connection attempted to this address)
8.734 - Server handshake complete + some application data (keylog didn't seem to work)
8.738 - DNS crl4.digicert.com
8.942 - DNS response 66.225.197.197 (don't see a connection attempt here either)
9.042 - FIN ACK to both original connections (WTF?)
9.044 - Client hello for 3rd connection
9.446 - Server hello + certificate
9.463 - Client key exchange, handshake finished
9.463 - Client application data (presumably the http request)
9.857 - Server handshake finished
9.858 to 9.934 - Server application data (response?)

media.pitchup.co.uk content requests
Looks like client port 50724 ends up being the connection used for the bulk of the data (tcp.stream eq 8)

10.152 - DNS media.pitchup.co.uk
10.452 - DNS 213.138.116.230 (same as base page)
10.453 - * SYN to 213.138.116.230 (x5 - client ports 50720 - 50724)
10.840 - SYN ACK's start coming in
10.841 - Client Hello's start going out
10.960 - * SYN ACK for 50724
11.239 - Server Hello's start coming in
11.266 - FIN ACK's (for most of the connections). Looks like all but 50724 (probably discovered it is H2 and picked a connection to keep)
11.267 - * Client hello for 50724
11.280 - SYN to 213.138.116.230 (client port 50726) - no idea why more connections to the same server are being opened
11.301 - SYN to 213.138.116.230 (client port 50727)
11.665 - * Server hello for 50724
11.679 - * Certificate for 50724
11.681 - SYN ACK (50726)
11.682 - * 50724 Client handshake finished, requests go out (presumably)
11.690 - SYN ACK (50727)
12.074 - * 50724 Server handshake finished, responses start coming in


On Fri, Apr 1, 2016 at 5:20 PM, Ilya Grigorik <igri...@google.com> wrote:
Hey Pat. 

I'm trying to make sense of this, wondering if you have any ideas:

Specifically, trying to understand the reported gap between the HTML and the subresources. 

What's confusing me is the fact that the waterfall doesn't really line up with the traces. The timing appears to be off, and I'm seeing some weird behavior with the cert verifier... e.g. netlog shows a bunch of errors:

net_error = -205 (ERR_CERT_UNABLE_TO_CHECK_REVOCATION)

Can't wrap my head around what's actually going on here... Any ideas? My hunch is that the EV certs are doing something funny here, but I can't prove it yet. =/

ig





Patrick Meenan

Apr 5, 2016, 9:36:17 AM
to Ryan Sleevi, net-dev, Ilya Grigorik
Looking at the tcpdump:

- All of the TLS negotiations have stapled responses (including the 1st 2 that get rejected)
- The certificates include alt names for www.pitchup.com, m.pitchup.com and media.pitchup.co.uk

Which raises a bunch of questions.  I'll see if I can instrument a dev build and see the code path better but:

- Why are the certificates being passed to Windows for verification when they include stapled responses at the TLS layer?  
- Is it a timing issue, since the stapled response comes in a TLS frame that follows the certificates (though in the same packet in this case)?  Do we start the OS validation and cancel it once we see the stapled response?
- Windows did the DNS lookups for the crl and ocsp domains and the validation failed after the DNS lookup completed.  If we cancel an in-flight validation but it doesn't return until later, is there a race condition where the failure from canceling overwrites the stapled validation?

- Why did the subresource requests not get coalesced onto the existing H2 connection since the certificate included the domain and it resolved to the same IP address?

Ryan Sleevi

Apr 5, 2016, 12:11:11 PM
to Patrick Meenan, Ilya Grigorik, net-dev


On Apr 5, 2016 6:36 AM, "Patrick Meenan" <pme...@google.com> wrote:
>
> Looking at the tcpdump:
>
> - All of the TLS negotiations have stapled responses (including the 1st 2 that get rejected)
> - The certificates include alt names for www.pitchup.com, m.pitchup.com and media.pitchup.co.uk
>
> Which raises a bunch of questions.  I'll see if I can instrument a dev build and see the code path better but:
>
> - Why are the certificates being passed to Windows for verification when they include stapled responses at the TLS layer?  

I am not sure I understand your question.

> - Is it a timing issue, since the stapled response comes in a TLS frame that follows the certificates (though in the same packet in this case)?  Do we start the OS validation and cancel it once we see the stapled response?

No

> - Windows did the DNS lookups for the crl and ocsp domains and the validation failed after the DNS lookup completed.  If we cancel an in-flight validation but it doesn't return until later, is there a race condition where the failure from canceling overwrites the stapled validation?

No

>
> - Why did the subresource requests not get coalesced onto the existing H2 connection since the certificate included the domain and it resolved to the same IP address?

Because we don't coalesce until we have negotiated the TLS connection

Patrick Meenan

Apr 5, 2016, 2:00:25 PM
to Ryan Sleevi, Ilya Grigorik, net-dev
On Tue, Apr 5, 2016 at 12:11 PM, Ryan Sleevi <rsl...@chromium.org> wrote:


On Apr 5, 2016 6:36 AM, "Patrick Meenan" <pme...@google.com> wrote:
>
> Looking at the tcpdump:
>
> - All of the TLS negotiations have stapled responses (including the 1st 2 that get rejected)
> - The certificates include alt names for www.pitchup.com, m.pitchup.com and media.pitchup.co.uk
>
> Which raises a bunch of questions.  I'll see if I can instrument a dev build and see the code path better but:
>
> - Why are the certificates being passed to Windows for verification when they include stapled responses at the TLS layer?  

I am not sure I understand your question.

The question is around the verification of the certificate for the base page.  In the tcpdump, the certificate comes in along with an OCSP stapled response (same packet, TLS record right after the certificate).

Following the certificate are DNS lookups for the crl and ocsp responders for the certificate which seems to indicate that the Windows cert verifier at least started to check crl/ocsp for the EV cert (ignoring the stapled response).  There don't appear to be intermediate certificates so I assume the DNS lookup is for the leaf certificate verification. 

Shortly after the DNS lookups complete for the crl/ocsp domains, the cert verification fails.  Both TLS connections were bound to the same cert verifier job, and both get closed.  The cert verifier job shows no indication that it saw the stapled response.

After that, a 3rd connection that was already opened but idle starts TLS negotiation.  The cert response from the server is the same as on the earlier connections, but this time the netlog shows that the cert verifier job sees the stapled response and the verification passes.

> - Is it a timing issue, since the stapled response comes in a TLS frame that follows the certificates (though in the same packet in this case)?  Do we start the OS validation and cancel it once we see the stapled response?

No

> - Windows did the DNS lookups for the crl and ocsp domains and the validation failed after the DNS lookup completed.  If we cancel an in-flight validation but it doesn't return until later, is there a race condition where the failure from canceling overwrites the stapled validation?

No

>
> - Why did the subresource requests not get coalesced onto the existing H2 connection since the certificate included the domain and it resolved to the same IP address?

Because we don't coalesce until we have negotiated the TLS connection

Sorry, I guess that adds at least one more RTT to the coalescing than I thought was in place.  I was under the impression that we recognized that the IP is the same as one that we already have an open connection for and that the open connection already has a certificate that includes the new host in the list of alt names.

Even then, in this case the actual traffic was over the new connection so we didn't coalesce even after TLS negotiation. 
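For readers following along, the coalescing condition Pat expected (same resolved IP, certificate alt names covering the new host) can be sketched as a simple predicate; Ryan's point is that Chrome only evaluates something like this after the new connection's TLS handshake already completed. A hypothetical illustration, not Chrome's actual code:

```python
# Hypothetical sketch of an H2 connection-coalescing check: reuse an
# existing connection when the new host resolves to the same IP and the
# existing connection's certificate covers the new host. Real HTTP/2
# coalescing has additional requirements; this models only the thread's
# discussion.
def can_coalesce(new_host, new_ip, existing_conn):
    return (new_ip == existing_conn["ip"]
            and new_host in existing_conn["cert_alt_names"])

base_conn = {
    "ip": "213.138.116.230",
    "cert_alt_names": {"www.pitchup.com", "m.pitchup.com",
                       "media.pitchup.co.uk"},
}
print(can_coalesce("media.pitchup.co.uk", "213.138.116.230", base_conn))  # True
```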

Ryan Sleevi

Apr 5, 2016, 2:35:52 PM
to Patrick Meenan, Ryan Sleevi, Ilya Grigorik, net-dev
On Tue, Apr 5, 2016 at 11:00 AM, Patrick Meenan <pme...@google.com> wrote:
Following the certificate are DNS lookups for the crl and ocsp responders for the certificate which seems to indicate that the Windows cert verifier at least started to check crl/ocsp for the EV cert (ignoring the stapled response).  There don't appear to be intermediate certificates so I assume the DNS lookup is for the leaf certificate verification. 

I'm not sure what you mean by "there don't appear to be intermediate certificates"

The chain is www.pitchup.com -> DigiCert SHA2 Extended Validation Server CA -> DigiCert High Assurance EV Root.

The stapled OCSP response *only* covers www.pitchup.com. The DNS lookups are for the revocation checking of the intermediate - DigiCert SHA2 Extended Validation Server CA (crl4.digicert.com, ocsp.digicert.com). That's because for EV certificates, we do revocation checking for *all* certificates in the path.
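Ryan's rule can be modeled in a few lines: the staple satisfies only the leaf, so for EV the intermediate still needs its own revocation check, hence the digicert.com lookups. A hypothetical sketch, not Chrome's verification code:

```python
# Model of Ryan's explanation: for EV, every certificate in the path gets a
# revocation check; a stapled OCSP response covers only the leaf.
# Hypothetical helper for illustration.
def certs_needing_network_revocation(chain, stapled_covers_leaf):
    """chain is ordered leaf -> intermediates -> root; the trusted root
    itself gets no revocation check."""
    needs = []
    for i, cert in enumerate(chain[:-1]):  # skip the root
        if i == 0 and stapled_covers_leaf:
            continue  # leaf already satisfied by the staple
        needs.append(cert)
    return needs

chain = ["www.pitchup.com",
         "DigiCert SHA2 Extended Validation Server CA",
         "DigiCert High Assurance EV Root"]
print(certs_needing_network_revocation(chain, stapled_covers_leaf=True))
# ['DigiCert SHA2 Extended Validation Server CA'] -- which is why
# crl4.digicert.com / ocsp.digicert.com show up in the tcpdump
```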
 
Shortly after the DNS lookups complete for the crl/ocsp domains, the cert verification fails.  Both TLS connections were bound to the same cert verifier job, and both get closed.  The cert verifier job shows no indication that it saw the stapled response.

That's correct, we don't log it.
 
After that, a 3rd connection that was already opened but idle starts TLS negotiation.  The cert response from the server is the same as on the earlier connections, but this time the netlog shows that the cert verifier job sees the stapled response and the verification passes.

No, that's not a correct interpretation of the results.
 
Sorry, I guess that adds at least one more RTT to the coalescing than I thought was in place.  I was under the impression that we recognized that the IP is the same as one that we already have an open connection for and that the open connection already has a certificate that includes the new host in the list of alt names.

Even then, in this case the actual traffic was over the new connection so we didn't coalesce even after TLS negotiation. 

I'm not sure how to interpret this either. The coalescing happens after verification. The way it's structured now, verification happens after the TLS Finished message. That's entirely in line with the socket 'not being ready' (because verification is still happening) and thus kicking off another connect.


I think your understanding of what's happening is off here, and while I'm responding to say "No, that's not correct", I'm not sure if it would be more useful to explain what *is* happening. But I'm not seeing anything unexpected here.

Patrick Meenan

Apr 5, 2016, 2:43:01 PM
to Ryan Sleevi, Ilya Grigorik, net-dev
The main unexpected behavior that I'd love to understand is why cert verification failed for the first verifier job but succeeded for the second one (for the base domain).  The crl/ocsp responders were looked up but never connected to (not even before the verification succeeded on the second attempt).

I guess it could be something screwy inside of Windows' verification logic itself.

Ryan Sleevi

Apr 5, 2016, 2:48:37 PM
to Patrick Meenan, Ryan Sleevi, Ilya Grigorik, net-dev
On Tue, Apr 5, 2016 at 11:42 AM, Patrick Meenan <pme...@google.com> wrote:
The main unexpected behavior that I'd love to understand is why cert verification failed for the first verifier job but succeeded for the second one (for the base domain).  The crl/ocsp responders were looked up but never connected to (not even before the verification succeeded on the second attempt).

I guess it could be something screwy inside of Windows' verification logic itself.

Or the servers themselves are unreliable, which is true for most OCSP and CRL responders.

Patrick Meenan

Apr 5, 2016, 3:32:08 PM
to Ryan Sleevi, Ilya Grigorik, net-dev
No outbound SYN.  Whatever happened was completely local (both in the failure AND success case)

Ryan Sleevi

Apr 5, 2016, 9:08:24 PM
to Patrick Meenan, Ryan Sleevi, Ilya Grigorik, net-dev

Ryan Sleevi

Apr 5, 2016, 9:28:30 PM
to Ryan Sleevi, Patrick Meenan, Ilya Grigorik, net-dev, bruce...@chromium.org
One possible hypothesis for the first two failing, but with the DNS bits being what they are - CAPI does a network liveness check (via sensapi's IsNetworkAlive function). If there was trouble with that (for example, perhaps the network status was having issues in the VM?), that would manifest as a revocation check failure - but as the network status quiesced, subsequent validation would be allowed to succeed.

You can force CryptNet (the OCSP/CRL fetching component of CAPI) to be super-chatty by setting the DebugFlags DWORD value to 1 under the registry key HKLM\SYSTEM\CurrentControlSet\Services\Crypt32. With that set, CryptNet will go into super-chatty mode via OutputDebugString. If you REALLY want to make it chatty, and trace the logic and not just the diagnostics, set it to 3; this enables full tracing (which DebugView can capture, and I suspect Bruce, cc'd, may have other suggestions).
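Spelled out as a command (per Ryan's description; run from an elevated prompt on the Windows agent, and confirm the flag semantics before relying on them):

```
reg add HKLM\SYSTEM\CurrentControlSet\Services\Crypt32 /v DebugFlags /t REG_DWORD /d 1
rem Use /d 3 instead for full tracing; capture the OutputDebugString stream with DebugView.
```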

For WPT runs, this will give significantly more insight into the TLS verification steps, if you want; we don't have anything remotely comparable on the other platforms.

Anyway, those are the best things I can recommend, but I'd lean toward IsNetworkAlive hiccuping.

Patrick Meenan

Apr 6, 2016, 12:05:23 AM
to rsl...@chromium.org, Ilya Grigorik, net-dev, bruce...@chromium.org
Thanks. I can turn on the reg key and grab the logs from debugview. I'll update when I have the results. 

Patrick Meenan

Apr 12, 2016, 1:30:48 PM
to rsl...@chromium.org, Ilya Grigorik, net-dev, bruce...@chromium.org
CAPI logs were a huge help, thanks

tl;dr: WPT bug, all fixed and OCSP/CRL checks now show up in WPT waterfalls (for both IE and Chrome)


WPT uses CNAME mappings to identify CDNs, and to help collect the CNAMEs while a test is running it adds a flag to the DNS lookups so that the CNAME is also returned with the lookup (this saves a significant amount of time over actively performing lookups after the test).  Unfortunately, it universally added the AI_CANONNAME flag to the lookups, and that conflicts with AI_FQDN, which WinHttp was using for the OCSP/CRL lookups to populate the FQDN in the cname field.

That pretty much means every OCSP/CRL validation check on the WPT agents for Chrome and IE has been effectively blocked for years (like 5 years).
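Per the Windows getaddrinfo documentation, AI_CANONNAME and AI_FQDN are mutually exclusive, which is exactly the conflict described above. A minimal sketch with the flag values inlined (taken from ws2def.h; Python's socket module doesn't expose AI_FQDN on non-Windows, so they are hard-coded here for illustration):

```python
# Windows getaddrinfo hint flags (values from ws2def.h). AI_CANONNAME and
# AI_FQDN are documented as mutually exclusive; combining them fails the
# lookup rather than returning both forms of the name.
AI_CANONNAME = 0x00000002  # return the canonical name in ai_canonname
AI_FQDN      = 0x00020000  # return the FQDN in ai_canonname (Windows-only)

def flags_valid(flags):
    """Reject the invalid AI_CANONNAME | AI_FQDN combination."""
    return not (flags & AI_CANONNAME and flags & AI_FQDN)

# WPT forced AI_CANONNAME onto every lookup; WinHttp's OCSP/CRL lookups
# already used AI_FQDN, so the combined flags were rejected:
print(flags_valid(AI_CANONNAME | AI_FQDN))  # False -> the lookup fails
print(flags_valid(AI_FQDN))                 # True
```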

The fix was brutally simple and I landed it this afternoon.  With the fix, the validation checks show up in the waterfalls now just like they have for Firefox for the longest time: http://www.webpagetest.org/result/160412_W2_e4acee08d8b863a6dfe1c509c23b1c9d/

If you want a clear cert cache when running a test (to see what validations are done), make sure to check the "Clear SSL Certificate Caches" box in the Advanced tab.  I used to always clear the Windows certificate caches, but there was a lot of push-back from users because the intermediates are very likely to be cached on end-user systems, so I made it optional.

Thanks,

-Pat

Ilya Grigorik

Apr 12, 2016, 5:51:50 PM
to Patrick Meenan, rsl...@chromium.org, net-dev, bruce...@chromium.org
On Tue, Apr 12, 2016 at 10:30 AM, Patrick Meenan <pme...@google.com> wrote:
CAPI logs were a huge help, thanks

tl;dr: WPT bug, all fixed and OCSP/CRL checks now show up in WPT waterfalls (for both IE and Chrome)

WPT uses CNAME mappings to identify CDNs, and to help collect the CNAMEs while a test is running it adds a flag to the DNS lookups so that the CNAME is also returned with the lookup (this saves a significant amount of time over actively performing lookups after the test).  Unfortunately, it universally added the AI_CANONNAME flag to the lookups, and that conflicts with AI_FQDN, which WinHttp was using for the OCSP/CRL lookups to populate the FQDN in the cname field.

That pretty much means every OCSP/CRL validation check on the WPT agents for Chrome and IE has been effectively blocked for years (like 5 years).

The fix was brutally simple and I landed it this afternoon.  With the fix, the validation checks show up in the waterfalls now just like they have for Firefox for the longest time: http://www.webpagetest.org/result/160412_W2_e4acee08d8b863a6dfe1c509c23b1c9d/

\o/ -- yay, great work Pat. The new waterfalls look much better!

---

<aside>
This probably deserves a separate thread, but since we're on the subject...

[inline image: WPT waterfall showing the gap between the HTML and the subresource TTFBs]

It would be nice if WPT showed when the h2 requests are sent -- e.g. devtools shows a "skinny" line up to the point when we get a TTFB. Right now the gap between the HTML and request TTFB's looks unnecessarily scary, as if the browser is not doing anything... but looking at the trace for that window of time shows that the requests have gone out and we're waiting for the server response:

[inline image: trace selection showing the requests already sent, waiting on the server response]

The right boundary of that window corresponds to the TTFB in the WPT trace.

</aside>

ig

Patrick Meenan

Apr 12, 2016, 6:05:52 PM
to Ilya Grigorik, rsl...@chromium.org, net-dev
-brucedawson

I'm fairly certain the huge gap in the WPT waterfall is actually accurate, and the timings are actually coming from dev tools for this specific site (they are SPDY, not H2, and WPT doesn't have a SPDY decoder).  The gap is the DNS lookup + whatever else happens before we decide to coalesce the requests for the media* domain onto the base domain.  The requests do originate a lot sooner in the browser, but they don't actually go out onto the wire for a while, which is what WPT reports.

Ilya Grigorik

Apr 12, 2016, 6:14:09 PM
to Patrick Meenan, rsl...@chromium.org, net-dev
On Tue, Apr 12, 2016 at 3:05 PM, Patrick Meenan <pme...@google.com> wrote:
I'm fairly certain the huge gap in the WPT waterfall is actually accurate, and the timings are actually coming from dev tools for this specific site (they are SPDY, not H2, and WPT doesn't have a SPDY decoder).  The gap is the DNS lookup + whatever else happens before we decide to coalesce the requests for the media* domain onto the base domain. 

Guessing there is no way for us to show that in the waterfall, since that work is never surfaced to higher layers?
 
The requests do originate a lot sooner in the browser but they don't actually go out on to the wire for a while which is what WPT reports.

Hmm, are you sure? The right boundary in the above trace selection is ~2.3s, which lines up with us reading the response. However, the request itself hits the wire much sooner (~300ms sooner, to be exact), which is what I'm saying would be nice to report.


Patrick Meenan

Apr 12, 2016, 6:29:08 PM
to Ilya Grigorik, rsl...@chromium.org, net-dev
Showing the "blocked" time is something I've considered, but it hasn't been important enough yet to push everything else out of the way.  In the case of H2 I'm 100% positive on the timings because they come from the socket filter.  For SPDY it depends on the remote debug net.* events ("request will be sent", I believe, is the one that triggers the start of the outbound bytes measurement).  The times won't line up exactly because the traces will have an earlier start point (i.e. the DNS lookup for the base domain will not be at 0).
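The "blocked" time Pat describes would just be the delta between the browser issuing the request and bytes hitting the wire. A trivial sketch (the event names here are illustrative, not WPT's actual schema):

```python
# Sketch of per-request "blocked" time: the gap between the devtools
# "request will be sent" event and the first outbound bytes seen by the
# socket filter. Timestamps in milliseconds on a shared clock.
def blocked_ms(request_will_be_sent_ms, first_wire_bytes_ms):
    return max(0.0, first_wire_bytes_ms - request_will_be_sent_ms)

print(blocked_ms(2000.0, 2300.0))  # 300.0 ms spent before hitting the wire
```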

Eyeballing the waterfall screen shot, it looks like ~500ms from the HTML to the outbound request and those are the 2 you'd want to compare to in the trace.  If you have a link to the test result I can pull the raw dev tools debug events to compare.

Ilya Grigorik

Apr 12, 2016, 6:45:12 PM
to Patrick Meenan, rsl...@chromium.org, net-dev
On Tue, Apr 12, 2016 at 3:29 PM, Patrick Meenan <pme...@google.com> wrote:
Showing the "blocked" time is something I've considered, but it hasn't been important enough yet to push everything else out of the way.  In the case of H2 I'm 100% positive on the timings because they come from the socket filter.  For SPDY it depends on the remote debug net.* events ("request will be sent", I believe, is the one that triggers the start of the outbound bytes measurement).  The times won't line up exactly because the traces will have an earlier start point (i.e. the DNS lookup for the base domain will not be at 0).

Ok, gotcha.
 
Eyeballing the waterfall screen shot, it looks like ~500ms from the HTML to the outbound request and those are the 2 you'd want to compare to in the trace.  If you have a link to the test result I can pull the raw dev tools debug events to compare.
