Re: More deterministic web page replay (was [blink-dev] Blink scheduler question)


Rick Byers
May 20, 2015, 12:27:50 PM
to Roberto Perdisci, tele...@chromium.org
Moving this fork of the discussion to telemetry@ (bcc blink-dev) and updating subject.

On Wed, May 20, 2015 at 11:55 AM, Roberto Perdisci <roberto....@gmail.com> wrote:
In terms of reducing record/replay non-determinism, we have developed
a number of "best effort" approaches to limit it, both for Blink and
for V8. For example, for V8 we record/replay values returned from
Math.random() and from Date(). We do this at the level of V8's
platform (e.g., we "wrap" the platform to hook OS::TimeCurrentMillis).
We do more "advanced" things for JS-driven asynchronous requests, etc.
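
To give a flavor of the idea in plain JavaScript (purely an
illustrative sketch; our actual hooks live at the C++ platform layer,
and all names here are made up):

// Record values from nondeterministic sources during recording;
// hand the same values back, in the same order, during replay.
const realRandom = Math.random.bind(Math);
const realNow = Date.now.bind(Date);
const recorded = { random: [] as number[], now: [] as number[] };
let replaying = false; // flipped when a captured trace is replayed

Math.random = (): number => {
  if (replaying && recorded.random.length > 0) {
    return recorded.random.shift()!; // replay the recorded value
  }
  const v = realRandom();
  recorded.random.push(v); // record for later replay
  return v;
};

Date.now = (): number => {
  if (replaying && recorded.now.length > 0) {
    return recorded.now.shift()!;
  }
  const v = realNow();
  recorded.now.push(v);
  return v;
};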

Makes sense.  telemetry@: have we encountered issues in our perf tests due to non-determinism like times and random numbers?  I wonder how much value there would be in enabling this sort of thing to be mocked out.  E.g., when we measure perf against the top 10k pages, are there pages that are particularly noisy?  Manually ensuring things are sufficiently deterministic (as I assume we've done for the top 25) obviously doesn't scale.

I could share more technical details privately, if you are interested
(we have a full academic paper under submission, and would like not to
openly disclose all the details until that becomes public).

Fair enough.  How about you just circle back here once your paper is published?  We have a system today for users to send us "slow reports" (chrome://tracing traces); I'd love to have the option of including a full replay log in those so we can replay EXACTLY what happened (I assume the size is dominated by the network traffic - so not much bigger than the bandwidth used during the recording?).  This would be useful for functional issues in addition to performance, of course.  We've experimented with this sort of thing a bit (e.g. we have various input event record/replay systems) but never (AFAIK) made a serious attempt at fully reliable record/replay.

BTW, you can see WebCapsule in action here (and find other videos by
searching for "WebCapsule project" on YouTube):
https://www.youtube.com/watch?v=K1CwIwcTgbE

Thanks,


Roberto



On Wed, May 20, 2015 at 11:20 AM, Rick Byers <rby...@chromium.org> wrote:
>
> On Wed, May 20, 2015 at 10:57 AM, Roberto Perdisci
> <roberto....@gmail.com> wrote:
>>
>> Dear Blink-dev list,
>>
>>    I'm trying to figure out if there is a specific past Git commit ID for
>> Blink's code that contains a full implementation of the Blink scheduler
>> described in this document:
>> https://docs.google.com/document/d/11N2WTV3M0IkZ-kQlKWlBcwkOkKTCuLXGVNylK5E2zvc/edit#heading=h.3ay9sj44f0zd
>
>
> +scheduler-dev.
>
>> I understand that there is a heavy refactoring in progress for the Blink
>> scheduler, as outlined in this other document:
>> https://docs.google.com/document/d/16f_RIhZa47uEK_OdtTgzWdRU0RFMTQWMpEWyWXIpXUo/edit#heading=h.srz53flt1rrp
>>
>> However, my understanding is that this refactoring is far from complete,
>> and the code is being heavily changed.
>>
>> Just to give a bit of background about the above question: we have
>> developed a system called WebCapsule that is able to record web browsing
>> traces, offload the recorded data, and then seamlessly replay the recorded
>> browsing activities in a separate isolated environment with no new user
>> input or network resources (we had a poster about WebCapsule at this year's
>> Usenix NSDI conference: http://goo.gl/RrJRDZ). This is done via a
>> self-contained instrumentation of Blink (no changes to any code outside of
>> Blink), and by leveraging DevTools. Our current replay strategy takes a
>> "best effort" approach to cope with non-determinism introduced by thread
>> scheduling. While our current approach works quite well in practice, we are
>> planning to instrument the Blink scheduler to get closer to fully
>> deterministic replay. As we are not currently interested in all the UI-level
>> optimizations that seem to have motivated the Blink scheduler refactoring,
>> my thinking is that we can work off of the previous Blink scheduler
>> implementation to achieve our goals (or get very close to them).
>
>
> Note that "UI-level optimizations" are the primary reason for the existence
> of the blink scheduler in the first place (eg. to try to get smooth
> scrolling during page load).  Perhaps rather than find an old version to
> use, you just want to disable the scheduler with
> --disable-blink-features=BlinkScheduler?
>
> BTW, your system sounds interesting.  We rely heavily on "web page replay"
> for our 'telemetry' performance testing, but it doesn't attempt to replay
> user input - just network traffic.  Adding user input record and replay
> seems like it could be valuable for both perf and functional testing.  If
> there are other places you've successfully reduced non-determinism I'd love
> to hear details (perhaps we can bake it more directly into chrome or
> telemetry); non-determinism can be a huge pain for our performance testing.
>
>> Any help would be greatly appreciated.
>>
>> Thank you,
>> regards
>>
>>
>> Roberto
>>
>>
>

Ned Nguyen
May 20, 2015, 3:29:16 PM
to tele...@chromium.org, erik...@chromium.org, roberto....@gmail.com
+erikchen

erik...@google.com
May 20, 2015, 4:17:32 PM
to tele...@chromium.org, erik...@chromium.org, roberto....@gmail.com
Replay from the top 25 pages is not very deterministic. This does cause problems for Telemetry. (No one has manually massaged the pages to reduce non-determinism).
Telemetry does not attempt to replay sites from Alexa top 10000.

Stubbing out Math.random() and Date() would be a good start, but likely not sufficient.
WPR makes a half-hearted stab at it here:
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webpagereplay/deterministic.js&q=deterministic.&sq=package:chromium&l=1

I suspect it would be necessary to make the replay happen on the same machine that the site was recorded on, unless you also plan to stub out every call that returns hardware and system-specific information.
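
For reference, the gist of that kind of stubbing, as a simplified sketch (not the actual deterministic.js source; the seed and clock values are placeholders):

// Pin the two most common sources of page nondeterminism.
let seed = 1; // fixed placeholder seed
Math.random = (): number => {
  // Park-Miller LCG: a fixed seed yields the same sequence on every run.
  seed = (seed * 48271) % 2147483647;
  return (seed - 1) / 2147483646;
};

const startMs = 1432000000000; // placeholder clock value, frozen at record time
let ticks = 0;
Date.now = (): number => startMs + 16 * ticks++; // advance ~16 ms per call
// (The real deterministic.js also wraps the Date constructor itself.)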

Roberto Perdisci
May 20, 2015, 5:39:47 PM
to Rick Byers, tele...@chromium.org, erik...@google.com
On Wed, May 20, 2015 at 12:27 PM, Rick Byers <rby...@chromium.org> wrote:

> [snip]
> Fair enough. How about you just circle
> back here once your paper is published?

Rick:

sure, we would be very happy to share all details once the paper is
out. In fact, we are planning to release our proof-of-concept code as
well.

> [snip]
> (I assume the size is dominated by the
> network traffic - so not much bigger than
> the bandwidth used during the recording?).

Yes, correct. All other metadata represents only a small fraction of
the recorded network traffic.

Thanks,


Roberto

Roberto Perdisci
May 20, 2015, 5:50:48 PM
to erikchen, telemetry, erik...@chromium.org
On Wed, May 20, 2015 at 4:17 PM, <erik...@google.com> wrote:
> Replay from the top 25 pages is not very deterministic. This does cause problems for Telemetry. (No one has manually massaged the pages to reduce non-determinism).
> Telemetry does not attempt to replay sites from Alexa top 10000.
>
> Stubbing out Math.random() and Date() would be a good start, but likely not sufficient.
> WPR makes a half-hearted stab at it here:
> https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webpagereplay/deterministic.js&q=deterministic.&sq=package:chromium&l=1
>

Erik:

thanks for the link to deterministic.js. We do have code in
WebCapsule that also records other JS non-determinism, though we need
to expand on that part.

> I suspect it would be necessary to make the replay happen on the same machine that the site was recorded on, unless you also plan to stub out every call that returns hardware and system-specific information.


We currently record most of Blink's platform API calls. For example,
even if we replay on a different device, during replay we can return
the original User-Agent string and other system-specific info that
was observed during recording.
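
As a JS-level illustration of the idea (our hooks actually sit in
Blink's platform code; all names and values below are made up):

// During replay, answer environment queries with the values captured
// at record time, so the page sees the recording device's identity.
interface EnvSnapshot {
  userAgent: string;
  platform: string;
  language: string;
}

function applyEnvSnapshot(snapshot: EnvSnapshot): void {
  for (const [key, value] of Object.entries(snapshot)) {
    // Navigator properties are getters on the prototype; override them there.
    Object.defineProperty(Navigator.prototype, key, {
      get: () => value,
      configurable: true,
    });
  }
}

// Values captured on the recording device (placeholders):
applyEnvSnapshot({
  userAgent: "Mozilla/5.0 (Linux; Android 4.4.4; Nexus 7) ...",
  platform: "Linux armv7l",
  language: "en-US",
});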

As an example, we have successfully recorded some browsing traces on
an ARM-based Nexus 7 tablet, and then replayed those traces correctly
on an (emulated) x86-based Android Virtual Device (running on top of
Ubuntu).

Therefore, we believe that "cross-platform" record/replay is possible,
though there are still a number of fine details to be ironed out.

Thanks,


Roberto

Paul Irish
May 22, 2015, 4:29:37 PM
to tele...@chromium.org, roberto....@gmail.com, erik...@chromium.org, Sam Uong, erik...@google.com, Pavel Feldman
Hey Roberto,

Just wanted to add that I'm very happy to see this work! 
We've had WPR for network replay, but this is very exciting indeed.

I work on the developer tools in Chrome and we know many developers are interested in having this level of control.

Good luck!

Roberto Perdisci
May 22, 2015, 5:02:55 PM
to Paul Irish, telemetry, erik...@chromium.org, Sam Uong, Erik Chen, Pavel Feldman, Chris Neasbitt
Thank you, Paul.

We are really happy to hear our project is of interest. It took a lot
of effort for our small academic team, and most of WebCapsule's success
is due to the hard work of Chris Neasbitt (in CC), our lead PhD
student on the project.

We look forward to sharing all details and code of our WebCapsule
system (hopefully our paper will be published soon ;) ), and to
receiving further feedback from you guys so that we can try to bring
the project to the next level.

Best regards,


Roberto

Ned
May 22, 2015, 5:07:25 PM
to Roberto Perdisci, Paul Irish, telemetry, erik...@chromium.org, Sam Uong, Erik Chen, Pavel Feldman, Chris Neasbitt
I am uncomfortably excited to see this work happen!

roberto....@gmail.com
Aug 6, 2015, 12:41:49 PM
to telemetry, roberto....@gmail.com
Hello Rick,

a few months ago I wrote a question about Blink's scheduler, and mentioned that we have developed a system called WebCapsule that aims to record and replay web browsing traces.

As promised, I'm circling back to share more information about our system, now that our WebCapsule paper has been officially accepted (it will appear at the ACM Conference on Computer and Communications Security 2015).

You can find a draft of the paper at the following link:
http://roberto.perdisci.com/publications/publication-files/webcapsule.pdf

We are planning to release our WebCapsule prototype and a number of recorded browsing traces in the near future.

If you have any comments on the paper, we would love to hear from you and other Chromium developers.

As a follow-up to our current paper, we are studying whether it is possible to make replay more deterministic (currently we use a best-effort replay approach). That's why we are trying to learn more about Blink's scheduler and other parts of Chromium that can introduce significant non-determinism into the replay process.

Please, let me know if you have any comments.

Best regards,



Roberto

Ned
Aug 6, 2015, 12:43:58 PM
to roberto....@gmail.com, telemetry, k...@google.com, z...@google.com

Roberto Perdisci
Sep 26, 2015, 9:06:52 AM
to telemetry, roberto....@gmail.com, k...@google.com, z...@google.com
Dear Telemetry group,

   last time I mentioned our WebCapsule project for record-and-replay of web browsing traces, there seemed to be interest in its potential applications to telemetry.

I wanted to let you know that we have released most of our WebCapsule code on GitHub: http://webcapsule.org

On our webcapsule.org page you can find links to the code, binaries (Linux and Android), our ACM CCS 2015 paper, demos, and our wiki, which contains documentation on how to run WebCapsule.

We are moving forward with our project, and aiming to improve replay towards full determinism. Our project was initially geared towards security applications (e.g., enabling fine-grained forensic analysis of phishing attacks), but we are now also focusing on other application scenarios, such as web application debugging and performance analysis. 

Since you are the experts in this area, we would very much appreciate any feedback you might be able to provide. I would be happy to discuss further in person, if there is interest. I am also planning to attend BlinkOn in November, and would love to meet some of you and brainstorm about possible common interests around record-and-replay techniques for Blink/V8.

Thank you very much,
best regards



Roberto

Michael Klepikov
Sep 26, 2015, 5:47:10 PM
to Roberto Perdisci, telemetry, Zoe Wright
Hi Roberto, we are currently using WPR in the google3 latency lab for web performance/latency testing. In our context we need maximally faithful replay of the recorded session(s); otherwise tests fail too often to be useful. Some of the more sophisticated apps, e.g. Tactile, have a degree of nondeterminism themselves, and we allow several rounds of recording in order to deal with that. So far we've found that we cannot achieve a good pass rate on sophisticated tests/apps without adding certain rule-based tweaks in WPR that can be configured for each test.


Have you explored WebCapsule replay on a nontrivial scenario with Tactile? For example: search, wait for the results to show up, and click on a result. If we could eliminate, or at least reduce, the need to hand-tweak replay for each tested app through trial and error, that would certainly improve UX for the developers who want to use record/replay in their tests.

Thanks.

Michael

Roberto Perdisci
Sep 27, 2015, 11:36:04 AM
to telemetry, roberto....@gmail.com, z...@google.com, k...@google.com
Dear Michael,

   thank you for your response. We did encounter similar issues with dynamically generated URLs in WebCapsule. Here is an example:

https://www.amazon.com/empty.gif?1424608322667


where 1424608322667 is a timestamp (in milliseconds). During replay, that timestamp may be off, making it difficult to match the requested URL with its recorded response.


In WebCapsule we use a number of approaches to try to solve this type of problem:


1) We record the return values of calls to CurrentTime and MonotonicallyIncreasingTime from both Blink's and V8's platform APIs. During replay, we attempt to re-sync the "replay clock" to the recorded timeline (this is a purely best-effort approach, but it actually brings us close to what we want).


2) We record JS calls to Math.random(), and during replay we attempt to replay the same return values as seen during recording (in case the URL embeds a parameter derived from a random number).


3) For every network request, we record the current JavaScript call stack. If we are not able to match a response with the previous methods, we can attempt to match the JS call stack during replay with the one seen during recording to identify the correct response. Essentially, the JS call stack becomes a key into the table of network responses. Combined with other information (the URL's domain/structure/timestamp), this can actually help a lot.


4) One thing that we have not yet implemented but are planning to do is approximate matching of URLs. Again, the idea is to try to identify the correct response, even if the URL requested during replay is slightly different from the URL seen during recording.


Methods 1) and 2) aim to "force" Blink/V8 to re-generate the very same URLs as seen during recording. Methods 3) and 4) aim to take care of those cases in which 1) and 2) have failed for some reason (a rough sketch of these follows below).
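
To make 3) and 4) more concrete, here is a rough sketch (all names are invented for illustration, not taken from our actual code):

interface RecordedResponse {
  url: string;
  jsStack: string; // JS call stack captured when the request was issued
  body: string;
}

// Drop query parameters that look like millisecond timestamps or nonces,
// e.g. empty.gif?1424608322667.
function normalizeUrl(raw: string): string {
  const u = new URL(raw);
  for (const key of [...u.searchParams.keys()]) {
    const value = u.searchParams.get(key) ?? "";
    if (/^\d{10,}$/.test(key) || /^\d{10,}$/.test(value)) {
      u.searchParams.delete(key);
    }
  }
  return u.toString();
}

function findResponse(
  recorded: RecordedResponse[],
  url: string,
  jsStack: string,
): RecordedResponse | undefined {
  const norm = normalizeUrl(url);
  // Method 3: the JS call stack acts as a key into the response table.
  const byStack = recorded.find(
    (r) => r.jsStack === jsStack && normalizeUrl(r.url) === norm,
  );
  if (byStack !== undefined) return byStack;
  // Method 4: fall back to approximate matching on the normalized URL alone.
  return recorded.find((r) => normalizeUrl(r.url) === norm);
}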


The other part of WebCapsule that I think may be helpful to Telemetry is the recording and replay of user-browser interactions (key-presses, clicks, mouse movements, taps, gestures, page scrolls, etc.).



I'm not familiar with Tactile. I searched online, but found several possibly relevant results. Could you point me more specifically to the Tactile you are referring to?


Thank you very much,

regards



Roberto



Michael Klepikov
Sep 28, 2015, 12:00:44 AM
to Roberto Perdisci, telemetry, Todd Wright

Hi Roberto, I didn't realize you are not at Google; sorry for using an internal nickname:) Let's start from the beginning... say, Google Maps – do a search, wait for the results to display on the map, click on one of the results, wait for the result popup to render. There is a lot of semi-randomness hidden underneath. Have you tried that with WebCapsule?

Michael
Sent from my mobile device

Roberto Perdisci
Sep 28, 2015, 12:14:37 PM
to telemetry, roberto....@gmail.com, z...@google.com, k...@google.com
Hi Michael,

   thanks for the clarification. Yes, Google Maps is a challenge, and we can replay it successfully only in part. Here is a quick demo that I prepared of using WebCapsule to interact with Google Maps (the subtitles should help clarify what's happening, if the terminal text is too small on your screen): 

https://youtu.be/TvuFtWMKTMg   (this demo video is only 2.5 minutes long)

Essentially, while WebCapsule can handle user input and, in general, the replay of search-related network requests, what I noticed is that it is not currently able to replay the network requests/responses related to updating the map's tiles. This is caused by requests that look like the following:


Map updates/interactions seem to generate a very large number of these unique requests. There must be some non-deterministic input that goes into computing the last part of the URL that we either don't currently handle (e.g., we do not have a hook for the right Blink/V8 platform API call), or that we record but fail to replay correctly (e.g., we return the wrong random value or current-time value to V8). We will need to investigate this more deeply.

This is actually a good starting point and target for our next steps towards making WebCapsule's replay more deterministic. As an example, one of the things we are currently working on is to wrap V8 to make the entire JavaScript execution fully deterministic. It's an ambitious goal, but we have some good ideas on how to get there.

Please, let me know if you have any additional feedback.

Thank you very much,


Roberto

Michael Klepikov
Sep 28, 2015, 12:28:33 PM
to Roberto Perdisci, telemetry, Zoe Wright
Hi Roberto, thanks for the demo and a peek into the plans! You keep mentioning Blink and V8... Are you targeting anything other than Chrome (and maybe Opera)? Specifically, Mobile Safari is very interesting, as well as other desktop browsers – FF, IE, Edge. WPR is browser-agnostic, which is a big advantage in our environment.

Michael

Roberto Perdisci
Sep 28, 2015, 12:39:32 PM
to telemetry, roberto....@gmail.com, z...@google.com, k...@google.com
Michael,

   I am aware that Telemetry's WPR works from "outside" the browser. WebCapsule instead works from "inside" the browser.

More specifically, WebCapsule requires modifications to the Web API and platform API for Blink and V8 (we add hooks that allow us to record UI and network events by extending DevTools). This makes it portable to browsers that embed Blink/V8 (Chromium, Opera, Yandex, etc.) and virtually platform-agnostic, because we inherit this property from Blink. However, WebCapsule is not browser-agnostic, in that it is not easily portable to browsers that do not use Blink/V8.

Thanks,


Roberto

Michael Klepikov
Sep 28, 2015, 12:50:27 PM
to Roberto Perdisci, telemetry, Todd Wright

When you say modifications, does that involve building a custom binary of the browser with WebCapsule enabled?

Michael
Sent from my mobile device

Roberto Perdisci
Sep 28, 2015, 12:54:19 PM
to telemetry, roberto....@gmail.com, z...@google.com, k...@google.com
Yes, that's correct. We extend DevTools and add hooks inside Blink to record events, which means we need to compile a custom browser. We have done this for both Chromium on Linux and ChromeShell on Android (both built with WebCapsule's additional record-and-replay functionality).


Roberto

Michael Klepikov
Sep 28, 2015, 12:57:47 PM
to Roberto Perdisci, telemetry, Todd Wright

Also, when replaying, what drives the browser - WebCapsule, or could it be a separate automated test framework like Telemetry or WebDriver? If WebCapsule drives the browser on replay, how do you deal with situations where, during recording, the test framework was waiting for certain page elements to render before proceeding - how would WebCapsule know to do the same waits on replay? Page loading is never fully deterministic (and forcing it to be fully deterministic kind of undermines the performance-testing use case), so I'd imagine it's hard to infer such waits just by observing browser traces.

Michael
Sent from my mobile device

Roberto Perdisci
Sep 28, 2015, 1:50:26 PM
to telemetry, roberto....@gmail.com, z...@google.com, k...@google.com
WebCapsule is completely self-contained, and does not use WebDriver or a MITM proxy (though we could build a version of WebCapsule that relies on Telemetry for network traffic replay).

Currently, WebCapsule re-injects UI and network events itself (via DevTools) by closely following the recorded event timeline, replicating as closely as possible all the time deltas between events as seen during recording. As you mentioned, this may cause problems when UI and rendering do not replay exactly the same way.
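
In outline, the re-injection loop looks something like this (an
illustrative sketch; inject() stands in for our DevTools-based event
dispatch):

interface RecordedEvent {
  timestampMs: number; // capture time, from the recording clock
  payload: unknown;    // serialized input or network event
}

async function replayTimeline(
  events: RecordedEvent[],
  inject: (payload: unknown) => void,
): Promise<void> {
  let prevMs = events.length > 0 ? events[0].timestampMs : 0;
  for (const ev of events) {
    // Reproduce the recorded gap between this event and the previous one.
    await new Promise<void>((resolve) =>
      setTimeout(resolve, ev.timestampMs - prevMs),
    );
    inject(ev.payload);
    prevMs = ev.timestampMs;
  }
}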

One thing that can be done is to synchronize UI event injection with network responses, for example. However, this is also a best-effort approach, in that the processing time for the responses may still introduce errors in the timing of UI re-injection.

An approach we are working on to make WebCapsule more deterministic is to synchronize UI event re-injection with DOM events (i.e., by following how the DOM tree is built/changed). However, this is work in progress, and while it seems like a promising approach, we do not yet know how successful it will be in practice.

Thanks,


Roberto

Roberto Perdisci
Sep 28, 2015, 5:26:35 PM
to telemetry, roberto....@gmail.com, z...@google.com, k...@google.com
Dear Michael,

   in your previous emails you mentioned that you are using WPR in the "google3 latency lab for web performance/latency testing." Is there any public document that describes the objectives of these tests and how Telemetry is set up for them, at a high level? What causes variability in performance, and how is latency measured?

Knowing this would be extremely useful in helping us understand if/how WebCapsule could evolve to also be useful for performance testing.

Thank you very much for your feedback,
regards


Roberto

Michael Klepikov
Sep 28, 2015, 9:53:13 PM
to Roberto Perdisci, telemetry, Todd Wright

Think WebPageTest.org, with scripted multi-step tests, not just URL loads. We use, for example, Speed Index computed from recorded video as the main performance metric. Network conditions are simulated via dummynet.

https://sites.google.com/a/webpagetest.org/docs/system-design/mobile-testing

https://sites.google.com/a/webpagetest.org/docs/system-design/webpagetest-relay

https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/scripting

https://sites.google.com/a/webpagetest.org/docs/private-instances/node-js-agent/async-js

Reliable and complete replay for multi-step test scenarios is very important: everything must load correctly, e.g. no missed map tiles on replay, even if missing tiles wouldn't fail the test per se. This is in contrast with a different use case where a test harness measures page load for 10000 pages and doesn't care if 5% of them fail to replay properly. We need 100%, and we deal mostly with handwritten tests for a given web app, not measurements across 10000 URLs.

We also seriously care about non-Chrome browsers, in particular Mobile Safari. But if there were a much better replay for Chrome, we would certainly consider integrating it in addition to the browser-agnostic, but more complex to set up and maintain, WebPageReplay.

Michael

Roberto Perdisci
Sep 30, 2015, 1:31:23 PM
to telemetry, roberto....@gmail.com, z...@google.com, k...@google.com
Thank you, Michael.

The links you sent provided us with very useful information; I really
appreciate it.

I have one more question, to further clarify whether this is one of
the use cases that Telemetry/WPR is interested in:

Let's assume we have recorded a browsing session of a (simulated?)
user that visits and interacts with Google Maps (e.g., the user
searches for a location, clicks on the map, moves/zooms the map,
etc.). Now, we want to replay this browsing session. Specifically, we
want to replay the user inputs and all the resulting network
requests/responses generated by the browser.

In essence, the UI inputs and network traffic could be seen as a
"constant" at this point. What would vary, and is therefore the
important thing to measure, is the Speed Index (or similar metric)
related to replaying the same browsing trace on multiple devices and
browsers.

The net result is that, all other conditions being equal, we can
directly compare the performance of rendering Google Maps on different
browsers, or the same browser but on different devices/platforms.

Did I get it right? Is this one of the use cases of interest?

Best regards,


Roberto

Michael Klepikov
Oct 4, 2015, 12:46:56 AM
to Roberto Perdisci, telemetry, Zoe Wright
On Wed, Sep 30, 2015 at 1:31 PM, Roberto Perdisci <roberto....@gmail.com> wrote:
> [snip]
>
> The net result is that, all other conditions being equal, we can
> directly compare the performance of rendering Google Maps on different
> browsers, or the same browser but on different devices/platforms.


We don't expect a recording to work across browsers or devices. For example, screen resolutions and aspect ratios vary widely, so e.g. Maps would request different tiles than in the original recording. There are also all kinds of browser-dependent web app behavior differences, based on runtime detection in JS – and we don't want to stub them out, because 1) they are there for a reason, and 2) we want to test the web app the way it behaves for real users on that browser.

We only replay multiple iterations on the same browser where they were recorded, in order to get statistically viable client-side performance metrics. Even with network-level replay, different iterations will vary slightly, or not so slightly, depending on many factors. So we run continuous tests starting with some semi-arbitrary number of iterations – say, 10 – observe the variability and precision, and adjust the number of iterations to balance total test time against the desired precision of detected differences.

Because of these iteration-to-iteration variations, we've also found that no matter how well we tweak the recording behavior, truly sophisticated web apps like Maps or Google Docs will still sometimes make unexpected requests. We believe this has to do with real-time browser performance and how the JS adapts to it, so 1) it's impossible to mitigate with any stubbed JS functions, and 2) from the performance-testing POV, it is an inherent property of the system under test, and therefore we want to test with it – e.g. ignoring the results of such iterations would undermine the statistical validity of the test.

A made-up example, just to illustrate how this could happen and why it's essentially impossible to stabilize via faked-out JS functions: Docs might decide to send the text typed so far to the server in one or two XHRs, depending on whether it detects a 100ms pause between keystrokes while the user is typing, and a 100ms pause may be triggered by any number of factors, e.g. GC. It sounds far-fetched in theory, but in practice it happens often enough to make some tests useless without a mitigation.

To address these occasional unexpected requests, we have what we might call "adaptive record/replay" – the replay iterations still allow non-recorded URLs to go through to the real server, and add them to the recording. We ignore the performance results of the iterations that accessed real servers, but we keep running more iterations until we accumulate the desired number of perfectly replayed iterations, whose results we then use.
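
In outline, the loop looks something like this (names are invented for illustration; the real harness is more involved):

interface IterationResult {
  speedIndex: number;      // or any other client-side performance metric
  hadLiveFetches: boolean; // true if any URL missed the recording
}

async function collectCleanIterations(
  runIteration: () => Promise<IterationResult>,
  wanted: number,
): Promise<number[]> {
  const clean: number[] = [];
  while (clean.length < wanted) {
    const result = await runIteration();
    // Iterations that reached real servers grew the recording as a side
    // effect, but their timing is tainted, so discard their metrics.
    if (!result.hadLiveFetches) clean.push(result.speedIndex);
  }
  return clean;
}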