11/19 Reviews - Ajaxscope


Rodrigo

Nov 18, 2009, 10:44:16 PM
to CSCI2950-u Fall 09 - Brown
Please post your reviews here.

Spiros E.

Nov 18, 2009, 11:41:48 PM
to CSCI2950-u Fall 09 - Brown
The paper presents AjaxScope, a system for unobtrusively and dynamically
instrumenting a web application's JavaScript. An AjaxScope proxy
server is placed somewhere above the application server in the
application stack. This proxy rewrites the application's JavaScript
according to developer-specified policies.
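The rewriting the review describes can be pictured as a source-to-source pass that wraps functions with probes. A minimal sketch of the idea (the real AjaxScope rewriter works on a parsed AST at the proxy; here we simply wrap an already-defined function object, and all names are illustrative):

```javascript
// Minimal sketch of instrumentation: wrap a function with entry/exit
// timing probes and buffer the observations for later reporting.
const logBuffer = [];

function instrumentTiming(fn, name) {
  return function (...args) {
    const start = Date.now();
    try {
      return fn.apply(this, args);
    } finally {
      // Record an observation for later batching back to the proxy.
      logBuffer.push({ point: name, elapsedMs: Date.now() - start });
    }
  };
}

// Example: instrument a deliberately busy application function.
function renderList(n) {
  let count = 0;
  for (let i = 0; i < n; i++) count += i;
  return count;
}

const instrumented = instrumentTiming(renderList, "renderList");
const result = instrumented(100000); // behaves exactly like renderList
console.log(result, logBuffer.length);
```

The wrapped function is behavior-preserving: callers see the same return value, while each call leaves one timing record behind.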

I would have liked to see a performance comparison between code
instrumented with AjaxScope, and code that was manually instrumented
in the same way, but that was capable of being disabled by a boolean.
Seeing performance numbers for the four cases (uninstrumented,
AjaxScope instrumented, manually instrumented and disabled, and
manually instrumented and enabled) would have been informative.

The paper is primarily concerned with instrumenting JavaScript for the
purposes of performance profiling. It would have been interesting to
see how AjaxScope could be used to determine correctness of programs
outside of the silly infinite loop example. On a related note, the
paper does not describe policies in much detail, though that would
seem to be the heart of the paper. Without a good description of these
policies, it's difficult to evaluate AjaxScope's usefulness outside of
the examples the paper provides.

Marcelo Martins

Nov 19, 2009, 12:12:10 AM
to brown-cs...@googlegroups.com
Paper Title "AjaxScope: A Platform for Remotely Monitoring the
Client-Side Behavior of Web 2.0 Applications"

Author(s) Emre Kiciman and Benjamin Livshits

Date SOSP'07, October 2007

Novel Idea

AjaxScope is a platform for instrumenting and remotely monitoring the
client-side execution of web applications within users' browsers. It
introduces two concepts based on the idea of instant redeployability:
adaptive instrumentation and distributed tests. AjaxScope rewrites
JavaScript code on the fly and can serve different instrumented versions
of an application each time it is sent to a user's browser.

Main Result(s)

The micro-benchmarks show that the parsing latency grows linearly with
the size of the JavaScript program and that AjaxScope can handle thousands
of LOC without major overhead. Furthermore, the adaptive drill-down
profiling significantly reduces the number of instrumentation points
that have to be monitored. Finally, the memory leak checker was able to
find circular references in 12 JavaScript-heavy applications deployed on
the Web.
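The drill-down idea can be sketched as a worklist over the call graph: only top-level functions are timed at first, and a function's callees are instrumented only once that function proves slow. A small simulation (the call graph, timings, and threshold below are made up for illustration; in AjaxScope the timings come from real user reports):

```javascript
// Sketch of adaptive drill-down profiling: expand instrumentation into
// a function's callees only when its observed latency crosses a
// threshold, so fast subtrees are never instrumented at all.
const THRESHOLD_MS = 50; // hypothetical drill-down threshold

const callees = {
  main: ["render", "fetch"],
  render: ["layout", "paint"],
  fetch: ["net", "parse"],
  layout: [], paint: [], net: [], parse: [],
};
// Simulated per-function latencies as reported by instrumented clients.
const observedMs = { main: 120, render: 90, fetch: 10,
                     layout: 70, paint: 5, net: 8, parse: 2 };

function drillDown(root) {
  const active = new Set([root]); // currently instrumented points
  const worklist = [root];
  while (worklist.length > 0) {
    const fn = worklist.pop();
    if (observedMs[fn] >= THRESHOLD_MS) {
      // Slow: push instrumentation down into its callees.
      for (const c of callees[fn]) {
        if (!active.has(c)) { active.add(c); worklist.push(c); }
      }
    }
  }
  return active;
}

const instrumentedPoints = drillDown("main");
console.log([...instrumentedPoints].sort()); // fetch's subtree stays out
```

Here only 5 of the 7 functions ever get instrumented, because `fetch` is fast and its callees are never expanded — the reduction the evaluation measures.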

Impact

The combination of adaptive instrumentation and distributed tests allows
for spreading the test overhead across users and rapid comparative
evaluation of optimizations, bug fixes, and other code modifications.

Evidence

Applying AjaxScope to real web applications, the authors were able to
find flaws in services like iGoogle and Yahoo!. In addition, through the
instrumentation policies provided by AjaxScope, the authors found
optimization spots in MS Live Maps. More importantly, no modifications
to the user's browser were necessary, which shows that AjaxScope is
flexible enough to support current standards.

Prior Work

AjaxScope takes inspiration from monitoring tools used for analyzing
distributed systems, such as Magpie, and from the JavaScript rewriting of
BrowserShield and CoreScript.

Competitive work

As mentioned in Section 9, AjaxScope is the first to extend the
developer's visibility into web application behavior onto the end-user's
desktop.

Reproducibility

AjaxScope is available at Microsoft Research's website. The experiments
are detailed enough to be reproduced; therefore we can perform the same
evaluations as the authors did.

Questions

Is there a way to automate the semantics and function non-determinism
checking?

Criticism

Figure 18 shows the relative improvements when using potentially cacheable
functions in Live Maps. However, looking at the performance improvement
in absolute numbers, we see a very small gain (on the order of
milliseconds). The authors could have shown the frequency with which
such optimized functions are called and elaborated on how much
processing time can be saved by caching during an ordinary user session.

Ideas for future work

1.) Exploring the possibilities of using compression and better data
representation could reduce network latency and allow for large-scale
data modifications without large penalties.

Rodrigo Fonseca

Nov 19, 2009, 4:32:11 PM
to brown-cs...@googlegroups.com
---------- Forwarded message ----------
From: sunil mallya <mall...@gmail.com>
Date: Thu, Nov 19, 2009 at 4:12 PM
Subject: AjaxScope: Review
To: Rodrigo Fonseca <rodrigo...@gmail.com>


Paper Title
AjaxScope: a platform for remotely monitoring the client-side behavior
of web 2.0 applications
Author(s)
Emre Kiciman and Benjamin Livshits
Date
SOSP '07
Novel Idea
In some sense, providing a meta-programming environment that performs
on-the-fly parsing and instrumentation of JavaScript as it is sent to the
browser.
Main Result(s)
The main idea of this paper is to provide a platform for instrumenting
and monitoring the client-side execution of new-age web applications
within users' browsers. The authors propose enabling fine-grained
monitoring with the new capability of instant redeployability, which is
the ability to serve new and different versions of the code each time
the user runs a web application. This is done using two new techniques:
adaptive instrumentation, where instrumentation dynamically adapts over
time, analyzes problems in depth, and gathers only the data that is
needed; and distributed instrumentation and runtime analysis across
many users, i.e., splitting large monitoring policies into pieces such
that each user gets 1/N of the instrumentation code.
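The 1/N split above can be made deterministic by hashing a per-user identifier into a bucket and handing each bucket a disjoint slice of the instrumentation points. A sketch (the hash-mod scheme and all names here are my illustration, not necessarily what AjaxScope itself uses):

```javascript
// Sketch of distributed instrumentation: assign each user 1/N of the
// instrumentation points by hashing a session id into one of N buckets.
function hashString(s) {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return h;
}

function pointsForUser(sessionId, allPoints, n) {
  const bucket = hashString(sessionId) % n;
  // A user in bucket b receives every point whose index ≡ b (mod n),
  // so across buckets the points partition with no overlap.
  return allPoints.filter((_, i) => i % n === bucket);
}

const allPoints = ["f1", "f2", "f3", "f4", "f5", "f6"];
const mine = pointsForUser("session-abc", allPoints, 3);
console.log(mine); // this session carries a third of the points
```

Because the assignment is a function of the session id, the same user keeps getting the same slice, which keeps per-user overhead bounded while the proxy aggregates full coverage across the population.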
AjaxScope fits in between the web server and the users and doesn't
require any changes to the web application environment. It parses
the JavaScript code from the uninstrumented web application,
dynamically rewrites it according to a set of instrumentation policies,
and then sends it to users. The instrumented code generates log messages
recording its observations and periodically sends these aggregated log
messages back to the AjaxScope proxy.
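The client-side aggregation step can be sketched as a small batching logger: observations queue up and are flushed in groups rather than one network round trip each. The transport is stubbed out below (in a browser it would be a request back to the proxy; the batch size and names are illustrative assumptions):

```javascript
// Sketch of client-side log aggregation: queue observations, flush in
// batches. `send` stands in for the XMLHttpRequest back to the proxy.
const sentBatches = [];

function makeLogger(batchSize, send) {
  const queue = [];
  return {
    log(entry) {
      queue.push(entry);
      if (queue.length >= batchSize) this.flush();
    },
    flush() {
      // Drain the queue atomically so entries are sent exactly once.
      if (queue.length > 0) send(queue.splice(0, queue.length));
    },
  };
}

const logger = makeLogger(3, (batch) => sentBatches.push(batch));
for (let i = 0; i < 7; i++) logger.log({ point: "f", value: i });
logger.flush(); // flush the remainder, as a periodic timer would
console.log(sentBatches.map((b) => b.length)); // batches of 3, 3, 1
```

Seven observations leave the client in three messages instead of seven, which is the point of aggregating before reporting.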
Impact
This type of platform can provide in-depth information about
application behaviour at the clients.
It has laid the foundation for new-age Ajax application optimization
tools that minimize code downloads using interesting policies, again from
Microsoft!
http://msdn.microsoft.com/en-us/devlabs/ee423534.aspx
Evidence
The evaluation was done by implementing many instrumentation policies
on up to 90 web apps running JavaScript code. They focus on measuring how
much their new adaptive drill-down approach improves on the naïve
version; in fact, the results support their claim and show how it
reduces the number of instrumentation points that have to be monitored.
Prior Work
BrowserShield, CoreScript, borrowed ideas from Magpie, project5 and ParaDyn.
Ideas for further work
Since we are doing so much meta-programming here, maybe have a
meta-monitor to monitor AjaxScope itself!
Interesting links
( BAM, AjaxScope, and Doloto )   http://channel9.msdn.com/pdc2008/TL50/

Rodrigo Fonseca

Nov 19, 2009, 4:39:26 PM
to brown-cs...@googlegroups.com
---------- Forwarded message ----------
From: Andrew Ferguson <adfer...@gmail.com>
Date: Wed, Nov 18, 2009 at 10:48 PM
Subject: Re: [csci2950-u] 11/19 Reviews - Ajaxscope
To: brown-cs...@googlegroups.com


Paper Title
"AjaxScope: a platform for remotely monitoring the client-side
behavior of web 2.0 applications"

Authors
Emre Kiciman and Benjamin Livshits

Date
SOSP 2007

Novel Idea / Main Results
This paper presents AjaxScope, a system for dynamically rewriting
JavaScript code between the server and client in order to add
instrumentation according to developer-defined policies. AjaxScope
successfully spreads the overhead of profiling across many users and
integrates the results to present a coherent picture for the
developer, as if the application had been completely instrumented. The
system is capable of detecting a variety of common JavaScript errors
and optimization opportunities such as memory leaks and inefficient
string concatenation.
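The inefficient string concatenation mentioned above is the classic pattern of building a string with `+=` in a loop versus collecting parts and joining once. A small sketch of both forms (worth noting that modern engines largely optimize `+=` now; the gap was far larger in 2007-era browsers, which is what made it worth flagging):

```javascript
// The concatenation pattern an AjaxScope-style policy would flag,
// next to the array-join rewrite it would suggest.
function buildWithConcat(parts) {
  let out = "";
  for (const p of parts) out += p; // repeated-reallocation pattern
  return out;
}

function buildWithJoin(parts) {
  const buf = [];
  for (const p of parts) buf.push(p);
  return buf.join(""); // single final allocation
}

const parts = Array.from({ length: 1000 }, (_, i) => String(i));
const a = buildWithConcat(parts);
const b = buildWithJoin(parts);
console.log(a === b, a.length); // identical output either way
```

Both produce the same string, so the rewrite is purely a performance transformation — exactly the kind of behavior-preserving change a policy can A/B test across users.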

Impact
Unknown. I do know that A/B testing is prominent in other Web 2.0 companies.

Evidence
AjaxScope is implemented and has been applied to maps.live.com. The
paper details how it is used in practice, provides examples of the
instrumentation code it generates, and quantifies the performance
overhead for end users. The authors also compare the overhead of
AjaxScope's partial profiling with the overhead of fully profiling
each application.

Prior Work
The authors note that the ParaDyn project is the "closest in spirit,"
although it is for parallel computing applications. The BrowserShield
and CoreScript projects also rewrite JavaScript on the fly, although
not for debugging purposes.

Competitive Work
Unknown.

Reproducibility
This project is well-detailed, so we could probably reproduce it with
significant effort. It does seem to be available for MS Visual Studio
under the name "Ajax View".

Question
Are there other packages like AjaxScope out there, such as an open
source version?

Criticism
My chief criticism is that the authors do not detail how they account
for differences in client CPU speed when distributing the timing
tests. If I instrument a function on a Pentium II, it will look much
slower than the same function on a modern machine. Can JavaScript
report the CPU speed? Or do they try to run the instrumented code for
related functions on the same client, but distributed over time?
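One possible answer to the CPU-speed question (my own sketch, not anything the paper describes): run a fixed calibration workload on each client and report timings as multiples of that baseline, so a slow machine's numbers are normalized before cross-user aggregation.

```javascript
// Hypothetical calibration step: time a fixed busy loop on this client,
// then express measured latencies relative to it (unitless scores).
function calibrationBaseline() {
  const start = Date.now();
  let x = 0;
  for (let i = 0; i < 5e6; i++) x += i % 7; // fixed, deterministic work
  const elapsed = Date.now() - start;
  return { elapsedMs: Math.max(elapsed, 1), sink: x }; // avoid div-by-zero
}

function normalized(rawMs, baselineMs) {
  return rawMs / baselineMs; // "how many baselines did this take?"
}

const base = calibrationBaseline();
const score = normalized(120, base.elapsedMs);
console.log(base.elapsedMs, score.toFixed(2));
```

A Pentium II and a modern machine would report very different raw milliseconds but comparable normalized scores, at the cost of a one-time calibration burst per client.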

Ideas for further work
If I had the code to this system, I would consider adding better
statistical tools to the analysis package. For example, we could
introduce tests which look for statistically significant separations
of the change/no-change distributions.
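One such test can be sketched with a Welch's t-statistic over the two timing samples, flagging a separation when |t| exceeds a rough critical value (the cutoff and sample data below are illustrative, not from the paper):

```javascript
// Sketch of a change/no-change significance check: Welch's t-statistic
// over two samples of latencies, with a crude fixed cutoff in place of
// a proper degrees-of-freedom lookup.
function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }
function variance(xs) {
  const m = mean(xs); // sample variance (n - 1 denominator)
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
}

function welchT(xs, ys) {
  const se = Math.sqrt(variance(xs) / xs.length + variance(ys) / ys.length);
  return (mean(xs) - mean(ys)) / se;
}

// Latencies (ms) observed without and with a candidate optimization.
const noChange = [102, 98, 105, 99, 101, 103, 97, 100];
const withChange = [88, 91, 86, 90, 89, 92, 87, 85];
const t = welchT(noChange, withChange);
const significant = Math.abs(t) > 2.0; // rough cutoff for a demo
console.log(t.toFixed(2), significant);
```

With a clear separation like this the statistic is large and the change would be accepted; with overlapping distributions the same check would tell the developer the A/B difference is noise.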

Rodrigo Fonseca

Nov 19, 2009, 7:50:46 PM
to brown-cs...@googlegroups.com
---------- Forwarded message ----------
From: Steven Gomez <stev...@gmail.com>
Date: Thu, Nov 19, 2009 at 7:24 PM
Subject: AjaxScope review
To: Rodrigo Fonseca <rfon...@cs.brown.edu>


Author: Emre Kiciman and Benjamin Livshits
Paper Title: "AjaxScope: A Platform for Remotely Monitoring the
Client-Side Behavior of Web 2.0 Applications"
Date: In SOSP '07

This paper presents AjaxScope as a tool for monitoring web application
behavior across users, relying on JavaScript rewriting and instant
redeployment to dynamically instrument code.  The authors show that
instrumentation policies can be applied for monitoring values, and for
debugging errors like infinite loops and memory leaks.  There are a
handful of good ideas in this paper, such as distributed testing (to
monitor the application in a distributed way, so no host gets
overloaded), adaptive instrumentation, and memory monitoring.

AjaxScope has the potential to impact software deployment, and (as
the authors note in their concluding remarks) may only "scratch the
surface" of what kinds of policies could be implemented and
applied to improve application performance.  The organization of
rewriting points and instructions as 'policy' abstractions seems to have
a lot of power, so long as it doesn't introduce too much overhead.

Related work includes ParaDyn, which uses adaptive instrumentation in
parallel programs.  Rewriting is used in CoreScript and BrowserShield
for security purposes.  The authors suggest that their work is a
merger of these principles (monitoring and rewriting for performance),
and try to underscore their contributions for JavaScript because its
dynamic nature (and the ways it is used in Web programs) make
instrumentation difficult but especially important.

Most of the evaluation in this paper focuses on how specific policies
have improved performance for certain instances of Web content.  In
some ways, this is the best they can do, but it is also dubious
because the authors are choosing examples that demonstrate their
points.  That said, most of the examples were clear, and the results
of the experiments are well communicated.  Figures 8, 9, and 10 don't
include any scale on the X-axis (I know from the written explanation,
but still annoying not to include this).  Any time a graph shows
exponential growth (e.g. figure 9), I *definitely* want to know
something about scale and when those numbers take off!

Reproducing these results would be a challenge.  The authors are vague
about how they manually select websites to test, and include magic
numbers in the experiment design (e.g. 8 runs to log overhead, to
"account for performance variations" ... How does one trying to
reproduce this know what kind of convergence to look for?).

One question/enhancement idea I have is whether you could use policies
to enforce SLA guarantees: policies could respond to monitored
performance, and rewrite code in sections that are profiled badly.
Or, if the SLA guarantees some maximum overhead for
debugging/monitoring robustness, could we use a policy that decides
how to apply other policies to the code to meet this overhead
requirement?

Rodrigo Fonseca

Nov 19, 2009, 10:28:49 PM
to brown-cs...@googlegroups.com
---------- Forwarded message ----------
From: Kevin Tierney <herr...@gmail.com>
Date: Thu, Nov 19, 2009 at 10:00 AM
Subject: Re: [csci2950-u] 11/19 Reviews - Ajaxscope
To: brown-cs...@googlegroups.com


Title: AjaxScope: A Platform for Remotely Monitoring the Client-Side
Behavior of Web 2.0 Applications
Author(s): Kiciman and Livshits
Date: SOSP 2007

Novel Idea
AjaxScope presents a method of profiling client-side JavaScript in the
browser along with a method for identifying potential coding mistakes
(infinite loops, "memory leaks"). To preserve the performance of the
service, AjaxScope instruments code according to a set of Adaptation
Nodes (which are "policy nodes" that instrument portions of JS code).
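The policy-node idea can be sketched as a pipeline of source-to-source passes applied by the proxy before serving the page. The passes below are crude string-level stand-ins (real policy nodes operate on a parsed AST, and all names here are illustrative); the first one shows the infinite-loop watchdog flavor by injecting an iteration counter into `while` loops:

```javascript
// Sketch of a policy pipeline: each policy rewrites the source, and the
// proxy applies them in order. String-level passes for illustration only.
function loopCounterPolicy(src) {
  // Bump a counter at the top of each while-loop body and bail out if
  // it explodes — the infinite-loop detection idea, crudely.
  return src.replace(/while \((.*?)\) \{/g,
    "while ($1) { __iters++; if (__iters > 1e6) throw new Error('loop');");
}

function counterDeclPolicy(src) {
  return "var __iters = 0;\n" + src; // inject the shared counter
}

function applyPolicies(src, policies) {
  return policies.reduce((code, policy) => policy(code), src);
}

const original =
  "function spin(n) { var i = 0; while (i < n) { i++; } return i; }";
const rewritten = applyPolicies(original, [loopCounterPolicy, counterDeclPolicy]);

// The rewritten code still computes the same result for finite loops.
const spin = new Function(rewritten + "; return spin;")();
console.log(spin(10));
```

Well-behaved code runs unchanged through the instrumented version, while a runaway loop would trip the injected guard instead of hanging the browser.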

Main Result(s)
The authors find that the overhead incurred by their system is
tolerable in a production environment, and that their system's
deployment is rather easy.

Impact
While this is an interesting paper, the idea of instrumenting code for
performance profiling and checking for memory leaks/infinite loops
isn't a new idea.

Evidence
The authors test their system across 90 different websites and look at
the overhead incurred by their system.

Prior Work
This builds on pretty much every profiler and code checker that has existed.

Reproducibility
Yes, assuming a prototype is available; otherwise no, since the
adaptation-node details seem to be omitted.

Question and Criticism

I think one important question here is whether or not users should be
subjected to profiling/remote monitoring of javascript applications
they are using without their consent. On the one hand, one might say
that they consent to it simply by using the service, but on the other
hand this system is sending profiling information and slowing down
script execution, two things that a user may not agree to. For
example, most programs wishing to send a crash report back to a
company ask for permission, and perhaps AjaxScope should be an opt-in
type of feature.

As I mention in my impact description, this paper provides what seems
to be a very useful system, but most of their novelty seems to be from
applying well-known principles to AJAX. Although the authors claim to
be introducing "two new instrumentation techniques, adaptive
instrumentation and distributed tests," without a more detailed
description of how they actually work I'm inclined to say that these
are only minor contributions.

Ideas for further work
No

小柯

Nov 18, 2009, 11:54:10 PM
to brown-cs...@googlegroups.com
Paper Title:    AjaxScope: A Platform for Remotely Monitoring the Client-Side Behavior of Web 2.0 Applications

Authors:        Emre Kiciman
                    Benjamin Livshits

Date:            2007

Novel Idea:
    With the growth of rich-client applications, monitoring and tracing client-side behavior, errors, and performance became important. Enter AjaxScope, which works as a proxy on the server side, injecting tracing code into pages as they are sent to the client. This code observes and logs information about client-side behavior and later sends it back to the server. To mitigate the performance overhead this could induce, AjaxScope uses distributed and adaptive instrumentation.

Main Result:
    AjaxScope was created to do monitoring, error reporting, and performance profiling on the client side. Many useful mechanisms are implemented to detect various errors and problems.

Impact:
    Client-side error-detecting tools have become a new field of research. In the trend of Web 2.0, this is extremely useful for debugging and performance tuning.

Evidence:
    The authors provide several examples to explain what tasks could be done in AjaxScope and how. Later, they propose methods to reduce the overhead and evaluate both the effectiveness and efficiency of AjaxScope.

Prior Work:


Competitive work:


Reproducibility:
    Yes.

Question:
    There seem to be some errors that could be found at an earlier stage, not only at run time.
    Using an appropriate IDE, it might warn the programmer about inefficient string concatenation. Is this a problem that can only be detected in a run-time environment?
    I don't know if it's possible to find memory leaks via earlier detection of circular references.

Criticism:


Ideas for further work:

