Reviews: AjaxScope


Rodrigo Fonseca

Nov 17, 2010, 6:01:33 PM
to CSCI2950-u Fall 10 - Brown
Hey,

Please post the reviews to Ajaxscope here.

Thanks!
Rodrigo

Dimitar

Nov 17, 2010, 11:03:47 PM
to CSCI2950-u Fall 10 - Brown
AjaxScope: A Platform for Remotely Monitoring the Client-Side Behavior
of Web 2.0 Applications

Authors: Emre Kiciman and Benjamin Livshits

Date: October 14-17, 2007

Novel Idea: This paper presents AjaxScope, a dynamic instrumentation
platform that enables cross-user monitoring and just-in-time control
of web application behavior on end-user desktops. The authors'
goals are to allow developers to monitor program behavior at the
source code level and to improve visibility for measuring performance
problems. The basic idea is the following: web applications serve
JavaScript code, which AjaxScope intercepts. The original code is
modified by inserting instrumentation code according to policies and
is then served to the end users.
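To make the interception concrete, here is a minimal sketch (in Python, with an invented client-side helper `ajaxScopeLog`) of the kind of rewriting such a proxy could perform: rename the original function and emit a timing wrapper under its name. This is only an illustration under assumed names; the actual system parses the JavaScript and applies policy-driven rewriting rules rather than string substitution.

```python
def instrument(js_source, fn_name):
    """Naively wrap a named JavaScript function with timing code, the
    way an AjaxScope-style proxy might rewrite code in flight.
    `ajaxScopeLog` is a hypothetical client-side logging helper."""
    # Rename the original definition out of the way...
    renamed = js_source.replace(f"function {fn_name}(",
                                f"function {fn_name}__orig(")
    # ...and add a wrapper under the original name that logs elapsed time.
    wrapper = (
        f"function {fn_name}() {{\n"
        f"  var t0 = Date.now();\n"
        f"  var r = {fn_name}__orig.apply(this, arguments);\n"
        f"  ajaxScopeLog('{fn_name}', Date.now() - t0);\n"
        f"  return r;\n"
        f"}}"
    )
    return renamed + "\n" + wrapper

rewritten = instrument("function foo(x) { return x + 1; }", "foo")
```

Because the wrapper forwards `this` and `arguments`, callers of `foo` are unaffected except for the added logging.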

Main Result: The authors have implemented an AjaxScope proxy
prototype. To achieve the goals described above, the authors rely on
instant redeployability, which lets them use adaptation nodes that
allow policies to have different effects over time. Adaptation nodes
are used to reduce CPU and network overhead. AjaxScope also provides
distributed tests, which check for the existence or nonexistence of a
specific condition while spreading the workload across many end users.

Impact: AjaxScope could be used to analyze the performance of web
applications. It is especially useful for web sites that use large
amounts of JavaScript.

Evidence: To evaluate the flexibility and efficacy of AjaxScope, the
authors implement a variety of monitoring policies, such as error
checking, performance profiling, and distributed memory leak
detection. They tested these policies on 90 web sites.

Competitive Work: A similar work to AjaxScope is ParaDyn, which uses
dynamic, adaptive instrumentation to find performance bottlenecks in
parallel computing applications.

Reproducibility: I don't think their work is reproducible because the
paper lacks implementation details.

Question: For their experiments, the authors drill down into any
functions believed to be slower than 5 ms. What are their reasons for
selecting this number?

Criticism: The paper lacks any implementation details and their
experimental setup was limited.

Shah

Nov 17, 2010, 6:26:12 PM
to CSCI2950-u Fall 10 - Brown
Title:

AjaxScope: A Platform for Remotely Monitoring the Client-Side Behavior
of Web 2.0 Applications

Authors:

[1] Emre Kiciman
[2] Benjamin Livshits

Source and Date:

Proceedings of Twenty-First ACM SIGOPS Symposium on Operating Systems
Principles, Stevenson, WA. October 14-17, 2007.

Novel Idea:

The scientists present AjaxScope: ‘a dynamic instrumentation platform
that enables cross-user monitoring and just-in-time control of web
application behavior on end-user desktops’. It’s also a proxy that
performs on-the-fly parsing.

Main Result:

The scientists discuss a wide array of policies that display various
facets of AjaxScope. These include simple error checking, performance
profiling, detection of memory leaks and optimization analysis. Their
aim with AjaxScope is to be able to monitor the large number of users
across web applications. Specifically, the paper presents the
following:

[1] The instant redeployability of applications

[2] A Web 2.0 monitoring platform

[3] Two new instrumentation techniques

[4] An evaluation of AjaxScope, applying these techniques to 90 web
applications

Impact:

This paper has been cited some 17-odd times. It doesn't appear that
the paper is very popular.

Evidence:

The authors present two sections that evaluate the performance of
AjaxScope. The first deals with measuring the performance of
adaptation nodes. The second measures two instrumentation policies:
the first searches for caching opportunities, while the second
measures performance using automatic testing.

Prior Work:

In Section 9, the authors suggest that AjaxScope is novel and is the
first to extend the developer’s visibility into web application
behavior. They don’t mention much prior work but they mention
competitive work as is detailed below.

Competitive Work:

The authors state that ParaDyn is closest in spirit to AjaxScope.
Indeed they also mention two other products: BrowserShield and
CoreScript and mention that they too employ JavaScript to enforce more
stringent security standards.

Reproducibility

The authors go to great pains to conduct rather detailed experiments.
Although they do provide snippets of code, they fail to give the
transparency that's associated with non-corporate-funded research.

Questions:

What might be some reasons that AjaxScope has not caught on in
popularity?

Criticism:

As with other papers funded by companies, not enough transparency is
provided to make the results reproducible.

Ideas for Further Work:

Though the authors don't address it in this paper, perhaps, as they
mention in Section 8.1, they can delve into large-scale data
processing later on.

Matt Mallozzi

Nov 18, 2010, 12:08:00 AM
to brown-csci...@googlegroups.com
Matt Mallozzi
11/18/10

Title:
AjaxScope: A Platform for Remotely Monitoring the Client-Side Behavior of
Web 2.0 Applications
Authors:
Kiciman, Livshits
Date:
2007
Novel Idea:
Sticking a proxy in between browsers and a web server to dynamically
insert extra code into select JavaScript code, which allows monitoring,
debugging, and profiling user-side JavaScript without changing server-side
architecture or requiring plugins in the client browser.
Main Results:
A working prototype of a system that does exactly this: rewrites JavaScript
code as it passes between the server and browser to allow for monitoring,
profiling, and debugging JavaScript code which previously was nearly a black
box.
Impact:
This could have a huge impact on the writing of web applications, as
real-time monitoring could alert developers to bugs before user outcry,
real-situation profiling could help developers tune their programs to real
networks and real interactions, and more descriptive debugging can save
developers from tedious attempts to reproduce problems. Also, with a deep
look at the UI interactions of a large sample, interface usability can be
evaluated and improved much more easily.
Evidence:
Various benchmarks to determine monitoring overhead, as well as usage tests
to help estimate how useful the gathered information is. This was done on
many popular websites instead of just contrived examples.
Prior Work:
Builds in spirit on previous distributed debugging systems. Shares concepts
with other systems that perform JavaScript rewriting, but these systems do
not do so for debugging purposes.
Competitive Work:
Very different from previous distributed debugging tools in that this is the
first system to see clearly into the client side execution. This allows data
to be gathered at the time of failure, which is obviously much more valuable
in a debugging scenario than more coarse-grained updates.
Reproducibility:
The prototype is available in binary (not source) format, so the results
should be reproducible, although reproducing the system itself is a
different story. From their descriptions, I think I would have a difficult
time trying to code this, although the concept is simple enough that it may
be not too difficult to figure out independently.
Question:
Most major websites now distribute their JavaScript files through CDNs...
does this break AjaxScope, or would there be an easy way to make the HTML
content sent to "chosen" browsers include scripts from the web server rather
than from a CDN?
Criticism:
The descriptions of how this is implemented could be significantly clearer.
Ideas For Further Work:
Make this work with (or around) content distribution networks.

Duy Nguyen

Nov 17, 2010, 10:37:23 PM
to brown-csci...@googlegroups.com
Paper Title 
AjaxScope: a platform for remotely monitoring the client-side behavior
of web 2.0 applications 

Authors 
Emre Kiciman and Benjamin Livshits 

Date 
SOSP 2007 

Novel Idea / Main Results 
This paper presents AjaxScope, a system for monitoring web application
behavior across users. It is deployed as a server-side proxy and does not
require any modifications to the web application. It intercepts the web
app's JavaScript and dynamically rewrites it based on a well-defined set of
policies. The rewritten code can generate log messages used for monitoring
client workload as well as for detecting common problems: infinite loops,
memory leaks, etc.
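As a rough illustration of the infinite-loop detection mentioned above, the injected check might behave like this Python sketch (names and threshold are invented; the real system injects equivalent JavaScript into rewritten loops):

```python
class LoopWatchdog:
    """Sketch of an injected loop guard: count iterations of a
    rewritten loop and report a potential infinite loop once a
    threshold is crossed. Purely illustrative."""
    def __init__(self, limit=100_000, report=print):
        self.limit = limit
        self.report = report        # ships a log message to the server
        self.count = 0
        self.fired = False

    def tick(self, loop_id):
        """Called once per iteration by the instrumented loop body."""
        self.count += 1
        if self.count > self.limit and not self.fired:
            self.fired = True       # report at most once per loop
            self.report(f"possible infinite loop at {loop_id}")

reports = []
guard = LoopWatchdog(limit=5, report=reports.append)
for _ in range(10):
    guard.tick("while@checkout.js:42")
```

The guard fires once rather than on every excess iteration, so a runaway loop produces a single log message instead of flooding the server.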

Impact 
The idea of code profiling is not new; the authors just apply it to a
new environment.

Evidence 
90 web apps have been evaluated. The authors mainly showed how the
defined policies can improve web apps' performance.

Prior Work 
The ParaDyn project is the "closest in spirit"; BrowserShield and
CoreScript are those which also rewrite JavaScript on the fly.

Competitive Work 
Unknown. 

Reproducibility 
No

Question/Criticism
N/A


Visawee

Nov 17, 2010, 9:18:22 PM
to CSCI2950-u Fall 10 - Brown
Paper Title :
AjaxScope: A Platform for Remotely Monitoring the Client-Side Behavior
of Web 2.0 Applications


Author(s) :
Emre Kıcıman and Benjamin Livshits


Date :
SOSP’07, October 14–17, 2007, Stevenson, Washington, USA


Novel Idea :
A flexible, policy-based platform for injecting arbitrary
instrumentation code to monitor and report on the dynamic runtime
behavior of web applications, including their runtime errors,
performance, function call graphs, application state, and other
information accessible from within a web browser’s JavaScript sandbox.


Evidence/Main Result(s) :
The authors set up several experiments using various policies to
evaluate AjaxScope. The results show that AjaxScope is able to inject
instrumentation code for analyzing and debugging web applications
based on policy used. The example usages of AjaxScope given in the
paper are
(1) use AjaxScope for reporting client-side errors, detecting
potential infinite loops, and detecting inefficient string
concatenation;
(2) use AjaxScope to profile JavaScript functions; the drill-down
approach also keeps the platform from placing extra overhead on
already-fast functions;
(3) use AjaxScope to find memory leaks in AJAX applications; AjaxScope
can do this in a distributed way, which helps spread out the overhead
of instrumentation code across many users' executions of a web
application;
(4) use AjaxScope to do A/B testing; the A/B test identified 2 caching
opportunities that were both semantically deterministic and improved
each function's performance by 20%-100%.
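The drill-down idea in (2) can be sketched as follows, using hypothetical data structures rather than the paper's actual code (the 5 ms threshold is the one the paper's experiments use; everything else here is an assumption):

```python
def drill_down(call_tree, timings, threshold_ms=5):
    """Given observed per-function timings, pick the next functions to
    instrument: the callees of anything slower than the threshold.
    Illustrative sketch of a drill-down profiling policy."""
    next_round = set()
    for fn, elapsed in timings.items():
        if elapsed > threshold_ms:
            # Slow function: descend into its callees next round.
            next_round.update(call_tree.get(fn, []))
    return next_round

call_tree = {"render": ["layout", "paint"], "layout": ["measure"]}
# Only "render" was slow, so only its callees get instrumented next.
selected = drill_down(call_tree, {"render": 12, "idle": 1})
```

Iterating this loop of instrument, observe, and refine is what lets the platform converge on hot spots without ever instrumenting already-fast functions.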


Impact :
Developers gain more end-to-end visibility into web application
behavior, which helps in debugging and improving the application.


Prior Work :
There are several projects that worked on improving monitoring
techniques for web services and other distributed systems. However,
AjaxScope is the first to extend the developer’s visibility into web
application behavior onto the end-user’s desktop.


Reproducibility :
The results are reproducible if given AjaxScope’s code together with
the policies used in the paper.


Question/Criticism :
- How does one write filters and rewriting rules? (The authors should
give some examples.)
- When AjaxScope is set up on the server side, it might become a
bottleneck in heavy-load applications.


Basil Crow

Nov 17, 2010, 11:50:42 PM
to brown-csci...@googlegroups.com
Title: AjaxScope: A Platform for Remotely Monitoring the Client-Side Behavior of Web 2.0 Applications

Authors: Emre Kıcıman and Benjamin Livshits

Date: SOSP 2007

Novel idea: Since changes can be made instantaneously to Web 2.0 applications, we can perform on-the-fly, per-user JavaScript rewriting. We can also take advantage of large userbases in order to distribute heavyweight instrumentation and crowdsource testing.

Main results: The authors built a prototype proxy that instruments web applications using on-the-fly JavaScript rewriting and a set of testing policies, some of which employ distributed instrumentation to gain particularly detailed insights into application performance.

Impact: The browser terrain varies greatly, and it is almost impossible to do thorough testing of a sophisticated web application before deployment. Therefore, gathering statistics from actual users may be one of the only ways to isolate and fix certain bugs.

Evidence: The authors instrument several applications with AjaxScope. They employ an adaptive policy to drill down into slow calls, a distributed policy in order to detect memory leaks (a costly operation), and an optimization policy in order to determine the impact of a potential application change. They find that while full-performance profiling instruments a median of 89 points per application, their drill-down profiler only instruments a median of 3 points. They were able to find several memory leaks using their distributed test, which may have been impossible to diagnose otherwise. Finally, their A/B test allows them to identify functions with a high probability of benefiting from cache optimization.

Prior/competitive work: In comparison to what I am familiar with, AjaxScope is unique.

Reproducibility: Apparently as of April 2009 one can download AjaxView [1], which is an implementation of the ideas of this paper. Consider my curiosity piqued.

Praise: AjaxScope seems to have a low barrier to entry, since it does not require extensive modification of existing applications.

[1] http://code.msdn.microsoft.com/AjaxView

Siddhartha Jain

Nov 17, 2010, 10:28:42 PM
to brown-csci...@googlegroups.com
Title: Ajaxscope

Novel Idea:
The idea is to distribute instrumentation in JavaScript code to detect performance errors
across multiple users, such that no single user is burdened with the load of executing the
instrumentation. In addition, instrumentation is dynamically added based on the behavior
of the application across different users.

Main Results:
The policies to add instrumentation are described, including a drill-down policy
that adds instrumentation to slower functions as opposed to faster ones.

Evidence:
Results comparing drill-down instrumentation with a naive policy show its effectiveness
in terms of performance overhead. Additionally, examples are given where AjaxScope
was able to successfully identify slow functions.

Impact:
Potential for a lot of impact, especially as web applications become more and more sophisticated.

Prior Work:
Prior work includes monitoring systems for web services and distributed systems.
ParaDyn uses adaptive instrumentation to find bottlenecks in parallel computing
applications. However, AjaxScope is novel in that it extends instrumentation to
find performance bottlenecks in web apps on users' desktops.

Reproducibility:
The AjaxScope prototype is available publicly and is extensible through plugins.



Jake Eakle

Nov 18, 2010, 12:30:08 AM
to brown-csci...@googlegroups.com
Paper Title

AjaxScope: A Platform for Remotely Monitoring 

the Client-Side Behavior of Web 2.0 Applications 

Author(s)

Emre Kıcıman and Benjamin Livshits 

Date 2007
Novel Idea Broaden the possibilities for testing and data mining of AJAX webapp use by introducing dynamic and distributed instrumentation for JavaScript.
Main Result(s) They describe AjaxScope, a platform for deploying a range of such instrumentations. They divide these into categories, of which they study two in particular: performance monitoring, and runtime analysis and debugging.

It functions by sitting in between the webapp servers and the client's browser, passing requests through unmodified and potentially rewriting responses in accordance with the current instrumentation policy. From this vantage point, it can instrument likely sources of error and introduce logging triggered by client errors. It can also react differently to the same input over time, allowing large applications to be thoroughly instrumented without causing noticeable slowdowns to any individual user. Since any successful web app comes with a free, now-highly-observable population of users, such instrumentation can simply be divided among them.
Impact A good number of the papers that cite this one seem to be specifically interested in using the techniques it presents to track down security vulnerabilities or enforce security protocols. The rest seem more in line with this paper's direct goals, benchmarking and tracing app usage. 
Evidence To test adaptive profiling, they compare a naive full instrumentation and an adaptive 'drill down' instrumentation (which automatically zeroes in on the bottlenecks in code by making successive refinements to the instrumentation served to each user) on a number of different applications. They found modest execution time improvements, and slightly better logging overhead improvements. However, neither effect was particularly pronounced.

To test distributed profiling, they randomly applied a profiling mechanism to parts of the DOM for various users, aggregating the results to obtain a complete picture. Success here was more clear-cut, with equally good profiling results obtained with sometimes up to a second and a half less latency per user than with full instrumentation.
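The random assignment described above can be sketched like this (hash-based bucketing is an assumed mechanism, not necessarily the paper's; the point is that each user carries only a small, deterministic slice, yet the aggregate covers everything):

```python
import hashlib

def assigned_points(user_id, point_ids, fraction=0.1):
    """Deterministically give each user roughly `fraction` of the
    instrumentation points; aggregating results over many users
    covers them all. Illustrative sketch of distributed profiling."""
    buckets = max(1, round(1 / fraction))
    def bucket(point):
        digest = hashlib.sha1(f"{user_id}:{point}".encode()).hexdigest()
        return int(digest, 16) % buckets
    # A point is instrumented for this user only if it hashes to bucket 0.
    return [p for p in point_ids if bucket(p) == 0]

points = [f"fn-{i}" for i in range(100)]
covered = set()
for user in range(300):
    covered.update(assigned_points(user, points))
```

Hashing `user:point` rather than sampling randomly keeps each user's slice stable across page loads, so per-user overhead stays bounded and results are repeatable.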
Prior Work They claim to be the first group to really look at instrumenting client-side code.
Reproducibility They don't really describe their code very much.
Question Why doesn't adaptive instrumentation provide more of a speed-up? The logging gains look kind of impressive, but they aren't really the main issue - companies tend to care a lot more about how long their site takes to load than wasting a few megabytes of logfile. It seems like such a clever idea, and as they illustrate with distributed instrumentation, the number of instrumented code points really does make a difference - so why doesn't it here?
Criticism The ideas presented in this paper seem very strong. From the references out there, it seems that they have gained traction, though I'm a little surprised there haven't been more (Google shows 43, though maybe that is plenty for only having been out 3 years?). The only weak point seems to be how little they talk about their actual implementation; however, this probably doesn't matter a lot, as the main purpose/impact of the paper is to get the ideas out there, not market a particular instantiation of them.
Ideas for further work There's lots more to do -- this paper only lays the groundwork. Starting to build applications that leverage the ideas it presents seems to be the next step. I suppose there is also room for some more rigorous theoretical work into when/which of their ideas are really worth it.




--
A warb degombs the brangy. Your gitch zanks and leils the warb.

Sandy Ryza

Nov 18, 2010, 1:58:25 AM
to CSCI2950-u Fall 10 - Brown
Title:
AjaxScope: A Platform for Remotely Monitoring the Client-Side Behavior
of Web 2.0 Applications

Authors:
Emre Kiciman and Benjamin Livshits

Date:
SOSP '07

Novel Idea:
The authors present AjaxScope, a platform for debugging JavaScript
applications that monitors their execution on the client side. It
relies on a proxy through which JavaScript code is sent that
dynamically rewrites code according to policies set by the
programmer. It provides a logging mechanism which queues log messages
on the client side and asynchronously sends them back to the server
for analysis.
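The queuing behavior described above might look something like this sketch (the batch size and all names are invented for illustration; the real system queues messages in the browser and ships them back asynchronously):

```python
class LogQueue:
    """Sketch of client-side log batching: instrumentation messages
    queue locally and are sent to the server in batches, rather than
    one network round trip per message. Purely illustrative."""
    def __init__(self, send, batch_size=3):
        self.send = send            # callable that ships a batch upstream
        self.batch_size = batch_size
        self.pending = []

    def log(self, message):
        self.pending.append(message)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        """Ship whatever is queued (e.g., also on idle or page unload)."""
        if self.pending:
            self.send(list(self.pending))
            self.pending.clear()

sent = []
queue = LogQueue(sent.append, batch_size=3)
for i in range(7):
    queue.log(f"event-{i}")
queue.flush()                       # final partial batch
```

Batching like this is what keeps the logging overhead on the client acceptable even when many instrumentation points are active.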

Main Result(s):
Their platform worked successfully in supporting a diverse set of
monitoring policies and helping to find bugs and performance
bottlenecks with these policies. As evidenced by their micro
benchmarks, parsing and logging incur not entirely negligible, but
acceptable, overhead. An issue that they stumbled upon was that their
approach of dynamically rewriting JavaScript does not work well with
client-side caching of JavaScript.

Evidence:
The authors evaluated AjaxScope qualitatively by implementing
monitoring policies including runtime error reporting, drill-down
performance profiling (a method of finding performance hot spots that
doesn't require checking every function), a memory leak checker, and others.
They carried out thorough tests on 12 web applications and benchmarked
78 other ones running a single proxy and client. They also performed
a series of micro benchmarks to quantify the overhead of logging and
parsing.

Impact:
None that I am aware of.

Prior Work:
The authors mention Paradyn, a system which similarly uses dynamic
instrumentation. Paradyn only looks for performance bottlenecks; it
does not perform general purpose logging/debugging. BrowserShield and
CoreScript similarly rewrite JavaScript, but focus on enforcing
security and safety.

Reproducibility:
None of the code is available and enough implementation detail is left
out that reproducing the system would require a fair amount of
creative work.

Criticism:
The paper devotes a lot of space to profiling performance, but little
to discovering correctness bugs. They could have provided more
examples of the latter, or, if their system was ill-suited to
discovering them, at least explained why.

Question:
Are there particular types of web applications that AjaxScope would be
particularly ill suited to debugging?



James Chin

Nov 17, 2010, 11:58:56 PM
to CSCI2950-u Fall 10 - Brown
Paper Title: “AjaxScope: A Platform for Remotely Monitoring the Client-
Side Behavior of Web 2.0 Applications”

Authors(s): Emre Kiciman and Benjamin Livshits

Date: 2007 (SOSP ‘07)

Novel Idea: This paper presents AjaxScope, a dynamic instrumentation
platform that enables cross-user monitoring and just-in-time control
of web application behavior on end-user desktops. AjaxScope is a proxy
that performs on-the-fly parsing and instrumentation of JavaScript
code as it is sent to users’ browsers. AjaxScope provides facilities
for distributed and adaptive instrumentation in order to reduce the
client-side overhead, while giving fine-grained visibility into the
code-level behavior of web applications.

Main Result(s): The authors demonstrated the effectiveness of
AjaxScope by implementing a variety of practical instrumentation
policies for debugging and monitoring web applications, including
performance profiling, memory leak detection, and cache placement for
expensive, deterministic function calls. They also applied these
policies to a suite of 90 widely-used and diverse web
applications to show that 1) adaptive instrumentation can reduce both
the CPU overhead and network bandwidth, sometimes by as much as 30%
and 99%, respectively; and 2) distributed tests allow fine-grained
control over the execution and network overhead of otherwise
prohibitively expensive runtime analyses.

Impact: As web applications grow larger and more complex, their
dependability is challenged by many of the same issues that plague any
large, cross-platform distributed system that crosses administrative
boundaries. One of these issues is a lack of end-to-end visibility
into the remote execution of the client-side code. Without visibility
into client-side behavior, developers have to resort to explicit user
feedback and attempts to reproduce user problems. Through AjaxScope,
the authors seek to enable practical, flexible, fine-grained
monitoring of web application behavior across the many users of
today’s large web applications.

Evidence: The authors evaluated the AjaxScope platform by implementing
a wide variety of instrumentation policies and applying them to 90 web
applications and sites containing JavaScript code. Their experiments
qualitatively demonstrated the flexibility and expressiveness of their
platform and quantitatively evaluate the overhead of instrumentation
and its reduction through distribution and adaptation.

Prior Work: Several previous projects have worked on improved
monitoring techniques for web services and other distributed systems.
However, to the authors’ knowledge, AjaxScope is the first to extend
the developer’s visibility into web application behavior onto the end-
user’s desktop.

Competitive Work: Perhaps the closest in spirit to AjaxScope is
ParaDyn, which uses dynamic, adaptive instrumentation to find
performance bottlenecks in parallel computing applications. The
BrowserShield and CoreScript projects are related to how AjaxScope
works as well.

Reproducibility: The findings appear to be reproducible if one follows
the testing procedures outlined in this paper and has access to the
code for AjaxScope.

Question: Is AjaxScope being used in the industry right now?

Ideas for further work: In the future, as the software-as-a-service
paradigm, centralized software management tools and the property of
instant redeployability become more wide-spread, AjaxScope’s
monitoring techniques have the potential to be applicable to a broader
domain of software. Moreover, the implications of instant
redeployability go far beyond simple execution monitoring, to include
distributed user-driven testing, distributed debugging, and
potentially adaptive recovery techniques, so that errors in one user’s
execution can be immediately applied to help mitigate potential issues
affecting other users.


On Nov 17, 6:01 pm, Rodrigo Fonseca <rodrigo.fons...@gmail.com> wrote:

Abhiram Natarajan

Nov 17, 2010, 6:03:49 PM
to CSCI2950-u Fall 10 - Brown
Paper Title: AjaxScope: A Platform for Remotely Monitoring the Client-
Side Behavior of Web 2.0 Applications

Author(s): Emre Kiciman, Benjamin Livshits

Date: 2007, SOSP

Novel Idea: Performing on-the-fly parsing and instrumentation of
JavaScript code as it is sent to users' (client's) browsers.

Main Result(s): AjaxScope, a dynamic instrumentation platform that
enables cross-user monitoring and just-in-time control of web
application behaviour on end-user desktops. It provides facilities
for distributed and adaptive instrumentation in order to reduce the
client-side overhead, while giving fine-grained visibility into the
code-level behaviour of web applications.

Impact: A system that gives web-application developers a good amount
of visibility into the end-to-end behaviour of their systems.

Evidence: They build a prototype and analyse the behaviour of over 90
Web 2.0 applications and sites that use large amounts of JavaScript.

Prior Work: ParaDyn, BrowserShield, CoreScript, Runtime Program
Analysis tools

Competitive Work: The authors do perform an extensive set of tests and
give enough results to demonstrate the strengths of the system. The
fact that they devote more than 3 sections to evaluation is good
enough evidence. Also, they give some extremely interesting numbers on
IE and Firefox; and given that this has been accepted at a quality
conference, the results are presumably accurate. The authors also
measure the performance of IE on common portal pages.

Reproducibility: The idea, although novel, looks not too hard to
implement. They do not seem to have given a lot of details about the
architecture of the system. And of course, reproducing the exact
numbers they obtained is unlikely. However, the system could be
reproduced with a fair amount of work.

Criticism: The paper has a lot of interesting data. Nice to read.


Zikai

Nov 17, 2010, 8:34:11 PM
to CSCI2950-u Fall 10 - Brown
Paper Title: AjaxScope: A Platform for Remotely Monitoring the Client-
side Behavior of Web 2.0 Applications
Author(s): Emre Kıcıman and Ben Livshits

Date/Conference: SOSP 07

Novel Idea: (1) Build a flexible platform for monitoring, debugging
and profiling Web 2.0 applications based on instant redeployability of
applications. Specifically, use this ability to dynamically rewrite
client-side JavaScript code.
(2) Present two new instrumentation techniques, adaptive
instrumentation and distributed tests, which dramatically reduce the
per-user overhead of otherwise prohibitively expensive testing
policies in practice.

Main Results: (1) Design and implement AjaxScope, a platform for
improving developer’s end-to-end visibility into web application
behavior through a continuous adaptive loop of instrumentation,
observation and analysis.
(2) Demonstrate effectiveness of AjaxScope by implementing a variety
of practical instrumentation policies for debugging and monitoring web
applications: performance profiling, memory leak detection and cache
placement for expensive, deterministic function calls.
(3) Evaluate instrumentation policies above by applying them to a
suite of 90 widely- used and diverse web applications.

Impact: Allow web application developers to gain end-to-end visibility
into their scripts and more easily perform performance profiling,
debugging and optimization. Web applications will become more reliable
and bug-free as a result.

Evidence: (1) In Section 4.1, the message logging overhead and parsing
latency that affect almost every instrumentation policy are measured.
(2) In Sections 5, 6, and 7, three instrumentation policies (adaptive
instrumentation, distributed instrumentation, A/B testing) for
different purposes (performance profiling, memory leak detection, and
cache placement for expensive, deterministic function calls) are
evaluated in terms of CPU overhead, network bandwidth overhead, and
performance improvement.

Prior Work: runtime program analysis [17, 20], JavaScript’s ECMA
language specification [13]

Reproducibility:
AjaxScope is available at http://research.microsoft.com/projects/ajaxview.
One can easily deploy it and the web application server and reproduce
the experiments.

Question:
Nowadays, web applications rarely have only one server. They tend to
have multiple, possibly geographically separated servers with a
load-balancing mechanism involved; furthermore, a CDN may also be
used. How can AjaxScope's single server-side proxy be extended to a
distributed architecture to handle this? Or do we just deploy proxies
for some of the servers? Is it possible for a client to change its
server while using a web application, so that the single-proxy
strategy fails?

Criticism: The profiling and debugging strategies presented in this
paper are interesting and much more effective than our traditional
testing methods. It would be amazing if we could somehow extend them
to traditional software, even if it does not follow the
software-as-a-service paradigm and we cannot utilize adaptive
instrumentation.



Joost

Nov 18, 2010, 10:42:53 AM
to CSCI2950-u Fall 10 - Brown
Paper: AjaxScope: A Platform for Remotely Monitoring the Client-Side
Behavior of Web 2.0 Applications
Authors: Emre Kıcıman and Benjamin Livshits
Date: SOSP’07, October 14–17, 2007
Novel Idea: AjaxScope tries to tackle the problems that have come up
as the web has evolved into the framework it is today. In particular,
the rise of JavaScript in client-side browsers has created new
programming environments where applications crash, and, given that the
language never envisioned the scope it has today, it lacks the tools
for remote debugging and the cross-platform consistency that more
complex applications require. AjaxScope seeks to fill this gap.
Main Result: The authors implemented a prototype of the AjaxScope
framework that works in a non-intrusive manner and helps detect bugs
such as memory leaks.
Impact: This tool could prove rather helpful in the creation of newer
web applications as well as in the debugging of existing frameworks.
Evidence: The authors include a variety of monitoring techniques to
model the intrusiveness and overhead of the system, and tested their
framework on a variety of existing web-applications to demonstrate the
scope of errors that could be discovered.
Reproducibility: While one could reproduce the kind of results the
paper reports for monitoring specific web applications given the
AjaxScope framework, reproducing the full scope of the article would
be difficult without more implementation detail.
Question/Criticism: Very little implementation detail is given, and
little reason is given for parameter selection.


