Hi everyone,

there's a specification for exposing detailed timing information of web
page elements via the DOM:
http://dev.w3.org/2006/webapi/WebTiming/
Some people here would like to implement it in Firefox. The reason for
exposing it via the DOM is to gather actual concrete end-user timings;
other approaches don't allow that (also not all of this timing data is
currently available to extensions).
The idea is to change the spec a bit to limit the number of elements on
which this data is exposed, probably to drop the Ticks interface, etc.
Would people agree this is useful, and in particular, would the DOM
peers be willing to accept patches to implement this spec?
Chromium is implementing this spec as well
(https://lists.webkit.org/pipermail/webkit-dev/2009-October/010382.html).
For a bit of discussion on the spec, see the thread at
http://www.mail-archive.com/public-...@w3.org/msg07393.html
-christian
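To make the proposal concrete, here is a sketch of the kind of derived metrics a page could compute from DOM-exposed timestamps. The attribute names are assumptions modeled on the later Navigation Timing work; the WebTiming draft's exact interface differed.

```typescript
// Hypothetical shape of the timing data the spec would expose via the DOM.
// These attribute names are assumptions, not the draft's actual interface.
interface DocumentTiming {
  navigationStart: number; // all values in ms since epoch
  fetchStart: number;
  responseStart: number;
  loadEventStart: number;
}

// Derive end-user-visible phases from the raw timestamps.
function pageLoadPhases(t: DocumentTiming) {
  return {
    beforeFetch: t.fetchStart - t.navigationStart,
    timeToFirstByte: t.responseStart - t.navigationStart,
    totalLoad: t.loadEventStart - t.navigationStart,
  };
}

// Example with made-up timestamps:
const phases = pageLoadPhases({
  navigationStart: 1_000_000,
  fetchStart: 1_000_005,
  responseStart: 1_000_180,
  loadEventStart: 1_000_950,
});
console.log(phases); // { beforeFetch: 5, timeToFirstByte: 180, totalLoad: 950 }
```

This is exactly the "actual concrete end-user timings" use case: the page itself computes what its users experienced, instead of relying on a server-side clock or an extension.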
So the biggest issue I see here (past the fact that I'm not sure we can
sanely get some of this information) is that any element that exposes
this interface ends up bigger by about 100 bytes, right? That seems a
little unfortunate....
-Boris
This is really good news. I agree that this is useful, and I am also
willing to contribute towards the implementation of this spec.
Thank you very much, biesi, for bringing this up.
Looking forward to more support.
-- Brahmana
>
> Would people agree this is useful, and in particular, would the DOM
> peers be willing to accept patches to implement this spec?
Timing is useful, sure, but *I* am not sure that the *draft* spec is
even close to something which should be implemented.
But sure, IMO, patches should be accepted if and when the specification
is stable and reasonable enough.
-Olli
That places a burden on all users for performance timing information
useful to a few. Should we adopt this broadly and put more profiling and
debugging into the DOM? If not, why this particular one?
jjb
That is unfortunate, but there are optimizations you can do. Instead of
storing the timestamps as msec-since-epoch, you could store the startup
timestamp once and only store each event's offset from it, for which a
PRUint32 might suffice. Or you could use the document-load-start
timestamp instead of startup. That way, you only need half the memory.
You could optimize further and, instead of storing start+end, store only
the difference, which might be good enough.
-christian
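Christian's delta-encoding idea can be sketched like this. It is a toy model, not Gecko code; in Gecko the base would be a PRTime and the offsets PRUint32, and the class and method names here are made up:

```typescript
// Toy model of the delta-encoding idea: keep one full base timestamp and
// store each event as a 32-bit offset from it, halving per-event storage.
class CompactTiming {
  private offsets: Uint32Array; // 4 bytes per event instead of 8

  constructor(private base: number, slots: number) {
    this.offsets = new Uint32Array(slots);
  }

  record(slot: number, timestampMs: number): void {
    // Caller must guarantee the offset fits in 32 bits -- see Boris's
    // overflow objection elsewhere in this thread.
    this.offsets[slot] = timestampMs - this.base;
  }

  get(slot: number): number {
    return this.base + this.offsets[slot]; // reconstruct the absolute time
  }
}

const ct = new CompactTiming(1_700_000_000_000, 4);
ct.record(0, 1_700_000_000_250);
console.log(ct.get(0)); // 1700000000250
```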
Smaug wrote:
> On 2/11/10 8:28 PM, Christian Biesinger wrote:
>> Hi everyone,
>>
>> there's a specification for exposing detailed timing information of web
>> page elements via the DOM:
>> http://dev.w3.org/2006/webapi/WebTiming/
>>
>> Some people here would like to implement it in Firefox. The reason for
>> exposing it via the DOM is to gather actual concrete end-user timings;
>> other approaches don't allow that (also not all of this timing data is
>> currently available to extensions).
>>
>> The idea is to change the spec a bit to limit the number of elements on
>> which this data is exposed, probably to drop the Ticks interface, etc.
> So quite a few changes are needed, as I expected.
>
>
>>
>> Would people agree this is useful, and in particular, would the DOM
>> peers be willing to accept patches to implement this spec?
>
> Timing is useful sure, but *I* am not sure that the *draft* spec is
> even close to something which should be implemented.
>
> But sure, IMO, patches should be accepted if and when the specification
> is stable and reasonable enough.
Sure, the spec is being improved. If you have concrete feedback, I'm
sure Zhiheng would love to hear it :) We also don't have to implement
the spec completely from the start; we can implement the more reasonable
parts first.
-christian
I was looking at this a couple of weeks ago. I think blizzard pointed
me to it. A standardized mechanism for timing could be a great thing.
> The idea is to change the spec a bit to limit the number of elements on
> which this data is exposed, probably to drop the Ticks interface, etc.
>
> Would people agree this is useful, and in particular, would the DOM
> peers be willing to accept patches to implement this spec?
Has anyone filed a meta bug to capture implementation of these pieces
yet, assuming we do this? I think we should.
>
> Chromium is implementing this spec as well
> (https://lists.webkit.org/pipermail/webkit-dev/2009-October/010382.html).
> For a bit of discussion on the spec, see the thread at
> http://www.mail-archive.com/public-weba...@w3.org/msg07393.html
>
> -christian
Some feedback was sent to the w3c webapps list already,
and that is the place where changes to the draft should be discussed.
-Olli
That's a good way to end up with behavior that violates the spec due to
overflowing counters, so I don't think it would in fact be a valid
optimization.
-Boris
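A quick back-of-the-envelope check of Boris's objection: a 32-bit millisecond offset wraps after 2^32 ms, so any document (or browser session, if offsets are taken from startup) alive longer than that would report wrapped, spec-violating times:

```typescript
// Quantifying the overflow concern: a 32-bit millisecond counter wraps
// after 2^32 ms, i.e. roughly 49.7 days.
const maxOffsetMs = 2 ** 32; // 4294967296 ms
const wrapDays = maxOffsetMs / (1000 * 60 * 60 * 24);
console.log(wrapDays.toFixed(1)); // "49.7"
```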
As expected, I will continue losing my hair over this draft, so
please do send your feedback.
(Sorry for the late response. I've been on leave for the past two
weeks.)
cheers,
Zhiheng
I think we should make it configurable: whether to enable it by default
when the website requests it, whether to ask the user when the website
requests it, or whether to always keep it off. I think we should not
switch it on if the website does not request it using an HTTP header,
which is to be specified.
For non-HTTP sources, other rules might specify when to enable it. The
default should be not to enable it, to avoid that burden on every user,
even those not using it.
Hi Christian,
Sorry about the slow response. Yes, I'd say we're definitely interested
in this. I do have some concerns about the spec, which I raised on the
relevant mailing lists. However they are mostly about syntax and which
specific elements are measured.
But all in all I'd love to see this get in to gecko. If you have plans
to implement this, or know of anyone who does, please do let us know.
/ Jonas
I do in fact know someone who wants to implement this (cc'd). I think
the idea is to start with implementing window.timing in an initial patch
and then implement this for elements in a separate patch.
-christian
Yeah, that sounds like a good approach. Keep in mind that if the spec
changes (which seems likely based on Olli's and my reactions to it),
we'll need to keep Gecko up to date. We might even want to prefix all
the properties with 'moz' for now.
/ Jonas
What is the root document? Any HTTP response might say "I would like to
enable the interface". The browser then decides, depending on the user's
privacy rules, whether to really enable the interface and which parts to
expose and which not.
If a web application needs it, it will send that HTTP header in each
(X)HTML page embedding scripts that want to use it. A web application
does not need to test how long it takes to connect to that server for
the first time within that session; this is the only thing which cannot
be measured if the interface is activated after the response begins.
Usually a web application is interested in timing information during
usage of the application, not in timing information from before it
starts. The application starts when the server receives the incoming
connection; the time from receiving the incoming connection until
sending the first HTTP response header can be measured by the server, if
it is interested in this timing information.
If we are to start the timer when the request gets to the server (or
even when the response reaches the user), we are missing a big chunk of
the latency seen by the user. And the site owner can indeed do something
about it, say, using a CDN and changing the DNS TTL. Many pages start
the timer at the top of the page, imo, because the APIs in this spec are
not yet available. :-) To get a complete picture of what the users see,
they have to turn to user-side plugins like toolbars and standalone
binaries that are only available to a few.
cheers,
Zhiheng
Many pixels have been spent on how to time that very thing reliably,
but maybe you're right: why do you think that applications don't need
to test the total time for first-time access? Is that what you're
hearing from web developers who are watching the performance of their
sites?
Mike
Nothing is before the Big Bang. The Big Bang of a web application is
when the server receives the first request. Nothing was before; no one
must care about the before.
The user sending the request may be interested in that timing and may
therefore enable measuring this time. In that case the time from before
the start may be available, but why should this time be of interest to
the application? During the application's run there are many times to
record; why is this single time from before the application starts of
interest?
Interesting for the application are latencies caused by technical issues
and user reaction times. So the timing information is needed to
distinguish which server-side waiting is caused by technical issues and
which is caused by the user.
You are looking at this only from the web-application point of view. Metrics
like DNS time, connect time and time taken to receive the first byte are
equally important to a lot of people. These numbers tell you how well your
deployment is working.
> _______________________________________________
> dev-tech-dom mailing list
> dev-te...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-dom
>
--
Regards,
Srirang G Doddihal
Brahmana.
The LIGHT shows the way.
The WISE see it.
The BRAVE walk it.
The PERSISTENT endure and complete it.
I want to do it all ALONE.
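As an illustration of the deployment-health metrics Srirang mentions, DNS, connect, and first-byte times could be derived from per-request timestamps like this; the attribute names are assumptions modeled on later drafts, not necessarily what WebTiming specified:

```typescript
// Hypothetical per-request timestamps (ms, relative to request start).
interface NetworkTiming {
  domainLookupStart: number;
  domainLookupEnd: number;
  connectStart: number;
  connectEnd: number;
  requestStart: number;
  responseStart: number;
}

function deploymentMetrics(t: NetworkTiming) {
  return {
    dnsMs: t.domainLookupEnd - t.domainLookupStart, // DNS resolution time
    connectMs: t.connectEnd - t.connectStart,       // TCP connect time
    firstByteMs: t.responseStart - t.requestStart,  // time to first byte
  };
}

// Made-up request where DNS took 40 ms, connect 70 ms, first byte 200 ms:
const metrics = deploymentMetrics({
  domainLookupStart: 0, domainLookupEnd: 40,
  connectStart: 40, connectEnd: 110,
  requestStart: 110, responseStart: 310,
});
console.log(metrics); // { dnsMs: 40, connectMs: 70, firstByteMs: 200 }
```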
DNS time, connect time and so on is information that is interesting for
the end user, who may switch collecting this information on or off. It
should be off by default and should never be accessible to non-chrome
code, because this is sensitive information.
It is useful for service technicians to find out why the internet does
not work, but the question is: why must this analysis tool be part of
the browser? It has nothing to do with the web application, and
therefore must not be part of the DOM, because it is not part of the
document.
> Srirang Doddihal wrote:
>
>> You are looking at this only from the web-application point of view.
>> Metrics
>> like DNS time, connect time and time taken to receive the first byte are
>> equally important to a lot of people. These numbers tell you how well your
>> deployment is working.
>>
>
> DNS time, connect time and so on is information that is interesting for the
> end user, who may switch collecting this information on or off.
It's not just the end user who is interested in this. A lot (most?) of
people hosting webapps ask for this information.
> It should be off by default and should never be accessible to
> non-chrome code, because this is sensitive information.

Why would you classify this as sensitive information?
>
> It is useful for service technicians to find out, why internet does not
> work, but the question is, why must this analysis tool be part of the
> browser? It has nothing to do with web application and therefore must not be
> part of the DOM, because it is not part of the document.
This information is necessary to understand the end to end
behavior/performance of the web app as seen by the end user. So this is not
for technicians trying to troubleshoot a connectivity problem, but this is
for people hosting the websites who want to know the QoS as seen by their
end users. Only the browser can provide this information to the website and
DOM is probably the best way to expose this information in a standard
manner.
We all know that client side technicians have various tools, which are far
more powerful than the browser, to find out why internet does not work.
I'm also interested in your account information, to take money from it.
But you probably won't accept satisfying my interest in fetching your
money by default. So their interest is no argument.
>> It should be off by default and should never be accessible to
>> non-chrome code, because this is sensitive information.
>
> Why would you classify this as sensitive information?
It gives a lot of information about the infrastructure. Accessing DNS
may cause a proxy to first request authentication, which causes a long
delay, because it requires user interaction. From the point of view of
the application, all this is client-side infrastructure, which must not
be known to the application without the user's explicit acceptance
before this information is collected.
If name resolution for the first request takes more than 10 seconds, I
can assume that there is a proxy in between which requests
authentication, because otherwise name resolution usually does not take
that long. So timing information on that level is critical, because it
exposes information about the client-side network infrastructure.
> This information is necessary to understand the end to end
> behavior/performance of the web app as seen by the end user. So this is not
> for technicians trying to troubleshoot a connectivity problem, but this is
> for people hosting the websites who want to know the QoS as seen by their
> end users.
If they do not have a contract with the end user permitting them to
collect detailed information about his infrastructure, they must not be
able to collect any information related to IP/port pairs other than the
ones they communicate with directly.
Additionally, if they do not have a contract with other involved hosts,
such as DNS servers, permitting them to collect such information, they
must not do that.
> Only the browser can provide this information to the website and
> DOM is probably the best way to expose this information in a standard
> manner.
If I provide a server on the internet, why should I permit others to
collect timing statistics and expose them to third parties without
being asked about it?
If others refer to resources on my server, e.g. by embedding them into
their web application, why should I permit them to collect and publish
timing statistics about my server?
Embedding the external resource into the application makes it a piece
of the application, and therefore also a target for collecting timing
information. So the target also needs to be asked whether it permits
collecting timing statistics, and to whom it permits giving access.
If timing information collected about my server is published, this
enables comparison with the servers of competitors, which might expose
my server as a poor one and cause potential customers to say: if that
server is so poor, we won't do business with that poor company, even
though the server might not be part of the business. So collecting such
timing information is very critical: each involved host must explicitly
grant permission to collect timing information, stating who may access
it, before the collection starts.
Filed https://bugzilla.mozilla.org/show_bug.cgi?id=554045
-christian
In this particular case, there are other similar ways to tell, e.g., a
websocket's readyState changes from CONNECTING to OPEN. Another minor
point, but timeouts from DNS and the TCP handshake will also make it
harder to tell if user interaction is involved. (Timeouts are more
common on lossy access links.) In short, I am not sure there is any
"added value" from this spec over what's already out there.
It doesn't take the timing info provided in this draft to figure out if
a service has a poor server/design. IMHO, it's more reliable to tell a
poorly performing service by putting it into an iframe and timing its
loading time. In fact, I would argue this spec gives those companies a
tool: now they can find out whom to blame and what to do when they see
poor user performance.
Security issues are important. The draft tries to export timing
information in a way that is useful for performance diagnostics but not
enough to expose client info/privacy. Should there be corner cases, fix
them we shall.
thanks,
Zhiheng
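The iframe-based measurement Zhiheng describes amounts to timing an asynchronous load from the outside: record a timestamp, wait for the load to finish, and subtract. A minimal generic sketch, where the helper's name and shape are hypothetical; in a page, `load` would insert the iframe and resolve on its load event:

```typescript
// Time any asynchronous load from the outside, as in the iframe trick.
async function timeLoad(
  load: () => Promise<void>,
  now: () => number = Date.now,
): Promise<number> {
  const start = now();
  await load();            // e.g. resolve when the iframe fires 'load'
  return now() - start;    // elapsed wall-clock time in ms
}

// Usage with a stand-in loader that takes about 50 ms:
timeLoad(() => new Promise<void>(resolve => setTimeout(resolve, 50)))
  .then(ms => console.log(`loaded in ~${ms} ms`));
```

Note this only yields a total load time; it cannot break the latency down into DNS, connect, and response phases the way the draft's attributes would.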
One way to solve this would be to attach the timing data to the load
event fired from these elements, instead of on the elements themselves.
/ Jonas
To make sure I follow: do you mean allowing GC to remove the timing
objects earlier if we attach DOMTiming to onload? This is an
interesting idea, and a couple of developers here actually had some
discussion on it. A main downside is that it limits the interface to
giving out data only up to the onload event. While right now the
attributes mandated by the draft fit that, the draft is also open to
other UA-specific attributes, e.g., the pain event, etc.
thanks,
Zhiheng
Specifically, to attach it to the Event object fired for the 'load'
event. Yes, the idea is that the DOMTiming object could be GCed earlier
since Event objects are often short-lived. Of course, if a page wants to
keep the timing data around for longer, it can always hold a reference
to the DOMTiming object.
> A main downside is that it limits the interface to give out data up to
> the onload event only. While right now the attributes mandated by the
> draft fit that, the draft is also open to other UA-specific attributes,
> e.g., the pain event, etc.
(I take it you mean the pain*t* event?)
I'm not saying it needs to be exclusively on the 'load' Event. It can be
attached to other Events as needed as well.
/ Jonas
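A sketch of Jonas's proposal. The `timing` property on the event object is purely hypothetical; the point is only that a short-lived Event lets the data be collected early unless the page keeps its own reference:

```typescript
// Hypothetical 'load' event carrying the timing data, instead of
// attaching it to the element itself.
interface TimedLoadEvent {
  type: "load";
  timing: { fetchStart: number; responseStart: number; loadEventStart: number };
}

// A page that wants the data past the event's lifetime keeps a reference.
const saved: TimedLoadEvent["timing"][] = [];

function onLoad(e: TimedLoadEvent): void {
  saved.push(e.timing); // holding a reference keeps the data alive
}

// Simulated event dispatch with made-up timestamps:
onLoad({
  type: "load",
  timing: { fetchStart: 0, responseStart: 120, loadEventStart: 800 },
});
console.log(saved[0].responseStart); // 120
```

Pages that never read the timing pay nothing beyond the Event object they already receive, which addresses the per-element memory concern raised earlier in the thread.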
Why on earth, why?!
/ Jonas
And, yes, it's the "paint" event I referred to. :-)
cheers,
Zhiheng