Intent to Implement and Ship: WebAudio: 'AudioDestinationNode.outputTimeStamp' and 'AudioDestinationNode.outputLatency' attributes

Mikhail Pozdnyakov

May 13, 2016, 10:03:39 AM
to blink-dev
Contact emails

mikhail.p...@intel.com

Spec

http://webaudio.github.io/web-audio-api/#AudioDestinationNode
http://webaudio.github.io/web-audio-api/#attributes-4

Summary

The 'AudioDestinationNode.outputTimeStamp' and 'AudioDestinationNode.outputLatency' attributes are used to synchronize the audio time clock with the performance clock.
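
A rough usage sketch, assuming the attribute shapes in the spec draft above (outputTimeStamp exposing { contextTime, performanceTime } and outputLatency expressed in seconds):

    const ctx = new AudioContext();
    const dest = ctx.destination;

    // Map a time on the audio clock (seconds) onto the performance.now() clock (milliseconds).
    function audioTimeToPerformanceTime(audioTime) {
      const ts = dest.outputTimeStamp;   // paired snapshot of both clocks
      return ts.performanceTime + (audioTime - ts.contextTime) * 1000;
    }

    // e.g. a visual cue for a sound scheduled at audio time `startTime` should fire around
    // audioTimeToPerformanceTime(startTime) + dest.outputLatency * 1000.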

Motivation

Please see https://github.com/WebAudio/web-audio-api/issues/12 for motivation and WG discussions around this topic.

Interoperability and Compatibility Risk

As part of the W3C Web Audio API, this feature is expected to be implemented in all browsers.

Ongoing technical constraints

None

Will this feature be supported on all six Blink platforms (Windows, Mac, Linux, Chrome OS, Android, and Android WebView)?

Yes

OWP launch tracking bug

TBD

Requesting approval to ship?

Yes

Chris Harrelson

May 13, 2016, 11:05:17 AM
to Mikhail Pozdnyakov, blink-dev
LGTM1

Raymond Toy

May 13, 2016, 2:38:27 PM
to Mikhail Pozdnyakov, blink-dev
Just want to note that this depends on the audio latency hint intent (which hasn't been implemented yet, pending internal design and refactoring work), because outputLatency critically depends on the latency/buffering hint that the user may provide.

I think it would be better to implement the latency hint first and then this.


Rick Byers

May 13, 2016, 8:25:32 PM
to Raymond Toy, Mikhail Pozdnyakov, blink-dev
So should we revisit the debate in that intent before proceeding with this one?

Hongchan Choi

May 16, 2016, 12:28:41 PM
to Rick Byers, Raymond Toy, Mikhail Pozdnyakov, blink-dev
I've also pointed this out on the issue tracker and the PR. We have confusing variables in two different places. I feel like we're improvising new features as we go because of flaws in the original API. If we have to introduce this feature, I think we can do it in a cleaner and nicer way.

The following is what's planned in the intent:
AudioDestinationNode.outputTimeStamp.contextTime
AudioDestinationNode.outputTimeStamp.performanceTime
AudioDestinationNode.outputLatency

Currently we have (or have firmly decided on):
AudioContext.currentTime
AudioContext.baseLatency

In the original discussion, Mikhail argued that having the time and latency information on the destination node is the right design, but I believe otherwise. The destination node is the final gate of the audio graph, not the driving engine of the audio rendering process. Blink happens to implement the rendering mechanism inside the destination node, but that does not mean the API design must follow how it is implemented. Conceptually the destination node is a passive element just like any other node. It is not responsible for advancing time or calculating the expected latency.

We already placed the time information in the context, and that we cannot change (it has been there since day one of the Web Audio API). I believe the context is the right place to query the time/latency information.

Raymond Toy

May 16, 2016, 4:00:09 PM
to Rick Byers, Mikhail Pozdnyakov, blink-dev
On Fri, May 13, 2016 at 5:25 PM, Rick Byers <rby...@chromium.org> wrote:
So should we revisit the debate in that intent before proceeding with this one?

I think they can be done somewhat independently, but ideally the latency hint should be implemented first. If done independently, it would be unfortunate if the implementation of outputTimeStamp had to be completely redone to support the latency hint.

Mikhail Pozdnyakov

May 18, 2016, 10:26:26 AM
to blink-dev, rby...@chromium.org, mikhail.p...@intel.com
We discussed this with Raymond on IRC yesterday and came to the conclusion that the intended functionality does not actually depend on the latency hint implementation, because the data for both outputTimeStamp and outputLatency is fetched directly from the platform API (e.g. WASAPI IAudioClock::GetPosition on Windows) and this should work fine with any latency requirements provided by the user.

Raymond Toy

May 18, 2016, 11:46:19 AM
to Mikhail Pozdnyakov, blink-dev, Rick Byers
On Wed, May 18, 2016 at 7:26 AM, Mikhail Pozdnyakov <mikhail.p...@intel.com> wrote:
We discussed this with Raymond on IRC yesterday and came to the conclusion that the intended functionality does not actually depend on the latency hint implementation, because the data for both outputTimeStamp and outputLatency is fetched directly from the platform API (e.g. WASAPI IAudioClock::GetPosition on Windows) and this should work fine with any latency requirements provided by the user.

To be clear, we agreed that the baseLatency (from the buffering/latency hint) doesn't affect the time stamp much. The double buffering needs to be accounted for, but the latency hint doesn't introduce any additional delay that would be propagated to the timestamps, especially contextTime.
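
A back-of-envelope sketch of the accounting I mean (the buffer sizes below are made-up example numbers, not what Blink actually uses):

    const sampleRate = 48000;
    const platformBufferFrames = 512;                 // what the latency hint would change
    const doubleBufferFrames = platformBufferFrames;  // the extra buffering that has to be accounted for
    const framesDeliveredToPlatform = 480000;         // e.g. 10 seconds of audio handed off so far

    // outputLatency grows with the buffering the hint asks for...
    const outputLatency = (platformBufferFrames + doubleBufferFrames) / sampleRate; // ~0.021 s
    // ...but contextTime is just "frames handed off so far", which the hint does not change.
    const contextTime = framesDeliveredToPlatform / sampleRate;                     // 10 s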

Well, I think that's true. We haven't implemented it yet, so this is really just my best guess based on what we actually do today (without the latency hint).

I did not look into the platform API stuff. Are there equivalent APIs for the other platforms? If not, is that going to be a problem to implement this for all six platforms you said this would be implemented for?

Mikhail Pozdnyakov

May 18, 2016, 12:24:09 PM
to blink-dev, mikhail.p...@intel.com, rby...@chromium.org


On Wednesday, May 18, 2016 at 6:46:19 PM UTC+3, Raymond Toy wrote:


On Wed, May 18, 2016 at 7:26 AM, Mikhail Pozdnyakov <mikhail.p...@intel.com> wrote:
We discussed this with Raymond on IRC yesterday and came to the conclusion that the intended functionality does not actually depend on the latency hint implementation, because the data for both outputTimeStamp and outputLatency is fetched directly from the platform API (e.g. WASAPI IAudioClock::GetPosition on Windows) and this should work fine with any latency requirements provided by the user.

To be clear, we agreed that the baseLatency (from the buffering/latency hint) doesn't affect the time stamp much. The double buffering needs to be accounted for, but the latency hint doesn't introduce any additional delay that would be propagated to the timestamps, especially contextTime.

Well, I think that's true. We haven't implemented it yet, so this is really just my best guess based on what we actually do today (without the latency hint).

I did not look into the platform API stuff. Are there equivalent APIs for the other platforms? If not, is that going to be a problem to implement this for all six platforms you said this would be implemented for?

I believe all the major platforms offer API equivalents so that it should be possible to implement the intended functionality for them.

Raymond Toy

May 18, 2016, 1:17:23 PM
to Mikhail Pozdnyakov, blink-dev, Rick Byers
On Wed, May 18, 2016 at 9:24 AM, Mikhail Pozdnyakov <mikhail.p...@intel.com> wrote:


On Wednesday, May 18, 2016 at 6:46:19 PM UTC+3, Raymond Toy wrote:


On Wed, May 18, 2016 at 7:26 AM, Mikhail Pozdnyakov <mikhail.p...@intel.com> wrote:
We discussed this with Raymond on IRC yesterday and came to the conclusion that the intended functionality does not actually depend on the latency hint implementation, because the data for both outputTimeStamp and outputLatency is fetched directly from the platform API (e.g. WASAPI IAudioClock::GetPosition on Windows) and this should work fine with any latency requirements provided by the user.

To be clear, we agreed that the baseLatency (from the buffering/latency hint) doesn't affect the time stamp much. The double buffering needs to be accounted for, but the latency hint doesn't introduce any additional delay that would be propagated to the timestamps, especially contextTime.

Well, I think that's true. We haven't implemented it yet, so this is really just my best guess based on what we actually do today (without the latency hint).

I did not look into the platform API stuff. Are there equivalent APIs for the other platforms? If not, is that going to be a problem to implement this for all six platforms you said this would be implemented for?

I believe all the major platforms offer API equivalents so that it should be possible to implement the intended functionality for them.

Thanks for the links. This looks good, but a few questions (not necessarily for you):
AudioTimestamp says it came in Android API 19 (KitKat, I think). I don't know if we need to support this on earlier versions or not.

This is for iOS. For OS X, it seems it's only available on 10.10. What about earlier versions? I think we still need to support 10.9. How will this be handled?

Rick Byers

May 18, 2016, 3:28:12 PM
to Raymond Toy, Mikhail Pozdnyakov, blink-dev
LGTM2

Raymond Toy

May 18, 2016, 4:04:03 PM
to Hongchan Choi, Rick Byers, Mikhail Pozdnyakov, blink-dev
On Mon, May 16, 2016 at 9:28 AM, Hongchan Choi <hong...@chromium.org> wrote:
I've also pointed this out on the issue tracker and the PR. We have confusing variables in two different places. I feel like we're improvising new features as we go because of flaws in the original API. If we have to introduce this feature, I think we can do it in a cleaner and nicer way.

The following is what's planned in the intent:
AudioDestinationNode.outputTimeStamp.contextTime
AudioDestinationNode.outputTimeStamp.performanceTime
AudioDestinationNode.outputLatency

Currently we have (or have firmly decided on):
AudioContext.currentTime
AudioContext.baseLatency

In the original discussion, Mikhail argued that having the time and latency information on the destination node is the right design, but I believe otherwise. The destination node is the final gate of the audio graph, not the driving engine of the audio rendering process. Blink happens to implement the rendering mechanism inside the destination node, but that does not mean the API design must follow how it is implemented. Conceptually the destination node is a passive element just like any other node. It is not responsible for advancing time or calculating the expected latency.

What about these issues?  Although the spec is written, that hasn't prevented us from changing it before.

I rather agree with hongchan@ on this.  The AudioDestinationNode is really just another node in the graph, especially since we've added an output to it and you can create a loop from the destination to any other node.  What latency really means with such a loop is no longer so clear. The timestamps and such are really associated with the AudioContext, not the node. 

Rick Byers

May 18, 2016, 5:09:10 PM
to Raymond Toy, Hongchan Choi, Mikhail Pozdnyakov, blink-dev
Sorry, I thought Mikhail was saying you guys had reached a consensus on this.  Is there a GitHub issue tracking this debate?  That's a better forum for spec debates than the intent thread.  Once the WG comes to a conclusion on the issue, then we can circle back on the intent.

Hongchan Choi

May 24, 2016, 4:18:41 PM
to Rick Byers, Raymond Toy, Mikhail Pozdnyakov, blink-dev
The WG reached a conclusion: https://github.com/WebAudio/web-audio-api/issues/817

Rick, the change is quite simple: moving the new properties/methods into AudioContext. Do you think we should file another intent or continue on this thread?

Mikhail Pozdnyakov

May 26, 2016, 2:43:16 PM
to blink-dev, rby...@chromium.org, rt...@google.com, mikhail.p...@intel.com
The issue https://github.com/WebAudio/web-audio-api/issues/817 is closed now and the specification has been updated so that the same attributes are moved to AudioContext (http://webaudio.github.io/web-audio-api/#attributes-1). Can we proceed with the intent?

Philip Jägenstedt

May 27, 2016, 7:58:27 AM
to Mikhail Pozdnyakov, blink-dev, rby...@chromium.org, rt...@google.com
I took a look and filed two new issues:

https://github.com/WebAudio/web-audio-api/issues/829
https://github.com/WebAudio/web-audio-api/issues/830

I have no doubt that these issues can be sorted out, so perhaps you can begin implementing in the meantime?

Mikhail Pozdnyakov

May 30, 2016, 2:59:01 PM
to blink-dev, mikhail.p...@intel.com, rby...@chromium.org, rt...@google.com
Yeah, I could start implementing it behind a flag and ship after all the spec issues are resolved (those are syntax-related and should not influence the implementation too much). Thanks.

Mikhail Pozdnyakov

Dec 13, 2016, 4:13:45 PM
to blink-dev, mikhail.p...@intel.com, rby...@chromium.org, rt...@google.com
Hi,

The specification issues mentioned above (https://github.com/WebAudio/web-audio-api/issues/829, https://github.com/WebAudio/web-audio-api/issues/830) were resolved a while ago and there are no new issues regarding the intended API.
At the moment the implementation of 'AudioContext.getOutputTimestamp()' is close to completion (crrev.com/2060833002).

Would it be fine to ship it (one more LGTM is still missing)?

Mikhail Pozdnyakov

Dec 14, 2016, 9:59:23 AM
to blink-dev, mikhail.p...@intel.com, rby...@chromium.org, rt...@google.com
Posted a separate "Intent to Ship" thread for the 'AudioContext.getOutputTimestamp()' method at https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/fEWdrU4C-3Y, since the API semantics have changed slightly in the specification (the functionality remains the same) compared to what was intended in this thread (i.e., AudioDestinationNode.outputTimeStamp => AudioContext.getOutputTimestamp()).
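
A minimal usage sketch of the reshaped API, assuming getOutputTimestamp() returns the same { contextTime, performanceTime } pair that outputTimeStamp was going to expose:

    const ctx = new AudioContext();
    const { contextTime, performanceTime } = ctx.getOutputTimestamp();
    // contextTime: seconds on the AudioContext.currentTime timeline.
    // performanceTime: milliseconds on the performance.now() timeline.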