Intent to Implement: Media Stream Recording API


Rachel Blum

Jul 11, 2013, 8:06:53 PM
to blink-dev



Contact emails

gbil...@chromium.org

gr...@chromium.org


Spec

http://www.w3.org/TR/mediastream-recording/


Summary

An API for media stream recording. Allows web applications to record encoded media streams.


Motivation

getUserMedia makes raw media input available to web apps, but apps currently have no way to access the media streams after encoding. This has led to workarounds like recorder.js (https://github.com/mattdiamond/Recorderjs using web audio, https://github.com/jwagener/recorder.js/ for Flash-based recording) and weppy (movie recording as raw image sequences - http://antimatter15.github.io/weppy/demo.html) for video. The media files so produced are extremely bulky compared to compressed and encoded formats.


Specific applications that would benefit from this capability are audio recording apps, screen recording, video editing and compositing, and communications apps (such as Hangouts), which all currently require plugins or NaCl libraries to achieve acceptable performance.



Compatibility Risk

Small to medium. One of the authors of the spec is Travis Leithead from Microsoft. Mozilla is currently actively working on this (http://lists.w3.org/Archives/Public/public-media-capture/2013Jul/0016.html , https://bugzilla.mozilla.org/show_bug.cgi?id=803414 ).


This is still a working draft. There are, for example, ongoing discussions about changing the API to support promises.


Ongoing technical constraints

None


Will this feature be supported on all five Blink platforms (Windows, Mac, Linux, Chrome OS and Android)?

The plan is to target all platforms where getUserMedia and WebRTC are available. This includes all of the above, some in experimental form.



OWP launch tracking bug?

Doesn’t exist for now. Will be created.

Row on feature dashboard?

Yes, needs to be created.


Requesting approval to ship?

No




Darin Fisher

Jul 15, 2013, 7:45:52 PM
to Rachel Blum, blink-dev
I haven't studied this too closely yet, but just some quick comments:

1-  Note that we store blobs in the browser process.  Is this API going to end up spamming the browser process with a lot of blobs?

2-  For video processing, I can imagine wanting to route video frames to a worker thread to be manipulated.  This probably involves the creation of a new video frame.  I'd then want to send that video frame back to the main thread to be available as a media stream track.  Is that a supported use case?  How much attention will be paid to optimizing this use case?  I'm worried blobs living in the browser could interfere with this use case.

-Darin


Rachel Blum

Jul 15, 2013, 8:42:28 PM
to Darin Fisher, blink-dev
1. Yes, there will be a new blob created every 'timeslice' milliseconds if timeslice has been set. Otherwise, all data will be gathered into one large blob. So the blob generation rate is under the application's control.
2. Video processing is out of scope for MediaRecorder - it is strictly about recording/encoding. (Processing should be the domain of http://www.w3.org/TR/streamproc/)
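A minimal sketch of the two delivery modes described above, assuming a `stream` obtained elsewhere (e.g. from getUserMedia); the helper names are illustrative, not from the spec:

```javascript
// Chunked mode: with a timeslice, a dataavailable event fires (carrying a
// Blob) roughly every `timesliceMs` milliseconds until stop() is called.
function startChunkedRecording(stream, timesliceMs, onChunk) {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) onChunk(e.data);
  };
  recorder.start(timesliceMs); // omit the argument for one big Blob at stop()
  return recorder;
}

// Rough blob-generation rate, useful for reasoning about browser-side cost.
// 0 means "no timeslice": a single Blob is produced when recording stops.
function blobsPerSecond(timesliceMs) {
  return timesliceMs > 0 ? 1000 / timesliceMs : 0;
}
```

So a 50 ms timeslice means on the order of 20 blobs per second, which is the rate the blob-cost concern in this thread applies to.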

 - rachel

Harald Alvestrand

Jul 16, 2013, 8:55:32 AM
to Darin Fisher, Rachel Blum, blink-dev
Speaking as the chair of the relevant W3C task force:

- yes, this will create blobs. The current thinking is that it will create a blob every <suitable unit of time>, so that the JS can (for instance) store them to a file, send them to a recording server, or otherwise get rid of them, so that one doesn't have to store the whole recorded video in memory.

- no, this is not an API for accessing frames out of a video. The purpose of this API is creating a recording. Applications that want to access frames out of a MediaStream should connect the MediaStream to a video tag, paint that into a canvas (which is already supported), and copy data from there.
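A sketch of that video-tag-to-canvas route, under the assumption that the stream is already attached to a `<video>` element; the helper names here are made up for illustration:

```javascript
// Copy the current video frame into a canvas and read back raw RGBA pixels.
// Browser-only: relies on DOM APIs (document, canvas 2D context).
function grabFrame(videoEl) {
  const canvas = document.createElement('canvas');
  canvas.width = videoEl.videoWidth;
  canvas.height = videoEl.videoHeight;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(videoEl, 0, 0);
  return ctx.getImageData(0, 0, canvas.width, canvas.height);
}

// Size of one uncompressed RGBA frame in bytes (4 bytes per pixel) --
// a reminder of why raw-frame access is no substitute for encoded recording.
function rgbaByteLength(width, height) {
  return width * height * 4;
}
```

At 640x480 that is roughly 1.2 MB per frame, before any encoding.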

Darin Fisher

Jul 16, 2013, 10:52:46 AM
to Harald Alvestrand, blink-dev, Rachel Blum

OK, sounds good.

Just please keep in mind that Blobs have a bit of extra cost in Chrome.

-Darin

Greg Billock

Jul 16, 2013, 12:23:19 PM
to blin...@chromium.org, Harald Alvestrand, Rachel Blum
I'd expect the latency cost of the Blobs to be the biggest worry. I think we can likely manage that, but it is a good question. I'll do some investigation along those lines.

Do you think the API should use, for example, ArrayBuffer instead of Blob? Harald can likely point us to discussions of this nature if they've been had.

Darin Fisher

Jul 16, 2013, 12:31:32 PM
to Greg Billock, blink-dev, Harald Alvestrand, Rachel Blum
I'm not arguing for ArrayBuffer.  I just want to make sure people have considered the impact of blobs
being backed by data held by the browser.

Some notable implications:
1-  If data is generated in the renderer process, then it must be uploaded to the browser process.
2-  If data needs to be accessed by the renderer process, then it must be downloaded from the browser process.
3-  Browser process memory usage will go up if a renderer process allocates a lot of blobs.

Thus, in cases where blob data is generated in the browser process and rarely read by renderer processes,
blobs, as implemented, are pretty good.  #3 can mostly be addressed by storing blobs on disk, but there is
still a slight concern about bloat.

By the way, we store blobs in the browser process so that we can share URLs to them across processes.
It enables a blob: URL to be passed from a web page to a shared worker for instance, and then on to other
renderer processes, etc.

-Darin

Greg Billock

Jul 16, 2013, 12:36:20 PM
to blin...@chromium.org, Greg Billock, Harald Alvestrand, Rachel Blum
On Tuesday, July 16, 2013 9:31:32 AM UTC-7, Darin Fisher wrote:
> I'm not arguing for ArrayBuffer.  I just want to make sure people have considered the impact of blobs
> being backed by data held by the browser.

Yes. I've emailed Michael asking about the impact of this. I think it's a good question. Blobs have semantics beyond just 'hold some data', so if the API wants to indicate that those powers are desirable, that's one thing. If it was just a convenient way for the authors to say 'hold this data', and no Blob-ish powers are meant to be used, then something like ArrayBuffer might be a better choice. I'll do some asking and report back.

Harald Alvestrand

Jul 17, 2013, 7:45:00 AM
to Greg Billock, blink-dev, Rachel Blum
One question.... where should the process that actually creates the blobs live? Browser or renderer?

The origin of data is either the network (for remote streams), the camera/microphone (for live streams), or some other source like the screen, a file or a programmatic source. Many of these may live in the browser process, but I'm not sure which ones do.

If the Blob is recorded based on data in the browser process, and subsequently written to a File object that also lives in the browser process, the current Chrome implementation of Blob might just possibly be optimal for our purposes.

But I may be too optimistic....

Darin Fisher

Jul 17, 2013, 12:51:52 PM
to Harald Alvestrand, Greg Billock, blink-dev, Rachel Blum
You hit on a key issue for sure.  Any data that exists browser-side is already cheap to expose to a renderer as a blob ;-)

In the case of media processing, I'd imagine that decoding probably happens in a sandboxed process.  The data is probably not in the browser process in that case.

It might help to explore this further with folks who know more details about how the media backend works.

-Darin

Ami Fischman

Jul 17, 2013, 1:59:00 PM
to Darin Fisher, Harald Alvestrand, Greg Billock, blink-dev, Rachel Blum
To be clear, the API under discussion is for exposing the result of /encoding/ a video stream, not /decoding/ or post-processing it.

While the source of the stream's frames is the browser process (for most scenarios), the encoder runs in the renderer for security/sandboxing reasons, and because it matches chrome's general process split model better.  Someday soon the encoder may run in the GPU process when we have HW-accelerated encoding, but it doesn't seem likely that the encoder would ever run in the browser process. 
Most/all output from the video encoder is destined for other processes (either for saving to disk or for sending on the network) so I suspect the right thing to do is to make the encoders emit their output to shared memory in the first place to minimize copying of encoded bits, and to allow cross-process access to them.  Certainly this is the plan for HW-accelerated video encode, if only b/c that happens in the GPU process and we don't want to incur the extra copy to ship it to the renderer.

Darin: if Blobs can wrap shared-memory segments are you still concerned with "spamming the browser process with a lot of blobs"?
Is there guidance on what makes the difference between "spamming" and "prudent use"? :)

Cheers,
-a

Darin Fisher

Jul 17, 2013, 2:07:52 PM
to Ami Fischman, Harald Alvestrand, Greg Billock, blink-dev, Rachel Blum
On Wed, Jul 17, 2013 at 10:59 AM, Ami Fischman <fisc...@chromium.org> wrote:
> To be clear, the API under discussion is for exposing the result of /encoding/ a video stream, not /decoding/ or post-processing it.
>
> While the source of the stream's frames is the browser process (for most scenarios), the encoder runs in the renderer for security/sandboxing reasons, and because it matches chrome's general process split model better.  Someday soon the encoder may run in the GPU process when we have HW-accelerated encoding, but it doesn't seem likely that the encoder would ever run in the browser process.
> Most/all output from the video encoder is destined for other processes (either for saving to disk or for sending on the network) so I suspect the right thing to do is to make the encoders emit their output to shared memory in the first place to minimize copying of encoded bits, and to allow cross-process access to them.  Certainly this is the plan for HW-accelerated video encode, if only b/c that happens in the GPU process and we don't want to incur the extra copy to ship it to the renderer.

I see.  It is also possible for us to add complexity to the blob system to back blobs with promises to provide data from other sources.  We avoided doing something like that originally to minimize complexity.

 

> Darin: if Blobs can wrap shared-memory segments are you still concerned with "spamming the browser process with a lot of blobs"?
> Is there guidance on what makes the difference between "spamming" and "prudent use"? :)

Using SHM can probably help a lot.  We wouldn't need to map the SHM into the browser process for instance.  We might want to think carefully about what it means for the blobs to be mutated by sandboxed processes after the browser has a handle to their data.  There might be some security concerns there.

-Darin

Harald Alvestrand

Jul 17, 2013, 2:37:42 PM
to Darin Fisher, Ami Fischman, Greg Billock, blink-dev, Rachel Blum
Aren't blobs immutable?
That was an argument posed on the public-webrtc list in favour of using blobs rather than ArrayBuffers.

Ami Fischman

Jul 17, 2013, 3:22:45 PM
to Harald Alvestrand, Darin Fisher, Greg Billock, blink-dev, Rachel Blum
Harald: Darin is talking about the possibility of a compromised renderer mutating the shared memory after handing a handle off to the browser (i.e. exploiting the mutability of the proposed implementation of Blob, ignoring the immutability of Blob's API).

Darin: agreed that this should be kept in mind during implementation.  I suspect that the result will be that the browser never inspects the bits in these SHMs, only ever handing them off to other processes or sockets, and that all receivers of such data will treat it the same as encoded media data from the web - as untrusted potentially malicious bits.

Cheers,
-a

Darin Fisher

Jul 17, 2013, 3:26:54 PM
to Ami Fischman, Harald Alvestrand, Greg Billock, blink-dev, Rachel Blum
On Wed, Jul 17, 2013 at 12:22 PM, Ami Fischman <fisc...@chromium.org> wrote:
> Harald: Darin is talking about the possibility of a compromised renderer mutating the shared memory after handing a handle off to the browser (i.e. exploiting the mutability of the proposed implementation of Blob, ignoring the immutability of Blob's API).
>
> Darin: agreed that this should be kept in mind during implementation.  I suspect that the result will be that the browser never inspects the bits in these SHMs, only ever handing them off to other processes or sockets, and that all receivers of such data will treat it the same as encoded media data from the web - as untrusted potentially malicious bits.

Yeah, hopefully it's a non-issue.

Note: the browser will need to map these blobs when writing them to files or sending them over the network, but that would just be a temporary thing.  The "spamming" concern is about using up browser memory / address space.

Greg Billock

Jul 22, 2013, 2:53:01 AM
to blin...@chromium.org, Ami Fischman, Harald Alvestrand, Greg Billock, Rachel Blum


On Wednesday, July 17, 2013 12:26:54 PM UTC-7, Darin Fisher wrote:
> On Wed, Jul 17, 2013 at 12:22 PM, Ami Fischman <fisc...@chromium.org> wrote:
>> Harald: Darin is talking about the possibility of a compromised renderer mutating the shared memory after handing a handle off to the browser (i.e. exploiting the mutability of the proposed implementation of Blob, ignoring the immutability of Blob's API).
>>
>> Darin: agreed that this should be kept in mind during implementation.  I suspect that the result will be that the browser never inspects the bits in these SHMs, only ever handing them off to other processes or sockets, and that all receivers of such data will treat it the same as encoded media data from the web - as untrusted potentially malicious bits.
>
> Yeah, hopefully it's a non-issue.
>
> Note: the browser will need to map these blobs when writing them to files or sending them over the network, but that would just be a temporary thing.  The "spamming" concern is about using up browser memory / address space.

Yes. This came up in the public-media-capture discussion -- if the intention is to send the blobs out over the wire, the bytes having been transferred may end up not being that big a deal, presuming they'd end up in the browser process anyhow to get copied to the network.

Would it make sense, in a first pass, to avoid SHM (and the worries that accompany it) and instead apply some rate limiting?

That may end up impairing the feature in platform-inconsistent ways, though (i.e. making high-res encodes harder on constrained platforms). But perhaps the API needs some back-pressure error handling to account for this kind of scenario... if the platform just can't handle whatever the app is trying to do, there are onerror and onwarning events in the API it can watch, but perhaps we need more definition of what these kinds of errors/warnings will look like.

Daniel Bratell

Jul 25, 2013, 5:28:09 AM
to blin...@chromium.org, Greg Billock, Harald Alvestrand, Rachel Blum
On 2013-07-16 18:23:19, Greg Billock <gbil...@chromium.org> wrote:

> I'd expect the latency cost of the Blobs to be the biggest worry. I
> think we can likely manage that, but it is a good question. I'll do
> some investigation along those lines.
>
> Do you think the API should use, for example, ArrayBuffer instead of
> Blob? Harald can likely point us to discussions of this nature if
> they've been had.

What is the lifespan of these objects? Blobs can be converted to url
strings through the createObjectURL method and I wonder if that means that
blobs have to be kept around indefinitely? When blobs refer to files they
are cheap since all you need is a url <-> file name mapping but if they
are going to refer to an in-memory representation of a media recording
that can be MBs or GBs in size, then it matters quite a lot.

Also, if they are going to be stored in the browser process it seems they
could easily exhaust the browser process address space which would be very
bad for the whole browser (denial-of-service kind of).

I admit to not knowing all the details here but memory usage (temporary
peaks and long term) need to be considered.

/Daniel

Darin Fisher

Jul 25, 2013, 11:43:46 AM
to Daniel Bratell, blink-dev, Greg Billock, Harald Alvestrand, Rachel Blum
createObjectURL is a pretty unfortunate API.  Yes, when you use it you are creating the potential for a memory leak.  You have to call revokeObjectURL when you are done with the Blob URL.  Otherwise, the Blob data is retained (browser-side) until the document is unloaded.

We can avoid some of the browser-side memory issues by storing Blob data in files or unmapped shared memory.
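The revoke discipline described above can be wrapped in a small helper (a sketch; the helper name is made up):

```javascript
// Create a blob: URL, hand it to the callback, and always revoke it afterwards
// so the browser-side Blob data isn't retained until the document is unloaded.
function withBlobUrl(blob, use) {
  const url = URL.createObjectURL(blob);
  try {
    use(url); // e.g. set it as a download link or media element source
  } finally {
    URL.revokeObjectURL(url);
  }
}
```

The try/finally guarantees revocation even if the callback throws, which is the leak the unpaired createObjectURL call would otherwise cause.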

-Darin

Thibault Imbert

Jul 26, 2013, 2:30:30 AM
to Darin Fisher, Daniel Bratell, blink-dev, Greg Billock, Harald Alvestrand, Rachel Blum
This is great news. I actually wanted to port recently an existing lib [1] I wrote in AS3 a few years back and it was brutal to have to rely on Web Audio for that. I shared some thoughts here [2].

A few more thoughts:

1. I am sorry if I missed that in the spec, but it seems like the data is encoded to specific formats, which is very cool. But for best flexibility, why not still provide the raw PCM samples as an option when it comes to audio? This would first, simplify largely the way you retrieve the stream (no Web Audio required), but also allow people to write the encoders they want using typed arrays if needed. Additionally, what if you want to access the samples to draw a simple spectrum like [3]. Again, having a very simple way to access the incoming raw samples would be very flexible.

2. If such raw samples were exposed. It would also be convenient to have both channels interleaved already (not like with Web Audio with getChannelData) or make this optional, so that we don't have to store both channels to interleave them later manually. 

Harald Alvestrand

Jul 26, 2013, 2:39:59 AM
to Thibault Imbert, Darin Fisher, Daniel Bratell, blink-dev, Greg Billock, Rachel Blum
On Fri, Jul 26, 2013 at 8:30 AM, Thibault Imbert <thibaul...@gmail.com> wrote:
> This is great news. I actually wanted to port recently an existing lib [1] I wrote in AS3 a few years back and it was brutal to have to rely on Web Audio for that. I shared some thoughts here [2].
>
> A few more thoughts:
>
> 1. I am sorry if I missed that in the spec, but it seems like the data is encoded to specific formats, which is very cool. But for best flexibility, why not still provide the raw PCM samples as an option when it comes to audio? This would first, simplify largely the way you retrieve the stream (no Web Audio required), but also allow people to write the encoders they want using typed arrays if needed. Additionally, what if you want to access the samples to draw a simple spectrum like [3]. Again, having a very simple way to access the incoming raw samples would be very flexible.

What's raw about PCM?

If you want raw, audio/L16 is your friend.
(This is a good example of the wisdom of encoding to specific MIME types - all the questions like sample rate, number of bits, applied compression and so on either turn into parameters or "read the spec").
 

> 2. If such raw samples were exposed. It would also be convenient to have both channels interleaved already (not like with Web Audio with getChannelData) or make this optional, so that we don't have to store both channels to interleave them later manually.

audio/L16 with channels=2 provides that.

RFC 2586 is the registration of "raw audio data" as a MIME type.
There's also RFC 3190 for audio/L24 if you think 16 bits is too coarse.
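In the API as later shipped, this kind of format negotiation happens through MediaRecorder.isTypeSupported. A sketch of picking the first supported type from a preference list (whether any implementation actually accepts audio/L16 here is an open question):

```javascript
// Return the first MIME type the recorder implementation claims to support,
// or null if none of the candidates are accepted.
function pickMimeType(candidates) {
  return candidates.find((t) => MediaRecorder.isTypeSupported(t)) ?? null;
}

// e.g. pickMimeType(['audio/L16;rate=44100;channels=2', 'audio/webm'])
```

Because the parameters (sample rate, channel count) ride along with the MIME type, the caller never has to guess what the recorded bytes mean.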

Thibault Imbert

Jul 26, 2013, 3:04:27 AM
to Harald Alvestrand, Darin Fisher, Daniel Bratell, blink-dev, Greg Billock, Rachel Blum
Perfect then, thanks! For raw/PCM, thanks for the reminder about the terminology.

Greg Billock

Jul 29, 2013, 12:21:53 PM
to Daniel Bratell, blink-dev, Harald Alvestrand, Rachel Blum
On Thu, Jul 25, 2013 at 2:28 AM, Daniel Bratell <bra...@opera.com> wrote:
> On 2013-07-16 18:23:19, Greg Billock <gbil...@chromium.org> wrote:
>> I'd expect the latency cost of the Blobs to be the biggest worry. I think we can likely manage that, but it is a good question. I'll do some investigation along those lines.
>>
>> Do you think the API should use, for example, ArrayBuffer instead of Blob? Harald can likely point us to discussions of this nature if they've been had.
>
> What is the lifespan of these objects? Blobs can be converted to url strings through the createObjectURL method and I wonder if that means that blobs have to be kept around indefinitely? When blobs refer to files they are cheap since all you need is a url <-> file name mapping but if they are going to refer to an in-memory representation of a media recording that can be MBs or GBs in size, then it matters quite a lot.

There are two use cases. One is that recording goes directly to a file, with the blob only being produced upon stream exhaustion or stop(). This shouldn't be a problem in terms of memory.

The other case is when the API produces intermediate results -- a blob every 50ms or something, say. (the record(timeslice) API). The assumption here is that the app will immediately do something with the intermediate blob (send it via the network, write it to disk) and then get rid of it. Obviously there's a lot more room for error there, and a lot more latency sensitivity to, for example, writing each intermediate blob to disk and then producing the handle.

This second use case is what I was thinking of in suggesting rate limiting, since as you say, there could be a lot of memory cost, especially in a low-memory device.
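The second use case amounts to an upload-and-discard loop; a sketch, with a made-up endpoint URL parameter:

```javascript
// Post each intermediate Blob as it arrives and keep no reference to it,
// so chunks can be garbage-collected instead of accumulating in memory.
function recordToServer(stream, timesliceMs, uploadUrl) {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => {
    if (e.data.size === 0) return;
    fetch(uploadUrl, { method: 'POST', body: e.data });
  };
  recorder.start(timesliceMs);
  return recorder;
}

// How many intermediate blobs a recording of a given length produces --
// the quantity any rate limiting would have to bound.
function chunkCount(durationMs, timesliceMs) {
  return Math.ceil(durationMs / timesliceMs);
}
```

A one-minute recording at a 50 ms timeslice is 1200 blobs, which makes the memory-pressure concern on low-memory devices concrete.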

Rachel Blum

Jul 29, 2013, 5:17:39 PM
to Greg Billock, Daniel Bratell, blink-dev, Harald Alvestrand
If you read the blob spec creatively, I don't think there's anything in there that forbids expiration of blobs. In fact, the snapshot state seems to explicitly enable this kind of mechanism.

So, as the strange thought for the day: what if record(timeslice) required you to specify a sliding window size for blobs, so memory can be reserved (and failure detected) at invocation, as opposed to at some unspecified later point?

 - rachel

Greg Billock

Aug 2, 2013, 12:14:01 PM
to blin...@chromium.org, Daniel Bratell, Greg Billock, Harald Alvestrand, Rachel Blum, Takeshi Yoshino, Kenji Baheux
Rachel and I have started a discussion on public-media-capture about using Stream for this API. It gets around some of the sharp edges of using Blob by using a piece of the File API that's a bit more suitable for, well, streamed results.

A complication is that Stream is under active discussion and hasn't fully stabilized at this point.

Under the covers, it looks like the internal structure we'll want to use is a mechanism where the encoded bits are appended to an internal buffer, which is then either piped to the JS API or to an underlying disk location which is then provided to the API as a Blob (Streams have a way to do that as well). I believe the Stream API could end up being a good match for a lot of the issues we've talked about -- it looks like it'll have the discard-on-read behavior that we need.

Greg Billock

Aug 5, 2013, 7:04:54 PM
to blink-dev, Daniel Bratell, Greg Billock, Harald Alvestrand, Rachel Blum, Takeshi Yoshino, Kenji Baheux
Thanks to everyone for comments about the implementation. I've forwarded the questions on to public-media-capture if you're interested to read more discussion there.

For next steps with Blink, what is your recommendation? We could move forward with the API as is (it'll be behind a flag and/or exposed only in dev for the time being). We could move forward with a variant of the API we think is more likely to stabilize. We could wait and see what the outcome is of the suggestion to leverage Stream to reuse the File API better.

My guess is that the bulk of the implementation work won't be that different regardless of approach -- my estimate is that it'll be in getting the plumbing from the encoders working correctly. We may learn important facts in that process that contribute to the development of the API. So my preference would be to start earlier with exploratory work, even if we're targeting an API signature we think is likely to change.

I think the concerns about memory exhaustion and SHM security and the like are important and I'm glad we discussed them before starting, and I'm guessing there'll be more questions that arise as we go. My reading of the discussion is that we don't anticipate any dealbreakers, however.


Greg Billock

Aug 14, 2013, 7:53:35 PM
to blink-dev, Daniel Bratell, Greg Billock, Harald Alvestrand, Rachel Blum, Takeshi Yoshino, Kenji Baheux
I've filed a bug for this:


Any other action needed here? WebRTC folks are currently reviewing the design doc. Any volunteers for Blink code reviews? Shouldn't be that much code.

Rachel Blum

Aug 19, 2013, 6:42:32 PM
to Greg Billock, blink-dev, Daniel Bratell, Harald Alvestrand, Takeshi Yoshino, Kenji Baheux
Just pinging to ask again whether anybody volunteers for Blink code reviews of MediaRecorder - it seems there's mostly agreement that it's a good idea to implement and experiment?

 - rachel

kaust...@samsung.com

Feb 10, 2014, 4:13:11 AM
to blin...@chromium.org
Hi, is anyone implementing this? I am interested in pushing the patches for MediaStreamRecorder implementation. 

anup.k...@gmail.com

Apr 29, 2014, 9:09:13 AM
to blin...@chromium.org
Is there any update on this ?

mca...@google.com

Jul 10, 2015, 7:19:19 PM
to blin...@chromium.org, bra...@opera.com, gbil...@chromium.org, tyos...@chromium.org, gr...@chromium.org, h...@google.com
Hi everyone,

I'd like to resuscitate this thread and get updated comments on the Recording JS API.

I started writing the Chromium parts (way easier); there's a design doc under https://goo.gl/kreaQj if anyone wants to read it.

FTR, http://crbug.com/262211 is the implementation bug for the OWP launch tracked in http://crbug.com/261321.

Domenic Denicola

Jul 10, 2015, 7:37:50 PM
to mca...@google.com, blin...@chromium.org, bra...@opera.com, gbil...@chromium.org, tyos...@chromium.org, gr...@chromium.org, h...@google.com, Yutaka Hirano
This is very exciting! I've heard from at least one developer at a major video site that this API being only available in Firefox is a blocker for them.

The spec at https://w3c.github.io/mediacapture-record/MediaRecorder.html is quite unfortunate, in that MediaRecorder is basically a poorly-designed ReadableStream [1]. E.g., it requires manual pause/resume, and uses an awkward requestData + ondataavailable combo instead of a promise-returning read() method. In an ideal world, we would take the time to reformulate it as a true ReadableStream, and get all the attendant benefits in terms of interoperability, automatic pause/resume via the pull interface, ease of use, matching Fetch, and the like.

However, I lack background in this area, and I might be missing something crucial that ends up causing this to be a mismatch for streams. Additionally, I don't want to block getting a useful feature into the hands of web developers; it sounds like there's already a decent amount of work underway toward implementing this as-is. (Although maybe it is the infrastructure code, and not so much the web-facing-API code?). And Mozilla does ship it already.

Nevertheless, I do want to offer that, if there is interest, I'd be happy to buckle down and produce a version of the spec that works in terms of the modern Streams Standard. I could probably get that done by mid-next-week. My understanding of Yutaka's streaming Fetch code is that it should be generic enough to implement a type of Blob-producing stream in Blink without much work; he can probably speak to the implementation better.
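A rough sketch of what such an adapter could look like, wrapping the existing event-based API in a ReadableStream of Blobs (speculative; neither spec defines this):

```javascript
// Each dataavailable Blob becomes a stream chunk; stopping the recorder
// closes the stream, and cancelling the stream stops the recorder.
function recorderToStream(mediaStream, timesliceMs) {
  const recorder = new MediaRecorder(mediaStream);
  return new ReadableStream({
    start(controller) {
      recorder.ondataavailable = (e) => {
        if (e.data.size > 0) controller.enqueue(e.data);
      };
      recorder.onstop = () => controller.close();
      recorder.start(timesliceMs);
    },
    cancel() {
      recorder.stop();
    },
  });
}
```

A consumer would then use the standard reader interface (getReader().read()) and get pull-based back pressure for free instead of managing requestData calls by hand.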

WDYT?

[1]: https://streams.spec.whatwg.org/#rs

Philip Jägenstedt

Jul 13, 2015, 8:05:34 AM
to Domenic Denicola, mca...@google.com, blin...@chromium.org, bra...@opera.com, gbil...@chromium.org, tyos...@chromium.org, gr...@chromium.org, h...@google.com, Yutaka Hirano
I also took a quick look at the spec and found a number of trivial issues.

Is it only Gecko that has implemented and shipped it, and are they happy with it, or would they be willing to change some things? What Domenic suggests makes sense to me, but it comes down to usage in the wild and the willingness of other vendors to change it.

Harald Alvestrand

Jul 13, 2015, 2:58:28 PM
to Domenic Denicola, mca...@google.com, blin...@chromium.org, bra...@opera.com, gbil...@chromium.org, tyos...@chromium.org, gr...@chromium.org, Yutaka Hirano
Domenic,

the MediaRecorder spec is the work mainly of Jim Barnett and Travis Leithead, was basically finished some time in 2013-2014, and hasn't been updated much since.

At the time it was finished, Promise was still a controversial mechanism that was undergoing significant change - and Streams was very many iterations from being what it is today.

I think a proposal to use a Streams-like interface with Promises would be very interesting - but the right forum for that is the Media Capture Task Force - we can't do the standards work on the Blink list.

Coming?

Harald




Domenic Denicola

Jul 13, 2015, 6:22:37 PM
to Harald Alvestrand, mca...@google.com, blin...@chromium.org, bra...@opera.com, gbil...@chromium.org, tyos...@chromium.org, gr...@chromium.org, Yutaka Hirano
From: Harald Alvestrand [mailto:h...@google.com]

> I think a proposal to use a Streams-like interface with Promises would be very interesting - but the right forum for that is the Media Capture Task Force - we can't do the standards work on the Blink list.
>
> Coming?

At this point I'm more interested in the question of how this impacts the Blink development effort mentioned by mcasas@ than I am about figuring out how to create a Rec-track document. If Blink is amenable to the idea and has no strong implementation concerns about the spec draft I produce, then I'm happy to turn such a draft over to whatever task force or working group or whatever is necessary so they can push it through The Process.