
Help combining MediaRecorder with Web Audio


AirMike

Jun 17, 2014, 7:03:25 PM
to mozilla-...@lists.mozilla.org
Hi,

I have the task of recording the WebRTC local stream audio and the WebRTC remote stream audio.
I succeeded in recording audio with MediaRecorder (using timeSlice), which gives me recorded chunk Blobs (audio/ogg) for both the local and remote audio.

Now, before I send this to the server, I would like to mix the recorded local and remote audio chunks into one chunk using the Web Audio API, and this is where I have some problems.

I used these steps:

1. When both the local and remote audio Blobs are available, I use FileReader to get an ArrayBuffer for each.

2. I use AudioContext.decodeAudioData to get an AudioBuffer (here I get the errors: "The buffer passed to decodeAudioData contains an unknown content type." and "The buffer passed to decodeAudioData contains invalid content which cannot be decoded successfully.").

3. Then I thought I could use an OfflineAudioContext, connect both AudioBuffers to its destination, and get a mixed AudioBuffer. Is this OK?

4. After the OfflineAudioContext rendering is done, I need to convert the AudioBuffer back to a Blob, so I can get a data URL using FileReader and send it to the server. How do I get a Blob from an AudioBuffer?
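That last step could be sketched like this: since AudioBuffer has no built-in encoder, one common approach is to write a 16-bit PCM WAV header and sample data by hand (the helper name `encodeWavPCM16` is made up here; it takes the Float32Arrays returned by `getChannelData()`).

```javascript
// Sketch: encode interleaved 16-bit PCM WAV from Float32 channel data.
// `channels` is an array of Float32Arrays, one per channel, e.g.
// [buffer.getChannelData(0), buffer.getChannelData(1)].
// Returns an ArrayBuffer holding a complete WAV file.
function encodeWavPCM16(channels, sampleRate) {
  const numChannels = channels.length;
  const numFrames = channels[0].length;
  const bytesPerSample = 2;
  const dataSize = numFrames * numChannels * bytesPerSample;
  const buffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buffer);

  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };

  writeString(0, "RIFF");
  view.setUint32(4, 36 + dataSize, true);        // RIFF chunk size
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  view.setUint32(16, 16, true);                  // fmt chunk size
  view.setUint16(20, 1, true);                   // format 1 = PCM
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
  view.setUint16(32, numChannels * bytesPerSample, true);              // block align
  view.setUint16(34, 16, true);                  // bits per sample
  writeString(36, "data");
  view.setUint32(40, dataSize, true);

  // Interleave samples, clamping each float to the signed 16-bit range.
  let offset = 44;
  for (let i = 0; i < numFrames; i++) {
    for (let ch = 0; ch < numChannels; ch++) {
      const s = Math.max(-1, Math.min(1, channels[ch][i]));
      view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
      offset += 2;
    }
  }
  return buffer;
}
```

Wrapping the result then gives the Blob: `new Blob([encodeWavPCM16(channels, sampleRate)], { type: "audio/wav" })`.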

Please help,
Thank You



Robert O'Callahan

Jun 17, 2014, 7:54:58 PM
to AirMike, mozilla-...@lists.mozilla.org
On Wed, Jun 18, 2014 at 11:03 AM, AirMike <airm...@gmail.com> wrote:

> I have the task of recording the WebRTC local stream audio and the WebRTC
> remote stream audio.
> I succeeded in recording audio with MediaRecorder (using timeSlice), which
> gives me recorded chunk Blobs (audio/ogg) for both the local and remote
> audio.
>
> Now, before I send this to the server, I would like to mix the recorded
> local and remote audio chunks into one chunk using the Web Audio API, and
> this is where I have some problems.
>
>

It sounds like you're using MediaRecorder to compress the local and remote
audio chunks on the client, and then trying to uncompress them on the
client, mix them, recompress them and send the result to the server. Is
that right? If so, why are you doing the first compression step instead of
just leaving them uncompressed?

> I used these steps:
>
> 1. When both the local and remote audio Blobs are available, I use
> FileReader to get an ArrayBuffer for each.
>
> 2. I use AudioContext.decodeAudioData to get an AudioBuffer (here I get the
> errors: "The buffer passed to decodeAudioData contains an unknown content
> type." and "The buffer passed to decodeAudioData contains invalid content
> which cannot be decoded successfully.")
>
>

Is your initial compression step using timeSlice to produce multiple Blobs
from a single MediaRecorder? If so, those Blobs must be concatenated to get
a single resource which you can decode successfully. E.g. you can't pass
just the second Blob created by a MediaRecorder to
AudioContext.decodeAudioData and expect it to work.
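A sketch of that concatenation, assuming the chunks were collected in order from the recorder's dataavailable events:

```javascript
// The per-timeslice Blobs from one MediaRecorder are fragments of a single
// Ogg resource: concatenate all of them before decoding.
function concatChunks(chunks, mimeType) {
  return new Blob(chunks, { type: mimeType });
}

// In the browser (variable names assumed):
// const whole = concatChunks(recordedChunks, "audio/ogg");
// const arrayBuffer = await whole.arrayBuffer();
// const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);
```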

Rob

AirMike

Jun 18, 2014, 2:38:32 AM
to mozilla-...@lists.mozilla.org
Thank you for replying.

A video chat between an agent and a client should last at least a couple of minutes, and I can't afford to lose the audio log of the conversation if something goes wrong on the browser or computer side. That's why I'm using the timeSlice option in MediaRecorder: every 3 seconds I send the recorded audio chunk to the server (on the server side I append each new chunk to the existing recording).
As a first step I did this only for the remote audio, and that worked.
Now I need the audio log to have the local and remote audio mixed together, and I have two options (as I see it):
1. mix them on the client side and send only one chunk of data to the server
2. send two chunks of data (local and remote) and mix them on the server side

I'm not doing any explicit compression, but the result of MediaRecorder's dataavailable event is a Blob of type audio/ogg.
It seems I can't decode that Blob to an AudioBuffer in order to mix the two with the Web Audio API and get one Blob as a result.

Thank You

Robert O'Callahan

Jun 18, 2014, 6:59:56 AM
to AirMike, mozilla-...@lists.mozilla.org
Right, if you use the timeSlice option then the individual blobs can't be
decoded individually.

Seems to me you can implement option 1 using Web Audio: create two
MediaStreamAudioSourceNodes, one for the local and one for the remote audio
MediaStreams, mix them together as the input to a
MediaStreamAudioDestinationNode, feed that node's MediaStream into a
MediaRecorder using the timeSlice option, and send the resulting Blobs of
compressed audio to the server. The server can then concatenate the Blobs
to get a complete resource that can be decompressed later.
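That graph could be sketched as follows; this is only a sketch of the approach described above, with the 3-second timeslice taken from the thread and the function and callback names (`buildMixedRecorder`, `onChunk`) invented for illustration:

```javascript
// Mix local and remote MediaStreams and record the mix in timesliced chunks.
function buildMixedRecorder(audioCtx, localStream, remoteStream, onChunk,
                            timeSliceMs = 3000) {
  // One source node per MediaStream.
  const localSource = audioCtx.createMediaStreamSource(localStream);
  const remoteSource = audioCtx.createMediaStreamSource(remoteStream);

  // Connecting both sources to the same destination node mixes them.
  const destination = audioCtx.createMediaStreamDestination();
  localSource.connect(destination);
  remoteSource.connect(destination);

  // Record the mixed stream; with a timeslice, dataavailable fires every
  // timeSliceMs with a Blob of compressed audio ready to upload.
  const recorder = new MediaRecorder(destination.stream);
  recorder.ondataavailable = (event) => onChunk(event.data);
  recorder.start(timeSliceMs);
  return recorder;
}
```

The server then appends each uploaded Blob in order, and the concatenation is one valid compressed resource.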

AirMike

Jun 18, 2014, 7:57:16 AM
to mozilla-...@lists.mozilla.org
Woohooo it works!

Thank You very much.