Recording the audio of all conference participants is not working

138 views

Adler Parobocz

Dec 14, 2021, 11:55:47 AM
to discuss-webrtc
Hi guys, 

I'm trying to record the audio of all conference participants along with screen sharing, but I can only record audio from my own computer. Any ideas?

My test code for recording screen sharing and audio:

const stream = new MediaStream();
const videoTrack = screenStream.getVideoTracks()[0];
stream.addTrack(videoTrack)

const audioTrack = audioStream.getAudioTracks()[0];
stream.addTrack(audioTrack);
stream.addTrack(audio_track_connections[0].audio_track.track); // <-- here I add the other participants' audio, but it's not working

Thanks

Adler Parobocz

Dec 15, 2021, 12:39:59 PM
to discuss-webrtc
I tried it this way and nothing works when recording the other participants' audio; I can only record my own. Any ideas?

const screenStream = await captureScreen();
const audioStream = await captureAudio();

const stream = new MediaStream([
...screenStream.getTracks(),
...audioStream.getTracks(),
]);

async function captureScreen(mediaConstraints = { video: true }) {
    const screenStream = await navigator.mediaDevices.getDisplayMedia(
      mediaConstraints
    );
    return screenStream;
}

async function captureAudio(mediaConstraints = { video: false, audio: true }) {
    const audioStream = await navigator.mediaDevices.getUserMedia(
      mediaConstraints
    );
    return audioStream;
}

guest271314

Dec 15, 2021, 10:33:27 PM
to discuss-webrtc
You can utilize the Web Audio API MediaStreamAudioDestinationNode to connect multiple audio tracks to the same MediaStream; see "Is it possible to mix multiple audio files on top of each other preferably with javascript" https://stackoverflow.com/a/40571531.
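A minimal sketch of that suggestion (assuming a browser environment; `remoteStreams` is a hypothetical array of MediaStreams received from the peer connections, not something from the original code): every stream with audio is wrapped in a source node and connected to one destination node, whose stream carries a single mixed audio track.

```javascript
// Sketch: mix N audio streams into a single recordable track with the
// Web Audio API. The AudioContext is passed in so the wiring is explicit.
function mixAudioStreams(audioContext, streams) {
  const destination = audioContext.createMediaStreamDestination();
  for (const s of streams) {
    // only connect streams that actually carry audio
    if (s.getAudioTracks().length > 0) {
      audioContext.createMediaStreamSource(s).connect(destination);
    }
  }
  return destination.stream; // one MediaStream with one mixed audio track
}

// Usage (browser only):
// const ctx = new AudioContext();
// const mixed = mixAudioStreams(ctx, [micStream, ...remoteStreams]);
// const recorder = new MediaRecorder(
//   new MediaStream([screenVideoTrack, ...mixed.getAudioTracks()])
// );
```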

Adler Parobocz

Dec 16, 2021, 9:34:08 AM
to discuss-webrtc
I appreciate the information.

I would like to do what the Stack Overflow answer describes, with one extra detail: also add the audio of the video conference participants. I tried to do it this way:

const ctx = new AudioContext();
const dest = ctx.createMediaStreamDestination();

if (audioStream.getAudioTracks().length > 0) {
    ctx.createMediaStreamSource(audioStream).connect(dest);
}

let tracks = dest.stream.getTracks();
tracks = tracks
    .concat(audioStream.getVideoTracks())
    .concat(screenStream.getVideoTracks());

stream = new MediaStream(tracks);
mediaRecorder = new MediaRecorder(stream);

I still have the problem: I can't add everyone's audio, only my own. The audio of the other participants' videos is not being captured in the recording.

Thanks

Adler Parobocz

Dec 16, 2021, 12:47:17 PM
to discuss-webrtc
I tried this and it's not working:

var ac = new AudioContext();
var osc = ac.createOscillator();
var dest = ac.createMediaStreamDestination();
mediaRecorder = new MediaRecorder(dest.stream);
osc.connect(dest);
osc.start(); // without start() the oscillator produces no signal

mediaRecorder.start();
mediaRecorder.addEventListener('start', handleMediaRecorderStart);
mediaRecorder.addEventListener('dataavailable', function(event) {
    console.log('MediaRecorder data: ', event);
    if (event.data && event.data.size > 0) recordedBlobs.push(event.data);
});
mediaRecorder.addEventListener('stop', function() {
    const type = 'mp4';
    const blob = new Blob(recordedBlobs, { type: 'video/mp4' }); // an options object can only carry one `type`
    const recFileName = getDataTimeString() + '-REC.' + type;
    const url = window.URL.createObjectURL(blob);
    const a = document.createElement('a');
    a.style.display = 'none';
    a.href = url;
    a.download = recFileName; // was `file`, which is undefined here
    document.body.appendChild(a);
    a.click();
    setTimeout(() => {
        document.body.removeChild(a);
        window.URL.revokeObjectURL(url);
    }, 100);
});
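On the container question: the browser decides which formats MediaRecorder can actually produce, so hard-coding one Blob `type` is fragile. A small sketch (the candidate list below is illustrative, not from the original code) of picking a supported mimeType up front, with the support check injected so the selection logic can run anywhere:

```javascript
// Sketch: pick the first recording mimeType the environment supports.
// `isSupported` is injected; in a browser, pass MediaRecorder.isTypeSupported.
function pickMimeType(candidates, isSupported) {
  for (const t of candidates) {
    if (isSupported(t)) return t;
  }
  return ''; // empty string: let MediaRecorder fall back to its default
}

// Usage (browser only):
// const mimeType = pickMimeType(
//   ['video/webm;codecs=vp9,opus', 'video/webm;codecs=vp8,opus', 'video/mp4'],
//   (t) => MediaRecorder.isTypeSupported(t)
// );
// mediaRecorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});
```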

Adler Parobocz

Dec 16, 2021, 3:17:38 PM
to discuss-webrtc
This code seems more sensible to me for recording audio and video by merging the streams, but unfortunately it is not working. Does anyone have any other ideas?
 
const DISPLAY_STREAM = await navigator.mediaDevices.getDisplayMedia({video: {cursor: "motion"}, audio: {'echoCancellation': true}}); // retrieving screen-media
const VOICE_STREAM = await navigator.mediaDevices.getUserMedia({ audio: {'echoCancellation': true}, video: false }); // retrieving microphone-media

AUDIO_CONTEXT = new AudioContext();
MEDIA_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(DISPLAY_STREAM); // passing source of on-screen audio
MIC_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(VOICE_STREAM); // passing source of microphone audio

AUDIO_MERGER = AUDIO_CONTEXT.createMediaStreamDestination(); // audio merger

MEDIA_AUDIO.connect(AUDIO_MERGER); // passing media-audio to merger
MIC_AUDIO.connect(AUDIO_MERGER); // passing  microphone-audio to merger

const TRACKS = [...DISPLAY_STREAM.getVideoTracks(), ...AUDIO_MERGER.stream.getTracks()] // connecting on-screen video with merged-audio
stream = new MediaStream(TRACKS);

mediaRecorder = new MediaRecorder(stream);

guest271314

Dec 16, 2021, 8:16:15 PM
to discuss...@googlegroups.com
> unfortunately it is not working

That code should work as intended. Can you describe what is not working?

--

---
You received this message because you are subscribed to a topic in the Google Groups "discuss-webrtc" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/discuss-webrtc/PIiHqo1FSMU/unsubscribe.
To unsubscribe from this group and all its topics, send an email to discuss-webrt...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/discuss-webrtc/ebddc9ec-d171-427a-b3c4-004483ad1fe7n%40googlegroups.com.

guest271314

Dec 16, 2021, 8:29:03 PM
to discuss...@googlegroups.com
> unfortunately it is not working

Just tested. Working as intended when mediaRecorder.start() is called. Note, Chromium on *nix only captures audio when Tab is selected, and has unexpected, unspecified muting and unmuting behaviour when no user action is occurring on the tab https://github.com/w3c/mediacapture-screen-share/issues/141.


Adler Parobocz

Dec 17, 2021, 6:22:53 AM
to discuss-webrtc
Both pieces of code I presented here work, as long as it's just me on my own machine. However, in the application there is more than one video: I have a video conference room, and I intend to put all the videos and audios together and then save the recording. When I try to do that, it does not work.
My problem is: how can I record the audio of all the participants in the room?
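One way to approach that question, sketched under the assumption that `connections` is a map from socket id to an established RTCPeerConnection (as in the snippets earlier in this thread, not the author's actual code): pull every remote audio track off each connection's receivers, then feed them all into the same Web Audio mixer. The track-collection step is plain JavaScript:

```javascript
// Sketch: gather every remote audio track from a map of RTCPeerConnections.
// In a browser, each value would be a real RTCPeerConnection; getReceivers()
// returns the RTCRtpReceivers for all incoming tracks.
function collectRemoteAudioTracks(connections) {
  const tracks = [];
  for (const sid in connections) {
    for (const receiver of connections[sid].getReceivers()) {
      if (receiver.track && receiver.track.kind === 'audio') {
        tracks.push(receiver.track);
      }
    }
  }
  return tracks;
}

// Usage (browser only): wrap each track in a stream and connect it to the mixer.
// const ctx = new AudioContext();
// const dest = ctx.createMediaStreamDestination();
// ctx.createMediaStreamSource(micStream).connect(dest);
// for (const track of collectRemoteAudioTracks(connections)) {
//   ctx.createMediaStreamSource(new MediaStream([track])).connect(dest);
// }
// // then record dest.stream's audio track together with the screen video track
```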

guest271314

Dec 17, 2021, 9:56:43 AM
to discuss...@googlegroups.com
Where do you get all of the video and audio that is not on your machine from?


Adler Parobocz

Dec 17, 2021, 12:43:00 PM
to discuss-webrtc
The attached file has my logic for audio and video, using the handleVideoOffer function.

room.js.txt

guest271314

Dec 17, 2021, 6:39:08 PM
to discuss...@googlegroups.com
Are you expecting us to debug your entire script?


Adler Parobocz

Dec 20, 2021, 7:10:14 AM
to discuss-webrtc
Sorry, I had separated all the code into a file for better understanding, but it wasn't saved. I'll explain it here.

I have a mystreamvideo variable:

let mediaConstraints = { video: true, audio: { 'echoCancellation': true } };
mystreamvideo = await navigator.mediaDevices.getUserMedia(mediaConstraints);

At this point I check whether the user is going to share the screen, using the myscreenshare variable and placing it on top of the video so there aren't two separate videos. I use it like this:

myscreenshare = await navigator.mediaDevices.getDisplayMedia({ cursor: true });
myscreenshare.getVideoTracks()[0].onended = () => { startStream('screen_off'); };

I use the mystreamvideo variable to display the video of all participants in the room.

Then I do a for loop over my connections to add the tracks.

When everyone is in the room, there is a button to start recording. As I showed earlier in this conversation, when the record button is clicked I call getDisplayMedia and getUserMedia, separate the audio and video, put the tracks in an array, and join them into a MediaStream to record.

Is my logic right?

guest271314

Dec 26, 2021, 10:28:08 PM
to discuss...@googlegroups.com
I am still not sure what the issue is.

Adler Parobocz

Dec 27, 2021, 8:31:45 AM
to discuss-webrtc
I appreciate the feedback; my problem is summed up like this.

A brief explanation of how the code works.
When I access the video conference room, I have a screen that defines the username and forwards to the room using an access link with the room already defined, for example xyz.com/room/room123.
At this moment I use socket.io to tell the Node.js server which room the user is in, do the processing in the database, and return the socket.io data indicating which room is connected
and the socket_id. With the data formatted, I start adding each socket id to the RTC layer, building an array with an RTCPeerConnection per socket_id.

// my code, commented with some explanations
socket.on('join room', async (data) => {
    // add socketID connections
    connections[sid] = new RTCPeerConnection(configuration);
    // send the candidate event data
    connections[sid].onicecandidate
    // I use it to add new video entries
    connections[sid].ontrack
    // I use it to renegotiate the session
    connections[sid].onnegotiationneeded
    // start the stream
    mystreamvideo = await navigator.mediaDevices.getUserMedia(mediaConstraints);

    // loop over the connections and add the tracks
    // (note: the loop variable must be used inside the body; the original
    // always indexed connections[sid], so only one connection was updated)
    for (const key in connections) {
        let senders = connections[key].getSenders();
        let audio_sender = (senders && senders.length)
            ? senders.filter(sender => sender && sender.track && sender.track.kind == 'audio')
            : null;
        console.log('audio sender', audio_sender);
        if ((!audio_sender || !audio_sender.length) && audio_track) {
            audio_track_stream = connections[key].addTrack(audio_track, mystreamvideo);
            console.log('startStream: add audio');
            senders = connections[key].getSenders();
        }

        let video_sender = (senders && senders.length)
            ? senders.filter(sender => sender && sender.track && sender.track.kind == 'video')
            : null;
        if (video_sender && video_sender.length) {
            if (!video_track) {
                connections[key].removeTrack(video_sender[0]);
                console.log('startStream: remove video');
            } else if (video_track.id != video_sender[0].track.id) {
                await video_sender[0].replaceTrack(video_track);
                console.log('startStream: switch video');
            }
        } else if (video_track) {
            // connections[key].addTrack(video_track, myscreenshare ? myscreenshare : mystreamvideo);
            if (myscreenshare) {
                // streamless addTrack -> add the track to the remote peer's stream manually, along with the audio
                connections[key].addTrack(video_track);
                console.log('startStream: add screen video');
            } else {
                connections[key].addTrack(video_track, mystreamvideo);
                console.log('startStream: add webcam video');
            }
        }
    } // close for()
})
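For the recorder to reach the other participants' audio later, the `ontrack` handler referenced above has to keep the remote streams somewhere. A sketch of that bookkeeping (not the author's code; `remoteStreams` and `handleTrack` are hypothetical names), with the storage logic separated out:

```javascript
// Sketch: remember each remote MediaStream by socket id so the recording
// code can mix them later. `store` maps socket id -> MediaStream.
const remoteStreams = {};

function handleTrack(sid, event, store) {
  // event.streams[0] is the remote MediaStream the incoming track belongs to
  if (event.streams && event.streams.length > 0) {
    store[sid] = event.streams[0];
  }
  return store;
}

// Browser wiring (matching the connection setup above):
// connections[sid].ontrack = (event) => handleTrack(sid, event, remoteStreams);
```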

I think that's it, thank you.

guest271314

Dec 27, 2021, 10:18:18 AM
to discuss...@googlegroups.com
I am still not sure what the issue is with your code. 

Where are audio_track and video_track defined?

In the loop, why is filter() used instead of find()? filter() returns an array that includes connections already established, instead of only the single new connection.

Perhaps start a gist (github) for this communication so that I may help, if possible?

Adler Parobocz

Dec 28, 2021, 6:37:43 AM
to discuss-webrtc
I created a gist with the functions shown here.
I set them out in sequence for better understanding.

