Adding functionality to iOS


Wayne Hawkins

Jul 10, 2014, 7:21:16 AM
to discuss...@googlegroups.com
Hi guys

I am having some trouble adding functionality to my iOS WebRTC application.

At this point I have a simple WebRTC application with chat functionality.
However, this is pretty basic and I would like to add the following:

- The ability to pause my video and audio transmission, but still receive it, then un-pause at will.
- The ability to take a snapshot with the camera.
- The ability to record video directly from the camera.

I am fairly new to all this and maybe I have missed something in my research.

I know how to record video with the camera using AVFoundation, and I have implemented this, but of course this doesn't work while a 'call' is occurring.
The same holds true for taking a snapshot; that one even crashes the application. I can provide more details if required.

I assume the reason neither of these works is that the camera is already in use for the local stream.

Finally, regarding the video and audio muting: I keep seeing people refer to setting localstream.audiotrack[0].enabled = false (again, I'm likely missing something),
but when I access the audio tracks from the local stream there is no option to set enabled to false.

Thanks for any time and help. 

Wayne 

Kermarec Julien

Jul 10, 2014, 9:35:27 AM
to discuss...@googlegroups.com
Is your application on GitHub?

What WebRTC server do you use?

Julien

Wayne Hawkins

Jul 11, 2014, 4:48:32 AM
to discuss...@googlegroups.com
Sorry, the app is not available on GitHub.

Wayne Hawkins

Jul 11, 2014, 6:55:56 AM
to discuss...@googlegroups.com
Okay, so I've been working on this and I feel an update is in order.

Basically, what I have now is no better than before. What happens is this:
- During the stream I press a button to take a snapshot or start recording from the camera.
- The stream freezes my outgoing video, in the sense that it just displays the last frame (in the local preview and on the other end).
    - I assume this is because the AVCaptureSession 'takes' control of the device. The result is that my snapshot or recording works, but the video from my iPad/iPhone is frozen. At least the audio remains.

This would almost be fine if I could figure out how to restart the video transmission, but I guess I'm just being slow.

Thanks

Wayne Hawkins

Jul 15, 2014, 8:12:52 AM
to discuss...@googlegroups.com
I have continued working on this and have found a potential solution for muting the video/audio, though it seems a bit 'dodgy' to me.

As I could not find any way to enable or disable the audio and video tracks in the media stream, I figured I would first remove the local media stream from the peer connection, then remove the audio or video track from the stream,
and finally add the stream back to the peer connection.

This successfully lets me keep one component while the other is essentially 'muted'; a sketch of the idea is below.
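
A rough sketch of that remove/re-add approach (against the 2014-era ObjC bindings; peerConnection, localStream, savedAudioTrack and the exact method names are assumptions and vary by revision):

    var savedAudioTrack: RTCAudioTrack?

    func setAudioMuted(muted: Bool) {
        // Detach the stream, swap the track out or back in, re-attach.
        // Note this forces a renegotiation, which is what makes it 'dodgy'
        // compared to a real mute.
        peerConnection.removeStream(localStream)
        if muted {
            if let track = localStream.audioTracks[0] as? RTCAudioTrack {
                savedAudioTrack = track
                localStream.removeAudioTrack(track)
            }
        } else if let track = savedAudioTrack {
            localStream.addAudioTrack(track)
        }
        peerConnection.addStream(localStream)
    }

(As it turns out later in the thread, -setEnabled: on the tracks is the cleaner way to do this.)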


I am still having issues with the recording of the local stream and with taking a snapshot. 

Any thoughts? Thanks again

Wayne Hawkins

Jul 30, 2014, 7:01:04 AM
to discuss...@googlegroups.com
Just going to throw out one last call for help.

I am still having issues with recording my local stream and saving it to file.
The same is true of taking a snapshot. As I said previously, the issue isn't with the capture functionality itself; the issue lies with the video dropping while recording or taking a snapshot.
I have found a way to re-enable the video, which is a possible solution for taking a snapshot, but when it comes to recording I don't want the video conference to drop.

Has anyone done this on iOS yet? I am not having much luck searching, but then maybe I am just not looking hard enough?

My only theory at this point is that when I grab the device using the AVFoundation APIs, the video track no longer has access to the device.

Thanks


Zeke Chin

Jul 30, 2014, 1:43:51 PM
to discuss...@googlegroups.com
You're right that there can only be one capture session at a time on an iOS device. If a capturer is already running, you won't be able to create a second session. Instead of creating a second session, you should use the output from the existing one. You can do this by adding an RTCVideoRenderer to the RTCVideoTrack in question. The video renderer's delegate will receive i420 frames, which you can save to a video file, or turn into a snapshot by converting the bytes into the appropriate format.
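
A minimal sketch of that wiring (assuming the 2014-era ObjC bindings, i.e. RTCVideoRenderer, RTCVideoRendererDelegate and RTCI420Frame; exact bridged signatures vary by revision):

    class FrameTap: NSObject, RTCVideoRendererDelegate {
        func renderer(renderer: RTCVideoRenderer!, didSetSize size: CGSize) {
            // Frame dimensions become known here; configure a video writer if needed.
        }

        func renderer(renderer: RTCVideoRenderer!, didReceiveFrame frame: RTCI420Frame!) {
            // Every frame on the track arrives here as I420 planes;
            // convert one for a snapshot or append them to an AVAssetWriter.
        }
    }

    let tap = FrameTap()
    let frameTapRenderer = RTCVideoRenderer(delegate: tap)
    videoTrack.addRenderer(frameTapRenderer)  // videoTrack: the RTCVideoTrack to tap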

With respect to enabling/disabling a track, you can call -[RTCVideoTrack setEnabled:] and -[RTCAudioTrack setEnabled:].
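
In Swift that bridges to something like this (a sketch; the casts from the bridged track arrays are assumptions):

    if let videoTrack = localStream.videoTracks[0] as? RTCVideoTrack {
        videoTrack.setEnabled(false)   // pause outgoing video; setEnabled(true) resumes
    }
    if let audioTrack = localStream.audioTracks[0] as? RTCAudioTrack {
        audioTrack.setEnabled(false)   // mute outgoing audio; the call stays up
    }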



Wayne Hawkins

Jul 31, 2014, 5:13:22 AM
to discuss...@googlegroups.com
Thanks for getting back to me, I appreciate your time.

If I may bother you with some more questions:
I have followed what you said and I now have the i420 frames at my disposal. However, this is a little daunting to me, maybe because I don't fully understand what to do; I have no experience converting formats.
Sorry to ask this, but is there a simple way to convert the bytes into the appropriate format?

The setEnabled functions never used to work for me, but I updated to the latest revision and tried them today; they work fine, I believe.
Thanks so much.

Zeke Chin

Jul 31, 2014, 1:38:46 PM
to discuss...@googlegroups.com
That depends on what format you need. libyuv has an I420ToARGB method you can use if RGB is what you want. You can probably get stills that way: construct a CIImage from the RGB bitmap, convert it to a UIImage, and then use UIImageJPEGRepresentation. It seems like this could work for video as well if you stuff the ARGB frame into a CMSampleBuffer and pass it along to an AVAssetWriterInput, but I haven't tried that myself.
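
A sketch of the stills path (I420ToARGB is libyuv's C entry point, assumed reachable from Swift via a bridging header; the RTCI420Frame accessor names are assumptions, and this goes via CGImage rather than CIImage, which comes to the same thing). Note that libyuv's "ARGB" is B,G,R,A byte order in memory on iOS, hence the little-endian bitmap flags:

    func jpegSnapshot(from frame: RTCI420Frame) -> Data? {
        let width = Int(frame.width), height = Int(frame.height)
        var argb = [UInt8](repeating: 0, count: width * height * 4)

        // Pack the three I420 planes into one 32-bit-per-pixel buffer.
        I420ToARGB(frame.yPlane, Int32(frame.yPitch),
                   frame.uPlane, Int32(frame.uPitch),
                   frame.vPlane, Int32(frame.vPitch),
                   &argb, Int32(width * 4),
                   Int32(width), Int32(height))

        let info = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue |
                                          CGImageAlphaInfo.noneSkipFirst.rawValue)
        guard let provider = CGDataProvider(data: Data(argb) as CFData),
              let image = CGImage(width: width, height: height,
                                  bitsPerComponent: 8, bitsPerPixel: 32,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: info, provider: provider,
                                  decode: nil, shouldInterpolate: false,
                                  intent: .defaultIntent)
        else { return nil }

        // UIImageJPEGRepresentation on the SDKs of the era; jpegData today.
        return UIImage(cgImage: image).jpegData(compressionQuality: 0.9)
    }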

Wayne Hawkins

Aug 1, 2014, 4:08:33 AM
to discuss...@googlegroups.com
You are great, thank you so much. 
This should be enough to get me going. I appreciate you taking the time to respond and explain things to me. 

Wayne Hawkins

Aug 8, 2014, 7:44:58 AM
to discuss...@googlegroups.com
So I'm back again...

I am having trouble using the ConvertFromI420 method.
I can't seem to include libyuv in my solution. I have made sure the library files are in place, and I can get to the point where I can include libyuv.h, but when I come to building, it sometimes gives me a clang error and other times it complains about the architecture... I have only tried to use the libraries built out with the other WebRTC libraries... any ideas?

Thanks

Wayne 

Pablo Martinez Piles

Jan 26, 2015, 2:36:30 PM
to discuss...@googlegroups.com
Hi Wayne,

What did you do in the end to integrate both functionalities into your app?

In my app I'm trying to record a video with the AVCapture libraries and use WebRTC at the same time. But this doesn't work, because the two libraries are trying to access the camera at the same time.

Did you solve this problem?

Thanks a lot.



Michael Schropp

Dec 17, 2015, 3:26:05 AM
to discuss-webrtc
If you still need to record the camera stream, I had the same problem and found a way to do it.

You can use the RTCAVFoundationVideoSource class (instead of RTCVideoSource), which exposes the AVCaptureSession. Sadly it is not possible to add a second output to the session, because the RTC framework would lose the camera stream.

But it is possible to add your own delegate and, with an AVAssetWriter, write the sample buffers to a file while still forwarding them to the RTC framework.

Code in Swift:

    func configurateForRecording() {
        for output in captureSession.outputs {
            if let videoOutput = output as? AVCaptureVideoDataOutput {
                // Keep the RTC framework's delegate around so frames can be
                // forwarded to it, then install ourselves as the delegate.
                externalSampleBufferDelegate = videoOutput.sampleBufferDelegate
                videoOutput.setSampleBufferDelegate(self, queue: recordQueue)
            }
        }
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        // Write the frame to file with AVAssetWriter...
        writeSampleBuffer(sampleBuffer, type: AVMediaTypeVideo)

        // ...and forward the sample buffer to the RTC framework so the call continues.
        externalSampleBufferDelegate?.captureOutput!(captureOutput, didOutputSampleBuffer: sampleBuffer, fromConnection: connection)
    }
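One way the writeSampleBuffer helper above might be implemented (an assumption, not from the original post): a minimal AVAssetWriter wrapper for the video samples, with the H.264 settings and dimensions as placeholders.

    final class SampleWriter {
        private let writer: AVAssetWriter
        private let videoInput: AVAssetWriterInput
        private var sessionStarted = false

        init(outputURL: URL, width: Int, height: Int) throws {
            writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
            // Re-encode the raw capture frames as H.264 on the way to disk.
            videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
                AVVideoCodecKey: AVVideoCodecH264,
                AVVideoWidthKey: width,
                AVVideoHeightKey: height,
            ])
            videoInput.expectsMediaDataInRealTime = true
            writer.add(videoInput)
            writer.startWriting()
        }

        func writeSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
            if !sessionStarted {
                // The file's timeline starts at the first frame's timestamp.
                writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
                sessionStarted = true
            }
            if videoInput.isReadyForMoreMediaData {
                videoInput.append(sampleBuffer)
            }
        }

        func finish(completion: @escaping () -> Void) {
            videoInput.markAsFinished()
            writer.finishWriting(completionHandler: completion)
        }
    }
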
If you have questions, hit me up.

Michael

Andy Shephard

Feb 16, 2016, 12:38:41 PM
to discuss-webrtc, schro...@googlemail.com
Is there any way to access the AVCaptureSession from the remote RTCVideoSource, perhaps by casting the RTCVideoTrack.source property as an RTCAVFoundationVideoSource?
It seems that the WebRTC libraries favour the use of RTCVideoSource rather than the subclass.

Zeke Chin

Feb 16, 2016, 12:54:40 PM
to discuss-webrtc, schro...@googlemail.com
For the remote video source, you shouldn't be casting because it's not an RTCAVFoundationVideoSource.
If you created a local video source using RTCAVFoundationVideoSource though, then you can cast it (relatively) safely and grab the active capture session off of that. 
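
A short sketch of the safe case (the cast and the captureSession property follow the 2016-era ObjC bindings; localVideoTrack is assumed to wrap a source you created yourself):

    if let avSource = localVideoTrack.source as? RTCAVFoundationVideoSource {
        let session = avSource.captureSession  // the AVCaptureSession driving local capture
        print(session.outputs)  // e.g. walk the outputs, as in Michael's recording snippet above
    }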


Andy Shephard

Feb 16, 2016, 1:05:51 PM
to discuss-webrtc, schro...@googlemail.com
It's somewhat unrelated to the OP's question, but my aim is to grab the video and audio data from the remote source and use it to display an accurate sound wave within my app, or even to detect which peer is currently speaking and set their video as the main focus within the view.

Seeing as RTCAVFoundationVideoSource has the captureSession property, I was hoping to take the video and audio outputs from there (if possible).

Zeke Chin

Feb 16, 2016, 1:33:14 PM
to discuss-webrtc, schro...@googlemail.com
RTCAVFoundationVideoSource is backed by an AVCaptureSession that represents the local capture session.
Remote audio/video sources are not backed by an AVCaptureSession, so you cannot cast and retrieve it.

Today, you can get I420 video frames by attaching a video renderer, as described earlier in the thread. However, to retrieve audio frames you will need to write some C++, since the ObjC layer does not currently provide an interface for audio data callbacks.


potato

Jul 29, 2017, 3:10:17 AM
to discuss-webrtc, schro...@googlemail.com
You are so great!

Niro

May 8, 2018, 7:00:02 AM
to discuss-webrtc
Hi Michael,
I am dealing with an issue transferring the CMSampleBuffer to WebRTC (or AppRTC, to be exact).
Can you please explain how you managed to transform the CMSampleBuffer and insert it as a video track into WebRTC?
What type of object is captureSession in your example?
Many thanks,
Nir.


Hardik Dabhi

Jul 24, 2019, 6:27:18 AM
to discuss-webrtc
Hello guys,

We are working on a WebRTC-based application for client-server video calling. We have completed the video calling phase successfully. Now we want to record that video on our iOS devices.
We are stuck at the following point; does anyone have any ideas about it?

1) We record the local video and audio using AVAssetWriter, but we are not able to record the streaming audio that comes from the server side.
Does anyone know how to add that streaming audio to the already-created session?

Thanks in advance


