Hi Arik,
I was trying to integrate WebRTC VoiceEngine in PJSIP (just to replace PJMEDIA). It is working fine.
If you have any doubts, let me know. If possible, I will share my knowledge.
--
Regards,
J Alex Antony Vijay.
Good luck Arik, I am interested in the work you are doing down this path. I do not know enough about the Core Audio libraries on iOS to know how to hook everything up, but I am excited for what you can get done.
- Nick
Delay is pretty low for me and I don't have any noise problem.
Perhaps the noise is because of a mismatch between the sampling frequency
in the "driver" and the one used by audio processing. What are the values
of the N_REC_SAMPLES_PER_SEC and N_PLAY_SAMPLES_PER_SEC macros in
audio_device_iphone.h?
Regards,
G.
I remembered 2 additional tweaks I made to reduce the computational
complexity; they shouldn't be needed if you are using the simulator
or using G.711.
In case you want to use iSAC, you have to enable iSACfix.
engine_configuration.h
//#define WEBRTC_CODEC_ISAC // floating-point iSAC implementation (default)
#define WEBRTC_CODEC_ISACFX // fixed-point iSAC implementation
Probably not mandatory, but I also changed the sampling rate from 48 kHz
to 16 kHz to remove the need for resampling:
voice_engine_defines.h
enum { kVoiceEngineAudioProcessingDeviceSampleRateHz = 16000 };
audio_device_iphone.h
const WebRtc_UWord32 N_REC_SAMPLES_PER_SEC = 16000;
const WebRtc_UWord32 N_PLAY_SAMPLES_PER_SEC = 16000;
Some day I will make a script to apply all these changes, I promise :-)
G.
AURemoteIO::Initialize failed: -308 (enable 3, outf< 1 ch, 1600 Hz, Int16> inf< 1 ch, 1600 Hz, Int16>)
Hi, I've applied Gustavo's tweaks, but I get a lot of warnings during calls like: Warning(webrtcvoiceengine.cc:890): WebRtc: too long delay (play:4294966 rec:4294967)
It doesn't seem to cause any problems, as the call quality is excellent running on an iPhone (WiFi or 3G); I am just curious. The code emits this warning when the sum of the play and rec delays exceeds 300. That suggests the two delays I get, 4294966 + 4294967 = 8589933, are way too big.
==Adam
--
I did start playing with that patch but quickly gave up. I ended up downloading trunk and peerconnection, and building them separately. For core WebRTC I did a mix of changing the Xcode project files and make. libvpx and libjpeg are built using make (never got libjpeg_turbo building); the rest is built using Xcode. Additionally, I had to modify a handful of files, mainly to #if defined(MAC_IPHONE or IOS) etc. For peerconnection I took the route of modifying the gyp file for libjingle to generate an iOS-based Xcode build.

Where I am at now is that the core WebRTC trunk builds all the core media API static libraries (including an AVFoundation-based video capture class). I link to these in the peerconnection project. I then ported the Linux peerconnection client to iOS. Right now I have limited the codec support to iSAC/16000/1 and VP8. I have also limited the capture to 352x288 or 192x144 (added to the supported formats). The iPad 2/3 performs really, really well, but the iPhone 4S struggles on the VP8 encode at 352x288. All this is debug-based code; I still have not modified the release build projects (not looking forward to that). I also disabled secure RTP, as SRTP was blowing up when encrypting.

There are also many other hacks that I had to do. For the most part it was a painless process, but tedious. My todo list includes starting from scratch and editing all the gyp files to produce iOS Xcode builds, and also running diff to create a patch set for the source. Should I ever get a 'turn-key' patch set I will post it to GitHub.

btw.. Right now I am armv7 build only.
--
Hi Steve!
You did a PeerConnection client port for the iOS platform, if I understood you right. So could you please share your code? Maybe you have this code on GitHub?
BR,
Oleg
On Thursday, July 19, 2012 at 3:15:34 UTC+4, Steve Mcfarlin wrote:
On Monday, July 9, 2012 12:01:09 AM UTC-7, Harold wrote:
--
---
You received this message because you are subscribed to the Google Groups "discuss-webrtc" group.
The WebRTC project is quite broken for iOS at the moment.
Undefined symbols for architecture i386:
"std::string::push_back(char)", referenced from:
cricket::GetFourccName(unsigned int) in libjingle_media.a(videocapturer.o)
talk_base::CreateRandomString(unsigned long, char const*, int, std::string*) in libjingle.a(helpers.o)
bool talk_base::Base64::DecodeFromArrayTemplate<std::string>(char const*, unsigned long, int, std::string*, unsigned long*) in libjingle.a(base64.o)
talk_base::quote(std::string const&) in libjingle.a(httpcommon.o)
"std::ostream::operator<<(unsigned long)", referenced from:
Do you know those errors?
I can build for an iOS device without those errors.
I used the static libraries built from the steps that you gave.
Can you tell me how to fix the above errors? (I got 144 Mach-O linker errors.)
onICEServers - add local stream.
GAE onOpen - create offer.
PC - createOffer.
SDP onSuccess(SDP) - set local description.
PC setLocalDescription.
SDP onSuccess() - possibly drain candidates
GAE onMessage type - candidate
GAE onMessage type - answer
PC - setRemoteDescription.
SDP onSuccess() - possibly drain candidates
SDP onSuccess - drain candidates
GAE onMessage type - candidate
GAE onMessage type - candidate
GAE onMessage type - candidate
No video
Yeah, I want to implement WebRTC for an iOS native application.
Is there a bug to track the state of audio support?
m=audio 1 RTP/SAVPF 111 103 104 9 102 0 8 107 106 105 13 127 126
a=rtpmap:111 opus/48000/2
a=rtpmap:103 ISAC/16000
The SDP spec (roughly) says that you define your media channel as m=<media type> <id> <protocol> <preferred codec list>.
You then have to specify, using the a= attribute lines, which codec each of those numbers corresponds to.
So you can just use a regex to swap the preferred codec list around in the SDP string, before sending your local SDP string and before processing the received remote SDP string.
NSError *error = NULL;
NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"RTP/SAVPF 111 103"
                                                                       options:0
                                                                         error:&error];
NSString *modifiedSDP = [regex stringByReplacingMatchesInString:sdp.description
                                                        options:0
                                                          range:NSMakeRange(0, [sdp.description length])
                                                   withTemplate:@"RTP/SAVPF 103 111"];
Personal Disclaimer: I do not think this should ever be used for production code, as I don't think those numbers are actually set in stone. But it seems to work fine as a quick hacky work-around for testing.