Kurento LoadTest SFU Video dropping at ~20-35% cpu


Pascal Tozzi

Jan 10, 2021, 5:56:11 PM
to kurento
We use an Azure F64s_v2 server (64 cores, CPU-optimized).
Memory isn't an issue (128 GB RAM).
Bandwidth isn't an issue (0.055 GB/s used of a 3.4 GB/s pipe).

We ran a test to determine the maximum possible load:
607 subscribers (0 kbps video) (not connected to any publisher)
484 subscribers (300 kbps video, 160p)
363 subscribers (100 kbps video, thumbnail size)
124 subscribers (1500 kbps video, 720p)
121 subscribers (40 kbps audio)
80 subscribers (800 kbps audio+video, 360p)

121 publishers (300 kbps video, 160p)
121 publishers (800 kbps audio+video, 360p)
4 publishers (1500 kbps video, 720p)
1 publisher (40 kbps audio)
All publishers and subscribers run in a controlled environment across 50 VMs, and the issue is reproducible every time.

All video is VP8 and all audio is Opus.
Max and min bandwidth are set to prevent fluctuation.
We use a single MediaPipeline and create all subscribers and publishers on it, each with its own WebRtcEndpoint object.
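To make that structure concrete, the bookkeeping is roughly the following (a sketch of the structure only; `createEndpoint` is a stand-in for kurento-client's `pipeline.create('WebRtcEndpoint')`, which requires a live KMS, so the real wiring differs):

```javascript
// One shared pipeline per room; every publisher and subscriber gets
// its own WebRtcEndpoint on that pipeline.
function createRoom(createEndpoint) {
  const endpoints = new Map(); // peerId -> { role, endpoint }
  return {
    addPeer(peerId, role) { // role: 'publisher' | 'subscriber'
      const endpoint = createEndpoint();
      endpoints.set(peerId, { role, endpoint });
      return endpoint;
    },
    count(role) {
      let n = 0;
      for (const p of endpoints.values()) if (p.role === role) n++;
      return n;
    },
  };
}
```

With this shape, the numbers above translate directly into endpoint counts on the single pipeline.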

When 95% of the subscribers and publishers are connected, everything is stable. But when we reach the amounts listed above, we see streams randomly going black and losing their video; yet Kurento doesn't crash, and the streams are not re-negotiating.

Virtual memory used by Kurento seems to be really high.
[Attachments: c2.png, p.png]
I took a snapshot of all streams (JSON of a few samples attached).

We raised the thread count in kurento.conf.json from 10 to 32; an issue appeared with a count of 10 (we will need to re-test to confirm the difference in behavior).
We are using a TURN server in front of Kurento (it's possible the issue is coming from coturn).

Stream dropping started at 6:34 AM, when about 95% of all publishers and subscribers were connected; it's the last 5% that seem to tip it over and cause things to misbehave.

Bandwidth usage:
[Attachment: b.png]

CPU usage:
[Attachment: c.png]
We see two events: around 6:34-6:35 the CPU drops a bit, and the CPU spike around 6:42 is when we turned off all the subscriber and publisher VMs, which seems to spike the CPU.
We connected many subscribers and publishers at once around 6:25 AM and gradually connected the rest of the 50 VMs until 6:34 AM, when almost everything was connected and the issue started.

Kurento version 6.15.0

The only value we aren't sure about right now is the virtual memory used by Kurento, which seems to grow to around 126 GB exactly around 6:34 AM (might be a coincidence?). I am not too familiar with virtual memory usage. What would happen if an application's virtual memory exceeds the machine's total memory?

We are planning to run the test a few more times.
Is there any configuration we could tweak on Kurento, and/or any log we could activate, that would be useful for this kind of issue?
Attachments: kurento.json, publisher.800kbps.video.audio.json, subscriber.0kbps.video.not.connected.json, subscriber.1500kbps.video.json

Ayaan

Jan 11, 2021, 5:23:19 PM
to kurento
Can you share some info on how to do a Kurento load test?

Try setting vm.swappiness = 1 in sysctl.conf and see if it helps with the memory.
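For reference, that suggestion would look like this (a standard sysctl tweak, exactly as described above):

```
# /etc/sysctl.conf
# Strongly prefer reclaiming page cache over swapping out process memory.
vm.swappiness = 1
```

It can then be applied without a reboot via `sudo sysctl -p`.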

Pascal Tozzi

Jan 11, 2021, 8:59:49 PM
to kurento
We had swap disabled, so lowering the swappiness from 60 to 1 wouldn't do much.
After seeing your message, we ran another test today with a 280 GB swap partition added and vm.swappiness = 1 set.
Sadly, we can still reproduce the same issue.

We load VMs with Selenium, using custom video feeds as publishers.
We load a total of 60 VMs to create the load; we could go higher, but the issue happens at around 50 VMs loaded.
When we load the extra VMs, the subscribers are not re-negotiating, but the publishers are disconnecting and re-negotiating (which goes a bit against what I said in my previous message).

I will ignore the virtual memory, look at the logs a bit more tomorrow, and post again with my findings.

Juan Navarro

Jan 12, 2021, 4:54:06 AM
to kur...@googlegroups.com
Great initiative!!

WebRTC load testing is complex to set up and can get expensive to run... but tests like this tend to provide very useful information about the performance limits of the software and where the low-hanging fruit for improvement is.


Ayaan wrote:
Can you share some info on how to do a kurento load test? 

The original poster probably knows this already, but we ran some performance tests a while ago for OpenVidu. People interested in how to build such a test, and in our own results, can have a look here:

OpenVidu load testing: a systematic study of OpenVidu platform performance

Just note that at the time, OpenVidu was deployed natively (i.e. a native apt-get installation of Kurento + a Java JAR file run directly on the server machine), but over time all of this has been redesigned to use Docker as the primary deployment method (i.e. now everything runs inside a Docker container).



@Pascal Tozzi regarding your tests, I'd like to add some observations / questions:

  • Could you please explain more about how the test grows to 95%?
    I see you mention "607 subscribers" and "121 publishers", but how exactly do these numbers grow over time? It's not clear to me what "95%" means in terms of test progression.
    Is it an M:N kind of setup (M=121 publishers, N=607 subscribers)? How do M and N grow over time?

  • Do ALL subscribers receive video from ALL publishers? If so, this imposes an uneven load increment: adding a new subscriber creates 121 new WebRTC streams (from all publishers to the new subscriber), while adding a new publisher creates 607 new streams (from the new publisher to all existing subscribers).

  • Are you monitoring the number of open file descriptors (FDs)? Kurento uses quite a few of these, and in default Ubuntu installations the number of available FDs is not very high. When a process reaches the configured kernel limits (see ulimit), the kernel rejects further requests from the application. I'm not 100% sure about the ulimits inside Docker containers, though; I believe they are set to "unlimited" by default.

  • How are those 607 subscribers actually consuming the WebRTC media? Keep in mind that the browser itself has a limited ability to consume hundreds of incoming streams, and this must be kept from polluting the test results. This was the main reason we limited sessions to 7 participants in the OpenVidu performance tests: with just 7 participants per room, we kept the browsers well below their maximum capacity. Otherwise the browsers would start failing and our test would conclude that the server had reached maximum capacity (which was not true).

  • Did you see any warnings or errors in the Kurento logs? KMS uses a couple of thread pools to handle the JSON-RPC communication between server and clients, plus one health-checker thread on each of them; when the thread pools are too busy, the health-checker thread can take too long to do its job, and this is reflected in the logs with a warning message: "Worker thread pool is lagging! (CPU exhausted?)".

    Also, other errors or warnings might give some info about the problems that KMS is facing.


Regards,


--
Juan Navarro
Kurento developer
@j1elo at GitHub & Twitter

Pascal Tozzi

Jan 14, 2021, 5:54:49 AM
to kurento
I ran another test and activated more logging.

I will also answer your questions; it seems my terminology was wrong/confusing.
In the first post, the publishers and subscribers are simply RTCPeerConnections with tracks of type video/audio; publishers are send-only and subscribers are receive-only.

A "User" has:
5 subscribers (0 kbps video) (not connected to any publisher; we are just not using them in this test case, but they are connected)
4 subscribers (300 kbps video, 160p)
3 subscribers (100 kbps video, thumbnail size)
1 subscriber (1500 kbps video, 720p)
1 subscriber (40 kbps audio)
1 publisher (300 kbps video, 160p)
1 publisher (800 kbps audio+video, 360p)

After changing a few values, we managed to reach up to 160 users before reproducing the issue in my last 2 attempts.
It is a single room with lots of users connected to it; as a proof of concept, we are trying to push the CPU to 80%, where we would lose streams because we hit the CPU limit.
It is not a conference setup where every user sees every other user.
In short, in this scenario, users see the last 4 speakers and a main camera view which can be switched.

We use multiple VMs on Azure, loading 4 users per VM using Selenium, and spawn VMs until we see the issue, then turn off and delete all resources afterward.

In my last test:
Between 02:45:29.048 and 03:18:53.122 we connected 160 users + 1 tester with no issue.
There is a telltale indicator where the audio drops for 1-2 seconds, and it happened after loading 160 users.
This seems to indicate we reached the limit (it always happens before everything drops).
We waited about 3 minutes; everything was fine with 161 users. Then we added 8 more users and everything went bad.

Of the 8 users we added, 4 connected fully and 4 connected only partially.
We also see this in the logs of our signaling server talking to Kurento on port 8888:
03:23:04:008 error - {"stack":"Error: Request has timed out\n
    at \\node_modules\\kurento-client\\lib\\KurentoClient.js:365:24\n
    at dispatchCallback (\\node_modules\\kurento-jsonrpc\\lib\\index.js:546:9)\n
    at Timeout.timeout (\\node_modules\\kurento-jsonrpc\\lib\\index.js:593:9)\n
    at listOnTimeout (internal/timers.js:549:17)\n
    at processTimers (internal/timers.js:492:7)",
"message":"Request has timed out",
"request":"{\"jsonrpc\":\"2.0\",\"method\":\"invoke\",\"params\":{\"object\":\"cc0392b8-0607-4163-9cbe-0a56e38989eb_kurento.MediaPipeline/26f18394-c0eb-4c4c-878a-c3a095c2396a_kurento.WebRtcEndpoint\",\"operation\":\"processOffer\",\"operationParams\":{\"offer\":\"{THE OFFER MSG REMOVED}"},\"sessionId\":\"9804809e-aa6c-4499-a8f7-8578da0cef92\"},\"id\":54014}","requestTimestamp":1610612564005,"responseTimestamp":1610612584006}
Is there any configuration to increase the number of threads somewhere, given that the CPU is at 30% yet Kurento returns a timeout error?
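For context, the timeout above is the client-side request timeout in kurento-jsonrpc; the ~20 s gap between requestTimestamp and responseTimestamp matches it. A generic sketch of that pattern (not the actual library code; the 20 s default is our assumption from the timestamps):

```javascript
// Reject a pending request if it takes longer than `ms` milliseconds,
// mirroring the "Request has timed out" error raised by kurento-jsonrpc.
function withTimeout(promise, ms = 20000) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('Request has timed out')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

With a wrapper like this, a slow processOffer surfaces as a rejection the application can retry, instead of an await that hangs forever.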
 
We also see streams whose video goes black and is lost, which feels like Kurento might be freezing, even though the CPU is in the 30% range.
I also have a 230 MB log file from Kurento for the entirety of this session (zipped: 13 MB): https://drive.google.com/file/d/103s7xx-CAvGLx9f_wcH8gvDi-MEcMuVv/view?usp=sharing

The Kurento log contains a lot of: "(packet loss?); will request a new keyframe".
This raises again the possibility that our TURN server is to blame, causing packet loss, and that the issue is unrelated to Kurento.
I will do another test with the TURN server removed; it just needs a few infrastructure changes.
That said, if it is the TURN server dropping packets, and this causes Kurento to become unresponsive, it could be interesting to look into; but first I need to confirm this hypothesis.

I have 3 graphs. In this test, we increased the amount of bandwidth, confirming (compared to the first test in my first post) that bandwidth isn't the issue.
I added disk usage, as writing the log file is what made disk activity skyrocket.
There aren't any errors in the log, but it's full of warnings.
[Attachments: disk.png, network.png]

From the CPU graph we see again that after I disconnected all the users, the CPU averaged 3-5%,
but the min and max are pretty much 0% and 100%.
The CPU did go back down on its own.
[Attachment: cpu.png]

Pascal Tozzi

Jan 14, 2021, 10:12:06 AM
to kurento
I just looked at the TURN server log and it's spammed with:
03:23:48.701928065Z 4312: session 052000000000000163: TLS/TCP socket error: Connection reset by peer ***.***.***.***:49926
03:23:48.701931465Z 4312: ERROR: session 052000000000000163: TLS/TCP socket error: Connection reset by peer ***.***.***.***:49926

The issue may be coming from coturn.
We will test without coturn.

israel.r...@gmail.com

Jan 14, 2021, 10:34:37 AM
to kurento
I ran some tests as well some time ago.
You can see a big difference in picture quality when clients don't have a good GPU.
And of course transcoding adds a lot of CPU (without it, usage was around 5%; with transcoding it jumps to 30-70%, depending on the resolution you transcode to).

Pascal Tozzi

Jan 14, 2021, 3:36:10 PM
to kurento
Picture quality is fine for us: no issues during our test case and no lag on the client side.
We use CPU instead of GPU, with pre-recorded video.
We also keep Selenium hidden/minimized, which causes Chrome not to render the video locally, saving a lot of CPU on the test machine.
We limit ourselves to small machines with 4 users per machine, as Chrome creates lots of handles, which can make the machine unresponsive if we load too many clients on a bigger machine.


After doing another test and switching the TURN server to act as a STUN server...
We reproduce the exact same behavior; so the connection dropping isn't caused by the TURN server, which was merely reporting that the connection got reset.
At this point I suspect that Kurento is dropping the connections.

Server resources still look fine (CPU, memory, and bandwidth are all below 40% usage).
We detect the audio freezing for 1-2 seconds when we are near capacity.
If we add a few extra users, the signaling server communicating with Kurento via WebSocket receives a timeout, and Kurento disconnects the RTCPeerConnections that are sending video tracks (it doesn't disconnect those receiving tracks).

I believe at this point that I need to make Kurento's log settings more verbose for my next test.

Pascal Tozzi

Jan 18, 2021, 7:02:38 PM
to kurento
Did one more test this morning with more logging...
I found things in the log, but they're not consistent with my previous sessions: I can't find "Stream gap detected" or "Worker threads locked" anywhere in the logs from my previous tests...

Session started around 14:55

Between 15:43:14 and 15:44:57 we got 995,860 occurrences of:
kmsutils kmsutils.c:483:gap_detection_probe:<kmsagnosticbin2-226:sink> Stream gap detected, timestamp: 0:35:59.736164231, duration: 0:00:00.000000023

Between 15:44:57.800 and 15:45:08.293 we received 477 disconnects from Kurento:
[{"jsonrpc":"2.0","method":"onEvent","params":{"value":{"data":{"newState":"DISCONNECTED","oldState":"CONNECTED","source":"35006cf8-26c2-49bb-a5ab-65fdd8ea18d9_kurento.MediaPipeline/215b9e47-a780-4550-affa-bcc3a39b40f2_kurento.WebRtcEndpoint","tags":[],"timestamp":"1610984600","timestampMillis":"1610984600287","type":"MediaStateChanged"},"object":"35006cf8-26c2-49bb-a5ab-65fdd8ea18d9_kurento.MediaPipeline/215b9e47-a780-4550-affa-bcc3a39b40f2_kurento.WebRtcEndpoint","type":"MediaStateChanged"}}}]    

Our reconnection logic basically started spamming reconnections for all 477 streams, so between 15:45:11.980 and 15:45:19.954 we received 1,525 occurrences of the worker-thread warning in the Kurento log file:
WARN       KurentoWorkerPool WorkerPool.cpp:155:operator(): Worker threads locked. Spawning a new one.

This causes a timeout on port 8888, which makes sense.
I am unsure whether the stream gap detection is even related, but it does feel excessive in this scenario.
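In hindsight, our reconnection logic should have spread those 477 retries out instead of firing them all at once. A common remedy is exponential backoff with full jitter (a sketch; the base/cap values here are made up, not our production settings):

```javascript
// Exponential backoff with "full jitter": the window grows as base * 2^attempt
// (capped), and a uniform random fraction of it is used, so hundreds of
// streams reconnecting at once don't hammer the server in lockstep.
function backoffDelayMs(attempt, baseMs = 500, capMs = 30000, rand = Math.random) {
  const windowMs = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(rand() * windowMs);
}
```

Each stream would wait `backoffDelayMs(attempt)` before its next reconnection attempt, resetting `attempt` to 0 once it reconnects successfully.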

Juan Navarro

Feb 9, 2021, 5:25:29 AM
to kur...@googlegroups.com
Hi Pascal,


On 19/01/2021 01.02, Pascal Tozzi wrote:
Our reconnection logic basically started spamming the reconnection of all 477 streams causing that between 15:45:11.980 - 15:45:19.954 we received 1525 times the Worker thread being locked within kurento log file:
WARN       KurentoWorkerPool WorkerPool.cpp:155:operator(): Worker threads locked. Spawning a new one.

If you're seeing the message "Worker threads locked. Spawning a new one." it means you are testing an old version of Kurento (kms-core to be precise):

https://github.com/Kurento/kms-core/commit/62020ff6bf2c21dfbf4e4eebc0b53c99bd6eb0c4#diff-67ecb6a81e52bf9bd9577d881d29275f5a46199d092dfc101811dafc8751d5f9L155

There you can see (at the top) that this commit removed the mentioned string; the change has existed since 6.14.0.

This string was already changed for Kurento 6.14, so you must be testing 6.13 or earlier. Don't waste time with those old versions, because you'll be missing all the improvements that have been added over time.

You mentioned in your first post that the tested version was Kurento 6.15, but the fact that you're seeing the mentioned string tells me that you're actually not running it. For starters, the output of kurento-media-server --version should show that all modules and components are actually on the most recent version. Otherwise, you have mixed different versions of different modules during your installation.

Some useful links:

* https://doc-kurento.readthedocs.io/en/latest/dev/dev_guide.html#clean-up-your-system
* https://doc-kurento.readthedocs.io/en/latest/user/installation.html#local-upgrade

Now, regarding your performance metrics: if CPU, memory, and bandwidth are all under-utilized, but KMS still shows signs of saturation around a consistent limit, that's a clear indication that you are not measuring the resource that KMS is actually exhausting. This could be the number of open file handles (vs. the maximum), the number of simultaneous threads (vs. the maximum the kernel can handle), or maybe the bottleneck comes from excessive context switching between threads (note that KMS uses GStreamer, and GStreamer is very heavily threaded). None of these possibilities would show up as high CPU% usage in any tool, so you can see how measuring performance metrics can lead to confusing results if not all options are considered.



On 19/01/2021 01.02, Pascal Tozzi wrote:
Between 15:43:14 and 15:44:57 we got 995,860 times
kmsutils kmsutils.c:483:gap_detection_probe:<kmsagnosticbin2-226:sink> Stream gap detected, timestamp: 0:35:59.736164231, duration: 0:00:00.000000023

This message points to missing packets on the RTP receiving side of Kurento. "Stream gap detected" means that the GStreamer RTP elements marked a gap in the incoming stream. A gap means some packets are missing, so there is missing information and the decoder will need to work around it (or ask for a new keyframe, if it's video). Gaps happen when there is packet loss in the network; but assuming that's not the case in your network, I guess this can also happen due to thread exhaustion affecting some of the GStreamer RtpBin elements, causing their internal queues to overflow and lose packets.

The log messages related to packet loss and gaps were edited in Kurento 6.15 to make them clearer; you can find examples of them in here:
https://doc-kurento.readthedocs.io/en/latest/user/troubleshooting.html#video-quality-issues


Regards

Pascal Tozzi

Feb 10, 2021, 11:59:17 AM
to kurento
Thanks Juan, I will review the version.
I am aware that the Docker image was on v6.13, and I raised it to v6.15 before all my tests.
The ServerInfo from my dump clearly has every plugin pointing to v6.15.

The first post of this thread contains a file called "kurento.json", which was created from kurentoClient.getServerManager().getInfo(); with a few extra properties.

In short: 
"type": "KMS", "version": "6.15.0",
"generationTime": "Nov  4 2020 12:20:06", "name": "chroma", "version": "6.15.0"
"generationTime": "Nov  3 2020 18:12:59", "name": "core", "version": "6.15.0"
"generationTime": "Nov  4 2020 12:26:52", "name": "crowddetector", "version": "6.15.0"
"generationTime": "Nov  3 2020 18:22:58", "name": "elements", "version": "6.15.0"
"generationTime": "Nov  3 2020 18:33:58", "name": "filters", "version": "6.15.0"
"generationTime": "Nov  4 2020 12:26:25", "name": "platedetector", "version": "6.15.0"
"generationTime": "Nov  4 2020 12:26:22", "name": "pointerdetector", "version": "6.15.0"

That being said, I do trust your word that we have the wrong version.

I did a docker-compose up with image: kurento/kurento-media-server:6.13.0 (multiple months ago) on that server.
Before all my tests, I did a docker-compose down, switched the version to image: kurento/kurento-media-server:6.15.0, and did another docker-compose up.
The only overrides we have are for:
- /etc/kurento/modules/kurento/WebRtcEndpoint.conf.ini
- /etc/kurento/kurento.conf.json

I am unsure how that managed to keep old code; I will double-check to make sure I'm not wrong about it, and also delete the VM and simply create a fresh VM with the v6.15 Docker image directly on it.

-------

That being said, we did more testing and managed to find interesting results, and a way to avoid the crash at low CPU.
I am waiting for a new version of our code to test the fix within the next 2 weeks. I was planning to post about the findings as soon as I could run that extra test.

That being said, the short version of what we discovered:
Creating lots of RTCPeerConnections and connecting them all to Kurento at the same time seems to cause no issue (there is always a point where the load could cause some issue, but we haven't reached it).
When you have LOTS of RTCPeerConnections connected, if you start attaching streams from senders to receivers, there is also no issue.

For a WebRtcEndpoint of type sender, we wait for the MediaStateChanged event with state CONNECTED before connecting the stream to a receiving WebRtcEndpoint.
But for a WebRtcEndpoint of type receiver, we used to connect the sender WebRtcEndpoint before we had even processed the offer, and only processed the offer after connecting it.

What we discovered is that if we connect the sender to a receiver WebRtcEndpoint only after the receiver is in a connected state, we don't get a crash at low CPU.
If we connect the sender to it before we process the offer, we only get a crash when there is a high number of streams connected (it feels like there is no issue, but most likely we should avoid it).

If it's true that connecting a sender WebRtcEndpoint to a receiver before they are in a connected state should be avoided, maybe an exception could be thrown and the connect ignored, to prevent users from connecting this way.
I am unsure if this is an unrelated scenario. We will have more information on whether this change completely fixed our issue.

Juan Navarro

Feb 10, 2021, 2:48:53 PM
to kur...@googlegroups.com
If you are using the Docker images, then it's indeed harder to get things wrong (that's actually why we started pushing the Docker images: to make it easier to deploy, and especially to upgrade between versions). But still, those log messages surprised me. Please have a look and try starting from a clean state, as you mentioned.

Does docker-compose always perform a "docker pull" first, internally? I.e. to ensure the actual latest version of the tag is downloaded. (We push all of major, major.minor, and major.minor.patch tags, so you could e.g. depend on "kurento-media-server:6.15" and it would point to 6.15.0, 6.15.1, 6.15.2, etc. as we published those patches, if ever.)



On 10/02/2021 17.59, Pascal Tozzi wrote:
What we discovered is that the webRtcEndpoint of type receiver, if we connect the sender to it only after it's in a connected state, we don't get a crash at low cpu.
If we connect the sender to it before we process the offer, we only get a crash when there a high amount of streams connected (it feel like there no issue, but most likely we should avoid it).

Let me see if I got it right. For a large number of streams,

this works:
1. sender.processOffer(sdp)
2. (Wait for sender connected event)
3. receiver.processOffer(sdp)
4. (Wait for receiver connected event)
5. sender.connect(receiver)

this fails:
1. sender.processOffer(sdp)
2. (Wait for sender connected event)
3. sender.connect(receiver)
4. receiver.processOffer(sdp)
5. (Wait for receiver connected event)

Is that right?

If not, please edit the lines above to represent how things are done in the good and bad scenarios.

Pascal Tozzi

Feb 10, 2021, 4:00:53 PM
to kurento
Docker should indeed take care of it without issues. I will analyze the Docker image before resetting anything, and search within the binary to see if I can find those raw strings. It's puzzling a few of us, unless the 6.15 image is wrong.
I will come back on this one; if I can't find the string within it, it means the first few tests were on 6.13, but the log was pulled and the ServerInfo dump done on the same day... Let me review it.

---------

Your success and failure scenarios are valid.
When we do the test, we connect 250 users, i.e. 250 instances of our application:

With the failing method, it works for up to ~140 users without any issue, at around 30% CPU on the server machine, and we see streams switching to black as soon as we connect a few more users.

With the succeeding method, we managed to reach 250 users, and could have added more if we had spawned more VMs.

We are planning 1-2 more tests next week.

Juan Navarro

Feb 11, 2021, 11:45:04 AM
to kur...@googlegroups.com
I'd like to mention that those 2 scenarios are expected to be equivalent and to work the same. The fact that one offers wildly different performance than the other is thus a bug.

I don't know what might be happening there, but the behavior should definitely be the same and both methods should work correctly:


On 10/02/2021 20.48, Juan Navarro wrote:
For a large number of streams,

this works:
1. sender.processOffer(sdp)
2. (Wait for sender connected event)
3. receiver.processOffer(sdp)
4. (Wait for receiver connected event)
5. sender.connect(receiver)

this fails:
1. sender.processOffer(sdp)
2. (Wait for sender connected event)
3. sender.connect(receiver)
4. receiver.processOffer(sdp)
5. (Wait for receiver connected event)


This means there is some kind of unexpected performance degradation in the second case. I'm afraid this is going to be a difficult one to track down, because it seems difficult and costly to reproduce the conditions required to trigger the bad behavior and analyze it.

I'll take note of this; meanwhile, the obvious workaround is to switch to the first method, which seems to work better.

And if you find any extra hints about where the problem could be, don't hesitate to mention them here, so we have a better chance of fixing this.