



Can you share some info on how to do a Kurento load test?
Regards,
-- Juan Navarro Kurento developer @j1elo at GitHub & Twitter
I took a snapshot of all streams (JSON for a few samples is attached).
We raised the threads count in kurento.conf.json from 10 to 32; an issue was discovered with a count of 10 (we will need to re-test to confirm the behavior difference). We are using a TURN server in front of Kurento, so it's possible the issue is coming from coturn.
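For reference, this is the setting we changed, shown here as a fragment of kurento.conf.json (the surrounding keys and default port/path are from the stock config and may differ between versions):

```json
{
  "mediaServer": {
    "net": {
      "websocket": {
        "port": 8888,
        "path": "kurento",
        "threads": 32
      }
    }
  }
}
```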
Stream dropping started at 6:34 AM, when about 95% of all publishers and subscribers were connected; it's the last 5% that seem to tip it over and cause things to misbehave.
Bandwidth usage:
CPU usage:
We see 2 events: around 6:34-6:35 the CPU drops a bit, and the CPU spike around 6:42 is when we turned off all the subscriber and publisher VMs, which seems to spike the CPU.
We connected a lot of subscribers and publishers at once around 6:25 AM and gradually connected the rest of the 50 VMs over time, up until 6:34 AM, when we had almost everything connected and the issue started.
Kurento version 6.15.0
The only value we aren't sure about right now is the virtual memory used by Kurento, which seems to grow to around 126 GB exactly around 6:34 AM (might be a coincidence?). I am not too familiar with virtual memory usage. What would happen if an application's virtual memory is higher than the computer's total memory?
We are planning on running the test a few more times. Is there any configuration we could tweak on Kurento, and/or any log we could activate, that would be useful for such a behavior/issue?
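In case it helps correlate with the graphs, this is roughly how we sample the virtual (VmSize) vs resident (VmRSS) numbers during a run, by reading /proc on Linux. The sketch reads its own PID as a stand-in; in practice you would point it at the kurento-media-server PID:

```java
// Sketch: print a process's virtual (VmSize) vs resident (VmRSS) memory
// by parsing /proc/<pid>/status (Linux only).
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class VmSample {
    // Returns the VmSize/VmRSS lines from /proc/<pid>/status.
    static List<String> vmLines(long pid) throws IOException {
        return Files.readAllLines(Path.of("/proc/" + pid + "/status")).stream()
                .filter(l -> l.startsWith("VmSize") || l.startsWith("VmRSS"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws IOException {
        // Our own PID as a stand-in; replace with the kurento-media-server PID.
        long pid = ProcessHandle.current().pid();
        vmLines(pid).forEach(System.out::println);
    }
}
```

Sampling both values in a loop (e.g. every 10 s) makes it easy to see whether the 126 GB figure is VmSize (address space reserved, which can legitimately exceed physical RAM) or VmRSS (pages actually resident).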
--
You received this message because you are subscribed to the Google Groups "kurento" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kurento+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kurento/f23562b8-bfba-438c-90ae-230eacbf0106n%40googlegroups.com.



The quality of the picture is fine for us: no issue during our test case and no lag on the client side. We use CPU instead of GPU with pre-recorded video. We also keep the Selenium browser hidden/minimized, which causes Chrome not to render the video locally and saves a lot of CPU on the test machine. We limit ourselves to small machines with 4 users per machine, as Chrome creates lots of handles, which can make the machine unresponsive if we load too many clients on a bigger machine.
Our reconnection logic basically started spamming the reconnection of all 477 streams, so between 15:45:11.980 and 15:45:19.954 we received the following warning 1525 times in the Kurento log file:

WARN KurentoWorkerPool WorkerPool.cpp:155:operator(): Worker threads locked. Spawning a new one.
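One way to avoid this kind of reconnection storm is per-stream exponential backoff with jitter, so the 477 streams don't all re-dial at the same instant. A rough, self-contained sketch (the delay constants here are illustrative guesses, not tuned values):

```java
// Sketch: exponential backoff with jitter for per-stream reconnection,
// to avoid re-dialing every stream at once after a mass disconnect.
import java.util.concurrent.ThreadLocalRandom;

public class ReconnectBackoff {
    // Delay before reconnect attempt N: 500 ms doubling up to a 30 s cap,
    // with up to 50% random jitter so streams don't reconnect in lockstep.
    static long delayMs(int attempt) {
        long base = Math.min(30_000L, 500L << Math.min(attempt, 10));
        return base / 2 + ThreadLocalRandom.current().nextLong(base / 2 + 1);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 6; attempt++) {
            System.out.println("attempt " + attempt + ": wait " + delayMs(attempt) + " ms");
        }
    }
}
```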
Between 15:43:14 and 15:44:57 we got 995,860 occurrences of:

kmsutils kmsutils.c:483:gap_detection_probe:<kmsagnosticbin2-226:sink> Stream gap detected, timestamp: 0:35:59.736164231, duration: 0:00:00.000000023
What we discovered is that for the WebRtcEndpoint acting as a receiver, if we connect the sender to it only after it is in a connected state, we don't get a crash at low CPU. If we connect the sender to it before we process the offer, we only get a crash when there is a high number of streams connected (it feels like there is no issue, but most likely we should avoid it).
For a large number of streams, this works:
1. sender.processOffer(sdp)
2. (Wait for sender connected event)
3. receiver.processOffer(sdp)
4. (Wait for receiver connected event)
5. sender.connect(receiver)
this fails:
1. sender.processOffer(sdp)
2. (Wait for sender connected event)
3. sender.connect(receiver)
4. receiver.processOffer(sdp)
5. (Wait for receiver connected event)
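If it helps anyone, the working order above would look roughly like this with the Kurento Java client. This is an untested sketch: we treat MediaStateChangedEvent reaching CONNECTED as the "connected event", assume `pipeline` and both SDP offers already exist, and omit sending the SDP answers and ICE candidate exchange back to the clients:

```java
// Sketch of the working order with the Kurento Java client (untested).
// Answer/ICE signaling back to the browser is omitted for brevity.
import java.util.concurrent.CountDownLatch;
import org.kurento.client.*;

class ConnectOrder {
    // Process the offer on an endpoint and block until it reports CONNECTED.
    static void processOfferAndWait(WebRtcEndpoint ep, String sdpOffer)
            throws InterruptedException {
        CountDownLatch connected = new CountDownLatch(1);
        ep.addMediaStateChangedListener(ev -> {
            if (ev.getNewState() == MediaState.CONNECTED) {
                connected.countDown();
            }
        });
        ep.processOffer(sdpOffer); // the returned answer must go back to the client
        ep.gatherCandidates();
        connected.await();
    }

    static void setUp(MediaPipeline pipeline, String senderOffer, String receiverOffer)
            throws InterruptedException {
        WebRtcEndpoint sender = new WebRtcEndpoint.Builder(pipeline).build();
        WebRtcEndpoint receiver = new WebRtcEndpoint.Builder(pipeline).build();
        processOfferAndWait(sender, senderOffer);     // steps 1-2
        processOfferAndWait(receiver, receiverOffer); // steps 3-4
        sender.connect(receiver);                     // step 5: only after both connect
    }
}
```

The key point is simply that sender.connect(receiver) is the last call, issued only after both endpoints have reached the connected state.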