The configured min bitrate (xxx kbps) is greater than the estimated available bandwidth (30 kbps).


V

Apr 15, 2014, 8:19:07 AM
to discuss...@googlegroups.com
Hi,

I'm hoping someone can help me out with a short-term workaround for this issue (most likely a bug) in WebRTC.  The end result for us is an unusable system, the way things currently stand with the WebRTC SDK, because of dropped frames.  Note that I have CPU-overuse detection disabled and frame dropping turned off; my configuration is below.  We are using WebRTC's trunk from about a week ago, but this issue has been in the SDK for quite a while.  I have a demo soon, so I'm really hoping someone can help me out.  We are running over a local LAN, with either a Windows PC or an Android device on the same router (the STUN server is on the WAN).  The router's traffic stats show we are not even using 1/16th of its bandwidth, and QoS is turned off.  Note, only 3 devices are connected to this router.

I start to see errors like:
Error(stunport.cc:226): Jingle:Port[video:1:0::Net[Intel(R):192.168.1.103/32]]: UDP send of 1168 bytes failed with error 10035
Warning(channel.cc:546): Got EWOULDBLOCK from socket.


which then, I think, eventually leads into the error:
Warning(webrtcvideoengine.cc:1540): webrtc: The configured min bitrate (128 kbps) is greater than the estimated available bandwidth (30 kbps).


Sometimes we even hit this error in rtp_packet_history:

  if (packet_length > max_packet_length_) {
    WEBRTC_TRACE(kTraceError, kTraceRtpRtcp, -1,
        "Failed to store RTP packet, length: %d", packet_length);
    return -1;
  }


Note, we are not a typical camera.  We have a scene that may look as if the image never changes over many frames (at 15 Hz, 720p).  If we configure the minimum video bandwidth to be about 768k or 1.5M via the SDP, I can see that things track with this static scene, as the bitrate eventually drops down to the setting we have in the SDP.
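For reference, the SDP change I mean is roughly the following (a simplified sketch; the x-google-min-bitrate / x-google-start-bitrate fmtp keys and the VP8 payload type of 100 are assumptions, so double-check them against the offer your build actually produces):

    // Sketch only: raise the VP8 min/start bitrate by rewriting the offer SDP
    // before SetLocalDescription(). The x-google-* fmtp keys and payload type
    // 100 are assumptions; verify against your own SDP.
    #include <string>

    std::string MungeVp8MinBitrate(const std::string& sdp) {
      const std::string rtpmap = "a=rtpmap:100 VP8/90000";
      std::string munged = sdp;
      size_t pos = munged.find(rtpmap);
      if (pos != std::string::npos) {
        munged.insert(pos + rtpmap.length(),
                      "\r\na=fmtp:100 x-google-min-bitrate=768;"
                      "x-google-start-bitrate=768");
      }
      return munged;
    }

We apply it to the SDP we get back from CreateOffer, before calling SetLocalDescription.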

Note, we only use video and a data channel.  We do not have an audio track.

      settings->plType = VCM_VP8_PAYLOAD_TYPE;
      settings->startBitrate = 100;      // kbps
      settings->minBitrate = VCM_MIN_BITRATE;
      settings->maxBitrate = 0;
      settings->maxFramerate = VCM_DEFAULT_FRAME_RATE;
      settings->width = VCM_DEFAULT_CODEC_WIDTH;
      settings->height = VCM_DEFAULT_CODEC_HEIGHT;
      settings->numberOfSimulcastStreams = 0;
      settings->qpMax = 56;
      settings->codecSpecific.VP8.resilience = kResilientStream;
      settings->codecSpecific.VP8.numberOfTemporalLayers = 1;
      settings->codecSpecific.VP8.denoisingOn = false;
      settings->codecSpecific.VP8.errorConcealmentOn = false;
      settings->codecSpecific.VP8.automaticResizeOn = true;
      settings->codecSpecific.VP8.frameDroppingOn = false;   // frame dropping disabled
      settings->codecSpecific.VP8.keyFrameInterval = 3000;

  // rate control settings
  config_->rc_dropframe_thresh = 0;        // never drop frames in the encoder
  config_->rc_end_usage = VPX_CBR;         // constant-bitrate mode
  config_->g_pass = VPX_RC_ONE_PASS;
  config_->rc_resize_allowed = true;
  config_->rc_min_quantizer = 2;
  config_->rc_max_quantizer = inst->qpMax;
  config_->rc_undershoot_pct = 100;
  config_->rc_overshoot_pct = 30;
  config_->rc_buf_initial_sz = 60;         // buffer sizes are in milliseconds
  config_->rc_buf_optimal_sz = 80;
  config_->rc_buf_sz = 100;
  // set the maximum target size of any key-frame.
  rc_max_intra_target_ = MaxIntraTarget(config_->rc_buf_optimal_sz);



  setup_constraints.AddOptional(webrtc::MediaConstraintsInterface::kEnableRtpDataChannels, true);
  setup_constraints.AddOptional(webrtc::MediaConstraintsInterface::kEnableDtlsSrtp, true);
  setup_constraints.AddOptional(webrtc::MediaConstraintsInterface::kEnableIPv6, true);
  setup_constraints.AddOptional(webrtc::MediaConstraintsInterface::kCpuOveruseDetection, false);
  peer_connection_ = peer_connection_factory_->CreatePeerConnection(servers,
                                                                    &setup_constraints,
                                                                    /* &identityservice */ NULL,
                                                                    this);


V

Apr 21, 2014, 11:25:44 AM
to discuss...@googlegroups.com
One thing I noticed today is that error messages get logged because vie_sync_.ConfigureSync is never called; ConfigureSync only gets called if SetVoiceChannel is called.  Thus, RtpReceiver* video_receiver is never set.  These errors can be seen whenever the guts of WebRTC call SetReceiverBufferingMode.

Note my constraints are
setup_constraints.AddOptional(webrtc::MediaConstraintsInterface::kOfferToReceiveVideo, true);
setup_constraints.SetMandatoryReceiveAudio(false);

V

Apr 24, 2014, 2:41:39 PM
to discuss...@googlegroups.com
Are any of the WebRTC experts out there able to help me narrow this down?  I don't understand why others are not having the same issue; maybe the current SDK user base only uses it for a low-res webcam and desktop capture?  It seems I start to see the would-block errors first, and they lead to the packet loss, which in turn makes the SDK think it has no bandwidth, so it drops to the minimum bandwidth and starts spamming the log file with the message I posted.  It happens from Windows desktops to Androids, Windows desktops to MS Surface, and Windows desktops to Windows desktops.  A 720p video goes from a Windows desktop to these other Chrome browsers, along with a bidirectional data channel.  No audio in either direction.  Again, only video in one direction (Windows to Android, for example).  I'm using the native C++ peerconnection example with a custom capture class that captures at 15 Hz.  The local video never misses a frame, but the remote video drops frames and eventually falls over with the bandwidth issue.  Again, bandwidth is NOT the issue; it's a software bug somewhere in the WebRTC SDK that I'm having trouble narrowing down.

On Tuesday, April 15, 2014 8:19:07 AM UTC-4, V wrote:

Binary Chopper

Apr 25, 2014, 3:08:56 AM
to discuss...@googlegroups.com
Disabling CPU adaptation is dangerous and often leads to similar problems, because the encoder can't meet its target or the round-trip time fluctuates too much.

You should check chrome://webrtc-internals in another tab during a call.
Look at the estimated bandwidth in/out under the bwev... section, and the four ssrc sections for resolution, fps, and packets lost or late.

It is better to run at a low resolution and frame rate than to break the system; I have made similar changes on iOS and Android.

Take the iPad 2, for example: if it's doing 30 fps at just 320x240, it will only end up being sent 50 kbps by Chrome, due to round-trip time (I think caused by CPU load on the device), and this is the lowest encoding setting it will fall to by default.


WebRTC is not designed so well in this area; it should be possible to 1) reduce the encoder's FPS in a low-CPU situation and 2) reduce the encoder's resolution in a low-bandwidth situation.

 







V

Apr 25, 2014, 1:45:14 PM
to discuss...@googlegroups.com
Thanks for the response.  Yeah, I validated that none of those things is the issue; bandwidth was verified, and we are even talking about going between two very high-end Windows desktop machines with hardly any CPU utilization, with only these two machines on a high-end router.  We can easily send 1080p, CPU-encoded H.264 at 15 Hz using another SDK (not 720p at 15 Hz as I'm doing with VP8 in WebRTC, which should be even easier) and things run perfectly.  So it all appears to be a bug somewhere in how the WebRTC SDK figures out available bandwidth, and I think things start to fall apart when the "would block" error messages start to happen.  I'm just having a hard time figuring out the intent of some of the code, as most of it has no comments explaining what the intentions are supposed to be, so it's been painful trying to track down whether it's a design failure, just a typo-level bug that doesn't follow the designer's intentions, or even a block of code that isn't actually used in the WebRTC SDK.

As a gripe for anyone at Google/WebRTC: I've been working with WebRTC since November of 2012.  I really want to use WebRTC and this SDK, but such a flaw persisting for so long is really turning me off, as I've been struggling to get smooth video on the client side without a massive amount of dropped frames.  Coupled with the fact that Chrome Frame for IE is now gone and interoperability with Firefox seems frequently broken, the excitement of running on multiple browsers is dwindling.  A coworker also informed me that Google is backing off of the WebRTC initiative; I don't know where he got this info, but I hope it isn't true, and if the rumor is wrong, I hope it can be squashed with some real progress.


On Tuesday, April 15, 2014 8:19:07 AM UTC-4, V wrote:

V

Jun 9, 2014, 2:04:33 PM
to discuss...@googlegroups.com
I've attached a zip file that is compiled against the latest trunk (as of today), using the encoder and decoder defaults.  All this does is constrain to a 720p video stream with no audio; I even took out the data channel we use.  It just sends a static checkerboard image.  All you do is run the app on your local machine (or a remote one), as it's just the C++ peerconnection client from the WebRTC example app, using my checkerboard capture device instead of the DirectShow capture device.  You should see in less than a minute how the estimated-bandwidth calculation starts to spam the console window that it does not have enough bandwidth.  I see a few people complaining about this and looking for help (some describe the issue a little differently), but I don't see anyone able to move things forward or hack in a workaround.  If anyone has advice on how to get around whatever causes the estimated-bandwidth issue (it's NOT a real bandwidth issue at all), I would be very grateful.  Keep in mind this happens when running both clients on the same high-end desktop, as well as over a wire on a LAN with only two computers connected to the switch.
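For context, the capture device just regenerates the same static I420 checkerboard for every frame, roughly like this (simplified sketch; the wiring into the capturer class and the frame delivery are omitted):

  // Simplified sketch of the static image the test app sends: build an I420
  // checkerboard once and hand the same pixels to the capturer every frame.
  #include <algorithm>
  #include <cstdint>
  #include <vector>

  std::vector<uint8_t> MakeCheckerboardI420(int width, int height, int square) {
    std::vector<uint8_t> buf(width * height * 3 / 2);
    uint8_t* y = buf.data();
    uint8_t* u = y + width * height;
    uint8_t* v = u + (width / 2) * (height / 2);
    for (int r = 0; r < height; ++r) {
      for (int c = 0; c < width; ++c) {
        bool light = ((r / square) + (c / square)) % 2 == 0;
        y[r * width + c] = light ? 235 : 16;  // luma: light vs. dark squares
      }
    }
    // Neutral chroma keeps the board grayscale.
    std::fill(u, u + (width / 2) * (height / 2), 128);
    std::fill(v, v + (width / 2) * (height / 2), 128);
    return buf;
  }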


On Tuesday, April 15, 2014 8:19:07 AM UTC-4, V wrote:
GWPeer.zip

ste...@webrtc.org

Jun 10, 2014, 4:19:03 AM
to discuss...@googlegroups.com
Sounds like you are running into several issues here:

- EWOULDBLOCK issue: One possibility could be that you're sending too many packets at once. Try enabling the googLeakyBucket constraint and see if that improves things. It's supposed to be leaking out packets so that you don't overflow the socket. Another thing to test would be to reduce the resolution and see if that improves it, just to narrow things down.
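In your setup code that would look roughly like the sketch below (if the kLeakyBucket constant isn't defined in your trunk revision, the underlying key string is "googLeakyBucket"):

  // Sketch: enable the leaky-bucket pacer via a peer connection constraint,
  // alongside the existing optional constraints. kLeakyBucket maps to
  // "googLeakyBucket"; fall back to the raw string if the constant is missing.
  setup_constraints.AddOptional(
      webrtc::MediaConstraintsInterface::kLeakyBucket, true);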

- The configured min bitrate (xxx kbps) is greater than the estimated available bandwidth (30 kbps): The bandwidth estimator can't tell whether it's possible to send at a bitrate higher than the one currently in use; it has to try the higher bitrate to see that there is enough bandwidth.  Encoding video where nothing changes requires very little bitrate, as you have noticed, and the effect is that you're not utilizing the bandwidth of your link.  This is why the estimator sometimes ends up believing there's very little bandwidth available, until the video content gets a bit more complex and the estimator can start ramping up its estimate.

There are some ways around this issue; one is to send padding data up to a certain bitrate when the content is static.  Which solution is best depends on the application.

Vincent Autieri II

Jun 10, 2014, 12:22:58 PM
to discuss...@googlegroups.com
Thank you! Finally someone I can talk to about this.  I did try different resolutions and it makes no difference for the bandwidth issue.  I'll try the leaky bucket and see how that goes.  I 100% agree about the bandwidth estimator limitation, as I was pretty sure it was flawed in how it works when I looked over the code.  Basically we are like a video game (a very clean image), and we often have what appears to be a static scene.  But we also have conditions where the area around a city has tons of detail and sharp edges, which requires a lot of bandwidth while the camera is moving between its waypoints.  So camera motion is not an indicator of whether the scene is static; the camera may be still while traffic is moving.

So my question is, shouldn't there be a link between the bandwidth the content actually requires, what's available, and the maximum bandwidth allocated?  My ignorance of VP8/H.264 may be in play; maybe they don't have a quick way to know what the bitrate really should be for the incoming frames versus what they're being told to encode at (while still being considered CBR, as now, rather than VBR).

I guess option two would be "padding data up to a certain bitrate when the content is static".  Can I tie into the encoder somehow so it tells me when it detects a frame-to-frame change, so I know when to start adding padding data?  I also do not know how to add padding data, unless you mean somehow adding noise to the video frame to make the encoder think it's not static.  I'm all for a temporary workaround if padding is the only way to go; I just don't know how to add this extra data without messing with the video.

All the best,
V

 



Stefan Holmer

Jun 11, 2014, 7:35:35 AM
to discuss...@googlegroups.com
On Tue, Jun 10, 2014 at 6:22 PM, Vincent Autieri II <vaut...@gmail.com> wrote:
Thank you! Finally someone I can talk to about this.  I did try different resolutions and it makes no difference for the bandwidth issue.  I'll try the leaky bucket and see how that goes.  I 100% agree about the bandwidth estimator limitation, as I was pretty sure it was flawed in how it works when I looked over the code.  Basically we are like a video game (a very clean image), and we often have what appears to be a static scene.  But we also have conditions where the area around a city has tons of detail and sharp edges, which requires a lot of bandwidth while the camera is moving between its waypoints.  So camera motion is not an indicator of whether the scene is static; the camera may be still while traffic is moving.

So my question is, shouldn't there be a link between the bandwidth the content actually requires, what's available, and the maximum bandwidth allocated?  My ignorance of VP8/H.264 may be in play; maybe they don't have a quick way to know what the bitrate really should be for the incoming frames versus what they're being told to encode at (while still being considered CBR, as now, rather than VBR).

Not sure I follow you here. We always ask the encoder to produce bits according to what we estimate is available. If the content is too easy, the encoder still won't be able to produce what we ask of it. In that case there's the option to send padding.
 

I guess option two would be "padding data up to a certain bitrate when the content is static".  Can I tie into the encoder somehow so it tells me when it detects a frame-to-frame change, so I know when to start adding padding data?  I also do not know how to add padding data, unless you mean somehow adding noise to the video frame to make the encoder think it's not static.  I'm all for a temporary workaround if padding is the only way to go; I just don't know how to add this extra data without messing with the video.

You can try to do something similar to what we do here for screencasting. It makes sure that we always try to send at a specified min bitrate (if the link has the capacity for it).
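If you want to experiment with that from the application side first, something like the sketch below might be worth a try.  It is not verified against your revision: it assumes the kScreencastMinBitrate constraint ("googScreencastMinBitrate") exists in your trunk, and it only affects sources the engine treats as screencasts.

  // Sketch: ask the engine to keep sending at least ~300 kbps (padding when the
  // encoder produces less) for screencast sources. kScreencastMinBitrate
  // ("googScreencastMinBitrate", value in kbps) is assumed to exist in your
  // revision and only applies to capturers flagged as screencasts.
  setup_constraints.AddOptional(
      webrtc::MediaConstraintsInterface::kScreencastMinBitrate, 300);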
 


Jason Wood

Jul 3, 2014, 7:17:51 AM
to discuss...@googlegroups.com
Hi,

This use case sounds identical to the one that I am trying to get working.

In our case, we have live video that we pause, annotate with graphics, and then start playing again. After a pause, the video stutters for a second or so before it becomes smooth again.

In our case, smooth performance is more desirable than minimising bandwidth usage, so padding sounds like it would be a solution for us.

What is the best way to play around with the padding settings?  Looking at the link above, the screencast logic seems to be deep inside the WebRTC video engine, and I can't see how I can get to that from my code.

Would it be best for me to modify the webrtc source code and recompile?
Or is there a way to set up padding via e.g. PeerConnection constraints?
Or can I get access to the webrtcvideoengine->engine() in some way?

I currently have a PeerConnection, set up with a single VideoTrackInterface, that uses a custom VideoCapturer.  We send 25 fps or 29.97 fps depending on the video format.  We downscale the video we send from HD (1920x1080) to 1024x576.

I saw that there is a media constraint called kPayloadPadding, but grepping the code base I can't see it being used anywhere.

Cheers,
Jason

Stefan Holmer

Jul 7, 2014, 10:08:05 AM
to discuss...@googlegroups.com
On Thu, Jul 3, 2014 at 1:17 PM, Jason Wood <jasonw...@gmail.com> wrote:
Hi,

This use case sounds identical to the one that I am trying to get working.

In our case, we have live video that we pause, annotate with graphics, and then start playing again. After a pause, the video stutters for a second or so before it becomes smooth again.

In our case, smooth performance is more desirable than minimising bandwidth usage, so padding sounds like it would be a solution for us.

What is the best way to play around with the padding settings?  Looking at the link above, the screencast logic seems to be deep inside the WebRTC video engine, and I can't see how I can get to that from my code.

I think this is your only option right now.
 

Would it be best for me to modify the webrtc source code and recompile?

Yes, at the moment I think that will be the way to go.
 
Or is there a way to set up padding via e.g. PeerConnection constraints?
Or can I get access to the webrtcvideoengine->engine() in some way?

I currently have a PeerConnection, set up with a single VideoTrackInterface, that uses a custom VideoCapturer.  We send 25 fps or 29.97 fps depending on the video format.  We downscale the video we send from HD (1920x1080) to 1024x576.

I saw that there is a media constraint called kPayloadPadding, but grepping the code base I can't see it being used anywhere.

This is not what you're looking for.

xfengtes...@gmail.com

Dec 3, 2014, 6:30:26 AM
to discuss...@googlegroups.com
I've seen the same issues you describe in this topic.  Do you have any solution or conclusion about them?  Thanks.
a. EWOULDBLOCK issue
b. The configured min bitrate (xxx kbps) is greater than the estimated available bandwidth (30 kbps)


On Wednesday, June 11, 2014 at 12:22:58 AM UTC+8, V wrote: