I would like to know how buffers are managed
within the protocol stack and if there's any API to the buffer
management that would allow me to track utilization. Then I could
"throttle back" the generation of UDP traffic to match the buffer
availability.
I'd also like to know how PPP arbitrates between traffic demands from
each peer. That is, if one peer is sending a ton of UDP traffic, it
seems like the other peer can't get a packet in (or perhaps it's the
case that the responses are queued way back in the first peer?). Are
there any priority schemes within the protocols that would allow me to
always have fast response, regardless of how much "bulk" UDP traffic is
queued up in one peer?
My environment is VxWorks 5.3.1 on i386. Their documentation states
"PPP attaches to the TCP/IP stack at the driver (link) level." Does
this mean that all the buffer management is handled within the serial
port driver, which likely uses a fairly small character FIFO?
Thanks for any help.
--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---
This sort of control is often done by round-trip methods.
I don't think such an API is generally possible in any common
implementation. The user doesn't necessarily know which of the
system's interfaces his traffic will go out, and the buffering is done
at the edge. If you know offhand that you'll be going out some
particular interface (perhaps because it's the *only* interface), then
you might try looking for a transmit-status interface there.
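One way to act on that suggestion, sketched below in portable C: poll a
driver-exposed transmit-queue depth and back off while it sits above a
high-water mark. The counter, the watermark value, and the drain step
are all assumptions for illustration -- a real VxWorks serial driver may
expose nothing of the sort.

```c
#include <assert.h>

/* Hypothetical transmit-status interface: a driver-exposed counter of
 * packets still queued for transmission.  BSD-style stacks keep this in
 * struct ifqueue (ifq_len); the names here are assumptions. */
static int tx_queue_len = 0;
#define TX_HIGH_WATER 40        /* back off when the queue is this deep */

static int driver_tx_depth(void) { return tx_queue_len; }

/* Send one datagram, but only after the output queue has drained below
 * the high-water mark; returns the number of poll iterations spent waiting. */
static int throttled_send(void)
{
    int waits = 0;
    while (driver_tx_depth() >= TX_HIGH_WATER) {
        waits++;
        tx_queue_len -= 5;      /* stand-in for "sleep while the link drains" */
    }
    tx_queue_len++;             /* the packet now occupies a queue slot */
    return waits;
}
```

The same shape works with any drain indicator the driver happens to
offer; the point is simply to gate the UDP source on it.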
> I'd also like to know how PPP arbitrates between traffic demands from
> each peer. That is, if one peer is sending a ton of UDP traffic, it
> seems like the other peer can't get a packet in (or perhaps it's the
> case that the responses are queued way back in the first peer?).
Huh? The two paths -- transmit and receive -- are completely
independent. Are you asking about how all the data is multiplexed on
the transmit path? If not, then it sounds like you're probably
talking about an implementation bug.
> Are
> there any priority schemes within the protocols that would allow me to
> always have fast response, regardless of how much "bulk" UDP traffic is
> queued up in one peer?
Sort of. Some PPP implementations will give loosely defined
"interactive" applications preferential treatment in output queuing.
A common hack is to look for small TCP packets and put them at the
front of the queue. In general, though, no.
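A toy version of that hack, assuming an array-backed output queue and a
size cutoff (real implementations inspect the TCP header or ToS bits
rather than raw length, and the 128-byte cutoff is an invented value):

```c
#include <assert.h>
#include <string.h>

#define QLEN 8
#define SMALL_PKT 128   /* "interactive" size cutoff -- an assumed value */

/* Simple array-backed output queue of packet lengths; index 0 is the
 * head, i.e. the next packet the link will transmit. */
static int q[QLEN];
static int q_count = 0;

/* Enqueue one packet.  Small packets (keystrokes, TCP ACKs, ...) jump
 * to the head of the queue; bulk packets go to the tail. */
static void ppp_enqueue(int len)
{
    if (q_count == QLEN)
        return;                               /* drop-tail when full */
    if (len < SMALL_PKT) {
        memmove(&q[1], &q[0], q_count * sizeof(q[0]));
        q[0] = len;                           /* queue-jump */
    } else {
        q[q_count] = len;
    }
    q_count++;
}
```

With a scheme like this, a telnet response still goes out on the next
transmit opportunity even when bulk UDP has filled the rest of the queue.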
> My environment is VxWorks 5.3.1 on i386. Their documentation states
> "PPP attaches to the TCP/IP stack at the driver (link) level." Does
> this mean that all the buffer management is handled within the serial
> port driver, which likely uses a fairly small character FIFO?
Dunno about VxWorks. The typical implementation is a straight shot
through UDP, IP, and down to the driver. The interface the driver
(PPP in this case) exposes has a small (BSD default is 50 packets)
queue. There is, as you suspect, also a very small buffer (only a few
characters) in the serial hardware. All of this depends on the
internals of the OS and the hardware platform. If you need to know,
you should probably take it up with WindRiver.
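The consequence of that small, fixed-length queue is easy to see in a
sketch: with a 50-packet drop-tail queue and a 9600 bps link that drains
far more slowly than a UDP burst arrives, the excess is silently
discarded. The names below mimic BSD's ifqueue, not the actual VxWorks
internals:

```c
#include <assert.h>

#define IFQ_MAXLEN 50   /* BSD's default interface output-queue limit */

static int ifq_len = 0;
static int ifq_drops = 0;

/* Offer one packet to the interface output queue, drop-tail style.
 * In a burst, nothing drains between offers, so everything past the
 * 50th packet is lost with no error reported to the UDP sender. */
static void if_enqueue(void)
{
    if (ifq_len >= IFQ_MAXLEN)
        ifq_drops++;
    else
        ifq_len++;
}
```

This is one plausible mechanism behind SNMP trouble when bandwidth drops
from 10 Mbps to 9600 bps: UDP gives the application no signal that the
tail of its burst never left the box.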
--
James Carlson, Software Architect <car...@ibnets.com>
IronBridge Networks / 55 Hayden Avenue 71.246W Vox: +1 781 372 8132
Lexington MA 02421-7996 / USA 42.423N Fax: +1 781 372 8090
"PPP Design and Debugging" --- http://people.ne.mediaone.net/carlson/ppp
Thanks in advance,
Fadi.
sausti...@my-dejanews.com wrote:
> I'm interested in finding out more detail about how the higher layers
> of a protocol stack (specifically SNMP/UDP) are affected by running
> over IP/PPP instead of IP/Ethernet. My application seems to have
> serious problems when all of a sudden the bandwidth dips from
> 10 Mbps to 9600 bps.
>
> I would like to know how buffers are managed
> within the protocol stack and if there's any API to the buffer
> management that would allow me to track utilization. Then I could
> "throttle back" the generation of UDP traffic to match the buffer
> availability.
>
> I'd also like to know how PPP arbitrates between traffic demands from
> each peer. That is, if one peer is sending a ton of UDP traffic, it
> seems like the other peer can't get a packet in (or perhaps it's the
> case that the responses are queued way back in the first peer?). Are
> there any priority schemes within the protocols that would allow me to
> always have fast response, regardless of how much "bulk" UDP traffic is
> queued up in one peer?
>
> My environment is VxWorks 5.3.1 on i386. Their documentation states
> "PPP attaches to the TCP/IP stack at the driver (link) level." Does
> this mean that all the buffer management is handled within the serial
> port driver, which likely uses a fairly small character FIFO?
>
> Thanks for any help.
>
--
+++++++++++++++++++++++++++++++++++++
Fadi Nasser
Software Engineer Alcatel USA
(707)665 8049 FAX (707)792 7807
Fadi....@usa.alcatel.com
++++++++++++++++++++++++++++++++++++++
What I ended up doing was "throttling" the delivery of the bursty
traffic (SNMP traps), based on overall mbuf usage. Each time a trap is
to be generated, I look through all the mbuf statistics and see if any
pool is below a certain free threshold. If so, I delay and try again,
up to some maximum number of times. When the free mbufs are above the
threshold again, or I've gone through the loop the maximum number of
times, I generate the trap. Ideally I'd only need to throttle on the
mbuf pool for the buffer size I need, but unfortunately I have no way
of telling what that is, because the SNMP agent actually builds the
buffer with all the variable bindings, etc.
Here's what the code looks like:
#undef SEND_DELAY_DEBUG

void trapThrottle ( void )
    {
    /*
     * SRA 052499  Logic to avoid flooding the PPP interface with traps.
     *
     * Look at the percentage in use of the network mbuf clusters.  If
     * free space is below the threshold, delay here while the interface
     * clears out a bit and frees some mbufs.  In an ideal world we'd
     * know how big the trap is and would only have to look at the one
     * mbuf cluster pool that would be used.
     *
     * If the traps are sent on an Ethernet interface, the threshold
     * should never get hit - buffers are processed too fast to ever see
     * a drop.  However, on a slow PPP interface the trap generation can
     * exceed the link's ability to deliver the traps to the NMS.  That
     * is the case this throttling handles.
     */
    /* NET_POOL *mySysPtr = (NET_POOL *) _pNetSysPool ; */  /* FYI */
    NET_POOL *myDataPtr = (NET_POOL *) _pNetDpool ;
    CL_POOL  *myPoolPtr ;               /* points at each cluster pool */
    unsigned int percentFree ;
    unsigned int i , j , delayCount ;

#define MAX_TRAP_DELAY_TRIES 20
#define MBUF_DELAY_THRESHOLD 90         /* percent */

    /* walk the mbuf pool "statistics" for each cluster size */
    for ( i = 0 ; i < CL_TBL_SIZE ; i++ )
        {
        myPoolPtr = (CL_POOL *) myDataPtr->clTbl[i] ;
        delayCount = 0 ;

        /* only interested in non-empty entries */
        if ( (NULL == myPoolPtr) || (0 == myPoolPtr->clSize) )
            continue ;

        for ( j = 0 ; j < MAX_TRAP_DELAY_TRIES ; j++ )
            {
            /* for this cluster size, how free is it? */
            percentFree = (myPoolPtr->clNumFree * 100) / myPoolPtr->clNum ;
            if ( percentFree >= MBUF_DELAY_THRESHOLD )
                break ;                 /* enough free - go send the trap */

            /* hold off a while; delay up to an empirically derived count */
            delayCount++ ;
            taskDelay ( 20 ) ;
            }

#ifdef SEND_DELAY_DEBUG
        if ( delayCount )
            printf ( "Delayed %d times, mbuf size %d\n",
                     delayCount , myPoolPtr->clSize ) ;
#endif
        }
    }
Though it never seems like a satisfactory response: if you read the
Stevens books you will find that there is a type-of-service field in
the IP header, and that it can be used to flag some traffic as
low-delay. Although this is pretty much falling out of use, the one
place I have still seen it used is in PPP, for queue jumping (or so
it said).
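For completeness, here is how a sender can set that type-of-service
field on a UDP socket with the standard sockets API. This is shown for
a POSIX system; whether VxWorks 5.3.1 honors IP_TOS is another
question.

```c
#include <netinet/in.h>
#include <netinet/ip.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a UDP socket whose traffic is marked low-delay via the IP
 * type-of-service field, so ToS-aware PPP queuing can let it jump
 * ahead of bulk traffic.  Returns the socket fd, or -1 on error. */
static int make_lowdelay_socket(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int tos = IPTOS_LOWDELAY;

    if (fd < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

Every datagram sent on the returned socket then carries the low-delay
ToS bit, which is exactly what ToS-aware PPP output queues key on.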
Good luck,
Bob