
Max Size of Winsock Buffer


nattythread

Nov 25, 1999

Hello,

I'm looking for the value or the constant that defines the maximum size
of the Winsock buffer supported by the Winsock DLL when using the
recv(...) and send(...) functions.

I saw it once, but I'm unable to find it again ... If someone still
remembers this, please post an answer ....


Thanks

Peter Akerstrom

Nov 25, 1999

I know the feeling ... when you know you've seen it somewhere ....
The default (to my knowledge) is 8192 bytes.
(I don't know where I got it from but I added a comment in a source code
file of mine, thought it might prove handy one day ... it did!!)

--
Regards
Peter Akerstrom
peak(delete this)@europe.com


nattythread <natty...@altavista.net> wrote in article
<383D77E1...@altavista.net>...

Andy Lutomirski

Nov 27, 1999

I would say the maximum defined by wsock32.dll or ws2_32.dll (this is a win
*32* group) is either 2 GB or 4 GB. The service provider (SP) may or may
not support this, but the limit should be very large.

Andy

"Peter Akerstrom" <peak(delete this)@europe.com> wrote in message
news:01bf377d$03a2bde0$d20564c3@studio...

nattythread

Nov 29, 1999

I would be surprised if the max buffer length was 2 GB; it just can't be
that large, because in the Win32 specs 2 GB is the maximum amount of
memory for your user process, not for a transmission buffer.

I would say that Peter's value of 8192 bytes seems fair to me. It's small
enough to reflect Microsoft technology [ :) ]

anyway ,
thanks a lot guys,

Natty


Andy Lutomirski wrote:

Alun Jones

Nov 29, 1999

In article <384233EE...@altavista.net>, nattythread
<natty...@altavista.net> wrote:
> I would be surprised if the max buffer length was 2 GB; it just can't be
> that large, because in the Win32 specs 2 GB is the maximum amount of
> memory for your user process, not for a transmission buffer.
>
> I would say that Peter's value of 8192 bytes seems fair to me. It's small
> enough to reflect Microsoft technology [ :) ]

And yet, Windows 98 and 2000 support the large window scale option, which
requires setting the send and receive buffers, prior to connect/accept, to
lengths greater than 64k.

Despite its occasional appearance, this isn't a "bitch at Microsoft"
newsgroup - could we at least limit speculation to the _informed_ variety,
please?

If you would _really_ like to know the maximum supported on your
installation (and it is dependent on which version of whose TCP stack you've
installed), then just create a small program that calls
setsockopt(...SO_SNDBUF...) and then getsockopt(...SO_SNDBUF...). The
former call is allowed to return success even though the socket stack has
set a smaller buffer - the success in this instance is to indicate that you
can change the SO_SNDBUF option, not to indicate that you got exactly what
you asked for.

You might want to do the matching call for SO_RCVBUF, just to be certain.

Alun.
~~~~

--
Texas Imperial Software | Try WFTPD, the Windows FTP Server. Find it
1602 Harvest Moon Place | at web site http://www.wftpd.com or email
Cedar Park TX 78613 | us at al...@texis.com. VISA / MC accepted.
Fax +1 (512) 378 3246 | NT based ISPs, be sure to read details of
Phone +1 (512) 378 3246 | WFTPD Pro, NT service version - $100.
*WFTPD and WFTPD Pro now available as native Alpha versions for NT*

Ari Lukumies

Dec 1, 1999

nattythread wrote:
>
> I'm looking for the value or the constant that defines the maximum size
> of the Winsock buffer supported by the Winsock DLL when using the
> recv(...) and send(...) functions.
>
> I saw it once, but I'm unable to find it again ... If someone still
> remembers this, please post an answer ....

(This applies to NT4, but it might also be for w9x, I dunno...)

The NT4 RKit docs (Networking Guide, chapter 6, TCP/IP Implementation
Details) shows two values: the size of TCP receive window
(TCPWindowSize) defaults to 8k rounded up to the nearest MSS (maximum
segment size) increment for the connection; if that's not at least 4
times MSS, it's adjusted to 4 times MSS, with a max. size of 64kB (which
is max. window size, since the field in the TCP header is 16 bits in
length. RFC 1323 describes a TCP window scale option usable to obtain
larger windows, but NT TCP/IP doesn't implement that option yet). The
second value is: for Ethernet, the window will normally be set to 8760
bytes (8192 rounded up to six 1460-byte segments, 1460 bytes being the
MSS). In practice, this means that you *can* use one send to send out,
say, a 64kB message, but your receiver must make more than one call to
recv in order to receive it (the message will be divided into more than
one packet). The division into packets seems also to be affected by so
called delayed acknowledgement (per RFC 1122) implementation in NT's
TCP/IP (when a delay of 200ms expires after a segment, a delayed ACK is
sent). In NT, there's no way to disable delayed ACKs.

The docs say that these are the default values and it's not generally
advisable to alter them, but they can be changed via the registry
parameter TcpWindowSize which affects global settings for the computer
(don't ask me where in the registry this resides - I've yet to find it
myself) or using setsockopt() to change the setting on a per-socket
basis (this I've tried, but, alas, without much success).

AriL
--
A multiverse is figments of its own imaginations.
Homepaged at http://www.angelfire.com/or/lukumies

nattythread

Dec 1, 1999

First of all, thanks for those explanations!

You guessed right. I was looking for this information because I'd like
to avoid multiple recv() calls on my clients. I noticed that several
recv() calls were sometimes necessary to get data sent by a single
send() instruction [in my context the sockets are in blocking mode].

I suspected a timer intervention, because when I ran some tests in debug
mode with both server and client on the same PC, more than one call to
recv() was necessary. When running without debug mode, it all worked
fine. So I suspected a timer on the sending side ....

Now I'm all set !!!
Thanks

Ari Lukumies wrote:

Alun Jones

Dec 1, 1999

In article <38455205...@altavista.net>, nattythread
<natty...@altavista.net> wrote:
> You guessed right. I was looking for this information because I'd like
> to avoid multiple recv() calls on my clients. I noticed that several
> recv() calls were sometimes necessary to get data sent by a single
> send() instruction [in my context the sockets are in blocking mode].
>
> I suspected a timer intervention, because when I ran some tests in debug
> mode with both server and client on the same PC, more than one call to
> recv() was necessary. When running without debug mode, it all worked
> fine. So I suspected a timer on the sending side ....

In TCP, there's always a chance that this behaviour will happen. TCP is
capable of splitting or reassembling data in any number of ways, and its
only requirement is that it should eventually deliver the data in the
correct order. Note "order" here does not mean that any concept of 'packet'
is preserved. A send of two bytes may arrive at the receiver in any
configuration, so long as byte one is given to the application before byte
two. Your code _must_ be able to cope with this.

Typically, the advice given to programmers is to write their code with the
hope that they'll get all the data that's ever been sent to their
application in one recv call, but with the ability to deal with it if all
they ever get is one byte per call to recv. Since the real world behaviour
will be somewhere between the two, you'll be covered.

r...@recon.org

Dec 2, 1999

On Wed, 01 Dec 1999 11:24:46 GMT, Ari Lukumies
<ari.lu...@elmont.fi> wrote:
>In practice, this means that you *can* use one send to send out,
>say, a 64kB message, but your receiver must make more than one call to
>recv in order to receive it (the message will be divided into more than
>one packet).

You can attempt to increase the size of individual
recv/ReadFile calls, and thus reduce their frequency, by
turning off the TCP/IP stack's receive buffer entirely and
always having your own buffer available to the stack (via a
prior recv, ReadFile, or WSARecv call). Zeroing out the
receive buffer is more efficient because it eliminates an
extra buffer copy - the stack can drop the data directly
into your user-space memory.

HOWEVER, this can yield a huge REDUCTION in performance if
not done properly. You MUST keep pending receive buffers
available to the TCP/IP stack at all times, or TCP
backpressure will slow your effective throughput to a crawl.
A common way to ensure this is to have multiple receive
buffers and have them all pending against the socket... the
stack will use them one after another, and while your app is
figuring out what to do with the first buffer's worth of
data the TCP/IP stack still has another buffer into which it
can write data.

There are other tradeoffs here, too. For example, how big a
buffer do you supply? If you request too many bytes, the
buffer will never fill and you won't return until 1) a TCP
packet comes in with its Push bit set, or 2) 500 ms elapses
with no more incoming data. Thus your large, multiple input
buffers can yield a net _slowdown_ in throughput because of
wire and TCP/IP stack delays... the data just sits in your
buffers in kernel mode, while your code sits in user mode
with nothing to work on.

I understand your desire to minimize recv/ReadFile calls.
Each one incurs multiple kernel transitions plus all the
usual overhead. But unless you really understand your
environment, messing with those receive buffers is likely to
yield a reduction in performance. Better to leave them alone
and optimize your code to handle returned buffers quickly
and efficiently.

Hope this helps.


Alun Jones

Dec 2, 1999

In article <3845d250....@129.250.35.86>, r...@recon.org wrote:
> I understand your desire to minimize recv/ReadFile calls.
> Each one incurs multiple kernel transitions plus all the
> usual overhead. But unless you really understand your
> environment, messing with those receive buffers is likely to
> yield a reduction in performance. Better to leave them alone
> and optimize your code to handle returned buffers quickly
> and efficiently.

I'd note as well that unless the processor is truly the bottleneck here,
then improving use of the processor will not improve overall speed.
Whenever you optimise, always start with the area that slows you down the
most.

S. Wendy Cheng

Dec 2, 1999

nattythread <natty...@altavista.net> wrote in message
news:383D77E1...@altavista.net...
> Hello,

>
> I'm looking for the value or the constant that defines the maximum size
> of the Winsock buffer supported by the Winsock DLL when using the
> recv(...) and send(...) functions.
>
> I saw it once, but I'm unable to find it again ... If someone still
> remembers this, please post an answer ....
>
>
> Thanks
>

I'm looking for the same type of info for UDP. Could someone help out also?
More specifically, could someone explain the relationship between SO_SNDBUF
and SO_MAX_MSG_SIZE? Thanks.


Wendy Cheng (swc...@us.ibm.com)

Alun Jones

Dec 3, 1999

In article <826hjk$1lt6$6...@rtpnews.raleigh.ibm.com>, "S. Wendy Cheng"
<swc...@us.ibm.com> wrote:
> I'm looking for the same type of info for UDP. Could someone help out also ?
> More specifically, could someone explain the relationship between SO_SNDBUF
> and SO_MAX_MSG_SIZE ? Thanks.

They're not strictly related, as such.

SO_SNDBUF is the amount of memory reserved by the system for holding data
yet to be sent out on that socket.

SO_MAX_MSG_SIZE is the maximum datagram that you can send on the current
socket. For UDP, this would/should be a few bytes below 64k.

For UDP access, you might find any number of relations between SO_SNDBUF and
the size of packet you can send, but you'll definitely find that you can't
send more than SO_MAX_MSG_SIZE in one send()/sendto() call.
