
TCP/IP Services and SNDBUF/RCVBUF


Mark Daniel

Oct 5, 2007, 4:09:52 AM
Trying to understand the TCPIP> SHOW DEVICE values against

RECEIVE SEND
Socket buffer bytes ... ...
Socket buffer quota ... ...

and the relationship with the setsockopt()/QIO-SETMODE and
getsockopt()/QIO-SENSEMODE values for
SO_SNDBUF/SO_RCVBUF/TCPIP$C_SNDBUF/TCPIP$C_RCVBUF.

Also the equivalent in other VMS TCP/IP stacks.

A WASD user reports that his MultiNet environment, previously with
Apache and now with WASD, improved throughput significantly
(particularly for large transfers) by performing a
setsockopt(SO_SNDBUF:65534) and a QIO-SETMODE(TCPIP$C_SNDBUF:65534)
respectively (I hope my abbreviated descriptions make enough sense).
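
For reference, a minimal sketch of the socket-API half of that
abbreviation (not the actual WASD code; error handling pared to a
single check):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* enlarge the per-socket send buffer; 65534 is the value the
   correspondent reports using with both Apache and WASD */
int set_sndbuf (int sock, int bytes)
{
   if (setsockopt (sock, SOL_SOCKET, SO_SNDBUF,
                   (char*)&bytes, sizeof(bytes)) < 0)
   {
      perror ("setsockopt(SO_SNDBUF)");
      return -1;
   }
   return 0;
}

Note that to have full effect the option needs to be applied before data
starts to flow (and, on stacks that negotiate TCP window scaling, before
the connection is established).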

I do not see this using TCP/IP Services. In fact I only seem to be able
to decrease throughput by lowering (to circa 2,000 bytes) the default
values which seem to be very high (>60,000). When a SETMODE is used
against a socket this is reflected in the SHOW DEVICE socket bytes/quota.

The default values seem to be more than adequately large under TCP/IP
Services (at least on the systems I have access to - see samples below)
but under MultiNet (at least) are reported to benefit from some tweaking
upwards.

Sample 1 (SNDBUF 1,048,576 bytes?):

HP TCP/IP Services for OpenVMS Alpha Version V5.4
on a AlphaServer DS25 running OpenVMS V7.3-2

Physical Memory Usage (pages): Total Free In Use Modified
Main Memory (1024.00MB) 131072 46202 82218 2652

Device_socket: bg1308 Type: STREAM
LOCAL REMOTE
Port: 80 0
Host: * *
Service:
RECEIVE SEND
Queued I/O 0 0
Q0LEN 0 Socket buffer bytes 0 0
QLEN 0 Socket buffer quota 1048576 1048576
QLIMIT 65535 Total buffer alloc 0 0
TIMEO 0 Total buffer limit 8388608 8388608
ERROR 0 Buffer or I/O waits 6 0
OOBMARK 0 Buffer or I/O drops 0 0
I/O completed 5 0
Bytes transferred 0 0

Options: ACCEPT REUSEADR
State: PRIV
RCV Buff: WAIT
SND Buff: None

Sample 2 (SNDBUF 61,440 bytes?):

HP TCP/IP Services for OpenVMS Alpha Version V5.6
on an AlphaServer 1200 5/533 4MB running OpenVMS V8.3

Physical Memory Usage (pages): Total Free In Use Modified
Main Memory (256.00MB) 32768 5288 23658 3822

Device_socket: bg10055 Type: STREAM
LOCAL REMOTE
Port: 80 0
Host: 131.185.30.4 *
Service:
RECEIVE SEND
Queued I/O 0 0
Q0LEN 0 Socket buffer bytes 0 0
QLEN 0 Socket buffer quota 61440 61440
QLIMIT 8 Total buffer alloc 0 0
TIMEO 0 Total buffer limit 491520 491520
ERROR 0 Buffer or I/O waits 1 0
OOBMARK 0 Buffer or I/O drops 0 0
I/O completed 0 0
Bytes transferred 0 0

Options: ACCEPT REUSEADR
State: PRIV
RCV Buff: ASYNC
SND Buff: ASYNC

Sample 3 (SNDBUF 61,440 bytes?):

HP TCP/IP Services for OpenVMS Alpha Version V5.6
on a Digital Personal WorkStation running OpenVMS V8.3

Physical Memory Usage (bytes): Total Free In Use Modified
Main Memory (MB) 576.00 124.70 418.21 33.07

Device_socket: bg23873 Type: STREAM
LOCAL REMOTE
Port: 80 0
Host: * *
Service:
RECEIVE SEND
Queued I/O 0 0
Q0LEN 0 Socket buffer bytes 0 0
QLEN 0 Socket buffer quota 61440 61440
QLIMIT 8 Total buffer alloc 0 0
TIMEO 0 Total buffer limit 491520 491520
ERROR 0 Buffer or I/O waits 1 0
OOBMARK 0 Buffer or I/O drops 0 0
I/O completed 0 0
Bytes transferred 0 0

Options: ACCEPT REUSEADR
State: PRIV
RCV Buff: ASYNC
SND Buff: ASYNC

Sample 4 (SNDBUF 61,440 bytes?):

HP TCP/IP Services for OpenVMS Alpha Version V5.4 - ECO 5
on a AlphaServer ES40 running OpenVMS V7.3-2

Physical Memory Usage (bytes): Total Free In Use Modified
Main Memory (GB) 4.00 0.91 2.99 0.08

Device_socket: bg7838 Type: STREAM
LOCAL REMOTE
Port: 80 0
Host: * *
Service:
RECEIVE SEND
Queued I/O 1 0
Q0LEN 0 Socket buffer bytes 0 0
QLEN 0 Socket buffer quota 61440 61440
QLIMIT 65535 Total buffer alloc 0 0
TIMEO 0 Total buffer limit 491520 491520
ERROR 0 Buffer or I/O waits 37132 0
OOBMARK 0 Buffer or I/O drops 0 0
I/O completed 19319 0
Bytes transferred 0 0

Options: ACCEPT REUSEADR
State: PRIV
RCV Buff: WAIT
SND Buff: None

I fail to see a pattern in these (as related to memory) but there may be
another relationship, perhaps to a SYSGEN parameter or some deeper
TCP/IP Services parameter.

Any assistance or pointers gratefully received.

--
"They're trying to kill me," Yossarian told him calmly.
"No one's trying to kill you," Clevinger cried.
"Then why are they shooting at me?" Yossarian asked.
"They're shooting at everyone," Clevinger answered. "They're trying to
kill everyone."
"And what difference does that make?"
[Joseph Heller; Catch-22]

Mark Daniel

Oct 12, 2007, 4:44:31 AM
Just thought a revised subject might increase the chance of some sort of
response to this manifestly incongruous posting.

Mark Daniel wrote:
> Trying to understand the TCPIP> SHOW DEVICE values against
>
> RECEIVE SEND
> Socket buffer bytes ... ...
> Socket buffer quota ... ...
>
> and the relationship with the setsockopt()/QIO-SETMODE and
> getsockopt()/QIO-SENSEMODE values for
> SO_SNDBUF/SO_RCVBUF/TCPIP$C_SNDBUF/TCPIP$C_RCVBUF.
>
> Also the equivalent in other VMS TCP/IP stacks.
>
> A WASD user reports that his MultiNet environment, previously with
> Apache and now with WASD, improved throughput significantly
> (particularly for large transfers) by performing a
> setsockopt(SO_SNDBUF:65534) and a QIO-SETMODE(TCPIP$C_SNDBUF:65534)
> respectively (I hope my abbreviated descriptions make enough sense).
>
> I do not see this using TCP/IP Services. In fact I only seem to be able
> to decrease throughput by lowering (to circa 2,000 bytes) the default
> values which seem to be very high (>60,000). When a SETMODE is used
> against a socket this is reflected in the SHOW DEVICE socket bytes/quota.

8< snip 8<

Just to note what I've come up with so far ...

What I was looking for was some comment on the default values for socket
send and receive buffers under TCP/IP Services and MultiNet, and an
explanation of how they might influence throughput.

I did receive private correspondence (not from an 'internals' Engineer)
which in part reads:

> I'm not sure what it is you're asking. Is it simply an explanation of why
> MultiNet benefits from tweaking the SO_SNDBUF? The MultiNet Programmer's
> Reference says that the default value is 6144 for TCP and 2048 for UDP.
> Those remarks are made in the context of documenting the MultiNet-specific
> socket library but it wouldn't surprise me if they were driver defaults that
> applied even in UCX mode.


>
>>I do not see this using TCP/IP Services. In fact I only seem to be able
>>to decrease throughput by lowering (to circa 2,000 bytes) the default
>>values which seem to be very high (>60,000). When a SETMODE is used
>>against a socket this is reflected in the SHOW DEVICE socket bytes/quota.
>

> That's not inconsistent with the default values documented for MultiNet.

Interesting. I don't have MultiNet on my development system though I do
have access to it on the VSM site

Process Software MultiNet V5.1 Rev A-X

so I could experiment.

The original correspondent was quite specific about the performance
improvement:

> One of the things I have on my web server are some largish flash
> videos. WASD was downloading them at a rate of about 55 KB/sec
> whereas Apache was downloading them at about 400 KB/sec. I had a
> similar problem when I first deployed Apache and, to improve the
> situation, I used an option Apache has, called SendBufferSize. This
> is a configuration item tied to the SO_SNDBUF socket option. So, I
> modified WASD to set this socket option and hardcoded in the value I
> had been using with Apache (65534). With this change, WASD is now
> downloading the large files at a rate of around 600 KB/sec (all tests
> done to my DSL modem).

An order of magnitude! It was definitely worth investigating. As a
consequence my in-development WASD baseline now has two additional
configuration parameters:

[SocketSizeRcvBuf] <integer>
[SocketSizeSndBuf] <integer>
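
For example (values purely illustrative; 65534 is what the original
correspondent settled on):

[SocketSizeSndBuf] 65534
[SocketSizeRcvBuf] 65534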

WATCH now also allows easy observation of unconfigured buffer sizes on
the actual BG device. On MultiNet these roughly correspond to the
quantities referred to by my private correspondent:

|03:06:57.20 ... NETWORK SENSE %X00000001 sndbuf:7100 rcvbuf:7300|

Unconfigured buffer sizes under TCP/IP Services (V5.6-9):

|03:14:28.89 ... NETWORK SENSE %X00000001 sndbuf:62780 rcvbuf:62780|

Quite a difference!
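
(The WATCH lines above are simply a SENSEMODE/getsockopt() against the
freshly created socket. For anyone wanting to check their own stack's
defaults, a minimal socket-API sketch:)

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* report the stack's default buffer sizes from an as-yet untouched
   socket; socklen_t may be plain unsigned int on older compilers */
void report_buffers (int sock)
{
   int sndbuf = 0, rcvbuf = 0;
   socklen_t len = sizeof(sndbuf);
   getsockopt (sock, SOL_SOCKET, SO_SNDBUF, (char*)&sndbuf, &len);
   len = sizeof(rcvbuf);
   getsockopt (sock, SOL_SOCKET, SO_RCVBUF, (char*)&rcvbuf, &len);
   printf ("sndbuf:%d rcvbuf:%d\n", sndbuf, rcvbuf);
}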

I have not performed throughput tests with changed buffer sizes on VSM
as it's a production system but the original correspondence is
convincing enough.

I wonder how many MultiNet (TCPware?) sites could obtain similar
improvements (10x) to their Apache, WASD, or other demanding TCP/IP
application throughput using an appropriate
setsockopt()/SETMODE/configuration on the buffer sizes?

Am I the only one (again) who was unaware of this?

I'm also curious as to the why and how of TCP/IP Services' dramatic
contrast with MultiNet, and why PSC has not taken a similar approach.

--
Milo Minderbinder: Frankly, I'd like to see the government get out of
war altogether and leave the whole field to private industry. If we pay
the government everything we owe it, we'll only be encouraging
government control and discouraging other individuals from bombing their
own men and planes.
[Joseph Heller; Catch-22]

Richard Maher

Oct 12, 2007, 7:28:34 AM
Hi Mark,

It's not that I'm not interested, it's just that I don't know the
answers to your questions, but thanks for sharing your observations and
results. For the record, this is where you'd apply Mark's findings to a
Tier3 Application Server (see Chapter 2 in the Tier3 Client/Server
Development Manual for more):

2.1.1.7 TCP/IP Options
If you have not chosen TCP/IP as one of the possible network transports
for your server application then the TCP/IP Options parameters will be
protected against user input.

2.1.1.7.1 Socket
Enter the TCP/IP Port Number that you wish Tier3 to listen on for
connection requests to your server application. Valid port numbers are
in the range 1 to 65535.

2.1.1.7.2 OOB Inline
When this parameter is set to "Y", out-of-band data is placed in the
normal input queue. No User interrupt data will be delivered to your
INTERRUPT User Action Routine. This option corresponds to the Socket
option OOBINLINE.

2.1.1.7.3 Delay
This parameter corresponds to the TCP option NODELAY. When set to "N"
the Delay parameter instructs TCP/IP not to delay sending any data in
order to be able to merge packets.

2.1.1.7.4 Linger
The Linger parameter specifies the number of seconds to delay closure of
a socket if there are unsent messages in the queue. Corresponds to the
Socket option LINGER.

2.1.1.7.5 Probe Idle
Probe Idle specifies the time interval, in seconds, between Keepalive
probes. Corresponds to the TCP option PROBE_IDLE.

2.1.1.7.6 Drop Idle
Drop Idle specifies the time interval, in seconds, after which an idle
connection will be dropped. Corresponds to the TCP option DROP_IDLE.

2.1.1.7.7 Send Quota
The default for Send Quota is the larger of Buffer Size and 4096. This
parameter corresponds to the socket option SNDBUF.

2.1.1.7.8 Recv Quota
The default for Recv Quota is the larger of Buffer Size and 4096. This
parameter corresponds to the socket option RCVBUF.

Cheers Richard Maher

"Mark Daniel" <mark....@vsm.com.au> wrote in message
news:13gud17...@corp.supernews.com...

Mark Daniel

Nov 2, 2007, 2:28:33 PM
More follow-up on (FWIW) ... TCP/IP Services and SNDBUF/RCVBUF

Using Lynx Version 2.8.5dev.16 to download, but not save to disk, a
4861 kB file via WASD v9.2; from the MultiNet system out via a 100 Mbps
DE500, into a shared ISP, through my (currently) 7.9 Mbps DSL, to my
test-bench PWS500. Five runs, with no significant variation in duration.

The MultiNet system:

Process Software MultiNet V5.1 Rev A-X,
COMPAQ AlphaServer DS10L 466 MHz, OpenVMS AXP V7.3

Using the default SNDBUF size reported above (7100 bytes):

38 seconds, for a throughput of 128 kB/s

Adjusting the SNDBUF to 65000:

9 seconds, for a throughput of 540 kB/s

This is in excess of 4x the throughput.

The moral of the story appears to be that if you are using MultiNet (and
perhaps TCPware - I don't have access to a system using it) to source
large data sets you should adjust the socket SNDBUF up from the default
(and possibly RCVBUF if sinking them).

--
Catch-22 did not exist, he was positive of that, but it made no
difference. What did matter was that everyone thought it existed, and
that was much worse, for there was no object or text to ridicule or
refute, to accuse, criticize, attack, amend, hate, revile, spit at, rip
to shreds, trample upon or burn up.
[Joseph Heller; Catch-22]

JF Mezei

Nov 2, 2007, 4:19:11 PM
Mark Daniel wrote:
> More follow-up on (FWIW) ... TCP/IP Services and SNDBUF/RCVBUF
> The MultiNet system:

> Using the default SNDBUF size reported above (7100 bytes):
>
> 38 seconds, for a throughput of 128 kB/s
>
> Adjusting the SNDBUF to 65000:
>
> 9 seconds, for a throughput of 540 kB/s
>


On TCP/IP Services, I adjusted
tcp_recvspace = 129904
tcp_sendspace = 129904

(In SYS$STARTUP:TCPIP$SYSTARTUP.COM:)
$ SYSCONFIG = "$SYS$SYSTEM:TCPIP$SYSCONFIG.EXE"
$ IFCONFIG = "$SYS$SYSTEM:TCPIP$IFCONFIG.EXE"
$!
$ SYSCONFIG -r inet tcp_recvspace=129904
$ SYSCONFIG -r inet tcp_sendspace=129904


This affects what is more commonly called the window size/scaling.

I am not sure if this is the same as the sndbuf thing on MultiNet.

Mark Daniel

Nov 2, 2007, 7:19:40 PM
JF Mezei wrote:
> Mark Daniel wrote:
>
>> More follow-up on (FWIW) ... TCP/IP Services and SNDBUF/RCVBUF
>> The MultiNet system:
>> Using the default SNDBUF size reported above (7100 bytes):
>>
>> 38 seconds, for a throughput of 128 kB/s
>>
>> Adjusting the SNDBUF to 65000:
>>
>> 9 seconds, for a throughput of 540 kB/s
>>
>
>
> On TCP/IP Services, I adjusted
> tcp_recvspace = 129904
> tcp_sendspace = 129904
>
> (In SYS$STARTUP:TCPIP$SYSTARTUP.COM :
> $ SYSCONFIG = "$SYS$SYSTEM:TCPIP$SYSCONFIG.EXE"
> $ IFCONFIG = "$SYS$SYSTEM:TCPIP$IFCONFIG.EXE"
> $!
> $SYSCONFIG -r inet tcp_recvspace=129904
> $SYSCONFIG -r inet tcp_sendspace=129904

Thanks. I had forgotten that Gerard Labadie showed me years ago (merci)
that this could be done, in an effort to improve the performance of
earlier versions of TCP/IP Services. It is an adoption of a U*x tool:

http://www.sysgroup.fr/conan/sys$common/syshlp/TCPIP$UCP_HELP.HLB?key=sysconfig

The sysconfig command is used to query or modify the [TCP/IP] kernel
subsystem configuration. You use this command to reconfigure
subsystems already in the kernel and to ask for information about
(query) subsystems in the kernel.
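
For example, to check the current value (assuming the query syntax of
the Tru64 original carried over):

$ SYSCONFIG -q inet tcp_sendspace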

> This affects what is more commonly called the window size/scaling.
>
> I am not sure if this is the same as the sndbuf thing on multinet.

Both adjust the size of the buffer allocated for the particular socket
data-transfer direction (send or receive). The SYSCONFIG mechanism you
describe does it on a system-wide basis, while
setsockopt(SO_SNDBUF)/QIO-SETMODE(TCPIP$C_SNDBUF) performs it
programmatically on a per-socket basis.
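
For completeness, a sketch of the per-socket $QIO route under TCP/IP
Services, following the nested item-list pattern of the programming
documentation (channel assignment to the BG device omitted; treat this
as an approximation rather than tested code):

#include <iodef.h>
#include <iosbdef.h>
#include <starlet.h>
#include <tcpip$inetdef.h>

/* one TCPIP$C_SNDBUF item wrapped in a TCPIP$C_SOCKOPT item list,
   passed as p5 of an IO$_SETMODE on the socket's BG-device channel */
int set_sndbuf_qio (unsigned short chan, int bytes)
{
   struct ilist { unsigned short len, code; void *ptr; };
   struct ilist sndbuf_item, option_list;
   IOSB iosb;
   int status;

   sndbuf_item.len  = sizeof(bytes);
   sndbuf_item.code = TCPIP$C_SNDBUF;
   sndbuf_item.ptr  = &bytes;
   option_list.len  = sizeof(sndbuf_item);
   option_list.code = TCPIP$C_SOCKOPT;
   option_list.ptr  = &sndbuf_item;

   status = sys$qiow (0, chan, IO$_SETMODE, &iosb, 0, 0,
                      0, 0, 0, 0, &option_list, 0);
   if (status & 1) status = iosb.iosb$w_status;
   return status;
}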

This post continues a thread in which I observed, and queried why, the
default values of (at least the reported, later versions of) TCP/IP
Services were so much larger than MultiNet's (and by default provided
greater throughput). [No input from the PSC camp.]

Google Groups for "multinet sndbuf wasd":

http://groups.google.com/group/comp.os.vms/browse_thread/thread/38de10e7a47173bb/743dfb2638c6304f

--
The cure for a fallacious argument is a better argument, not the
suppression of ideas.
[Carl Sagan; The Demon Haunted World]
