On one server, a Java application needs to work in client/server mode
across the LAN.
In my setup, the loopback interface ends up heavily used!
When the client is on a remote Windows machine, the time to run this
application is good (in line with the type of LAN in use).
When the client is on the same machine as the server (AIX), the run
time is multiplied by 100 (and far too slow).
When I analyze the throughput on the loopback of my AIX machine, I
only reach around 50 KB/s!
That is a long way from Gigabit Ethernet...
Test performed:
I created a file of 50 KB. I need to send it 1000 times over the LAN.
I do an rcp in a loop to simulate that. The time command gives me
about 1000 seconds for the whole run. That is far too long!
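For reference, the loop I use looks roughly like this (the file name
and target host below are only placeholders):

# send the 50 KB test file 1000 times over the LAN and time the run
time ksh -c 'i=0
while [ $i -lt 1000 ]
do
    rcp /tmp/test50k someserver:/tmp/test50k
    i=$(( i + 1 ))
done'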
My machine is a 595 (a single partition on it) running AIX 5.3 TL5
CSP. I don't have any virtual adapters, only physical Gigabit
Ethernet ones.
I've already tried to modify no parameters such as tcp_nodelayack,
sb_max, mtu, tcp_sendspace and tcp_recvspace, without any great
success!
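For reference, I check and change them roughly like this (the value
shown is only an example, not what I ended up keeping):

# show the current values of the tunables mentioned above
no -a | egrep "tcp_nodelayack|sb_max|tcp_sendspace|tcp_recvspace"
# change one of them
no -o tcp_sendspace=262144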
If someone has a good idea, feel free to send it my way.
Regards,
Charles
?!? If you are sending over the LAN, there should be nothing on
loopback.
> When the client is on a remote Windows machine, the time to run
> this application is good (in line with the type of LAN in use).
> When the client is on the same machine as the server (AIX), the run
> time is multiplied by 100 (and far too slow).
Do you have a system call trace of the application? I'll go out on a
limb and guess that it is making a series of "small" writes to the
socket, which happen to be larger than, or often sum to, the MSS (TCP
Maximum Segment Size) when running over the LAN, but which never
reach the likely rather larger MSS over loopback.
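On AIX, a truss of the running server process during one of the
transfers ought to show the size of the individual writes (the PID
below is a placeholder):

# attach to the server process and log its system calls to a file
truss -o /tmp/server.truss -p <PID>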
Compare the MTU (Maximum Transmission Unit) for the LAN interface and
for loopback.
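On AIX, something like this shows the MTU of every configured
interface, lo0 included, in one listing:

netstat -in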
> When I analyze the throughput on the loopback of my AIX machine, I
> only reach around 50 KB/s!
> That is a long way from Gigabit Ethernet...
> Test performed:
> I created a file of 50 KB. I need to send it 1000 times over the
> LAN.
> I do an rcp in a loop to simulate that. The time command gives me
> about 1000 seconds for the whole run. That is far too long!
Is that the way this Java application works? Is it establishing a
connection, sending a measly 50 KB and then closing the connection?
What does a netperf TCP_STREAM test show? How about netperf TCP_RR?
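Something along these lines, assuming netserver is already running on
the AIX box, should give a baseline for loopback:

# bulk-transfer throughput over loopback
netperf -H localhost -t TCP_STREAM
# request/response transaction rate
netperf -H localhost -t TCP_RR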
rick jones
--
No need to believe in either side, or any side. There is no cause.
There's only yourself. The belief is in your own precision. - Jobert
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
Thanks for your answer.
On the first point, I totally agree with you: the loopback isn't used
when the LAN is used.
On the second point, sorry, I have not taken a system call trace
during this operation!
And if it is the case you describe above, what would the solution be
for the loopback?
On the third point, I have already compared the MTUs, and indeed I
have 1500 on all the Ethernet adapters and 16896 on the loopback.
But I have tried setting the loopback MTU to 1500 without any
interesting result.
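For what it is worth, the change I made was along these lines (I am
quoting from memory, so the exact command may differ):

# align the loopback MTU with the Ethernet adapters
chdev -l lo0 -a mtu=1500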
In fact, for the Java application I only have feedback from the
application guy. He told me the Java program communicates through a
large number of connections, each carrying a small payload of around
50 KB.
To begin with, I have tested with 1000 connections, but in the future
this Java application will reach around 100,000 connections... That
is the reason I need a huge throughput on the loopback.
I am going to run a netperf check as soon as possible and I will post
the results.
Since my last change to the tcp_nodelayack parameter, the throughput
has improved: it now takes 10 seconds rather than 200 seconds to send
the 50 KB file 1000 times.
For the moment, I am keeping this parameter on.
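Concretely, the only change currently active boils down to something
like this:

# disable delayed ACKs (the change that made the difference here)
no -o tcp_nodelayack=1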
But what I can't understand is the throughput itself. When I
calculate it, I previously got only 120 KB/s, and now, with this
parameter change, I reach 384 KB/s. No more...
Why can't I get something like Gigabit Ethernet speed on the
loopback? I would like to see at least some MB/s on it!
Many thanks for your feedback.
Cheers,
Charles
On 11 jan, 20:22, Rick Jones <rick.jon...@hp.com> wrote:
1 - Unless you have a bug, loopback has great performance.
2 - nodelayack looks cute until you find out that for large amounts
of traffic it doubles the number of packets (or worse), increases the
time spent on IRQ servicing and chokes the LAN.
3 - If you want high throughput, you can't do a 50k ping-pong. You have
to push as much data at once as possible.
4 - To push large data chunks, you should have large data buffers:
set rfc1323=1 and tcp_sendspace/tcp_recvspace to 131072 (see the
commands just below).
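With the no command, that is roughly:

# enable TCP window scaling and larger per-socket buffers
no -o rfc1323=1
no -o tcp_sendspace=131072
no -o tcp_recvspace=131072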
Test the performance:
dd if=/dev/zero bs=1m | rsh loopback dd ibs=1m obs=1m of=/dev/null