RMI connection refused... eventually

Qu0ll

Dec 10, 2007, 2:37:51 PM
I have a simple RMI server and a stress testing application is able to
connect to it about 400 times and then suddenly future connection attempts
result in a connection refused exception.

What could be the possible reasons for refusing a connection when the
connection is obviously permitted by the security manager and firewall
initially? Is there some parameter that controls the maximum number of RMI
connections? I don't think it is memory related, as the server is running
with a 1GB heap and varying it makes no difference.

This is the exception:

java.rmi.ConnectException: Connection refused to host: 10.1.1.3; nested exception is:
        java.net.ConnectException: Connection refused: connect
        at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601)
        at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198)
        at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110)
        at com.qu0ll.ServerDaemon_Stub.registerApplet(Unknown Source)
        at com.qu0ll.StressTester$AppletThread.run(StressTester.java:73)
Caused by: java.net.ConnectException: Connection refused: connect
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        at java.net.Socket.connect(Socket.java:519)
        at java.net.Socket.connect(Socket.java:469)
        at java.net.Socket.<init>(Socket.java:366)
        at java.net.Socket.<init>(Socket.java:180)
        at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
        at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
        at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
        ... 5 more
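
For what it's worth, each stress thread boils down to roughly the following.
This is a simplified sketch only, not the real code: ServerDaemon, the URL and
the argument are placeholders, and the per-thread lookup is a guess, but the
registerApplet call is the one in the stack trace:

import java.rmi.Naming;

// Simplified sketch of one stress thread (names are placeholders).
class AppletThread extends Thread {
    public void run() {
        try {
            // Look up the remote object and make a single call.  Every call
            // that can't reuse an idle connection opens a new TCP socket.
            ServerDaemon daemon =
                    (ServerDaemon) Naming.lookup("rmi://10.1.1.3/ServerDaemon");
            daemon.registerApplet("applet-" + getId());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}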

--
And loving it,

-Q
_________________________________________________
Qu0llS...@gmail.com
(Replace the "SixFour" with numbers to email me)

Esmond Pitt

Dec 10, 2007, 6:16:46 PM
Qu0ll wrote:
> I have a simple RMI server and a stress testing application is able to
> connect to it about 400 times and then suddenly future connection
> attempts result in a connection refused exception.

Is the server a Windows box and the client a Unix box? Windows has a
nasty habit of issuing resets when the listen backlog queue fills, and
there is compensating behaviour in the Windows implementation of connect().
AFAIK the Unix connect() implementations don't have that, so when they get
a reset you see it as a refused connection.
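
If that is what's happening, one cheap experiment on the server side is to
export the remote object with a larger listen backlog via a custom
RMIServerSocketFactory. A sketch only (the class name is made up, and I'm
assuming the object is exported as a UnicastRemoteObject):

import java.io.IOException;
import java.net.ServerSocket;
import java.rmi.server.RMIServerSocketFactory;

// Made-up name: raises the listen backlog from the default of 50 so bursts
// of incoming connections are queued instead of being reset.
class BigBacklogServerSocketFactory implements RMIServerSocketFactory {
    public ServerSocket createServerSocket(int port) throws IOException {
        return new ServerSocket(port, 500);
    }
}

// Then export with it (a null client socket factory means "use the default"):
// UnicastRemoteObject.exportObject(daemon, 0, null, new BigBacklogServerSocketFactory());

If the refusals still start at roughly the same number of connections, the
backlog isn't the problem.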

Nigel Wade

Dec 11, 2007, 5:35:23 AM
Qu0ll wrote:

> I have a simple RMI server and a stress testing application is able to
> connect to it about 400 times and then suddenly future connection attempts
> result in a connection refused exception.
>
> What could be the possible reasons for refusing a connection when the
> connection is obviously permitted by the security manager and firewall
> initially? Is there some parameter that controls the maximum number of RMI
> connections? I don't think it is memory related, as the server is running
> with a 1GB heap and varying it makes no difference.
>
> This is the exception:
>
> java.rmi.ConnectException: Connection refused to host: 10.1.1.3; nested
> exception is:
> java.net.ConnectException: Connection refused: connect


I saw exactly the same thing when I was doing applet/servlet comms. My applet
on startup had the option to load the last 24 hours of data, and it was
programmed to read each data record on a new socket. If I did this and then
watched the network connections using netstat, I could see that Windows wasn't
closing the sockets when they were actually closed; instead it seemed to
"batch" the closes. There would be hundreds of sockets in the TIME_WAIT state,
then a whole load of them would all close together.

This caused problems when the sockets were being opened faster than the
"batched" closes shut them down. The system quickly reached its limit on open
sockets. Attempting a new connection whilst the system is in this state results
in connection refused.

This situation didn't occur on Linux, which shuts down sockets when they are
actually closed. I presume it's just the way Microsoft have implemented their
TCP/IP stack (or, if you are a believer in conspiracies, they've done it
deliberately to encourage you to buy the much more expensive server versions of
Windows).

--
Nigel Wade, System Administrator, Space Plasma Physics Group,
University of Leicester, Leicester, LE1 7RH, UK
E-mail : n...@ion.le.ac.uk
Phone : +44 (0)116 2523548, Fax : +44 (0)116 2523555

Qu0ll

Dec 11, 2007, 6:13:50 AM
"Nigel Wade" <n...@ion.le.ac.uk> wrote in message
news:fjlp5c$g2s$1...@south.jnrs.ja.net...

Thanks Nigel - I figured it had something to do with a limitation of
Windows. It's actually a Vista x64 machine that I am running it on at the
moment. So, I guess then I should be hosting this server on a Linux
machine? Is that the only way around this problem?

Lew

Dec 11, 2007, 9:19:20 AM
Qu0ll wrote:
> Thanks Nigel - I figured it had something to do with a limitation of
> Windows. It's actually a Vista x64 machine that I am running it on at
> the moment. So, I guess then I should be hosting this server on a Linux
> machine? Is that the only way around this problem?

Certainly not. There's Solaris, FreeBSD, ...

--
Lew

Nigel Wade

Dec 11, 2007, 10:40:12 AM
Qu0ll wrote:

The problem I had was on the client, not the server. The server was/is running
Linux. I have no idea how a server process running under Windows (or Windows
itself) would react to that level of socket creation/destruction. You can
easily see if this is the problem by running netstat in a command shell. It
should show you every network connection and its current state. If you see
lots of sockets in one of the shutdown states (TIME_WAIT, CLOSE_WAIT,
FIN_WAIT_1/2 etc.) then you have a similar problem to mine.
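
On the Windows box that would be something like:

netstat -an | findstr TIME_WAIT

which lists just the sockets stuck in that state (drop the findstr to see
every connection).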

Is your stress test a realistic load? Is there a way to change the load so that
it doesn't require the creation/destruction of sockets at such a high rate? I
thought one of the improvements in Vista was the TCP/IP stack, although it may
well be that this method of closing sockets down is more efficient for a client
which isn't expected to be opening/closing sockets at such a high rate. After
all, this is not a normal load for a client and the Windows most people have is
the client version.

I worked around the problem by changing the protocol so the client didn't need
to create (and destroy) so many sockets.
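
In RMI terms the equivalent would be to look the stub up once and push all the
calls through it, rather than doing a fresh lookup per simulated client. A
sketch only (interface name, URL and argument are made up, and I'm assuming
that's not what the stress tester does already):

import java.rmi.Naming;

public class StressTesterReuse {
    public static void main(String[] args) throws Exception {
        // One lookup, one stub, many calls.  RMI keeps idle client-side
        // connections in a pool, so sequential calls like these reuse a
        // socket instead of opening and closing one per call.
        ServerDaemon daemon =
                (ServerDaemon) Naming.lookup("rmi://10.1.1.3/ServerDaemon");
        for (int i = 0; i < 400; i++) {
            daemon.registerApplet("applet-" + i);
        }
    }
}

Whether that still counts as a realistic stress test is another question, of
course.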
