
TCP ports telnet, ftp, ssh hang in CLOSE_WAIT state


sandy

Feb 24, 2006, 1:46:29 AM
There is an issue when I scan the ports with the nmap tool. The telnet, FTP
and SSH ports become unreachable and the switch hangs. To make the ports
reachable again I need to reboot the switch.

I initiated around 50 parallel scanning scripts with the nmap utility
and scanned all the ports at a very high rate. After some time the
device hangs and no telnet, ftp or ssh sessions can be initiated.

When I debugged the system for more information, dumping the TCP port
states with "inetstatShow" showed that there were 8 TCP sockets each on
FTP (port 21), telnet (port 23) and SSH (port 22) stuck in the
CLOSE_WAIT state. (CLOSE_WAIT means the peer has closed its end of the
connection but the local application has not yet closed the socket.)
I suppose this condition is the result of some software timing issue in
synchronizing the opening and closing of TCP connections. Is it possible
for this to happen when TCP connections are opened and closed very quickly?
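
For illustration, this is the kind of situation that produces CLOSE_WAIT.
A minimal server sketch (plain BSD sockets, no error handling; port 2300
is just an arbitrary example value) that parks one connection in that state:

/* close_wait_demo.c - sketch: accept a connection and never close it.
 * Once the peer disconnects, inetstatShow (or netstat) shows this
 * socket stuck in CLOSE_WAIT until the process calls close(). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main (void)
{
    struct sockaddr_in addr;
    int lsock, conn;

    lsock = socket (AF_INET, SOCK_STREAM, 0);
    memset (&addr, 0, sizeof (addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons (2300);
    addr.sin_addr.s_addr = htonl (INADDR_ANY);
    bind (lsock, (struct sockaddr *) &addr, sizeof (addr));
    listen (lsock, 5);

    conn = accept (lsock, NULL, NULL);   /* wait for one client */
    printf ("accepted fd %d\n", conn);

    /* When the client closes, the kernel ACKs its FIN and moves the
     * socket to CLOSE_WAIT. Since we never call close(conn), it
     * stays there for as long as this process lives. */
    for (;;)
        sleep (60);
}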

All other ports (80 and the rest) are not affected, maybe because the
timing problem is alleviated by the amount of processing involved in an
HTTP or any other session.

Moreover, I tried writing code to dump the corresponding FDs of the
sessions hung in CLOSE_WAIT. They all show 0, so I think all the fds
were already cleared and only the stale PCBs remain, holding on to the
sockets. I am not sure, but that seems possible.
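
Roughly, the dump code walks the TCP PCB list like this (a sketch only,
assuming a Net/3-derived stack like the one in VxWorks 5.5, where the
global head tcb chains all TCP PCBs in a circular inp_next list; the
names should be verified against the actual stack headers):

/* closeWaitDump - sketch: walk the global TCP PCB list and print
 * every PCB whose connection is stuck in CLOSE_WAIT, along with its
 * owning socket pointer (NULL would mean the fd layer has already
 * let go and only the stale PCB remains). Structure and global
 * names assume a Net/3-derived stack; check your headers. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/in_pcb.h>
#include <netinet/tcp.h>
#include <netinet/tcp_timer.h>
#include <netinet/tcp_var.h>
#include <netinet/tcp_fsm.h>

extern struct inpcb tcb;            /* head of the TCP PCB list */

void closeWaitDump (void)
{
    struct inpcb *inp;
    struct tcpcb *tp;

    for (inp = tcb.inp_next; inp != &tcb; inp = inp->inp_next)
    {
        tp = (struct tcpcb *) inp->inp_ppcb;
        if (tp != NULL && tp->t_state == TCPS_CLOSE_WAIT)
            printf ("CLOSE_WAIT: pcb %p lport %d socket %p\n",
                    (void *) inp, ntohs (inp->inp_lport),
                    (void *) inp->inp_socket);
    }
}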

I was able to remove the stale PCBs with the in_pcbdetach function.
However, I really do not know the consequences of doing this and would
be happy if anybody could throw some light on it.

The problem statement in one line could be: "When the TCP ports telnet,
ftp and ssh are scanned with a port-scanning tool at a high rate, some
of the TCP socket connections are not closed properly, hang in the
CLOSE_WAIT state and are never released. The limit for such hanging
connections seems to be 8, and once this limit is reached ports 21, 22
and 23 are no longer reachable."

Does anybody have any clue about this? I am really in need of some help
from this group.

tage kristensen

Feb 24, 2006, 2:06:54 PM
I have observed a similar problem on the system we are using (VxWorks 5.5.1).

Try checking the net system buffer pool with the netStackSysPoolShow
command.

You may be able to fix the problem by increasing the number of buffers
in the net system buffer pools.
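
If I remember correctly, the system pool sizes come from the NUM_SYS_*
macros (the defaults live in netBufLib.h / configNetParams.h), so they
can be overridden from the BSP's config.h. As an illustration only; the
numbers below are examples and must be sized for your own connection load:

/* config.h (BSP) - example override of the network stack system
 * pool sizes on VxWorks 5.5.  All values are illustrative only. */
#undef  NUM_SYS_64
#define NUM_SYS_64       256            /* 64-byte clusters  */
#undef  NUM_SYS_128
#define NUM_SYS_128      1024           /* 128-byte clusters */
#undef  NUM_SYS_256
#define NUM_SYS_256      1024           /* 256-byte clusters */
#undef  NUM_SYS_512
#define NUM_SYS_512      1024           /* 512-byte clusters */

/* the mBlk and clBlk counts have to grow along with the clusters */
#undef  NUM_SYS_CL_BLKS
#define NUM_SYS_CL_BLKS  (NUM_SYS_64 + NUM_SYS_128 + \
                          NUM_SYS_256 + NUM_SYS_512)
#undef  NUM_SYS_MBLKS
#define NUM_SYS_MBLKS    (2 * NUM_SYS_CL_BLKS)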


"sandy" <sandeep....@wipro.com> wrote in message
news:1140763589....@z34g2000cwc.googlegroups.com...

sandy

Feb 27, 2006, 12:35:29 AM
Hi, thanks for your reply.
I have not been able to figure out this problem so far.

The output is as follows:

Working: [Kernel]->netStackSysPoolShow
type number
--------- ------
FREE : 3008
DATA : 0
HEADER : 0
SOCKET : 0
PCB : 314
RTABLE : 0
HTABLE : 0
ATABLE : 0
SONAME : 0
ZOMBIE : 0
SOOPTS : 0
FTABLE : 0
RIGHTS : 0
IFADDR : 0
CONTROL : 0
OOBDATA : 0
IPMOPTS : 0
IPMADDR : 6
IFMADDR : 0
MRTABLE : 0
TEMP : 0
SECA : 0
FTABLE : 0
IPMADDR : 0
IFADDR : 0
SONAME : 0
IP6RR : 0
RR_ADDR : 0
IP6FW : 0
MRTABLE : 0
IPMOPTS : 0
IP6OPT : 0
IP6NDP : 0
PCB : 0
STF : 0
NETADDR : 0
GIF : 0
TOTAL : 3328
number of mbufs: 3328
number of times failed to find space: 0
number of times waited for space: 0
number of times drained protocols for space: 0
__________________
CLUSTER POOL TABLE
_______________________________________________________________________________
size clusters free usage
-------------------------------------------------------------------------------
64 128 125 3
128 512 348 2852
256 512 512 29
512 512 362 2822
-------------------------------------------------------------------------------
value = 0 = 0x0

On Monday (today) I ran netStackSysPoolShow again and the output is
exactly the same as above.


I don't know how to proceed from here. Could you please give a brief
idea of what this output means and how to increase the buffer space?
It would also be helpful if you could share more information about the
problem you faced.

tage kristensen

Feb 28, 2006, 2:16:27 PM
The numbers from your netStackSysPoolShow look okay to me (the "number
of times failed to find space" counter is 0, so the pools are not being
exhausted).

In our case the system pools were configured with too few buffers to
handle all the TCP connections we used.

The system pools were therefore totally drained of buffers.

The symptoms we observed were TCP connections stuck in the CLOSE_WAIT state.

Some of the connections stayed in the CLOSE_WAIT state forever.

This problem disappeared from our platform when the number of buffers in
the system pools was increased.

Please note that we use VxWorks 5.5.1 with an IPv4 TCP/IP stack.


sandy

Mar 1, 2006, 1:53:31 AM
Actually, I am not able to narrow down the cause of the problem, and
since you say the buffers also seem fine, I have just written a hack
which detects this state and removes the stale PCBs by calling
in_pcbdetach on them.

I also found that when my system reaches around 8 stale PCBs in the
CLOSE_WAIT state, no further ftp or telnet session can be initiated,
since in the hung state no further socket can be created. Once I remove
the stale PCBs, the system returns to its normal working condition.
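
For reference, the hack looks roughly like this (again only a sketch
against a Net/3-style tcb list; note that in_pcbdetach(), as far as I
can see, frees the PCB and its socket but not the attached tcpcb, which
may be exactly the unknown consequence I mentioned earlier, and it
should only run with the stack quiet, e.g. from tNetTask context or at
splnet()):

/* staleCloseWaitReap - sketch of the workaround: walk the TCP PCB
 * list and detach every PCB stuck in CLOSE_WAIT.  The next pointer
 * is saved first because in_pcbdetach() unlinks the entry.  This is
 * a workaround, not a fix: the tcpcb reached via inp_ppcb is not
 * released here, and TCP timers may still reference it.
 * in_pcbdetach() is declared in netinet/in_pcb.h. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/in_pcb.h>
#include <netinet/tcp.h>
#include <netinet/tcp_timer.h>
#include <netinet/tcp_var.h>
#include <netinet/tcp_fsm.h>

extern struct inpcb tcb;            /* head of the TCP PCB list */

int staleCloseWaitReap (void)
{
    struct inpcb *inp;
    struct inpcb *next;
    struct tcpcb *tp;
    int reaped = 0;

    for (inp = tcb.inp_next; inp != &tcb; inp = next)
    {
        next = inp->inp_next;       /* save before detaching */
        tp = (struct tcpcb *) inp->inp_ppcb;
        if (tp != NULL && tp->t_state == TCPS_CLOSE_WAIT)
        {
            in_pcbdetach (inp);     /* unlinks and frees the PCB */
            reaped++;
        }
    }
    return reaped;                  /* number of PCBs removed */
}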
