
Running out of ports


root
Jan 14, 2011, 12:17:49 AM

If I run a command such as:
for file in *
do
rsh $file somewhere
done

and the number of files is large, somewhere
along the line I get a message such as:
"no available ports"

and the file transfers fail thereafter.

If I wait a little while I can resume file
transfers.

It seems that the system is opening a new port
for each file.

Is there anything I can do to avoid the problem?
I don't want to put a sleep command in the loop
since that would greatly delay the operation.

TIA

root
Jan 14, 2011, 6:50:50 AM

root <NoE...@home.org> wrote:
> If I run a command such as:
> for file in *
> do
> rsh $file somewhere

Correction, that should have been:

rcp $file somewhere

Lew Pitcher
Jan 14, 2011, 8:26:21 AM

On January 14, 2011 00:17, in alt.os.linux.slackware, NoE...@home.org
wrote:

> If I run a command such as:
> for file in *
> do
> rsh $file somewhere

rcp $file somewhere


> done
>
> and the number of files is large, somewhere
> along the line I get a message such as:
> "no available ports"

Yes, I'd expect that.

> and the file transfers fail thereafter.
>
> If I wait a little while I can resume file
> transfers.
>
> It seems that the system is opening a new port
> for each file.

Sort of.
Actually, each new rcp process opens its own new client port, which
exhausts the pool of available ports. Remember, a TCP port stays in the
TIME_WAIT state for 2MSL, reserving the port for a while even after the
process has closed it.

> Is there anything I can do to avoid the problem?

Yes: don't use up all the available ports.

> I don't want to put a sleep command in the loop
> since that would greatly delay the operation.

OK, sleep(1)'ing for 2MSL is one way to do it, but not the only way.

What you want to do is 'batch' your files into the rcp command, so that the
command looks like...
rcp $file1 $file2 $file3 destination

You can probably get xargs(1) to do the batching for you, for example:
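A rough sketch of that approach, assuming the files sit in the current
directory (as in the original loop), that "somewhere:/target/dir" stands in
for the real destination, and GNU xargs; it sends 100 files per rcp
invocation instead of one:

printf '%s\0' * | xargs -0 -n 100 sh -c 'rcp "$@" somewhere:/target/dir' _

The sh -c wrapper is only there because rcp wants the destination as its
last argument, after the batch of filenames that xargs appends.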

HTH
--
Lew Pitcher
Master Codewright & JOAT-in-training | Registered Linux User #112576
Me: http://pitcher.digitalfreehold.ca/ | Just Linux: http://justlinux.ca/
---------- Slackware - Because I know what I'm doing. ------


D Herring
Jan 14, 2011, 8:29:04 PM

On 01/14/2011 06:50 AM, root wrote:
> root<NoE...@home.org> wrote:
>> If I run a command such as:
>> for file in *
>> do
>> rsh $file somewhere
>
> Correction, that should have been:
>
> rcp $file somewhere

I'd recommend taking a look at rsync; it can recursively handle
directories, among other tricks.
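A minimal sketch, with placeholder hostname and paths:

rsync -av -e ssh /some/local/dir/ remotehost:/target/dir/

-a recurses and preserves permissions and timestamps, and the single ssh
session avoids opening a new port per file; the trailing slash on the
source copies the directory's contents rather than the directory itself.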

>> It seems that the system is opening a new port
>> for each file.

As Lew said, each rcp process needs its own port. Your command might
be condensed to a single invocation:
# rcp * somewhere

The find command is also useful
# find . -exec rcp {} +
where {} matches what was found, and + means that multiple matches can
be processed at once.
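One catch with the "+" form: it requires {} to be the last argument, while
rcp wants the destination last, so the two don't combine directly. A rough
workaround, with a placeholder destination and assuming GNU find and xargs,
is to let xargs do the batching:

find somedir -type f -print0 | xargs -0 -n 100 sh -c 'rcp "$@" somewhere:/target/dir' _

Note that this drops every file into the one target directory; it doesn't
recreate the tree, so rcp -r, tar, or rsync are better if the directory
structure matters.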

- Daniel

Morten L
Jan 15, 2011, 3:40:18 AM

I have had the same problem with
rsh, and found this somewhere on the net:

---------------------
The rsh / rcp protocol only has 512 IP ports available, so if you
launch more than 512 rsh or rcp commands during a 2MSL (120 second)
period, the resulting zombie TCP connections hanging around in TIME_WAIT
state (you can observe this with netstat -pant) will cause rsh or rcp to
fail with error "rcmd: socket: All ports in use".

To prevent this, issue the following command
both locally and on the target (only works on Linux):

echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
---------------------

For me it worked quite well.

But it is said that it might have some side effects,
so normally it is not recommended.

Enjoy

--
Morten L

Thomas Overgaard
Jan 15, 2011, 4:13:50 AM

root wrote:

> and the number of files is large

If they are all in the same directory it might be an idea to push them
through 'tar':
tar zcvf - somedir | ssh somewhere tar zxvf -
--
Thomas O.

This area is designed to become quite warm during normal operation.

root
Jan 15, 2011, 1:29:07 PM

Morten L <ml47s...@gspmailam.com> wrote:
>
> To prevent this, issue the following command
> both locally and on the target (only works on Linux):
>
> echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
> ---------------------
>
> For me it worked quite well.
>
> But it is said, that it might have some side effects,
> so normally it is not recommented.
>
> Enjoy
>

Thanks. I guess the trick is to echo 1 before you start,
then echo 0 when you are done.
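A minimal sketch of that, assuming root privileges on the client and that
the transfer is the loop from the original post (0 is the kernel default,
so the last line just restores it):

echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
for file in *
do
    rcp "$file" somewhere:/target/dir
done
echo 0 > /proc/sys/net/ipv4/tcp_tw_recycle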

root
Jan 15, 2011, 1:32:28 PM

Thomas Overgaard <tho...@post2.tele.dk> wrote:
> root wrote:
>
>> and the number of files is large
>
> If they are all in the same directory it might be an idea to push them
> through 'tar':
> tar zcvf - somedir | ssh somewhere tar zxvf -

This suggestion, as well as packing more files onto the command line,
is good. However, my problem is more complicated than I indicated. I
want to be able to transfer entire directory trees with one command. I
have written a script that tests whether each entry is a plain file or
a directory, sends a plain file directly, and descends into a directory
by calling itself.

I'll try the /proc/sys/net/ipv4/tcp_tw_recycle trick.

root
Jan 15, 2011, 2:00:46 PM

Well that didn't work for me. After some files were
transferred I got:
poll: protocol failure in circuit setup

and the system couldn't find files to send.

Henrik Carlqvist
Jan 15, 2011, 6:25:53 PM

root <NoE...@home.org> wrote:
> I want to be able to transfer entire directory trees with one command.

What about a simple command like:

scp -rp /some/local/directory remotemachine:/target/directory

> I have written a script that tests for simple file or directory, sends a
> simple file, but descends into a directory and calls itself.

Something making a call to itself is usually called recursion. That word
sounds a lot like "recursively", which is what the -r switch to scp stands for.

From the man page of scp:

-8<------------------------------------------
 -r      Recursively copy entire directories.  Note that scp follows
         symbolic links encountered in the tree traversal.
-8<------------------------------------------

regards Henrik
--
The address in the header is only to prevent spam. My real address is:
hc123(at)poolhem.se Examples of addresses which go to spammers:
root@localhost postmaster@localhost

root
Jan 15, 2011, 10:24:24 PM

Henrik Carlqvist <Henrik.C...@deadspam.com> wrote:
> root <NoE...@home.org> wrote:
>> I want to be able to transfer entire directory trees with one command.
>
> What about a simple command like:
>
> scp -rp /some/local/directory remotemachine:/target/directory
>
>> I have written a script that tests for simple file or directory, sends a
>> simple file, but descends into a directory and calls itself.
>
> Something making a call to itself is usually called recursion. That word
> almost sounds like "recursively" that the -r switch to scp does stand for.
>
> From the man page of scp:
>
> -8<------------------------------------------
> -r Recursively copy entire directories. Note that scp follows sym-
> bolic links encountered in the tree traversal.
> -8<------------------------------------------
>
> regards Henrik


I'm sorry Henrik, my response to your message seems to have
been lost.

In my original response I mentioned that my current copy
problem is very messy: there are 47,000 files in 6700
directories. All these are under some root directory,
say this-dir.

So under, say, current-dir I have an ordinary-file
and a directory this-dir. When I am in current-dir
I type:

rcp -rp ordinary-file this-dir target:/current-dir

The ordinary file was copied over, but the command
hung. It didn't even create /current-dir/this-dir
on the target.

I modified my recursive script to transfer more
than one file at a time and it seems to work.
It would have taken about 20 hours to finish
so I killed it after 10% of the files were
transferred. I copied the whole thing to an
external usb drive and carried that over to
the target.

Henrik Carlqvist
Jan 16, 2011, 5:17:36 AM

root <NoE...@home.org> wrote:
> It would have taken about 20 hours to finish
> so I killed it after 10% of the files were
> transferred. I copied the whole thing to an
> external usb drive and carried that over to
> the target.

Using a network might sometimes be convenient, but:

"Never underestimate the bandwidth of a station wagon full of tapes"

Morten L
Jan 16, 2011, 6:42:33 AM

No. That worked for you, otherwise you wouldn't get that new error :-)

inetd does rate-limiting. By default it stops at 40 connections per
minute and then "punishes" you with 10 minutes of not listening anymore.

In /etc/inetd.conf, try this:

shell stream tcp nowait.1000 root /usr/sbin/tcpd in.rshd -L

(i.e. add the .1000 to nowait)

For xinetd there are more options (and they are more explicit); a sketch follows.
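A rough sketch of roughly equivalent xinetd settings (untested; the
instances and cps values are only examples):

service shell
{
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        server          = /usr/sbin/in.rshd
        server_args     = -L
        instances       = UNLIMITED
        cps             = 1000 10
}

Here cps takes the allowed connections per second followed by the number
of seconds the service stays disabled once that rate is exceeded.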

Enjoy


--
Morten L

Sidney Lambe
Jan 16, 2011, 7:26:55 AM

On alt.os.linux.slackware, Henrik Carlqvist <Henrik.C...@deadspam.com> wrote:
> root <NoE...@home.org> wrote:
>> It would have taken about 20 hours to finish
>> so I killed it after 10% of the files were
>> transferred. I copied the whole thing to an
>> external usb drive and carried that over to
>> the target.
>
> Using a network might sometimes be convenient, but:
>
> "Never underestimate the bandwidth of a station wagon full of tapes"
>

:-))


Sid

root
Jan 16, 2011, 7:42:28 AM

Thanks again. I had tried changing inetd.conf before my
post. First I tried the line exactly as you wrote it
(I had found the earlier post as well):

shell stream tcp nowait.1000 root /usr/sbin/tcpd in.rshd -L

That did not change anything. After 512 files were transferred, the
"poll: protocol failure in circuit setup"
message came up. After changing inetd.conf I did:
killall -HUP inetd
and that didn't fix the problem. I then did a reboot
and tried again, but the problem remained.

Then I found this:

The maximum number of outstanding child processes (or "threads") for a
"nowait" service may be explicitly specified by appending a "/" followed
by the number to the "nowait" keyword. Normally (or if a value of zero
is specified) there is no maximum. Otherwise, once the maximum is
reached, further connection attempts will be queued up until an existing
child process exits. This also works in the case of "wait" mode,
although a value other than one (the default) might not make sense in
some cases. You can also specify the maximum number of connections per
minute for a given IP address by appending a "/" followed by the number
to the maximum number of outstanding child processes. Once the maximum
is reached, further connections from this IP address will be dropped
until the end of the minute. In addition, you can specify the maximum
number of simultaneous invocations of each service from a single IP
address by appending a "/" followed by the number to the maximum number
of outstanding child processes. Once the maximum is reached, further
connections from this IP address will be dropped.

So I changed the line in inetd.conf to:

shell stream tcp nowait/0/2000 root /usr/sbin/tcpd in.rshd -L

At this point I had the echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
and the modified inetd.conf. Still the transfer failed after 512 files.

I screwed around some more with inetd.conf, changing the nowait entry
to nowait/1000, nowait/1000/5000, etc. I did the killall -HUP inetd
after each change, and the system always limited me to 512 files.

Morten L
Jan 16, 2011, 9:53:41 AM

[cut]... long lines

>
> So I changed the line in inetd.conf to:
>
> shell stream tcp nowait/0/2000 root /usr/sbin/tcpd in.rshd -L
>
> At this point I had the echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
> and the modified inetd.conf. Still the transfer failed after 512 files.
>
> I screwed around some more with inetd.conf, changing the nowait command
> to nowait/1000, nowait/1000/5000, etc, etc. I did the illall -HUP inetd
> after each change and always the system limited me to 512 files.
>

Just a thought!

Do you have this:
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
on both client and server?


--
Morten L

root
Jan 16, 2011, 11:04:29 AM

Morten L <ml47s...@gspmailam.com> wrote:
>
> Just a thought!
>
> do you have this
> echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
> on both client and server!
>
>


No, I had only been working on one machine. I left
the target machine as is. The contents of the target
machine are much more important to me than the
machine I was using. Your original suggestion of
using the echo command hinted that there might be
adverse effects, so I didn't want to risk experimenting
on the target machine.

Morten L
Jan 16, 2011, 12:46:05 PM

Then you will *NOT* get it to work with rsh!

Remember my first answer to you:

To prevent this, issue the following command
both locally and on the target (only works on Linux):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle

The change to inetd.conf must be done
at the Server end of the connection.

The possible side effect is about reusing a "closed" connection
before all packets in the previous connection are sent and received.
The TIME_WAIT state is there to ensure that any delayed packets
are caught and not treated as new connection requests.

You can also try this instead of "tcp_tw_recycle":
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

It allows reusing sockets in TIME_WAIT state for
new connections when it is safe from a protocol viewpoint.

So - if there is a lot of latency in your network connections,
then using these two settings may result in a new connection
getting disturbed by packets from an old connection?

Enjoy


--
Morten L

root
Jan 16, 2011, 4:05:37 PM

Morten L <ml47s...@gspmailam.com> wrote:
>
> The possible side effect is about reusing a "closed" connection
> before all packets in the previous connection are sent and received.
> So the TIME_WAIT state is to ensure that any delayed packets
> are caught and not treated as new connection requests."
>
> You can also try this instead of "tcp_tw_recycle":
> echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
>
> It allows reusing sockets in TIME_WAIT state for
> new connections when it is safe from protocol viewpoint.
>
> So - if there is a lot of latency in your network connections,
> then using these two settings may result in a new connection
> getting disturbed by packets from an old connection?
>
> Enjoy
>
>

You are correct: setting tcp_tw_recycle on both ends allows more
than 512 files to be sent. The transfer speed is sorely
degraded, however. Given your caveats I won't be
experimenting any more with these switches.

I do have a script which works now, at least for the
purpose I originally intended.

I am open to suggestions for tests that I can run
which might reveal why rcp -rp failed to work.

Douglas Mayne
Jan 17, 2011, 9:24:47 AM

On Sat, 15 Jan 2011 18:32:28 +0000, root wrote:

> Thomas Overgaard <tho...@post2.tele.dk> wrote:
>> root wrote:
>>
>>> and the number of files is large
>>
>> If they are all in the same directory it might be an idea to push them
>> through 'tar':
>> tar zcvf - somedir | ssh somewhere tar zxvf -
>
> This suggestion, as well as packing more stuff on the line is good.
> However my problem is more complicated than I indicated. I want to be
> able to transfer entire directory trees with one command.

I am guessing that rsync can do anything your script can do. rsync
supports ssh as a transport layer for secure file transfers.

> I have written
> a script that tests for simple file or directory, sends a simple file,
> but descends into a directory and calls itself.
>
> I'll try the /proc/sys/net/ipv4/tcp_tw_recycle trick.

Note: comment inline.

--
Douglas Mayne
