
Re: maximum command line length <tcsh>


Keith Thompson

Oct 5, 2000, 3:00:00 AM
"levi_al" <lev...@netvision.net.il> writes:
> a very convenient workaround is to use xargs, which takes its standard input
> and passes every word of it as command-line argument to the program you
> want.
> if you have many files, for example, this would work:
>
> echo * | xargs rm
>
> when "rm *" could not.

Not likely. If "rm *" would fail, "echo *" probably would for the
same reason.

You could do the nearly equivalent

ls | xargs rm

(In this particular case, depending on how you want to handle dot
files and subdirectories, "rm -r" might make more sense.)

--
Keith Thompson (The_Other_Keith) k...@cts.com <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://www.sdsc.edu/~kst>
Welcome to the last year of the 20th century.

levi_al

Oct 5, 2000, 10:19:57 PM
A very convenient workaround is to use xargs, which takes its standard input
and passes every word of it as a command-line argument to the program you
want. If you have many files, for example, this would work:

echo * | xargs rm

when "rm *" could not.


Evgeny Shumsky <evg...@breezecom.co.il> wrote in message
news:8rbr0f$eqf$1...@news.netvision.net.il...
> Hello,
> I've noticed that there is a maximum length of a command line;
> in tcsh it's more than 1000 characters. Can I change this value, and if yes, how?
> I would like it to be 10k characters. If not, how can I possibly run
> programs which take long arguments?
>
> Thank you
>
>


Sean Stalker

Oct 6, 2000, 3:00:00 AM
This works for me:
for file in *
do
    rm "$file"
done

"levi_al" <lev...@netvision.net.il> wrote in message
news:8rjcvc$7j0$1...@news.netvision.net.il...

BugSpray

Oct 6, 2000, 3:00:00 AM
echo * will not fail, as it is an internal command and therefore does not have
the same limitations as ls.


try it.


"Keith Thompson" <k...@cts.com> wrote in message
news:yec7l7m...@king.cts.com...


> "levi_al" <lev...@netvision.net.il> writes:
> > a very convenient workaround is to use xargs, which takes its standard
input
> > and passes every word of it as command-line argument to the program you
> > want.
> > if you have many files, for example, this would work:
> >
> > echo * | xargs rm
> >
> > when "rm *" could not.
>

Chuck Dillon

Oct 6, 2000, 3:00:00 AM

BugSpray wrote:
>
> echo * will not fail as it is an internal command and therefor does not have
> the same limitations as ls.

Unless it is aliased to, say, /usr/5bin/echo on a Solaris box, for example.
Also, the limit may not be the same, but there is still likely to be a
limit. You can't claim that every shell's built-in echo can handle
any number of args.

-- ced

--
Chuck Dillon
Senior Software Engineer
Genetics Computer Group, a subsidiary of Pharmacopeia, Inc.

Tim

Oct 6, 2000, 3:00:00 AM

"Chuck Dillon" <dil...@gcg.com> wrote in message
news:39DDDD4E...@gcg.com...

>
>
> BugSpray wrote:
> >
> > echo * will not fail as it is an internal command and therefor does not
have
> > the same limitations as ls.
>
> Unless it is aliased to say /usr/5bin/echo on a solaris box for example.
> Also, the limit may not be the same but there is still likely to be a
> limit. You can't claim that for every shell it's built in echo can
handle> any number of args.
>


Probably not, but on SCO OpenServer 5 this command has never let me down.
And believe me, I have put it to the test!

laura fairhead

Oct 14, 2000, 3:00:00 AM
On Fri, 6 Oct 2000 22:11:01 +0200, "Tim" <t> wrote:

>
> "Chuck Dillon" <dil...@gcg.com> wrote in message
> news:39DDDD4E...@gcg.com...
> >
> >
> > BugSpray wrote:
> > >
> > > echo * will not fail as it is an internal command and therefor does not
> have
> > > the same limitations as ls.
> >
> > Unless it is aliased to say /usr/5bin/echo on a solaris box for example.
> > Also, the limit may not be the same but there is still likely to be a
> > limit. You can't claim that for every shell it's built in echo can
> handle> any number of args.
> >
>
>
> Probably not, but on sco openserver 5 this command has never let me down.
> And believe me I have put it to the test !
>

I have tried;

ECHO \*\*\*\*\*

In Solaris 7 ksh, and in SuSE Linux ksh. I used the \*\*\*\*\* to guarantee
a huge number of files. On both systems the line caused bad problems.
Linux had a kernel crash and on Solaris the terminal box crashed and
caused the system to start performing sluggishly and intermittently until
I killed the process.

It seems to me that we are talking about more than just a single limit
here;

(i) command line buffer size
(ii) number and total length of arguments to an external command

If tcsh has a limited command line buffer size then surely the line
echo * |xargs rm
will still fail (at a certain point) after the '*' has been expanded?
That failure would come at almost exactly the same point that
rm * would fail for the same reason; of course, it is not necessarily
true that this could be the only cause for the command to fail.
The way I understand it, 'rm *' can fail if the number of arguments
is too great, whereas 'echo * |' will not suffer this
problem.

BTW: This is one point that I would appreciate if somebody
could clarify, since the way I understood it is that xargs consecutively
reads lines separated by newlines from stdin, appending each in turn
to the command-line argument list before executing that command; doesn't
'*' expand to a list of matched filenames separated by SPACE?

Bye,

L

kms...@ix.netcom.com

Oct 15, 2000, 3:00:00 AM
Followups trimmed to comp.unix.admin.

On or about Fri, 6 Oct 2000 10:14:03 +0100, BugSpray <m...@ucs.co.za> scrivened:


> "Keith Thompson" <k...@cts.com> wrote in message
> news:yec7l7m...@king.cts.com...
>> "levi_al" <lev...@netvision.net.il> writes:

>> > a very convenient workaround is to use xargs, which takes its standard
> input
>> > and passes every word of it as command-line argument to the program you
>> > want.
>> > if you have many files, for example, this would work:
>> >
>> > echo * | xargs rm
>> >
>> > when "rm *" could not.
>>

>> Not likely. If "rm *" would fail, "echo *" probably would for the
>> same reason.
>>
>> You could do the nearly equivalent
>>
>> ls | xargs rm
>>
>> (In this particular case, depending on how you want to handle dot
>> files and subdirectories, "rm -r" might make more sense.)
>>

> echo * will not fail as it is an internal command and therefor does not have


> the same limitations as ls.

> try it.

Actually, I've had the experience of "echo *" failing, in the instance
of having some 120,000 files in a single directory (a website
messageboard archive, tar'ed by someone else, that I was trying to
untar).

"echo *" requires a shell expansion of "*" to all files globbed.
"ls | xargs" instead sets up IPC between the 'ls' command and xargs.
Though "echo *" may be effective in most cases, "ls | xargs" is the
safer alternative.

--
Karsten M. Self <kms...@ix.netcom.com> http://www.netcom.com/~kmself
Evangelist, Opensales, Inc. http://www.opensales.org
What part of "Gestalt" don't you understand? There is no K5 cabal
http://gestalt-system.sourceforge.net/ http://www.kuro5hin.org
GPG fingerprint: F932 8B25 5FDD 2528 D595 DC61 3847 889F 55F2 B9B0

Logan Shaw

Oct 15, 2000, 3:00:00 AM
In article <6ttbs8...@kmself.nntp.ix.netcom.com>,

<kms...@ix.netcom.com> wrote:
>Actually, I've had the experience of "echo *" failing, in the instance
>of having some 120,000 files in a single directory (a website
>messageboard archive, tar'ed by someone else, that I was trying to
>untar).
>
>"echo *" requires a shell expansion of "*" to all files globbed.
>"ls | xargs" instead sets up IPC between the 'ls' command and xargs.
>Though "echo *" may be effective in most cases, "ls | xargs" is the
>safer alternative.

Yes, but even "ls | xargs" is going to be extraordinarily slow on a
directory with 120,000 files because "ls" sorts its output.

And that's why I'd use

perl -e 'opendir(D,"."); print map ("$_\n", readdir D)'

Actually, I'd probably remove the files beginning with characters other
than ".":

perl -e 'opendir(D,"."); print map ("$_\n", grep (/^[^.]/, readdir D))'

And if you have so many files that you're worried about fitting the list
in memory:

perl -e 'opendir(D,"."); while (defined ($_ = readdir D))
{ print "$_\n" unless /^[.]/; }'

And for lots of tasks, like removing files, there would be no pipe, and
I'd do the whole thing in Perl.
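For instance, a sketch of such a pure-Perl removal (the directory-handle
name, the dot-file filter, and the warn() on failure are just illustrative
choices, not anything from the post above):

perl -e 'opendir(D, ".") or die "opendir: $!";
         while (defined(my $f = readdir D)) {
             next if $f =~ /^\./;               # skip ".", ".." and dot files
             unlink $f or warn "unlink $f: $!\n";
         }'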

- Logan

Ken Pizzini

Oct 15, 2000, 3:00:00 AM
On Sat, 14 Oct 2000 19:57:11 GMT,
laura fairhead <laura_f...@my-deja.com> wrote:
>I have tried;
>
>ECHO \*\*\*\*\*
>
>In Solaris 7 ksh, and in SuSe Linux ksh. I used the \*\*\*\*\* to guarentee
>a huge number of files.

Somehow I think that you must have typed that wrong here...
First off, the command is "echo", not "ECHO"; second, the string
\*\*\*\*\* will be expanded by the shell to the string "*****",
which echo would just output verbatim, and be a rather
uninteresting experiment. Based on the description below I'll
assume that you really entered:
echo /*/*/*/*/*

> On both systems the line caused bad problems.
>Linux had a kernel crash

!?!?!? What version of the kernel was this? There is zero
reason for the kernel to crash from such a command! If the
problem was memory exhaustion then the kernel ought to have
killed the greedy process and protected itself and the rest
of the system...


> and on Solaris the terminal box crashed and
>caused the system to start performing sluggishly and intermitantly until
>I killed the process.

Because the system started thrashing due to memory starvation,
no doubt.


>It seems to me that we are talking about more than just a single limit
>here;
>
>(i) command line buffer size
>(ii) number and total length of arguements to external command

Limit (i) is often "as much virtual memory as the kernel will
give to the shell process"; limit (ii) is a kernel limit.

>If tcsh has a limited command line buffer size then surely the line
>echo * |xargs rm
>will still fail (at a certain point) after the '*' has been expanded?

If echo is a shell builtin, then it is limit (i) that is
relevant; if echo is an external command, then it is limit
(ii) that applies (in real-world shells; toy shells might
hit limit (i) first).

>That failure would come at almost exactly the same point that
>rm * would fail for the same reason,

Only if the invoked echo command is an external command, which
is seldom true these days.


> of course it is not necessarily
>true that this could be the only cause for the command to fail.
>The way I understand it 'rm *' can fail if the number of arguements
>is to great a number, whereas the 'echo * |' will not suffer this
>problem.

External commands, such as rm, are subject to both limit (i) and
limit (ii); internal commands, as echo typically is, are
subject only to limit (i). For non-toy shells limit (i) is
going to be substantially higher than limit (ii), so in effect
these are two completely different behaviors.


>BTW: This is one point that I would appreciate if somebody
>could clarify, since the way I understood it is that xargs consecutively
>reads lines seperated by newline from stdin, appending each in turn
>to the command line arguement before executing that command; doesn't
>'*' expand to a list of matched filenames seperated by SPACE?

A questionable design "feature" of xargs is that it splits the
input text into arguments on "unquoted whitespace", not just
newlines. This (and the fact that newlines are legitimate
components of command-line parameters) is why more "recent"
implementations of xargs support a -0 flag.
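A quick way to see that splitting, and the quote handling, in action
(plain POSIX-ish xargs; the echo is only there to show the argument grouping):

$ printf '%s\n' 'one two' '"three four"' | xargs -n 1 echo
one
two
three four

The unquoted space in the first input line yields two separate arguments,
while the double quotes in the second line keep "three four" together as
one; the -print0/-0 pairing avoids relying on any of this.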

--Ken Pizzini

James T. Dennis

Oct 15, 2000, 8:01:16 PM
In comp.unix.admin laura fairhead <laura_f...@my-deja.com> wrote:
> On Fri, 6 Oct 2000 22:11:01 +0200, "Tim" <t> wrote:
>> "Chuck Dillon" <dil...@gcg.com> wrote in message
>> news:39DDDD4E...@gcg.com...

>>> BugSpray wrote:

>>>> echo * will not fail as it is an internal command and therefor does not
>>>> have the same limitations as ls.

>>> Unless it is aliased to say /usr/5bin/echo on a solaris box for example.


>>> Also, the limit may not be the same but there is still likely to be a
>>> limit. You can't claim that for every shell it's built in echo can
>> handle> any number of args.

>> Probably not, but on sco openserver 5 this command has never let me down.
>> And believe me I have put it to the test !

> I have tried;


> ECHO \*\*\*\*\*

> In Solaris 7 ksh, and in SuSe Linux ksh. I used the \*\*\*\*\* to guarentee

> a huge number of files. On both systems the line caused bad problems.
> Linux had a kernel crash and on Solaris the terminal box crashed and


> caused the system to start performing sluggishly and intermitantly until
> I killed the process.

The command as you've typed it makes no sense (unless you
have some sort of external ECHO or alias or shell function, in
which case I have no idea what it's supposed to mean).

I tried:
echo /* /*/* /*/*/* /*/*/*/* /*/*/*/*/*

(five layers of directories) under bash and pdksh on my wife's
S.u.S.E. system. bash waited about 15 to 20 seconds and spit out
*LOTS* of filenames, pdksh took about 60 seconds and spit out
*LOTS* of filenames.

Neither of these had any noticeable effect on load average or system
response (she's using it in X, probably doing GIMP work).

I noticed (using find / | wc) that the full list of files on the
whole system is only about 1.5 million characters. So, I shouldn't
get too close to our core or swap limits regardless.

Doing that with

time echo /* /*/* /*/*/* /*/*/*/* /*/*/*/*/* /*/*/*/*/*/*

(six levels) took five minutes and did alloc enough memory that
Heather noticed a glitch in responsiveness.

> It seems to me that we are talking about more than just a single limit
> here;

> (i) command line buffer size
> (ii) number and total length of arguements to external command

Yes. That's definitely true.

A shell can have an internal limit on the command line that it
can parse; and the OS/kernel will have a different limit on the
maximum length of a set of parameters to the exec*() family of
system calls.

Some shells (such as bash) don't have a set limit on the command
line length --- but will malloc() and realloc() until they
hit your rlimit or exhaust core and swap. So this example could be
an OOM (out of memory) stress test in some cases.
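If you want to experiment with that without taking the whole box down,
one precaution is to cap the shell's own memory first. This is the
bash-style ulimit; the 102400 KB figure is arbitrary and -v needs an OS
that supports RLIMIT_AS:

ulimit -v 102400    # limit this shell to roughly 100 MB of virtual memory
echo /*/*/*/*/*     # a runaway expansion now fails with an out-of-memory
                    # error instead of dragging the machine into swap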

Many versions of UNIX do not deal gracefully with OOM (which is
why we often see such excessive swap space recommendations). This
includes many versions of the Linux kernel, though that team is
improving things with each version and it is a recurring topic.

(There are a variety of unofficial OOM patches for Linux and
some of the embedded and real-time kernel developers have their
own little forks in the code to meet their needs. Some of these
are sifting their way up through the ranks for Linus to look at).

> If tcsh has a limited command line buffer size then surely the line
> echo * |xargs rm
> will still fail (at a certain point) after the '*' has been expanded?

> That failure would come at almost exactly the same point that

> rm * would fail for the same reason, of course it is not necessarily

No. It would not necessarily come at anywhere *near* the same
point. The internal command line length is probably two orders
of magnitude larger than the kernel's MAXARGS limit for most
shell/kernel combinations.

> true that this could be the only cause for the command to fail.
> The way I understand it 'rm *' can fail if the number of arguements
> is to great a number, whereas the 'echo * |' will not suffer this
> problem.

echo * can fail for "this" problem (meaning that the line is too
long to handle). Of course "this" problem has two forms: internal
parsing length or malloc resource limit exceeded *or* kernel's
MAXARGS limit exceeded.
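On systems with a POSIX getconf you can look at the kernel's side of this
directly (the value varies a great deal from system to system):

getconf ARG_MAX     # bytes of argument + environment space allowed by exec*()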

> BTW: This is one point that I would appreciate if somebody
> could clarify, since the way I understood it is that xargs consecutively
> reads lines seperated by newline from stdin, appending each in turn
> to the command line arguement before executing that command; doesn't
> '*' expand to a list of matched filenames seperated by SPACE?

> Bye,
> L

In any (non-trivial) case you'll do better with:

find . -maxdepth 1 ... -print | xargs rm

than with:

echo * | xargs rm

Both will fork() (any | on a command line implies a fork()
since a sub process must either write to or read from the
pipe). Remember a | is an *inter-process* communication;
ergo there must be multiple processes involved.

The first example will exec*() a find command. The other
should (in most shells) simply be processed by the (sub)shell.

Note that different shells will put the subshell on different
sides of the |. Newer Korn shells and zsh will put the subshell
to the left of the pipe (meaning that the current process reads
from the subprocess). I think this is the right way to do it.

Bash, and older Bourne and Korn shells, will put the subshell on
the right. The current process will write into the subshell. This
is the bad way to do it (but not so bad as to make them unusable).

Here's the easiest test:

unset bar; echo foo | read bar; echo $bar

... if this echoes "foo" then your shell reads from subshells;
otherwise it writes to them.

So, the primary difference between these two examples is one
exec() call (which should be essentially insignificant).

This is offset by the fact that you can precisely control
many aspects of the files that you want to find (using the
various predicate options to the find command). Using GNU
find (and xargs) you can also deal with possible degenerate
filenames (those containing whitespace, control characters, etc.).
Thus you can use a slight variation of the earlier command:

find . -maxdepth 1 ... -print0 | xargs -0 rm

... which will ensure that the filenames are passed as
ASCIIZ (null-terminated) strings.

Note that xargs will exec() several, possibly hundreds or
even thousands of commands if you have enough files
present. An alternative to that can easily be written as a
PERL one-liner:

find . -maxdepth 1 -not -type d -print0 | perl -ln0e 'unlink;'

... this will be more efficient since perl is acting as a filter,
processing each line of input as it reads it. (Note: *those are
zeros in -print0 and -ln0e*.)

Incidentally, since we've degenerated into perl one-liners, here's
an ugly little one that will remove empty directories under a
tree (not counting the one we're in, even if that was empty):

find . -type d -mindepth 1 -print0 | \
perl -ln0e '@x = glob($_ . "/*") if -d; rmdir if -d and $#x < 0;'

That works, though the -type d and the 'if -d;' are redundant (one of
them could be omitted). I'm sure there's a shorter way to get
just the number of elements from glob(), but I don't know how, so
I assign a whole list and use $# on that. That's the ugly part!
(Suggestions on improving that are welcome).

I'm probably also being excessive with my switch bundle (-ln0e) ...
I know I need -n (or -p) and -0 and -e. I'd normally need -l since
I'm using -0 for forcing the input to be NULL terminated strings and
I would want to have normal line terminators on any output; but I don't
need it here.

Getting back to the topic (slightly) it also seems that my
glob($_ . "/*") *might* alloc LOTS of core if we tripped across
one of these fat directories. Is there a way to eval glob or
readdir in some sort of scalar context to return the number of
entries without actually trying to build a list of them? Is there a
different function that would do this?

Ken Pizzini

Oct 16, 2000, 12:56:24 AM
On 16 Oct 2000 00:01:16 GMT, James T. Dennis <jade...@idiom.com> wrote:
> Incidentally since we've degenerated into perl one-liners, here's
> an ugly little one that will remove empty directories under a
> a tree (not-counting the one we're in; even if that was empty):
>
> find . -type d -mindepth 1 -print0 | \
> perl -ln0e '@x = glob($_ . "/*") if -d; rmdir if -d and $#x < 0;'

> Getting back to the topic (slightly) it also seems that my

> glob($_ . "/*") *might* alloc LOTS of core if we tripped across
> one of these fat directories. Is there a way to eval glob or
> readdir in some sort of scalar context to return the number of
> entries without actually trying to build a list of them? Is there a
> different function that would do this?

Skip the glob altogether. Just have perl attempt to rmdir, and
if it fails, silently go on to the next input line. It may feel
a little funny doing this, but it is of the same nature as
"don't stat() the file to check and see if you can open() it,
just open() it and see if an error comes back" --- if it
succeeds, great, and if not, that's fine too. It's both more
robust and less expensive than trying the glob first.
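In other words, something like this (a sketch, assuming GNU find for
-mindepth/-depth/-print0; the -depth just makes find list children before
their parents so nested empty trees collapse in one pass):

find . -mindepth 1 -depth -type d -print0 | perl -ln0e 'rmdir'

Perl's rmdir defaults to $_, and when it fails on a non-empty directory it
simply returns false and the loop moves on.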

--Ken Pizzini

laura fairhead

Oct 17, 2000, 3:00:00 AM
On 15 Oct 2000 20:50:18 GMT, k...@halcyon.com (Ken Pizzini) wrote:

> On Sat, 14 Oct 2000 19:57:11 GMT,
> laura fairhead <laura_f...@my-deja.com> wrote:

> >I have tried;
> >
> >ECHO \*\*\*\*\*
> >
> >In Solaris 7 ksh, and in SuSe Linux ksh. I used the \*\*\*\*\* to guarentee
> >a huge number of files.
>

> Somehow I think that you must have typed that wrong here...
> First off, the command is "echo", not "ECHO"; second the string
> \*\*\*\*\* will be expanded by the shell to the string "*****",
> which echo would just output verbatum, and be a rather
> uninteresting experiment. Based on the description below I'll
> assume that you really entered:
> echo /*/*/*/*/*

Obviously I've had maybe too many hours sitting at the cmd prompt
of a certain other OS... I'm pretty new to UNIX, go easy on me!
Yes, of course you are correct in your assumption; it was a typo.

>
> > On both systems the line caused bad problems.
> >Linux had a kernel crash
>

> !?!?!? What version of the kernel was this? There is zero
> reason for the kernel to crash from such a command! If the
> problem was memory exhaustion then the kernel ought to have
> killed the greedy process and protected itself and the rest
> of the system...

I've still got the bug report;

===============================
From: root
To: bug-...@gnu.org
Subject:
FILENAME EXPANSION BUG IN BASH SHELL

Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -DPROGRAM='bash' -DHOSTTYPE='i686' -DOSTYPE='linux-gnu'
-DMACHTYPE='i686-pc-linux-gnu' -DSHELL -DHAVE_CONFIG_H -I. -I.
-I./lib -O2 -pipe
uname output: Linux tarquin 2.2.5 #1 Tue Apr 13 16:33:46 MEST 1999 i586 unknown
Machine Type: i686-pc-linux-gnu

Bash Version: 2.02
Patch Level: 1
Release Status: release
===============================

Only just found this; so it was actually BASH ... also I found it because
I couldn't remember where a directory was in the tree and typed;

cd /*/*/*/*/pathto/directory

The system died. Got an OOPS message, and a stack dump, had to reboot.
When I rebooted I tried the 'echo' variation and found that did the
same thing. I filled out the bug report and sent it to them but never
even received so much as an ACK back. That and other things put me
right off Linux for some time.

>
>
> > and on Solaris the terminal box crashed and
> >caused the system to start performing sluggishly and intermitantly until
> >I killed the process.
>

> Because the system started thrashing due to memory starvation,
> no doubt.
>
>

> >It seems to me that we are talking about more than just a single limit
> >here;
> >
> >(i) command line buffer size
> >(ii) number and total length of arguements to external command
>

> Limit (i) is often "as much virtual memory as the kernel will
> give to the shell process"; limit (ii) is a kernel limit.
>

> >If tcsh has a limited command line buffer size then surely the line
> >echo * |xargs rm
> >will still fail (at a certain point) after the '*' has been expanded?
>

> If echo is a shell builtin, then it is limit (i) that is
> relevant; if echo is an external command, then it it limit
> (ii) that applies (in real-world shells; toy shells might
> hit limit (i) first).
>

> >That failure would come at almost exactly the same point that
> >rm * would fail for the same reason,
>

> Only if the invoked echo command is an external command, which
> is seldom true these days.
>

By "that failure" I was referring to the failure due to command line
buffer overflow; both rm & echo would be subject to that limit, as
you explain below.

>
> > of course it is not necessarily

> >true that this could be the only cause for the command to fail.
> >The way I understand it 'rm *' can fail if the number of arguements
> >is to great a number, whereas the 'echo * |' will not suffer this
> >problem.
>

> External commands, such as rm, are subject to both limit (i) and
> limit (ii); internal commands, such as is typical of echo, is
> subject only to limit (i). For non-toy shells limit (i) is
> going to be substantially higher than limit (ii), so in effect
> these are two completely different behaviors.
>
>

> >BTW: This is one point that I would appreciate if somebody
> >could clarify, since the way I understood it is that xargs consecutively
> >reads lines seperated by newline from stdin, appending each in turn
> >to the command line arguement before executing that command; doesn't
> >'*' expand to a list of matched filenames seperated by SPACE?
>

> A questionable design "feature" of xargs is that it splits the
> input text into arguments on "unquoted whitespace", not just
> newlines. This (and the fact that newlines are legitimated
> components of command-line parameters) is why more "recent"
> implementations of xargs support a -0 flag.
>
> --Ken Pizzini

Surely the distinction between quoted and unquoted characters does not
exist at the xargs side of the pipe? So there is no way to get xargs
to treat more than one single word as an argument?


See Ya,

L


Kjetil Torgrim Homme

Oct 17, 2000, 8:35:07 PM
[laura fairhead]

> Only just found this; so it was actually BASH ... also I found it because
> I couldn't remember where a directory was in the tree and typed;
>
> cd /*/*/*/*/pathto/directory
>
> The system died. Got an OOPS message, and a stack dump, had to
> reboot. When I rebooted I tried the 'echo' variation and found
> that did the same thing. I filled out the bug report and sent it
> to them but never even recieved so much as an ACK back. That and
> other things put me right off Linux for some time.

I'm afraid you reported the bug to the wrong people. A system crash
cannot be the fault of bash, so you should have sent it to
linux-...@vger.redhat.com

(I tested on a 2.2.16. After two hours I gave up the experiment.
bash had grown to 55 MB resident...)


Kjetil T.

Jefferson Ogata

Oct 18, 2000, 12:32:19 AM

Possibly she was using a computer that had a corrupted filesystem or bad
memory, and the bash exercise elicited the bad behavior.

--
Jefferson Ogata : Internetworker, Antibozo
<og...@antibozo-u-spam-u-die.net> http://www.antibozo.net/ogata/
whois: jo...@whois.networksolutions.com

James T. Dennis

Oct 19, 2000, 1:58:41 AM
Logan Shaw <lo...@cs.utexas.edu> wrote:

> In article <6ttbs8...@kmself.nntp.ix.netcom.com>,
> <kms...@ix.netcom.com> wrote:
>> Actually, I've had the experience of "echo *" failing, in the instance
>> of having some 120,000 files in a single directory (a website
>> messageboard archive, tar'ed by someone else, that I was trying to
>> untar).

>> "echo *" requires a shell expansion of "*" to all files globbed.
>> "ls | xargs" instead sets up IPC between the 'ls' command and xargs.
>> Though "echo *" may be effective in most cases, "ls | xargs" is the
>> safer alternative.

> Yes, but even "ls | xargs", is going to be extraordinarily slow on a
> directory with 120,000 files because "ls" sorts its output.

There are options to ls to avoid that (sorting).
See the man page for details. (It might differ from
one *nix to another; I don't know if that's part of the
POSIX command utilities spec.)
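For example, with GNU ls (the option letters differ between vendors, so
check your own man page before leaning on these):

ls -U | xargs rm    # -U: directory order, no sorting
ls -f | xargs rm    # -f: also unsorted, but implies -a, so rm will
                    # complain about (and refuse) "." and ".."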

find does not sort, so I usually use

find . -maxdepth 1 ... -print0 | xargs -0 ...

... or

find . -maxdepth 1 ... | while read i; do
...
done

However, sometimes I really wish that bash and
other shells would add a read0 or read -0 option to
accept NUL-terminated (ASCIIZ) lines of input.
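Newer bash versions come close with read's -d option, where an empty
delimiter string stands in for NUL (a sketch; older shells don't have -d,
and the printf is only a stand-in for the real per-file work):

find . -maxdepth 1 -print0 |
while IFS= read -r -d '' file; do
    printf 'would remove: %s\n' "$file"
done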

It's possible (with some extra care) to pipe
find ... -print0 | tr -c "[:print:]" "?" | ...
(then your while loop would have to check for
multiple matches to each of these globs).

> And that's why I'd use

> perl -e 'opendir(D,"."); print map ("$_\n", readdir D)'

I agree about using PERL. These days perl -ln0
is my friend. However I still use find for finding.
