Thanks
Alberto
thanks / atishay
cat filename | \
while read currentline
do
echo $currentline
done
And why do you propose
cat filename | loop
when you can do
loop < filename
?
Last but not least,
echo "$currentline"
should be "quoted", so shell does variable substitution but
no other expansions.
--
Michael Tosch @ hp : com
The backslash means the line has not ended and continues on the next
line, i.e. it splits a single logical line across more than one
physical line. I could have written:
cat filename | while read currentline
do
...
done
while read aLine; do ...; done < your.file
example:
function mycat {
  count=0
  while read line; do
    count=$((count + 1))
    printf "%6d: %s\n" $count "$line"
  done < "$1"    # quote $1 so file names containing spaces still work
}
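Called as, say, mycat myscript.sh (the file name is just for
illustration), it prints each line prefixed with a right-aligned
line number, much like cat -n.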
--
Glenn Jackman
Ulterior Designer
The backslash is unnecessary.
>> > while read currentline
>> > do
>> > echo $currentline
>> > done
>> >
>> > Alberto wrote:
>> >
>> >>I want to read a file line by line in a bash shell script. What
>> >>could I do?
>> >>I know I can see the file with e.g. cat, but what could I do to
>> >>"process" each line of the file?
>>
>> And why do you propose
>>
>> cat filename | loop
>>
>> when you can do
>>
>> loop < filename
> I agree, this is another possible improvement to the script.
>>
>> ?
>>
>> Last but not least,
>>
>> echo "$currentline"
>>
>> should be "quoted", so shell does variable substitution but
>> no other expansions.
> I would like to know: what other expansions could the shell do when
> there are no quotes (")?
Try it when the variable contains " * ".
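A quick way to see it (what the unquoted form prints depends on
whatever happens to be in your current directory):

currentline=' * '
echo $currentline     # unquoted: the spaces vanish and * expands to the file names here
echo "$currentline"   # quoted: prints just the asterisk with its surrounding spaces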
--
Chris F.A. Johnson, author <http://cfaj.freeshell.org>
Shell Scripting Recipes: A Problem-Solution Approach (2005, Apress)
===== My code in this post, if any, assumes the POSIX locale
===== and is released under the GNU General Public Licence
while read line
do
echo $line
done < filename
The variable "line" will read one line at a time from the file "filename".
Hope this helps!!
No.
while IFS= read <&3 -r line; do
printf '%s\n' "$line"
done 3< filename
A shell is not meant to be used like that; that's why you need all
those options and tweaks to make it do properly what it was not
meant to do.
Avoid loops in shell when possible.
--
Stéphane
Hi Guys,
I have to ask a question of Stéphane -
Why the remark about avoiding shell loops whenever possible? Are you
saying that shell languages should only be used in combination with a
linear programming model?
LarryB
Yes - in most cases.
>
> while IFS= read <&3 -r line; do
> printf '%s\n' "$line"
> done 3< filename
This is the rock-solid, bullet-proof, cover-all variant,
provided your shell supports it.
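For anyone wondering what each piece buys you: -r stops read from
treating backslashes as escape characters, IFS= stops it from
trimming leading and trailing whitespace, and going through file
descriptor 3 (<&3 ... 3<) leaves the loop's standard input free for
commands run inside the loop. A small demonstration (the file name
and its contents are made up):

printf '  hello \\world\n' > readdemo.txt

while read line; do printf '<%s>\n' "$line"; done < readdemo.txt
# prints <hello world>      (leading blanks stripped, backslash removed)

while IFS= read -r line; do printf '<%s>\n' "$line"; done < readdemo.txt
# prints <  hello \world>   (the line comes through untouched)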
>
> A shell is not meant to be used like that; that's why you need all
> those options and tweaks to make it do properly what it was not
> meant to do.
>
> Avoid loops in shell when possible.
>
Avoid shells if you want to code software for a Mars mission.
A shell is the tool you use to enter commands. The looping
statements are an afterthought and don't fit well in that
scheme.
You generally enter commands that perform loops (though the loop
is hidden behind more general paradigms) rather than start
several commands for each pass in a shell loop.
For instance, you run a text filter command that performs some
actions on every line of a file, rather than write a shell loop
that runs several commands, first to get those lines, and then
to process them.
sed 's/.//' < file
rather than:
while IFS= read <&3 -r line; do
printf '%s\n' "${line#?}"
done 3< file
If you want a loop, you need a programming language, not a
shell.
--
Stéphane
And to write programs.
> The looping statements are an afterthought and don't fit well in
> that scheme.
Looping statements have been part of the shell since the beginning.
They fit perfectly.
> You generally enter commands that perform loops (though the loop
> is hidden behind more general paradigms) rather than start
> several commands for each pass in a shell loop.
When that's appropriate, yes.
> For instance, you run a text filter command that performs some
> actions on every line of a file, rather than write a shell loop
> that runs several commands, first to get those lines, and then
> to process them.
>
> sed 's/.//' < file
>
> rather than:
>
> while IFS= read <&3 -r line; do
> printf '%s\n' "${line#?}"
> done 3< file
>
> If you want a loop, you need a programming language, not a
> shell.
The shell is a full-fledged programming language. You are talking
about the shell as if it hadn't progressed past the Bourne shell --
and even that was capable enough for a great deal of serious
programming.
The Korn shell and later variants (POSIX, bash, etc.) have added a
great deal, making the shell my language of choice for most
programming tasks.
When it's a table of static data to be processed, I've sometimes put
the data in the same file as the script. Rather like the "read" and
"data" commands in BASIC. It works in bash and ksh, although it
probably counts as "evil" scripting.
while read sel dat1 dat2; do
  case $sel in
    math-add) echo $(( $dat1 + $dat2 ));;
    math-sub) echo $(( $dat1 - $dat2 ));;
  esac
done << ends
# Data table
math-add 2 3
math-sub 7 3
ends
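For the record, that prints

5
4

The "# Data table" line is read too, but it matches neither case
pattern, so it is silently ignored.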
--
Dave Farrance
I often execute a set of commands on a group of files (not file
contents), such as:
for file in *txt; do mv $file ${file}~; done
Are you saying that would be more clear/efficient in, say, perl?
ITYM
for file in *txt; do mv -- "$file" "$file~"; done
I'd use mmv or zmv, which are tools designed for the task:
zmv '*txt' '$f~'
mmv '*txt' '#1txt~'
This way, you avoid the problems you've not thought of in your
loop solution.
Shell loops may be useful sometimes, but often are not or are
dangerous and/or inefficient, in particular to do text
processing.
--
Stephane
Indeed. Thanks for the correction.
>
> I'd use mmv or zmv, which are tools designed for the task:
And which are not available where I am. Although I learned from another
post in this thread that xargs is applicable too:
ls -1 *txt | xargs -i mv {} {}~
ls -1 *.txt | xargs -i mv -- {} {}~
or
ls -- *.txt | xargs -i mv -- {} {}~
work as well.
With all due respect:
Not big on The Unix Philosophy, are you? (small programs that do one
thing well, standard mechanisms for putting them together to solve
larger problems, etc.)
Something about that "tools designed for the task" just rubs me
the wrong way, I guess; part of the usefulness (maybe) and the fun
(certainly, at least for some of us) of typical Unix command-line
environments is getting to do these little bits of shell programming
pretty regularly.
--
B. L. Massingill
ObDisclaimer: I don't speak for my employers; they return the favor.
That's not correct; xargs's expected input format is different
from ls's output format. And you're missing some "--"s and a "-d"
for ls.
--
Stéphane
Seems to work okay here. Did you read the first command as
"ls -l" rather than "ls -1"?
I think I explained it in another email.
ls output is one file per line.
xargs input is a list of blank separated words (blanks being any
of space, tab or newline) where the backslash, the single and
double quotes can be used to escape the separators.
So, that kind of command only works if none of the file names
contain any tab, space, newline, backslash, single or double
quote characters.
And you need the -d option to ls, and you need to specify where
the list of options ends in both ls and mv, or ls or mv will take
files whose names start with a "-" as options.
To convert from the ls output format to the xargs expected input
format, you can use sed 's/./\\&/g', i.e. escape every character
but the newline.
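As a harmless way to check that conversion (GNU tools assumed;
printf merely shows what xargs would hand to mv):

ls -d -- *.txt | sed 's/./\\&/g' | xargs printf '<%s>\n'
# each name, spaces and quotes included, reaches printf as a single argument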
Of course, if some file names contain newline characters, you're
screwed.
The mmv and zmv solutions I gave don't have that problem.
--
Stephane
e-mail, or post? I'll assume the latter.
Anyway, yes, I misunderstood your earlier post, assuming
that you were thinking of the output of "ls -l". Sorry about
that.
>ls output is one file per line.
With -1, yes.
>xargs input is a list of blank separated words (blanks being any
>of space, tab or newline) where the backslash, the single and
>double quotes can be used to escape the separators.
>
>So, that kind of command only works if none of the file names
>contain any tab, space, newline, backslash, single or double
>quote characters.
Well .... The man page for the xargs I have (says it's the GNU
version) says that one effect of the "-i" option, used in the example
here, is that only newlines terminate input items, not blanks.
However, the "mv" part of the example is still problematical.
Adding double quotes around the {} and {}~ seems to make things
work with filenames containing blanks.
>And you need the -d option to ls, and you need to specify where
>the list of option ends in both ls and mv, or ls or mv will take
>files whose name start with a "-" as options.
All valid objections. I'm not quite interested enough to try all
possible cases, but your idea of using "sed" seems to help with
many cases.
>To convert from the ls output format to the xargs expected input
>format, you can use sed 's/./\\&/g', i.e. escape every character
>but the newline.
>
>Of course, if some file names contain newline characters, you're
>screwed.
>
>The mmv and zmv solutions I gave don't have that problem.
>
Probably the moral of this story is that it's trickier than it
might seem to write a shell script that will function properly
with all legal filenames. Whether shell scripts that only
function properly for "non-pathological" filenames are still
useful, and what constitutes a "non-pathological" filename,
might be matters of opinion. Mine, I guess, is that if you
set the standards too high, you lose a lot of the functionality
that IMO is one of the most useful and fun aspects of using --
well, maybe I should mention only tcsh and bash since those
are the only Unix shells with which I have nontrivial experience.
(The Fedora Core Linux system on which I'm writing this doesn't
seem to have either mmv or zmv as executables. Another data
point, maybe.)
zmv is a zsh feature (an autoloadable zsh function).
mmv is a non-standard command that you'll find in some optional
package.
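In zsh it only takes one line to make it available, something like:

autoload -Uz zmv      # load the zmv function shipped with zsh
zmv '(*.txt)' '$1~'   # rename every foo.txt to foo.txt~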
--
Stephane
Even without -1, the output is one file per line if the output is
not going to a terminal.
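Easy to check, for instance:

ls *.txt          # to a terminal: names usually laid out in columns
ls *.txt | cat    # to a pipe: one name per line, same as ls -1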