Hello, thanks for the very detailed and interesting response. I ended
up using "line" this time, because I needed to fix the script before the
next cron run and because I don't care very much about portability, or
even performance :) Nonetheless, I'm thinking about improvements.
On 2015-09-25 10:27 +0100, Stephane Chazelas wrote:
> You generally don't want to process files line by line in
> shells. See for instance:
[Read that.] Hmm. A lot of my tasks seem to involve the following
pattern: Read stdin, for each $line of stdin do foobar($line), where
foobar includes _multiple_ external programs (so connecting them and
checking their exit statuses is most easily and naturally done with the
shell).
Of course I am _not_ talking about text processing or formatting
programs like cut or sed. I know those cases are best handled with awk.
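In shell terms, the pattern boils down to something like the following
(a minimal sketch; foobar stands for a shell function that wraps those
external programs, and the error reporting is only illustrative):

    # foobar is a placeholder for a shell function that runs several
    # external programs and checks how they went.
    while IFS= read -r line; do
        foobar "$line" || printf 'failed on: %s\n' "$line" >&2
    done
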
For instance, the script this time takes over after an existing program
foo, which dumps many files in RFC 822 format into ~/.foo/. For each
file ~/.foo/bar, I need to add a header or two (most naturally done with
printf and cat in sequence) and mail the result to myself using
sendmail -i.
The number of files is not bounded, so I cannot just match them all
with a glob.
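So the loop currently looks more or less like this (a rough sketch:
the header name and address are made up, the real error handling is
left out, and I actually used line(1) where read -r appears here):

    # find feeds the loop, so no glob is ever built.  Assumes the
    # filenames contain no newlines.
    find ~/.foo -type f |
    while IFS= read -r f; do
        {
            printf 'X-Processed-By: myscript\n'
            cat -- "$f"
        } | sendmail -i me@example.com
    done
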
How would you do that without the evil "read -r" or "line"? I also
don't want to scatter the job among multiple script files, so please
don't suggest rewriting just the loop part in Python.