Re: Is it good/bad practice to check if a number of programs are on the system in a shell file?


Janis Papanagnou

Mar 22, 2022, 11:12:20 AM
On 22.03.2022 13:57, Ottavio Caruso wrote:
> I mean, if an executable is not there, the shell will still complain,

Starting a post with "I mean" - what does that mean?

"not there" - where?

The shell complains about what?

> but I would like to make sure that users of my script have the "right"

Define "the right".

> executables, for example not aliases, custom functions, etc. Or should I
> just leave the shell do its job?
>
> A template would be like this:
>
> for each of $PROGRAMS LIST
> check if $PROGRAM it's installed in $PATH and it's not an alias
> && echo $PROGRAM has been found at $LOCATION
> || echo $PROGRAM has not been found; exit (which code?)
>
> and so on.
>
> (I'm using "for each" not literally here, it's not a csh script)

Why not just using standard shell syntax to describe what you want?

>
> I am experimenting with "which" but I am having random results.
>
> "command -v" is too tolerant and will accept aliases, which I don't want.

You didn't tell us what shell you are using, so I suggest using ksh's
'whence' command, which allows you to exclude functions, aliases, etc.;
'whence --man' will show you the details (or search in 'man ksh').
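
A minimal sketch of that approach. Assumptions: the probed name `awk`
is purely illustrative, and the `command -v` fallback is an addition
for shells that lack 'whence':

```shell
#!/bin/sh
# Under ksh, `whence -p` reports only the PATH executable, skipping
# aliases and functions; elsewhere, fall back to POSIX `command -v`
# (which in a non-interactive script has no aliases to report anyway).
prog=awk                                # illustrative program name
path=$(whence -p "$prog" 2>/dev/null) ||
path=$(command -v "$prog") || {
    echo "$prog: not found" >&2
    exit 127
}
echo "$prog is $path"
```

In a script (as opposed to an interactive shell) the fallback is
usually safe too, since aliases and functions from the user's rc files
aren't loaded there.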

Janis

Grant Taylor

Mar 22, 2022, 5:09:31 PM
On 3/22/22 6:57 AM, Ottavio Caruso wrote:
> I mean, if an executable is not there, the shell will still complain,

I think it's important to define /explicitly/ what "there" means. E.g.
what if the command you want is in /usr/bin instead of /bin? There is
also the possibility that /bin is a sym-link to /usr/bin but earlier in
the PATH.

> but I would like to make sure that users of my script have the "right"
> executables,

What defines the "right" vs "wrong" executable?

What if it's the version that you are wanting but it's located in ~/bin
instead of the system's (/usr(/local))/bin directory? E.g. GNU tools on
a *BSD system existing in ~/bin.

> for example not aliases, custom functions, etc. Or should I just
> leave the shell do its job?

Why do you want to ignore user preferences?

I could be arrogant and ask "what gives you the audacity to specify
which `ls` is run on /my/ system?"

Especially if you don't provide the version you want run.

Also, aliases tend to be interactive and don't function in
non-interactive execution of scripts. Functions are their own critter.

> A template would be like this:
>
> for each of $PROGRAMS LIST
>      check if $PROGRAM it's installed in $PATH and it's not an alias
>     && echo $PROGRAM has been found at $LOCATION
>     || echo $PROGRAM has not been found; exit (which code?)

So you aren't differentiating between /bin/bla, /usr/bin/bla,
/usr/local/bin/bla. So which of those is the right version of bla?

That also fails to take into account ~/bin being the first directory in
$PATH.

> and so on.
>
> (I'm using "for each" not literally here, it's not a csh script)
>
> I am experimenting with "which" but I am having random results.
>
> "command -v" is too tolerant and will accept aliases, which I don't want.

Try using a full path to the command that you want or prefixing commands
with a backslash to disable alias support.

Again, alias / function support in interactive shells is different than
in non-interactive scripts.
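
A small demonstration of both points. The `greet` alias is made up;
the `shopt` line is needed because bash, unlike POSIX sh, only expands
aliases in scripts when `expand_aliases` is set:

```shell
# bash needs expand_aliases for aliases to work in scripts at all;
# in shells without shopt the line is a harmless no-op.
shopt -s expand_aliases 2>/dev/null || true
alias greet='echo aliased'
greet >/dev/null 2>&1 && via_alias=yes || via_alias=no
\greet >/dev/null 2>&1 && bypassed=no || bypassed=yes  # backslash skips the alias
echo "alias ran: $via_alias; backslash bypassed it: $bypassed"
```

Since `greet` is not a real command, the backslash-prefixed call could
only ever succeed through the alias, and it doesn't, which is the point.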


--
Grant. . . .
unix || die

Christian Weisgerber

Mar 22, 2022, 5:30:09 PM
On 2022-03-22, Ottavio Caruso <ottavio2006...@yahoo.com> wrote:

> I mean, if an executable is not there, the shell will still complain,
> but I would like to make sure that users of my script have the "right"
> executables, for example not aliases, custom functions, etc.

I suspect your premises are wrong. How would the shell executing
your script pick up "aliases, custom functions, etc."?

Nobody ever does what you are proposing.

--
Christian "naddy" Weisgerber na...@mips.inka.de

Grant Taylor

Mar 22, 2022, 9:25:27 PM
On 3/22/22 2:40 PM, Christian Weisgerber wrote:
> Nobody ever does what you are proposing.

I question the veracity of that statement.

I've been known to create functions and / or scripts with the name of
other commands and arrange for them to be executed in place of the other
commands.

E.g. I have ~/bin at the start of my path and a script named `ifconfig`
and another named `ip` that is a wrapper to run the command(s) through
sudo. As such, I can run `ifconfig` / `ip` at the command line, in a
function, in a script and have it use sudo without changing what I do.

David W. Hodgins

Mar 22, 2022, 10:50:20 PM
It's also common practice to use aliases with some options specified to reduce
typing.

E.g. alias diskdrake='diskdrake --expert &'

Regards, Dave Hodgins

Bit Twister

Mar 23, 2022, 2:26:29 AM
Yep, and it's quite possible for the user to alias bash keywords,
breaking the script.

Christian Weisgerber

Mar 23, 2022, 11:30:09 AM
On 2022-03-23, Grant Taylor <gta...@tnetconsulting.net> wrote:

>> Nobody ever does what you are proposing.
>
> I question the veracity of that statement.
>
> I've been known to create functions and / or scripts with the name of
> other commands and arrange for them to be executed in place of the other
> commands.

I meant that nobody checks in their scripts that standard commands
are in fact those standard commands.

Grant Taylor

Mar 23, 2022, 4:33:29 PM
On 3/23/22 8:59 AM, Christian Weisgerber wrote:
> I meant that nobody checks in their scripts that standard commands
> are in fact those standard commands.

I question the accuracy of that.

How would one check?

Simply checking / using the path to a /bin/ls vs ~/bin/ls in the PATH
doesn't suffice because the /bin/ls file can be replaced.

So what is standard vs non-standard?

I feel like using the full path alleviates any need to check which
instance of ls is being used.

Janis Papanagnou

Mar 24, 2022, 1:23:08 AM
On 23.03.2022 21:33, Grant Taylor wrote:
> On 3/23/22 8:59 AM, Christian Weisgerber wrote:
>> I meant that nobody checks in their scripts that standard commands are
>> in fact those standard commands.
>
> I question the accuracy of that.
>
> How would one check?
>
> Simply checking / using the path to a /bin/ls vs ~/bin/ls in the PATH
> doesn't suffice because the /bin/ls file can be replaced.

Replaced files in the standard bin directories would indicate a
compromised system, where it's not really relevant whether a
well-meaning sysadmin replaced them or whether it's an effect of a
malicious system attack. I'd thus rely on these tools (not "check"
them during runtime, which I really consider to be a sick idea).

>
> So what is standard vs non-standard?
>
> I feel like using the full path alleviates any need to check which
> instance of ls is being used.

Indeed, to have assurance with basic programs (rm, mv, cp, ...) you
can use absolute paths, or set the PATH explicitly in scripts to call
the intended tools. (That was something we had also defined in our
coding standard in the 1990s. Now there's also env(1).)
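
A sketch of the PATH-pinning variant. The directory list here is
illustrative and would need adjusting for the platforms you target:

```shell
#!/bin/sh
# Pin PATH first thing, so every later lookup resolves only within
# the system directories, whatever the caller's environment had.
PATH=/usr/bin:/bin
export PATH
ls_path=$(command -v ls)    # can only find /usr/bin/ls or /bin/ls now
echo "ls resolves to: $ls_path"
```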

Janis

Jorgen Grahn

Mar 24, 2022, 3:22:26 AM
On Tue, 2022-03-22, Ottavio Caruso wrote:
> I mean, if an executable is not there, the shell will still complain,
> but I would like to make sure that users of my script have the "right"
> executables, for example not aliases, custom functions, etc. Or should I
> just leave the shell do its job?

What executables are we speaking of?

If I had users, I'd do it like this:
- For core Unix commands (however you define that) I'd just assume
  they're there. (If I targeted both Linux and the BSDs, I'd worry
  about whether the commands were really equivalent.)
- For exotic dependencies, if I was distributing my software as a package
  (RPM, deb ...) I'd list them as dependencies.
- If I distributed my software as source code, I'd list the dependencies
  in the documentation, and maybe have a "check for dependencies" build
  step.
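
Such a "check for dependencies" step might be sketched like this; the
dependency names below are placeholders rather than a real project's
list:

```shell
#!/bin/sh
# Report each required tool as found or missing; a real build step
# would then abort on any missing one (e.g. `exit $missing`).
missing=0
for dep in sh sed grep; do          # placeholder dependency list
    if command -v "$dep" >/dev/null 2>&1; then
        echo "found: $dep"
    else
        echo "missing: $dep" >&2
        missing=1
    fi
done
echo "any missing: $missing"
```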

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Grant Taylor

Mar 24, 2022, 3:41:39 PM
On 3/23/22 11:23 PM, Janis Papanagnou wrote:
> Replaced files in the standard bin directories would indicate a
> compromised system, where it's not really relevant whether a
> well-meaning sysadmin replaced them or whether it's an effect of a
> malicious system attack.

Would you please elaborate on why you say that replaced files in the
standard bin directories would indicate a compromised system?

I feel like there are legitimate replacements. The first things that
come to mind are Kerberized or S/Key versions of things like passwd,
telnet, etc.

I'm particularly interested in why "not really relevant whether a well
meaning sysadmin". E.g. a sysadmin installing a new version of telnet
that supports Kerberos based authentication that the site has chosen to
use across their network.

Does it matter if the new version of the binary comes from in-house
(ostensibly compiled locally), the vendor, or a 3rd party?

Does it matter if the file from the vendor is an updated / patched
version of the binary in question?

I'll agree that replacing binaries will render a system no longer /pure/
to the original installed version. But I don't consider a system that
has had vendor provided files patched by vendor provided patches to be
/compromised/.

Does a new installation of the new patched version differ from an old
install using the previous version and updated to the same version? Is
the old install any different from the fresh install when all the files
are exactly the same?

Janis Papanagnou

Mar 25, 2022, 1:38:14 AM
I think there is a difference between a system kept safe in case of the
detection of a security incident in a professionally managed environment
and a "well meaning sysadmin" (as I called it). In a managed environment
the established security processes should be the relevant measure, and
the individual opinion of a sysadmin - to formulate it defensively - of
lesser importance. (I'm sure that for smaller companies it's certainly
more a challenge than for larger, process-driven companies.)

A special case, BTW, is a certified system. In the past I had worked in
a company that developed and sold security solutions. Systems had been
security tested and evaluated and then certified. Any change of the
system and the software versions invalidated the certificate.

Other experiences from lesser managed company contexts with more casual
handling of issues were less rewarding (for the company and customers).

This is where I am coming from. YMMV.

Janis

Grant Taylor

Mar 25, 2022, 2:38:21 PM
On 3/24/22 11:38 PM, Janis Papanagnou wrote:
> I think there is a difference between a system kept safe in case
> of the detection of a security incident in a professionally managed
> environment and a "well meaning sysadmin" (as I called it).

Okay.

I now think that what you originally meant by "well meaning sysadmin" is
what my coworkers refer to as a "cowboy" who's known for saying things
like "hold my beer" and "it works for me".

That is a distinctly different subset than a sys-admin knowingly
installing an alternate security package (e.g. Kerberized telnet) or
vendor updates pursuant to the organization's wishes and change control.

Yes, I acknowledge certified systems. I'm not quite sure how standard
software updates work in that arena. Mostly because I've not dealt with it.