I noticed a bit of an odd issue with maintaining `known_hosts` when the
target machine is behind a bastion using `ProxyJump` or `ProxyCommand`:
host key clashes.
The client for me right now is OpenSSH_9.3p1 on Gentoo Linux/AMD64. I'm
a member of a team, and most of us use Ubuntu (yes, I'm a rebel).
Another team, who actually maintain this fleet, often access the same
machines from Windows 10/11 boxes (not sure if they use native OpenSSH
or WSL). I rather suspect this issue is not platform-specific.
Target machines are using OpenSSH on Debian/ARMHF (the exact version
varies with the exact OS version) -- hardware is essentially
industrialised Raspberry Pis.
The bastions are typically OpenWRT-based (Teltonica) routers with
Dropbear SSHd.
We share a configuration tree via a git repository which contains `Host`
entries for each of the target machines and the intermediate bastion hosts.
The target machines mostly use "private" address space in the
172.16.0.0/12 range (although some use 172.40.0.0/16 addresses, because
some goose thought all of 172.0.0.0/8 was "private"). The bastion hosts
run `dhcpd`, and in many cases the target acquires its address via DHCP
(yes, a super bad idea).
That means that in many cases, multiple _different_ OpenSSH servers
have the "same" local IPv4 address. Since, when using `ProxyJump`, this
address is recorded alongside the host's public key in the user's
`known_hosts` file, you can imagine what happens when one logs into one
server via one bastion, then tries to log in to a different server via
a different bastion where that second server has the same local IPv4
address.
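Concretely, the second connection trips over the key recorded by the
first, and you get the familiar warning (abbreviated here):

```
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
```

followed by a refusal to connect when strict checking is on, even
though nothing is actually wrong.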
The crux of this is that we cannot assume the local IPv4 address is
unique, since it's not (and in many cases, not even static).
In the case of `ProxyCommand`, the IPv4 address of the target may not
even be obvious to the SSH client process.
-- Possible solutions / work-arounds using existing OpenSSH client --
Looking around for a solution, I ruled out turning off
`StrictHostKeyChecking` (as suggested on ServerFault) as a terrible
idea inviting a man-in-the-middle attack.
DNS might "solve" the problem, but is likely to be messy to implement
(bastion hosts are resource constrained, not sure if they do dynamic DNS
for their LAN clients).
I know I'll get push-back from the other team if I try to mandate unique
local IPv4 addresses. (This is the same team that unwittingly decided
to rely on DHCP static assignment to "do the right thing".)
Link-local IPv6 is a tempting prospect: I have used it to get into a
target node when its DHCP client had gone AWOL leaving IPv4
unconfigured, and being derived from the MAC address, the address
*should* be globally unique -- but this assumes a dual-stack LAN. (And
the engineering team who look after these are likely to baulk at it.
They barely understand IPv4!)
Port forwarding will require a lot of manual piss-farting around on the
router's config webpage… and will likely break if the embedded DHCPd
decides to not assign the static IP the target machine was supposed to get.
In the `ssh_config` man page I see there is a `KnownHostsCommand`
option, which could possibly be employed here. However, since the files
are "shared" by multiple users, there's the issue of paths: I'll bet a
relative `KnownHostsCommand` is resolved against ${PWD}, not against
the directory of ~/.ssh/config or whichever config file it was imported
from.
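For what it's worth, a `KnownHostsCommand` helper could sidestep the
relative-path problem by resolving the shared tree through an
environment variable. A minimal sketch -- `WORKPLACE_SSH` and the
`shared-known-hosts` file name are conventions I'm inventing here, and
it ignores hashed entries and @-markers:

```shell
# Sketch of a KnownHostsCommand helper.  ssh would invoke a script
# wrapping this function as:
#     KnownHostsCommand /usr/local/bin/known-hosts-lookup %H
# and use whatever known_hosts-format lines it prints.
lookup_host_keys() {
    # $1: the host ssh is asking about (substituted for %H)
    # WORKPLACE_SSH points at the shared git checkout -- an assumed
    # convention, not something ssh sets for you.
    keyfile="${WORKPLACE_SSH:-$HOME/.ssh}/shared-known-hosts"
    [ -r "$keyfile" ] || return 0    # no file: nothing to report
    # Match $1 against each comma-separated name in field 1.
    awk -v h="$1" '{
        n = split($1, names, ",")
        for (i = 1; i <= n; i++)
            if (names[i] == h) { print; next }
    }' "$keyfile"
}
```

Each user would only need `WORKPLACE_SSH` exported to wherever their
checkout lives, rather than agreeing on one absolute path.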
User or global `known_hosts` files won't work due to their format
(entries are keyed by the endpoint IP address, which we know is not
unique). (I have `HashKnownHosts` turned off on this Gentoo machine;
my workplace laptop has it turned on due to Ubuntu's default. Not sure
whether this hash takes `ProxyJump` paths or `ProxyCommand` options
into account.)
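(As far as I can tell the hash covers only the hostname/IP field --
nothing about the proxy path is involved. A quick experiment with a
throw-away key, under a temporary directory, shows a hashed entry is
still found by plain address lookup:)

```shell
# The known_hosts hash covers only the hostname/IP field; the ProxyJump
# path plays no part, so hashed files clash exactly like plain ones.
cd "$(mktemp -d)"
ssh-keygen -q -t ed25519 -N '' -f hostkey   # throw-away key pair
printf '172.16.1.2 %s\n' "$(cut -d' ' -f1-2 hostkey.pub)" > kh
ssh-keygen -H -f kh                         # hash in place (kh.old kept)
ssh-keygen -F 172.16.1.2 -f kh              # lookup by address still works
```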
-- Possible solutions that require OpenSSH client changes --
One way that might work would be to embed the effective
`ProxyJump`/`ProxyCommand` path in the "host name" stored in
`known_hosts` -- will look ugly as sin, but at least the client can
"uniquely" identify each server, and determine which key to use for
validation.
e.g. you might have in known_hosts:

    172.16.1.2{ProxyJump us...@10.20.30.40,us...@192.168.123.45} <algo> <key>
    172.16.1.3{ProxyCommand us...@10.20.30.40:nc 192.168.234.56 22} <algo> <key>

where the {} part encodes the path by which you reach the host.
An alternative might be an "ExpectHostKey" option that can be put in
~/.ssh/config or specified with "-o ExpectHostKey=…", which tells the
SSH client "ignore your known_hosts file, the host *will* be using this
key". So if you know the public key (e.g. you ran `ssh-keyscan`), you
could either:
put in .ssh/config:

    Host mytarget
        Hostname 172.16.1.2
        ProxyJump user2@bastion2
        ExpectHostKey ecdsa-sha2-nistp256 AAAA…=

    Host bastion2
        Hostname 192.168.123.45
        ProxyJump user@bastion1
        ExpectHostKey ecdsa-sha2-nistp256 AAAA…=

    Host bastion1
        Hostname 10.20.30.40
        ExpectHostKey ecdsa-sha2-nistp256 AAAA…=

OR, specify it on the command line (assuming the bastions are "known"):

    ssh -o ExpectHostKey="ecdsa-sha2-nistp256 AAAA…=" \
        -J user@bastion user@target
The bonus with this latter approach is that in a config-sharing
environment using an SCM (be it git, Subversion, CVS… whatever),
assuming that repository is protected and "trusted", it would enable
all members of a team to automatically "trust" the host key with
minimal infrastructure set-up.
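Incidentally, something close to `ExpectHostKey` can be approximated
today by pointing `UserKnownHostsFile` at a throw-away file containing
only the expected key. A sketch -- the address and key are
placeholders, and the file path is my own choice:

```
printf '172.16.1.2 ecdsa-sha2-nistp256 AAAA…=\n' > /tmp/expected-key
ssh -o UserKnownHostsFile=/tmp/expected-key \
    -o GlobalKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=yes \
    -J user@bastion user@172.16.1.2
```

The difference from a real `ExpectHostKey` is the extra temporary file,
and that the rest of your known_hosts is invisible for that invocation.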
Are any of the above ideas feasible? Did I miss an obvious solution to
this?
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
_______________________________________________
openssh-unix-dev mailing list
openssh-...@mindrot.org
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
That's a handy little service… I'm not sure of its long-term stability
for production use, but it's one to have a closer look at and keep in
the memory bank.
It's not so much the DNS admin frowning on its use. I think the subnets
involved are /24s and our public DNS infrastructure is Amazon AWS
managed via Terraform, so it could be scripted if we wanted such detail
to be publicly visible. (And we do have a couple of private IPs visible
on our domain -- mostly so Let's Encrypt can validate the host exists.)
The biggest impediment is the constrained nature of the routers that
we're using as bastion hosts on site. We'd have to deploy the DNS
server either on the router itself, or at a static address within reach
of it (and configure the router to use that resolver).
From what I understand of ProxyJump:

    ssh -J proxyuser@proxyhost targe...@targethost.domain

targethost.domain would need to be resolved by proxyhost, not the local
client.
Another approach would be to set up /etc/hosts on the bastion; if it
were a conventional Linux machine I'd have little issue with this, but
I'm not sure OpenWRT (or at least Teltonica's flavour of it, which is
an older release) would maintain /etc/hosts changes persistently.
> Otherwise, and assuming a *manageable* (mainly, enumerable) population
> of remote sites, I wonder whether this approach might work, too?
>
> Host Perth-47
> HostName 172.23.45.47
> ProxyJump Perth-GW
> GlobalKnownHostsFile /dev/null
> UserKnownHostsFile ~/.ssh/known-in-Perth
> Host Adelaide-11
> HostName 172.45.67.11
> ProxyJump Adelaide-GW
> GlobalKnownHostsFile /dev/null
> UserKnownHostsFile ~/.ssh/known-in-Adelaide
>
> (Yes, I realize that with target IPs being *potentially dynamic* per
> DHCP, having known hostkeys indexed by site *and IP* might still turn
> out to be bothersome.)
Ahh okay, so you can have a separate `UserKnownHostsFile` per host entry.
The situation we have is that our workstations' .ssh/config actually
imports config files from elsewhere (a git repo):

    Include /home/me/workplace/ops/config/ssh/prod/*
    Include /home/me/workplace/ops/config/ssh/dev/*
    Include /home/me/workplace/ops/eng-ssh/*-config
So assuming one of those files was
/home/me/workplace/ops/eng-ssh/bigcust-config:

    # Bastion router on the site, VPNing back to the office
    Host bigcustomer-00123-bne-md01
        HostName 10.20.34.5
        UserKnownHostsFile bigcustomer-00123-bne-md01-hosts

    Host bigcustomer-00123-bne-br01
        HostName 172.30.0.100
        ProxyJump user@bigcustomer-00123-bne-md01
        UserKnownHostsFile bigcustomer-00123-bne-md01-hosts

    Host bigcustomer-00123-bne-md02
        HostName 10.20.34.6
        UserKnownHostsFile bigcustomer-00123-bne-md02-hosts

    Host bigcustomer-00123-bne-br02
        HostName 172.30.0.100
        ProxyJump user@bigcustomer-00123-bne-md02
        UserKnownHostsFile bigcustomer-00123-bne-md02-hosts
Would the UserKnownHostsFile be relative to the current working
directory of the `ssh` process at the time of its call, or would it
figure out that these files are relative to
/home/me/workplace/ops/eng-ssh/bigcust-config?
If the latter, I could then store those files in the git repository (as
*signed* git commits, so they can be authenticated later), which would
offer similar benefits to the `ExpectHostKey` proposal I made earlier.
Nope… just tried it, at this time it's relative to whatever directory
you call `ssh` from.
Which, if everybody who used this directory kept it in the same place,
wouldn't be a big issue… but since I'll bet everyone I'm working with
keeps this repository in a different place, there is no "stable" path
that will work for everyone. Short of getting everyone to set an
environment variable in ~/.profile, I can't configure this in a
seamless manner.
Agreed… 2001-era OpenSSH is positively ancient. I have to contend with
hosts that don't support Ed25519 (yeah, I had to be "trendy" when I
last set up the YubiKey, didn't I?) and some that use ssh-rsa public
keys, but nothing quite that ancient, thankfully.
Using `HostKeyAlias` is by far the closest to achieving what I'm after.
The downside is that the client will "forget" the host keys it already
has (because it doesn't know which IP corresponds to which alias) and
will have to be told to accept them again. From that point on, though,
there should be no clashes.
One can set `StrictHostKeyChecking accept-new` for that -- which,
whilst far from ideal, is in practice no worse than blindly typing
'yes' at each prompt.
I think I'll gather up what host keys I can and dump them in a
reference 'known_hosts' file that people can concatenate to their own
`~/.ssh/known_hosts`, which will solve that other issue. It's the best
I can do until such time as we can make the host keys file 'portable'
(in terms of absolute paths).
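The "gather and concatenate" step could be kept idempotent with a small
helper; a sketch (the function name and directory layout are my own
invention):

```shell
# merge_host_keys DIR OUT: concatenate every known_hosts fragment
# found in DIR, de-duplicate the lines, and write the result to OUT.
# Re-running it (or users re-concatenating) never produces duplicates.
merge_host_keys() {
    dir="$1"
    out="$2"
    cat "$dir"/* 2>/dev/null | sort -u > "$out"
}
```

Team members would then run something along the lines of
`merge_host_keys ~/workplace/ops/ssh-keys /tmp/ref` and append /tmp/ref
to their own known_hosts.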
Regards,
You could mandate people having a ~/.ssh/config-workplace.d symlink
pointing to the right place (the git checkout directory), and use that
in a (static) ~/.ssh/config file:

    Include ~/.ssh/config-workplace.d/*
That's a one-time setup cost.
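Concretely, each person would run something like this once (the
checkout path is of course whatever their own clone uses):

```shell
# One-time setup: point a fixed-name symlink at the per-user checkout
# so that a static ~/.ssh/config can Include through it.
mkdir -p "$HOME/.ssh"
ln -sfn "$HOME/workplace/ops/config/ssh" "$HOME/.ssh/config-workplace.d"
```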
Optionally you could even try hiding that in a Match block (note that
`Match` needs the `host` keyword before the pattern):

    Match host bigcustomer-*-bne-*
        Include ~/.ssh/config-workplace.d/*

so that only those nodes are influenced by the redirection.