HardenedBSD March 2022 Status Report


Shawn Webb

Mar 31, 2022, 5:50:51 PM
to HardenedBSD Users
Hey all,

We made some cool progress in March. Though I, Shawn, am writing this
report, I'll refer to myself in the third person for the rest of the
report. :-)

In src:

1. Shawn reverted an OpenZFS change that introduced a potential local
privilege escalation (LPE) race condition in ZFS. Only 14-CURRENT
was affected.
2. Shawn merged in zlib's CVE-2018-25032 fix.
3. Shawn enabled -ftrivial-auto-var-init=zero by default in base
clang. This means we no longer need to pass in any compiler flags
to get this feature, and all ports that rely on the system compiler
now automatically benefit (see the sketch after this list).
4. FreeBSD introduced a change that enables dumpon to use the
underlying non-encrypted swap device when encrypted swap is used.
Shawn reverted this change since users who choose to encrypt their
swap encrypt it for a reason--HardenedBSD should proactively
protect users by respecting their encryption preferences.
5. Loic found and fixed an issue with the default stack permissions in
HardenedBSD's linuxulator.
6. Coming up soon: sponsored work by BlackhawkNest, Inc. that provides
support in base for (more) easily building HardenedBSD 13-STABLE
based versions of OPNsense, in the hope of giving the wider
community the ability to produce its own builds. Some of this work
has landed in a special feature branch.
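
For those curious what the -ftrivial-auto-var-init=zero change (src #3)
actually buys us, here is a minimal, purely illustrative C sketch; the
file and variable names are made up for the example. With the feature
in effect, the uninitialized local below reads back as zero instead of
whatever happened to be left on the stack. On HardenedBSD's base clang
this now happens by default; elsewhere you would pass the flag
explicitly, e.g. `cc -O0 -ftrivial-auto-var-init=zero uninit.c`:

    /* uninit.c -- illustrative example only */
    #include <stdio.h>

    static void
    show_uninit(void)
    {
        int leftover;   /* deliberately left uninitialized */

        /*
         * Reading an uninitialized local is normally undefined
         * behavior and may print stale stack contents; with
         * -ftrivial-auto-var-init=zero the compiler zero-fills
         * it, so this prints 0.
         */
        printf("leftover = %d\n", leftover);
    }

    int
    main(void)
    {
        show_uninit();
        return (0);
    }

This is also what ports change #1 below relies on: since the system
compiler applies the zeroing on its own, ports no longer need the flag
injected through CFLAGS.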

In ports:

1. In tandem with src change #3, Shawn modified the ports tree to rely
on the system compiler's application of
-ftrivial-auto-var-init=zero. There's no need to apply that feature
via CFLAGS injection.
2. Loic removed the `stackautoinit:off` USE_HARDENING flag from a very
large number of ports. This was a huge lift that he and Shawn worked
on together, and his effort is very much appreciated.
3. Ibrahim Kaikaa (Mr.UNIX) has helped fix a number of ports. We still
have several outstanding merge requests that I need to verify.

Infrastructure:

1. The HardenedBSD GitLab server had a drive failure. I had already
planned to rebuild the pool, moving from a set of older 1TB
spinning-rust drives to 2TB SSDs. The drive failure accelerated the
rebuild, which completed successfully.
2. After many months of downtime, our arm64 package building server
has come back online! We're now building 14-CURRENT/arm64 packages.
3. After src change #3 landed, all of the build infrastructure servers
were updated.

Cool projects:

1. Loic released an unofficial livecd of HardenedBSD that boots into
XFCE[0]. I've started the discussion with him to convert that from
an "unofficial" project to an "official" one. :-)

Special notes:

1. As a reminder, support for HardenedBSD 12-STABLE will be delegated
to the community. As such, binary updates and package builds will
cease. The hardened/12-stable/master branch will no longer be
auto-synced.
2. Please remember to let us know by 15 Apr 2022 if you have any
thoughts to share on whether HardenedBSD should support the
linuxulator.
3. Please remember to let us know if you have any thoughts to share on
the proposed changes to the default sshd configuration[0].

[0]: https://git.hardenedbsd.org/hardenedbsd/HardenedBSD/-/merge_requests/60

Thanks,

--
Shawn Webb
Cofounder / Security Engineer
HardenedBSD

https://git.hardenedbsd.org/hardenedbsd/pubkeys/-/raw/master/Shawn_Webb/03A4CBEBB82EA5A67D9F3853FF2E67A277F8E1FA.pub.asc

Shawn Webb

Mar 31, 2022, 5:58:23 PM
to HardenedBSD Users
On Thu, Mar 31, 2022 at 05:50:48PM -0400, Shawn Webb wrote:
> Cool projects:
>
> 1. Loic released an unofficial livecd of HardenedBSD that boots into
> XFCE[0]. I've started the discussion with him to convert that from
> an "unofficial" project to an "official" one. :-)

I just realized I left this link out. Sorry about that! Here it is:

https://groups.google.com/a/hardenedbsd.org/g/users/c/QUTUJfm30Dg/m/0VNKUeVhHgAJ

Carlos López

Apr 1, 2022, 8:38:28 AM
to Shawn Webb, HardenedBSD Users
Wow ... really impressive. Many thanks for your efforts.

--
Regards,
C. L. Martinez
