On 2018-01-08 16:03:20 +0000,
dgordo...@gmail.com said:
> On Sunday, January 7, 2018 at 9:34:26 PM UTC-5, Simon Clubley wrote:
>
>> When looking at the vulnerability I found, there are times when I have
>> wondered if it was a backdoor deliberately planted by the NSA.
>
> No. It was simple programming mistake. I can even identify the edit in
> which it was inserted and the responsible engineer.
At best, you now know roughly when it was inserted and who might get
blamed, but not who actually made the change, nor why it was made.
If I or any other privileged user were so inclined, the check-in could
have been adjusted to track back to some other, or some made-up, user.
BTW: I'd be happy to explain how to edit source code check-ins while
they're pending review, or there's the simpler expedient of making the
change in the original sources in the developer's own directories
before the check-in, likely without the developer even realizing a
change has been made. There are no cryptographic checksums in use in
VDE or CMS, no checksum chains and no blockchains over the changes, and
few (no?) folks extract and diff the changes they've made, or that they
intended to make. I'd also be happy to show you how to forge the
records in the database; it's not at all difficult. All of that
presumes the developer isn't explicitly making the changes with the
intention of introducing a backdoor, in which case no database record
will indicate anything amiss.
At its core, this is also why few folks will trust host-local logs on
compromised systems, and also why some folks are using or are seeking
to add distributed logging and checksum chains into logging
implementations.
Now, before somebody suggests it: do I think blockchains are
appropriate here? No. If that level of auditing and change-tracking
were a new requirement, I'd first look at chained SHA-3 checksums or
digital signatures with embedded timestamps, combined with distributed
logging. Maybe with write-only auditing, though that's problematic for
many applications and environments. As part of upgrading the auditing
and change tracking, I'd also look at what git and other tools provide,
adopting, updating, or ignoring those approaches where appropriate.
Use of SHA-1 wouldn't be my preferred choice any more, either:
https://gist.github.com/masak/2415865
> Please recalibrate your tinfoil hat.
Do I think that's what happened here? No. Why would an agency
deliberately plant vulnerabilities while the incidental and accidental
vulnerabilities are still easy to locate? I'd expect attackers have
already looked for existing holes, too, and this particular hole is a
local privilege escalation, not the rather more desirable remote code
execution (RCE) flaw. As an attacker intent on using clandestine means
to gain access, I'd also want something that led to an RCE, if I were
going to go to the effort involved.
Nor do I believe that the NSA is the only organization that might seek
to embed a backdoor, but that's fodder for another discussion or three.
Some related background on why some folks are paranoid:
https://falkvinge.net/2013/11/17/nsa-asked-linus-torvalds-to-install-backdoors-into-gnulinux/
https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-attempt-of-2003/
Etc.
FWIW, VSI/HPE/HP/Compaq/DEC C should flag the source code construct
behind that 2003-era backdoor attempt, assuming the compiler check
hasn't been disabled.
Do I think government agency backdoors are the biggest concern for VSI?
No. Not even remotely. The bigger concerns are finishing the port and
building up the general system security: server software updates,
better APIs, encryption everywhere, LDAP in place of
SYSUAF/RIGHTSLIST, the woefully outdated installation, patch, and
crash infrastructures, system and app logging, etc. Lots of work.
Lots and lots of work.
--
Pure Personal Opinion | HoffmanLabs LLC