
NSA Intercepted Windows Bug Reports, was: Re: VMS security, was: Re: OpenVMS books


Simon Clubley

Aug 8, 2017, 4:55:37 PM

On 2017-08-08, Stephen Hoffman <seao...@hoffmanlabs.invalid> wrote:
>
> To the folks that have already decided to port off of OpenVMS?
> They're either going, or they're gone. For various reasons.
>

Including that they may not even know there's an x86 port in progress.

> Beyond this work — and as your other reply to Arne's
> comments — there's a whole lot of work around isolating the problems
> that are inevitable; access violations, making successful exploits of
> vulnerabilities more difficult, contending with application and server
> failures, patch deployments, crash uploads, logging, etc.

Just make sure that when doing crash uploads they are encrypted during
transport and secured at the other end:

https://tech.slashdot.org/story/17/08/05/236227/the-nsa-intercepted-microsofts-windows-bug-reports
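
As a rough sketch of what "encrypted during transport" might look like for
such an upload (Python standard library only; the endpoint URL is a
placeholder, not anything VSI actually runs):

# Sketch: upload a crash dump over TLS with certificate verification.
# The endpoint URL is a placeholder; the point is that the transport is
# encrypted and the server's identity is verified before anything is sent.
import ssl
import urllib.request

def upload_crash_dump(dump_path, url="https://crash-reports.example.com/upload"):
    context = ssl.create_default_context()             # verify the server cert chain
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse older TLS revisions

    with open(dump_path, "rb") as f:
        data = f.read()

    request = urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(request, context=context) as response:
        return response.status

"Secured at the other end" is the harder half, of course; encryption in
transit says nothing about how the reports are stored or who can read them.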

The interception itself has been known about for some time, but Bruce
Schneier now believes it may be an additional way for the NSA to find new,
previously unknown Windows vulnerabilities, and I agree.

What ? Did you think Simon was just being paranoid when he started going
on a few months ago about the distinct possibility the NSA were monitoring
the communications of various OS vendors, including VSI ?

As Kerry has pointed out, VMS is still used in a lot of high profile
environments. This is definitely a double-edged sword when it comes to
VMS security.

Based on what we know about NSA/GCHQ activities, there's a really good
chance that when you send your unencrypted bug reports to VSI via email,
you are helping to compromise the security of all VMS systems (from both
VSI and HPE).

VMS becoming available on x86-64 in a couple of years has nothing to
do with the security of _today's_ VMS systems; attacks are carried
out against the installed base, not against systems still to be installed.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world

David Froble

Aug 8, 2017, 11:19:25 PM

Oh, this is good ...

:-)

How will this compare with making public a security vulnerability for which
the OS vendor is perhaps not ready to publish a fix, or for which some users
cannot apply any provided fixes?

NSA and such intercepting bug reports ... bad!

Simon going public with a security vulnerability ... good!

I see how it is ...

Paul Sture

Aug 9, 2017, 11:21:28 AM

A reminder from 2014:

"NSA targets sysadmin personal accounts to exploit networks"

<http://www.zdnet.com/article/nsa-targets-sysadmin-personal-accounts-to-exploit-networks/>


--
Everybody has a testing environment. Some people are lucky enough to
have a totally separate environment to run production in.


Simon Clubley

Aug 9, 2017, 1:57:50 PM

On 2017-08-09, Paul Sture <nos...@sture.ch> wrote:
>
> A reminder from 2014:
>
> "NSA targets sysadmin personal accounts to exploit networks"
>
><http://www.zdnet.com/article/nsa-targets-sysadmin-personal-accounts-to-exploit-networks/>
>

Thank you Paul.

VSI (and VMS people in general): please read this if you still think
I'm being paranoid. These are the lengths the intelligence services
are willing to go to.

GCHQ already did this to some innocent Belgian telecommunications sysadmins
so they could get access to the systems those sysadmins controlled.

Do you really still think the NSA/GCHQ would pass up a list of VMS bugs
which it may be possible to turn into vulnerabilities if all they had
to do was monitor some unencrypted email addresses ?

Simon Clubley

Aug 9, 2017, 2:24:08 PM

On 2017-08-08, David Froble <da...@tsoft-inc.com> wrote:
>
> Oh, this is good ...
>
>:-)
>

You seem to be easily amused David. :-)

> How will this compare with making public a security vulnerability for which
> the OS vendor is perhaps not ready to publish a fix, or for which some users
> cannot apply any provided fixes?
>

If the OS vendor needs a little bit more time then the researcher will
probably grant that time if there's a good reason. I know I certainly
would.

As for the latter, their systems were insecure anyway and the researcher
has done them a favour by proving it. Depending on the industry or other
area they are in, there may also be severe punishment from a regulator
somewhere if they are running systems which are known to be out of date
and cannot be fixed.

> NSA and such intercepting bug reports ... bad!
>

Very bad. For example, you never know when an NSA exploit might end
up shutting down an entire country's national health system.

> Simon going public with a security vulnerability ... good!
>

Very good. (If it's done properly.) The end users now know their
systems are insecure and can do something about it like apply the
vendor provided patches. If the vendor won't provide a fix after
a reasonable amount of time, then the end users can use the knowledge
to try and protect themselves.

Don't forget that releasing the vulnerability details a couple of months
or so after notifying the vendor is the best way we have come up with so
far to force the vendor to provide fixes and to provide them in a
reasonable amount of time.

As I have mentioned before, we tried the "vendor controls the release
schedule" thing. There's a very good reason why it was replaced with
the "security researcher controls the release schedule" thing.

David Froble

Aug 9, 2017, 6:02:06 PM

Simon Clubley wrote:
> On 2017-08-08, David Froble <da...@tsoft-inc.com> wrote:
>> Oh, this is good ...
>>
>> :-)
>>
>
> You seem to be easily amused David. :-)

Actually, yes. Better to be amused than just about anything else.

>> How will this compare with making public a security vulnerability for which
>> the OS vendor is perhaps not ready to publish a fix, or for which some users
>> cannot apply any provided fixes?
>>
>
> If the OS vendor needs a little bit more time then the researcher will
> probably grant that time if there's a good reason. I know I certainly
> would.
>
> As for the latter, their systems were insecure anyway and the researcher
> has done them a favour by proving it. Depending on the industry or other
> area they are in, there may also be severe punishment from a regulator
> somewhere if they are running systems which are known to be out of date
> and cannot be fixed.
>
>> NSA and such intercepting bug reports ... bad!
>>
>
> Very bad. For example, you never know when an NSA exploit might end
> up shutting down an entire country's national health system.

I could take your above paragraph about favors and apply it to the NSA.

Note that it wasn't an NSA exploit that shut down anything.

"We're from the NSA, and we're here to help you."

BA HA HA HA !!!

>> Simon going public with a security vulnerability ... good!
>>
>
> Very good. (If it's done properly.) The end users now know their
> systems are insecure and can do something about it like apply the
> vendor provided patches. If the vendor won't provide a fix after
> a reasonable amount of time, then the end users can use the knowledge
> to try and protect themselves.

Maybe it was a Simon exploit that shut down a country's national health system?

> Don't forget that releasing the vulnerability details a couple of months
> or so after notifying the vendor is the best way we have come up with so
> far to force the vendor to provide fixes and to provide them in a
> reasonable amount of time.
>
> As I have mentioned before, we tried the "vendor controls the release
> schedule" thing. There's a very good reason why it was replaced with
> the "security researcher controls the release schedule" thing.

I don't think you should lump VSI in with the snake oil people and HP. In fact,
I could resent that.

Now, there is the recent case of the "security researcher" who registered the
domain name that stopped the recent "troubles". Seems at some prior time he was
asking for examples of another "trouble", perhaps to research it, and now the
"powers that be" have arrested him for "distributing malware", or some such. I
could suggest that not everyone shares your high opinion of "security
researchers", or perhaps some people are complete blithering idiots. I'll
leave it as an exercise for you to consider my opinion on that.

:-)

Arne Vajhøj

Aug 9, 2017, 9:30:41 PM

On 8/9/2017 1:53 PM, Simon Clubley wrote:
> On 2017-08-09, Paul Sture <nos...@sture.ch> wrote:
>> A reminder from 2014:
>>
>> "NSA targets sysadmin personal accounts to exploit networks"
>>
>> <http://www.zdnet.com/article/nsa-targets-sysadmin-personal-accounts-to-exploit-networks/>
>
> VSI (and VMS people in general): please read this if you still think
> I'm being paranoid. These are the lengths the intelligence services
> are willing to go to.
>
> GCHQ already did this to some innocent Belgian telecommunications sysadmins
> so they could get access to the systems those sysadmins controlled.
>
> Do you really still think the NSA/GCHQ would pass up a list of VMS bugs
> which it may be possible to turn into vulnerabilities if all they had
> to do was monitor some unencrypted email addresses ?

Is it harder for them to monitor the web site where you want to publish
vulnerabilities than a VSI email address?

:-)

Arne


Simon Clubley

Aug 10, 2017, 3:18:38 AM

The basic idea is to make sure that by the time the NSA/GCHQ gets the
vulnerability details the vendor has already released a patch and
hence the information is obsolete.

If they instead get access to the details while the vendor is working on
a patch, then that's a couple of months the NSA have to use the information.

Simon Clubley

Aug 10, 2017, 3:28:25 AM

On 2017-08-09, David Froble <da...@tsoft-inc.com> wrote:
>
> Note that it wasn't an NSA exploit that shut down anything.
>

Yes, it most certainly was:

https://en.wikipedia.org/wiki/EternalBlue

The code from the NSA exploit was directly copied into WannaCry.

Don't forget that even the NSA code that checks whether the exploit is being
examined in a sandbox was present in WannaCry. This is what allowed
the researcher to register the domain used to perform that check
and hence stop the original version of WannaCry in its tracks.
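
For anyone who hasn't followed the details, the kill-switch logic amounts to
something like the following sketch (the domain below is a placeholder, not
the real one, and this is an illustration of the mechanism rather than the
actual WannaCry code):

# Illustration of a WannaCry-style kill switch; the domain is a placeholder.
# The malware tried to reach an unregistered domain and treated a successful
# response as a sign it was running in an analysis sandbox, so it stopped.
# Once the researcher registered that domain, every copy took the "stop" path.
import urllib.error
import urllib.request

KILL_SWITCH_URL = "http://unregistered-gibberish.example.com/"  # placeholder

def kill_switch_triggered(url=KILL_SWITCH_URL, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True     # domain resolved and answered: stop
    except (urllib.error.URLError, OSError):
        return False        # unreachable: the original code carried on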

>
> Maybe it was a Simon exploit that shut down a country's national health system?
>

No, because Simon has already given the vendor a couple of months (at least)
to fix the problem before going public so by the time the details become
public the end users should have already applied the patches.

May I also remind you that it's not an uncommon practice to reverse
engineer patches to determine what was fixed. Therefore, the details
can become public any time after the patch is released even if the
original researcher doesn't reveal the details.

Stephen Hoffman

Aug 10, 2017, 11:30:02 AM

On 2017-08-10 07:14:24 +0000, Simon Clubley said:

> The basic idea is to make sure that by the time the NSA/GCHQ gets the
> vulnerability details the vendor has already released a patch and hence
> the information is obsolete.
>
> If they instead get access to the details while the vendor is working
> on a patch, then that's a couple of months the NSA have to use the
> information.

If any of y'all have all of the sorts of stupid server and stupid
network mistakes addressed and locked down, and can actually provide a
meaningful challenge to the sorts of well-funded folks with pervasive
network access (short of black-bagging, bribery, NSLs, or social
engineering and phishing), more power to you. Y'all are far past what
most of us have funded and implemented.

For most of the rest of us, getting rid of DECnet, telnet, ftp, rlogin,
and the rest, of overhauling the security and adding mechanisms to
better isolate app failures, and getting certificate chains installed
and working and updated... of getting the current server versions of
Apache and SMB and ISC BIND and the rest... of getting current TLS and
deprecating TLS older than 1.2.... that's where most of us OpenVMS
folks are.

Of VSI hopefully replacing SCS with something encrypted and
authenticated, of adding sandboxing, adding telemetry, and of all of us
and VSI included getting the time to rummage through our current and
particularly our older code looking for the sorts of design and
implementation flaws that attackers rummage for, and of getting
frameworks and tools that make that "most secure operating system on
the planet" marketing claim rather less egregious.

This is not to imply that VSI isn't a target. Whether or not they
have internalized it, they are. They have been. If the VSI folks do
get OpenVMS where they want it, they will be an even bigger target.

I see that self-issued HPE cert with the ~23 year lifetime that we're
more than halfway through, and I really start to wonder about some of
this.



--
Pure Personal Opinion | HoffmanLabs LLC

seasoned_geek

Aug 14, 2017, 8:43:35 PM

On Thursday, August 10, 2017 at 10:30:02 AM UTC-5, Stephen Hoffman wrote:
> For most of the rest of us, getting rid of DECnet, telnet, ftp, rlogin,
> and the rest

I like telnet and ftp. I simply keep my DS-10 away from the Internet so I can use my terminal emulator. <Grin>

In truth, if any of those things are actually a problem, your systems architect should be taken out and shot.

Never ever plug a production system directly into the Internet. Always place a sacrificial lamb, like a Websphere server, outside the main firewall allowing only fixed format proprietary data messages to flow between it and the real production systems.

The day you allow a Web page to directly connect to a production database you have failed as a systems architect.

Stephen Hoffman

Aug 16, 2017, 2:57:22 PM

On 2017-08-15 00:43:32 +0000, seasoned_geek said:

> On Thursday, August 10, 2017 at 10:30:02 AM UTC-5, Stephen Hoffman wrote:
>> For most of the rest of us, getting rid of DECnet, telnet, ftp, rlogin,
>> and the rest
>
> I like telnet and ftp. I simply keep my DS-10 away from the Internet so
> I can use my terminal emulator. <Grin>

I'm aware of a number of folks doing exactly that; of using telnet and
FTP and DECnet. With various of those folks using emulator client
tools that are fully capable of creating more secure connections, too.
Which is bad news.

> In truth, if any of those things are actually a problem, your systems
> architect should be taken out and shot.

I'm aware of a number of OpenVMS servers that are running cleartext protocols.

> Never ever plug a production system directly into the Internet. Always
> place a sacrificial lamb, like a Websphere server, outside the main
> firewall allowing only fixed format proprietary data messages to flow
> between it and the real production systems.

I'm skeptical that the folks that have invented their own proprietary
formats and protocols have all spent the time and effort and resources
to review and reverse-engineer and to look for security vulnerabilities
in their custom protocols and their libraries. Of becoming familiar
with Burp Suite and Kali and other common tools, too. Of fuzzing the
APIs and protocols. Then there's the discussion of depending on
1990s- or Y2K-era application and server isolation, and of assuming that
firewalls provide more than a handy network demarcation. This also
assumes the organization can forever maintain the intended and
necessary isolation, the firewall rules and the rest. Worse for app
security, getting apps using OpenVMS-provided secure protocols working
correctly and securely with the provided APIs is non-trivial for those
folks that are migrating or have migrated away from telnet and FTP and
DECnet, too.

> The day you allow a Web page to directly connect to a production
> database you have failed as a systems architect.

I'm aware of a number of OpenVMS servers that are both web server and
back-end server. More than a few other servers running other software
in similar configurations, too. That's before discussing extending
any existing breaches, too; of breaching one client or one server
within the target environment, and the attackers working toward further
access within and across other network-connected devices, and of
seeking additional vulnerabilities. Such as folks occasionally using
telnet or SCS or insecure revisions of TLS, which then exposes
information useful to the attackers.

While VSI is working to address various security vulnerabilities,
there's much more work to be done both at VSI and by end-users of
OpenVMS. This because OpenVMS itself has been maintained to be
upward compatible, and — irrespective of the VSI marketing — efforts to
upgrade the default system and app security and APIs have been
problematic. VSI have the leadership role around upgrading the
default OpenVMS system and app security and APIs, and this particularly
given VSI security marketing. VSI IP will help, though the effort is
much larger than that and it's very much ongoing for both VSI and
customers. And for now, VSI has to get the port out the door, and to
work to increase revenues.

The current environment is a whole lot less forgiving than the Y2K era.
Trying to "outrun" attackers — the sorts of folks that are using
continuously-running for loops, scanning for vulnerabilities — with the
current OpenVMS patch process is a security strategy that has already
failed. Keeping insecure tools such as telnet, FTP and DECnet around
in the default configurations is simply not living up to current
marketing, nor to where we'll be in 2022 and 2027.

Put another way, VSI can continue work toward and can implement changes
toward making their marketing slogan rather more believable, and VSI
can update and in a few spots can replace problematic APIs, and can
provide updated and new tools and APIs that make both updating existing
apps and writing new apps much easier, or the VSI folks can use the
existing "most secure operating system on the planet" marketing to
continue to generate skepticism.

Yes, users can and will do unwise things. Users can and will make
mistakes. Developers and operations folks can make mistakes, too. A
secure platform works toward preventing folks — ISVs, end-users, even
the platform's own developers — from making many of those mistakes.


BTW: For those of you updating to operating system releases that might
have eliminated the telnet client entirely, as can happen with
operating systems that don't presently market themselves as "the most
secure operating system on the planet", the netcat tool (where
available) can be used to test access. With the BSD nc variant, use:

  nc -z {host} {port}

Or use s_client for testing a secure connection, of course. Yes,
OpenVMS lacks nc, though it does offer s_client.
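
Where nc isn't available at all, and if a Python interpreter happens to be
installed, a few lines will do the same reachability check; this is only a
sketch of the "is the port open" test and says nothing about the service
behind it:

# Rough equivalent of "nc -z host port": does a plain TCP connect succeed?
# Use s_client (or the ssl module) when you need to look at the TLS side.
import socket
import sys

def port_open(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host, port = sys.argv[1], int(sys.argv[2])
    print("open" if port_open(host, port) else "closed/unreachable")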

seasoned_geek

Aug 16, 2017, 4:36:23 PM

On Wednesday, August 16, 2017 at 1:57:22 PM UTC-5, Stephen Hoffman wrote:
> On 2017-08-15 00:43:32 +0000, seasoned_geek said:
>
> > On Thursday, August 10, 2017 at 10:30:02 AM UTC-5, Stephen Hoffman wrote:
> >> For most of the rest of us, getting rid of DECnet, telnet, ftp, rlogin,
> >> and the rest
> >
> > I like telnet and ftp. I simply keep my DS-10 away from the Internet so
> > I can use my terminal emulator. <Grin>
>
> I'm aware of a number of folks doing exactly that; of using telnet and
> FTP and DECnet. With various of those folks using emulator client
> tools that are fully capable of creating more secure connections, too.
> Which is bad news.

You lost me there. Either there are too many words or not enough. My DS-10 is local network access only. In order for someone to penetrate it they have to first penetrate one of my Linux desktops (which are periodically being wiped for new distros) then they have to wait until I have it turned on.

>
>
> I'm skeptical that the folks that have invented their own proprietary
> formats and protocols have all spent the time and effort and resources
> to review and reverse-engineer and to look for security vulnerabilities
> in their custom protocols and their libraries.

Not protocols, data. Typically it's a data packet in an MQ queue or a packet sent directly to a service sitting on a port. The service or queue reader pulls in a maximum size and that's it. Everything else is thrown away. Every column is a fixed width and the executable processing the data _NEVER_ builds dynamic SQL.

Only the ports assigned to a service/queue are open on the machine. Everything else disabled or running a NullBot.
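
A minimal sketch of that pattern, with made-up field widths and a made-up
table; the essentials are the hard size cap, the fixed-width slicing, and a
parameterized insert so the statement text never depends on the message:

# Sketch of a fixed-format message consumer; widths and table are invented.
# Read at most MAX_MSG bytes, slice fixed columns, bind values as parameters.
import sqlite3  # stand-in for whatever database the real service uses

MAX_MSG = 64                       # hard cap; anything beyond this is discarded
FIELDS = (("account", 0, 10),      # (name, start, end) fixed-width columns
          ("amount", 10, 22),
          ("memo",   22, 54))

def handle_message(raw, conn):
    text = raw[:MAX_MSG].decode("ascii", errors="replace")
    row = {name: text[start:end].strip() for name, start, end in FIELDS}
    conn.execute(
        "INSERT INTO ledger (account, amount, memo) VALUES (?, ?, ?)",
        (row["account"], row["amount"], row["memo"]),
    )
    conn.commit()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE ledger (account TEXT, amount TEXT, memo TEXT)")
    handle_message(b"ACCT000123       42.50payroll run", db)
    print(db.execute("SELECT * FROM ledger").fetchall())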

>
> > The day you allow a Web page to directly connect to a production
> > database you have failed as a systems architect.
>
> I'm aware of a number of OpenVMS servers that are both web server and
> back-end server. More than a few other servers running other software
> in similar configurations, too.

Just because they exist does not mean their system architect should not be taken out and shot.

Stephen Hoffman

Aug 16, 2017, 5:23:21 PM

On 2017-08-16 20:36:21 +0000, seasoned_geek said:

> You lost me there. Either there are too many words or not enough. My
> DS-10 is local network access only. In order for someone to penetrate
> it they have to first penetrate one of my Linux desktops (which are
> periodically being wiped for new distros) then they have to wait until
> I have it turned on.
....
> Just because they exist does not mean their system architect should not
> be taken out and shot.

You clearly prefer to blame the user. Users — even experienced ones
— inevitably make mistakes, too. I prefer to guide the user; to
avoid allowing the user to make the mistake, to make a better approach
easier to use, or to force the user to work harder to make the mistake.
My approach won't cure all the mistakes. Your approach doesn't
address where we are now, nor where we're likely headed, nor how we'll
reduce our current problems.

As for the network... You're clearly confident that your internal
network is and will remain secure. I don't trust networks to remain
secure. Here's a little related reading:
https://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/43231.pdf


Based on what I'm seeing on the networks I deal with, I don't trust
firewalls, nor printers or the rest, and while I'd prefer to avoid
folks clicking on links or opening macro-infested documents I know
they're going to do that anyway and which means figuring out how to
deal with the consequences of that happening. Some of the
widely-available printer-based attacks are quite clever, persistent,
and nasty, for instance. And I don't expect that everybody has
current firmware loaded, nor even has current firmware patches
available for all their printers and IoT devices and the rest.

I prefer that APIs and protocols be available, or be created, that seek
to make secure and authenticated traffic easier to establish and
maintain, preferably without requiring more folks to become experts in
security, authentication and encryption, and preferably also allowing
the connections to be upgraded as the older protocols age out while
avoiding changes to the apps. OpenVMS utterly stinks in this area.
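
For comparison, here's roughly what "easier to establish and maintain" looks
like where a current TLS stack with sane defaults is available; a sketch in
Python with a placeholder host, and obviously not an OpenVMS API:

# Sketch: an authenticated, encrypted client connection in a handful of lines.
# The default context verifies the server's certificate chain and hostname;
# pinning the minimum version ages out the older TLS revisions.
import socket
import ssl

HOST = "server.example.com"   # placeholder
PORT = 443

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print(tls_sock.version())   # e.g. "TLSv1.2" or "TLSv1.3"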

But then I also don't consider a platform that even ships with telnet,
ftp and DECnet in the default configuration to be "the most secure
operating system on the planet", either. Make folks work at it to
load those clients and servers; force end users through extra steps to
downgrade their own security. There are certainly still cases where
folks need one or more of those, but having insecure protocols in the
default mix is just asking for adverse events.

VAXman-

Aug 17, 2017, 9:15:33 AM

In article <on2cs5$f1f$1...@dont-email.me>, Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:
>On 2017-08-16 20:36:21 +0000, seasoned_geek said:
>
>> You lost me there. Either there are too many words or not enough. My
>> DS-10 is local network access only. In order for someone to penetrate
>> it they have to first penetrate one of my Linux desktops (which are
>> periodically being wiped for new distros) then they have to wait until
>> I have it turned on.
>.....
>> Just because they exist does not mean their system architect should not
>> be taken out and shot.
>
>You clearly prefer to blame the user. Users — even experienced ones
>— inevitably make mistakes, too. I prefer to guide the user; to
>avoid allowing the user to make the mistake, to make a better approach
>easier to use, or to force the user to work harder to make the mistake.
> My approach won't cure all the mistakes. Your approach doesn't
>address where we are now, nor where we're likely headed, nor how we'll
>reduce our current problems.
>
>As for the network... You're clearly confident that your internal
>network is and will remain secure. I don't trust networks to remain
>secure. Here's a little related reading:
>https://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/43231.pdf

'research' and 'google' in the same URL. I nearly peed myself.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.