On 2016-07-01 12:43:28 +0000, John E. Malmberg said:
> On 6/30/2016 10:07 AM, Stephen Hoffman wrote:
>> On 2016-06-30 06:06:08 +0000,
lawren...@gmail.com said:
>>
>>> I saw some discussion elsewhere on why Microsoft Windows is prone to
>>> “DLL Hell”, and whether Linux had a similar problem.
>>
>> DLL Hell is a solved problem. Has been for many years. Apps are
>> adopting the available solutions, too. Slowly.
>
> Only for some definitions of "Solved".
>
> Basically the problem is solved as long as your Windows system disk is
> a minimum of 250 GB.
>
> The current solution is to track in a special mostly hidden directory
> the private and usually redundant copies of the DLLs that are in
> potential conflict, and in another mostly hidden directory keep all
> past versions of the DLLs that are replaced by updates.
> ...

My reply was referencing "DLL Hell", a mess which can and does exist on
some platforms, and particularly with older applications. But if you
believe that "DLL Hell" is still a problem for recent approaches and
applications that are new or have been overhauled — and in the case of
OpenVMS, yes, things are still stuck in "DLL Hell" whenever upward
compatibility isn't available or isn't feasible — maybe look around at
what other approaches and what other solutions are available?

Can what Microsoft does now with their frameworks be done better?
Certainly. Can disk or file de-dup or clever run-time tricks or
compression or other techniques help somewhat with the bloat? Sure.
Has software bloat been discussed over the past several decades? Most
definitely, and more than a few folks here remember the 2GL/3GL/4GL
debates, and OMG how big did application code get from VAX to Alpha to
Itanium, and many other discussions and contexts. Is it even
appropriate to be optimizing application software that's targeting
desktop and laptop environments for 250 GB hard disk drives or for 2016
computing in general? If that's your current installed base and you're
squeezed, sure, some investment is warranted, but — if your changes are
not going to be ready far enough ahead of the next-generation
deployment to really matter — then again maybe not.

OpenVMS development and even OpenVMS application developers need to
stop looking at the worst of other platforms — except as a lesson in
mistakes to avoid — and start looking at the best of the other
platforms, and — where it's feasible — how to make OpenVMS competitive
with that, and how to make your applications better.

If your user interfaces are character-cell applications running in terminal emulators, or
if your operations are based on serial terminals for your management,
or if your applications require folks to hand-edit configuration files,
or if your periodic management requirements are not already automated,
maybe your front-end or your application or your environment needs some
work? If you are headed toward your own "DLL Hell", maybe learn how
others have avoided similar problems?
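As one concrete example of how others have avoided this: ELF platforms use the soname convention, where library files carry full version numbers but applications request only a major-version link name, resolved through symlinks. A minimal sketch of that resolution, with illustrative library names and versions not taken from the post:

```python
# Sketch of the soname idea ELF platforms use to avoid "DLL Hell":
# files on disk carry full versions, applications request only a
# major-version name, and symlink-style aliases bridge the two.
# All names and versions here are illustrative.

installed = {
    "libfoo.so.2.1.0": "code for major 2",   # actual files on disk
    "libfoo.so.3.0.0": "code for major 3",
}
aliases = {
    "libfoo.so.2": "libfoo.so.2.1.0",  # run-time link names (symlinks)
    "libfoo.so.3": "libfoo.so.3.0.0",
}

def dlopen(soname):
    """Resolve a requested soname the way ld.so follows the symlink."""
    return installed[aliases[soname]]

print(dlopen("libfoo.so.2"))  # old applications keep working
print(dlopen("libfoo.so.3"))  # new applications get the new major
```

Installing a compatible 2.1.1 just retargets the libfoo.so.2 alias; incompatible major versions coexist side by side, which is roughly the coexistence that WinSxS approximates with those redundant private copies.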

Look at other platforms. Learn what works, and why. Apply that to
your own environment, and your own approaches and applications. If
you're in VSI, apply the best of that learning and those other
platforms to OpenVMS itself. If you haven't figured out how to
version and manage code upgrades, then definitely take the time to
learn from the missteps of "DLL Hell", and from the trade-offs
inherent in server uptime, upward compatibility, and the rest.
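On the versioning point: OpenVMS shareable images already carry a match criterion (GSMATCH) in this general spirit, and ELF sonames encode the same idea. The sketch below is an illustrative policy check only; the function names and the "equal major, installed minor at least as new" rule are assumptions for the example, not VMS's actual implementation.

```python
# Illustrative upward-compatibility check of the kind versioned shared
# libraries rely on: accept an installed library when its major version
# matches what the application was built against and its minor version
# is at least as new. Version strings and policy are assumptions.

def parse_version(s):
    """Turn '2.3.1' into (2, 3, 1) for easy numeric comparison."""
    return tuple(int(part) for part in s.split("."))

def is_compatible(built_against, installed):
    """Same major, installed minor >= built minor: upward compatible."""
    b, i = parse_version(built_against), parse_version(installed)
    return i[0] == b[0] and i[1] >= b[1]

print(is_compatible("2.3.1", "2.5.0"))  # True: newer minor, same major
print(is_compatible("2.3.1", "3.0.0"))  # False: major bump breaks it
print(is_compatible("2.3.1", "2.2.9"))  # False: older minor than built
```

The point is less the arithmetic than the contract: once library authors commit to a rule like this and stick to it, applications can take minor updates in place without the redundant private copies that WinSxS hauls around.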