On 2015-02-26 19:09:54 +0000,
dodecah...@gmail.com said:
> Rapid development comes down to libraries and frameworks, at least for
> new stuff
>
> My vote goes towards implementing something like MS's .net framework
> where they created their CIL layer for which all languages compile down
> to as an intermediary language.
That'd be the calling standard and the common language environment, on VMS.
More typical on VMS is shipping new language-specific run-times, as
some of the compiler kits do.
> This can then be shipped to run anywhere, theoretically, independent of
> underlying OS version and hardware architecture
The theoretical has a nasty habit of slamming into reality.
> Windows created its .NET framework (whether they stole from the Java
> world isn't being debated here) but I think they did a brilliant job as
> they effectively unified their languages so that it didn't matter what
> language you ultimately code in
>
> They did away with the code and compile to a specific architecture
> cycle and added a third logical layer that has some real benefits in
> terms of code portability and run-time
>
> Their model is, Language -> CIL (common intermediate language) -> CLR
> (common language run-time)
>
> This adds a logical layer between the application and the OS. It then
> doesn't matter if the OS changes as long as the version of .net
> relevant to the application assembly exists, the code can run, somewhat
> virtualised
This is one approach to avoiding DLL hell. Sort of. Everybody then
gets to deal with shipping and maintaining different versions of .NET.
<https://msdn.microsoft.com/en-us/library/ff602939(v=vs.110).aspx>.
VMS tried to make all RTL libraries and system services
upward-compatible, which means building on another version usually
works. (Alas, it also means that there's a whole lot of effort
involved in maintaining that upward-compatibility, and VMS
engineering has been loath to remove the oldest and gnarliest of the
code, for reasons of compatibility. See my references to the
compatibility millstone, posted earlier today.) On other platforms,
the frameworks are packaged into application bundles, which is akin to
packaging the app with its own DLLs or its own .NET or its own VMS
shareable images. On Android, Google used the Dalvik bytecode scheme
with JIT akin to Java, but is now delivering run-time updates outside
of the OS, via an add-on package of Android Runtime (ART) routines.
Different solutions. All workable.
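As a rough illustration of the quoted Language -> CIL -> CLR split, here is a deliberately tiny stack-machine sketch in Python: the front end emits a portable instruction stream once, and any host that has the interpreter can then execute it. The opcode names and structure are invented for illustration, and are not drawn from the actual CIL instruction set.

```python
# Toy "intermediate language" plus interpreter. A compiler front end
# would emit the bytecode once; only the interpreter is per-platform.
PUSH, ADD, MUL = "PUSH", "ADD", "MUL"

def run(bytecode):
    """Execute a list of (opcode, operand...) tuples on a stack machine."""
    stack = []
    for op, *args in bytecode:
        if op == PUSH:
            stack.append(args[0])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown opcode: %r" % op)
    return stack.pop()

# (2 + 3) * 4, expressed once as portable bytecode
program = [(PUSH, 2), (PUSH, 3), (ADD,), (PUSH, 4), (MUL,)]
print(run(program))  # 20
```

The same bytecode list runs unchanged wherever the interpreter runs, so porting the application reduces to porting (or already having) the interpreter on the target.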
> Contrast this to having your compiler generate code specific to the OS
> version and OS architecture (already we have some system calls that are
> Alpha-specific!).
Usual on VMS is either to back-build, or to build on the oldest version.
There are architectural details that can derail Alpha and Itanium
source compatibility, but the two can generally work from the exact
same code-base. That's what VMS does, as do many applications.
> I'm certainly no programmer
That's somewhat of a handicap in this discussion.
> but I think what MS did with its whole .NET framework was visionary
> and certainly moved windows on from a one language beast (visual basic
> was pushed because Bill Gates used it but now C# is the main flavour,
> contrast this with unix that is still pretty much C locked to this day)
VMS isn't Unix. Or Windows. The VMS calling standard already permits
mixed-language programming, and entirely within the same application
executable, and VMS provides source portability for most operations
across platforms. It's common to see applications involving C,
assembler and other languages all mixed together in one executable
image, too. More than a little code will simply recompile, relink and
run on both Alpha and Itanium, and there's little reason to assume that
won't also be the case with any VMS port to x86-64.
> MS did however have to bite the bullet and bring OO (in some cases,
> force) extensions into their languages. I remember the cry from many a
> basic programmer about how their language was being changed and made
> more complex, however, they did eventually move to a more modern
> construct of the language
Strictly for application portability across platforms and versions, OO
is not a central factor.
OO is a factor here when you push OO features into the OS APIs, as
Apple has done with OS X and iOS via Objective-C and Swift. Pushing OO
into the APIs does make sense for various environments, too: the final
linkages between the various components are performed at run-time, and
modifications and "patches" and extensions become available.
In practical terms, if you're going to go to the effort of creating a
bytecode portability layer, then having OO capabilities within that
layer is effectively a necessity. You're going to be interoperating
with OO languages, and Microsoft has more than a little C++ code around.
> Having some type of CIL code in VMS would free us (or go a long way
> towards) allowing us to change architectures (I'm thinking, Application
> code here) more easily going forward without the need to always recode,
> recompile, retest over and over again when VMS moves onto a new
> architecture. This whole recoding cycle is why, where I work, VMS is
> left to die in the back corner running an application that they refuse
> to modernise because of the huge recoding and retesting effort involved
> just because HP insisted on loving Itanium for too long
Might want to ponder what was done during the last two ports of VMS,
then, and how simple or complex recompiling the source code for the
new architecture turned out to be.
For running on older versions, you can back-link (unsupported, but
usually works) or can just build your code on the oldest release. Not
having to use lib$find_image_symbol to use newer system services or
newer RTL features would be nice, though.
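On Unix-style systems, the loose analogue of lib$find_image_symbol is run-time symbol lookup through the dynamic loader. Here's a small Python sketch using ctypes; it mirrors only the look-up-at-run-time idea, not the VMS routine's actual image-name/symbol-name interface.

```python
import ctypes

# Resolve a routine at run time from images already mapped into the
# process -- loosely analogous to lib$find_image_symbol on VMS, where
# you would pass an image name and a symbol name and get back an
# address to call through.
libc = ctypes.CDLL(None)           # handle covering the loaded C run-time
strlen = libc.strlen               # look the symbol up by name at run time
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t

print(strlen(b"OpenVMS"))          # 7
```

The point of the analogy: code built this way can call services that did not exist at link time, at the cost of doing its own lookup and error handling, which is exactly the chore it would be nice to avoid.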
> Running an existing application inside an emulated environment (charon
> etc) to escape hardware obsolescence is ok for a while but as the OS
> moves onwards and changes architectures then having to maintain several
> versions of an emulator will eventually wind us back in the same spot
> we are in today with the Vax and Alpha and Itanium architectures.
> Having code compile down to something like CIL will (should) make
> porting an application to a totally new architecture much easier as
> you're not having to test for functional hardware changes as much
First saw bytecode tools back with the Terak
<http://en.wikipedia.org/wiki/Terak_8510/a> and the UCSD p-System and
its p-Code, back around 1980. The p-Code stuff and the JVM and .NET
intermediate layers are all neat ideas, but (admittedly, I may be
somewhat dense here) reasonable native source code portability seems
to be a pretty good alternative to the approach. Not having to debug
run-time errors back up through a bytecode layer is nice, too. p-Code
and the JVM (and the less-than-successful EFI byte code stuff, for
that matter) do mean not having to haul the rest of the run-time
environment to the target platform, but AFAIK the VSI plan here is to
have native VMS on x86-64, and to make available translation tools for
applications where no source code is available.
> Just how far forward do we want to project our compiler and development
> efforts? I know short term we need to get VMS viable again but I
> thought it might also be relevant to be thinking about where we want
> to be eventually taking VMS software development to...
I'd rather see more going into the compilers and tools, and not into
trying to emulate Microsoft's attempts to dig themselves out of
maintaining and updating their own products across a gazillion
different APIs and multiple older software versions.
> p.s. I don't code on windows and I don't code much at all but I sat
> next to someone who used to write stuff for VMS but has moved onto
> windows and had to put up with their ear-bashing about the virtues of
> .net for a few years :-) I guess some of it rubbed off on me
There can be reasons to roll out a .NET-like solution. But you haven't
yet sold me on the value of that implementation, here. There's still
a need to test in the various target versions, and somebody's going to
sign up to create and maintain the bytecode engine and the
cross-version compatibility layer, too. VSI probably isn't going to
be hauling much code back to Alpha, and it seems rather unlikely that
they'd create and backport a .NET framework for HP. It seems more
likely they'll be using common sources for their Itanium and x86-64
builds, as will most third-party vendors and customers.