ParaSail aims to be a SYSTEM programming language, but
unfortunately it is hell to get it running if Ada/GNAT
has not been ported to the operating system, for example
OpenIndiana (https://www.openindiana.org/).
A citation from the GCC build manual:
https://gcc.gnu.org/install/build.html
(archival copy: http://archive.is/MQ1um )
---citation--start------
In order to build GNAT, the Ada compiler,
you need a working GNAT compiler
(GCC version 4.0 or later). This includes GNAT tools
such as gnatmake and gnatlink, since
the Ada front end is written in Ada and uses
some GNAT-specific extensions.
---citation--end------
Bootstrapping GNAT seems to have been an issue
already in the year 2002
https://gcc.gnu.org/ml/gcc/2002-03/msg01169.html
and the discussion about how to bootstrap GNAT
seems to continue in the year 2017:
http://gcc.gnu.org/ml/gcc/2017-01/msg00156.html
2017 - 2002 = 15 years
My 2017_11 view is that if the GNAT bootstrapping issue
does not get solved and the ParaSail implementation
keeps depending on GNAT, then the GNAT bootstrapping
issue will be a serious SHOW STOPPER for wider ParaSail
adoption. According to my UN-experienced and UN-educated view
(all I know about compiler design is one or a few
university courses and some self-learning),
a clean path for getting ParaSail,
or any other programming language implementation,
built is (a rough driver sketch follows the list):
step_1)
Port a standardized implementation of some C compiler
to the HW and OS. It's OK for the C compiler to lack
optimizations. Maybe something like the
https://bellard.org/tcc/
step_2)
Use the unoptimizing C compiler to compile
something fancier, maybe an optimizing C compiler.
step_3)
Use the optimizing C compiler to compile
something even fancier, some system programming language,
maybe Ada, C++, ParaSail, Pascal, D, ...
step_4)
Use the system programming languages to compile
the rest of the "Babel": Python, Ruby, C#, Java_and_libs, ...
Not all package collections have both LLVM and GCC
available. For example, NetBSD/Minix3 lacks GCC entirely,
because Minix3 is developed as an academic project, where
there is no requirement to be able
to run popular, but GCC-specific, software. The argument
of the Minix3 academics is that for them LLVM
is good enough. On the other hand, different parties
have DIFFERENT REQUIREMENTS. The Microsoft/Windows folk
could not care less about reliability/security, while
the hosting service providers/implementers/organizers ("Enterprise"
IT departments) seem to love the Solaris strand of operating systems,
mainly due to the ZFS file system, whose
copy-on-write capability allows them to create
virtual_machines/jails/containers by "copying" and "deleting"
huge amounts of files very cheaply.
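For illustration only: the copy-on-write cloning that makes those
cheap "copies" possible could look roughly like this (the
pool/dataset names are made up; run only against a test pool):

#!/usr/bin/env python3
# Illustration of ZFS copy-on-write cloning; names are made up.
import subprocess

def zfs(*args):
    print("$ zfs", " ".join(args))
    subprocess.run(["zfs", *args], check=True)

# Snapshot a "golden" container image, then clone it.  The clone
# shares all unmodified blocks with the snapshot, so "copying"
# gigabytes of files costs almost no time or space.
zfs("snapshot", "tank/jails/base@golden")
zfs("clone", "tank/jails/base@golden", "tank/jails/jail01")

# "Deleting" the huge file tree is equally cheap.
zfs("destroy", "tank/jails/jail01")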
The scientific computing
crowd cares mainly about the correctness of the output
of their scientific software, not the reliability of the
software. If a scientific computing cluster goes down, well,
a week sooner or later does not make that much of a difference
if it takes 6 months to create a scientific paper or to
come up with industrial research results. Security is also
kind of irrelevant for the scientific computing crowd, because
the scientific cluster does not contain anything
that the spies or "hackers" would find interesting, unless
the spooks become scientists themselves and start to work
on exactly the same topics as the people whose data they
peek at.
The banking sector does not seem to care about
software development efficiency at all. The (Western, non-Estonian)
banking sector is so loaded (with money) that
instead of using computers for
analyzing money transfers, they supposedly use
humans to manually verify money transfers that go from
one bank to another. In Estonia PIN-calculators were
the norm by the year 2000, but from what I hear/read on the
wild-wild-web, the Americans and Canadians
still use paper cheques in 2017.
So, clearly the banks will not be the ones that invest
in proper software development, at least not the
Western banks, and the cost of software development,
the maximum utilization of hardware, etc. are just
irrelevant to them.
Long story short: the parties who need reliability and
efficiency are the ones who CAN NOT AFFORD the waste
of time and other resources. If the aircraft manufacturers
did not get nasty fines for being sloppy, then they would build
"flying Titanics", and instead of "safety critical"
software the Titanics would fall from the BLUE skies
according to the "standards" of the "blue screen of death".
Meaning: any software project that aims for
HIGH TECHNICAL QUALITY must have as its target audience
mainly those people who CAN NOT AFFORD
SHODDY technical quality.
I probably have not noticed all of the parties
that belong to that group of poor people, but
freelancers like me and small family businesses certainly
can not afford to waste time on the kind of nonsense
that megacorporations and government agencies have no
trouble burning/"spending" money on.
Anyways, the wild ideas that came to my mind for how to
fix the GNAT issue are:
wild_idea_1_that_probably_does_not_work)
Maybe GNAT and all of its dependencies
can be compiled to some
universal "intermediate" code, like
Java programs are compiled to
universally executable Java bytecode.
A GCC on some exotic operating system
might then take the bytecode analogue
and complete the compilation by producing
a binary that runs on that exotic operating system.
Maybe the bytecode analogue could also be
run by some JavaVM analogue for
bootstrapping GNAT on that exotic operating system.
Linking with the libraries of that exotic
operating system is going to be an interesting
task, which might be solved by having the
bytecode analogues of all of the libraries
available to the GCC instance that runs
on the exotic operating_system/hardware.
If GCC could generate Java bytecode
as its "target"/"backend", then maybe some tweak of the
https://en.wikipedia.org/wiki/GNU_Compiler_for_Java
might help. If there is some way to
translate the bytecode to native blobs,
then that idea might work.
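As a toy illustration of the universal "intermediate" code idea
(a made-up 4-instruction stack machine, not real JVM bytecode, and
nothing to do with actual GNAT/GCC internals): the program is
"compiled" once, and every exotic platform only needs this tiny,
easily ported interpreter.

#!/usr/bin/env python3
# Toy "universal bytecode" interpreter; the instruction set is made up.
PUSH, ADD, MUL, PRINT = range(4)

# Compiled once, runs wherever the interpreter runs: (2 + 3) * 7
program = [(PUSH, 2), (PUSH, 3), (ADD, None),
           (PUSH, 7), (MUL, None), (PRINT, None)]

def run(code):
    stack = []
    for op, arg in code:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == PRINT:
            print(stack.pop())

run(program)  # prints 35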
There's also the Moxie virtual CPU project
http://moxielogic.org/blog/pages/architecture.html and then there's the
https://www.adacore.com/press/ada-java_interfacing_suite (archival copy:
http://archive.is/Ks1Sg )
which, unfortunately, is not an option, because
it is proprietary software.
If Ada code can be compiled to Java bytecode,
then maybe the Ada-specific parts of the
GNAT source distribution could be handled
by "replacing" the Ada source with a
set of JVM .class files. That would make
the GNAT source distribution (more) portable
by removing the requirement to have
some previous version of GNAT
available on the exotic operating_system/HW.
Maybe the ParaSail Ada implementation could be
compiled to Java bytecode, which would then use the
natively available LLVM implementation at runtime.
I don't know. It's just one of the first wild,
NOT thoroughly thought out, thoughts.
The porting order might be:
1) C compiler.
2) C++ compiler.
3) JavaVM
4) GNAT
5) ParaSail
As of the writing of this comment
I do not know anything about the GIMPLE
intermediate representation:
http://gcc.gnu.org/onlinedocs/gccint/GIMPLE.html
http://gcc.gnu.org/onlinedocs/gcc-4.3.4/gccint/GIMPLE-Example.html
wild_idea_2_that_probably_does_not_work)
Figure out how to re-implement ParaSail
in ParaSail and then use the existing ParaSail
binaries for generating portable C code, which
can be compiled on the new operating system
with any C compiler. (A toy emit-C sketch follows.)
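A toy illustration of that emit-portable-C escape hatch; the input
"language" here is just constant arithmetic, not ParaSail:

#!/usr/bin/env python3
# Toy "compiler" front end that emits plain C as its portable output.
# The source language is only +, -, * over integer constants.
import ast

def to_c(expr: str) -> str:
    # Validate that the input really is plain integer arithmetic
    # before pasting it into the generated C source.
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, (ast.Expression, ast.BinOp, ast.Constant,
                                 ast.Add, ast.Sub, ast.Mult)):
            raise ValueError("only +, -, * over integer constants")
    return ('#include <stdio.h>\n'
            'int main(void) {\n'
            '    printf("%d\\n", ' + expr + ');\n'
            '    return 0;\n'
            '}\n')

# The emitted text compiles with any C compiler on the new platform.
print(to_c("(2 + 3) * 7"))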
wild_idea_3_that_probably_does_not_work)
Modify the GNAT build system by replacing
"localhost" gnat calls with remote procedure
calls to some other machine that already has
GNAT running. The output of the
remote GNAT instance would be used in
the "localhost" build process. (A wrapper sketch follows.)
wild_idea_4_that_probably_does_not_work)
Resort to virtual appliances that run
some Linux distribution that has everything
needed for cross-compiling ParaSail.
Porting ParaSail to a new operating_system/HW
would then take the form of upgrading the
cross-compilation capabilities of the
toolset.
wild_idea_5_that_probably_does_not_work)
Given how the Intel and AMD lawyers fend off
all other companies that want to create CPUs
that are compatible with the x86/AMD64 instruction sets,
and given the Oracle lawsuit against Google
https://en.wikipedia.org/wiki/Oracle_America,_Inc._v._Google,_Inc.
the Java bytecode might be monopolized the same
way the x86/AMD64 instruction sets are monopolized.
In theory the European Union "laws" (quotes, because
calling juridical rules "laws" is an
elegant Public Relations based deception: the
social arrangements, juridical "laws", are
not actual laws of science or laws of mathematics)
allow any consumer to INTERFACE with the equipment
that they own, but the European analogue of the DMCA
(https://en.wikipedia.org/wiki/Digital_Millennium_Copyright_Act),
the EUCD
(https://en.wikipedia.org/wiki/European_Union_Copyright_Directive),
forbids the sale and distribution of the
means for breaking "copyright protection" measures.
I have personal reasons for not trusting Microsoft
https://longterm.softf1.com/biased_history/2005_microsoft_hired_inorek_and_grey_to_lobby_for_software_patents/index.html
(archival copy: http://archive.is/1akLe ),
but it does seem that the people at Microsoft
have learned at least some of the Java lessons when
architecting the legal position of the
"Microsoft Java", the C#:
https://www.dotnetfoundation.org/
(The lesson that they have not learned is that
software projects that can not be kept up to date
with ZERO BUDGET will not survive "financial winters"
https://en.wikipedia.org/wiki/AI_winter
and therefore lack long-term availability.)
As long as there are Microsoft lawyers
fending off other parties who want to
collect tax from C# virtual machine
instruction set users, the C# VM "bytecode"
seems to be a safer option than the Java
bytecode. Although the safest bet from a
legal point of view might be the RISC-V
simulator/emulator:
https://bellard.org/riscvemu/
(Latest release at the time of the writing
of this comment: 2017_08_06)
Maybe the way to interface the simulated
computer with the host computer is to
place some "interface card" on the bus
of the simulated/emulated computer, and
that "interface card" would be a simulated
network card that is tunneled to some
loop-back IP address of the host computer.
To get rid of the network stack overhead,
the simulated network card might be replaced
with some "shared memory device" that has
some very primitive, but fast, custom
protocol. There might even be a whole cluster
of simulated RISC-V computers, and since they
all have the same RISC-V "hardware", there is
no need for all nodes in the cluster to store
all of /usr/lib . The cluster might be
a more efficient form of the
https://www.qubes-os.org/

Darn. I think that I like the RISC-V simulated cluster
approach the best, because it really boots up from
a plain C compiler and it also seems to be useful
for applications programming, where from a security
perspective it's attractive to separate the applications
from each other by limiting one application from
reading the data of another application.
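A toy stand-in for that loop-back "interface card", assuming the
custom protocol is just length-prefixed byte messages; the "guest"
side is played by a thread instead of a real emulated machine:

#!/usr/bin/env python3
# Made-up "shared memory / interface card" protocol over loop-back:
# each message is a 4-byte big-endian length prefix plus the payload.
import socket
import struct
import threading

HOST, PORT = "127.0.0.1", 50007

def send_msg(sock, payload):
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    header = sock.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack(">I", header)
    return sock.recv(length, socket.MSG_WAITALL)

# Host side: listens on loop-back for the simulated "card".
srv = socket.create_server((HOST, PORT))

def host_side():
    conn, _ = srv.accept()
    with conn:
        print("host got: ", recv_msg(conn).decode())
        send_msg(conn, b"ack from host")
    srv.close()

threading.Thread(target=host_side).start()

# Guest side: what the simulated interface card would do.
with socket.create_connection((HOST, PORT)) as card:
    send_msg(card, b"hello from simulated RISC-V node")
    print("guest got:", recv_msg(card).decode())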
(I've been thinking about how to do that with web
applications by installing different parts of them
under the rights of different users of the same
operating system instance. At first glance the
RISC-V simulated cluster also seems to limit
the security mess that Linux has, where various
application-layer-functionality related drivers,
like disc encryption software, get loaded
straight into kernel space. If only one of the
cluster nodes has the security mess needed to make
functionality, like the sshfs, available, then
at least the rest of the nodes are more-or-less OK.
The messy nodes in the cluster can be repeatedly
killed and cloned by some watchdog.)
Thank You for reading my
long-long-long text here.
Hmm. It seems that the pile of (unpaid) work
is so high that from my point of view
nothing will happen on the GNAT portability
front for years. But I like the long perspective :-D