More of my philosophy about compile time and build time and more of my thoughts..


Amine Moulay Ramdane

Sep 21, 2022, 1:58:00 PM
Hello,


More of my philosophy about compile time and build time and more of my thoughts..

I am a white arab from Morocco, and i think i am smart since i have also
invented many scalable algorithms..


I say that the new Rust programming language has some limitations: it is complex to program in, difficult to learn, and also slow in compiling. And speaking about compile time and build time, read my following thoughts:

More about compile time and build time..

Look here about Java it says:


"Java Build Time Benchmarks

I'm trying to get some benchmarks for builds and I'm coming up short via Google. Of course, build times will be super dependent on a million different things, but I'm having trouble finding anything comparable.

Right now: We've got ~2 million lines of code and it takes about 2 hours for this portion to build (this excludes unit tests).

What do your build times look like for similar sized projects and what did you do to make it that fast?"


Read here to notice it:

https://www.reddit.com/r/java/comments/4jxs17/java_build_time_benchmarks/


So 2 million lines of code of Java takes about 2 hours to build.


And how long do you think 2 million lines of code take
to build in Delphi?

Answer: just about 20 seconds.


Here is the proof from Embarcadero, read and look at the video to be convinced about Delphi:

https://community.idera.com/developer-tools/b/blog/posts/compiling-a-million-lines-of-code-with-delphi

C++ also takes "much" more time to compile than Delphi, and Rust takes
more time to compile than C++, and you can for example take a look at the following webpage to notice that Java compiles around two times faster than C++:

https://www.reddit.com/r/java/comments/q2n3j9/is_javas_compile_time_less_than_c_s_compile_time/


This is why i said previously the following:


I think Delphi is a single-pass compiler, so it is very fast at compile time, and i think C++, Java and C# use multi-pass compilers that are much slower than Delphi in compile time, but i think that the executable code generated by Delphi is still fast, and is faster than C#'s.

And what about the advantages and disadvantages of single and multi pass compiler?

And from Automata Theory we get that any Turing Machine that does 2 (or more) passes over the tape can be replaced with an equivalent one that makes only 1 pass, with a more complicated state machine. At the theoretical level, they are the same. At a practical level, all modern compilers make only one pass over the source code. It is typically translated into an internal representation that the different phases analyze and update. During flow analysis, basic blocks are identified. Common subexpressions are found, precomputed, and their results reused. During loop analysis, invariant code is moved out of the loop. During code emission, registers are assigned, and peephole analysis and code reduction are applied.


More of my philosophy of the polynomial-time complexity of race detection and more of my thoughts..

I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so i have quickly understood how Rust
detects race conditions, and i think that a slew of
"partial order"-based methods have been proposed whose
goal is to predict data races in polynomial time, but at the
cost of being incomplete and failing to detect data races in
"some" traces. These include algorithms based on the classical
happens-before partial order, and those based
on newer partial orders that improve the prediction of data
races over happens-before. So i think that we have to be optimistic,
and read the following web page about the Sanitizers:

https://github.com/google/sanitizers

And notice carefully the ThreadSanitizer, so read carefully
the following paper about ThreadSanitizer:

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35604.pdf


And it says in the conclusion the following:

"ThreadSanitizer uses a new algorithm; it has several modes of operation, ranging from the most conservative mode (which has few false positives but also misses real races) to a very aggressive one (which
has more false positives but detects the largest number of
real races)."

So as you are noticing, even the very aggressive mode doesn't detect
all the data races, but it misses only a really small number of real races, so it gives a very high probability of really detecting real races,
and i think that you can also use my methodology below of incrementally deriving a model from the source code and using the Spin model checker, so as to raise the probability of detecting real races even higher.


Read my previous thoughts:

More of my philosophy about race conditions and about composability and more of my thoughts..

I say that a model is a representation of something. It captures not all attributes of the represented thing, but rather only those that seem relevant. So my way of working in software development in Delphi and Freepascal is also to use a "model" of the source code that i execute in the Spin model checker to detect race conditions, so i invite you to take a look at the following new tutorial that uses the powerful Spin tool:

https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html

So you can for example install the Spin model checker to detect race conditions; this is how you will get much more professional at detecting deadlocks and race conditions in parallel programming. And i invite you to look at the following video to learn how to install the Spin model checker on Windows:

https://www.youtube.com/watch?v=MGzmtWi4Oq0

More of my philosophy about race detection and concurrency and more..

I have just looked quickly at different race detectors, and i think that
the Intel Thread Checker from Intel company from "USA" is also very good since the Intel Thread Checker needs to instrument either the C++ source code or the compiled binary to make every memory reference and every standard Win32 synchronization primitive observable, so this instrumentation from the source code is very good since it also permits me to port my scalable algorithms inventions by for example wrapping them in some native Windows synchronization APIs, and this instrumentation from the source code is also business friendly, so read about different race detectors and about Intel Thread Checker here:

https://docs.microsoft.com/en-us/archive/msdn-magazine/2008/june/tools-and-techniques-to-identify-concurrency-issues

So i think that the race detectors of other programming languages have to provide this instrumentation of the source code, as the Intel Thread Checker from the Intel company from "USA" does.

More of my philosophy about Rust and about memory models and about technology and more of my thoughts..


I think i am highly smart, and i say that the new programming language that we call Rust has an important problem, since the following interesting article says that atomic operations that don't have correct memory ordering can still cause race conditions in safe code, and this is why the suggestion made by the researchers is:

"Race detection techniques are needed for Rust, and they should focus on unsafe code and atomic operations in safe code."


Read more here:

https://www.i-programmer.info/news/98-languages/12552-is-rust-really-safe.html


More of my philosophy about programming languages about lock-based systems and more..

I think we have to be optimistic about lock-based systems, since race conditions detection can be done in polynomial-time, and you can notice it by reading the following paper:

https://arxiv.org/pdf/1901.08857.pdf

Or by reading the following paper:

https://books.google.ca/books?id=f5BXl6nRgAkC&pg=PA421&lpg=PA421&dq=race+condition+detection+and+polynomial+complexity&source=bl&ots=IvxkORGkQ9&sig=ACfU3U2x0fDnNLHP1Cjk5bD_fdJkmjZQsQ&hl=en&sa=X&ved=2ahUKEwjKoNvg0MP0AhWioXIEHRQsDJc4ChDoAXoECAwQAw#v=onepage&q=race%20condition%20detection%20and%20polynomial%20complexity&f=false

So i think we can continue to program in lock-based systems, and about
composability of lock-based systems, read my following thoughts about it:

More of my philosophy about composability and about Haskell functional language and more..

I have just read quickly the following article about composability,
so i invite you to read it carefully:

https://bartoszmilewski.com/2014/06/09/the-functional-revolution-in-c/

I am not in accordance with the above article, and i think that the above scientist is programming in the Haskell functional language, and for him it is the way to composability, since he says that functional programming like Haskell is the way that allows composability in the presence of concurrency, but that lock-based systems don't allow it. But i don't agree with him, and i will give you the logical proof of it, and here it is: read what an article from ACM, written by both Bryan M. Cantrill and Jeff Bonwick from Sun Microsystems, is saying:

You can read about Bryan M. Cantrill here:

https://en.wikipedia.org/wiki/Bryan_Cantrill

And you can read about Jeff Bonwick here:

https://en.wikipedia.org/wiki/Jeff_Bonwick

And here is what says the article about composability in the presence of concurrency of lock-based systems:

"Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable:

“Locks and condition variables do not support modular programming,” reads one typically brazen claim, “building large programs by gluing together smaller programs[:] locks make this impossible.”9 The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized."

Read more here:

https://queue.acm.org/detail.cfm?id=1454462

More of my philosophy about HP and about the Tandem team and more of my thoughts..


I invite you to read the following interesting article to notice
how HP was smart by also acquiring Tandem Computers, Inc.
with their "NonStop" systems and by learning from the Tandem team
that has also extended HP NonStop to the x86 server platform; you can read about it in my writing below, and you can read about Tandem Computers here: https://en.wikipedia.org/wiki/Tandem_Computers , so notice that Tandem Computers, Inc. was the dominant manufacturer of fault-tolerant computer systems for ATM networks, banks, stock exchanges, telephone switching centers, and other similar commercial transaction processing applications requiring maximum uptime and zero data loss:

https://www.zdnet.com/article/tandem-returns-to-its-hp-roots/

More of my philosophy about HP "NonStop" to x86 Server Platform fault-tolerant computer systems and more..

HP to Extend HP NonStop to x86 Server Platform

HP announced in 2013 plans to extend its mission-critical HP NonStop technology to x86 server architecture, providing the 24/7 availability required in an always-on, globally connected world, and increasing customer choice.

Read the following to notice it:

https://www8.hp.com/us/en/hp-news/press-release.html?id=1519347#.YHSXT-hKiM8

And today HP provides HP NonStop to x86 Server Platform, and here is
an example, read here:

https://www.hpe.com/ca/en/pdfViewer.html?docId=4aa5-7443&parentPage=/ca/en/products/servers/mission-critical-servers/integrity-nonstop-systems&resourceTitle=HPE+NonStop+X+NS7+%E2%80%93+Redefining+continuous+availability+and+scalability+for+x86+data+sheet

So i think that programming the HP NonStop for x86 is now compatible with x86 programming.

And i invite you to read my thoughts about technology here:

https://groups.google.com/g/soc.culture.usa/c/N_UxX3OECX4


More of my philosophy about stack allocation and more of my thoughts..


I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so i have just looked at the x64 assembler
of the C/C++ _alloca function that allocates "size" bytes of space from the stack, and it uses x64 assembler instructions to move the RSP register, and i think that it also aligns the address and ensures that it doesn't go beyond the stack limit etc., and i have quickly understood its x64 assembler, and i invite you to look at it here:

64-bit _alloca. How to use from FPC and Delphi?

https://www.atelierweb.com/64-bit-_alloca-how-to-use-from-delphi/


But i think i am smart and i say that the benefit of using a stack comes mostly from the "reusability" of the stack; i mean it is done this way
since from a thread you have, for example, to execute other functions or procedures and to exit from those functions or procedures, and this exiting makes the memory of the stack available again for "reusability". This is why i think that using a dynamically allocated array as a stack is also useful, since it also offers those benefits of reusability of the stack, and i think that the dynamic allocation of the array will not be expensive. So this is why i think i will implement the _alloca function using a dynamically allocated array, and i think it will also be good for my sophisticated coroutines library, that you can read about in my following thoughts about preemptive and non-preemptive timesharing at the following web link:


https://groups.google.com/g/alt.culture.morocco/c/JuC4jar661w





Thank you,
Amine Moulay Ramdane.
