
More about my Diploma, my education, and my way of doing things..


amin...@gmail.com

Feb 29, 2020, 4:28:48 PM
Hello..


More about my Diploma, my education, and my way of doing things..

As you have noticed, I am a white Arab, and I have lived in Quebec, Canada
since 1989.

Now, if you ask me how I make money so that I am able to live..

You have to understand my way of doing things. I obtained my Diploma in
Microelectronics and Informatics in 1988; it is not a college-level
diploma, it is a university-level Diploma, comparable to an
Associate degree or the French DEUG.

Read here about the Associate degree:

https://en.wikipedia.org/wiki/Associate_degree

And after I obtained my Diploma, I also completed one year of
pure mathematics at the university level.

So I studied and succeeded for three years at the university level..

After that, I came to Canada in 1989, where I studied more software
computing and network administration in Quebec, and then I worked as a
network administrator for many years. Around 2001 and 2002 I started to
implement some of my software, such as PerlZip, which resembled PkZip
from the PKWARE software company, but I implemented it for Perl, and I
implemented the Dynamic Link Libraries of my PerlZip, which permit
compressing and decompressing etc., with the Delphi compiler, so my
PerlZip software product was very fast and very efficient. In 2002 I
posted the Beta version on the internet, and as a proof, please read
about it here:

http://computer-programming-forum.com/52-perl-modules/ea157f4a229fc720.htm

And after that I sold the release version of my PerlZip product to
many, many companies and to many individuals around the world, and I
even sold it to many banks in Europe, and with that I made more money.

And after that I started to work as a software developer consultant and
a network administrator. My company in Quebec (Canada) was and is called
CyberNT Communications, and through it I worked as a software developer
and as a network administrator; read the proof here:

https://opencorporates.com/companies/ca_qc/2246777231

Also read the following part of a somewhat old O'Reilly book called Perl for System Administration by David N. Blank-Edelman, and you will notice that it contains my name and speaks about some of my Perl modules:

https://www.oreilly.com/library/view/perl-for-system/1565926099/ch04s04.html


And here is one of my new software projects: my powerful Parallel Compression Library, which was updated to version 4.4.

You can download it from:

https://sites.google.com/site/scalable68/parallel-compression-library


And read more about it below:


Author: Amine Moulay Ramdane

Description:

Parallel Compression Library implements parallel LZ4, parallel LZMA, and parallel Zstd algorithms using my Thread Pool Engine.

- It supports memory streams, file streams, and files

- 64-bit support - lets you create archive files over 4 GB; it supports archives up to 2^63 bytes and compresses and decompresses files up to 2^63 bytes

- Parallel compression and parallel decompression are extremely fast

- It now supports processor groups on Windows, so it can use more than 64 logical processors, and it scales well

- It is NUMA-aware and NUMA-efficient on Windows (it parallelizes the reads and writes across NUMA nodes)

- It efficiently minimizes contention so that it scales well

- It supports both compression and decompression rate indicators

- You can test the integrity of your compressed file or stream

- It is thread-safe, which means the methods can be called from multiple threads

- Easy programming interface

- Full source code available.

My Parallel Compression Library is now optimized for NUMA (it parallelizes the reads and writes across NUMA nodes), it supports processor groups on Windows, and it uses only two threads for the I/O (which do not contend with each other), so it reduces contention as much as possible and scales well. The process of calculating the CRC is also much more optimized and fast, and the process of testing the integrity is fast.

I have done a quick calculation of the scalability prediction for my Parallel Compression Library, and I think it is good: it can scale beyond 100X on NUMA systems.

The Dynamic Link Libraries for Windows and the dynamic shared libraries for Linux containing the compression and decompression algorithms of my Parallel Compression Library and of my Parallel Archiver were compiled from C with optimization level 2 enabled, so they are very fast.

Here are the parameters of the constructor:

The first parameter is the number of cores you specify to run the compression algorithm in parallel.

The second parameter, processorgroups, is a boolean that enables support for processor groups on Windows; if it is set to true, it will enable you to scale beyond 64 logical processors, and it will be NUMA-efficient.
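The general technique described above (a constructor taking a core count, with independent chunks compressed in parallel by a thread pool) can be sketched in Python. This is only an illustrative sketch using the standard zlib module, not the actual Delphi library; the class name ParallelCompressor and the chunk_size parameter are my own assumptions, and the processor-groups boolean has no portable equivalent here:

```python
# Hypothetical sketch of parallel block compression, NOT the actual
# library: each fixed-size chunk is compressed independently by a
# thread pool, mirroring the "number of cores" constructor parameter.
import zlib
from concurrent.futures import ThreadPoolExecutor

class ParallelCompressor:
    def __init__(self, cores=4, chunk_size=1 << 20):
        # 'cores' mirrors the first constructor parameter described
        # above; 'chunk_size' is an assumption of this sketch.
        self.cores = cores
        self.chunk_size = chunk_size

    def compress(self, data: bytes) -> list:
        # Split the input into fixed-size chunks and compress them
        # in parallel; each block is an independent zlib stream.
        chunks = [data[i:i + self.chunk_size]
                  for i in range(0, len(data), self.chunk_size)]
        with ThreadPoolExecutor(max_workers=self.cores) as pool:
            return list(pool.map(zlib.compress, chunks))

    def decompress(self, blocks: list) -> bytes:
        # Decompress the independent blocks in parallel, then rejoin
        # them in their original order.
        with ThreadPoolExecutor(max_workers=self.cores) as pool:
            return b"".join(pool.map(zlib.decompress, blocks))

data = b"example payload " * 100000
pc = ParallelCompressor(cores=4, chunk_size=65536)
blocks = pc.compress(data)
assert pc.decompress(blocks) == data
```

Because the blocks are independent, compression throughput can grow with the number of workers; the price is a slightly worse compression ratio than one single stream over the whole input.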

Just look at the Easy Compression Library, for example; as you may have noticed, it is not a parallel compression library:

http://www.componentace.com/ecl_features.htm

And look at its pricing:

http://www.componentace.com/order/order_product.php?id=4

My parallel compression library costs you $0, and it is a parallel compression library..

Also, I am an inventor of many scalable algorithms; read my following thoughts to notice it:

Here is my other new invention..

As you have noticed, I have just implemented my EasyList here:

https://sites.google.com/site/scalable68/easylist-for-delphi-and-freepascal

But I have just enhanced its algorithm to be scalable in the Add() method and in the search methods. That is not all: I will use for that my new invention, generally scalable counting networks. Its parallel sort algorithm will also become much more scalable, because I will use for it my other invention, my fully scalable Threadpool, and it will use a fully scalable parallel merging algorithm. Read below about my new invention of generally scalable counting networks:

Here is my previous new invention of a scalable algorithm:

I have just read the following PhD paper about the invention called counting networks, which are better than software combining trees:

Counting Networks

http://people.csail.mit.edu/shanir/publications/AHS.pdf

And i have read the following PhD paper:

http://people.csail.mit.edu/shanir/publications/HLS.pdf

So, as you can see, they say in the conclusion that:

"Software combining trees and counting networks which are the only techniques we observed to be truly scalable"

But I have just found that this counting networks algorithm is not generally scalable, and I have the logical proof of it; this is why I have come up with a new invention that enhances the counting networks algorithm to be generally scalable. And I think I will sell my new algorithm of generally scalable counting networks to Microsoft or Google or Embarcadero or similar software companies.

So you have to be careful with the current counting networks algorithm, which is not generally scalable.
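For readers unfamiliar with counting networks: the building block in the papers above is the balancer, a two-input, two-output toggle that routes incoming tokens alternately to its top and bottom output wires, so the output counts always satisfy the step property (they differ by at most one, top wire first). A minimal sketch of a single balancer, which by itself forms a width-2 counting network, can be written in Python:

```python
# Minimal sketch of a balancer, the building block of counting
# networks (Aspnes, Herlihy, Shavit). A single balancer is the
# width-2 counting network.
import threading

class Balancer:
    """A two-wire toggle that sends incoming tokens alternately
    to its top (0) and bottom (1) output wires."""
    def __init__(self):
        self._toggle = True          # True -> next token goes up
        self._lock = threading.Lock()

    def traverse(self) -> int:
        # Atomically flip the toggle and route the token.
        with self._lock:
            up = self._toggle
            self._toggle = not self._toggle
            return 0 if up else 1

# Step property: after n tokens, the top wire has ceil(n/2)
# tokens and the bottom wire has floor(n/2).
b = Balancer()
counts = [0, 0]
for _ in range(7):
    counts[b.traverse()] += 1
assert counts == [4, 3]
```

In a real counting network such as Bitonic[w], many such balancers are wired together so that concurrent threads touch different toggles, which spreads contention instead of funneling every thread through one shared counter; the scalability debate in this post is about how far that spreading helps in general.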

My other new invention is my scalable reference counting, and here it is:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

And my other new invention is my scalable Fast Mutex, which is really powerful, and here it is:

About fair and unfair locking..

I have just read the following from a lead engineer at Amazon:

Highly contended and fair locking in Java

https://brooker.co.za/blog/2012/09/10/locking.html

So, as you can see, you can use unfair locking, which can have starvation, or fair locking, which is slower than unfair locking.

I think that Microsoft synchronization objects like the Windows critical section use unfair locking, so they can still have starvation.

But I think that this is not the good way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful: with my Fast Mutex you are able to tune the fairness of the lock, and my Fast Mutex is capable of more than that. Read about it in my following thoughts:

More about research and software development..

I have just looked at the following new video:

Why is coding so hard...

https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work:

I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations; I am also inventing effective abstractions. I will give you an example:

Read the following from the senior research scientist called Dave Dice:

Preemption tolerant MCS locks

https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you can see, he is trying to invent a new lock that is preemption-tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and much, much better, and I think mine is the "best"; I think you will not find it anywhere else. My new Fast Mutex has the following characteristics:

1- Starvation-free
2- Tunable fairness
3- It keeps its cache-coherence traffic very low
4- Very good fast-path performance
5- And it has good preemption tolerance.
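The post does not include the Fast Mutex source, but the idea of tunable fairness (characteristic 2) can be illustrated with a sketch of my own design, not the author's actual algorithm: a lock that normally lets threads barge in (fast but unfair), yet performs a strict FIFO handoff to the oldest waiter on every k-th release, where k is the fairness knob. Smaller k means fairer and starvation is bounded; larger k means faster:

```python
# Illustrative sketch of a lock with a tunable fairness knob.
# This is NOT the author's Fast Mutex; it is one common way to
# blend barging (fast, unfair) with periodic FIFO handoff (fair).
import threading
from collections import deque

class TunableFairnessLock:
    def __init__(self, handoff_every=4):
        self._cond = threading.Condition()
        self._locked = False
        self._waiters = deque()      # thread ids, FIFO order
        self._releases = 0
        self._handoff_every = handoff_every
        self._next = None            # thread granted a direct handoff

    def acquire(self):
        me = threading.get_ident()
        with self._cond:
            self._waiters.append(me)
            # Wait while the lock is held, or while it has been
            # handed off to a different thread.
            while self._locked or (self._next is not None
                                   and self._next != me):
                self._cond.wait()
            self._waiters.remove(me)
            self._next = None
            self._locked = True

    def release(self):
        with self._cond:
            self._locked = False
            self._releases += 1
            # Every k-th release: strict FIFO handoff to the oldest
            # waiter; otherwise any woken thread may barge in.
            if self._waiters and self._releases % self._handoff_every == 0:
                self._next = self._waiters[0]
            self._cond.notify_all()

# Usage: mutual exclusion holds whatever the fairness setting.
lock = TunableFairnessLock(handoff_every=2)
counter = 0

def work():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 4000
```

A production-quality version would avoid the global condition variable (whose notify_all causes exactly the cache-coherence traffic that characteristic 3 is about) by using per-thread flags, as MCS-style queue locks do; this sketch only shows the fairness knob itself.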

This is how I am an "inventor", and I have also invented other scalable algorithms, such as a scalable reference counting with efficient support for weak references, a fully scalable Threadpool, and a fully scalable FIFO queue, as well as other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or to Google or Embarcadero or similar software companies.

Thank you,
Amine Moulay Ramdane.


Bonita Montero

Mar 1, 2020, 1:57:15 PM
> You have to understand my way of doing, I have gotten my Diploma in
> Microelectronics and informatics in 1988, it is not a college level
> diploma, my Diploma is a university level Diploma, it looks like an
> Associate degree or the french DEUG.

You certainly don't have either diploma. You regularly post
articles whose content you really didn't think through in
the final analysis. For example, you claimed twice that certain CPUs
with relaxed memory ordering would have a performance disadvantage
compared to CPUs with TSO. The comparison could only be made if the
RMO CPU had an equivalent CPU with TSO. Someone who really had your two
degrees would not make this mistake.

amin...@gmail.com

Mar 1, 2020, 4:18:09 PM
I think that you are not thinking correctly.

Because look at what the author of Cocoa Programming Developer's Handbook, who is called David Chisnall, says about CPUs:

"The performance gain from allowing memory reordering is small, and it doesn't make up for the extra headaches that come from difficult-to-find failures."

Read more here:

https://www.informit.com/articles/article.aspx?p=1676714&seqNum=5


And here is the author of the quote above, David Chisnall:

https://www.cl.cam.ac.uk/~dc552/



As you can see, David Chisnall is a Principal Researcher in the Confidential Computing Group at Microsoft Research Cambridge, where he works at the intersection of computer architecture, operating systems, programming language design, and security.

So, as you can see, he is better than you.

So whom do we have to believe? Bonita Montero or David Chisnall?

I think that you are too arrogant, Bonita Montero.

Bonita Montero

Mar 1, 2020, 4:20:15 PM
> I think that you are not thinking correctly.

I said what you quoted, and that it was impossible to estimate.
And since you can't make a specific statement about it, one
can already doubt your alleged qualification.