
A more precise proof...


Ramine

Apr 18, 2016, 11:37:28 AM
Hello,

I wrote this in my proof:

"If the serial part of Amdahl's equation is bigger and the system makes
the chance higher to escape contention when we test and analyze with
fewer threads and fewer cores with the USL methodology, that means the
chance is higher that the next step, with 2X, 3X, 4X, 5X or even 6X the
number of cores and threads, will be a better approximation of the case
where we test and analyze with fewer threads and fewer cores with the
USL methodology."
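The two laws discussed here can be written out as a minimal sketch (my
programs are in Delphi/FreePascal; this is Python, and the serial
fraction and USL coefficients below are illustrative assumptions, not
measurements):

```python
def amdahl_speedup(n, serial_fraction):
    """Amdahl's law: speedup on n cores when serial_fraction of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def usl_capacity(n, alpha, beta):
    """Gunther's USL: relative capacity at n cores, with contention (alpha)
    and coherency-delay (beta) coefficients."""
    return n / (1.0 + alpha * (n - 1) + beta * n * (n - 1))

# Compare the two models across the 2X..6X steps discussed above.
for n in (1, 2, 4, 8, 16, 32):
    print(n, round(amdahl_speedup(n, 0.1), 2), round(usl_capacity(n, 0.1, 0.001), 2))
```

Unlike Amdahl's law, the USL curve can bend back downward at high n
because of the beta (coherency) term.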


Let me explain: if the serial part of Amdahl's law is bigger and the
system makes the chance higher to escape contention, that is, if the
parallel part of Amdahl's law is variable so that you escape contention
more often at the fewer cores and fewer threads with which you test
using the USL methodology and nonlinear regression, then there is a much
higher chance that the system is organized in such a manner that the
next steps, at 2X, 3X, 4X up to nX the number of cores and threads, will
be the right approximation. This is probabilistic, and since this
outcome is much more likely to happen, it keeps scalability forecasting
with the USL methodology open: you can forecast scalability effectively
with the USL methodology, and that means the USL methodology is a great
and amazing tool!

But if the serial part of Amdahl's equation is bigger, there is more
chance to hit contention at the fewer cores and fewer threads with which
you test and analyze with the USL methodology, so this will allow the
USL methodology to forecast scalability farther. Even if the parallel
part of Amdahl's law is variable, there is a lower chance that the
empirical performance data escapes contention; and when there is a lower
chance to escape contention, the nonlinear regression of the USL will
hit the contention and will thus be able to predict the scalability with
a good approximation.
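The nonlinear-regression step mentioned above can be sketched as
follows; this is a minimal illustration using SciPy's `curve_fit`, and
the throughput numbers are synthetic, not real measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def usl(n, alpha, beta):
    # Gunther's Universal Scalability Law: relative throughput at n threads.
    return n / (1.0 + alpha * (n - 1) + beta * n * (n - 1))

# Synthetic throughput measurements at few cores/threads (illustrative numbers).
n_measured = np.array([1, 2, 4, 6, 8])
throughput = np.array([1.0, 1.92, 3.55, 4.9, 6.0])

# Nonlinear regression for the contention (alpha) and coherency (beta) terms.
(alpha, beta), _ = curve_fit(usl, n_measured, throughput,
                             p0=[0.05, 0.001], bounds=(0, 1))
print(f"alpha={alpha:.4f}, beta={beta:.5f}")

# Forecast scalability at 2X..6X the measured maximum, as described above.
for factor in (2, 3, 4, 5, 6):
    n = factor * n_measured.max()
    print(f"{factor}X ({n} threads): predicted speedup {usl(n, alpha, beta):.2f}")
```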

But when the serial part of Amdahl's law is smaller, the chance is
higher to escape contention when we test and analyze with fewer threads
and fewer cores with the USL methodology, so this will allow the USL
methodology to forecast scalability farther.

So in my opinion the USL methodology is able to forecast scalability
farther; it is a success and a great and amazing tool!

In this post you have seen me give a proof that the USL methodology
works.

But I also think we can be confident in the USL methodology from
Dr. Gunther, because Dr. Gunther is an expert who knows what he is
doing, so I think the USL methodology works well and is a great tool
that can predict scalability.

Here is the website of Dr. Gunther, the author of the USL methodology:

http://www.perfdynamics.com/

And you can read about it here:

http://www.perfdynamics.com/Manifesto/USLscalability.html


I have included the 32-bit and 64-bit Windows executables of my programs
inside the zip file to make the job easier for you.

You can download my USL programs version 3.0 with the source code from:

https://sites.google.com/site/aminer68/universal-scalability-law-for-delphi-and-freepascal


Thank you,
Amine Moulay Ramdane.







Ramine

Apr 24, 2016, 2:03:52 PM
Hello,


If in a parallel program the locked region is 1/8 of the parallel
region, then at fewer cores and fewer threads the USL methodology can
fail to give a good approximation of the scalability. But
probabilistically, these cases constitute a much, much smaller part of
the chance of missing the possibility of forecasting correctly, and
since testing a database system or a parallel compression program will
not give the right and exact solution that efficiently optimizes the
cost criterion, we can consider those cases benign; they are part of the
hazards of this world.


So I repeat:

Because in the USL methodology the much, much greater part of the
chance, probabilistically, will hit and give us the possibility of
forecasting up to 10X the maximum number of cores and threads of the
performance-data measurements, it is a better approximation.

And only a much, much smaller part of the chance, probabilistically,
will hit and limit us to forecasting up to 5X the maximum number of
cores and threads of the performance-data measurements.

So forecasting up to 10X the maximum number of cores and threads of the
performance-data measurements is a good approximation with the USL
methodology. So if you want to optimize the cost criterion, you have to
forecast up to 10X the maximum number of cores and threads of the
performance-data measurements and look at the tendency. If it says that
you can scale more and more, for example on a NUMA architecture, then
when you want to buy bigger NUMA systems, make sure you buy them with
the right configuration that permits adding more processors and more
memory. Go step by step, buying more and more processors and memory, and
at each step you will be able to empirically test again, with my USL
programs, the NUMA computer system that you have bought, to forecast the
scalability farther again and optimize the cost criterion further. So as
you have noticed, my USL programs are great and important tools!
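Checking that tendency at 10X the measured maximum can be sketched like
this (the fitted coefficients here are assumed for illustration): the
USL throughput peaks at N* = sqrt((1 - alpha) / beta), so extrapolating
the fitted model shows whether throughput is still rising at the 10X
point.

```python
import math

def usl(n, alpha, beta):
    # Gunther's USL: relative throughput at n processors.
    return n / (1.0 + alpha * (n - 1) + beta * n * (n - 1))

# Illustrative fitted coefficients (assumptions, not real measurements).
alpha, beta = 0.02, 0.00003
n_max_measured = 16

# The USL throughput peaks at N* = sqrt((1 - alpha) / beta).
n_star = math.sqrt((1.0 - alpha) / beta)
print(f"predicted peak concurrency: {n_star:.0f}")

# Tendency at 10X the measured maximum: still rising, or past the peak?
n_forecast = 10 * n_max_measured
if n_forecast < n_star:
    print(f"at {n_forecast} processors throughput is still rising: "
          f"{usl(n_forecast, alpha, beta):.1f}")
else:
    print(f"at {n_forecast} processors you are past the peak ({n_star:.0f})")
```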