
More of my philosophy about the evolution of human genetics and about the genetic algorithm and more of my thoughts..


Amine Moulay Ramdane

Dec 23, 2022, 4:53:14 PM
Hello,


More of my philosophy about the evolution of human genetics and about the genetic algorithm and more of my thoughts..

I am a white Arab from Morocco, and I think I am smart since I have also
invented many scalable algorithms and other algorithms..


The cost function of a neural network is in general neither convex nor concave, so in deep learning you can use evolutionary algorithms such as the genetic algorithm or PSO and the like. In such situations you have to loop over a number of iterations so as to find better and better solutions. I think the genetics of humans has evolved in a similar way: the great number of iterations of crossover steps, mutations and selection in the process of evolution of human genetics looks like a genetic algorithm, and it is what made humans so "optimized", for example by having a smart brain. And of course you have to read my following thoughts so that you understand the rest of the patterns that I have discovered with my fluid intelligence:

More precision about my philosophy concerning the Traveling Salesman Problem using an evolutionary algorithm, and more of my thoughts..

I invite you to look at the following interesting new article
from Visual Studio Magazine about the Traveling Salesman Problem using an evolutionary algorithm with C#:

https://visualstudiomagazine.com/articles/2022/12/20/traveling-salesman-problem.aspx


I think I am highly smart, and I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. I have just rapidly understood the above program that solves the Traveling Salesman Problem using an evolutionary algorithm (a genetic algorithm) with C#, and I think that I am discovering the most important patterns with my fluid intelligence in it: the "crossover" steps of the genetic algorithm exploit the better solutions, which means that they search locally around them, while the "mutation(s)" let you explore far away from that local region. If the exploration finds a better solution, the exploitation will then try to find an even better solution near the one found by the exploration. This way of balancing exploration and exploitation is what makes the genetic algorithm interesting, so you have to understand it correctly in order to understand the genetic algorithm.
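To make this explore/exploit pattern concrete, here is a minimal FreePascal sketch of a genetic algorithm on a toy problem (maximizing the number of 1 bits in a bitstring), and it is not the C# program from the article above: single-point crossover exploits the two selected parents locally, while bit-flip mutation explores away from them. The population size, mutation rate and fitness function are illustrative assumptions of mine.

program GASketch;
{$mode objfpc}
{ Minimal genetic algorithm sketch (not the C# program from the article):
  crossover exploits the parents locally, mutation explores away from them.
  Toy fitness = number of 1 bits (OneMax); all parameters are illustrative. }

const
  PopSize = 20;
  GenomeLen = 32;
  Generations = 100;
  MutationRate = 0.02;

type
  TGenome = array[0..GenomeLen-1] of Integer;
  TPop = array[0..PopSize-1] of TGenome;

var
  Pop, NewPop: TPop;
  g, i, j, a, b, cut, best: Integer;

function Fitness(const G: TGenome): Integer;
var k: Integer;
begin
  Result := 0;
  for k := 0 to GenomeLen-1 do
    Result := Result + G[k];
end;

function Tournament: Integer;  { selection: pick the better of two random individuals }
var x, y: Integer;
begin
  x := Random(PopSize);
  y := Random(PopSize);
  if Fitness(Pop[x]) >= Fitness(Pop[y]) then Result := x else Result := y;
end;

begin
  Randomize;
  for i := 0 to PopSize-1 do
    for j := 0 to GenomeLen-1 do
      Pop[i][j] := Random(2);

  for g := 1 to Generations do
  begin
    for i := 0 to PopSize-1 do
    begin
      a := Tournament;
      b := Tournament;
      cut := Random(GenomeLen);            { crossover: exploit the two parents }
      for j := 0 to GenomeLen-1 do
        if j < cut then NewPop[i][j] := Pop[a][j]
        else NewPop[i][j] := Pop[b][j];
      for j := 0 to GenomeLen-1 do         { mutation: explore away from them }
        if Random < MutationRate then NewPop[i][j] := 1 - NewPop[i][j];
    end;
    Pop := NewPop;
  end;

  best := 0;
  for i := 1 to PopSize-1 do
    if Fitness(Pop[i]) > Fitness(Pop[best]) then best := i;
  Writeln('Best fitness after ', Generations, ' generations: ',
          Fitness(Pop[best]), ' / ', GenomeLen);
end.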

More of my philosophy about non-linear regression and about logic and about technology and more of my thoughts..


I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above". I think that R-squared is invalid for non-linear regression, but something that looks like R-squared for non-linear regression is the relative standard error, that is, the standard deviation of the mean of the sample (the standard error of the mean) divided by the estimate, which is the mean of the sample. If you calculate just the standard error of the estimate (from the mean squared error), it is not sufficient, since you have to know how large that standard error is relative to the curve and its axes, so read my following thoughts so that you understand more:

So R-squared is invalid for non-linear regression, so you have to use the standard error of the estimate (from the mean squared error), and of course you also have to calculate the relative standard error, that is, the standard error of the mean divided by the estimate, which is the mean of the sample, and I think that the relative standard error is an important thing that brings more quality to the statistical calculations. I will now tell you more about my interesting software project for mathematics: my new software project uses artificial intelligence to implement a generalized way of solving the non-linear "multiple" regression, and it is much more powerful than the Levenberg–Marquardt algorithm, since I am implementing a smart algorithm using artificial intelligence that avoids premature convergence, which is also one of the most important things. It will also be much more scalable, using multicores so as to search for the global optimum much faster with artificial intelligence. I am doing it this way so as to be professional, and I will give you a tutorial that explains my algorithms that use artificial intelligence so that you can learn from them. Of course, the software will automatically calculate the above standard error of the estimate and the relative standard error.
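To make these two quantities concrete, here is a small FreePascal sketch with made-up numbers, just to illustrate the formulas (it is not part of the software project described above): the standard error of the estimate computed from the residuals of a fit, and the relative standard error computed as the standard error of the mean divided by the mean.

program ErrorMeasures;
{$mode objfpc}
{ Illustrative sketch: standard error of the estimate (from fit residuals)
  and relative standard error (standard error of the mean / mean).
  The data below are made-up example values. }
uses SysUtils, Math;

const
  n = 6;
  y:    array[0..n-1] of Double = (2.1, 3.9, 6.2, 7.8, 10.1, 12.2);
  yhat: array[0..n-1] of Double = (2.0, 4.0, 6.0, 8.0, 10.0, 12.0);  { fitted values }

var
  i: Integer;
  sse, see, meanY, s, sem, rse: Double;

begin
  { Standard error of the estimate: sqrt( sum of squared residuals / n ) }
  sse := 0.0;
  for i := 0 to n-1 do
    sse := sse + Sqr(y[i] - yhat[i]);
  see := Sqrt(sse / n);

  { Relative standard error: (sample std dev / sqrt(n)) / mean, in percent }
  meanY := Mean(y);
  s := StdDev(y);           { sample standard deviation from the Math unit }
  sem := s / Sqrt(n);
  rse := 100.0 * sem / meanY;

  Writeln('Standard error of the estimate : ', see:0:4);
  Writeln('Relative standard error (%)    : ', rse:0:2);
end.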

More of my philosophy about non-linear regression and more..

I think I am really smart, and I have also just quickly finished the software implementation of the Levenberg–Marquardt algorithm and of the simplex algorithm to solve non-linear least squares problems, and I will soon implement a generalized way, with artificial intelligence, of solving the non-linear "multiple" regression in the software. I have also noticed that in mathematics you have to take care of the variability of the y values in non-linear least squares problems when you approximate. The Levenberg–Marquardt algorithm (LMA or just LM) that I have just implemented, also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The Levenberg–Marquardt algorithm is used in many software applications for solving generic curve-fitting problems. It was found to be an efficient, fast and robust method which also has a good global convergence property. For these reasons, it has been incorporated into many good commercial packages performing non-linear regression. But my way of implementing the non-linear "multiple" regression in the software will be much more powerful than the Levenberg–Marquardt algorithm, and of course I will share with you many parts of my software project, so stay tuned!
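The full Levenberg–Marquardt algorithm works with a Jacobian matrix and a damped normal-equations system; the following is only a minimal one-parameter FreePascal sketch of the damping idea, fitting the model y = exp(b*x) to made-up data points. It is an illustration of the principle, not the implementation mentioned above and not a general-purpose solver.

program TinyLM;
{$mode objfpc}
{ One-parameter Levenberg-Marquardt sketch for the model y = exp(b*x).
  Illustrates the damped step only; data and starting values are made up. }
uses SysUtils, Math;

const
  n = 5;
  x: array[0..n-1] of Double = (0.0, 0.5, 1.0, 1.5, 2.0);
  y: array[0..n-1] of Double = (1.0, 1.28, 1.65, 2.12, 2.72);  { roughly exp(0.5*x) }

function SSE(b: Double): Double;
var i: Integer;
begin
  Result := 0.0;
  for i := 0 to n-1 do
    Result := Result + Sqr(y[i] - Exp(b * x[i]));
end;

var
  b, lambda, jtj, jtr, delta, bNew: Double;
  i, iter: Integer;
begin
  b := 0.0;            { starting guess }
  lambda := 0.001;     { damping factor }
  for iter := 1 to 50 do
  begin
    jtj := 0.0;
    jtr := 0.0;
    for i := 0 to n-1 do
    begin
      { Jacobian of the model with respect to b is x*exp(b*x) }
      jtj := jtj + Sqr(x[i] * Exp(b * x[i]));
      jtr := jtr + x[i] * Exp(b * x[i]) * (y[i] - Exp(b * x[i]));
    end;
    delta := jtr / (jtj + lambda);   { damped Gauss-Newton step }
    bNew := b + delta;
    if SSE(bNew) < SSE(b) then
    begin
      b := bNew;
      lambda := lambda / 10.0;       { good step: trust the model more }
    end
    else
      lambda := lambda * 10.0;       { bad step: increase the damping }
  end;
  Writeln('Estimated b = ', b:0:4, '   (data generated near b = 0.5)');
end.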


More of my philosophy about the truth table of the logical implication and about automation and about artificial intelligence and more of my thoughts..


I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above". Now I will ask a philosophical question:

What is a logical implication in mathematics?

So I think I have to discover patterns with my fluid intelligence in the following truth table of the logical implication:

p   q   p -> q
0   0     1
0   1     1
1   0     0
1   1     1

Note that p and q are logical variables and the symbol -> is the logical implication.

And here are the patterns that I am discovering with my fluid intelligence, which permit us to understand the logical implication in mathematics:

So notice in the above truth table of the logical implication
that p equal 0 can imply both q equal 0 and q equal 1, so for
example it can model the following cases in reality:

If it doesn't rain (p equal 0), it can be that you take your umbrella or that you don't take it, so both q equal 0 and q equal 1 give a true implication; for example you can still take your umbrella, since another logical variable can say that it may rain in the future. So, as you notice, the truth table permits us to model cases of reality, and the implication is not causation. It is the same for the case in the above truth table where p equal 1: if it rains in the present (p equal 1), you have to take your umbrella, even if another logical variable says that it will not rain in the future; and that is why, in the above truth table, p equal 1 with q equal 0 makes the implication false, while p equal 1 with q equal 1 makes it true. So then of course I say that
the truth table of the implication also permits us to model the case of causation, and it is why it is working.

More of my philosophy about objective truth and subjective truth and more of my thoughts..

Today I will use my fluid intelligence to explain more about the way of logic, and I will discover patterns with my fluid intelligence so as to explain it, so I will start by asking the following philosophical question:

What is objective truth and what is subjective truth?

So for example when we look at the following equality: a + a = 2*a,
it is an objective truth, since it can be made an acceptable general truth. So then I can say that objective truth is a truth that can be made an acceptable general truth, and subjective truth is a truth that cannot be made an acceptable general truth; for example, saying that Jeff Bezos is the best human among humans is a subjective truth. I can also say that in mathematics we are using the rules of logic so as to logically prove whether a theorem or the like is true or not, so notice the following truth table of the logical implication:

p   q   p -> q
0   0     1
0   1     1
1   0     0
1   1     1

Note that p and q are logical variables and the symbol -> is the logical implication.

The above truth table of the logical implication permits us
to logically infer a rule that is so important in mathematical logic, and it is the following:

(p implies q) is equivalent to ((not p) or q)


And of course we are using this rule in logical proofs, since
we are modeling with the whole truth table of the
logical implication, which includes the case of causation in it,
and that is why it is working.

And I think that the above rule is the most important rule in mathematics for proving the following kinds of logical equivalences:

(p -> q) is equivalent to (not(q) -> not(p))

Note: the symbol -> means implies and p and q are logical
variables.

or

(not(p) -> 0) is equivalent to p
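And here is a small FreePascal sketch, just as an illustration of mine, that exhaustively checks these equivalences (together with the rule that (p -> q) is equivalent to ((not p) or q)) over all values of p and q, with the implication defined from its truth table:

program ImplicationRules;
{$mode objfpc}
{ Exhaustively check the implication rules over all boolean values. }

function Implies(p, q: Boolean): Boolean;
begin
  { The only false row of the truth table is p = 1, q = 0 }
  Result := not (p and (not q));
end;

var
  p, q, ok: Boolean;
begin
  ok := True;
  for p := False to True do
    for q := False to True do
    begin
      { (p -> q)  is equivalent to  ((not p) or q) }
      if Implies(p, q) <> ((not p) or q) then ok := False;
      { (p -> q)  is equivalent to  (not q -> not p)   (contrapositive) }
      if Implies(p, q) <> Implies(not q, not p) then ok := False;
      { (not p -> false)  is equivalent to  p }
      if Implies(not p, False) <> p then ok := False;
    end;
  if ok then
    Writeln('All three equivalences hold for every p and q.')
  else
    Writeln('Mismatch found.');
end.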


And for fuzzy logic, here is the generalized form (which includes fuzzy logic) of the three operators AND, OR, NOT:

x AND y is equivalent to min(x,y)
x OR y is equivalent to max(x,y)
NOT(x) is equivalent to (1 - x)
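And here is a small FreePascal sketch of these generalized operators (the membership values are made-up examples of mine); notice that when the values are restricted to 0 and 1 they behave exactly like the ordinary boolean AND, OR and NOT:

program FuzzyOps;
{$mode objfpc}
{ Generalized logical operators: AND = min, OR = max, NOT = 1 - x.
  With values restricted to 0 and 1 they behave like ordinary boolean logic. }
uses Math;

function FuzzyAnd(x, y: Double): Double; begin Result := Min(x, y); end;
function FuzzyOr (x, y: Double): Double; begin Result := Max(x, y); end;
function FuzzyNot(x: Double): Double;    begin Result := 1.0 - x;   end;

begin
  { Crisp case: behaves like the boolean truth tables }
  Writeln('1 AND 0 = ', FuzzyAnd(1.0, 0.0):0:2);
  Writeln('1 OR 0  = ', FuzzyOr(1.0, 0.0):0:2);
  Writeln('NOT 1   = ', FuzzyNot(1.0):0:2);
  { Fuzzy case: degrees of truth between 0 and 1 (made-up membership values) }
  Writeln('0.7 AND 0.4 = ', FuzzyAnd(0.7, 0.4):0:2);
  Writeln('0.7 OR 0.4  = ', FuzzyOr(0.7, 0.4):0:2);
  Writeln('NOT 0.7     = ', FuzzyNot(0.7):0:2);
end.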

So now you understand that media outlets like CNN have to be objective, by seeking to attain the objective truth, so that democracy works correctly.


More of my philosophy about ChatGPT unbelievable AI Progress and more of my thoughts..


I think I am highly smart, and I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. Now I invite you to look at the following really interesting video by an engineer called Anastasi about ChatGPT's unbelievable AI progress, and you have to look at it carefully, so here it is:

ChatGPT: Unbelievable AI Progress !

https://www.youtube.com/watch?v=WkjgIEheDFI


And of course you can read my thoughts about artificial intelligence, about economies of scale, about agile methodology etc. at the following web link:

https://groups.google.com/g/alt.culture.morocco/c/gxGMjRUmTr0


More of my philosophy about the German model and about price elasticity of demand and more of my thoughts..


As I have just said, in economics the demand is very elastic when the demand is very sensitive to a change in price. For example, the price elasticity of demand can be calculated as the percentage change in quantity divided by the percentage change in price, and it permits us to have a better view of it. So, as you notice, you can also lower the price of a product or service through economies of scale and through automation and artificial intelligence so as to attract customers, as I explain in my thoughts below. But of course you also have to notice how the German model is about high quality, and here is what I say about it, right after the following small example of the elasticity formula:
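Here is that worked example in FreePascal (the price and quantity numbers are made-up assumptions, not data from my thoughts): a price cut of 10% that raises the quantity demanded by 20% gives an elasticity of -2, meaning the demand is elastic.

program PriceElasticity;
{$mode objfpc}
{ Price elasticity of demand = (% change in quantity) / (% change in price).
  Example numbers are made up: price falls from 10 to 9, quantity rises 100 -> 120. }
var
  p0, p1, q0, q1, pctPrice, pctQty, elasticity: Double;
begin
  p0 := 10.0;  p1 := 9.0;
  q0 := 100.0; q1 := 120.0;

  pctPrice := 100.0 * (p1 - p0) / p0;   { -10 % }
  pctQty   := 100.0 * (q1 - q0) / q0;   { +20 % }
  elasticity := pctQty / pctPrice;      { -2.0 : demand is elastic since |e| > 1 }

  Writeln('Percentage change in price    : ', pctPrice:0:1, ' %');
  Writeln('Percentage change in quantity : ', pctQty:0:1, ' %');
  Writeln('Price elasticity of demand    : ', elasticity:0:2);
end.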

Why is Germany so successful in spite of fewer working hours?

So I think the most important factors are the following:

Of course the first factor is that Germany has good schools and vocational training, for everyone. This makes the average worker much more productive in terms of value added per hour.

And the second "really" important factor is the following:

It’s in the culture of Germany to focus on quality and on being effective (all the way back to Martin Luther and his protestant work ethic)... Higher quality in every step of the chain leads to a massive reduction in defects and rework. This increases everyone’s productivity. But notice that in my thoughts below I am also speaking about the other ways to increase productivity, such as specialization etc., and the way of the German model to focus on quality and effectiveness, by focusing on quality in every step of the chain so as to massively reduce defects and rework, is also supported by the methodologies of quality control and Six Sigma and the like, so read my following thoughts about them:

More of my philosophy about quality control and more of my thoughts..

I have just looked at and quickly understood the following paper about SPC (Statistical Process Control):

https://owic.oregonstate.edu/sites/default/files/pubs/EM8733.pdf


I think I am highly smart, but I think that the above paper doesn't speak about the fact that you can apply the central limit theorem as follows:

The central limit theorem states that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal, if the sample size is large enough.

Also the above paper doesn't speak about the following very important things:

And I have quickly understood quality control with SPC (Statistical Process Control), and I have just discovered a smart pattern with my fluid intelligence: with SPC we can debug the process, like in software programming, by looking at its variability. If the variability doesn't follow a normal distribution, it means that there are defects in the process, and we say that there are special causes that produce those defects. If the variability follows a normal distribution, we say that the process is stable and that it has only common causes, and it means that we can control it much more easily by looking at the control charts, which permit us to debug and control the variability, for example by changing the machines or robots and then measuring again with the control charts.
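To illustrate the control-chart idea (this is a generic X-bar chart sketch with made-up subgroup means, not the exact procedure from the above paper), here is how the centre line and the usual 3-sigma control limits can be computed in FreePascal, with points outside the limits flagged as possible special causes:

program XBarChart;
{$mode objfpc}
{ Generic X-bar control chart sketch: centre line and 3-sigma limits
  computed from made-up subgroup means of a process measurement. }
uses Math;

const
  k = 8;  { number of subgroup means }
  xbar: array[0..k-1] of Double = (10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7);

var
  i: Integer;
  centre, sigma, ucl, lcl: Double;
begin
  centre := Mean(xbar);          { centre line = grand mean }
  sigma  := StdDev(xbar);        { spread of the subgroup means }
  ucl := centre + 3.0 * sigma;   { upper control limit }
  lcl := centre - 3.0 * sigma;   { lower control limit }

  Writeln('Centre line : ', centre:0:3);
  Writeln('UCL (+3s)   : ', ucl:0:3);
  Writeln('LCL (-3s)   : ', lcl:0:3);

  { Points outside the limits suggest special causes in the process }
  for i := 0 to k-1 do
    if (xbar[i] > ucl) or (xbar[i] < lcl) then
      Writeln('Subgroup ', i, ' is out of control: ', xbar[i]:0:3);
end.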

More of my philosophy about the Post Graduate Program on lean Six Sigma and more..

More of my philosophy about Six Sigma and more..

I think I am smart, and now I will talk more about Six Sigma,
since I have just talked about SPC (Statistical Process Control), so
you have to know that Six Sigma needs to fulfill the following steps:

1- Define the project goals and customer (external and internal)
deliverables.

2- Measure the process so as to determine current performance and
quantify the problem.

3- Analyze and determine the root cause(s) of the defects.

4- Improve the process by eliminating the defects.

5- Control future performance so that the improved process doesn't degrade.


And you have to know that those steps are also important steps toward attaining ISO 9000 certification, and notice that you can use SPC (Statistical Process Control) and the control charts in the Analyze and Improve steps above.

Other than that, I have just read the following interesting and important paper about SPC (Statistical Process Control) that explains the whole SPC process, so I invite you to read it
carefully:

https://owic.oregonstate.edu/sites/default/files/pubs/EM8733.pdf

So as you notice in the above paper, the central limit theorem
in mathematics is so important, but notice carefully that the necessary and important condition for the central limit theorem to work is that you have to use independent and random variables. Notice also in the above paper that you have to do two things: you have to reduce or eliminate the defects, and you have to control the "variability" of the defects, and this is why the paper is talking about how to construct a control chart. Other than that, the central limit theorem is not only related to SPC (Statistical Process Control), it is also related to PERT and to my PERT++ software project below. Notice that in my software project below, called PERT++, I have provided you with two ways of estimating the critical path: first, by the way of CPM (Critical Path Method), which shows all the arcs of the estimate of the critical path, and second, by the way of the central limit theorem, using the inverse normal distribution function. And you have to provide my software project PERT++ with three types of estimates, which are the following (see also the small sketch right after the three estimates):

Optimistic time - generally the shortest time in which the activity
can be completed. It is common practice to specify optimistic times
to be three standard deviations from the mean so that there is
approximately a 1% chance that the activity will be completed within
the optimistic time.

Most likely time - the completion time having the highest
probability. Note that this time is different from the expected time.

Pessimistic time - the longest time that an activity might require. Three standard deviations from the mean is commonly used for the pessimistic time.
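Given those three estimates, the classical textbook PERT calculation takes the expected time of an activity as (Optimistic + 4*(Most likely) + Pessimistic)/6 and, when the optimistic and pessimistic times are taken at three standard deviations from the mean as above, the standard deviation as (Pessimistic - Optimistic)/6. Here is a minimal FreePascal sketch of those textbook formulas with made-up activity times; it is only an illustration, not code taken from PERT++:

program PertEstimates;
{$mode objfpc}
{ Classical three-point PERT formulas on a made-up chain of activities:
  expected time = (O + 4*M + P)/6, std dev = (P - O)/6, and by the central
  limit theorem the path total is approximately normal with the summed mean
  and summed variance. Illustrative only; not code from PERT++. }

const
  nAct = 3;
  Opt:  array[0..nAct-1] of Double = (2.0, 4.0, 3.0);   { optimistic times }
  Most: array[0..nAct-1] of Double = (4.0, 6.0, 5.0);   { most likely times }
  Pess: array[0..nAct-1] of Double = (8.0, 12.0, 9.0);  { pessimistic times }

var
  i: Integer;
  te, sd, totalMean, totalVar: Double;
begin
  totalMean := 0.0;
  totalVar := 0.0;
  for i := 0 to nAct-1 do
  begin
    te := (Opt[i] + 4.0 * Most[i] + Pess[i]) / 6.0;
    sd := (Pess[i] - Opt[i]) / 6.0;
    Writeln('Activity ', i+1, ': expected = ', te:0:2, '  std dev = ', sd:0:2);
    totalMean := totalMean + te;
    totalVar  := totalVar + sd * sd;
  end;
  Writeln('Path expected duration : ', totalMean:0:2);
  Writeln('Path standard deviation: ', Sqrt(totalVar):0:2);
end.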

And you can download my PERT++ by reading my following thoughts below:

More of my philosophy about the central limit theorem and about my PERT++ and more..

The central limit theorem states that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal, if the sample size is large enough.

How large is "large enough"?

In practice, some statisticians say that a sample size of 30 is large enough when the population distribution is roughly bell-shaped. Others recommend a sample size of at least 40. But if the original population is distinctly not normal (e.g., is badly skewed, has multiple peaks, and/or has outliers), researchers like the sample size to be even larger. So I invite you to read my following thoughts about my software project that is called PERT++, and notice that the PERT networks are referred to by some researchers as "probabilistic activity networks" (PAN) because the durations of some or all of the arcs are independent random variables with known probability distribution functions, and have finite ranges. So PERT uses the central limit theorem (CLT) to find the expected project duration.
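To see the central limit theorem in action, here is a small FreePascal simulation (my own illustration; the skewed exponential distribution and the sample size of 40 are arbitrary choices, not anything from PERT++): it draws many samples, and the means of those samples cluster around the population mean with a spread close to sigma divided by the square root of the sample size, even though the underlying distribution is not normal.

program CLTDemo;
{$mode objfpc}
{ Central limit theorem illustration: sample means of a skewed (exponential)
  distribution cluster around the population mean with spread ~ sigma/sqrt(n).
  The distribution and the sample sizes are arbitrary choices for the demo. }

const
  SampleSize = 40;     { size of each sample }
  NumSamples = 5000;   { number of sample means to draw }

function ExpRandom(lambda: Double): Double;
begin
  { Exponential random variable via inverse transform sampling }
  Result := -Ln(1.0 - Random) / lambda;
end;

var
  i, j: Integer;
  s, m, sumMeans, sumSqMeans, meanOfMeans, sdOfMeans: Double;
begin
  Randomize;
  sumMeans := 0.0;
  sumSqMeans := 0.0;
  for i := 1 to NumSamples do
  begin
    s := 0.0;
    for j := 1 to SampleSize do
      s := s + ExpRandom(1.0);       { population mean = 1, std dev = 1 }
    m := s / SampleSize;             { one sample mean }
    sumMeans := sumMeans + m;
    sumSqMeans := sumSqMeans + m * m;
  end;
  meanOfMeans := sumMeans / NumSamples;
  sdOfMeans := Sqrt(sumSqMeans / NumSamples - Sqr(meanOfMeans));
  Writeln('Mean of the sample means    : ', meanOfMeans:0:3, '  (expected ~1.000)');
  Writeln('Std dev of the sample means : ', sdOfMeans:0:3,
          '  (expected ~', 1.0 / Sqrt(SampleSize):0:3, ')');
end.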

And as you are noticing, this central limit theorem is also so important
for quality control; read the following to notice it (I have also understood Statistical Process Control (SPC)):

An Introduction to Statistical Process Control (SPC)

https://www.engineering.com/AdvancedManufacturing/ArticleID/19494/An-Introduction-to-Statistical-Process-Control-SPC.aspx


So, I have designed and implemented my PERT++, which is important for quality; please read about it and download it from my website here:

https://sites.google.com/site/scalable68/pert-an-enhanced-edition-of-the-program-or-project-evaluation-and-review-technique-that-includes-statistical-pert-in-delphi-and-freepascal

---


So I have provided you in my PERT++ with the following functions:


function NormalDistA (const Mean, StdDev, AVal, BVal: Extended): Single;

function NormalDistP (const Mean, StdDev, AVal: Extended): Single;

function InvNormalDist(const Mean, StdDev, PVal: Extended; const Less: Boolean): Extended;

For NormalDistA() or NormalDistP(), you pass the best estimate of completion time to Mean, and you pass the critical path standard deviation to StdDev, and you will get the probability at the value AVal (NormalDistP) or the probability between the values AVal and BVal (NormalDistA).

For InvNormalDist(), you pass the best estimate of completion time to Mean, and you pass the critical path standard deviation to StdDev, and you will get the length of the critical path of the probability PVal, and when Less is TRUE, you will obtain a cumulative distribution.
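What these functions compute are, in essence, normal-distribution probabilities for the critical path duration. Here is a standalone FreePascal sketch (my own normal CDF approximation with made-up numbers, not the PERT++ code itself and not the functions above) that estimates the probability of finishing by a deadline, given the critical path expected duration and standard deviation and assuming, via the central limit theorem, that the total duration is approximately normal:

program DeadlineProbability;
{$mode objfpc}
{ Standalone sketch: probability that the project finishes by a deadline,
  assuming the critical path duration is approximately normal (CLT).
  Uses the Abramowitz-Stegun approximation of the standard normal CDF.
  The mean, std dev and deadline below are made-up example values. }

function NormalCDF(z: Double): Double;
const
  b1 = 0.319381530; b2 = -0.356563782; b3 = 1.781477937;
  b4 = -1.821255978; b5 = 1.330274429; p = 0.2316419;
var
  t, dens: Double;
begin
  if z < 0.0 then
    Result := 1.0 - NormalCDF(-z)
  else
  begin
    t := 1.0 / (1.0 + p * z);
    dens := Exp(-0.5 * z * z) / Sqrt(2.0 * Pi);
    Result := 1.0 - dens * (((((b5 * t + b4) * t + b3) * t + b2) * t + b1) * t);
  end;
end;

var
  mean, stddev, deadline, z: Double;
begin
  mean := 41.0;      { expected critical path duration (made up) }
  stddev := 3.0;     { critical path standard deviation (made up) }
  deadline := 45.0;  { deadline we want to meet }

  z := (deadline - mean) / stddev;
  Writeln('P(completion <= ', deadline:0:1, ') = ', NormalCDF(z):0:4);
end.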


So, as you are noticing from my above thoughts, since PERT networks are referred to by some researchers as "probabilistic activity networks" (PAN), because the durations of some or all of the arcs are independent random variables with known probability distribution functions and finite ranges, PERT uses the central limit theorem (CLT) to find the expected project duration. So then you have to use my above functions, which are the normal distribution and inverse normal distribution functions; please look at my demo inside my zip file to understand better how I am doing it:

You can download and read about my PERT++ from my website here:

https://sites.google.com/site/scalable68/pert-an-enhanced-edition-of-the-program-or-project-evaluation-and-review-technique-that-includes-statistical-pert-in-delphi-and-freepascal

I think I am smart, and I invite you to read carefully the following webpage of Alan Robinson, Professor of Operations Management at the University of Massachusetts, who is a full-time professor at the Isenberg School of Management of UMass and a consultant and book author specializing in managing ideas (idea generation and idea-driven organizations) and in building high-performance organizations, creativity, innovation, quality, and lean management:

https://www.simplilearn.com/pgp-lean-six-sigma-certification-training-course?utm_source=google&utm_medium=cpc&utm_term=&utm_content=11174393172-108220153863-506962883161&utm_device=c&utm_campaign=Display-MQL-DigitalOperationsCluster-PG-QM-CLSS-UMass-YTVideoInstreamCustomIntent-US-Main-AllDevice-adgroup-QM-Desktop-CI&gclid=Cj0KCQiA3rKQBhCNARIsACUEW_ZGLHcUP2htLdQo46zP6Eo2-vX0MQYvc-o6GQP55638Up4tex85RBEaArn9EALw_wcB


And notice in the above webpage of the professor that he is giving a Post Graduate Program in Lean Six Sigma and in agile methodology, and I think that this Post Graduate Program is easy for me, since I am really smart and I can easily understand Lean Six Sigma or Six Sigma and I can easily understand agile methodology. Notice that in my thoughts below I am also explaining much more smartly what agile methodology is. I think that the more difficult part of Six Sigma or Lean Six Sigma is to understand the central limit theorem and to understand what SPC (Statistical Process Control) is and how to use the control charts so as to control the variability of the defects, and notice that I am talking about it in my thoughts below, but I think that the rest of Lean Six Sigma and Six Sigma is easy for me.


And you can read the rest of my thoughts about economies of scale and about artificial intelligence and about agile methodology etc. here:

https://groups.google.com/g/alt.culture.morocco/c/gxGMjRUmTr0




Thank you,
Amine Moulay Ramdane.
