Hello,
Please read my final corrected post again; I have fixed some typos, because I write fast.
About scalability: this is an interesting subject.
To become more proficient at scalability, and more precisely to make
my algorithms scalable on NUMA and multicore systems, I have invented
new algorithms. Take a look at the new algorithms of my Scalable
Parallel C++ Conjugate Gradient Linear System Solver Library here:
https://sites.google.com/site/aminer68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
I invented these new algorithms to be cache-aware and scalable on
NUMA and multicore systems, but what I want you to understand is
that by doing so, it has become easy for me to understand Deep Learning
artificial intelligence. At the very heart of Deep Learning, you have
to minimize an error function, and this results in matrix
calculations that are the most expensive part and the part that you can
parallelize; this is why Deep Learning is scalable, and why we use GPUs
to scale Deep Learning software. This has become easy for me to
understand because my Scalable Parallel C++ Conjugate Gradient Linear
System Solver Library above also does matrix calculations that are the
most expensive part and that are scalable. So this has efficiently
enhanced my knowledge, and now I am also capable of scaling Deep
Learning artificial intelligence, which is why I can now understand
Deep Learning artificial intelligence more easily.
Thank you,
Amine Moulay Ramdane.