
# Programming Challenge For Dumb Fucks


### Lester Thorpe

Feb 17, 2024, 10:22:03 AM
Let's see if the dumb fucks can crack this one.

Matrix multiplication (matmult) is a very important task in computer science,
but it is also very computationally intensive. The basic routine for matmult
has a runtime of O(n^3) for an nxn matrix.

The following C program multiplies two matrices with n=512 using two different
methods. Both methods, however, are O(n^3), i.e. they perform the exact same
number of calculations.

But note the result:

./matmult

Run Time Mult 1: 0.0200255 seconds

Run Time Mult 2: 0.722272 seconds

Holy godzilla motherfuckers! The first method is faster by roughly 3500%!

Let's bite the bullet and go for a matrix of n=2048.

Result:

Run Time Mult 1: 1.9467 seconds

Run Time Mult 2: 100.011 seconds

Holy shiite! Now we have the first being faster by roughly 5000%!

Let's have the dumb fuck code monkeys explain the differences.

IOW, let's have the code monkeys screech and bang the bars.

Ahahahahahahahahahahahahahahahahahahaha!

===============================
Begin C Program
===============================

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>

int main()
{
    int i, j, k, r, N = 2048;

    int *A = (int*)malloc(N * N * sizeof(int));
    int *B = (int*)malloc(N * N * sizeof(int));
    int *C = (int*)calloc(N * N, sizeof(int));  // zeroed: both multiplies accumulate with +=

    // vars for timing
    double seconds;
    struct timespec start, end;

    // initialize matrices with random ints
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            A[i*N + j] = rand() % N;
            B[i*N + j] = rand() % N;
        }
    }

    /******** Matrix Multiply 1 **********/

    // set start time
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (i = 0; i < N; i++) {
        for (k = 0; k < N; k++) {
            r = A[i*N + k];
            for (j = 0; j < N; j++) {
                C[i*N + j] += B[k*N + j] * r;
            }
        }
    }

    // get and print end time
    clock_gettime(CLOCK_MONOTONIC, &end);
    seconds = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("\nRun Time Mult 1: %g seconds\n", seconds);

    /******** Matrix Multiply 2 **********/

    // set start time
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (j = 0; j < N; j++) {
        for (k = 0; k < N; k++) {
            r = B[k*N + j];
            for (i = 0; i < N; i++) {
                C[i*N + j] += A[i*N + k] * r;
            }
        }
    }

    // get and print end time
    clock_gettime(CLOCK_MONOTONIC, &end);
    seconds = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("\nRun Time Mult 2: %g seconds\n", seconds);

    return 0;
}

===============================
End C Program
===============================

### rbowman

Feb 17, 2024, 12:42:59 PM
On Sat, 17 Feb 2024 15:21:59 +0000, Lester Thorpe wrote:

> Let's have the dumb fuck code monkeys explain the differences.

Funny, after I corrected the way you fucked up the indices I get
equivalent times. You really should use numpy if you don't know what
you're doing.

### DFS

Feb 17, 2024, 3:45:48 PM
On 2/17/2024 10:21 AM, Lester Thorpe wrote:
> Let's see if the dumb fucks can crack this one.
>
> Matrix multiplication (matmult) is a very important task in computer science,
> but it is also very computationally intensive. The basic routine for matmult
> has a runtime of O(n^3) for an nxn matrix.
>
> The following C program multiplies two matrices with n=512 using two different
> methods. Both methods, however, are O(n^3), i.e. they perform the exact same
> number of calculations.
>
> But note the result:
>
> ./matmult
>
> Run Time Mult 1: 0.0200255 seconds
>
> Run Time Mult 2: 0.722272 seconds
>
> Holy godzilla mutherfuckers! The first method is faster by 3700%!
>
> Let's bite the bullet and go for a matrix of n=2048.
>
> Result:
>
> Run Time Mult 1: 1.9467 seconds
>
> Run Time Mult 2: 100.011 seconds
>
> Holy shiite! Now we have the first being faster by 5100%!

Running your original crap-code on my WSL install:

N = 1024:
Run Time Mult 1: 2.80069 seconds
Run Time Mult 2: 5.08096 seconds

N = 2048:
Run Time Mult 1: 22.5953 seconds
Run Time Mult 2: 51.7866 seconds

> Let's have the dumb fuck code monkeys explain the differences.
>
> IOW, lets have the code monkeys screech and bang the bars.
>
> Ahahahahahahahahahahahahahahahahahahaha!

You screwed up the calculations in lines 3 and 5 of Matrix Multiply 2, in
which the order of the indices isn't consistent with the order used in
Matrix Multiply 1.

Correcting them makes the run times identical.

What a waste of time, you cringey fucknugget.

Well, at least YOU can learn from my programming below.

> ===============================
> Begin C Program
> ===============================

<snip subhuman code from the "C Programmer Extraordinaire">

Here's how a "REAL PROGRAMMER" does it. This will run for N = 2^1
through 2^11 (2048)
=============================================================================================
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <math.h>

void fillmatrices(int *A, int *B, int N) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            A[i*N + j] = rand() % N;
            B[i*N + j] = rand() % N;
        }
    }
}

void matrixmultiply_1(int *A, int *B, int *C, int N) {
    for (int i = 0; i < N; i++) {
        for (int k = 0; k < N; k++) {
            int r = A[i*N + k];
            for (int j = 0; j < N; j++) {
                C[i*N + j] += B[k*N + j] * r;
            }
        }
    }
}

void matrixmultiply_2(int *A, int *B, int *C, int N) {
    for (int j = 0; j < N; j++) {
        for (int k = 0; k < N; k++) {
            int r = B[j*N + k];
            for (int i = 0; i < N; i++) {
                C[j*N + i] += A[k*N + i] * r;
            }
        }
    }
}

double elapsedtime(clock_t start) {
    return (clock() - (double)start) / CLOCKS_PER_SEC;
}

int main(void) {
    clock_t start;
    srand(time(NULL));
    for (int i = 1; i < 12; i++) {
        int N = pow(2, i);
        size_t arrsize = (size_t)N * N * sizeof(int);
        int *A = (int*)malloc(arrsize);
        int *B = (int*)malloc(arrsize);
        int *C = (int*)calloc((size_t)N * N, sizeof(int));  /* zeroed: multiplies accumulate with += */
        fillmatrices(A, B, N);
        start = clock();
        matrixmultiply_1(A, B, C, N);
        printf("\nN = %d\n", N);
        printf("Multiply 1 = %.6fs\n", elapsedtime(start));
        start = clock();
        matrixmultiply_2(A, B, C, N);
        printf("Multiply 2 = %.6fs\n", elapsedtime(start));
        free(A);
        free(B);
        free(C);
    }
    return 0;
}
=============================================================================================

dfs@Win11-EE:~$ ./matmult

N = 2
Multiply 1 = 0.000030s
Multiply 2 = 0.000000s

N = 4
Multiply 1 = 0.000001s
Multiply 2 = 0.000001s

N = 8
Multiply 1 = 0.000002s
Multiply 2 = 0.000002s

N = 16
Multiply 1 = 0.000027s
Multiply 2 = 0.000011s

N = 32
Multiply 1 = 0.000121s
Multiply 2 = 0.000117s

N = 64
Multiply 1 = 0.000828s
Multiply 2 = 0.000785s

N = 128
Multiply 1 = 0.005100s
Multiply 2 = 0.005252s

N = 256
Multiply 1 = 0.040918s
Multiply 2 = 0.040801s

N = 512
Multiply 1 = 0.329828s
Multiply 2 = 0.329401s

N = 1024
Multiply 1 = 2.630907s
Multiply 2 = 2.628345s

N = 2048
Multiply 1 = 21.128377s
Multiply 2 = 21.150265s

(using corrected multiply2 code)

Note: srand(time(NULL)) isn't strictly necessary, but without it rand()
is seeded the same way on every run, so each run of the program fills
the arrays with the same values.

### L Thorpe

Feb 17, 2024, 4:22:41 PM
On 17 Feb 2024 17:42:54 GMT, rbowman wrote:

>
>> Let's have the dumb fuck code monkeys explain the differences.
>
> Funny, after I corrected the way you fucked up the indices I get
> equivalent times. You really should use numpy if you don't know what
> you're doing.
>

I did not ask for a code "correction."

I asked to explain the difference between the two timings.

You did not do that. You FAILED.

Monkey get no banana.

### L Thorpe

Feb 17, 2024, 4:25:18 PM
On Sat, 17 Feb 2024 15:45:43 -0500, DFS wrote:

>
> Correcting them makes the run times identical.
>

I did not ask for a code correction.

I asked for an explanation for the time differences.

You did not provide the answer. You FAILED.

Monkey get no banana.

### Joel

Feb 17, 2024, 4:31:08 PM
Who do you think you're fooling, with your geeked out bullshit,
Russell? You're just an OCD case beyond imagination. Probably strung
out on crystal meth, I mean I can't conceive of what else would make
you behave as you do, and be such an utter loudmouth about your
preoccupation with running Linux without the distro overhead. Mint
Cinnamon runs like a dream on my box, just as Win11 did, and Win12
would've. What am I missing? What can't I do with it? Just
bullshit.

--
Joel W. Crump

Amendment XIV
Section 1.

[...] No state shall make or enforce any law which shall
abridge the privileges or immunities of citizens of the
United States; nor shall any state deprive any person of
life, liberty, or property, without due process of law;
nor deny to any person within its jurisdiction the equal
protection of the laws.

Dobbs rewrites this, it is invalid precedent. States are
liable for denying needed abortions, e.g. TX.

### rbowman

Feb 17, 2024, 6:45:11 PM
The code correction is the difference between the two timings you ignorant
fuck. You fucked up the indices.

### Physfitfreak

Feb 17, 2024, 7:44:34 PM
You're all using someone else's code. Why don't you guys think for
yourselves and manipulate matrices in a readable way if you want others
to comment on that.

Why would you use A[i + j] to refer to a matrix instead of the readable
A[i][j] ?

Did somebody tell you to look inside the computer and use the way that the
computer stores matrices? Did somebody tell you to make it your
concern?

Use A[i][j] for your matrix and rewrite the programs the way _you_ know
how matrix multiplication is done. Then your code becomes readable for
others, and they can comment on them.


### Physfitfreak

Feb 17, 2024, 7:57:51 PM
On 2/17/2024 9:21 AM, Lester Thorpe wrote:
> A[i*N + j] = rand() % N;

Why aren't you using a two-dimensional array A[i][j] to represent a matrix?

### Lester Thorpe

Feb 18, 2024, 5:15:37 AM
On Sat, 17 Feb 2024 15:21:59 +0000, Lester Thorpe wrote:

> Let's see if the dumb fucks can crack this one.
>

As predicted, none of the dumb fuck code monkeys explained
the issue regarding my PERFECT C code.

Ha, ha, ha! One dumb fuck actually had the audacity to claim
that I messed up the indices. What a stupid cluck! Ha, ha!

Nope. There ain't nothing amiss with my PERFECT C code.

The problem is CACHE HITS/MISSES.

The CPU operates only on cache. RAM is essentially just another
disk drive. Data and instructions from RAM memory are loaded
into the cache and the CPU processes that cached data.

In the first matmult method, the data is accessed sequentially
so that there are minimal cache misses.

In the second matmult method, the data access jumps around
the array and this increases the cache misses which severely
slows performance.

Now for another shocker.

The program is compiled generically:

gcc -O2 -march=x86-64 -o matmult2 matmult.c

Result:

Run Time Mult 1: 2.72307 seconds

Run Time Mult 2: 121.731 seconds

The program is compiled to fit the CPU cache:

gcc -O2 -march=native -o matmult2 matmult.c

Run Time Mult 1: 1.9467 seconds

Run Time Mult 2: 100.011 seconds

Holy moley! There is roughly a 20-40% speed improvement
in the optimized code versus the generic code.

Distro jockeys take serious note. Your distro code
is junk.

### Stéphane CARPENTIER

Feb 18, 2024, 5:24:42 AM
Le 17-02-2024, L Thorpe <lt...@sixsixsix.net> a écrit :
> On Sat, 17 Feb 2024 15:45:43 -0500, DFS wrote:
>
>>
>> Correcting them makes the run times identical.
>>
>
> I did not ask for a code correction.

In fact, you did.

> I asked for an explanation for the time differences.

The explanation is the correction.

> You did not provide the answer.

He did. And rbowman did it, too.

> You FAILED.

You failed to understand two explanations. But you didn't fail to make
me laugh.

--
Si vous avez du temps à perdre :
https://scarpet42.gitlab.io

### Stéphane CARPENTIER

Feb 18, 2024, 5:39:23 AM
Le 18-02-2024, Physfitfreak <Physfi...@gmail.com> a écrit :
>
> Why would you use A[i + j] to refer to a matrix instead of the readable
> A[i][j] ?

This one is really fun. Normally I don't read anyone who can't trim
the useless parts of the previous messages, but this one is really worth
it. I understand why you consider LP/NV/DG/FR/whatever as half a god now.

> computer stores matricies? Did somebody tell you to make it your
> concerned?

Yes, what you tell others to do doesn't apply to you. I can see that.

### L Thorpe

Feb 18, 2024, 7:11:33 AM
On Sat, 17 Feb 2024 18:57:48 -0600, Physfitfreak wrote:

> On 2/17/2024 9:21 AM, Lester Thorpe wrote:
>> A[i*N + j] = rand() % N;
>
>
> Why aren't you using a two-dimensional array A[i][j] to represent a matrix?
>

There really is no such thing as a 2-D (or N-D) matrix in C or any language.
All storage is strictly linear or 1-D.

The notation "A[i][j]" is translated by the compiler to A[i*N + j].

In addition, since the array was not defined as a 2-D array but
rather as a block of memory with a pointer to the start address,
the notation "A[i][j]" would generate a compiler error.

The notation "A[i*N + j]" means to add the value i*N + j to the pointer
A, which gives the memory location of the data.

### DFS

Feb 18, 2024, 9:12:09 AM
On 2/18/2024 5:15 AM, Lester Thorpe wrote:

<snip the usual crazed and misplaced gloating>

> The program is compiled generically:
> gcc -O2 -march=x86-64 -o matmult2 matmult.c
> Run Time Mult 1: 2.72307 seconds
> Run Time Mult 2: 121.731 seconds

> The program is compiled to fit the CPU cache:
> gcc -O2 -march=native -o matmult2 matmult.c
> Run Time Mult 1: 1.9467 seconds
> Run Time Mult 2: 100.011 seconds

> Holy moley! There is roughly a 35% speed improvement
> in the optimized code versus the generic code.

For this extremely contrived example, which does billions of array accesses.

Once again, you can't get your bullshit by me.

> Distro jockeys take serious note. Your distro code
> is junk.

For this one piece of code the biggest improvement comes not from the
-march= flag but from optimization level 3, which contains a bunch of
loop optimizations:

-fipa-cp-clone
-floop-interchange
-floop-unroll-and-jam
-fpeel-loops
-fpredictive-commoning
-fsplit-loops
-fsplit-paths
-ftree-loop-distribution
-ftree-partial-pre
-funswitch-loops
-fvect-cost-model=dynamic
-fversion-loops-for-stride

https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

On my Ubuntu WSL system

Multiply 1, N=1024
gcc, no architecture specified
optimizations speed
none 2.635s
-O1 0.413s
-O2 0.455s
-O3 0.144s

-march=x86-64
optimizations speed
none 2.627s
-O1 0.413s
-O2 0.457s
-O3 0.144s

-march=native
optimizations speed
none 2.781s
-O1 0.413s
-O2 0.455s
-O3 0.077s

From the slowest to the fastest is a drastic difference. I'll be sure
to test the -march=native and -O3 options in the future.

https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc.pdf
pg 401

3.18.58 x86 Options
These ‘-m’ options are defined for the x86 family of computers.
-march=cpu-type
Generate instructions for the machine type cpu-type. In contrast to
‘-mtune=cpu-type’, which merely tunes the generated code for the
specified cpu-type, ‘-march=cpu-type’ allows GCC to generate code that
may not run at all on processors other than the one indicated.
Specifying ‘-march=cpu-type’ implies ‘-mtune=cpu-type’.

The choices for cpu-type are:
‘native’
This selects the CPU to generate code for at compilation time
by determining the processor type of the compiling machine. Using
‘-march=native’ enables all instruction subsets supported by the
local machine (hence the result might not run on different machines).
Using ‘-mtune=native’ produces code optimized for the local machine
under the constraints of the selected instruction set.

‘x86-64’
A generic CPU with 64-bit extensions

### Lester Thorpe

Feb 18, 2024, 1:42:14 PM
On Sun, 18 Feb 2024 09:12:07 -0500, DFS wrote:

>
> For this extremely contrived example
>

Certainly. Just like image processing, video processing,
audio processing, graphics processing, etc., etc., are
all "extremely contrived."

Ahahahahahahahahahahahahaha!

As per usual, you haven't a leg to stand on.

>
> Once again, you can't get your bullshit by me.
>

For your edification, I suggest that you enroll here.
They do have an opening for someone of your caliber:

Ahahahahahahahahahahahahaha!

The truth is always funny.

### Physfitfreak

Feb 18, 2024, 2:58:49 PM
And you want us to use that picture to check for some matrix manipulation?

It is sadism :-) Hehe :)

Matrix algebra is complicated enough by itself. Why not use A[i][j]
for a matrix, so we can focus on the matrix algebra itself and not on
the irrelevant innards of the computer?

Unless the answer to your problem _depends_ on the way matrices are
stored in the computer, the way to pose this problem is to at least
use A[i][j] for the matrix.

Why sadism? Example: You put two glasses of coke in front of the readers
and tell them to taste them both and find out which one is diet, if they
can; otherwise they're just monkeys. But when we lift the glass to taste
the coke, we find that it is not a glass with coke in it, but a little
statue of a glass with coke in it. So we ask you where's your coke then?
And you reply, "Oh, you need to order that first from Walmart, using the
barcodes you see on the two statues." ...

That's why. Hahhahahah :)

### Physfitfreak

Feb 18, 2024, 3:11:08 PM
But one could find this solution, if you had used A[i][j] for matrix
too. Even I, not knowing jack about RAM or CACHE, could just check and
see how matrix A[i][j] is stored and its elements accessed in this
problem, then see the cause of the inefficiency.

It's a nice problem regardless. In fact I remember that in numerical
methods class they taught us to avoid that unnecessary jumping back and
forth.

### Physfitfreak

Feb 18, 2024, 3:27:58 PM
I have a feeling that if you had written the code in FORTRAN the result
would be the opposite of what you got. The inefficient one in C would
become the efficient one in FORTRAN. FORTRAN stores matrices by columns,
not by rows. At least in the version of it we used back then.

### Lester Thorpe

Feb 18, 2024, 4:35:59 PM
On Sun, 18 Feb 2024 14:11:05 -0600, Physfitfreak wrote:

>
> It's a nice problem regardless. In fact I remember in numerical methods
> class they taught us to avoid making that unnecessary jumping back and
> forth necessary.
>

Cache is the hidden devil in all C programming.

The C programmer has to always be very careful that his code includes
proper cache management or else he will incur serious problems.

https://johnnysswlab.com/make-your-programs-run-faster-by-better-using-the-data-cache/

The competent C programmer has to be a master of cache.

### Physfitfreak

Feb 18, 2024, 5:16:10 PM
On 2/18/2024 12:42 PM, Lester Thorpe wrote:
> For your edification, I suggest that you enroll here.
> They do have an opening for someone of your caliber:
>

With his female brain he'll love it.

### Physfitfreak

Feb 18, 2024, 5:44:02 PM
On 2/18/2024 3:35 PM, Lester Thorpe wrote:
> https://johnnysswlab.com/make-your-programs-run-faster-by-better-using-the-data-cache/
>
> The competent C programmer has to be a master of cache.
>

Thanks. I'll keep the link till I relearn C.

### rbowman

Feb 18, 2024, 7:34:53 PM
On Sun, 18 Feb 2024 16:43:57 -0600, Physfitfreak wrote:

> Thanks. I'll keep the link till I relearn C.

### DFS

Feb 18, 2024, 8:50:27 PM
On 2/18/2024 1:42 PM, Lameass Larry Piet wrote:
> On Sun, 18 Feb 2024 09:12:07 -0500, DFS wrote:
>
>>
>> For this extremely contrived example
>>
>
> Certainly. Just like image processing, video processing,
> audio processing, graphics processing, etc., etc., are
> all "extremely contrived."
>
> Ahahahahahahahahahahahahaha!
>
> As per usual, you haven't a leg to stand on.

As per usual, you're a phony baloney who pretends he writes
sophisticated C code for image/video/audio/graphics.

What we have in common is this: we both know the truth about your
nonexistent "C Programmer Extraordinaire" skillz.

>> Once again, you can't get your bullshit by me.
>>
>
> For your edification, I suggest that you enroll here.
> They do have an opening for someone of your caliber:
>
>
> Ahahahahahahahahahahahahaha!
>
> The truth is always funny.

ALL your moments of truth happen in simpleton bash scripts.

### Joel

Feb 18, 2024, 8:59:30 PM
DFS <nos...@dfs.com> wrote:
>On 2/18/2024 1:42 PM, Lameass Larry Piet wrote:
>> On Sun, 18 Feb 2024 09:12:07 -0500, DFS wrote:
>>
>>> For this extremely contrived example
>>
>> Certainly. Just like image processing, video processing,
>> audio processing, graphics processing, etc., etc., are
>> all "extremely contrived."
>>
>> Ahahahahahahahahahahahahaha!
>>
>> As per usual, you haven't a leg to stand on.
>
>As per usual, you're a phony baloney who pretends he writes
>sophisticated C code for image/video/audio/graphics.
>
>What we have in common is this: we both know the truth about your
>nonexistent "C Programmer Extraordinaire" skillz.

From what he's posted, I could believe he has the skillz to download
someone else's work and compile it, since his distro doesn't even
distribute shit as binaries already, which is a really great way to
advocate the use of Linux BTW, but the idea that his goofy programs to
do simple and pointless math problems, has anything to do with video
processing/etc., is laughable, yes. Russell is a fraud just like DJT.

### %

Feb 18, 2024, 9:11:15 PM
Joel wrote:
> DFS <nos...@dfs.com> wrote:
>> On 2/18/2024 1:42 PM, Lameass Larry Piet wrote:
>>> On Sun, 18 Feb 2024 09:12:07 -0500, DFS wrote:
>>>
>>>> For this extremely contrived example
>>>
>>> Certainly. Just like image processing, video processing,
>>> audio processing, graphics processing, etc., etc., are
>>> all "extremely contrived."
>>>
>>> Ahahahahahahahahahahahahaha!
>>>
>>> As per usual, you haven't a leg to stand on.
>>
>> As per usual, you're a phony baloney who pretends he writes
>> sophisticated C code for image/video/audio/graphics.
>>
>> What we have in common is this: we both know the truth about your
>> nonexistent "C Programmer Extraordinaire" skillz.
>
>
> From what he's posted, I could believe he has the skillz to download
> someone else's work and compile it, since his distro doesn't even
> distribute shit as binaries already, which is a really great way to
> advocate the use of Linux BTW, but the idea that his goofy programs to
> do simple and pointless math problems, has anything to do with video
> processing/etc., is laughable, yes. Russell is a fraud just like DJT.
>
lets throw big rocks at them

### DFS

Feb 18, 2024, 10:01:32 PM
On 2/18/2024 8:59 PM, Joel wrote:
> DFS <nos...@dfs.com> wrote:
>> On 2/18/2024 1:42 PM, Lameass Larry Piet wrote:
>>> On Sun, 18 Feb 2024 09:12:07 -0500, DFS wrote:
>>>
>>>> For this extremely contrived example
>>>
>>> Certainly. Just like image processing, video processing,
>>> audio processing, graphics processing, etc., etc., are
>>> all "extremely contrived."
>>>
>>> Ahahahahahahahahahahahahaha!
>>>
>>> As per usual, you haven't a leg to stand on.
>>
>> As per usual, you're a phony baloney who pretends he writes
>> sophisticated C code for image/video/audio/graphics.
>>
>> What we have in common is this: we both know the truth about your
>> nonexistent "C Programmer Extraordinaire" skillz.
>
>
> From what he's posted, I could believe he has the skillz to download
> someone else's work and compile it, since his distro doesn't even
> distribute shit as binaries already, which is a really great way to
> advocate the use of Linux BTW, but the idea that his goofy programs to
> do simple and pointless math problems, has anything to do with video
> processing/etc., is laughable, yes. Russell is a fraud just like DJT.

That he is. Even worse than Trump in some ways.

### DFS

Feb 18, 2024, 10:14:18 PM
On 2/17/2024 7:44 PM, Physfitfreak wrote:

> You're all using someone else's code. Why don't you guys think for
> yourselves and manipulate matrices in a readable way if you want others
> to comment on that.

No, we are NOT using others' code (at least I'm not - other than the
multiply code he posted - but Feeb is big on 'forgetting' to attribute
code to the author). Very likely he found this code somewhere
explaining array traversal and cache hits/misses.

> Why would you use A[i + j] to refer to a matrix instead of the readable
> A[i][j] ?

A[i + j] or A[i*N + j] is just as readable as A[i][j]. Maybe more so.

Be more flexible, old man. Flatten the data into one row in your mind,
and move across instead of up and down. It's easy.

> Did somebody tell you to look inside the computer and use the way that
> computer stores matricies? Did somebody tell you to make it your
> concerned?

You sound frustrated. On the autism spectrum, probably. Take your meds.

> Use A[i][j] for your matrix and rewrite the programs the way _you_ know
> how matrix multiplication is done.

Now would be a good time for you to do that simple assignment, in C.
After you make a real attempt and post it, I'll post the working A[i][j]
version I just wrote.

> Then your code becomes readable for others, and they can comment on them.

Your comments should be identical either way. The code is very much
readable. Spoon-feeding you is not necessary.

### DFS

Feb 19, 2024, 6:36:37 AM
On 2/17/2024 10:21 AM, Larry Plagiarizer Piet wrote:

https://stackoverflow.com/questions/41452781/effect-of-cache-misses-on-time-of-matrix-multiplication

But since Feeb can't write C++ code, he translated the C++ into C. He
didn't even bother to change the array names from A,B,C. Cheap-ass
motherfucker.

Hey Feeb, why didn't you give credit to the author Cashif Ilyas for the
C++ code from which you plagiarized?

You thought r = A[i*N+k] after the 1st nested loop would fool me? Not a
chance.

> Let's see if the dumb fucks can crack this one.
>
> Matrix multiplication (matmult) is a very important task in computer science,
> but it is also very computationally intensive. The basic routine for matmult
> has a runtime of O(n^3) for an nxn matrix.
>
> The following C program multiplies two matrices with n=512 using two different
> methods. Both methods, however, are O(n^3), i.e. they perform the exact same
> number of calculations.
>
<snip requoted C program>

### Lord Master

Feb 19, 2024, 7:56:17 AM
On Saturday, February 17, 2024 at 7:44:34 PM UTC-5, Physfitfreak wrote:
>
> You're all using someone else's code. Why don't you guys think for
> yourselves and manipulate matrices in a readable way if you want others
> to comment on that.
>

Matrix multiplication is so fundamental that the basic loop algorithm appears
in thousands of textbooks and probably even more web sites.

But let me tell you about these dumb fuck code monkeys.

Every time I post a sample of my inimitable and perfect C or other code they will
all begin a frenzied web search to see if they can locate something similar.
They will be searching for hours, even days, to find some scrap of similar
code which they will then proudly present as their "evidence."

Ha, ha, ha, ha, ha, ha, ha, ha, ha, ha! What a fucking waste of valuable time!

But that's the dumb fuck code monkey.

As the great poet Wolfgang Goethe once said: "We can know only what we are."

The dumb fuck code monkey is so fucking stupid that all he can ever know is
stupidity -- his own stupidity.

### Physfitfreak

Feb 19, 2024, 6:04:39 PM
You call me "old man"? How old are you?

### Physfitfreak

Feb 19, 2024, 6:10:34 PM
On 2/19/2024 6:56 AM, Lord Master wrote:
> Every time I post a sample of my inimitible and perfect C or other code they will
> all begin a frenzied web search to see if they can locate something similar.
> They will be searching for hours, even days, to find some scrap of similar
> code which they will then proudly present as their "evidence."

Hahhahahh :-)

That's cause they're your groupies :)

### Physfitfreak

Feb 19, 2024, 7:10:06 PM
On 2/19/2024 5:36 AM, DFS wrote:
> On 2/17/2024 10:21 AM, Larry Plagiarizer Piet wrote:
>
>
>
> https://stackoverflow.com/questions/41452781/effect-of-cache-misses-on-time-of-matrix-multiplication
>
> But since Feeb can't write C++ code, he translated the C++ into C.  He
> didn't even bother to change the array names from A,B,C.  Cheap-ass
> motherfucker.
>
>
> Hey Feeb, why didn't you give credit to the author Cashif Ilyas for the
> C++ code from which you plagiarized?
>
> You thought r = A[i*N+k] after the 1st nested loop would fool me?  Not a
> chance.
>
>
>

Nevertheless, you're his groupie.

### DFS

Feb 19, 2024, 9:37:33 PM
On 2/19/2024 6:04 PM, Physfitfreak wrote:
> On 2/18/2024 9:14 PM, DFS wrote:

>>> Use A[i][j] for your matrix and rewrite the programs the way _you_
>>> know how matrix multiplication is done.
>>
>> Now would be a good time for you to do that simple assignment, in C.
>> After you make a real attempt and post it, I'll post the working
>> A[i][j] version I just wrote.

So where's your version using A[i][j] notation? Don't run away - this
is a code challenge of easy-medium difficulty.

Here's the output of mine:

$ gcc -O3 matrix_multiply_ij.c -o matmult -lm
$ ./matmult

4x4 Random Data
Matrix A         Matrix B
-------------    -------------
1 2 1 1          3 1 1 2
0 3 1 2          0 0 0 0
0 0 2 3          3 3 2 2
2 0 1 2          0 3 0 3

Matrix C: multiply A * B
------------------------
6 7 3 7
3 9 2 8
6 15 4 13
9 11 4 12

N= 4: multiplication in 0.0000s

8x8 Random Data
Matrix A                     Matrix B
-----------------------      -----------------------
4 4 6 0 1 7 4 6              1 5 1 6 3 6 5 0
0 7 0 5 2 6 2 3              6 6 2 3 1 4 0 6
1 3 0 6 7 0 2 3              7 7 3 1 5 3 6 2
5 0 5 0 7 3 7 7              2 5 5 7 6 1 6 0
5 0 6 7 3 0 5 7              2 5 6 5 7 5 3 2
5 0 5 2 7 5 4 6              7 2 0 4 5 6 4 1
6 7 4 1 5 4 4 5              6 4 6 7 1 2 3 2
3 4 5 4 4 2 5 1              5 0 6 4 1 0 1 3

Matrix C: multiply A * B
------------------------
175 121 96 127 98 113 105 71
125 97 81 116 86 83 69 65
72 96 109 118 96 63 71 45
152 129 146 159 118 112 116 62
132 137 148 163 120 80 134 49
147 131 132 156 136 120 126 55
165 154 121 157 115 134 108 87
127 138 106 130 106 95 105 57

N= 8: multiplication in 0.0000s
N= 16: multiplication in 0.0000s
N= 32: multiplication in 0.0000s
N= 64: multiplication in 0.0001s
N= 128: multiplication in 0.0020s
N= 256: multiplication in 0.0167s
N= 512: multiplication in 0.1799s
N=1024: multiplication in 2.7768s
N=2048: multiplication in 31.8283s

### Physfitfreak

Feb 20, 2024, 12:22:59 AM
Highly suspect to be the result of an AI inquiry.

I know how to multiply matrices, and I can code it efficiently if I know
how it is stored in the computer (i.e., by column or by row). So I don't
feel the need to show that simple skill to others. Sorry :)

But if you feel you need to show your skills to me and others
(especially Farley), add the two e-based numbers I gave in one of my
challenge questions, with the result of course expressed in e-base with
14 significant digits. AI cannot do that, so if you code the conversion,
I'll know (for the first time) that you can program productively.

In fact if you can do it, I'm sure it'll make Farley pretty jealous of
you :)

### Physfitfreak

Feb 20, 2024, 12:48:04 AMFeb 20
to
Just in case you decided to do it, first do it manually, showing your
results step by step (digit by digit as you find them) in your message.
Only then would you know exactly what to do, and therefore could code it
if you're good at programming.

### DFS

Feb 20, 2024, 7:19:13 AMFeb 20
to
Suspect away.

I'm sure some AI tool will generate dot product results (of square
matrices) using the standard algorithm:

for (int i=0; i < N; i++) {
    for (int j=0; j < N; j++) {
        C[i][j] = 0;
        for (int k=0; k < N; k++) {
            C[i][j] += A[i][k] * B[k][j];
        }
    }
}

But intentionally passing off others' code as your own is a Feeb tactic
(I busted him several times doing it).

3. The perl code he posted here: <rkqoc...@news3.newsguy.com>
came from: https://perlmaven.com/count-words-in-text-using-perl

> I know how to multiply matrices, and I can code it efficiently if I know
> how it is stored in the computer (i.e., by column or by row).

Then code it. Don't just say you can (common Feeb tactic); do it.

> So I don't feel the need to show that simple skill to others. Sorry :)

wimp

You begged me: "Use A[i][j] for your matrix and rewrite the programs the
way _you_ know how matrix multiplication is done."

I DID write such a version in C, and the above is the output. But you
won't see the program until you submit your attempt.

> But if you feel you need to show your skills to me and others
> (especially Farley), add the two e-based numbers I gave in one of my
> challenge questions, with the result of course expressed in e-base with
> 14 significant digits.

Looks somewhat interesting. I might give it a try.

What 2 numbers?

> AI cannot do that, so if you code the conversion,
> I'll know (for the first time) that you can program productively.

Surely there's a website that will do it?

> In fact if you can do it, I'm sure it'll make Farley pretty jealous of
> you :)

He already is. But Feeb is a vicious, petulant child that only gives
credit to FOSS coders or suckups like you.

Example: years ago he whined to me:

> When are YOU gonna learn something useful, like colorizing the output
> of your namby-pamby python output to differentiate file types?

So I wrote a C program to do it:

----------------------------------------------------------------------
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void getfiletypeinfo(char *filecmd, char *farr[])
{
    // use pipe to get results of system command
    char pBuffer[50];
    FILE *pPipe = popen(filecmd,"r");
    fgets(pBuffer, 50, pPipe);
    pclose(pPipe);

    // assign color value and short descriptor to file type
    if      (strstr(pBuffer,"symbolic link" )!=NULL) {farr[0] = "31";
                                                      farr[1] = "symbolic link";}
    else if (strstr(pBuffer,"shared object" )!=NULL) {farr[0] = "32";
                                                      farr[1] = "shared object";}
    else if (strstr(pBuffer,"Perl script"   )!=NULL) {farr[0] = "33";
                                                      farr[1] = "Perl script";}
    else if (strstr(pBuffer,"shell script"  )!=NULL) {farr[0] = "34";
                                                      farr[1] = "shell script";}
    else if (strstr(pBuffer,"python"        )!=NULL) {farr[0] = "35";
                                                      farr[1] = "Python script";}
    else if (strstr(pBuffer,"ASCII text"    )!=NULL) {farr[0] = "36";
                                                      farr[1] = "text file";}
    else if (strstr(pBuffer,"ELF 64-bit LSB")!=NULL) {farr[0] = "37";
                                                      farr[1] = "executable";}
}

int main(int argc, char *argv[])
{
    if(argc<2) {printf("Enter a directory, ie /usr/sbin/\n");exit(0);}
    if(opendir(argv[1])==NULL) {printf("%s is invalid\n",argv[1]);exit(0);}
    if(argv[1][strlen(argv[1])-1] != '/')
        {printf("Last character of directory name must be /\n");exit(0);}

    int i = 0, filecnt = 0;
    char fcmd[100], c[11], *farr[2];

    struct dirent **dir;
    filecnt = scandir(argv[1], &dir, NULL, alphasort);
    for(i=0;i<filecnt;i++) {
        sprintf(fcmd,"file %s%s -b",argv[1],dir[i]->d_name);
        getfiletypeinfo(fcmd,farr);
        sprintf(c,"\033[1;%sm",farr[0]);
        printf("%s",c);
        printf("%3d. %-30s %s\n", i+1, dir[i]->d_name, farr[1]);
        free(dir[i]);
    }
    free(dir);
    return(0);
}
----------------------------------------------------------------------

Feeb's immature response?

"Nope. You did not do it. All terminal output on GNU/Linux should be
done through ncurses."

He cried like that because he was - and still is - unable to do it himself.

### DFS

Feb 20, 2024, 9:25:58 AMFeb 20
to
On 2/19/2024 7:56 AM, Lying Larry Piet wrote:

> Every time I post a sample of my inimitible and perfect C or other code

It's inimitable, brainiac.

I bet you can spell delusional, though.

> they will
> all begin a frenzied web search to see if they can locate something similar.

"they"? I'm the only one that exposes your blatant and repeated
plagiarism.

3. The perl code he posted here: <rkqoc...@news3.newsguy.com>
came from: https://perlmaven.com/count-words-in-text-using-perl

When busted you whine and lie like a 3-year-old.

> They will be searching for hours, even days, to find some scrap of similar
> code which they will then proudly present as their "evidence."

Where hours and days = a few minutes.

### Physfitfreak

Feb 20, 2024, 5:59:31 PMFeb 20
to
I don't remember the two numbers. They were both four-digit numbers in
base e. e is Euler's number (or the Napierian number, depending on
which school you attended). It is the base used in "natural" logarithms,
instead of 10. For this problem, it's just a number between 2 and 3
which in base 10 has infinitely many digits following the radix point.

But do use those same two numbers that I gave, so I can verify your
result faster.

### Physfitfreak

Feb 20, 2024, 6:13:24 PMFeb 20
to
On 2/20/2024 6:19 AM, DFS wrote:
>
>
>> In fact if you can do it, I'm sure it'll make Farley pretty jealous of
>> you :)
>
>
> He already is.  But Feeb is a vicious, petulant child that only gives
> credit to FOSS coders or suckups like you.
>

No, I think he will give you good credit if you do it successfully,
cause he said he'd never done that type of problem, and has either
passed on it or is trying to find time to do it one of these days.

So it might even become a race between you and Farley :-) Whoever does
it correctly first, at least as far as I'm concerned, has done a better
job of actually programming something useful.

I successfully coded that same problem (the base conversion) in 1979 as
a homework programming problem in the PL/I course I was taking. It was
one of the first few programs I'd ever written, so I enjoyed the heck
out of it.

### candycanearter07

Feb 20, 2024, 8:13:06 PMFeb 20
to
On 2/18/24 06:11, L Thorpe wrote:
> On Sat, 17 Feb 2024 18:57:48 -0600, Physfitfreak wrote:
>
>> On 2/17/2024 9:21 AM, Lester Thorpe wrote:
>>> A[i*N + j] = rand() % N;
>>
>>
>> Why aren't you using a two-dimensional array A[i][j] to represent a matrix?
>>
>
> There really is no such thing as a 2-D (or N-D) matrix in C or any language.
> All storage is strictly linear or 1-D.
>
> The notation "A[i][j]" is translated by the compiler to A[i*N +j].
>
> In addition, since the array was not defined as a 2-D array but
> rather as a block of memory with a pointer to the start address
> the notion "A[i][j]" would generate a compiler error.
>
> The notation "A[i*N +j]" means to add the value i*N+j to the pointer
> A which gives the memory location of the data.

I'd like to add that technically, if you were to dynamically allocate a
2d array a certain way (by allocating an array of row pointers, then
allocating a separate array for each row) you can end up with an array
that requires the double [] notation.

Also, I prefer double [], but you can do what you like ^^
--
user <candycane> is generated from /dev/urandom
