Hi OpenPiton Team,
I ran into an interesting scenario while running some simulations and figured I would check in about it. I am currently studying how different processor configs affect the performance of a matrix multiplication program. To test different algorithms, I am multiplying some fairly large square matrices, and some interesting things happen at large dimensions.
I am using a single-tile Ariane core with default configs, and when I run my algorithm on 128x128 matrices of 32-bit ints (for both multiplicands and output), the simulator eventually hangs. No crash, just an indefinite stall that forces a shut-down of the system, seemingly always during an L1.5 exchange. I'll attach (1) the sim.log (first few and last lines) and (2) the custom C workload for context. Interestingly, decreasing the matrix dimensions (to, say, 100x100 or lower) works fine, but past a certain size the program stalls.
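For quick context (the full file is attached), the core of the workload is essentially the following, simplified a bit, with all three matrices declared as locals in main():

```c
#include <stdint.h>
#include <stdio.h>

#define N 128  /* stalls at 128, works at <= ~100 */

int main(void) {
    /* All three matrices are locals, so they land on the stack:
     * 3 * N * N * sizeof(int32_t) = 3 * 128 * 128 * 4 B = 192 KiB */
    int32_t a[N][N], b[N][N], c[N][N];

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = i + j;
            b[i][j] = i - j;
            c[i][j] = 0;
        }

    /* naive triple-loop multiply */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                c[i][j] += a[i][k] * b[k][j];

    printf("c[0][0] = %d\n", (int)c[0][0]);
    return 0;
}
```

By my math that is 192 KiB of locals at N = 128, which seems like a lot to put on a stack.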
I am curious what the cause of this could be. I am fairly sure it has something to do with how much memory is being used, but I am not certain. Perhaps the stack is overflowing? I know there is no way to dynamically allocate memory in this environment, but is there a better way to declare very large data structures than simply declaring them on the stack?
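One alternative I have been considering, assuming the stack really is the problem, is moving the arrays to file scope so they get static storage instead of stack space, with no malloc needed:

```c
#include <stdint.h>

#define N 128

/* File scope gives these static storage duration: they end up in .bss
 * rather than in main()'s stack frame, so the 192 KiB no longer counts
 * against the stack limit. */
static int32_t a[N][N], b[N][N], c[N][N];
```

Would that be the recommended way to handle large buffers here, or is there a supported way to enlarge the stack instead?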
As a workaround, I can just use smaller matrices and data sizes smaller than 32 bits, but I found this "bug" interesting and was wondering if anyone can shed light on what's happening. Thanks!
Zachary