Thanks for the details; I hadn't realized the numpy version was doing 10000 points. I'll make some modifications and see what I can come up with.
As for loops (on top of which, I still have to get my data into an ndarray from my CSV file; a possible snippet for that is below), I am using one to create a sliding window.
I was never concerned about this before because it wasn't slow in my previous attempts, but I can see why I'd want to remove it entirely, or at least make it efficient.
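For the CSV part, I'm guessing something like numpy's genfromtxt will get me a 2D ndarray in one call (the file name, delimiter, and skip_header below are placeholders for my actual file, not its real layout):

    import numpy as np

    # Load the whole CSV straight into a 2D ndarray.
    # "data.csv", the delimiter, and skip_header are placeholders.
    datali = np.genfromtxt("data.csv", delimiter=",", skip_header=1)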
Here is my sliding-window loop in my eval_func (I use a0, b0, c0, etc. as ARGs for easier tracking):
    for row in range(start, len(datali)):
        if row > 5:
            a5, b5, c5, ... = datali[row - 5]
            a4, b4, c4, ... = datali[row - 4]
            a3, b3, c3, ... = datali[row - 3]
            a2, b2, c2, ... = datali[row - 2]
            a1, b1, c1, ... = datali[row - 1]
            a0, b0, c0, ... = datali[row]
            # Not using all the terms from above in the calculation!
            # i.e. there are holes in my matrix; the first row is shorter than the rest.
            terms = [a0, b2, c3, ...]
            evaluated.append(code_comp(*terms))
            # <do more stuff>
I would imagine that this may need to change...
I can picture doing more efficient sliding-window operations; this was just never slow enough before to be worth worrying about.
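For example, since each term I actually use is really just a (lag, column) pair, I think I could slice whole columns once instead of unpacking tuples row by row. A rough sketch, where the (lag, column) pairs are placeholders and I'm assuming code_comp is built from numpy-friendly elementwise operations:

    # Assumes datali is a 2D ndarray.
    # Each term used in the calculation is a (lag, column) pair, e.g.
    # a0 -> (0, 0), b2 -> (2, 1), c3 -> (3, 2); these pairs are placeholders.
    lags_cols = [(0, 0), (2, 1), (3, 2)]

    n = len(datali)
    first = 6  # mirrors the "row > 5" guard: first processed row is 6
    # For lag k, rows first..n-1 read datali[row - k], which is just the
    # column slice datali[first - k : n - k, col] -- one slice per term
    # instead of one tuple unpack per row.
    terms = [datali[first - k : n - k, col] for k, col in lags_cols]

    # If code_comp only does elementwise math, it can take whole columns
    # and evaluate every window in a single call.
    evaluated = code_comp(*terms)

If code_comp can't take arrays, I suppose I could still keep the slicing and zip over the columns, which at least drops the per-row unpacking.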
Other than that, my entire dataset is a single array of roughly 2000 rows and fewer than 30 columns.
Thanks for your help.
-Mark Lefebvre