If I try the following example, setting the array size to 100000 elements:

```python
arr = numpy.random.normal(size=100000).astype(numpy.float32)
```
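In full, the comparison I am running looks roughly like this (a minimal sketch: I am assuming Reikna's FFT here, which the quoted documentation below matches, and I cast to complex64 since the FFT computation expects a complex dtype):

```python
import numpy
from reikna.cluda import any_api
from reikna.fft import FFT

# Create a thread on whatever GPU API is available (CUDA or OpenCL)
api = any_api()
thr = api.Thread.create()

# Non-power-of-2 size; cast to complex64 since FFT expects a complex dtype
# (assumption on my part -- the original example used float32)
arr = numpy.random.normal(size=100000).astype(numpy.complex64)

# Compile the FFT computation for this shape/dtype and run it on the device
fft = FFT(arr).compile(thr)
arr_dev = thr.to_device(arr)
res_dev = thr.empty_like(arr_dev)
fft(res_dev, arr_dev)

# This is the assertion that fails for size=100000
numpy.testing.assert_allclose(
    res_dev.get(), numpy.fft.fft(arr), rtol=1e-3, atol=1e-3)
```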
I get an assertion error, implying that the results do not coincide with those obtained with numpy. I also plotted the outputs of the two methods against each other, and saw that the errors were not small.
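The plot was produced with something like this (continuing from the sketch above, so `arr`, `res_dev`, and the imports are reused):

```python
import matplotlib.pyplot as plt

reference = numpy.fft.fft(arr)
result = res_dev.get()

# If the two transforms agreed, all points would lie on the diagonal
plt.plot(reference.real, result.real, '.', label='real part')
plt.plot(reference.imag, result.imag, '.', label='imaginary part')
plt.xlabel('numpy.fft.fft output')
plt.ylabel('GPU FFT output')
plt.legend()
plt.show()
```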
However, if I do the same for an array of comparable size whose number of elements is a power of 2 (for example, 2**18), I obtain a correct result. I also get a correct result for small arrays, even when the size is not a power of 2.
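For example, a sweep like the following (a hypothetical helper, reusing `thr` and the imports from the sketch above) shows the pattern I described: the small size and the power-of-2 size pass, while 100000 gives large errors:

```python
def max_abs_error(size):
    # Hypothetical helper: same comparison as above, for a given size
    arr = numpy.random.normal(size=size).astype(numpy.complex64)
    fft = FFT(arr).compile(thr)
    arr_dev = thr.to_device(arr)
    res_dev = thr.empty_like(arr_dev)
    fft(res_dev, arr_dev)
    return numpy.abs(res_dev.get() - numpy.fft.fft(arr)).max()

# Small non-power-of-2 and power-of-2 sizes come out fine;
# the large non-power-of-2 size is the one that fails
for size in (1000, 2**18, 100000):
    print(size, max_abs_error(size))
```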
I am aware that:
> Current algorithm works most effectively with array dimensions being power of 2. This mostly applies to the axes over which the transform is performed, because otherwise the computation falls back to the Bluestein's algorithm, which effectively halves the performance.
But this should not affect the accuracy of the result, right?