My colleagues and I have been experimenting with the NODDI toolbox on our system for a short while, and we have noticed that we can achieve a computational speedup by changing the number of voxels NODDI hands out to the workers at once.
In /noddi_toolbox/fitting/batch_fitting.m, line 72 defines a variable: progressStepSize = 100;
Changing progressStepSize can significantly speed up NODDI. From my understanding, this variable controls how many voxels the program allocates across all of your cores at once; the default is 100.
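As a sketch of the change (the line number may differ between toolbox versions, and scaling the batch size by core count is our own heuristic, not something the toolbox documents):

```matlab
% In /noddi_toolbox/fitting/batch_fitting.m (line 72 in our copy):
% progressStepSize = 100;    % toolbox default

% Our heuristic: hand out ~100 voxels per physical core per dispatch,
% i.e. scale the batch size by the core count. On our 12-core machine
% this gives 1200. feature('numcores') queries the physical core count.
progressStepSize = 100 * feature('numcores');
```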
We are running a 12-core machine and experimented with this value, measuring how long it takes to process patient data. Each experiment processed the first 12000 voxels of the same image, and extrapolating from those timings, we conclude that setting progressStepSize = 1200 (in our case, 100 per core on a 12-core machine) saves us about 5 hours on a 650000-voxel image.
First test, progressStepSize = 100:
Voxel 12000/647745, Time Elapsed: 0.39h, Est. time remaining: 23.343h
Second test, progressStepSize = 1200:
Voxel 12000/647745, Time Elapsed: 0.280h, Est. time remaining: 14.831h
Third test, progressStepSize = 12000:
Voxel 12000/647745, Time Elapsed: 0.295h, Est. time remaining: 15.637h
By our calculations, changing progressStepSize from 100 to 1200 reduces total runtime by (23.343 + 0.39 − (14.831 + 0.280)) / (23.343 + 0.39) ≈ 36% on our specific machine. Raising the value further gave diminishing returns: at 12000 we saw a slight slowdown relative to 1200, though still a clear improvement over the default.
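For reference, the arithmetic behind that figure, taking total time as elapsed plus estimated remaining at the 12000-voxel checkpoint:

```matlab
t_default = 0.39  + 23.343;   % hours, progressStepSize = 100
t_tuned   = 0.280 + 14.831;   % hours, progressStepSize = 1200

% Fractional reduction in total wall time: ~0.363, i.e. ~36% faster.
reduction = (t_default - t_tuned) / t_default;
```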
Experimenting with this value on your own hardware could significantly speed up your computations.